You have to test. You also have to be intimately familiar with what your compiler is doing. I can't give you an example right now, but it is possible for the resulting compiler-optimized code to change from run to run based on changes in the source code. If you want repeatable and reliable performance, you have to drive the optimization rather than hope the compiler does the work for you.
I very much doubt an optimizing compiler is going to take an up-counting for loop and convert it to a down-counting loop. One thing is certain: if such an optimization exists, there is no way I would trust it to hold consistently across compilers and architectures.
There are also fundamental choices that can have a serious effect on performance and reliability.
What's faster, using a struct or a bunch of individual variables?
Say you have three different copies of the same data corresponding to three different items. What's the fastest way to represent, store and process the data?
Hypothetical example: You have three robots generating telemetry you must track using a real time embedded system in order to coordinate their actions and react to the environment. Each robot generates one hundred variables you must track.
Is it better to define a struct and then create and populate three instances, one per robot, or is it better to create three hundred distinct variables (i.e.: robot01_temperature_01, robot01_voltage_01, etc.)?
Which one results in code that executes faster, with lower resources and consumes less power?
These are not decisions an optimizing compiler is going to make.
Optimizing compilers will occasionally make these decisions, but often they don't have to, because the processor makes them itself. Modern Intel processors already do macro-fusion for the common "compare and branch on the result" pattern, so micro-optimizing by changing the direction your loop runs in is generally not a good idea unless you really know what you are doing, have tested the code, and intend never to change your compiler or processor again. (And it still depends on how your CPU is feeling that day.) In general, the kinds of things you are mentioning yield, at best, a minor win over the optimizing compiler, and if you're not checking whether they actually work, they often make no difference or perform much worse. They also tend to come at a huge readability and maintainability cost, so you really have to ask yourself whether these superstitious micro-optimizations are worth it.