Short example:
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <string>
#include <string_view>
#define PRINTVAR(x) printVar(#x, (x) )
void printVar( const std::string_view name, const float value )
{
    std::cout 
        << std::setw( 16 )
        << name 
        << " = " << std::setw( 12 ) 
        << value << std::endl;
}
int main()
{
    std::cout << std::hexfloat;
    
    const float x = []() -> float
    {
        std::string str;
    std::cin >> str; // read the value at run time so the
                     // compiler cannot constant-fold it
    return std::strtof( str.c_str(), nullptr );
    }();
    const float a   = 0x1.bac178p-5f;
    const float b   = 0x1.bb7276p-5f;
    const float x_1 = 1.0f - x;
    PRINTVAR( x );
    PRINTVAR( x_1 );
    PRINTVAR( a );
    PRINTVAR( b );
    PRINTVAR( a * x_1 + b * x ); // the value that differs across builds
    return 0;
}
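To reproduce, pass the input value on stdin; `std::strtof` accepts the hexfloat notation directly. Assuming a POSIX shell and a binary named `a.out` (the name is just for illustration): `echo 0x1.4fab12p-2 | ./a.out`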
This code produces different output depending on the platform, compiler, and optimization level:
X = 0x1.bafb7cp-5 // both shown as float in std::hexfloat notation
Y = 0x1.bafb7ep-5 // X and Y differ by exactly one ULP
The input value is always the same: 0x1.4fab12p-2
| compiler | optimization | x86_64 | aarch64 | 
|---|---|---|---|
| GCC-12.2 | -O0 | X | X | 
| GCC-12.2 | -O2 | X | Y | 
| Clang-14 | -O0 | X | Y | 
| Clang-14 | -O2 | X | Y | 
As we can see, Clang gives identical results at -O0 and -O2 within the same architecture, but GCC does not: on aarch64 its result changes with the optimization level.
The question is: should we expect identical results at -O0 and -O2 on the same platform?
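For context, one plausible explanation (my assumption, not something established above) is that on aarch64 the compiler contracts a * x_1 + b * x into a fused multiply-add, which rounds once instead of twice; GCC and Clang control this with -ffp-contract. A minimal sketch to test that hypothesis by forcing both variants explicitly:

#include <cmath>
#include <cstdio>
int main()
{
    const float a   = 0x1.bac178p-5f;
    const float b   = 0x1.bb7276p-5f;
    const float x   = 0x1.4fab12p-2f;
    const float x_1 = 1.0f - x;

    // volatile stops the compiler from contracting these into an FMA
    volatile float p = a * x_1;
    volatile float q = b * x;
    const float separate = p + q; // products and sum each rounded

    // std::fmaf rounds a * x_1 + (b * x) only once, like an aarch64 fmadd
    const float fused = std::fmaf( a, x_1, b * x );

    std::printf( "separate = %a\nfused    = %a\n", separate, fused );
    return 0;
}

If the two printed values reproduce X and Y from the table, FMA contraction at -O2 on a target with hardware FMA would account for the divergence.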