Consider the following loops:
while ((expressionA) & (expressionB)) {
    // do something
}
while ((expressionA) && (expressionB)) {
    // do something
}
where expressionA and expressionB are expressions of type bool and expressionB has no side effects. Under these conditions, the two cases are as-if-equivalent (right?).
A (hypothetical) compiler that naively takes its cue from the source code would emit a branch for the && version, and we would end up paying for branch-prediction failures.
With a modern compiler (such as current GCC), can there ever be any conditions under which the & version gives a substantial performance gain over the && version?
My guess is no, because:
- If expressionB is sufficiently cheap, the compiler will recognize this and avoid creating the short-circuiting branch.
- If expressionB is sufficiently expensive, the compiler will create the short-circuit because:
  - if the probability of expressionA is not close to 1.0, we get a substantial average performance gain from short-circuiting.
  - if the probability of expressionA is close to 1.0, we won't pay much because branch prediction will tend to succeed.