In HLSL I must use semantics to pass info from a vertex shader to a fragment shader. In GLSL no semantics are needed. What is an objective benefit of semantics?
Example: GLSL
vertex shader
varying vec4 foo;
varying vec4 bar;
void main() {
...
foo = ...
bar = ...
}
fragment shader
varying vec4 foo;
varying vec4 bar;
void main() {
gl_FragColor = foo * bar;
}
Example: HLSL
vertex shader
struct VS_OUTPUT
{
float4 foo : TEXCOORD3;
float4 bar : COLOR2;
};
VS_OUTPUT whatever()
{
VS_OUTPUT output;
output.foo = ...
output.bar = ...
return output;
}
pixel shader
float4 main(float4 foo : TEXCOORD3,
            float4 bar : COLOR2) : COLOR
{
return foo * bar;
}
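For what it's worth, my (possibly mistaken) understanding is that it's the semantic, not the variable name, that links the two stages, so a pixel shader with renamed inputs like this should still receive the same data:

float4 main(float4 anything : TEXCOORD3,
            float4 whatever : COLOR2) : COLOR
{
    // 'anything' and 'whatever' are made-up names; the TEXCOORD3/COLOR2
    // semantics are what match them to the vertex shader outputs.
    return anything * whatever;
}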
I see how foo and bar in the VS_OUTPUT get connected to foo and bar in main in the pixel shader. What I don't get is why I have to manually choose semantics to carry the data. Why can't DirectX, like GLSL, just figure out where to put the data and connect it when linking the shaders?
Is there some more concrete advantage to manually specifying semantics, or is it just a leftover from the assembly-language shader days? Is there some speed advantage to choosing, say, TEXCOORD4 over COLOR2 or BINORMAL1?
I get that semantics can imply meaning (there's no inherent meaning to foo or bar), but they can obscure meaning just as well if foo is not actually a texture coordinate and bar is not a color. We don't put semantics on C#, C++, or JavaScript variables, so why are they needed in HLSL?