I'm new to Java, and I've recently learned about hashCode(). In the Wikipedia article on Java's hashCode(), there is the following example of a hashCode() method:
public class Employee {
    int        employeeId;
    String     name;
    Department dept;
    // other methods would be in here
    @Override
    public int hashCode() {
        int hash = 1;
        hash = hash * 17 + employeeId;
        hash = hash * 31 + name.hashCode();
        hash = hash * 13 + (dept == null ? 0 : dept.hashCode());
        return hash;
    }
}
I understand that multiplying by primes like 31 and 13 decreases the chance of collisions, but I don't see why hash is initialized to 1 rather than to 0 (or simply to employeeId, dropping the first multiply-and-add step). As far as I can tell, starting at 1 simply has the effect of adding 17 * 31 * 13 to every hashCode(), which is not going to change whether two hashCode() values are equal or not.
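To make my reasoning concrete, here is a minimal sketch (the class name and the field values are made up purely for illustration) that runs the example's computation once with the seed 1 and once with the seed 0. The difference is always exactly 17 * 31 * 13 = 6851, and int overflow doesn't change that, since the whole computation wraps consistently:

public class HashSeedDemo {
    // Same multiply-and-add structure as the Employee example, with the seed as a parameter.
    static int hash(int seed, int employeeId, int nameHash, int deptHash) {
        int hash = seed;
        hash = hash * 17 + employeeId;
        hash = hash * 31 + nameHash;
        hash = hash * 13 + deptHash;
        return hash;
    }

    public static void main(String[] args) {
        // Made-up field values, only for illustration.
        int employeeId = 42;
        int nameHash = "Alice".hashCode();
        int deptHash = 7;

        int withSeedOne  = hash(1, employeeId, nameHash, deptHash);
        int withSeedZero = hash(0, employeeId, nameHash, deptHash);

        // Prints 6851, i.e. 17 * 31 * 13, regardless of the field values.
        System.out.println(withSeedOne - withSeedZero);
    }
}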
Bloch's "Effective Java (Second Edition)" has a very similar example in Item 9 (pages 47 and 48), but his explanation of this additive constant is quite mysterious to me.
Edit: This question was marked as a duplicate of the question Why does Java's hashCode() in String use 31 as a multiplier? That question is not the same: it asks whether there is any reason to prefer 31 over other numbers in the formula for a String's hashCode(). My question is why, in many examples of hashCode() that I have found online, the same constant is effectively added to the hashCode() of every object.
In fact, the hashCode() of a String is relevant here, because in that computation no constant is added. If adding 17 * 31 * 13 has any real effect in the example I gave above, why not add such a constant when computing the hashCode() of a String?
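For reference, the Javadoc for String.hashCode() documents the formula s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1], which is the same multiply-and-add loop but starting from 0, with no constant folded in. Here is a minimal sketch of that documented formula (the class and method names are mine; the comparison against the real "abc".hashCode() is only to illustrate the point):

public class StringHashDemo {
    // String.hashCode() is documented as s[0]*31^(n-1) + s[1]*31^(n-2) + ... + s[n-1],
    // which is the accumulation h = 31*h + s.charAt(i) starting from h = 0 -- no constant added.
    static int stringHash(String s) {
        int h = 0;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i);
        }
        return h;
    }

    public static void main(String[] args) {
        // Both lines print 96354 for "abc": 97*31*31 + 98*31 + 99.
        System.out.println(stringHash("abc"));
        System.out.println("abc".hashCode());
    }
}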