I understand that there are two ways a hash collision can occur in Java's HashMap:
1. hashCode() for the key object produces the same hash value as one already produced (even if that hash bucket is not full yet).
2. The hash bucket is already full, so the new Entry has to go at an existing index.
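
For instance, here is a minimal sketch of scenario #1 (FixedHashKey is just a made-up class for illustration, with a deliberately constant hashCode()):

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical key class whose hashCode() always returns the same value,
    // so every key is assigned to the same bucket (scenario #1).
    class FixedHashKey {
        private final String name;

        FixedHashKey(String name) { this.name = name; }

        @Override
        public int hashCode() { return 42; } // deliberately constant

        @Override
        public boolean equals(Object o) {
            return o instanceof FixedHashKey
                    && ((FixedHashKey) o).name.equals(this.name);
        }
    }

    public class CollisionDemo {
        public static void main(String[] args) {
            Map<FixedHashKey, String> map = new HashMap<>();
            // The map is nearly empty, yet both entries collide in one bucket
            // because the keys share a hash value.
            map.put(new FixedHashKey("a"), "first");
            map.put(new FixedHashKey("b"), "second");
            System.out.println(map.size()); // prints 2 - both entries kept via chaining
        }
    }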
In the case of Java's HashMap, scenario #2 would be quite rare, due to the large number of allowed entries and automatic resizing (see my other question).
Am I correct in my understanding?
But for the sake of theoretical knowledge, do programmers or the JVM do anything, or can they do anything, to avoid scenario #2? Or is allowing the hash bucket to be of the largest possible size, and then continuous resizing, the only strategy (as is done in the case of HashMap)?
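
For reference, here is how I understand the resizing behavior, a sketch assuming the documented defaults (initial capacity 16, load factor 0.75); the pre-sizing trick at the end is the only mitigation I know of on the programmer's side:

    import java.util.HashMap;
    import java.util.Map;

    public class CapacityDemo {
        public static void main(String[] args) {
            // HashMap resizes (doubles its bucket array) once
            // size > capacity * loadFactor, not when a single bucket fills up.
            // With the defaults (capacity 16, load factor 0.75), the first
            // resize is triggered by inserting the 13th entry.
            Map<Integer, String> defaults = new HashMap<>();
            for (int i = 0; i < 13; i++) {
                defaults.put(i, "value" + i); // 13th put crosses the threshold of 12
            }

            // If the expected entry count is known, pre-sizing avoids
            // intermediate resizes entirely.
            int expected = 1_000;
            Map<Integer, String> preSized =
                    new HashMap<>((int) (expected / 0.75f) + 1, 0.75f);
            preSized.put(0, "ready");
        }
    }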
I guess, as a programmer, I should focus only on writing a good hashCode() and not worry about scenario #2 (since that is already taken care of by the API).
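
For completeness, this is the kind of hashCode() I have in mind, a minimal sketch (Point is just a made-up key class) where equals() and hashCode() are consistent and based on the same fields:

    import java.util.Objects;

    // A well-behaved key class: equal objects always produce equal hash codes,
    // and the combined fields give a reasonable distribution across buckets.
    final class Point {
        private final int x;
        private final int y;

        Point(int x, int y) { this.x = x; this.y = y; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof Point)) return false;
            Point p = (Point) o;
            return x == p.x && y == p.y;
        }

        @Override
        public int hashCode() {
            return Objects.hash(x, y); // available since Java 7
        }
    }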