VALID padding: this means no padding at all (the padding amount is zero, which is where the occasional name "zero padding" comes from). Hope there is no confusion.
import tensorflow as tf

x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
print(valid_pad.get_shape())  # output --> (1, 2, 1, 1)
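That shape can be checked without running TF: for 'VALID' pooling the output size per dimension is n_o = ceil((n_i - k + 1) / s), as given in the TF docs. A minimal sketch (the helper name valid_out is just illustrative):

```python
import math

def valid_out(n, k, s):
    # 'VALID' keeps only windows that fit entirely inside the input:
    # n_o = ceil((n - k + 1) / s)
    return math.ceil((n - k + 1) / s)

# Height 4 and width 3, 2x2 kernel, stride 2:
print(valid_out(4, 2, 2), valid_out(3, 2, 2))  # 2 1, i.e. shape (1, 2, 1, 1)
```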
SAME padding: this is a bit tricky to understand at first, because we have to consider two conditions separately, as mentioned in the official docs.
Let's take the input size as n_i, the output size as n_o, the total padding as p, the stride as s, and the kernel size as k (only a single dimension is considered).

Case 01: n_i mod s == 0 : p = max(k - s, 0)

Case 02: n_i mod s != 0 : p = max(k - (n_i mod s), 0)

p is calculated such that it is the minimum value which can be taken for padding. Since the value of p is known, the value of n_o can be found using this formula: (n_i - k + p) / s + 1 = n_o.
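The two cases and the output formula can be written down directly. A short sketch (the function names same_pad and same_out are assumptions, not TF API):

```python
import math

def same_pad(n, k, s):
    # Minimum total padding p, per the two cases above
    if n % s == 0:
        return max(k - s, 0)
    return max(k - (n % s), 0)

def same_out(n, k, s):
    # n_o = (n_i - k + p) / s + 1
    p = same_pad(n, k, s)
    return (n - k + p) // s + 1

# This always agrees with TF's documented shortcut n_o = ceil(n / s):
for n in range(1, 10):
    assert same_out(n, 2, 2) == math.ceil(n / 2)
print(same_out(3, 2, 2), same_out(4, 2, 2))  # 2 2
```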
Let's work out this example:
x = tf.constant([[1., 2., 3.], [4., 5., 6.], [7., 8., 9.], [7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
print(same_pad.get_shape())  # output --> (1, 2, 2, 1)
Here the shape of x is (4, 3) (height, width). If the horizontal direction (3) is taken:

3 mod 2 = 1 != 0, so p = max(2 - 1, 0) = 1 and n_o = (3 - 2 + 1) / 2 + 1 = 2

If the vertical direction (4) is taken:

4 mod 2 = 0, so p = max(2 - 2, 0) = 0 and n_o = (4 - 2 + 0) / 2 + 1 = 2
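To see where the pooled values come from end to end, here is a minimal pure-Python sketch (no TF) that pads the width by the p = 1 computed for the horizontal direction and then max-pools. TF places the extra padding on the bottom/right side, and for max pooling the padded value never wins (think of it as -inf):

```python
NEG_INF = float('-inf')

x = [[1., 2., 3.],
     [4., 5., 6.],
     [7., 8., 9.],
     [7., 8., 9.]]

# Pad one extra column on the right (p = 1 for width 3): shape (4, 4)
xp = [row + [NEG_INF] for row in x]

# 2x2 max pooling with stride 2 over the padded input
out = [[max(xp[2*i + di][2*j + dj]
            for di in range(2) for dj in range(2))
        for j in range(2)]
       for i in range(2)]
print(out)  # [[5.0, 6.0], [8.0, 9.0]]
```

The result has the expected 2x2 spatial shape, i.e. (1, 2, 2, 1) with the batch and channel dimensions.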
Hope this helps you understand how SAME padding actually works in TF.