From "Hadoop: The Definitive Guide" by Tom White:
Over-replicated blocks
These are blocks that exceed their target replication for the file they belong to.
Normally, over-replication is not a problem, and HDFS will automatically delete excess
replicas.
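One common way to see this in practice is after lowering a file's replication factor:
for a short while the file has more copies than its new target, and the excess replicas
are then removed automatically. A minimal sketch, with an illustrative path and factor:

    # Lower the target replication for one file; the now-excess replicas
    # are scheduled for deletion automatically
    hdfs dfs -setrep 2 /data/logs/part-00000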
Under-replicated blocks
These are blocks that do not meet their target replication for the file they belong to.
HDFS will automatically create new replicas of under-replicated blocks until they meet
the target replication. You can get information about the blocks being replicated (or
waiting to be replicated) using hdfs dfsadmin -metasave.
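For example (the filename is arbitrary; -metasave writes it into the namenode's log
directory):

    # Dump the namenode's block metadata, including blocks queued for
    # or awaiting replication, to a file in the namenode's log directory
    hdfs dfsadmin -metasave metasave-report.txt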
Misreplicated blocks
These are blocks that do not satisfy the block replica placement policy (see Replica
Placement). For example, for a replication level of three in a multirack cluster, if all
three replicas of a block are on the same rack, then the block is misreplicated because
the replicas should be spread across at least two racks for resilience. HDFS will
automatically re-replicate misreplicated blocks so that they satisfy the rack placement
policy.
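To check placement yourself, fsck can print rack information alongside each block's
replica locations; the path below is just an example:

    # Show block locations together with rack topology, which makes
    # blocks whose replicas all sit on one rack easy to spot
    hdfs fsck /data/logs -files -blocks -racks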
Corrupt blocks
These are blocks whose replicas are all corrupt. Blocks with at least one noncorrupt
replica are not reported as corrupt; the namenode will replicate the noncorrupt replica
until the target replication is met.
Missing replicas
These are blocks with no replicas anywhere in the cluster.
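All five of these conditions show up in the summary of an ordinary filesystem check, so
a cluster-wide fsck is the usual starting point:

    # The summary at the end reports counts of over-replicated, under-replicated,
    # mis-replicated, and corrupt blocks, and of missing replicas
    hdfs fsck /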
Hope this answers your question.