I'm still somewhat confused about how Ceph CRUSH maps work and was hoping someone could shed some light. Here's my osd tree:
core@store101 ~ $ ceph osd tree
ID  WEIGHT  TYPE NAME                                UP/DOWN REWEIGHT PRIMARY-AFFINITY 
 -1 6.00000 root default                                                               
 -2 3.00000     datacenter dc1                                                         
 -4 3.00000         rack rack_dc1                                                      
-10 1.00000             host store101                                   
  4 1.00000                 osd.4                         up  1.00000          1.00000 
 -7 1.00000             host store102                                   
  1 1.00000                 osd.1                         up  1.00000          1.00000 
 -9 1.00000             host store103                                   
  3 1.00000                 osd.3                         up  1.00000          1.00000 
 -3 3.00000     datacenter dc2                                                         
 -5 3.00000         rack rack_dc2                                                      
 -6 1.00000             host store104                                   
  0 1.00000                 osd.0                         up  1.00000          1.00000 
 -8 1.00000             host store105                                   
  2 1.00000                 osd.2                         up  1.00000          1.00000 
-11 1.00000             host store106                                   
  5 1.00000                 osd.5                         up  1.00000          1.00000 
I'm simply trying to ensure that, with a replication size of 2 or more, an object's replicas never all end up in the same datacenter. The rule I'm using (taken from the internet) is:
rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type datacenter
        step choose firstn 2 type rack
        step chooseleaf firstn 0 type host
        step emit
}
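If I'm reading the CRUSH documentation correctly, those nested choose steps expand depth-first, so with 2 replicas the flattened candidate list could come out as [osdA@dc1, osdB@dc1, osdC@dc2, osdD@dc2], and only the first two entries (both from one datacenter) would actually be used. Based on that reading, a variant I'm considering (untested, adapted from the placement examples in the Ceph docs) is:

rule replicated_ruleset_dc {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        # pick as many datacenters as there are replicas (firstn 0 = pool size)
        step choose firstn 0 type datacenter
        # then pick exactly one OSD (via one host) inside each chosen datacenter
        step chooseleaf firstn 1 type host
        step emit
}

But I'm not confident that's right either, which is why I'm asking.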
However, if I dump the placement groups, I immediately see PGs whose two OSDs are in the same datacenter: osd.5 (store106) and osd.0 (store104) are both in dc2.
core@store101 ~ $ ceph pg dump | grep 5,0
1.73    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.939197  0'0 96:113  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854945  0'0 2015-07-09 12:05:01.854945
1.70    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.947403  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854941  0'0 2015-07-09 12:05:01.854941
1.6f    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.947056  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854940  0'0 2015-07-09 12:05:01.854940
1.6c    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.938591  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854939  0'0 2015-07-09 12:05:01.854939
1.66    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.937803  0'0 96:107  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854936  0'0 2015-07-09 12:05:01.854936
1.67    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.929323  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854937  0'0 2015-07-09 12:05:01.854937
1.65    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.928200  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854936  0'0 2015-07-09 12:05:01.854936
1.63    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.927642  0'0 96:107  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854935  0'0 2015-07-09 12:05:01.854935
1.3f    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.924738  0'0 96:33   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854920  0'0 2015-07-09 12:05:01.854920
1.36    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.917833  0'0 96:45   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854916  0'0 2015-07-09 12:05:01.854916
1.33    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.911484  0'0 96:104  [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854915  0'0 2015-07-09 12:05:01.854915
1.2b    0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.878280  0'0 96:58   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854911  0'0 2015-07-09 12:05:01.854911
1.5 0   0   0   0   0   0   0   0   active+clean    2015-07-09 13:41:36.942620  0'0 96:98   [5,0]   5   [5,0]   5   0'0 2015-07-09 12:05:01.854892  0'0 2015-07-09 12:05:01.854892
How do I ensure that at least one replica is always in another datacenter?
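In case it's relevant: I assume a candidate rule can be verified offline with crushtool before injecting it, along these lines (filenames are placeholders):

ceph osd getcrushmap -o crush.bin        # grab the current compiled CRUSH map
crushtool -d crush.bin -o crush.txt      # decompile to text for editing
# ... edit the rule in crush.txt ...
crushtool -c crush.txt -o crush-new.bin  # recompile
# simulate placements for rule 0 with 2 replicas; each line shows the OSDs a PG would map to
crushtool -i crush-new.bin --test --rule 0 --num-rep 2 --show-mappings
ceph osd setcrushmap -i crush-new.bin    # inject once the mappings look right

Even so, I'd still like to understand why the rule above behaves the way it does.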