I have a 9TB XFS partition consisting of four 3TB disks in a RAID-5 array with a chunk size of 256KB, managed by mdadm.
When I created the filesystem, the optimal stripe unit and width values (64 and 192 blocks, respectively) were detected and set automatically, which xfs_info confirms:
# xfs_info /dev/md3
meta-data=/dev/md3               isize=256    agcount=32, agsize=68675072 blks
         =                       sectsz=512   attr=2
data     =                       bsize=4096   blocks=2197600704, imaxpct=5
         =                       sunit=64     swidth=192 blks
naming   =version 2              bsize=4096   ascii-ci=0
log      =internal               bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=64 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
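For reference, these are the values I would expect from the array geometry (256KB chunk, 3 data disks plus one parity, 4KB filesystem blocks), at least the way I understand how they are derived; the shell lines are just my sanity check of the arithmetic:
# echo $((262144 / 4096))   # chunk size / filesystem block size = sunit
64
# echo $((64 * 3))          # sunit * number of data disks = swidth
192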
However, I was experiencing slow transfer speeds, and while investigating I noticed that unless I explicitly mount the partition with -o sunit=64,swidth=192, the stripe unit is always reported as 512 and the stripe width as 1536. For instance:
# umount /dev/md3
# mount -t xfs -o rw,inode64 /dev/md3 /data
# grep xfs /proc/mounts
/dev/md3 /data xfs rw,relatime,attr2,delaylog,inode64,logbsize=256k,sunit=512,swidth=1536,noquota 0 0
Is this intended behavior?
I suppose that I could just start mounting it with sunit=64,swidth=192 every time, but wouldn't that make the current data (which was written while mounted with sunit=512,swidth=1536) misaligned?
The operating system is Debian Wheezy with kernel 3.2.51.
All four hard disks are Advanced Format disks (smartctl reports 512 bytes logical, 4096 bytes physical).
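For example, this is what I see on one of the member disks (the device name here is just an example):
# smartctl -i /dev/sda | grep -i 'sector size'
Sector Sizes:     512 bytes logical, 4096 bytes physical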
The fact that the mounted values are exactly 8 times the ones from xfs_info makes me wonder whether that is related, since 8 is also the ratio between the disks' 512-byte logical and 4096-byte physical sector sizes.
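To make that pattern concrete:
# echo $((512 / 64)) $((1536 / 192)) $((4096 / 512))
8 8 8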
Can anyone shed some light on this? :-)