I'm trying to deduplicate a set of lines pulled from a file with egrep and sort -u, then count them. About 10% of the lines (all 100 characters long, over the alphabet [ATCG]) are duplicated. There are two files, about 3 GB each; 50% of the lines aren't relevant, so there are perhaps 300 million lines in total.

LC_ALL=C grep -E <files> | sort --parallel=24 -u | wc -m

Between LC_ALL=C and using -x to accelerate grep, the slowest part by far is the sort. Reading the man pages led me to --parallel=n, but experimentation showed absolutely no improvement. A little digging with top showed that even with --parallel=24, the sort process only ever runs on one processor at a time.

I have 4 chips with 6 cores each and 2 threads per core, for a total of 48 logical processors. Here is the lscpu output, since /proc/cpuinfo would be too long:

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                48
On-line CPU(s) list:   0-47
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             4
NUMA node(s):          8
Vendor ID:             AuthenticAMD
CPU family:            21
Model:                 1
Stepping:              2
CPU MHz:               1400.000
BogoMIPS:              5199.96

What am I missing? Even if the process is IO-bound, shouldn't I see parallel processing anyway? The sort process uses 99% of whichever processor it is on at any given moment, so I would be able to see parallelization if it were happening. Memory isn't a concern; I have 256 GB to play with and none of it is used by anything else.

Something I discovered by piping grep's output to a file and then reading the file with sort:

LC_ALL=C grep -E <files> > reads.txt ; sort reads.txt -u | wc -m

default, file: 1m 50s
--parallel=24, file: 1m 15s
--parallel=48, file: 1m 6s
--parallel=1, no file: 10m 53s
--parallel=2, no file: 10m 42s
--parallel=4, no file: 10m 56s

Others are still running.

From these benchmarks it's pretty clear that with piped input, sort isn't parallelizing at all; when allowed to read a file, sort splits the load as instructed.
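If you want to reproduce this without my data, a sketch along these lines shows the same effect (the synthetic input is just a stand-in for my reads; the file name and line count are arbitrary):

# generate ~10 million random 100-character ATCG lines
LC_ALL=C tr -dc 'ATCG' < /dev/urandom | fold -w 100 | head -n 10000000 > synthetic.txt

# piped input: sort stays on one core regardless of --parallel
time (cat synthetic.txt | sort --parallel=24 -u | wc -m)

# file input: sort splits the load across cores as instructed
time (sort --parallel=24 -u synthetic.txt | wc -m)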

Jeremy Kemball

2 Answers

sort doesn't create threads unless it needs to, and for small files the extra threads are just too much overhead. Unfortunately, sort treats a pipe like a small file. If you want to feed enough data to 24 threads, you'll need to tell sort to use a large internal buffer (sort does that automatically when presented with large files). This is something we should improve upstream (at least in the documentation). So you'll want something like:

(export LC_ALL=C; grep -E <files> | sort -S1G --parallel=24 -u | wc -m)

Note I've set LC_ALL=C for all processes, since they'll all benefit from it with this data.
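To see the difference the buffer makes, you can time the two forms directly (a sketch; <files> is the same placeholder as above):

time (grep -E <files> | sort --parallel=24 -u | wc -m)        # default pipe buffer: single-threaded
time (grep -E <files> | sort -S1G --parallel=24 -u | wc -m)   # 1 GiB buffer: threads kick in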

BTW you can monitor the sort threads with something like:

watch -n.1 ps -C sort -L -o pcpu
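Here ps -C sort selects the sort process by command name, -L lists its threads individually, and -o pcpu prints per-thread CPU usage; watch -n.1 redraws that every 0.1 seconds, so you can see the extra threads appear once the buffer is big enough.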
pixelbeat

With parsort you can sort big files faster on a multi-core machine.

On a 48-core machine you should see a speedup of 3x over sort.

parsort is part of GNU Parallel and should be a drop-in replacement for sort.
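For example, substituted into the file-based pipeline from the question (a sketch, assuming parsort takes the same flags as sort, as stated above):

LC_ALL=C grep -E <files> > reads.txt ; parsort reads.txt -u | wc -m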

Ole Tange