When compiling a software package on a workstation with many CPU cores (say 12), the configuration stage often takes much longer than the actual compilation, because ./configure runs its tests one by one while make -j runs gcc and other commands in parallel.
It feels like a huge waste of resources to have the remaining 11 cores sitting idle most of the time, waiting for the slow ./configure to complete. Why does it need to run the tests sequentially? Do the tests depend on one another? I could be mistaken, but the majority of them look independent to me.
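For example, a typical stretch of configure output (this excerpt is illustrative, not taken from the run below) is just a series of checks like these, none of which obviously needs the others:

checking for gcc... gcc
checking whether the C compiler works... yes
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for unistd.h... yes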
More importantly, are there any ways to speed up ./configure?
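(The only partial workaround I know of is autoconf's result cache, which reuses answers saved by a previous run instead of re-testing them, but it still runs the remaining tests one by one. A sketch, assuming the cache file location is up to us:)

# Cache test results in ./config.cache and reuse them on the next run
./configure -C
# Equivalent long form, with an explicit (arbitrary) cache file that can be
# shared between related source trees
./configure --cache-file=../config.cache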
Edit: To illustrate the situation, here is an example with GNU Coreutils:
cd /dev/shm        # work in tmpfs so disk I/O doesn't skew the timings
rm -rf coreutils-8.9
tar -xzf coreutils-8.9.tar.gz
cd coreutils-8.9
time ./configure
time make -j24
Results:
# For `time ./configure`
real 4m39.662s
user 0m26.670s
sys 4m30.495s
# For `time make -j24`
real 0m42.085s
user 2m35.113s
sys 6m15.050s
With coreutils-8.9, ./configure takes over 6 times as long as make. Although ./configure uses less CPU time (compare the "user" and "sys" times), it takes much longer in wall-clock time ("real") because it isn't parallelized. I have repeated the test a few times (with the relevant files probably staying in the memory cache) and the times are within 10%.
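(A loop along the following lines is enough to reproduce the measurement; this is a sketch, not necessarily the exact commands that were used:)

# Re-extract and reconfigure a few times to check that the timings are stable
for i in 1 2 3; do
    rm -rf coreutils-8.9
    tar -xzf coreutils-8.9.tar.gz
    cd coreutils-8.9
    time ./configure > /dev/null
    cd ..
done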