I remember working on an old midrange in ksh, dynamically building commands that overran the 2KB I had available in the command buffer.
I recently came upon an issue where one possible easy fix might create very long commands with lots of long arguments. A coworker asked what the limit was in modern bash, and it occurred to me that I have no idea. Searches all seem to get sidetracked into the number of lines in the history buffer, but that's not relevant here.
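For reference, the OS-side ceiling can be queried with getconf ARG_MAX, which on a stock Linux box commonly reports 2097152 (2MiB). I have no idea what my Git Bash emulation claims here, or whether that number even governs what bash itself will build:

$: getconf ARG_MAX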
So I ran some tests (please check my logic here):
time echo $( printf "%01024d" $( seq 1 $max ) ) | wc -c
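(A note on the mechanics, since it matters below: printf reuses the format string for every argument seq hands it, and %01024d zero-pads with no separator between fields, so the expansion is one unbroken run of digits, a single word rather than many arguments. A quick check:

$: printf "%01024d" 1 2 3 | wc -c

should print 3072: three fused 1KB fields and nothing else, since printf adds no newline here.)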
I ran a few simple tests with great success. Even on my laptop's Git Bash emulation, if I run this with max=32 I get:
$: time echo $( printf "%01024d" $( seq 1 $max ) ) | wc -c
32769
real    0m0.251s
user    0m0.061s
sys     0m0.215s
That's echo handed the printf output: 32 zero-padded 1KB fields fused into a single 32768-byte word (the format has no separator, so it's one argument, not 32), piped to wc -c, which reports the expected 32768 bytes plus the trailing newline, in about a quarter second. Needless to say I was pleased and surprised, so I started doubling max looking for the cap... and failed. Look at this.
$: max=40960
$: time echo $( printf "%01024d" $( seq 0 $max ) ) | wc -c
41944065
real    0m10.985s
user    0m4.117s
sys     0m7.565s
Eleven seconds to process, but that's a single command line of almost 42MB (seq 0 $max emits 40961 numbers at 1024 bytes each, 41944064 bytes, plus echo's trailing newline), successfully created, loaded, parsed and executed.
Holy crap... What's the upper end of this spec???
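While hunting for the documented ceiling I noticed that GNU xargs will print its own idea of the limits, which might be a saner way to ask the system directly (assuming the flag exists in this environment):

$: xargs --show-limits < /dev/null

though I don't know how far to trust what the Git Bash emulation reports there.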
And/or is there some reason this test isn't a good proof that I could build an almost arbitrarily long command?
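One thing I can't decide on my own: does it matter that echo is a bash builtin, so the expanded line never has to be handed off to a new process? If that's the catch, I'd guess the comparison to run would be the external version, something like

$: /bin/echo $( printf "%01024d" $( seq 1 $max ) ) | wc -c

(path assumed; it may be /usr/bin/echo elsewhere), but I haven't worked out whether that changes which limits apply.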