If you have two computers, each with its own independent connection to the internet (i.e. two ISP connections, not one shared connection), then in theory each computer could simultaneously download half of the file, by using something like FTP servers that support resumption of interrupted downloads.
However, I don't know of any software that does this (though perhaps wget or curl can be made to perform the appropriate offset fetch).
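For example, curl's --range option asks the server for a specific byte span. This is only a sketch: the URL, the file names, and the assumption of a 100,000-byte file are made up, and the server has to honour HTTP range requests.

# On computer 1: fetch the first half (bytes 0-49999)
curl --range 0-49999 -o part1 http://www.example.com/name-of-big.file
# On computer 2: fetch the rest (bytes 50000 to the end)
curl --range 50000- -o part2 http://www.example.com/name-of-big.file
# Afterwards, on one machine, join the pieces
cat part1 part2 > name-of-big.file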
If the bottleneck is the FTP server (or the equivalent server for another protocol), then having two connections won't help.
Update: The sort of thing I had in mind was:
Computer 1
# Pre-fill the local file with 50,000 zero bytes (the chunk Computer 2 will fetch)
dd if=/dev/zero bs=10000 count=5 > name-of-big.file
# --continue makes wget resume from the current file size, i.e. skip the first 50,000 bytes
wget --continue http://www.example.com/name-of-big.file
Computer 2 (concurrently, via a separate Internet connection)
wget http://www.example.com/name-of-big.file
Stop this when it reaches the size of the chunk skipped on Computer 1. I did think you could get wget to stop by piping its output to a dd command that breaks the pipe, but this turns out not to work:
wget -O - $URL | dd bs=10000 count=5
Wget does stop when dd breaks the pipe, but the resulting file isn't the right size. So maybe just let it run, stop it manually, and cut the part you need (e.g. using dd).
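For the cutting step, something like the following should do, using the 5 × 10,000-byte sizes from the example above; the file names are placeholders, and iflag=fullblock is a GNU-dd-specific option that is the usual cure for short reads from a pipe, though I haven't tested it in this setup.

# On Computer 2: cut the first 50,000 bytes out of the partial download
dd if=name-of-big.file of=first-chunk bs=10000 count=5
# Or, with GNU dd, iflag=fullblock should make the piped version read exactly 50,000 bytes
wget -O - http://www.example.com/name-of-big.file | dd of=first-chunk bs=10000 count=5 iflag=fullblock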
Finally, you can chop out the non-zero part of the file on Computer 1 (e.g. using dd), copy it to Computer 2, and cat the pieces together.
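Concretely, with the sizes used above, that last step might look like this (untested sketch; first-chunk is whatever you cut out on Computer 2, and the host/user names are placeholders):

# On Computer 1: skip the 50,000 zero bytes and keep the downloaded tail
dd if=name-of-big.file of=second-chunk bs=10000 skip=5
# Copy second-chunk to Computer 2, e.g.:
scp second-chunk user@computer2:
# On Computer 2: join the two pieces
cat first-chunk second-chunk > name-of-big.file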
This seems messy to me; I'd rather find or write a distributed HTTP client :-)