I'm looking for tips on how to take my single-URL wget script and feed it a list of URLs from a text file instead. I'm not sure how to script it, though: with a loop, or by enumerating the file somehow? Here's the code I use to gather up everything from a single page:
wget \
    --recursive \
    --no-clobber \
    --page-requisites \
    --html-extension \
    --convert-links \
    --restrict-file-names=windows \
    --domains example.com \
    --no-parent \
        http://www.example.com/folder1/folder/
It works remarkably well; I'm just lost on how to use a list.txt with URLs listed like this:
http://www.example.com/folder1/folder/
http://www.example.com/sports1/events/
http://www.example.com/milfs21/delete/
...
I would imagine it's fairly simple, but then again, one never knows. Thanks.
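For what it's worth, here's the rough loop I had in mind. It's an untested sketch, and list.txt is just the filename I'm assuming:

#!/bin/bash
# Untested sketch: read list.txt line by line and run my single-page
# wget command once per URL. Blank lines are skipped.
while IFS= read -r url; do
    [ -z "$url" ] && continue   # skip empty lines
    wget \
        --recursive \
        --no-clobber \
        --page-requisites \
        --html-extension \
        --convert-links \
        --restrict-file-names=windows \
        --domains example.com \
        --no-parent \
        "$url"
done < list.txt

I also see that wget has a -i / --input-file option that reads URLs from a file, but I'm not sure how it behaves together with --recursive, which is why I was leaning toward a loop.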