I am doing web scraping with bash. I have these URL query strings saved in a file called URL.txt:
?daypartId=1&catId=1
?daypartId=1&catId=11
?daypartId=1&catId=2
I want to read these into an array in another file, main.sh, and append each one to the end of the base URL https://www.mcdelivery.com.pk/pk/browse/menu.html, one by one.
I have come up with code that extracts the lines from URL.txt, but it is unable to append them to the base URL one by one.
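For reference, appending a query string to the base URL is plain string concatenation in bash. A minimal sketch, using one of the values from URL.txt (the variable names `base`, `suffix`, and `full` are just illustrative):

```shell
#!/bin/bash
base="https://www.mcdelivery.com.pk/pk/browse/menu.html"
suffix="?daypartId=1&catId=1"
full="${base}${suffix}"          # concatenation needs no operator in bash
echo "$full"
# → https://www.mcdelivery.com.pk/pk/browse/menu.html?daypartId=1&catId=1
```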
#!/bin/bash
ARRAY=()
while read -r LINE
do
    ARRAY+=("${LINE%$'\r'}")   # strip a trailing CR in case URL.txt has Windows line endings
done < URL.txt
for LINE in "${ARRAY[@]}"
do
    echo "$LINE"
    # Quote the URL: the suffix contains ? and &, which the shell would otherwise interpret
    curl -s "https://www.mcdelivery.com.pk/pk/browse/menu.html$LINE" | grep -o '<span class="starting-price">[^<]*</span>' | sed 's/<[^>]\+>//g' >> price.txt
done
I just need help with the loop so that I can append each URL from URL.txt to the end of the base URL in main.sh.
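A sketch of the same loop without the intermediate array, under two assumptions: URL.txt may have CRLF line endings (a trailing carriage return in the line is a common reason the server silently falls back to the main page), and the URL must be quoted because the suffix contains `?` and `&`:

```shell
#!/bin/bash
base="https://www.mcdelivery.com.pk/pk/browse/menu.html"
while IFS= read -r line
do
    line="${line%$'\r'}"            # drop a trailing CR if the file came from Windows
    curl -s "${base}${line}" |      # quoting keeps ? and & out of the shell's hands
        grep -o '<span class="starting-price">[^<]*</span>' |
        sed 's/<[^>]\+>//g' >> price.txt
done < URL.txt
```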
Comment: I have come up with this code but the output repeats itself; it only gives the output of the main page. Can you please spot the error? – Jun 07 '20 at 09:11