I want to parse some info from a website that has data spread across several pages.
The problem is that I don't know how many pages there are. There might be 2, there might be 4, or there might be just one page.
How can I loop over the pages when I don't know how many there will be?
I do know the URL pattern, however; it looks something like the one in the code below.
Also, the page names are not plain numbers; they look like 'pe2' for page 2, 'pe4' for page 3 and so on, so I can't just loop over range(number).
This is the dummy code for the loop I am trying to fix:
import requests
from bs4 import BeautifulSoup

# Hard-coded page suffixes: '' for the first page, then 'pe2', 'pe4', ...
pages = ['', 'pe2', 'pe4', 'pe6', 'pe8']

for i in pages:
    url = "http://www.website.com/somecode/dummy?page={}".format(i)
    r = requests.get(url)
    soup = BeautifulSoup(r.content, "html.parser")
    # rest of the scraping code
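
One idea I had is to generate the 'peN' suffixes on the fly and keep requesting pages until one comes back empty, instead of hard-coding the list. This is only a rough sketch: the assumption that the suffix keeps growing by 2, the use of the response status code as a stop condition, and the '.item' selector are all placeholders I made up, not things the site guarantees.

import requests
from bs4 import BeautifulSoup

def page_suffix(n):
    # Page 1 has no suffix; later pages seem to use 'pe' plus an even number
    # ('pe2' for page 2, 'pe4' for page 3, ...). Assumes the pattern continues.
    return '' if n == 1 else 'pe{}'.format(2 * (n - 1))

n = 1
while True:
    url = "http://www.website.com/somecode/dummy?page={}".format(page_suffix(n))
    r = requests.get(url)
    if r.status_code != 200:
        break  # assuming a missing page returns a non-200 status
    soup = BeautifulSoup(r.content, "html.parser")
    items = soup.select(".item")  # '.item' is a placeholder selector for the rows I scrape
    if not items:
        break  # the page loaded but has no data, so stop
    # rest of the scraping code goes here
    n += 1

Is something like this a reasonable way to detect the last page, or is there a better approach?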