I am scraping blog URLs from the main page, and then I iterate over all of them to retrieve the text of each post.
Would a generator be faster if I moved the loop into blogscraper and made it yield some_text? I suspect the app would still be single-threaded, so it would not request the next page while it is extracting text from the current HTML.
Should I use asyncio, or is there a better module for making this parallel? Ideally I would like a generator that yields coroutine results as the coroutines finish.
Later I also want to build a small REST app for displaying the results.
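For reference, one thread-based alternative I am considering is concurrent.futures, which overlaps the blocking requests calls without rewriting them as coroutines. A rough sketch (blogscraper is a stand-in here, it just returns a fake string instead of calling requests.get):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def blogscraper(url):
    # Stand-in for the real scraper; the real version would
    # call requests.get(url) and parse the HTML.
    return f"text from {url}"

def run(urls):
    # Threads let the blocking HTTP calls overlap; as_completed
    # yields futures in completion order, not submission order.
    with ThreadPoolExecutor(max_workers=8) as pool:
        futures = [pool.submit(blogscraper, u) for u in urls]
        return [f.result() for f in as_completed(futures)]

texts = run(["http://example.com/a", "http://example.com/b"])
```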
def readmainpage(self):
    blogurls = []
    while nextPage:
        r = requests.get(url)
        ...
        blogurls += [new_url]
    return blogurls
def blogscraper(self, url):
    r = requests.get(url)  # was request.get, a typo
    ...
    return sometext
def run(self):
    blog_list = self.readmainpage()
    for blog in blog_list:
        data = self.blogscraper(blog['url'])
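The asyncio version I have in mind would look roughly like the sketch below. It uses asyncio.as_completed to yield results as the coroutines finish; fetch_text is a hypothetical stand-in for a real async HTTP call (e.g. via aiohttp), simulated with a short sleep so the sketch runs on its own:

```python
import asyncio

async def fetch_text(url):
    # Placeholder for a real non-blocking HTTP request;
    # simulated with a short sleep so the sketch is runnable.
    await asyncio.sleep(0.01)
    return f"text from {url}"

async def scrape_all(urls):
    # as_completed yields awaitables in completion order, so a
    # slow page does not block results from faster ones.
    tasks = [asyncio.ensure_future(fetch_text(u)) for u in urls]
    results = []
    for fut in asyncio.as_completed(tasks):
        results.append(await fut)
    return results

urls = ["http://example.com/a", "http://example.com/b"]
texts = asyncio.run(scrape_all(urls))
```

Note that requests itself is blocking, so inside the event loop it would either have to be replaced with an async HTTP client or pushed into a thread with loop.run_in_executor.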