I am attempting to crawl a site for news articles. My start_url contains:
(1) links to each article: http://example.com/symbol/TSLA
and
(2) a "More" button that makes an AJAX call that dynamically loads more articles within the same start_url: http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0&slugs=tsla&is_symbol_page=true
A parameter to the AJAX call is "page", which is incremented each time the "More" button is clicked. For example, clicking "More" once loads n additional articles and updates the page parameter in the "More" button's onClick event, so that the next time "More" is clicked, "page" 2 of articles is loaded (given that "page" 0 was loaded initially and "page" 1 on the first click).
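To make that increment concrete, here is a small standalone helper (purely illustrative, using Python 3's urllib.parse; it is not part of my spider) that does what the "More" button's onClick does to the URL:

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

def bump_page(url):
    """Return the same AJAX URL with its 'page' parameter incremented,
    mimicking what the "More" button's onClick handler does."""
    parts = urlparse(url)
    qs = parse_qs(parts.query)
    qs['page'] = [str(int(qs['page'][0]) + 1)]
    return urlunparse(parts._replace(query=urlencode(qs, doseq=True)))
```

So applying `bump_page` to the page-0 URL above yields the page-1 URL, and so on.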
For each "page" I would like to scrape the contents of each article using Rules, but I do not know how many "pages" there are and I do not want to choose some arbitrary m (e.g., 10k). I can't seem to figure out how to set this up.
Following this question, Scrapy Crawl URLs in Order, I have tried creating a list of potential URLs, but I can't determine how and where to send a new URL from that pool after parsing the previous one and confirming it contains news links, since I'm using a CrawlSpider. My Rules send responses to a parse_item callback, where the article contents are parsed.
Is there a way to observe the contents of the links page (similar to the BaseSpider example) before applying the Rules and calling parse_item, so that I know when to stop crawling?
Simplified code (I removed several of the fields I'm parsing for clarity):
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.http import Request
from scrapy import log

from myproject.items import NewsItem  # adjust to wherever NewsItem is defined

class ExampleSite(CrawlSpider):
    name = "so"
    download_delay = 2
    more_pages = True
    current_page = 0
    allowed_domains = ['example.com']
    start_urls = ['http://example.com/account/ajax_headlines_content?type=in_focus_articles&page=0'+
                      '&slugs=tsla&is_symbol_page=true']
    ##could also use
    ##start_urls = ['http://example.com/symbol/tsla']
    ajax_urls = []
    for i in range(1,1000):
        ajax_urls.append('http://example.com/account/ajax_headlines_content?type=in_focus_articles&page='+str(i)+
                      '&slugs=tsla&is_symbol_page=true')
    rules = (
             Rule(SgmlLinkExtractor(allow=('/symbol/tsla', ))),
             Rule(SgmlLinkExtractor(allow=('/news-article.*tesla.*', '/article.*tesla.*', )), callback='parse_item')
            )
    ## need something like this??
    ## override parse?
    ## if response.body == 'no results':
    ##     self.more_pages = False
    ##     ## stop crawler??
    ## else:
    ##     self.current_page = self.current_page + 1
    ##     yield Request(self.ajax_urls[self.current_page], callback=self.parse_start_url)
    def parse_item(self, response):
        self.log("Scraping: %s" % response.url, level=log.INFO)
        hxs = Selector(response)
        item = NewsItem()
        item['url'] = response.url
        item['source'] = 'example'
        item['title'] = hxs.xpath('//title/text()').extract()
        item['date'] = hxs.xpath('//div[@class="article_info_pos"]/span/text()').extract()
        yield item
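To make my intent concrete, here is roughly the stopping check I have in mind, pulled out as a plain function (the URL template and link pattern just mirror the spider above; names are illustrative):

```python
import re

# Illustrative only -- mirrors the AJAX URL used in start_urls above.
AJAX_URL = ('http://example.com/account/ajax_headlines_content'
            '?type=in_focus_articles&page={page}&slugs=tsla&is_symbol_page=true')

def next_page_url(response_body, current_page):
    """Return the AJAX URL for the next page, or None once the current
    response no longer contains article links (i.e. no more pages).
    The link pattern mirrors the SgmlLinkExtractor Rules above."""
    if not re.search(r'/(news-)?article', response_body):
        return None
    return AJAX_URL.format(page=current_page + 1)
```

Inside the spider this would presumably live in a parse_start_url override that keeps yielding Request(next_url, callback=self.parse_start_url) until next_page_url returns None, but I don't know whether that plays nicely with CrawlSpider's Rules.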