Apparently I shouldn't be using ScrapyFileLogObserver anymore (http://doc.scrapy.org/en/1.0/topics/logging.html), but I still want to save my log messages to a file, and I still want all of the standard Scrapy console output to end up in that file too.
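For reference, this is roughly what I had before (written from memory against an older Scrapy, so the details may not be exact), and it captured both my own messages and all of Scrapy's console output in the file:

from scrapy import log

# Old-style (pre-1.0) observer: everything Scrapy printed also went to this file.
logfile = open('debug_log.txt', 'w')
log_observer = log.ScrapyFileLogObserver(logfile, level=log.DEBUG)
log_observer.start()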
From reading up on how to use the standard logging module, this is the code that I have tried:
import logging

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

from blah.items import BlahItem  # wherever BlahItem lives in my project


class BlahSpider(CrawlSpider):
    name = 'blah'
    allowed_domains = ['blah.com']
    start_urls = ['https://www.blah.com/blahblahblah']
    rules = (
        Rule(SgmlLinkExtractor(allow=r'whatever'), callback='parse_item', follow=True),
    )

    def __init__(self):
        CrawlSpider.__init__(self)
        # Grab the root logger and let everything down to DEBUG through.
        self.logger = logging.getLogger()
        self.logger.setLevel(logging.DEBUG)
        # basicConfig attaches a file handler to the root logger.
        logging.basicConfig(filename='debug_log.txt', filemode='w',
                            format='%(asctime)s %(levelname)s: %(message)s',
                            level=logging.DEBUG)
        # Extra console handler with a simpler format.
        console = logging.StreamHandler()
        console.setLevel(logging.DEBUG)
        simple_format = logging.Formatter('%(levelname)s: %(message)s')
        console.setFormatter(simple_format)
        self.logger.addHandler(console)
        self.logger.info("Something")

    def parse_item(self, response):
        i = BlahItem()
        return i
It runs fine, and it saves "Something" to the file. However, all of the output that I see in the command prompt window, everything that used to end up in the file under ScrapyFileLogObserver, is no longer saved.
I thought that my console handler built with logging.StreamHandler() was supposed to deal with that, but that is just what I had read, and I don't really understand how the handlers actually fit together.
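To show what I mean, here is a minimal standalone sketch (plain logging module, no Scrapy involved; the file and logger names are made up for illustration) of what I expected to happen, with one message reaching both the file and the console:

import logging

# basicConfig attaches a FileHandler to the root logger (it has no handlers yet).
logging.basicConfig(filename='demo_log.txt', filemode='w',
                    format='%(asctime)s %(levelname)s: %(message)s',
                    level=logging.DEBUG)

# Add a second, console handler to the root logger.
console = logging.StreamHandler()
console.setLevel(logging.DEBUG)
console.setFormatter(logging.Formatter('%(levelname)s: %(message)s'))
logging.getLogger().addHandler(console)

# A message logged on any child logger propagates up to the root logger,
# so it shows up both in demo_log.txt and on the console.
logging.getLogger('some.child.logger').info("Hello from a child logger")

My assumption was that Scrapy's own messages would propagate to the root logger's handlers in the same way.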
Can anyone point out what I am missing or where I have gone wrong?
Thank you.