I have an example Scrapy project with a pretty much default layout. Its folder structure:
craiglist_sample/
├── craiglist_sample
│   ├── __init__.py
│   ├── items.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       └── test.py
└── scrapy.cfg
When I run  scrapy crawl craigs -o items.csv -t csv  in the Windows command prompt, it writes the Craigslist items and links to the console.
I want to create an example.py in the main folder and print these items to the Python console from inside it.
I tried

from scrapy import cmdline
cmdline.execute("scrapy crawl craigs".split())

but it produces the same output as the Windows shell. How can I make it print only the items and links?
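One way to capture the items inside example.py, instead of letting them go to the console, is to connect a handler to Scrapy's item_scraped signal and drive the crawl with CrawlerProcess. This is a minimal sketch, not a tested answer: it assumes a Scrapy version that provides CrawlerProcess and create_crawler, and that MySpider is importable from craiglist_sample.spiders.test.

```python
# example.py - collect scraped items in a list instead of reading them
# off the console. Sketch only; assumes a Scrapy version with
# CrawlerProcess and the item_scraped signal.

collected = []

def collect_item(item, response, spider):
    # Called once per scraped item; store a plain dict copy.
    collected.append(dict(item))

def run():
    # Scrapy imports deferred so collect_item stays importable
    # even without Scrapy installed.
    from scrapy import signals
    from scrapy.crawler import CrawlerProcess
    from craiglist_sample.spiders.test import MySpider

    process = CrawlerProcess({"LOG_ENABLED": False})  # silence the shell-style log
    crawler = process.create_crawler(MySpider)
    crawler.signals.connect(collect_item, signal=signals.item_scraped)
    process.crawl(crawler)
    process.start()  # blocks until the crawl finishes

    for item in collected:
        print(item["title"], item["link"])

if __name__ == "__main__":
    run()
```

With LOG_ENABLED set to False the usual crawl log is suppressed, so only the printed items appear.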
test.py:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from craiglist_sample.items import CraiglistSampleItem
class MySpider(CrawlSpider):
    name = "craigs"
##    allowed_domains = ["sfbay.craigslist.org"]
##    start_urls = ["http://sfbay.craigslist.org/npo/"]
    allowed_domains = ["craigslist.org"]
    start_urls = ["http://sfbay.tr.craigslist.org/search/npo?"]
##search\/npo\?s=
    rules = (
        Rule(SgmlLinkExtractor(allow=('s=\d00',),
                               restrict_xpaths=('//a[@class="button next"]',)),
             callback="parse_items", follow=True),
    )
    def parse_items(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select('//span[@class="pl"]')
##        titles = hxs.select("//p[@class='row']")
        items = []
        for title in titles:  # don't shadow the list with the loop variable
            item = CraiglistSampleItem()
            item["title"] = title.select("a/text()").extract()
            item["link"] = title.select("a/@href").extract()
            items.append(item)
        return items