I have a CouchDB instance with over a million entries spread across a few databases. I need to draw random samples from it such that I have a record of the members of each sample. To that end, and following this question, I want to add a field with a random number to every document in my CouchDB.
Code to add a random number
import couchdb
from numpy.random import rand

def add_random_fields():
    server = couchdb.Server()
    databases = [database for database in server if not database.startswith('_')]
    for database in databases:
        print database
        db = server[database]
        for doc_id in db:
            document = db[doc_id]
            if 'results' in document:
                changed = False
                for tweet in document['results']:
                    if 'rand_num' not in tweet:
                        tweet['rand_num'] = rand()
                        changed = True
                if changed:
                    # save the parent document, not the embedded tweet dict
                    db.save(document)
This fails because I do not have enough RAM to hold a copy of all my CouchDB databases.
First attempt: load databases in chunks
Following this question.
from itertools import izip_longest

def grouper(n, iterable, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    # grouper(3, 'ABCDEFG', 'x') --> ABC DEF Gxx
    args = [iter(iterable)] * n
    return izip_longest(fillvalue=fillvalue, *args)
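As a sanity check, grouper behaves like this on a plain sequence (the only change below is falling back to zip_longest, the Python 3 name for izip_longest):

```python
try:
    from itertools import izip_longest as zip_longest  # Python 2
except ImportError:
    from itertools import zip_longest  # Python 3

def grouper(n, iterable, fillvalue=None):
    "Collect data into fixed-length chunks or blocks"
    args = [iter(iterable)] * n
    return zip_longest(fillvalue=fillvalue, *args)

chunks = list(grouper(3, 'ABCDEFG', 'x'))
# -> [('A', 'B', 'C'), ('D', 'E', 'F'), ('G', 'x', 'x')]
```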
# ..Just showing the relevant part of add_random_fields()
   #..
        chunk_size = 100
        for chunk in grouper(chunk_size, server[database][document]['results']):
If I were iterating over a large list in python, I would write a generator expression. How can I do that in couchdb-python? Or, is there a better way?
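One way to frame the problem is a generator that fetches rows in fixed-size pages, so only one page is in memory at a time. The sketch below keeps the paging logic separate from CouchDB: `fetch_page` is a hypothetical callable that would, in practice, wrap something like `db.view('_all_docs', skip=offset, limit=limit)`; here it is stubbed with a plain list so the logic is self-contained.

```python
def paged(fetch_page, page_size=100):
    """Yield items one at a time, fetching them in fixed-size pages.

    fetch_page(offset, limit) must return a list of at most `limit`
    items starting at `offset`; a short or empty page ends iteration.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        for item in page:
            yield item
        if len(page) < page_size:
            break
        offset += page_size

# Stub standing in for a CouchDB query (hypothetical wiring):
#   def fetch_page(offset, limit):
#       return [row for row in db.view('_all_docs', skip=offset, limit=limit)]
data = list(range(250))
def fetch_page(offset, limit):
    return data[offset:offset + limit]

total = sum(1 for _ in paged(fetch_page, page_size=100))
# -> 250 items seen, but at most 100 held in memory at once
```

Note that on large CouchDB databases, `skip`-based paging gets progressively slower; paging by `startkey` (or couchdb-python's `Database.iterview`, which batches a view for you) is generally preferable, but the generator shape stays the same.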