In the latest version of requests, the Session object comes equipped with cookie persistence; see the docs for requests Session objects. So you don't need to add the cookies manually. Just:
import requests

# A Session persists cookies across requests automatically
s = requests.Session()

# Log in; the session stores whatever cookies the server sets
login_data = dict(userName='user', password='pwd')
ra = s.post('http://example/checklogin.php', data=login_data)
print(ra.content)
print(ra.headers)

# Later requests on the same session send those cookies back
ans = dict(answer='5')
r = s.post('http://example/level1.php', data=ans)
print(r.content)
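For contrast, plain module-level calls do not carry cookies between requests, which is exactly why the Session is needed here (a minimal sketch using the same placeholder URLs):

import requests

# Without a Session, each call starts with an empty cookie jar
ra = requests.post('http://example/checklogin.php',
                   data=dict(userName='user', password='pwd'))
# ra.cookies holds what the server set, but nothing forwards it...
r = requests.post('http://example/level1.php', data=dict(answer='5'))
# ...so this request arrives logged out unless you pass cookies=ra.cookies explicitly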
Just print the cookies to check whether you are logged in:
for cookie in s.cookies:
    print(cookie.name, cookie.value)
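If you prefer to see them as a plain dict, requests ships a helper for that (the cookie name in the comment is just an example of what a PHP site might set):

import requests

s = requests.Session()
# ... log in as above, then:
cookies = requests.utils.dict_from_cookiejar(s.cookies)
print(cookies)  # e.g. {'PHPSESSID': '...'} if the login set a session cookie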
Also, is the example site yours? If not, maybe the site rejects bots/crawlers!
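A quick way to tell is to check the status code of the login response; sites that block crawlers commonly answer with 403 Forbidden (a minimal sketch, same placeholder URL):

import requests

s = requests.Session()
ra = s.post('http://example/checklogin.php',
            data=dict(userName='user', password='pwd'))
# 403 (or a redirect back to the login page) often means the server
# is turning away non-browser clients
print(ra.status_code)
print(ra.ok)  # False for 4xx/5xx errors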
In that case you can change your request's User-Agent so it looks like you are using a browser. For example:
import requests

s = requests.Session()

# Pretend to be a regular desktop browser
headers = {
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.62 Safari/537.36'
}

login_data = dict(userName='user', password='pwd')
ra = s.post('http://example/checklogin.php', data=login_data, headers=headers)
print(ra.content)
print(ra.headers)

ans = dict(answer='5')
r = s.post('http://example/level1.php', data=ans, headers=headers)
print(r.content)
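A small design note: instead of passing headers= on every call, you can set the headers once on the session and they will be merged into every request it makes (a minimal sketch):

import requests

s = requests.Session()
# Headers set on the session are sent with every request it makes
s.headers.update({
    'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/29.0.1547.62 Safari/537.36'
})

ra = s.post('http://example/checklogin.php',
            data=dict(userName='user', password='pwd'))
print(ra.request.headers['User-Agent'])  # confirms the header went out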