I know I should use NumPy for big arrays like the list I have, but I can't: I am solving a problem on Kattis (an online judge where you submit code and it gets compiled and run), and submitting code containing the line import numpy gives a runtime error.
I think the part that takes the most time is reading and sorting all the data. I read and sort it like this:
def sub(point):
    # subtract the second coordinate from the first one
    return point[0] - point[1]

def readData():
    import sys
    lines = [line.strip() for line in sys.stdin]
    # a list of tuples: [(x1, y1), ..., (xn, yn)]
    points = [tuple(int(num) for num in line.split()) for line in lines]
    sortedBySums = sorted(points, key=lambda p: (sum(p), p[0]))
    sortedBySubs = sorted(points, key=lambda p: (sub(p), p[0]))
    return sortedBySums, sortedBySubs
Later on I also access elements of the tables sortedBySums and sortedBySubs, so maybe I could speed up the lookup too, but I don't know how. I am using list.index(element) for element lookup. The sub(x) function just subtracts the second coordinate from the first one.
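To make the lookup part concrete, here is roughly what I mean (the point values and the target are made up just for illustration):

```python
# Roughly how I look elements up now: list.index does a linear
# scan from the start of the list, so each lookup is O(n).
points = [(1, 2), (3, 4), (5, 6)]
sortedBySums = sorted(points, key=lambda p: (sum(p), p[0]))

target = (3, 4)
i = sortedBySums.index(target)  # linear scan through the list
```

Since I do many such lookups, I suspect this linear scan is part of my running time, but I'm not sure what the idiomatic replacement would be.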
Is there any way I could speed this process up? From what I have learned, the built-in sort function should be really fast, and a list comprehension like this should also be a lot faster than a regular for loop, but is there anything else that could drastically reduce the running time?