I've got a system that needs to fetch the latest 200 lines of a very large public file every day. The file is exposed over a URL. Currently I run a simple script that wgets the whole file, tails the last 200 lines into a separate file, and then deletes the original.
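For context, the script is essentially the following minimal sketch. All the names here are placeholders, and the `file://` URL (fetched with `curl` for self-containedness, where the real job uses `wget` against an `http://` URL) stands in for the actual public file:

```shell
#!/bin/sh
# Stand-in for the remote ~250MB file (placeholder name, local so the sketch runs anywhere).
printf 'line %s\n' $(seq 1 500) > bigfile.txt
url="file://$PWD/bigfile.txt"

# 1. Download the entire file (the slow part in the real job).
curl -s "$url" -o bigfile.downloaded

# 2. Keep only the last 200 lines in a separate file.
tail -n 200 bigfile.downloaded > latest200.txt

# 3. Delete the downloaded original again.
rm bigfile.downloaded
```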
Because the original file is very large (about 250 MB), most of the script's runtime is spent downloading it.
The system works fine; it's just annoying that it takes so long, especially since I'm often sitting there waiting for it.
I found suggestions such as this one, but that basically does the same thing I do now: download the entire file and tail it.
Does anybody know a way to tail the public file without downloading it entirely? All tips are welcome!