Basically, for trec_eval you need a (human-generated) ground truth. It has to be in a special format:
query-number 0 document-id relevance
Given a collection like 101Categories (see the Wikipedia entry), that would be something like
Q1046   0   PNGImages/dolphin/image_0041.png    0
Q1046   0   PNGImages/airplanes/image_0671.png  128
Q1046   0   PNGImages/crab/image_0048.png   0
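
If you want to generate such a qrel file programmatically, a minimal Python sketch could look like the following. The query-to-category mapping and the rule "a document is relevant iff it comes from the query's category" are assumptions for illustration, not something trec_eval prescribes:

queries = {"Q1046": "airplanes"}  # hypothetical mapping: query id -> target category
documents = [
    "PNGImages/dolphin/image_0041.png",
    "PNGImages/airplanes/image_0671.png",
    "PNGImages/crab/image_0048.png",
]

with open("groundtruth.qrel", "w") as f:
    for qid, category in queries.items():
        for doc in documents:
            # relevance can also be graded (e.g. the 128 above); here it is binary
            relevance = 1 if f"/{category}/" in doc else 0
            # Format: query-number 0 document-id relevance
            f.write(f"{qid} 0 {doc} {relevance}\n")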
The query-number therefore identifies a query (e.g. a picture from a certain category that is used to find similar ones). The results from your search engine then have to be transformed to look like
query-number    Q0  document-id rank    score   Exp
or, with real values:
Q1046   Q0  PNGImages/airplanes/image_0671.png  1   1   srfiletop10
Q1046   Q0  PNGImages/airplanes/image_0489.png  2   0.974935    srfiletop10
Q1046   Q0  PNGImages/airplanes/image_0686.png  3   0.974023    srfiletop10
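
Producing that run file amounts to writing one line per retrieved document. A minimal sketch, assuming your engine returns a scored list per query, sorted best-first (the variable names and the run tag srfiletop10 are placeholders, not part of trec_eval):

results = {
    "Q1046": [  # (document-id, score), best match first
        ("PNGImages/airplanes/image_0671.png", 1.0),
        ("PNGImages/airplanes/image_0489.png", 0.974935),
        ("PNGImages/airplanes/image_0686.png", 0.974023),
    ],
}

with open("results", "w") as f:
    for qid, ranked in results.items():
        for rank, (doc, score) in enumerate(ranked, start=1):
            # Format: query-number Q0 document-id rank score run-tag
            f.write(f"{qid} Q0 {doc} {rank} {score} srfiletop10\n")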
as described here. You might have to adjust the path names for the "document-id". Then you can calculate the standard metrics with trec_eval groundtruth.qrel results.
trec_eval --help should give you some ideas for choosing the right parameters to get the measures needed for your thesis.
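For example, -q prints the per-query numbers in addition to the averages, and newer trec_eval versions let you select individual measures with -m (check your build's help output, since the available flags vary by version):

trec_eval -q groundtruth.qrel results
trec_eval -m map -m ndcg groundtruth.qrel results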
trec_eval does not send any queries; you have to prepare them yourself. trec_eval only does the analysis, given a ground truth and your results.
Some basic information can be found here and here.