I trained my own SyntaxNet model for the Spanish-AnCora UD corpus following the instructions from here, and the training itself finished without errors.
After all the steps, the final files it created were the following (a rough sketch of the training command is included after the list):
-category-map  
-char-map  
-checkpoint  
-context  
-graph  
-label-map  
-latest-model  
-latest-model.meta  
-lcword-map  
-model  
-model.meta  
-prefix-table  
-status  
-suffix-table  
-tag-map  
-tag-to-category  
-tagged-dev-corpus  
-tagged-training-corpus  
-tagged-tunning-corpus  
-word-map 
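For reference, the per-stage training command from the upstream SyntaxNet documentation looks roughly like the sketch below. The corpus names, hyperparameters and output path are placeholders taken from the README's POS-tagger example, not necessarily the exact values I used:

```sh
# Sketch of the first training stage (POS tagger) as described in the
# SyntaxNet README; --compute_lexicon is the step that writes the term
# maps (word-map, tag-map, label-map, ...) listed above.
# OUT_DIR, the corpus names and the hyperparameters are placeholders.
OUT_DIR=models/es_ancora
bazel-bin/syntaxnet/parser_trainer \
  --task_context=syntaxnet/models/parsey_universal/context.pbtxt \
  --arg_prefix=brain_pos \
  --compute_lexicon \
  --graph_builder=greedy \
  --training_corpus=training-corpus \
  --tuning_corpus=tuning-corpus \
  --output_path=$OUT_DIR \
  --batch_size=32 \
  --decay_steps=3600 \
  --hidden_layer_sizes=128 \
  --learning_rate=0.08 \
  --momentum=0.9 \
  --seed=0 \
  --params=128-0.08-3600-0.9-0
```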
The context.pbtxt file used for the training was the one from syntaxnet/models/parsey_universal.
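For reference, a corpus input entry in a SyntaxNet context.pbtxt has the general shape below; the file name is only a placeholder for the UD Spanish-AnCora training file, not necessarily the exact path I used:

```
input {
  name: 'training-corpus'
  record_format: 'conll-sentence'
  Part {
    file_pattern: 'UD_Spanish-AnCora/es_ancora-ud-train.conllu'
  }
}
```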
Then, when I try to test it by calling parser.sh from syntaxnet/models/parsey_universal, it returns a couple of errors:  
F syntaxnet/term_frequency_map.cc:63] Check failed: ::tensorflow::Status::OK() == (tensorflow::Env::Default()->NewRandomAccessFile(filename, &file)) (OK vs. Not found: syntaxnet/models/parsey_universal/modeltest/char-ngram-map)
F syntaxnet/term_frequency_map.cc:63] Check failed: ::tensorflow::Status::OK() == (tensorflow::Env::Default()->NewRandomAccessFile(filename, &file)) (OK vs. Not found: syntaxnet/models/parsey_universal/modeltest/morphology-map) 
Then I downloaded the pretrained Spanish model from here and checked its files. It looks like two files are missing: the pretrained model ships with char-ngram-map and morphology-map by default, but the model I trained does not have them.
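The check itself was just a comparison of the two directory listings, along these lines (both paths are placeholders for where the pretrained model was unpacked and where my training output lives):

```sh
# Compare the files shipped with the pretrained Spanish model against
# the files produced by my own training run (paths are placeholders).
diff <(ls -1 Spanish/) <(ls -1 syntaxnet/models/parsey_universal/modeltest/)
```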
So my questions are: how do I get these files during the SyntaxNet training phase?
Are there other ways to produce them?
Should I test the model in a different way?
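For example, one different way I could try is calling parser_eval directly on my own model directory instead of going through the parsey_universal wrapper script, modeled on how syntaxnet/demo.sh runs the bundled English model. The paths, hyperparameters and input/output names below are placeholders and would have to match my own context.pbtxt:

```sh
# Hedged sketch: run just the tagger stage of my trained model with
# parser_eval, bypassing the parsey_universal script.
# MY_MODEL, the hidden layer size and the model file name are placeholders.
MY_MODEL=syntaxnet/models/parsey_universal/modeltest
echo 'El gato come pescado .' | \
bazel-bin/syntaxnet/parser_eval \
  --input=stdin \
  --output=stdout-conll \
  --hidden_layer_sizes=64 \
  --arg_prefix=brain_tagger \
  --graph_builder=greedy \
  --task_context=$MY_MODEL/context.pbtxt \
  --model_path=$MY_MODEL/model \
  --batch_size=32 \
  --alsologtostderr
```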