You want to leverage transfer learning as much as possible: this means you most likely want to use a pre-trained model (e.g. one trained on Wikipedia data) and fine-tune it for your use case. Training a spacy.blank model from scratch requires lots of data, whereas fine-tuning a pre-trained model might need only a couple hundred labeled examples.
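A minimal sketch of that fine-tuning loop in spaCy v3 might look like the following. The DOCTOR training sentences are hypothetical placeholders for your own annotated data, and the `en_core_web_sm` model is just one example of a pre-trained pipeline (with a fallback to a blank pipeline in case it isn't downloaded):

```python
import random
import spacy
from spacy.training import Example

# Hypothetical annotations for the new DOCTOR label;
# replace with your own (text, {"entities": [...]}) pairs.
TRAIN_DATA = [
    ("Dr. Smith examined the patient.", {"entities": [(0, 9, "DOCTOR")]}),
    ("The clinic hired Dr. Jones last year.", {"entities": [(17, 26, "DOCTOR")]}),
]

try:
    nlp = spacy.load("en_core_web_sm")  # pre-trained English pipeline
    pretrained = True
except OSError:
    nlp = spacy.blank("en")             # fallback if the model isn't downloaded
    nlp.add_pipe("ner")
    pretrained = False

ner = nlp.get_pipe("ner")
ner.add_label("DOCTOR")                 # register the new entity type

examples = [Example.from_dict(nlp.make_doc(t), a) for t, a in TRAIN_DATA]

if pretrained:
    optimizer = nlp.resume_training()   # keep the existing weights
else:
    optimizer = nlp.initialize(lambda: examples)

# Only update the NER component; leave the rest of the pipeline frozen.
frozen = [p for p in nlp.pipe_names if p != "ner"]
with nlp.select_pipes(disable=frozen):
    for epoch in range(20):
        random.shuffle(examples)
        losses = {}
        nlp.update(examples, sgd=optimizer, losses=losses)
```

Note the use of `resume_training()` rather than `initialize()` on the pre-trained path: it returns an optimizer without resetting the model's existing weights.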
However, watch out for catastrophic forgetting: when you fine-tune only on your new labels, the model may 'forget' the old labels because they no longer appear in the training set.
For example, let's say you are trying to add the entity DOCTOR on top of a pre-trained NER model that labels LOC, PERSON and ORG. You label 200 DOCTOR records and fine-tune your model on them. You might find that the model now predicts every PERSON as a DOCTOR.
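One common mitigation is to keep the old labels represented in the fine-tuning data, for instance by running the original model over unlabeled text and using its predictions as "silver" annotations to mix in with your DOCTOR examples. A sketch, where the texts are hypothetical stand-ins for your own unlabeled corpus:

```python
import spacy

try:
    nlp = spacy.load("en_core_web_sm")  # the original pre-trained pipeline
except OSError:
    nlp = spacy.blank("en")  # fallback; silver entities will simply be empty

# Unlabeled sentences that contain the *old* entity types (PERSON, ORG, LOC).
unlabeled = [
    "Apple opened a new office in Paris.",
    "Angela Merkel met the delegation in Berlin.",
]

# Turn the model's own predictions into training annotations.
silver = []
for doc in nlp.pipe(unlabeled):
    ents = [(e.start_char, e.end_char, e.label_) for e in doc.ents]
    silver.append((doc.text, {"entities": ents}))

# Mix `silver` with your hand-labeled DOCTOR examples before calling
# nlp.update, so every training batch still contains the old labels.
```

The quality of the silver annotations is bounded by the original model, but they are usually enough to anchor the old labels while the new one is learned.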
That's about all one can say without knowing more about your data. Please check out the spaCy docs on training NER for more details.