The --executor-cores flag used in the spark-submit command simply sets the spark.executor.cores setting in the Spark configuration, so the two have the same effect :)
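To illustrate, the two submissions below should be equivalent (the class name and jar are just placeholders for your own application):

```
# Both give each executor 4 cores; the flag is shorthand for the config property.
spark-submit --class com.example.MyApp --master yarn --executor-cores 4 my-app.jar

spark-submit --class com.example.MyApp --master yarn --conf spark.executor.cores=4 my-app.jar
```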
A couple of things you may try:
1) You have tagged the question with YARN, so if you find that you are not utilizing all of your cores, you should have a look at Apache Hadoop Yarn - Underutilization of cores
2) Many memory issues on YARN are solved by increasing the memory overhead, i.e. by explicitly setting spark.yarn.executor.memoryOverhead. It defaults to max(384MB, 0.10 * executorMemory), which is frequently not enough (see the sketch below).
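For example, a submission along these lines raises the overhead explicitly; the 1024 MB value and the 8g executor memory are only illustrative, so tune them to your workload:

```
# Reserve 1 GB of off-heap overhead per executor instead of the
# default max(384MB, 0.10 * executorMemory). The value is in MB.
spark-submit --class com.example.MyApp --master yarn \
  --executor-memory 8g \
  --conf spark.yarn.executor.memoryOverhead=1024 \
  my-app.jar
```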