I'm running a Spark job on a Hadoop cluster using EC2 machines. Say each machine has N cores: what value should I set for the spark.executor.cores configuration?
N-1? Or should I leave more cores spare?
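For context, here is a minimal sketch of where this property is applied, assuming the Scala API and a hypothetical 16-core machine (so N-1 would be 15); the values are illustrative only, and whether N-1 is the right choice is exactly what I'm asking:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example: 16-core EC2 workers, so N - 1 = 15.
// The numbers only illustrate where the setting goes, not a recommendation.
val spark = SparkSession.builder()
  .appName("executor-cores-question")
  .config("spark.executor.cores", "15") // cores given to each executor
  .getOrCreate()
```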
Asked by nirkov
- Please see the [The number of cores vs. the number of executors](https://stackoverflow.com/questions/24622108/apache-spark-the-number-of-cores-vs-the-number-of-executors) discussion for some things that should be considered in the decision-making. – Leonid Vasilev Nov 04 '22 at 11:00
- This is a very interesting topic that I will check, thank you. But my question is a bit different: I'm asking what the correct number is, given that I have machines with N cores (I can't change the machine type in this example). I'm trying to understand whether there is any reason to use fewer than N-1. – nirkov Nov 04 '22 at 12:00