I have been stuck on this error for a few days and desperately need help.
In standalone deploy mode, I am trying to run sample jobs, but no matter how simple the job is (even starting spark-shell and counting an array of numbers), the following error keeps appearing (one full cycle of the looping log output):
INFO AppClient$ClientActor: Executor added: app-20150604024526-0001/3 on worker-20150604023842-localhost-54417 (localhost:54417) with 1 cores
INFO SparkDeploySchedulerBackend: Granted executor ID app-20150604024526-0001/3 on hostPort localhost:54417 with 1 cores, 512.0 MB RAM
INFO AppClient$ClientActor: Executor updated: app-20150604024526-0001/3 is now RUNNING
INFO AppClient$ClientActor: Executor updated: app-20150604024526-0001/3 is now LOADING
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
INFO AppClient$ClientActor: Executor updated: app-20150604024526-0001/2 is now EXITED (Command exited with code 1)
INFO SparkDeploySchedulerBackend: Executor app-20150604024526-0001/2 removed: Command exited with code 1
ERROR SparkDeploySchedulerBackend: Asked to remove non-existent executor 2
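For reference, this is the kind of minimal job I am running in spark-shell (a sketch of my repro; the master URL below is a placeholder for my actual standalone master):

// launched with: ./bin/spark-shell --master spark://localhost:7077   (placeholder master URL)
val nums = sc.parallelize(1 to 100)  // tiny in-memory RDD, nothing custom
nums.count()                         // the job hangs here and the log cycle above repeats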
Any insights or suggestions would be deeply appreciated! Thanks a lot!