153

I'm trying to run an example from the Celery documentation.

I run: celeryd --loglevel=INFO

/usr/local/lib/python2.7/dist-packages/celery/loaders/default.py:64: NotConfigured: No 'celeryconfig' module found! Please make sure it exists and is available to Python.
  "is available to Python." % (configname, )))
[2012-03-19 04:26:34,899: WARNING/MainProcess]  

 -------------- celery@ubuntu v2.5.1
---- **** -----
--- * ***  * -- [Configuration]
-- * - **** ---   . broker:      amqp://guest@localhost:5672//
- ** ----------   . loader:      celery.loaders.default.Loader
- ** ----------   . logfile:     [stderr]@INFO
- ** ----------   . concurrency: 4
- ** ----------   . events:      OFF
- *** --- * ---   . beat:        OFF
-- ******* ----
--- ***** ----- [Queues]
 --------------   . celery:      exchange:celery (direct) binding:celery

tasks.py:

# -*- coding: utf-8 -*-
from celery.task import task

@task
def add(x, y):
    return x + y

run_task.py:

# -*- coding: utf-8 -*-
from tasks import add
result = add.delay(4, 4)
print (result)
print (result.ready())
print (result.get())

In the same folder, celeryconfig.py:

CELERY_IMPORTS = ("tasks", )
CELERY_RESULT_BACKEND = "amqp"
BROKER_URL = "amqp://guest:guest@localhost:5672//"
CELERY_TASK_RESULT_EXPIRES = 300

When I run "run_task.py":

On the Python console:

eb503f77-b5fc-44e2-ac0b-91ce6ddbf153
False

Errors on the celeryd server:

[2012-03-19 04:34:14,913: ERROR/MainProcess] Received unregistered task of type 'tasks.add'.
The message has been ignored and discarded.

Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.

The full contents of the message body was:
{'retries': 0, 'task': 'tasks.add', 'utc': False, 'args': (4, 4), 'expires': None, 'eta': None, 'kwargs': {}, 'id': '841bc21f-8124-436b-92f1-e3b62cafdfe7'}

Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/celery/worker/consumer.py", line 444, in receive_message
    self.strategies[name](message, body, message.ack_log_error)
KeyError: 'tasks.add'

Please explain what the problem is.

culebrón
  • 34,265
  • 20
  • 72
  • 110
Echeg
  • 2,321
  • 2
  • 21
  • 26

40 Answers

119

I think you need to restart the worker server. I met the same problem and solved it by restarting.

Wei An
  • 1,779
  • 4
  • 13
  • 18
  • 6
    This fixed it for me. If you're using celeryd scripts, the worker imports your task module(s) at startup. Even if you then create more task functions or alter existing ones, the worker will be using its in-memory copies as they were when it read them. – Mark Jul 23 '13 at 08:19
  • 4
    Note: you can verify that your task is or is not registered by running `celery inspect registered` – Nick Brady Mar 08 '16 at 18:52
  • 5
    You can also start celery with the `--autoreload` option, which will restart celery each time the code is changed. – Sergey Lyapustin Aug 02 '16 at 15:09
  • Unfortunately deprecated. One could use a solution from this link: https://avilpage.com/2017/05/how-to-auto-reload-celery-workers-in-development.html – Tomasz Szkudlarek May 17 '19 at 08:53
60

I had the same problem: the reason for "Received unregistered task of type.." was that the celeryd service didn't find and register the tasks on service start (btw, their list is visible when you start ./manage.py celeryd --loglevel=info).

These tasks should be declared in CELERY_IMPORTS = ("tasks", ) in the settings file.
If you have a special celery_settings.py file, it has to be declared on celeryd service start with --settings=celery_settings.py, as digivampire wrote.
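
A minimal sketch of such a settings module, assuming it is named celery_settings.py and uses the broker from the question (the names here are illustrative, not part of the original answer):

# celery_settings.py -- hypothetical settings module for the worker
BROKER_URL = "amqp://guest:guest@localhost:5672//"

# Modules the worker should import at startup so their tasks get registered
CELERY_IMPORTS = ("tasks",)

The module then has to be passed to the worker via --settings at service start, as described above; otherwise the worker never imports the tasks module and the tasks stay unregistered.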

Community
  • 1
  • 1
igolkotek
  • 1,687
  • 18
  • 16
  • 2
    Thanks, I actually had the issue because I started celery using ~/path/to/celery/celeryd instead of using the manage.py command! – Antoine Feb 17 '14 at 10:22
56

You can see the current list of registered tasks in the celery.registry.TaskRegistry class. It could be that your celeryconfig (in the current directory) is not in PYTHONPATH, so Celery can't find it and falls back to defaults. Simply specify it explicitly when starting Celery:

celeryd --loglevel=INFO --settings=celeryconfig

You can also set --loglevel=DEBUG, and you should see the problem immediately.

enticedwanderer
  • 4,346
  • 28
  • 24
45
app = Celery('proj',
             broker='amqp://',
             backend='amqp://',
             include=['proj.tasks'])

Please add include=['proj.tasks']. You need to go to the top directory, then execute this:

celery -A app.celery_module.celeryapp worker --loglevel=info

not

celery -A celeryapp worker --loglevel=info

In your celeryconfig.py, set imports = ("path.path.tasks",).

And please invoke the task from another module!
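
As a hedged illustration of the point about paths, here is roughly what the layout behind the command above could look like (the names app, celery_module, celeryapp, and app.tasks are taken from the command and are only illustrative):

# app/celery_module.py -- hypothetical module holding the Celery app
from celery import Celery

celeryapp = Celery('app',
                   broker='amqp://',
                   backend='amqp://',
                   include=['app.tasks'])  # absolute module path to the tasks

With this layout, you run celery -A app.celery_module.celeryapp worker --loglevel=info from the directory that contains the app/ package, so the module path given to -A matches the path the client code uses when it imports the tasks.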

heyue
  • 504
  • 4
  • 7
  • 3
    The `include` param needs to be added if you're using relative imports. I've solved my issue by adding it – CK.Nguyen Sep 28 '18 at 11:32
  • This should be the accepted answer; you need to call the worker from the top dir so that that path in the celery launch command matches the import path in the client. – Edward Gaere Feb 13 '22 at 20:34
  • I don't understand this at all. is there any change of a code sample or to explain what 'proj.tasks' means? are you giving the root folder name where settings.py is or the app where tasks.py is held? – codyc4321 Sep 04 '22 at 19:00
  • theres no code sample to explain what to put in celeryconfig.py – codyc4321 Sep 04 '22 at 19:00
42

Whether you use CELERY_IMPORTS or autodiscover_tasks, the important point is that the tasks can be found, and the names of the tasks registered in Celery must match the names the workers try to fetch.

When you launch Celery, say with celery worker -A project --loglevel=DEBUG, you should see the names of the tasks. For example, if I have a debug_task task in my celery.py:

[tasks]
. project.celery.debug_task
. celery.backend_cleanup
. celery.chain
. celery.chord
. celery.chord_unlock
. celery.chunks
. celery.group
. celery.map
. celery.starmap

If you can't see your tasks in the list, please check that your Celery configuration imports the tasks correctly, whether via --settings, --config, celeryconfig, or config_from_object.

If you are using celery beat, make sure the task name (the task key) you use in CELERYBEAT_SCHEDULE matches the name in the Celery task list.
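
As a hedged illustration, a beat entry for the debug_task shown above would reference the fully qualified registered name (the schedule key 'debug-every-minute' is just an example):

from datetime import timedelta

CELERYBEAT_SCHEDULE = {
    'debug-every-minute': {
        # must match the name in the [tasks] list, character for character
        'task': 'project.celery.debug_task',
        'schedule': timedelta(minutes=1),
    },
}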

Shih-Wen Su
  • 2,589
  • 24
  • 21
  • This was very helpful. The name of the task needs to match the 'task' key in your CELERYBEAT_SCHEDULE – ss_millionaire Dec 02 '18 at 01:23
  • *The important point is the tasks are able to be found and the name of the tasks registered in Celery should match the names the workers try to fetch. * Good point!!! – Light.G Jan 25 '19 at 07:09
  • This is the correct answer. Your task name in the BEAT_SCHEDULER should match whatever shows up on the list of autodiscovered tasks. So if you used `@task(name='check_periodically')` then it should match what you put in the beat schedule, IE: `CELERY_BEAT_SCHEDULE = { 'check_periodically': { 'task': 'check_periodically', 'schedule': timedelta(seconds=1) }` – Mormoran Aug 13 '19 at 14:03
30

I also had the same problem; I added

CELERY_IMPORTS = ("mytasks",)

in my celeryconfig.py file to solve it.

Martijn Pieters
  • 1,048,767
  • 296
  • 4,058
  • 3,343
Rohitashv Singhal
  • 4,517
  • 13
  • 57
  • 105
14

What worked for me was to add an explicit name to the celery task decorator. I changed my task declaration from @app.task to @app.task(name='module.submodule.task').

Here is an example

At first my task was like:

# tasks/test_tasks.py
@celery.task
def test_task():
    print("Celery Task  !!!!")

I changed it to:

# tasks/test_tasks.py
@celery.task(name='tasks.test_tasks.test_task')
def test_task():
    print("Celery Task  !!!!")

This method is helpful when you don't have a dedicated tasks.py file to include in the celery config.

Lukasz Dynowski
  • 11,169
  • 9
  • 81
  • 124
  • This also worked for me, but not if I indicated the full path in the `name` kwarg, but only if I just copied the name, so just `celery.task(name='test_task')`. Stupid, but it worked. Trying to figure out why – Chris Oct 05 '21 at 15:11
  • Also worked for me. – Edward Gaere Feb 23 '22 at 09:43
12

Using --settings did not work for me. I had to use the following to get it all to work:

celery --config=celeryconfig --loglevel=INFO

Here is the celeryconfig file that has the CELERY_IMPORTS added:

# Celery configuration file
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'amqp://'

CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TIMEZONE = 'America/Los_Angeles'
CELERY_ENABLE_UTC = True

CELERY_IMPORTS = ("tasks",)

My setup was a little trickier because I'm using supervisor to launch celery as a daemon.

Jarie Bolander
  • 384
  • 3
  • 4
11

For me, this error was solved by ensuring the app containing the tasks was included under Django's INSTALLED_APPS setting.
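
A minimal sketch of what that means, assuming a Django app called myapp that keeps its tasks in myapp/tasks.py (the names are illustrative):

# settings.py
INSTALLED_APPS = [
    # ... Django and third-party apps ...
    'myapp',  # the app that contains tasks.py must be listed here
]

# myapp/tasks.py
from celery import shared_task

@shared_task
def add(x, y):
    return x + y

If the project uses app.autodiscover_tasks(...), that call walks INSTALLED_APPS, so a missing entry means the tasks module is never imported and its tasks never registered.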

cars
  • 421
  • 7
  • 18
7

In my case, the issue was that my project was not picking up autodiscover_tasks properly.

In the celery.py file, the code for autodiscover_tasks was:

app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

I changed it to the following:

from django.apps import apps
app.autodiscover_tasks(lambda: [n.name for n in apps.get_app_configs()])

Best wishes to you.

Farid Chowdhury
  • 2,766
  • 1
  • 26
  • 21
6

I had this problem mysteriously crop up when I added some signal handling to my django app. In doing so I converted the app to use an AppConfig, meaning that instead of simply reading as 'booking' in INSTALLED_APPS, it read 'booking.app.BookingConfig'.

Celery doesn't understand what that means, so I added INSTALLED_APPS_WITH_APPCONFIGS = ('booking',) to my Django settings, and modified my celery.py from

app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

to

app.autodiscover_tasks(
    lambda: settings.INSTALLED_APPS + settings.INSTALLED_APPS_WITH_APPCONFIGS
)
Adam Barnes
  • 2,922
  • 21
  • 27
5

I had the same problem running tasks from Celery Beat. Celery doesn't like relative imports, so in my celeryconfig.py I had to explicitly set the full package name:

app.conf.beat_schedule = {
   'add-every-30-seconds': {
        'task': 'full.path.to.add',
        'schedule': 30.0,
        'args': (16, 16)
    },
}
Unheilig
  • 16,196
  • 193
  • 68
  • 98
Justin Regele
  • 59
  • 1
  • 1
  • I wish the celery docs had more examples with full package names. After seeing full.path.to.add in this answer, I found out I did not need the imports. I knew the solution was simple, and just needed to have a better example of the app.conf.beat_schedule. – zerocog Aug 11 '17 at 17:26
4

Try importing the Celery task in a Python Shell - Celery might silently be failing to register your tasks because of a bad import statement.

I had an ImportError exception in my tasks.py file that was causing Celery to not register the tasks in the module. All other module tasks were registered correctly.

This error wasn't evident until I tried importing the Celery task within a Python Shell. I fixed the bad import statement and then the tasks were successfully registered.
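
A quick sketch of that check, using the add task from the question (adjust the import to your own module path):

>>> from tasks import add   # an ImportError here is the same failure the worker hits
>>> add.name                # the registered name the worker has to know about
'tasks.add'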

3

This, strangely, can also be because of a missing package. Run pip to install all necessary packages: pip install -r requirements.txt

autodiscover_tasks wasn't picking up tasks that used missing packages.

kakoma
  • 1,179
  • 13
  • 17
2

I encountered this problem as well, but it is not quite the same, so just FYI. Recent upgrades cause this error message due to this decorator syntax:

ERROR/MainProcess] Received unregistered task of type 'my_server_check'.

@task('my_server_check')

It had to be changed to just

@task()

No clue why.

markwalker_
  • 12,078
  • 7
  • 62
  • 99
stonefury
  • 466
  • 4
  • 7
2

I did not have any issue with Django, but encountered this when I was using Flask. The solution was setting the config option:

celery worker -A app.celery --loglevel=DEBUG --config=settings

while with Django, I just had:

python manage.py celery worker -c 2 --loglevel=info
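
For reference, a hedged sketch of the kind of module that --config=settings might point at (the task module path is illustrative):

# settings.py -- hypothetical Celery config module loaded via --config=settings
CELERY_IMPORTS = ('app.tasks',)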

Nihal Sharma
  • 2,397
  • 11
  • 41
  • 57
2

If you are using the app's config in INSTALLED_APPS like this:

LOCAL_APPS = [
    'apps.myapp.apps.MyAppConfig',
]

Then in your app config, import the tasks in the ready method like this:

from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'apps.myapp'

    def ready(self):
        try:
            import apps.myapp.signals  # noqa F401
            import apps.myapp.tasks
        except ImportError:
            pass
Gourav Chawla
  • 470
  • 1
  • 4
  • 12
2

Did you include your tasks.py file (or wherever your async methods are stored)?

app = Celery('APP_NAME', broker='redis://redis:6379/0', include=['app1.tasks', 'app2.tasks', ...])
Martin Nowosad
  • 791
  • 8
  • 15
1

I have solved my problem: my 'task' is under a Python package named 'celery_task'. When I leave this package and run the command celery worker -A celery_task.task --loglevel=info, it works.

HengHeng
  • 11
  • 1
1

As some other answers have already pointed out, there are many reasons why celery would silently ignore tasks, including dependency issues but also any syntax or code problem.

One quick way to find them is to run:

./manage.py check

Many times, after fixing the errors that are reported, the tasks are recognized by celery.

Pablo Guerrero
  • 936
  • 1
  • 12
  • 22
0

If you are running into this kind of error, there are a number of possible causes, but the solution I found was that my celeryd config file in /etc/defaults/celeryd was configured for standard use, not for my specific Django project. As soon as I converted it to the format specified in the celery docs, all was well.

tufelkinder
  • 1,176
  • 1
  • 15
  • 37
0

The solution for me was to add this line to /etc/default/celeryd:

CELERYD_OPTS="-A tasks"

Because when I run these commands:

celery worker --loglevel=INFO
celery worker -A tasks --loglevel=INFO

Only the latter command was showing task names at all.

I also tried adding a CELERY_APP line to /etc/default/celeryd, but that didn't work either.

CELERY_APP="tasks"
fatihpense
  • 618
  • 9
  • 11
0

I had the issue with PeriodicTask classes in django-celery: while their names showed up fine when starting the celery worker, every execution triggered:

KeyError: u'my_app.tasks.run'

My task was a class named 'CleanUp', not just a method called 'run'.

When I checked the 'djcelery_periodictask' table, I saw outdated entries; deleting them fixed the issue.
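
If you prefer not to touch the table directly, here is a hedged sketch of the same clean-up through the django-celery ORM (run inside ./manage.py shell; it assumes the database scheduler and uses the stale task name from this answer):

from djcelery.models import PeriodicTask

stale = PeriodicTask.objects.filter(task='my_app.tasks.run')
print(stale)    # review the outdated entries first
stale.delete()  # remove them so beat stops dispatching the unknown task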

djangonaut
  • 7,233
  • 5
  • 37
  • 52
0

I found that one of our programmers added the following line to one of the imported modules:

os.chdir(<path_to_a_local_folder>)

This caused the Celery worker to change its working directory from the project's default working directory (where it could find the tasks) to a different directory (where it couldn't find the tasks).

After removing this line of code, all tasks were found and registered.

Nathaniel Ford
  • 20,545
  • 20
  • 91
  • 102
Amit Zitzman
  • 71
  • 1
  • 6
0

Just to add my two cents for my case with this error...

My path is /vagrant/devops/test with app.py and __init__.py in it.

When I run cd /vagrant/devops/ && celery worker -A test.app.celery --loglevel=info I am getting this error.

But when I run it like cd /vagrant/devops/test && celery worker -A app.celery --loglevel=info everything is OK.

Kostas Demiris
  • 3,415
  • 8
  • 47
  • 85
0

Celery doesn't support relative imports, so in celeryconfig.py you need an absolute import:

CELERYBEAT_SCHEDULE = {
        'add_num': {
            'task': 'app.tasks.add_num.add_nums',
            'schedule': timedelta(seconds=10),
            'args': (1, 2)
        }
}
Eds_k
  • 944
  • 10
  • 12
0

An additional item for a really useful list.

I have found Celery unforgiving of errors in tasks (or at least I haven't been able to trace the appropriate log entries), and it simply doesn't register them. I have had a number of issues running Celery as a service, predominantly permissions related.

The latest was related to permissions for writing to a log file. I had no issues in development or when running Celery at the command line, but the service reported the task as unregistered.

I needed to change the log folder permissions to enable the service to write to it.

0

My 2 cents

I was getting this in a Docker image using Alpine. The Django settings referenced /dev/log for logging to syslog. The Django app and celery worker were both based on the same image. The entrypoint of the Django app image launched syslogd on start, but the one for the celery worker did not. This caused things like ./manage.py shell to fail because there wouldn't be any /dev/log. The celery worker was not failing; instead, it silently ignored the rest of the app launch, which included loading shared_task entries from applications in the Django project.

Shadi
  • 9,742
  • 4
  • 43
  • 65
0

In my case, the error was because one container created files in a folder that was mounted on the host file system with docker-compose.

I just had to remove the files created by the container on the host system, and I was able to launch my project again.

sudo rm -Rf foldername

(I had to use sudo because the files were owned by the root user)

Docker version: 18.03.1

jjacobi
  • 385
  • 2
  • 9
0

If you use autodiscover_tasks, make sure that the functions to be registered live in tasks.py, not in any other file; otherwise Celery cannot find the functions you want to register.

Using app.register_task will also do the job, but it seems a little naive.

Please refer to this official specification of autodiscover_tasks:

def autodiscover_tasks(self, packages=None, related_name='tasks', force=False):
    """Auto-discover task modules.

    Searches a list of packages for a "tasks.py" module (or use
    related_name argument).

    If the name is empty, this will be delegated to fix-ups (e.g., Django).

    For example if you have a directory layout like this:

    .. code-block:: text

        foo/__init__.py
           tasks.py
           models.py

        bar/__init__.py
            tasks.py
            models.py

        baz/__init__.py
            models.py

    Then calling ``app.autodiscover_tasks(['foo', 'bar', 'baz'])`` will
    result in the modules ``foo.tasks`` and ``bar.tasks`` being imported.

    Arguments:
        packages (List[str]): List of packages to search.
            This argument may also be a callable, in which case the
            value returned is used (for lazy evaluation).
        related_name (str): The name of the module to find.  Defaults
            to "tasks": meaning "look for 'module.tasks' for every
            module in ``packages``."
        force (bool): By default this call is lazy so that the actual
            auto-discovery won't happen until an application imports
            the default modules.  Forcing will cause the auto-discovery
            to happen immediately.
    """
W.Perrin
  • 4,217
  • 32
  • 31
0

Write the correct path to the tasks file:

app.conf.beat_schedule = {
    'send-task': {
        'task': 'appdir.tasks.testapp',
        'schedule': crontab(minute='*/5'),
    },
}

Kairat Koibagarov
  • 1,385
  • 15
  • 9
0

When running Celery with the "celery -A conf worker -l info" command, all the tasks get listed in the log; for example, I had ". conf.celery.debug_task". I was getting the error because I was not giving this exact task path. So kindly recheck this by copying and pasting the exact task name.

0
app = Celery(__name__, broker=app.config['CELERY_BROKER'],
             backend=app.config['CELERY_BACKEND'],
             include=['util.xxxx', 'util.yyyy'])
node_modules
  • 4,790
  • 6
  • 21
  • 37
Dave2034
  • 11
  • 1
0

The answer to your problem lies in THE FIRST LINE of the output you provided in your question: /usr/local/lib/python2.7/dist-packages/celery/loaders/default.py:64: NotConfigured: No 'celeryconfig' module found! Please make sure it exists and is available to Python. Without the right configuration, Celery is not able to do anything.

The most likely reason it can't find celeryconfig is that it is not in your PYTHONPATH.
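
One way around that, sketched for an app-style Celery setup (newer than the celeryd default loader in the question; on old versions you can instead pass --settings=celeryconfig as other answers show):

from celery import Celery

app = Celery('tasks', broker='amqp://guest:guest@localhost:5672//')
# celeryconfig.py must be importable (current directory or PYTHONPATH) for this to work
app.config_from_object('celeryconfig')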

DejanLekic
  • 18,787
  • 4
  • 46
  • 77
0

This solved my issue (put it inside your create_app() function):

celery.conf.update(app.config)

class ContextTask(celery.Task):
    def __call__(self, *args, **kwargs):
        with app.app_context():
            return self.run(*args, **kwargs)

celery.Task = ContextTask
Nadhem Maaloul
  • 433
  • 5
  • 11
0

If you're using Docker, as said here, this will kill your pain:

docker stop $(docker ps -a -q)
AgE
  • 387
  • 2
  • 8
  • 1
    If you are using docker or docker-compose this is the answer. Re-build, for some reason, it doesn't work quite right. I have my suspicions why, but not the time to explore them. Not just restart, rebuild. – ThatGuyRob Dec 07 '21 at 20:10
  • Probably, your app context and celery worker's context don't match. Using celery with 3 different frameworks taught me the real reason. :D – AgE Jun 14 '22 at 17:26
  • This also stops *all* docker containers... – BlakBat Aug 24 '23 at 19:20
0

For me, restarting the broker (Redis) solved it.


The task already showed up correctly in Celery's task list and all relevant Django settings and imports worked fine.

My broker was running before I wrote the task, and restarting Celery and Django alone didn't solve it.

However, stopping Redis with Ctrl+C and then restarting it with redis-server helped Celery to correctly identify the task.

martin-martin
  • 3,274
  • 1
  • 33
  • 60
-1

In my case, the wrong task name had already been persisted by celery beat... It was still early enough for me to nuke everything.

karuhanga
  • 3,010
  • 1
  • 27
  • 30
-1

I was getting the same kind of error in Flask when I was running the server using python app.py. To solve it, I ran the server using flask run.

-1

After searching for a whole day, I finally got it working after cleaning the .pyc files:

py3clean .
Om Prakash
  • 2,675
  • 4
  • 29
  • 50