An easy way to get real-time results from Celery tasks

In daily web development work there are often background tasks that need to run for a long time, such as sending emails, uploading files, or compiling software. When such a task completes, the front-end page should update promptly,

so users know that the task has finished or that something has changed.

This article walks through the simplest implementation I know of:

Socket.IO on the front end + Flask-SocketIO + Celery on the back end

1. Front-end page

Create a test.html file with the following content.

<!DOCTYPE html>
<html lang="en">
<head>
  <title>Socket.IO Test</title>
  <script src="https://cdn.socket.io/4.0.1/socket.io.min.js"></script>
  <script>
    var socket = io.connect('http://127.0.0.1:5000');

    socket.on('connect', function() {
      console.log('Connected:', socket.id);
    });

    socket.on('disconnect', function() {
      console.log('Disconnected');
    });

    socket.on('test event', function(data) {
      console.log('Received message:', data);
      // alert(data.message);
    });

    function sendMessage() {
      var message = document.getElementById('message').value;
      socket.emit('my_event', message);
    }
  </script>
</head>
<body>
  <input type="text" id="message">
  <button onclick="sendMessage()">Send</button>
</body>
</html>

The page is very simple: it loads socket.io.min.js from a CDN, establishes a WebSocket connection with the backend, logs any 'test event' messages it receives, and has a simple send button that emits a 'my_event' message. It is very beginner-friendly.
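As an optional aside (not part of the original setup), the same connection can also be exercised from Python instead of a browser, using the python-socketio client package. The snippet below is a minimal sketch, assuming that package is installed (pip install "python-socketio[client]") and that the Flask server from the next section is running:

import socketio  # python-socketio client, an extra dependency not used elsewhere in this article

sio = socketio.Client()

@sio.on('test event')
def on_test_event(data):
    # mirrors what the browser console prints
    print('Received message:', data)

sio.connect('http://127.0.0.1:5000')
sio.emit('my_event', 'hello from python')
sio.wait()  # keep the client alive so it keeps receiving events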

If you still don't understand, you can follow the official account and leave a message; I will try my best to answer.

2. Flask backend

Create an app.py file and install the required dependency packages (for this example: Flask, Flask-SocketIO, Celery, and the Redis client, plus a local Redis server).

from flask import Flask
from flask_socketio import SocketIO
from celery import Celery

app = Flask(__name__)
app.config['SECRET_KEY'] = 'my_secret_key'
socketio = SocketIO(app, cors_allowed_origins="*", message_queue='redis://', logger=True, engineio_logger=True)

# Celery configuration
app.config['CELERY_BROKER_URL'] = 'redis://localhost:6379/0'
app.config['CELERY_RESULT_BACKEND'] = 'redis://localhost:6379/0'

celery = Celery(app.name, broker=app.config['CELERY_BROKER_URL'], backend=app.config['CELERY_RESULT_BACKEND'])
celery.conf.beat_schedule = {
    'task_interval': {
        'task': 'app.long_running_task',
        'schedule': 10.0,  # time interval in seconds
    }
}

@celery.task
def long_running_task():
    # simulate task execution
    import time
    time.sleep(8)
    # send notification via SocketIO after task completion
    socketio.emit('test event', {'message': 'Job done! time is: {}'.format(time.ctime())})

@socketio.on('connect')
def handle_connect():
    print('connection successful')

@socketio.on('disconnect')
def handle_disconnect():
    print('Client disconnected')

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0', port=5000)

The code above runs the simulated task long_running_task every 10 seconds via the beat schedule. When the task finishes, it sends a message to the front end saying the work is done, together with the current time.
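One thing worth noting: the front-end button emits a 'my_event' message, but app.py above never registers a handler for it, so Flask-SocketIO will only log it. If you want the button to do something useful, a handler could be added; the sketch below is my own illustration (the handler name and behavior are not from the original code) and uses Celery's standard delay() call to queue the task on demand:

# hypothetical addition to app.py: react to the front-end button
@socketio.on('my_event')
def handle_my_event(message):
    print('Received from browser:', message)
    # queue the Celery task right away instead of waiting for the beat schedule
    long_running_task.delay()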

The key points are:

socketio = SocketIO(app, cors_allowed_origins="*", message_queue='redis://', logger=True, engineio_logger=True)

This enables cross-origin access and turns on debug logging, which makes it easy to inspect the sent and received messages on the back end. The message_queue configuration item is also very important: people who don't read the documentation carefully may spend a day debugging a production deployment,

wondering why the front end never receives any messages.
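The reason message_queue matters is that the Celery worker is a separate process from the Flask-SocketIO server: the queue (Redis here) carries emits from the worker to the server, which then forwards them to the browsers. Flask-SocketIO also documents a write-only pattern for emitting from an external process; a minimal sketch of it, assuming the same Redis instance as app.py, looks like this:

from flask_socketio import SocketIO

# no Flask app here: this instance only writes events into the Redis queue;
# the actual server process (socketio.run in app.py) delivers them to the browsers
external_sio = SocketIO(message_queue='redis://localhost:6379/0')
external_sio.emit('test event', {'message': 'emitted from another process'})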

3. Running it in a test environment

Front end: run firefox test.html (i.e., open the test.html file in a browser), then open the browser's debugging console.

Back end: run each of the following in its own terminal.

  • Run the server
(.venv) ➜ flask-socketio-celery python app.py
Server initialized for threading.
 * Serving Flask app 'app'
 * Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:5000
 * Running on http://192.168.124.5:5000
Press CTRL+C to quit
OSxZbltBcO8kRT9NAAAA: Sending packet OPEN data {'sid': 'OSxZbltBcO8kRT9NAAAA', 'upgrades': ['websocket'], 'pingTimeout': 20000, 'pingInterval': 25000}
  • Start redis server
➜ ~ redis-server
21407:C 04 Aug 2024 15:39:58.061 # WARNING Memory overcommit must be enabled! Without it, a background save or replication may fail under low memory condition. Being disabled, it can also cause failures without low memory condition, see https://github.com/jemalloc/jemalloc/issues/1328. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
21407:C 04 Aug 2024 15:39:58.061 * oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
21407:C 04 Aug 2024 15:39:58.061 * Redis version=7.2.5, bits=64, commit=00000000, modified=0, pid=21407, just started
21407:C 04 Aug 2024 15:39:58.061 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
21407:M 04 Aug 2024 15:39:58.061 * Increased maximum number of open files to 10032 (it was originally set to 1024).
21407:M 04 Aug 2024 15:39:58.061 * monotonic clock: POSIX clock_gettime
[Redis ASCII-art banner: Redis 7.2.5 (00000000/0) 64 bit, standalone mode, port 6379, PID 21407, https://redis.io]
21407:M 04 Aug 2024 15:39:58.061 * Server initialized
21407:M 04 Aug 2024 15:39:58.061 * Loading RDB produced by version 7.2.5
  • Run Celery beat
(.venv) ➜ flask-socketio-celery celery -A app.celery beat -l info
Server initialized for threading.
celery beat v5.4.0 (opalescent) is starting.
__ - ... __ - _
LocalTime -> 2024-08-04 15:41:43
Configuration ->
    . broker -> redis://localhost:6379/0
    . loader -> celery.loaders.app.AppLoader
    . scheduler -> celery.beat.PersistentScheduler
    . db -> celerybeat-schedule
    . logfile -> [stderr]@%INFO
    . maxinterval -> 5.00 minutes (300s)
[2024-08-04 15:41:43,989: INFO/MainProcess] beat: Starting...
[2024-08-04 15:41:50,632: INFO/MainProcess] Scheduler: Sending due task task_interval (app.long_running_task)
  • Run celery worker
(.venv) ➜ flask-socketio-celery celery -A app.celery worker -l info
Server initialized for threading.

 -------------- celery@minipc v5.4.0 (opalescent)
--- ***** -----
-- ******* ---- Linux-6.10.2-arch1-1-x86_64-with-glibc2.40 2024-08-04 15:43:05
- *** --- * ---
- ** ---------- [config]
- ** ---------- .> app:         app:0x78e41973b140
- ** ---------- .> transport:   redis://localhost:6379/0
- ** ---------- .> results:     redis://localhost:6379/0
- *** --- * --- .> concurrency: 16 (prefork)
-- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
--- ***** -----
 -------------- [queues]
                .> celery           exchange=celery(direct) key=celery

[tasks]
  . app.long_running_task

[2024-08-04 15:43:05,949: WARNING/MainProcess] /home/mephisto/github/flask-socketio-celery/.venv/lib/python3.12/site-packages/celery/worker/consumer/consumer.py:508: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
  warnings.warn(

[2024-08-04 15:43:05,954: INFO/MainProcess] Connected to redis://localhost:6379/0
[2024-08-04 15:43:05,954: WARNING/MainProcess] /home/mephisto/github/flask-socketio-celery/.venv/lib/python3.12/site-packages/celery/worker/consumer/consumer.py:508: CPendingDeprecationWarning: The broker_connection_retry configuration setting will no longer determine
whether broker connection retries are made during startup in Celery 6.0 and above.
If you wish to retain the existing behavior for retrying connections on startup,
you should set broker_connection_retry_on_startup to True.
  warnings.warn(

[2024-08-04 15:43:05,957: INFO/MainProcess] mingle: searching for neighbors
[2024-08-04 15:43:06,962: INFO/MainProcess] mingle: all alone
[2024-08-04 15:43:06,970: INFO/MainProcess] celery@minipc ready.
[2024-08-04 15:43:06,971: INFO/MainProcess] Task app.long_running_task[324cef1b-a8e4-43b0-925d-9e28c841e3b5] received

4. View the results

I am too lazy to type, just look at the screenshots.

[Screenshot: socketio-flask]

The left side shows the browser debug console for the test.html file; on the right, clockwise, are Flask, Redis, Celery beat, and the Celery worker.

The console on the left prints the expected message every ten seconds.

Writing an article like this takes at least two hours each time, the returns are almost zero, and my motivation is gradually fading.

My new job is busier than before, so the update frequency has to come down. Today is the weekend (if you don't believe it, look at the timestamps in the screenshot above), when I would normally be playing games. We wage earners don't get much time for entertainment, and we still have to eat!

I gave up that entertainment time to write this small article. If it helps you, or if you have questions, you can follow the official account; that also counts as support.

In addition, deploying this in a production environment is much more complicated than what is shown here.

For example, with Vue 3, if the page needs to refresh automatically the moment the backend task finishes, you have to use watch to react to the incoming data (polling also works, but it is uglier; see the status-endpoint sketch after this list);

For backend deployment, configuring the Nginx proxy is involved; if you use gunicorn and don't read the official documentation carefully, an improper configuration can cost you a long time while the frontend receives nothing;

The most outrageous pitfall: if you write the URL wrong, Socket.IO will send part of the wrong URL to the backend as a protocol message, and the connection can never be established.
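For the polling alternative mentioned in the first point, the backend half is typically a small status endpoint that the frontend can poll. The sketch below is a hypothetical addition (the /task-status route is my own naming, not part of the original app) built on Celery's AsyncResult API:

# hypothetical addition to app.py: a status endpoint for the polling approach
from celery.result import AsyncResult

@app.route('/task-status/<task_id>')
def task_status(task_id):
    # look up the task state in the Redis result backend
    result = AsyncResult(task_id, app=celery)
    return {'task_id': task_id, 'state': result.state}

The task_id would come from the id attribute of the AsyncResult that long_running_task.delay() returns.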

Writing up and explaining these details would take dozens more minutes. Using my free weekend time to write is already my limit, I think; after all, it is not profitable, though paid Q&A makes me happy.

Lastmod: Wednesday, June 25, 2025
