With a message queue applied, will the emit event handler keep running in the same process when the events are emitted by the client with the same sid?
#727
Specifically, if `request.sid` in `test_connect()` and `test_disconnect()` is the same, is the pid the same too? Or is it undefined because the Celery message queue is running?

My aim is, inside some emit event handler, to create a child process to do CPU-bound work, and then emit events through the message queue (carrying progress and data) from that child process to the specific client with the same sid as the handler mentioned above. So I need to record this child process's info, so that it can be terminated when the client disconnects.

Thank you!
```python
........
from eventlet import monkey_patch
monkey_patch(socket=True)
........

@socketio.on('disconnect')
def test_disconnect():
    print('ws disConnected')
    print(request.sid)
    print("disConnected pid and ppid", os.getpid(), os.getppid())
    if request.sid in client_g:
        # since a message queue is introduced, the global here becomes doubtful
        client_g[request.sid]['connected'] = False
```
(GitKKg changed the title on Jul 1, 2018.)
That's great! Is that because `socketio = SocketIO(app, async_mode='eventlet', message_queue='amqp://')` makes the server run in only one process with all the handlers? (eventlet's coroutines confuse me)
Or are there actually multiple servers responding to clients, with session identity kept consistent by Celery, i.e. so-called sticky sessions?
While the client remains connected, it will always be handled by the same server process. If it disconnects, when it reconnects it may end up on another server.
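One way to see this point concretely: concurrent handlers inside a single server process all report the same `os.getpid()`, whether the concurrency comes from eventlet greenlets or threads. A minimal stdlib sketch (using threads in place of eventlet greenlets; `fake_handler` is a hypothetical stand-in for the Socket.IO handlers):

```python
import os
import threading

pids = set()

def fake_handler(event_id):
    # every concurrently-running handler reports the same process id,
    # because threads (like eventlet greenlets) share one process
    pids.add(os.getpid())

threads = [threading.Thread(target=fake_handler, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(pids))  # 1: all handlers ran in the same process
```

Only when multiple server processes are run (e.g. several workers coordinated through the message queue) can different connections land on different pids, which is why a per-sid global dict is only reliable while the client stays on the same process.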
Hi Miguel,
I'm developing a stock web app with Flask-SocketIO, and I'm uncertain whether the Celery worker binds the async process to the sid.
The relevant part of the code is shown below; the complete version is in websocket.py at https://github.com/GitKKg/stock_server
Specifically, if `request.sid` in `test_connect()` and `test_disconnect()` is the same, is the pid the same too? Or is it undefined because the Celery message queue is running?

My aim is, inside some emit event handler, to create a child process to do CPU-bound work, and then emit events through the message queue (carrying progress and data) from that child process to the specific client with the same sid as the handler mentioned above. So I need to record this child process's info, so that it can be terminated when the client disconnects.

Thank you!
```python
........
from eventlet import monkey_patch
monkey_patch(socket=True)
........

static_folder = r"C:\WebProgramming\quasar_init1\dist\spa-mat"
app = Flask(__name__, static_folder=static_folder, static_url_path='')
app.config['SECRET_KEY'] = 'secret!'
app.config.update(
    CELERY_BROKER_URL='amqp://localhost//',
    CELERY_RESULT_BACKEND='amqp://localhost//'
)
socketio = SocketIO(app, async_mode='eventlet', message_queue='amqp://')

# Note: how to run
# 1. celery worker -A websocket.celery --loglevel=info
# 2. python websocket.py
celery = Celery('my_task', broker=app.config['CELERY_BROKER_URL'])

client_g = {}

@socketio.on('connect')
def test_connect():
    print('ws Connected sid is 0x', request.sid)
    print(request)
    print("test_connect pid and ppid", os.getpid(), os.getppid())
    session = {'connected': True}
    # session.connected = True
    client_g[request.sid] = session
    return {'sid': request.sid}

@socketio.on('disconnect')
def test_disconnect():
    print('ws disConnected')
    print(request.sid)
    print("disConnected pid and ppid", os.getpid(), os.getppid())
    if request.sid in client_g:
        # since a message queue is introduced, the global here becomes doubtful
        client_g[request.sid]['connected'] = False

if __name__ == '__main__':
    ........
```
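The bookkeeping the question asks for, recording a child process per sid so it can be terminated on disconnect, can be sketched with stdlib `multiprocessing` alone. This is a hypothetical illustration: `cpu_bound_work`, `on_emit_event`, and `on_disconnect` are stand-ins for the real Socket.IO handlers, and a `multiprocessing.Queue` stands in for emitting progress over the message queue:

```python
import multiprocessing as mp
import time

# sid -> child process, recorded so it can be terminated on disconnect
children = {}

def cpu_bound_work(sid, progress_q):
    # stand-in for the real CPU-bound work; reports progress for this sid
    for pct in (25, 50, 75, 100):
        time.sleep(0.01)
        progress_q.put((sid, pct))

def on_emit_event(sid, progress_q):
    # the emit handler spawns a child process and records it under the sid
    p = mp.Process(target=cpu_bound_work, args=(sid, progress_q))
    p.start()
    children[sid] = p

def on_disconnect(sid):
    # terminate the recorded child when that client disconnects
    p = children.pop(sid, None)
    if p is not None:
        if p.is_alive():
            p.terminate()
        p.join()

def demo(sid='abc123'):
    q = mp.Queue()
    on_emit_event(sid, q)
    updates = [q.get() for _ in range(4)]  # drain progress before joining
    on_disconnect(sid)                     # terminates/joins the recorded child
    return updates, sid in children

if __name__ == '__main__':
    updates, still_tracked = demo()
    print(updates[-1])      # ('abc123', 100)
    print(still_tracked)    # False
```

In the real app the child would, per the Flask-SocketIO docs' "emit from an external process" pattern, create its own `SocketIO(message_queue='amqp://')` instance and emit to `room=sid` instead of using the local queue; that part is omitted here because it needs a running broker.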