You can run the worker in the background (&) and send the output to a text file (nohup): nohup rqworker & By default this will write the output to a file nohup.out in the current directory (or $HOME/nohup.out if that isn't permitted). You can now close the SSH connection. With default settings, rq...
python,django,python-rq,django-rq
The problem is that RQ's default pickler is cPickle, which does not know how to serialize Django model instances. A simpler approach would be to use model_to_dict and pass a picklable object to your queue: from django.forms.models import model_to_dict my_dict = model_to_dict(my_instance) (optionally passing fields=... or exclude=... to limit which fields are included). If you are intent on using django model...
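A minimal sketch of that approach, assuming a hypothetical myapp.models.MyModel and a task function myapp.tasks.process_instance that accepts a plain dict (both names are placeholders, not from the original answer):

    from django.forms.models import model_to_dict
    from redis import Redis
    from rq import Queue

    from myapp.models import MyModel            # hypothetical model
    from myapp.tasks import process_instance    # hypothetical task function

    my_instance = MyModel.objects.first()
    my_dict = model_to_dict(my_instance)         # plain dict, pickles cleanly

    queue = Queue(connection=Redis())
    queue.enqueue(process_instance, my_dict)     # enqueue the dict, not the model instance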
Well cluck a duck; I knew it would be simple, as this is just a refactoring of something that worked, but what was the holdup?! The job doesn't automatically update itself after a save at the other end; one must refresh it locally to pick up the change (previously I was...
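For context, a minimal sketch of that refresh step, assuming a hypothetical tasks.long_running_task that saves new state on the worker side:

    from redis import Redis
    from rq import Queue

    queue = Queue(connection=Redis())
    job = queue.enqueue("tasks.long_running_task")   # hypothetical task path

    # ...later, after the worker has saved new state at its end...
    job.refresh()                  # re-read status, result and meta from Redis
    print(job.get_status(), job.meta)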
Silly error - had to supply the Redis connection URL to rqworker: rqworker --url redis://localhost:5001 medium
Duh. I missed it in the documentation: http://python-rq.org/docs/workers/ Under "Performance notes", it even gives sample code to help with this problem...
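The gist of those performance notes is to start the worker from your own script and import the expensive libraries once, before any job runs, so forked job processes inherit them; a rough sketch (the numpy import is just a stand-in for whatever heavy modules your jobs actually need):

    from redis import Redis
    from rq import Worker

    # Preload heavy libraries once at startup so each forked job
    # doesn't pay the import cost again
    import numpy  # noqa: F401  (stand-in for your own heavy imports)

    worker = Worker(["default"], connection=Redis())
    worker.work()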
mongodb,pymongo,mongoengine,python-rq
Got it! The default Python-RQ worker uses the fork model, and the forking blocked PyMongo from sharing connection sockets. I switched to the GeventWorker and now the sockets are shared by default...
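A hedged sketch of that switch, assuming a gevent-based worker class from a third-party package (the rq_gevent_worker import is an assumption, not part of rq itself):

    from redis import Redis
    from rq import Queue
    from rq_gevent_worker import GeventWorker   # assumed third-party package

    queue = Queue("default", connection=Redis())
    worker = GeventWorker([queue], connection=Redis())
    worker.work()

Depending on the rq version, the same effect can usually be had from the command line by pointing the worker at the custom class, e.g. rqworker --worker-class path.to.GeventWorker.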
As long as each job you schedule gives some clear, observable indication that it's done, you can definitely use RQ and wait for such "indications"; you then rely on them to tell when it's safe to access each job's result. In the example you quote, it's apparently...
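A minimal polling sketch along those lines, assuming the task lives in a hypothetical tasks.py module that the worker can import:

    import time
    from redis import Redis
    from rq import Queue

    from tasks import count_words   # hypothetical: def count_words(text): return len(text.split())

    queue = Queue(connection=Redis())
    job = queue.enqueue(count_words, "hello rq world")

    # Wait for the job's own "done" indication, then read its return value
    while not (job.is_finished or job.is_failed):
        time.sleep(0.5)

    print(job.result)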