
Most of the popular background workers in Ruby run as a separate process (Sidekiq, Resque, GoodJob). The same goes for using Celery with Python. I'm not sure about PHP, but Laravel's docs mention running a separate command for the worker, so I'm guessing that's also a second process.
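
For anyone who hasn't used one of these, the split is baked into the setup. Here's roughly what it looks like with Sidekiq (just a sketch; the job class and its arguments are made up):

    # app/jobs/resize_image_job.rb
    class ResizeImageJob
      include Sidekiq::Job   # Sidekiq::Worker on older versions

      def perform(image_id)
        # the heavy lifting happens here, outside the web request
      end
    end

    # the web process only enqueues:
    #   ResizeImageJob.perform_async(image.id)
    #
    # a separate process actually runs the jobs:
    #   bundle exec sidekiq

The web process just pushes a small payload to Redis; the sidekiq process is what pulls jobs off the queue and executes them.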

It's common to separate them due to either language limitations or to let you individually scale your workers vs. your web apps, since in a lot of cases you might be doing a lot of computationally intensive work in the workers and need more of them than web processes. Not just more in number of replicas, but potentially a different class of compute resources too: your web apps might be humming along with consistent memory / CPU usage while your workers need double or triple the memory and better CPUs.
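
In Heroku-style terms (a sketch, with arbitrary dyno counts and sizes) that usually ends up as two Procfile entries you scale independently:

    web: bundle exec puma -C config/puma.rb
    worker: bundle exec sidekiq -c 10

and then you set the count and instance class separately per process type, e.g.:

    heroku ps:scale web=2:standard-1x worker=4:performance-m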



Yeah, it definitely makes sense to be able to scale workers and web processes separately. It just so happens that the app I work on for my day job is:

1. Fairly low traffic (requests per minute not requests per second except very occasional bursts)

2. Has somewhat prematurely been split into 6 microservices (used to be 10, but I've managed to rein that back a bit!), which means that despite running on the smallest instances available we are rather over-provisioned. We could likely move up one instance size and run absolutely everything on one machine rather than having 12 separate instances!

3. Is for the most part only really using queue-tasks to keep request latency low.

Probably what would make the most sense for us is to merge back into a monolith, but continue to run web and worker processes separately, I guess. But in general, I think there is maybe a niche for running both together for apps with very small resource requirements.
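
If you end up on Rails with GoodJob, its async execution mode is aimed at exactly that niche, if I'm remembering the config right: the job executor runs on a thread pool inside the web process, so a tiny app can skip the separate worker entirely (sketch; the thread count is arbitrary):

    # config/environments/production.rb
    config.active_job.queue_adapter = :good_job
    config.good_job.execution_mode = :async  # run jobs in-process, alongside the web server
    config.good_job.max_threads = 2          # small pool just for job execution

And if the app outgrows that, you flip execution_mode to :external and run "bundle exec good_job start" as its own process without touching the job code.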



