(Migrated) Scheduled jobs with multiple servers

(This message has been automatically imported from the retired mailing list)

I have a service that reads files and pushes the data to a Redis queue.
The service runs as a scheduled job. The cluster has 2 servers with
2 workers each. It looks like the job is started 2 times at each
interval, once for every server. Is that how the scheduler is supposed
to work?

I had expected that the scheduler would just pick a server and run it
once for the whole cluster. If that’s not the case, is there a way to
make the scheduler behave like that? Or should I use a distributed lock?
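For reference, one common distributed-lock approach is Redis's own SET NX/EX pattern: each server tries to set a per-interval lock key, and only the one that wins runs the job body. A minimal sketch follows; the names (`run_job_once`, `acquire` via `set`) are illustrative, not Zato API, and a tiny in-memory stand-in replaces a real Redis client so the example is self-contained:

```python
import uuid

class FakeRedis:
    """Tiny in-memory stand-in mimicking redis-py's set(nx=..., ex=...).

    With a real Redis client, ex= would make the lock expire automatically,
    so a crashed server cannot hold it forever.
    """
    def __init__(self):
        self.store = {}

    def set(self, key, value, nx=False, ex=None):
        # nx=True means "only set if the key does not exist yet"
        if nx and key in self.store:
            return None
        self.store[key] = value
        return True

def run_job_once(redis_client, lock_key, job):
    """Run job only if this server wins the lock; others skip this interval."""
    token = str(uuid.uuid4())  # unique owner token for this attempt
    if redis_client.set(lock_key, token, nx=True, ex=60):
        job()
        return True
    return False

# Two "servers" racing for the same interval's lock:
r = FakeRedis()
results = [run_job_once(r, 'job:my-job:interval-42', lambda: None)
           for _ in range(2)]
```

Only the first caller acquires the lock, so the job body executes once per interval even though both servers fire the schedule.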

On 16/11/2015 21:29, Sam Geeraerts wrote:

It looks like the job is started 2 times at each
interval, once for every server. Is that how the scheduler is supposed
to work?
Which version of Zato are you running?

There was an old bug like that, but it was fixed a long time ago (in 2.0.4).

On 16/11/15 22:29, Sam Geeraerts wrote:

I have a service that reads files and pushes the data to a Redis queue.
The service runs as a scheduled job. The cluster has 2 servers with
2 workers each. It looks like the job is started 2 times at each
interval, once for every server. Is that how the scheduler is supposed
to work?

Hi Sam,

It’s not expected. Please send in:

  • Definition of the job
  • scheduler.log from both servers
  • server.log from both servers

thanks,