Zato Hardware High Availability Question


#1

Hello,

So we’re trying to understand how Zato works under High Availability (specifically, hardware-level High Availability).

A basic proposed architecture would be:

  • 2 Hardware Servers
  • 1 HW Load Balancer balancing traffic
  • 1 Zato Load Balancer on each HW Server
  • 2 Zato Servers on each HW Server
  • 1 Zato Web-Admin on each HW Server

How can we create a proper High Availability solution? We haven’t come across any documents that explain this architecture.

Thank You.


#2

Not a direct answer, but maybe my old topic can help:


#3

@rtrind thank you for your reply.
I’ve read through that and it’s somewhat on the same track as us but not exactly.

What we’re trying to implement is something along these lines: https://zato.io/docs/_images/ha-no-connectors-2-clusters.png

Would you happen to know the best possible way to implement that?

So Cluster1 would be on PhysicalServer1 and Cluster2 on PhysicalServer2, and if either physical server goes down, everything would still work as expected since there is a Hardware Load Balancer in place.


#4

Hello @nikhil,

What you have described sounds feasible - basically, you are installing two clusters with an external load-balancer in front of them, is that correct?

If it is, then you just need to take into account the fact that the clusters will be, from Zato’s perspective, completely independent, and it will be the external load-balancer’s job to make it all work in an HA fashion.
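
To illustrate, the external load-balancer’s job can be as simple as round-robin with health checks across the two clusters’ own Zato load-balancers. A minimal HAProxy-style sketch, purely as an example of what the hardware device (or a software equivalent) would do - host names and ports below are assumptions, not anything your environment necessarily uses:

    # haproxy.cfg - illustrative only; host names and ports are placeholders
    defaults
        mode http
        timeout connect 5s
        timeout client  30s
        timeout server  30s

    frontend zato_ha
        bind *:80
        default_backend zato_clusters

    backend zato_clusters
        balance roundrobin
        # One entry per cluster, each pointing at that cluster's Zato load-balancer
        server cluster1 physicalserver1:11223 check
        server cluster2 physicalserver2:11223 check

If PhysicalServer1 goes down, the health checks take cluster1 out of rotation and all traffic flows to cluster2, which is exactly the HA behaviour you are after.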

Also, if each system (physical server or virtual) contains one cluster, that is, if one cluster is wholly installed on one system only, then there is no difference in performance between having one or several Zato servers on that system, as long as their total number of gunicorn workers matches the number of CPUs available.

For instance, let’s say that each system has four CPUs. Then, as long as we are discussing HTTP-based transactions, you will get the same req/s regardless of whether it’s one server with four workers, two servers with two workers each, or four servers with one worker each.
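
The knob behind this is the gunicorn_workers setting in each server’s server.conf. As an example only - paths, ports and values below are illustrative, check your own server.conf - two servers on a four-CPU system could each carry two workers:

    # server.conf of each of the two Zato servers on a 4-CPU system (example values)
    [main]
    gunicorn_bind=0.0.0.0:17010
    gunicorn_workers=2  # 2 servers x 2 workers = 4 workers in total, one per CPU

What matters for throughput is only the total of four workers, not how they are split across servers.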

Things will be different in 3.0 if publish/subscribe is used, because there it will be possible to guarantee better performance depending on what kind of pub/sub clients are connected to Zato (e.g. WebSockets/REST/AMQP/flat files) and how many of them there are (e.g. dozens vs. thousands).

But this will apply from 3.0 onwards, and only with pub/sub, which will effectively turn Zato into a possibly stateful platform that keeps track of messages in transit. 2.0, on the other hand, is fairly stateless - as far as performance is concerned, what really matters is simply that the total number of gunicorn workers matches the number of CPUs.

Regards.


#5

Hi @dsuch,

That is exactly what we are aiming for. Thank you.
So we’ve tried Zato 2.8 and Zato 3.0 and here are our findings.

Using Zato 2.8
When we try to create Cluster2 pointing at the same database, the create cluster command fails with:
Cluster name [Cluster2] already exists
whereas when we then run the create server command, it throws:
Cluster [Cluster2] doesn’t exist in the ODB
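
Roughly, this is the sequence we ran on the second machine - paths and arguments are trimmed here, the full list is in zato create cluster --help and zato create server --help:

    # On PhysicalServer2, pointing at the same ODB that Cluster1 already uses
    zato create cluster ... Cluster2
    # -> Cluster name [Cluster2] already exists

    zato create server /opt/zato/env/server1 ... Cluster2
    # -> Cluster [Cluster2] doesn't exist in the ODB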

And we haven’t been able to move forward with this scenario.
Is there something we’re missing here? Can both clusters connect to the same database, or is that not possible? In 3.0 it behaves completely differently.

Using Zato 3.0
So we’ve managed to create 2 different Zato clusters on 2 different servers, and each shows up in its own web-admin. We can see Cluster2 in Web-Admin1 but can’t manage it from there, which is fine.

But we’re faced with the issue of services and outgoing SQL connections.

Is there a way to sync all the services between the 2 clusters? And the SQL connections?
Does that happen automatically, are manual steps needed, or is it not possible at all, meaning all services and connections need to be deployed on each cluster separately to keep them aligned for HA?

Cheers