Issues using the same ODB for different clusters/deployments via Quickstart Image

Hello.

I would like to know if there are any caveats we should be aware of when using the same ODB for different deployments (CD). Our setup goes like this:

  • 3 environments: dev, staging, prod.
  • 3 DBs to be used as ODB (one per environment)
  • Changes are deployed by creating a new container from the quickstart image (the previous one, which doesn’t use the Ansible provisioner) and copying the service files to the container’s hot-deploy folder.

Right now, the issue we are hitting is related to importing scheduler Job configuration via enmasse.

If I have Job-A, Job-B and Job-C imported via enmasse on each deployment, they only get imported on the first deployment; subsequent deployments cannot load those Jobs into the new Scheduler.

    File "/opt/zato/3.2.0/code/zato-server/src/zato/server/service/internal/scheduler.py", line 197, in handle
        _create_edit(self.__class__.__name__.lower(), self.cid, self.request.input,
        self.request.payload,
      File "/opt/zato/3.2.0/code/zato-server/src/zato/server/service/internal/scheduler.py", line 70, in _create_edit
        raise ZatoException(cid, 'Job `{}` already exists on this cluster'.format(name))
    zato.common.exception.ZatoException: <ZatoException at 0x7f7957e20dc0 cid:`bf176753bfd831ba702bba7`, msg:`Job `Job A` already exists on this cluster`>

    ··· Context ···

    bf176753bfd831ba702bba7
    2022-04-27T14:40:58.287171 (UTC)
    Zato 3.2+rev.a29e0d7a-py3.8.10-ubuntu.20.04-focal
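The failure above can be reduced to a minimal sketch, assuming a shared ODB that still holds the jobs from the first deployment. This is plain Python, not the Zato API; the function and job names are hypothetical, illustrating only why a plain "create" fails on re-deployment while an idempotent "create or update" would not:

```python
# Hypothetical model of the ODB job table after deployment #1.
jobs_in_odb = {"Job A", "Job B", "Job C"}

def create_job(name):
    # Mirrors the behaviour seen in the traceback: a plain create
    # refuses duplicates instead of updating them.
    if name in jobs_in_odb:
        raise RuntimeError(f"Job `{name}` already exists on this cluster")
    jobs_in_odb.add(name)

def create_or_update_job(name):
    # An idempotent alternative: re-running the import is a no-op.
    jobs_in_odb.add(name)
```

Because each deployment reuses the same ODB, the second container's enmasse import behaves like `create_job` against an already-populated table.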

Besides getting the input of the community on this, I have some questions:

  • Why does it say the Job already exists on this cluster if the quick-start image creates a new cluster each time?
  • Is it expected that there will be one ODB per cluster?

Hello @felixsuarez,

  • The version of Docker Quickstart that you are using is not supported anymore.
  • The Zato version inside your container (a29e0d7a) is also from a couple of months ago, not the latest one.

Here is the link to the latest versions - Zato | Docker quickstart installation - if you still have the same questions with that, please let me know.

As to the question about ODB per cluster, yes, exactly, there should be one ODB per cluster. Historically, multiple clusters were able to share the database, but this is no longer the case.
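To make the one-ODB-per-cluster rule concrete for a multi-environment setup like the one described above, here is a minimal sketch in plain Python; the host and database names are hypothetical placeholders, not Zato configuration:

```python
# One cluster per environment, therefore one ODB per environment.
# The DSNs below are illustrative placeholders only.
ENVIRONMENTS = ("dev", "staging", "prod")

ODB_DSN = {
    env: f"postgresql://zato@db-{env}:5432/zato_odb_{env}"
    for env in ENVIRONMENTS
}
```

The point is simply that no two clusters share a DSN: each environment's cluster reads and writes its own database.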

Regards.

Thanks for the prompt response.

It seems that our approach was not the correct one.

We like the ease of use of the quickstart images, but it seems it could be tricky to accommodate our needs. I would rather not make assumptions, so just to be clear:

  1. It seems that we can specify the “cluster-name” when using the quick-start command. Does that mean it will use the same previously created cluster (if it has the same name and the ODB is already there), or will it create a new cluster (read: a new cluster record) with the same name?
  2. I didn’t see this section of the docs before. For truly HA deployments, can a quickstart cluster be used, or only distributed clusters?
  3. What are your thoughts on the best deployment strategy given the following requirements:
    3.1. Use of a single ODB instance per environment. If possible, this could be automated with CD.
    3.2. Truly HA deployment.
    3.3. Between 2 and 3 servers.

Hello @felixsuarez,

I would need to understand what “truly HA” means in your case. Please post a detailed design, with diagrams, descriptions, use cases as well as technical and business expectations.

Regards.

I wrote that in the context of the quick-start image, I saw it mentioned on the docs:

The fact that there is one host server for the whole cluster has its implications regarding high availability (HA) and this is what most of the sections below are about.

So, what I meant was: “One cluster with servers that span different hosts.”

We were originally working on a K8s deployment (based on the Helm chart provided in the repo), but we are considering using VM instances. So we would like to clarify whether our starting point should be the quickstart image, or whether it would be better to build images for each Zato component (server, web admin, scheduler) for our needs:

  • 3 Environments
  • One ODB per environment (you clarified that this implies one Cluster per environment)
  • Servers that could live on different hosts (in the future K8s pods)

I understand the background but I still do not know any details.

At the end of this chapter there is a section about stateless vs. stateful connection types - does any of it apply to your situation?

Please realize that you are requiring me to ask for additional details, whereas it is me who is expecting you to provide full context as well as concrete technical and business requirements.

Thanks, the section about “stateless vs. stateful” clarified which approach we should take. We will go for a distributed cluster (server, scheduler, web admin), each component on its own pod.
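As a sanity check of that layout, here is a minimal sketch in plain Python; the pod names are hypothetical and this is not a real manifest, just the shape of one distributed cluster per environment:

```python
# Each Zato component runs in its own pod; one such set per
# environment, each set backed by that environment's own ODB.
COMPONENTS = ("server", "scheduler", "web-admin")

def pods_for(env):
    # Illustrative pod names only.
    return [f"zato-{component}-{env}" for component in COMPONENTS]
```

For example, `pods_for("dev")` yields one pod name per component, and repeating it for staging and prod gives three independent clusters.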

I have to apologize for not providing more thorough details before. If a similar question arises in the future, I will make sure to include all the needed details from the beginning.

Thanks for your help, it is much appreciated.