Staged roll-outs


Hello list,

We’re designing an internal SaaS platform on top of Zato, and I was
wondering if any of you had insights about staged migrations.

Here’s the situation. The SaaS platform would be used by corporate
branches worldwide, and roll-outs would be incremental, one branch at a
time. Most database migrations should be backward compatible, but some
will undoubtedly not be, and so it looks like multiple versions (2 or 3)
of both the Zato services and the DB schema should be able to coexist,
at least until the roll-out is complete. That period might last days, and
rolling out one branch can’t stop the other branches from functioning.

So my question is: does any of you have pointers to some useful info on
the subject?

Nicolas Cadou

http://ajah.ca

http://ca.linkedin.com/in/nicolascadou

On 18/02/15 00:42, Nicolas Cadou wrote:

Hi Nicolas,

So my question is: does any of you have pointers to some useful info on
the subject?

First thing is - try to get around it by not sharing environments. This
is the easiest thing and will save the most time. Just have multiple
environments. If needed, each can be sharing information using
publish/subscribe:

https://zato.io/docs/pubsub/index.html
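
As a quick sketch, a service in one environment could publish a message for
the others to pick up - the topic name below is just an example and the
exact publish call may look different depending on the Zato version in use,
so treat it as an illustration rather than the literal 2.0 API:

from zato.server.service import Service

class NotifyBranches(Service):
    def handle(self):
        # '/branch/updates' is an illustrative topic name, not a built-in one;
        # subscribers in the other environments would consume from it.
        self.pubsub.publish('/branch/updates', data='Branch EMEA-01 migrated')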

Next, if you are sure you want to keep it all shared, you need to have
versioning on both major layers:

  • Services need to be versioned
  • Same thing with the data model
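
For the service layer, one straightforward approach is to put the version
into the service name itself, for instance (a minimal sketch - class and
payload names are made up):

from zato.server.service import Service

# Both classes can be deployed to the same cluster - Zato derives service
# names from the module and class names, so the version suffix keeps the
# two versions distinct and invocable side by side.
class GetCustomerV1(Service):
    def handle(self):
        self.response.payload = '{"schema": "v1"}'

class GetCustomerV2(Service):
    def handle(self):
        self.response.payload = '{"schema": "v2"}'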

On top of it, you can consider making use of two new features in 2.0:

  • Invocation targets
  • Request filtering

https://zato.io/docs/admin/guide/install-config/config-server.html#invoke-target-patterns-allowed

https://zato.io/docs/progguide/service-dev.html#progguide-write-service-accept

The idea with invocation targets is that you can use self.invoke and
self.invoke_async as usual and, along with the basic parameters, give them
a list of targets (servers) to invoke the service on.

Let’s say you have three servers:

  • Moon1
  • Moon2
  • Jupiter1

Moon1 and Moon2 have the ‘moon’ target assigned while Jupiter1 does not.

Now doing something like:

self.invoke_async('myservice@moon', 'My Request')

will make the request go to either Moon1 or Moon2. It’s guaranteed that
myservice on Jupiter1 won’t see it.
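
Put differently, inside another service both forms could look like the
sketch below (the class name and payload are just examples):

from zato.server.service import Service

class Router(Service):
    def handle(self):

        # Synchronous - only servers assigned the 'moon' target are candidates
        response = self.invoke('myservice@moon', 'My Request')

        # Asynchronous - fire-and-forget, returns a correlation ID (CID) right away
        cid = self.invoke_async('myservice@moon', 'My Request')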

Request filtering lets you discard a message before ‘def handle’ is called.

For instance:

from zato.server.service import Service

class MyService(Service):

    def accept(self):
        # some_condition stands for whatever check is relevant here
        if some_condition:
            return False
        return True

    def handle(self):
        # We reach here only if self.accept didn't return False
        pass
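
As a concrete illustration, some_condition could look at the incoming
request itself - the sketch below assumes an HTTP channel and a custom
X-API-Version header, both of which are my own examples rather than
anything Zato mandates:

from zato.server.service import Service

class VersionedService(Service):
    def accept(self):
        # For HTTP channels self.wsgi_environ holds the WSGI environment,
        # so the custom header arrives as an HTTP_* key.
        version = self.wsgi_environ.get('HTTP_X_API_VERSION', '1')
        return version in ('2', '3')

    def handle(self):
        self.response.payload = 'Accepted an API v2/v3 request'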

That all said, why is there a requirement to share the environment(s)?

If I picture it correctly, each actual client, each branch, will connect
to servers in their own branch, so why can’t they simply have their own
environments?

The environments wouldn’t have to be completely separated, various
business data would still be shared from a central DB, if applicable.

It would be good if we could go through a couple of use-cases - I’m
still not 100% sure I understand why it needs to be shared if there is
already a question of how to make them not share things.

thanks,

Hi Dariusz,

On 15-02-20 01:34 AM, Dariusz Suchojad wrote:

First thing is - try to get around it by not sharing environments. This
is the easiest thing and will save the most time. Just have multiple
environments.

This is indeed the route we’ve found makes the most sense currently.

Next, if you are sure you want to keep it all shared, you need to have
versioning on both major layers:

  • Services need to be versioned

That’s the first idea I looked at. Hauling an API version number around
across all service invocations is okay, but keeping parallel source-tree
versions inside the same server process, although not impossible, ranks
a bit high on my craziness scale.

  • Same thing with the data model

For this we’ll go with separate PostgreSQL schemas for all branches, an
additional one for the few shared tables that we have, and a cube of
some sort for analytics.
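
Roughly along these lines, using SQLAlchemy directly (all schema, table
and connection names below are made up):

from sqlalchemy import create_engine, text

engine = create_engine('postgresql://zato:password@localhost/saas')

with engine.begin() as conn:
    # One schema per branch...
    conn.execute(text('CREATE SCHEMA IF NOT EXISTS branch_emea_01'))
    # ...plus a single schema for the few shared tables.
    conn.execute(text('CREATE SCHEMA IF NOT EXISTS shared'))
    conn.execute(text(
        'CREATE TABLE IF NOT EXISTS shared.currencies (code char(3) PRIMARY KEY)'))
    conn.execute(text(
        'CREATE TABLE IF NOT EXISTS branch_emea_01.orders ('
        'id serial PRIMARY KEY, '
        'currency char(3) REFERENCES shared.currencies)'))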

On top of it, you can consider making use of two new features in 2.0:

  • Invocation targets

Very interesting. That would allow a sane middle ground between
shared-nothing and ungodly multi-versioned shared-everything. I guess
I’ll revisit this when we switch to a one-Docker-container-per-server-process
model at some point. We’re using the quickstart cluster for the time being.

Request filtering lets you discard a message before ‘def handle’ is called.

I’m curious, though: what happens to a request that isn’t accepted? I guess
it is completely dropped, and not retried on another server somehow, right?

That all said, why is there a requirement to share the environment(s)?

There appears to be only a minimal amount of sharing needed at this
point. The one-environment-per-release model looks like the
lowest-complexity path right now.

Thanks for your response!