(Migrated) quick question, where to store sqlalchemy models

(This message has been automatically imported from the retired mailing list)

Hi,
I am toying with Zato 2.0 and SQL connections using the SIO approach.
At the bottom of this page
https://zato.io/docs/2.0/progguide/examples/sio.html is a bit of code
demonstrating an SIO service that uses SQLAlchemy.
I fail to grasp where the model.py file goes. Do I drop that into a server
"pickup-dir" together with a service .py file, or is there some other place
within the cluster folder hierarchy to store SQLAlchemy models?

Thanks

-Bad

On 30/01/15 23:24, Baad Sequel wrote:

Do I drop that into a server “pickup-dir” together with a service .py
file, or is there some other place within the cluster folder hierarchy
to store SQLAlchemy models?

Hi,

I’d rather place it in zato_extra_paths so the models land on PYTHONPATH,
from which they can be imported by multiple services.

https://zato.io/docs/admin/guide/installing-services.html#zato-extra-paths
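
For instance, a model module dropped into a directory listed in
zato_extra_paths is just a plain Python file - the module and class names
below are only an example, nothing that Zato itself requires:

# model.py - lives in a directory added to zato_extra_paths,
# hence it is on PYTHONPATH and importable by any service
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'

    id = Column(Integer, primary_key=True)
    name = Column(String(200), nullable=False)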

Being able to update SQLAlchemy models at run-time through a pickup-dir
would be nice, but the whole thing revolves around an SQLAlchemy model
being a regular Python module, i.e. you import it using normal Python imports.
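
A service then imports it like any other module - here is a sketch,
assuming the model module above and an outgoing SQL connection named
'crm' defined in web-admin (both names are examples only):

# get_user.py - a hot-deployed service using the shared model
from contextlib import closing

from zato.server.service import Service

# Normal Python import - works because zato_extra_paths is on PYTHONPATH
from model import User

class GetUser(Service):
    class SimpleIO:
        input_required = ('user_id',)
        output_required = ('name',)

    def handle(self):
        # 'crm' is an example name of an outgoing SQL connection
        with closing(self.outgoing.sql.get('crm').session()) as session:
            # SIO input arrives as a string here, hence the int() cast
            user = session.query(User).\
                filter(User.id == int(self.request.input.user_id)).\
                one()
            self.response.payload.name = user.name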

We’d need a wrapper to mediate access, one that would know when and how
to refresh the model after a new version arrives. That would be a cool
feature.

This is the same situation as with services - you cannot simply import
the Python classes they are implemented as, because refreshing them after
a hot-deploy would be a major task amounting, essentially, to hot-reloading
arbitrary code, seeing as services can contain and use just about anything.

Thank you for the clarification. Most appreciated.

-Bad


Reviving this thread to ask:

Is there a recommended way to use Alembic to track changes to these SQLAlchemy models that are deployed to zato_extra_paths? Thanks!

After a little exploration, I see that I can just use Alembic as it is already installed under the zato user in the machine environment.

I will try that and report back here

Some follow up:

The path of least resistance for me so far has been to create a second database for my custom app models, along with a new, separate set of Alembic migrations. I am still placing my models in zato_extra_paths.

A second DB for my models seemed to cause the least intrusion into the core Zato code, which also relies on Alembic and its own migrations; keeping my migrations in a second, separate set avoids mixing the two.
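
For anyone else going down this road, the separate environment boils down to something like the env.py below. This is a sketch only: the app_migrations directory, the model module and its Base object are placeholder names of mine, and sqlalchemy.url in the matching alembic.ini is assumed to point at the second database rather than Zato's ODB.

# app_migrations/env.py - created with `alembic init app_migrations`,
# then trimmed to the online case; this migration history is entirely
# separate from the one bundled with Zato itself.
from logging.config import fileConfig

from alembic import context
from sqlalchemy import engine_from_config, pool

# The models live under zato_extra_paths; when running alembic from the
# shell, put that directory on PYTHONPATH so this import works the same
# way it does inside the servers.
from model import Base

config = context.config
fileConfig(config.config_file_name)

# Autogenerate compares this metadata against the second database only
target_metadata = Base.metadata

def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix='sqlalchemy.',
        poolclass=pool.NullPool)

    with connectable.connect() as connection:
        context.configure(connection=connection,
                          target_metadata=target_metadata)

        with context.begin_transaction():
            context.run_migrations()

run_migrations_online()

From there it is the usual alembic revision --autogenerate and alembic upgrade head workflow, run as the zato user since that is where Alembic is already installed.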