ODB HA reconnection issue (affecting scheduled services)


#1

Finally following up on the ODB High Availability topic from the Zato general architecture questions: I came across MySQL InnoDB Cluster, and since it is already GA and needs no custom external solution to provide HA, I tried to recreate an environment similar to my production one in my lab:

3 servers, each with a:

  • zato server (2.0.7, since that is the version I am using in production right now) with 2 workers;
  • zato load balancer;
  • redis with sentinel;
  • mysql using innodb clustering + mysql router;

InnoDB Cluster introduces a new component, MySQL Router, which abstracts the HA details away from Zato. In the server and web-admin configuration I simply use the port of the local MySQL Router, and the router knows how to detect which server is the primary and moves the connection from one machine to another in case of failure.

When everything is up and running, it all seems smooth. If a secondary machine goes down, no problem: everything continues to work as expected. But when the primary goes down, we have some issues…

Before the node goes down, 2 out of the 3 servers keep logging the ODB last_keep_alive message, like below:

2018-01-11 11:16:51,279 - INFO - 7336:Dummy-17395 - zato.server.odb:22 - last_keep_alive:[2018-01-11 13:16:33], grace_time:[90], max_allowed:[2018-01-11 13:18:03], now:[2018-01-11 13:16:51.279006]
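For context on those fields: max_allowed is simply last_keep_alive plus grace_time seconds, and the singleton is treated as gone once now passes max_allowed. A tiny illustration of that arithmetic (my own sketch of what the log line shows, not Zato's actual code):

```python
from datetime import datetime, timedelta

# Values taken from the log line above.
last_keep_alive = datetime(2018, 1, 11, 13, 16, 33)
grace_time = 90  # seconds

# max_allowed is the last keep-alive timestamp plus the grace period.
max_allowed = last_keep_alive + timedelta(seconds=grace_time)
print(max_allowed)  # 2018-01-11 13:18:03
```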

As soon as the primary goes down, we see some stack traces, generally with the message “Lost connection to MySQL server during query” (complete stack traces from one of the servers are pasted at the end) or Redis-unavailable messages (since it takes some time for Redis to redirect the master to another machine).

After a couple of minutes the traces stop; I imagine each connection to the ODB manages to reconnect to whichever server is now the new primary. But when this happens, the servers do not log the last_keep_alive message and the scheduled services do not run. The logs actually stop printing anything except messages logged by services called externally. Calling a service of mine works normally, accessing Redis data without any issue.

In my test scenario I created a new scheduled service and it never fired, even when I clicked the Execute button. When I brought the downed machine back up, it rejoined the cluster and the scheduled jobs started working again.

This translates to an HA scenario which still needs human intervention to restart the remaining servers when the primary ODB machine goes down, which is not ideal.

Looking at how MySQL Router works, the first or second query you perform after a node goes down is essentially lost, not being rerouted immediately to the new server, which may break something inside Zato when it happens. See below:

[root@zato03 ~]# mysql -u root -P 6446 -h 127.0.0.1 -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
...
mysql> select @@hostname;
+------------+
| @@hostname |
+------------+
| zato01     |
+------------+
1 row in set (0.00 sec)

mysql> select @@hostname;
ERROR 2013 (HY000): Lost connection to MySQL server during query
mysql> select @@hostname;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    40
Current database: *** NONE ***

+------------+
| @@hostname |
+------------+
| zato03     |
+------------+
1 row in set (0.05 sec)
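Since exactly one statement fails before the router reconnects, a client could in principle ride out the failover by retrying a failed statement once on a fresh connection. A minimal sketch of that retry idea in plain Python (the Pool class is a toy stand-in for a real connection pool; this is not Zato or SQLAlchemy code):

```python
class StaleConnectionError(Exception):
    """Stand-in for MySQL error 2013, 'Lost connection during query'."""

class Pool:
    """Toy pool that hands out one stale connection after a failover."""
    def __init__(self):
        self.failovers_pending = 1
    def query(self, sql):
        if self.failovers_pending:
            self.failovers_pending -= 1
            raise StaleConnectionError(sql)
        return 'ok'

def execute_with_retry(pool, sql, retries=1):
    # Retry once on a lost connection: the second attempt goes out on a
    # fresh connection, which the router points at the new primary.
    for attempt in range(retries + 1):
        try:
            return pool.query(sql)
        except StaleConnectionError:
            if attempt == retries:
                raise

pool = Pool()
print(execute_with_retry(pool, 'select @@hostname'))  # prints 'ok'
```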

So my question to you is: is there any configuration I could change that would keep the remaining servers running their scheduled jobs after a primary ODB failure?

Since I know Zato 3.0 handles the scheduler architecture somewhat differently, is it subject to the same issues reported here? If not, how hard would it be for it to reconnect to the database without losing functionality until the next restart?

This is the last major architecture point for me to solve to make Zato highly available, so any help is appreciated.

Logs from one of the nodes at failure time:

2018-01-11 12:31:27,588 - INFO - 2757:Dummy-15 - zato.helpers.input-logger:800 - {u'impl_name': u'zato.server.service.internal.helpers.InputLogger', u'name': u'zato.helpers.input-logger', u'cid': u'K05YNF9KG5TKFF9TF37Q0AWFJ9C7', u'invocation_time': datetime.datetime(2018, 1, 11, 14, 31, 27, 585848), u'job_type': None, u'data_format': None, u'slow_threshold': 99999, u'request.payload': u'Sample payload for a startup service (first worker)', u'wsgi_environ': {u'zato.request_ctx.async_msg': Bunch(action=u'101802', channel=u'startup-service', cid=u'K05YNF9KG5TKFF9TF37Q0AWFJ9C7', msg_type=u'0001', payload=u'Sample payload for a startup service (first worker)', service=u'zato.helpers.input-logger'), u'zato.request_ctx.fanout_cid': None, u'zato.request_ctx.in_reply_to': None, u'zato.request_ctx.parallel_exec_cid': None}, u'environ': {}, u'usage': 32, u'channel': u'startup-service'}
2018-01-11 12:36:48,148 - WARNING - 2757:Dummy-53 - zato.server.service:22 - Traceback (most recent call last):
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/service/__init__.py", line 401, in update_handle
    self._invoke(service, channel)
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/service/__init__.py", line 348, in _invoke
    service.handle()
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/service/internal/server.py", line 40, in handle
    filter(Cluster.id == cluster_id).\
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2398, in one
    ret = list(self)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2441, in __iter__
    return self._execute_and_instances(context)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2456, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 841, in execute
    return meth(self, multiparams, params)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 938, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1070, in _execute_context
    context)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1271, in _handle_dbapi_exception
    exc_info
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1063, in _execute_context
    context)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 442, in do_execute
    cursor.execute(statement, parameters)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py", line 132, in execute
    result = self._query(query)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py", line 271, in _query
    conn.query(q)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 726, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 861, in _read_query_result
    result.read()
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 1064, in read
    first_packet = self.connection._read_packet()
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 825, in _read_packet
    packet = packet_type(self)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 242, in __init__
    self._recv_packet(connection)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 248, in _recv_packet
    packet_header = connection._read_bytes(4)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 841, in _read_bytes
    "Lost connection to MySQL server during query")
OperationalError: (OperationalError) (2013, 'Lost connection to MySQL server during query') 'SELECT cluster.id AS cluster_id, cluster.name AS cluster_name, cluster.description AS cluster_description, cluster.odb_type AS cluster_odb_type, cluster.odb_host AS cluster_odb_host, cluster.odb_port AS cluster_odb_port, cluster.odb_user AS cluster_odb_user, cluster.odb_db_name AS cluster_odb_db_name, cluster.odb_schema AS cluster_odb_schema, cluster.broker_host AS cluster_broker_host, cluster.broker_port AS cluster_broker_port, cluster.lb_host AS cluster_lb_host, cluster.lb_port AS cluster_lb_port, cluster.lb_agent_port AS cluster_lb_agent_port, cluster.cw_srv_id AS cluster_cw_srv_id, cluster.cw_srv_keep_alive_dt AS cluster_cw_srv_keep_alive_dt \nFROM cluster \nWHERE cluster.id = %s FOR UPDATE' (1,)
2018-01-11 12:36:48,168 - ERROR - 2757:Dummy-53 - zato.server.base:22 - Could not handle broker msg:[Bunch(action=u'100004', cid=u'K06A1K0EBG5M29KBQXVDMDN2RMM9', job_type=u'interval_based', msg_type=u'0001', name=u'zato.server.cluster-wide-singleton-keep-alive', payload=u'server_id:3;cluster_id:1', service=u'zato.server.cluster-wide-singleton-keep-alive')], e:[Traceback (most recent call last):
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/base/__init__.py", line 47, in on_broker_msg
    getattr(self, handler)(msg)
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/base/worker.py", line 1028, in on_broker_msg_SCHEDULER_JOB_EXECUTED
    return self.on_message_invoke_service(msg, CHANNEL.SCHEDULER, 'SCHEDULER_JOB_EXECUTED', args)
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/base/worker.py", line 1007, in on_message_invoke_service
    environ=msg.get('environ'))
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/service/__init__.py", line 401, in update_handle
    self._invoke(service, channel)
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/service/__init__.py", line 348, in _invoke
    service.handle()
  File "/opt/zato/2.0.7/code/zato-server/src/zato/server/service/internal/server.py", line 40, in handle
    filter(Cluster.id == cluster_id).\
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2398, in one
    ret = list(self)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2441, in __iter__
    return self._execute_and_instances(context)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2456, in _execute_and_instances
    result = conn.execute(querycontext.statement, self._params)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 841, in execute
    return meth(self, multiparams, params)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
    return connection._execute_clauseelement(self, multiparams, params)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 938, in _execute_clauseelement
    compiled_sql, distilled_params
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1070, in _execute_context
    context)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1271, in _handle_dbapi_exception
    exc_info
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/util/compat.py", line 199, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1063, in _execute_context
    context)
  File "/opt/zato/2.0.7/code/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 442, in do_execute
    cursor.execute(statement, parameters)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py", line 132, in execute
    result = self._query(query)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py", line 271, in _query
    conn.query(q)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 726, in query
    self._affected_rows = self._read_query_result(unbuffered=unbuffered)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 861, in _read_query_result
    result.read()
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 1064, in read
    first_packet = self.connection._read_packet()
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 825, in _read_packet
    packet = packet_type(self)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 242, in __init__
    self._recv_packet(connection)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 248, in _recv_packet
    packet_header = connection._read_bytes(4)
  File "/opt/zato/2.0.7/code/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 841, in _read_bytes
    "Lost connection to MySQL server during query")
OperationalError: (OperationalError) (2013, 'Lost connection to MySQL server during query') 'SELECT cluster.id AS cluster_id, cluster.name AS cluster_name, cluster.description AS cluster_description, cluster.odb_type AS cluster_odb_type, cluster.odb_host AS cluster_odb_host, cluster.odb_port AS cluster_odb_port, cluster.odb_user AS cluster_odb_user, cluster.odb_db_name AS cluster_odb_db_name, cluster.odb_schema AS cluster_odb_schema, cluster.broker_host AS cluster_broker_host, cluster.broker_port AS cluster_broker_port, cluster.lb_host AS cluster_lb_host, cluster.lb_port AS cluster_lb_port, cluster.lb_agent_port AS cluster_lb_agent_port, cluster.cw_srv_id AS cluster_cw_srv_id, cluster.cw_srv_keep_alive_dt AS cluster_cw_srv_keep_alive_dt \nFROM cluster \nWHERE cluster.id = %s FOR UPDATE' (1,)
]
2018-01-11 12:39:44,370 - INFO - 2757:Dummy-61 - company.xxx.helper:22 - This is a custom message inside my service

#2

Hi @rtrind,

thanks for the questions - one thing that I’d like to ask you to do is to repeat everything with 2.0.8 - there were a couple dozen changes between 2.0.7 and 2.0.8, and 2.0.8 is the version that I will be checking your questions against as far as 2.0.x goes.

About InnoDB Cluster and Router - is it something that one needs to set up manually in one’s environment or is there a ready-to-use version somewhere on AWS, for instance?

Regards.


#3

@rtrind

Are there enough connections available in Zato’s pool of connections to the MySQL Router?


#4

Hello @dsuch,

I’m upgrading the test environment and will report back the results after the new tests are done.

From my quick research about AWS, I don’t believe there is a ready-made solution, but it should be easy to test in a quickstart environment. You install mysql-community, mysql-shell and mysql-router; then, with one command inside MySQL Shell, you can deploy local instances and configure them for replication and failover (instructions at https://dev.mysql.com/doc/refman/5.7/en/mysql-innodb-cluster-sandbox-deployment.html).
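To make that concrete, here is roughly what the sandbox deployment looks like in MySQL Shell’s Python mode (a provisioning sketch only - the port numbers and cluster name are my own choices, and the dba/shell objects are globals provided by mysqlsh itself, so this does not run under a plain Python interpreter; the linked manual is authoritative):

```python
# Run inside `mysqlsh --py`; dba and shell are built-in mysqlsh globals.
dba.deploy_sandbox_instance(3310)
dba.deploy_sandbox_instance(3320)
dba.deploy_sandbox_instance(3330)

# Connect to the first instance and turn the three into a cluster.
shell.connect('root@localhost:3310')
cluster = dba.create_cluster('testCluster')
cluster.add_instance('root@localhost:3320')
cluster.add_instance('root@localhost:3330')

# Shows member states, including which instance is the current primary.
cluster.status()
```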

I didn’t use it myself, but there is also https://github.com/mattlord/Docker-InnoDB-Cluster, which you can use for a Docker setup - maybe an easier way to create an environment quickly.

@samuel.rose, I don’t believe that is the issue. It’s a test environment with nothing connected but me and my tests. Also, while the primary is down I have shells opened manually to the instances and they are all reachable, so I don’t believe this is a possibility.

[]'s


#5

It seems the scenario improved a little, but I am stuck: web-admin is not working, so I cannot complete the tests and give a full report.

For the upgrade I used the Zato 2.0.8 RPM, downloaded from a VM connected to the internet and transferred to my sandboxed instances. I hit 3 issues:

  • zato “current” symbolic link missing: I remade the link manually and the Zato paths became reachable;
  • zato complaining about the globre module missing: I downloaded and installed it manually under zato_extra_paths, and Zato can now start;
  • web admin does not start: the pidfile is created, but the process never starts and no logs are generated in the web-admin folder. Recreating the web-admin from the Zato CLI still does not help.

Suggestions for the last step?

Edit: Trying a cleaner install procedure to see if it improves things. The “current” folder is now present directly after installation, but the globre module is still missing…


#6

What happens when you run this command?

$ zato start /path/to/web/admin --fg

#7

@rtrind - also, does the same happen if you install 2.0.8 from scratch?

There was also this pull request - does that seem relevant?


#8

Using the --fg flag I can see more missing modules; now it’s pysqlite2.

I don’t have experience with Django, so I cannot say whether its setup is relevant to these issues.

I also quickly created a new VM from scratch and the same “missing globre” message was there with the 2.0.8 offline RPM install. I’m using zato-2.0.8-stable.el6.x86_64.rpm from your yum repo, on RHEL 6.7 x64 under XenServer. I also tried a local VirtualBox VM with the same OS, connected directly to the internet (so exactly like the official instructions) - same results.

Clean install seems to have the same issues (except for the missing “current” link).

EDIT: Seems related to the “Zato 2.0.8 released” thread. RHEL/CentOS always has some gotchas, and it seems libraries that are on the proper paths are not recognized by Zato’s Python.

For the globre library, the egg was not on the same path as installed from the 2.0.8 branch, so maybe this one is a separate issue.

EDIT2: I managed to solve the globre problem by running “python setup.py install” from the globre source folder. It gives some errors when trying to download elements from PyPI (since my machine has no internet access), but Zato then works properly. The pysqlite2 error persists, though. It seems that on RHEL something is different in the compiled version you are embedding, related to the presence of the sqlite-devel package, which is used instead of pysqlite2 when present. I tried installing it, but it seems Python would need to be recompiled for it to work.
Ideas? Anything else we can try to make 2.0.8 work on RHEL 6.7 so we can get back to the previous discussion?


#9

Found something. In “current/code/lib/python2.7/lib-dynload” the _sqlite3.so file was missing. Copying this file over from the 2.0.7 installation allowed me to get a little further (on the clean install I still cannot finish the quickstart environment; it stops at step 6). Will keep you posted.
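As a quick sanity check after copying the file over, one can verify that the interpreter’s SQLite bindings load at all (a generic diagnostic of my own, not anything Zato-specific - run it with the Python that Zato bundles):

```python
# If _sqlite3.so is missing or incompatible, this import raises ImportError;
# otherwise it reports the SQLite library the interpreter is linked against.
import sqlite3

print(sqlite3.sqlite_version)
```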

EDIT: Yeah, as expected, 2.0.8 is somewhat broken on RHEL 6 - now it complains about the django_openid_auth module. Please advise on next steps (remember we are on a side track retesting under 2.0.8; it’s not the main quest, and I’m not sure whether we should keep going down this line).

[]'s


#10

Hi @rtrind - just letting you know that I will not be able to look into it until the next weekend at the earliest (Jan 20-21).


#11

No rush. I appreciate the feedback.


#12

Hello @rtrind - I spent the entire day on it and I’d like to ask you to install 2.0.8 on a clean system without upgrading from a previous 2.0.7 installation.

Otherwise we will be resolving things that are not the core matter.

As a side note, I can see that Zato 3.0 will be the last version with free support for RHEL 6. This is becoming too burdensome, particularly now that pip will soon stop supporting Python 2.6.


#13

Hello, @dsuch,

I am already on a system installed from scratch, without an upgrade - as I wrote above, “Clean install seems to have the same issues (except for the missing ‘current’ link)” - so we are probably not troubleshooting upgrade issues anymore.

Thanks for the info about RHEL 6. I don’t have a say in the upgrade path for the company product, and I need to use the same servers, since we don’t have budget for extra hardware for this project. I appreciate the effort on the 3.0 release nonetheless.

[]'s


#14

Is using RHEL 7 an option? You will likely have growing issues with Python software one way or another, given that pip will not support Python 2.6.


#15

Or simply update the python version in use?


#16

As long as I am stuck using part of the deployment servers, I’m stuck with the company’s official OSes, so I don’t foresee any change from RHEL 6.7 soon (it was less than a year ago that we migrated from 5.10). Zato is not being used as a standard company solution - it’s a local devops effort to cover some customer use cases - so we cannot drive changes in the platform right now. Sorry about that.

If you can’t support RHEL 6 without paid support because it’s not worth it, I completely understand. I would just ask whether there is any way you could leave easier hooks in Zato 3 for us, as a community, to build custom installation packages for such systems. I am not a competent enough architect/programmer to suggest how to do this, but if you lead the way I will do my best to help make those images work and keep them available for anyone in the same boat (RHEL 6 offline, in my case). Let me know if or how I could help with it.

I wouldn’t mind having to use another system (with whatever OS version needed and with internet connection) to prepare such packages.

@aekroft, if you update the OS Python from 2.6 you break the system (tools like yum stop working), so it’s not an option.

[]'s


#17

Hi @rtrind,

I’m not saying that RHEL 6 cannot be supported - that is not the case. It is fully supported, and what you see is an issue that I consider needs to be fixed.

Just to clarify - I only said that Zato 3.0 will be the last version with free support for RHEL 6.

It’s not about a lack of will; it is simply the case that RHEL 6 is completely out of touch with today’s server landscape. We are talking kernel 2.6 and (coincidentally) Python 2.6. This Python version is no longer supported by the core Python developers, pip will not support it, and the current PostgreSQL Python driver (psycopg2) does not support it.

And the list will grow with time, because if you use older versions of core dependencies, other dependencies do not work, which cascades and means still more dependencies refuse to work with the older ones, and so on. The only reason it is possible to use this system at all is that Red Hat backport everything and offer it as a paid service, and at some point we will not be able to do the same for free.

That all said, GitHub ticket #817 is the one under which we will move all of our compilation from zc.buildout to pip, which will be the first step towards making the build process more streamlined.

This will also be an opportunity to rethink how to build packages for RHEL 6 - that is the real issue now: getting all the modern tooling to work on that system, not Zato as such, which has worked just fine on this OS for years.

Using Python 2.7 on RHEL 6, as @aekroft suggested, should be possible, and it is not ruled out that we will start to use it. One needs to enable SCL (Software Collections), which includes Python 2.7. It installs into /opt/rh and needs a special session wrapper before it can be used (scl enable python27 bash). Perhaps that will suffice to isolate it from the rest of the system - we need time to investigate.

As for building Zato packages, I will post a longer message about it, but the short version is that it is easy:

  • Log in to a system for which you’d like to build a package, with the same CPU architecture, e.g. RHEL 6 64-bit if you plan to build an .rpm for that OS, or Ubuntu 14.04 32-bit if it is a .deb for that OS

  • Run git clone https://github.com/zatosource/zato-build/

  • Navigate to the directory with build scripts for your OS, e.g. ./rhel6

  • Run ./build-zato.sh - this is a command that works the same on all systems, be it RHEL, Ubuntu, Alpine or anything

  • Sample usage: ./build-zato.sh support/2.0-rhel6 2.0.9 custom1

That is it - after some time (10-30 minutes) there will be a package for you in a directory specific to each OS; its name will be given in the output.


#18

Here is how the Python update on RHEL 6 can be done using SCL:
http://docs.datastax.com/en/cassandra/2.1/cassandra/install/installPython27RHEL.html

Also, you could install another Python version alongside the default OS Python and use it to update the Zato virtualenv. Here is how