Zato 3.1 Py3 version - duplicate key error when uploading service

Hi guys.

I have a Zato 3.1 Python 3 quickstart cluster up and running on my test server. All looks well until I try uploading a .py file with a service that does nothing more than write a single line to server.log.
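
For reference, the uploaded file is essentially just this - a minimal sketch reconstructed from the names in the traceback below (dumb_service.py with a DumbService class), not necessarily character-for-character what I deployed:

# -*- coding: utf-8 -*-

# Zato
from zato.server.service import Service

class DumbService(Service):
    """ Does nothing except write a single line to server.log. """
    def handle(self):
        self.logger.info('DumbService was here')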

The service does get registered in the database, but not without errors in the log; see below.

The ODB was brand new and devoid of anything except a schema named "zato" before running quickstart create.

Any insights to be had?

server1.log
2019-06-25 08:47:17,764 - INFO - 31995:DummyThread-44 - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 08:47:17,765 - INFO - 31995:DummyThread-44 - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 08:47:17,785 - WARNING - 31995:DummyThread-44 - zato.server.service:0 - Traceback (most recent call last):
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 508, in do_execute
cursor.execute(statement, parameters)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 861, in execute
self._c.execute(self, operation, args)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1907, in execute
self.handle_messages(cursor)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1974, in handle_messages
raise self.error
pg8000.core.IntegrityError: {'S': 'ERROR', 'V': 'ERROR', 'C': '23505', 'M': 'duplicate key value violates unique constraint "service_name_cluster_id_key"', 'D': 'Key (name, cluster_id)=(dumb-service.dumb-service, 1) already exists.', 's': 'zato', 't': 'service', 'n': 'service_name_cluster_id_key', 'F': 'nbtinsert.c', 'L': '535', 'R': '_bt_check_unique'}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/__init__.py", line 639, in update_handle
self._invoke(service, channel)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/__init__.py", line 533, in _invoke
service.handle()
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 263, in handle
self.response.payload.services_deployed = self.deploy_package(self.request.input.package_id, session)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 215, in deploy_package
return self._deploy_package(session, package_id, dp.payload_name, dp.payload)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 185, in _deploy_package
services_deployed = self._deploy_file(current_work_dir, payload, file_name)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 159, in _deploy_file
info = self.server.service_store.import_services_from_anywhere(file_name, current_work_dir)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/store.py", line 816, in import_services_from_anywhere
self._store_in_odb(session, info.to_process)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/store.py", line 734, in _store_in_odb
needs_commit = self._store_services_in_odb(session, batch_indexes, to_process)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/store.py", line 650, in _store_services_in_odb
self.odb.add_services(session, elems)
File "/opt/zato/3.1.0/code/zato-common/src/zato/common/odb/api.py", line 745, in add_services
session.execute(ServiceTableInsert().values(data))
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1176, in execute
bind, close_with_result=True).execute(clause, params or {})
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 508, in do_execute
cursor.execute(statement, parameters)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 861, in execute
self._c.execute(self, operation, args)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1907, in execute
self.handle_messages(cursor)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1974, in handle_messages
raise self.error
sqlalchemy.exc.IntegrityError: (pg8000.core.IntegrityError) {'S': 'ERROR', 'V': 'ERROR', 'C': '23505', 'M': 'duplicate key value violates unique constraint "service_name_cluster_id_key"', 'D': 'Key (name, cluster_id)=(dumb-service.dumb-service, 1) already exists.', 's': 'zato', 't': 'service', 'n': 'service_name_cluster_id_key', 'F': 'nbtinsert.c', 'L': '535', 'R': '_bt_check_unique'} [SQL: "INSERT INTO service (id, name, is_active, impl_name, is_internal, slow_threshold, cluster_id) VALUES (nextval('service_id_seq'), %s, %s, %s, %s, %s, %s)"] [parameters: ('dumb-service.dumb-service', True, 'dumb_service.DumbService', False, 99999, 1)] (Background on this error at: http://sqlalche.me/e/gkpj)

2019-06-25 08:47:17,785 - ERROR - 31995:DummyThread-44 - zato:0 - Could not handle broker msg:Bunch(action='102200', cid='181d08ecd34d96dbf2a11359', msg_type='0002', package_id=1, payload={'package_id': 1}, service='zato.hot-deploy.create'), e:`Traceback (most recent call last):
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 508, in do_execute
cursor.execute(statement, parameters)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 861, in execute
self._c.execute(self, operation, args)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1907, in execute
self.handle_messages(cursor)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1974, in handle_messages
raise self.error
pg8000.core.IntegrityError: {'S': 'ERROR', 'V': 'ERROR', 'C': '23505', 'M': 'duplicate key value violates unique constraint "service_name_cluster_id_key"', 'D': 'Key (name, cluster_id)=(dumb-service.dumb-service, 1) already exists.', 's': 'zato', 't': 'service', 'n': 'service_name_cluster_id_key', 'F': 'nbtinsert.c', 'L': '535', 'R': '_bt_check_unique'}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/zato/3.1.0/code/zato-broker/src/zato/broker/__init__.py", line 52, in on_broker_msg
getattr(self, handler)(msg)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/base/worker/__init__.py", line 1784, in on_broker_msg_HOT_DEPLOY_CREATE_SERVICE
serialize=False, needs_response=True)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/base/worker/__init__.py", line 1775, in on_broker_msg_hot_deploy
return self.on_message_invoke_service(msg, 'hot-deploy', 'HOT_DEPLOY_{}'.format(action), args, **kwargs)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/base/worker/__init__.py", line 1550, in on_message_invoke_service
environ=msg.get('environ'))
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/__init__.py", line 691, in update_handle
raise e if isinstance(e, Exception) else Exception(e)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/__init__.py", line 639, in update_handle
self._invoke(service, channel)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/__init__.py", line 533, in _invoke
service.handle()
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 263, in handle
self.response.payload.services_deployed = self.deploy_package(self.request.input.package_id, session)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 215, in deploy_package
return self._deploy_package(session, package_id, dp.payload_name, dp.payload)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 185, in _deploy_package
services_deployed = self._deploy_file(current_work_dir, payload, file_name)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/internal/hot_deploy/__init__.py", line 159, in _deploy_file
info = self.server.service_store.import_services_from_anywhere(file_name, current_work_dir)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/store.py", line 816, in import_services_from_anywhere
self._store_in_odb(session, info.to_process)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/store.py", line 734, in _store_in_odb
needs_commit = self._store_services_in_odb(session, batch_indexes, to_process)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/store.py", line 650, in _store_services_in_odb
self.odb.add_services(session, elems)
File "/opt/zato/3.1.0/code/zato-common/src/zato/common/odb/api.py", line 745, in add_services
session.execute(ServiceTableInsert().values(data))
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/orm/session.py", line 1176, in execute
bind, close_with_result=True).execute(clause, params or {})
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 948, in execute
return meth(self, multiparams, params)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/sql/elements.py", line 269, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1060, in _execute_clauseelement
compiled_sql, distilled_params
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1200, in _execute_context
context)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1413, in _handle_dbapi_exception
exc_info
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 203, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb, cause=cause)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/util/compat.py", line 186, in reraise
raise value.with_traceback(tb)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/base.py", line 1193, in _execute_context
context)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/sqlalchemy/engine/default.py", line 508, in do_execute
cursor.execute(statement, parameters)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 861, in execute
self._c.execute(self, operation, args)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1907, in execute
self.handle_messages(cursor)
File "/opt/zato/3.1.0/code/lib/python3.6/site-packages/pg8000/core.py", line 1974, in handle_messages
raise self.error
sqlalchemy.exc.IntegrityError: (pg8000.core.IntegrityError) {'S': 'ERROR', 'V': 'ERROR', 'C': '23505', 'M': 'duplicate key value violates unique constraint "service_name_cluster_id_key"', 'D': 'Key (name, cluster_id)=(dumb-service.dumb-service, 1) already exists.', 's': 'zato', 't': 'service', 'n': 'service_name_cluster_id_key', 'F': 'nbtinsert.c', 'L': '535', 'R': '_bt_check_unique'} [SQL: "INSERT INTO service (id, name, is_active, impl_name, is_internal, slow_threshold, cluster_id) VALUES (nextval('service_id_seq'), %s, %s, %s, %s, %s, %s)"] [parameters: ('dumb-service.dumb-service', True, 'dumb_service.DumbService', False, 99999, 1)] (Background on this error at: http://sqlalche.me/e/gkpj)

server2.log
2019-06-25 08:47:17,764 - INFO - 32046:DummyThread-45 - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 08:47:17,765 - INFO - 32046:DummyThread-45 - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 08:47:17,801 - INFO - 32046:DummyThread-45 - zato.hot-deploy.create:0 - Uploaded package id:1, payload_name:dumb_service.py

Shutting down the cluster and restarting with a single server gives no errors in the log when deploying a test service.

2019-06-25 09:14:18,071 - INFO - 2738:MainThread - zato:0 - Starting Zato 3.1+rev.544100e-py3.6.3
2019-06-25 09:14:18,074 - INFO - 2738:MainThread - zato:0 - Listening at: http://0.0.0.0:17010 (2738)
2019-06-25 09:14:18,074 - INFO - 2738:MainThread - zato:0 - Using worker: gevent
2019-06-25 09:14:18,078 - INFO - 2769:MainThread - zato:0 - Booting worker with pid: 2769
2019-06-25 09:14:18,500 - INFO - 2769:MainThread - zato.server.base.parallel:0 - Preferred address of server1@quickstart-1042557 (pid: 2769) is http://172.22.71.101:17010
2019-06-25 09:14:19,731 - INFO - 2769:MainThread - zato.server.service.store:0 - Deploying cached internal services (server1)
2019-06-25 09:14:20,990 - INFO - 2769:MainThread - zato.server.service.store:0 - Deployed 538 cached internal services (server1)
2019-06-25 09:14:20,991 - INFO - 2769:MainThread - zato.server.base.parallel:0 - Deploying user-defined services (server1)
2019-06-25 09:14:21,023 - INFO - 2769:MainThread - zato.server.base.parallel:0 - Deployed 1 user-defined service (server1)
2019-06-25 09:14:21,444 - INFO - 2769:DummyThread-16 - zato:0 - Set sk_server for sub_key zpsk.rest.zeci.1480a8 (wsx:0) - server1:2769, current sk_servers:
+--------+-----------------+----------+--------+-------------------+-------------------------+
| #      | created         | name     | pid    | channel_name      | sub_key                 |
+========+=================+==========+========+===================+=========================+
| 1      | 2019-06-25      | server1  | 2769   | ---               | zpsk.rest.zeci.1480a8   |
|        | 07:14:21.443330 |          |        |                   |                         |
+--------+-----------------+----------+--------+-------------------+-------------------------+
2019-06-25 09:14:21,451 - INFO - 2769:MainThread - zato.server.base.parallel:0 - Found extra services to deploy: dumb-service.dumb-service
2019-06-25 09:14:21,472 - INFO - 2769:MainThread - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 09:14:21,474 - INFO - 2769:MainThread - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 09:14:21,534 - INFO - 2769:MainThread - zato.hot-deploy.create:0 - Uploaded package id:5, payload_name:dumb_service.py
2019-06-25 09:14:21,537 - INFO - 2769:MainThread - zato.server.base.parallel:0 - Deployed extra services found: ['dumb-service.dumb-service']
2019-06-25 09:14:21,550 - INFO - 2769:MainThread - zato.server.base.parallel:0 - First worker of server1 is 2769
2019-06-25 09:14:21,556 - INFO - 2769:DummyThread-17 - zato.helpers.input-logger:0 - {'request.payload': 'Sample payload for a startup service (first worker)', 'channel': 'startup-service', 'cid': '2410584c6f173f4a3af89b63', 'data_format': None, 'environ': {}, 'impl_name': 'zato.server.service.internal.helpers.InputLogger', 'invocation_time': datetime.datetime(2019, 6, 25, 7, 14, 21, 556547), 'job_type': None, 'name': 'zato.helpers.input-logger', 'slow_threshold': None, 'usage': 3, 'wsgi_environ': {'zato.request_ctx.async_msg': Bunch(action='101802', channel='startup-service', cid='2410584c6f173f4a3af89b63', msg_type='0001', payload='Sample payload for a startup service (first worker)', service='zato.helpers.input-logger'), 'zato.request_ctx.in_reply_to': None, 'zato.request_ctx.fanout_cid': None, 'zato.request_ctx.parallel_exec_cid': None}}
2019-06-25 09:14:21,566 - INFO - 2769:MainThread - zato.server.connection.connector.subprocess_.ipc:0 - Starting SFTP connector for server server1 on 34567
2019-06-25 09:14:21,585 - INFO - 2769:DummyThread-32 - pub.zato.channel.web-socket.cleanup-wsx:0 - Cleaning up old WSX connections; now:2019-06-25T07:14:21.585309, md:7200, ma:2019-06-25 05:14:21.585309
2019-06-25 09:14:22,800 - INFO - 2769:DummyThread-33 - zato.common.ipc.subscriber.Subscriber:0 - Established sub/bind to ipc:///tmp/zato-ipc-quickstart_1042557_server1_2769 (self.pid: 2769)
2019-06-25 09:14:23,001 - INFO - 2769:MainThread - zato.server.base.parallel:0 - Started server1@quickstart-1042557 (pid: 2769)
2019-06-25 09:15:12,946 - INFO - 2769:DummyThread-38 - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 09:15:12,948 - INFO - 2769:DummyThread-38 - zato.hot-deploy.create:0 - Creating tar archive
2019-06-25 09:15:12,981 - INFO - 2769:DummyThread-38 - zato.hot-deploy.create:0 - Uploaded package id:6, payload_name:Testing_testing.py

After a cluster restart with both servers, the error reappears.

I notice that in the ODB the sequence "service_id_seq" is incremented by 2 when deploying a service.

Perhaps my issue is that BOTH servers are performing the same operation against the database at the same time?
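
In case it helps with reproducing, a quick way to watch the table and the sequence while deploying is something along these lines - a rough sketch only, with placeholder connection details for our test ODB, using pg8000 since that is the driver in the traceback:

# Sketch only - inspect the ODB during a hot-deploy; replace the placeholders with real credentials.
import pg8000

conn = pg8000.connect(user='zato', password='change-me', host='localhost', database='zato')
cur = conn.cursor()

# Rows for the service that triggers the duplicate key error
cur.execute("SELECT id, name, cluster_id FROM service WHERE name = %s", ('dumb-service.dumb-service',))
print(cur.fetchall())

# Current value of the sequence the INSERT uses for service.id
cur.execute("SELECT last_value FROM service_id_seq")
print(cur.fetchone())

conn.close()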

Edit:
When deleting a service via the services GUI, a KeyError appears in the log. The service in question is gone from the GUI and its record is gone from the service table in the ODB. It looks to me like the servers do not agree on which of them is the singleton, so they both attempt the same operations: dual INSERT attempts into the "service" table based on the same value from the DB sequence, and dual attempts at deleting the same record.
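
Purely as an illustration of what I think happens on delete (this mimics the lookup from the traceback further down; it is not Zato's actual code), the second of two racing deletes against the same in-memory mapping fails with exactly this kind of KeyError:

# Illustration only - two servers react to the same broker message and both try to delete.
name_to_impl_name = {'dumb-testing.dumb-test': 'dumb_testing.DumbTest'}

def delete_service_data(name):
    impl_name = name_to_impl_name[name]  # KeyError if another server already deleted it
    del name_to_impl_name[name]
    print('Deleted', impl_name)

delete_service_data('dumb-testing.dumb-test')  # first server - succeeds
delete_service_data('dumb-testing.dumb-test')  # second server - raises KeyError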

On Zato 2.0.7 we had log events for the keep-alive of the singleton server talking to the ODB. I see nothing like that in the logs on 3.1.

2.0.7 keep-alive:
2019-06-25 11:57:50,700 - INFO - 6357:Dummy-909395 - zato.server.odb:22 - last_keep_alive:[2019-06-25 09:57:45.828724], grace_time:[90], max_allowed:[2019-06-25 09:59:15.828724], now:[2019-06-25 09:57:50.700284]

Delete a service error:
2019-06-25 11:51:38,537 - ERROR - 15542:DummyThread-101 - zato:0 - Could not handle broker msg:Bunch(action='101801', id=628, impl_name='dumb_testing.DumbTest', is_internal=False, msg_type='0002', name='dumb-testing.dumb-test'), e:`Traceback (most recent call last):
File "/opt/zato/3.1.0/code/zato-broker/src/zato/broker/__init__.py", line 52, in on_broker_msg
getattr(self, handler)(msg)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/base/worker/__init__.py", line 1721, in on_broker_msg_SERVICE_DELETE
self.server.service_store.delete_service_data(msg.name)
File "/opt/zato/3.1.0/code/zato-server/src/zato/server/service/store.py", line 236, in delete_service_data
impl_name = self.name_to_impl_name[name] # type: unicode
KeyError: 'dumb-testing.dumb-test'

Hello @badsequel - I cannot reproduce it under any conditions.

Can you please change the level of all handlers in logging.conf from INFO to DEBUG, try again, and upload the server.log files in their entirety from both servers?

Thank you.

Hi again.
There is a limit on the number of characters I am allowed to post.

Send them by email, perhaps?

Yes, please feel free to do so.

Hi, @dsuch!

Could this be related to this issue on GitHub?

Thanks, yes, it looks to be the same. The caveat though is that I cannot reproduce it for now.

Hello @badsequel - I just have two notes.

  • Singleton servers were removed in Zato 3.0, which is a version I believe you skipped
  • Related to that, the scheduler is now its own component; it is started and stopped just like servers or web-admin

Hi @dsuch

Yeah, we skipped 2.0.8 and 3.0 altogether.

More testing tomorrow if time allows. Let me know if there is something I can provide that might help nail the issue.

It happened to me only upon the first deploy of a service. Further hot-deploys of the same (updated) service did not produce the error.

Hello @badsequel

There is a new version of the Docker image. Can you download it with

docker pull registry.gitlab.com/zatosource/docker-registry/quickstart:3.1-py3

and run the quickstart again? Thank you.

Hi,

Sorry, but Docker is not my cup of tea. I do not use it and do not have time to learn it at this point.

Hi,

I got a developer here to test the new image and we get exactly the same issues with that Docker image as on our non-Docker test VM.