(Migrated) About problem "(2006, 'MySQL server has gone away')"

(This message has been automatically imported from the retired mailing list)

Hello, I'm Chinese and my English is poor, so I will try my best to describe my problem with the MySQL connection clearly.

We recently changed the ODB from SQLite3 to MySQL, but problems have occurred every day since then.

Every morning when I log in to Zato, the following exception is shown on the page (part of the error information):

Traceback:
File "/opt/zato/2.0.7/eggs/Django-1.3.7-py2.7.egg/django/core/handlers/base.py" in get_response
89. response = middleware_method(request)
File "/opt/zato/2.0.7/zato-web-admin/src/zato/admin/middleware.py" in process_request
97. req.zato.cluster = req.zato.odb.query(Cluster).filter_by(id=req.zato.cluster_id).one()
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py" in one
2398. ret = list(self)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py" in __iter__
2441. return self._execute_and_instances(context)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py" in _execute_and_instances
2456. result = conn.execute(querycontext.statement, self._params)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py" in execute
841. return meth(self, multiparams, params)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/sql/elements.py" in _execute_on_connection
322. return connection._execute_clauseelement(self, multiparams, params)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py" in _execute_clauseelement
938. compiled_sql, distilled_params
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py" in _execute_context
1070. context)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py" in _handle_dbapi_exception
1271. exc_info
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/util/compat.py" in raise_from_cause
199. reraise(type(exception), exception, tb=exc_tb)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py" in _execute_context
1063. context)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/default.py" in do_execute
442. cursor.execute(statement, parameters)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py" in execute
132. result = self._query(query)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py" in _query
271. conn.query(q)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py" in query
725. self._execute_command(COM_QUERY, sql)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py" in _execute_command
888. self._write_bytes(prelude + sql[:chunk_size-1])
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py" in _write_bytes
848. raise OperationalError(2006, "MySQL server has gone away (%r)" % (e,))

Exception Type: OperationalError at /zato/scheduler/execute/482/cluster/1/
Exception Value: (OperationalError) (2006, "MySQL server has gone away (error(32, 'Broken pipe'))") 'SELECT cluster.id AS cluster_id, cluster.name AS cluster_name, cluster.description AS cluster_description, cluster.odb_type AS cluster_odb_type, cluster.odb_host AS cluster_odb_host, cluster.odb_port AS cluster_odb_port, cluster.odb_user AS cluster_odb_user, cluster.odb_db_name AS cluster_odb_db_name, cluster.odb_schema AS cluster_odb_schema, cluster.broker_host AS cluster_broker_host, cluster.broker_port AS cluster_broker_port, cluster.lb_host AS cluster_lb_host, cluster.lb_port AS cluster_lb_port, cluster.lb_agent_port AS cluster_lb_agent_port, cluster.cw_srv_id AS cluster_cw_srv_id, cluster.cw_srv_keep_alive_dt AS cluster_cw_srv_keep_alive_dt \nFROM cluster \nWHERE cluster.id = %s' ('1',)

And when I hot-deploy service code, the following exception occurs in server.log:
2015-11-25 10:43:38,427 - ERROR - 25683:Dummy-6416 - zato.server.odb:22 - Could not add service, name:[syncdata.http-service], e:[Traceback (most recent call last):
File "/opt/zato/2.0.7/zato-server/src/zato/server/odb.py", line 233, in add_service
self._session.commit()
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 788, in commit
self.transaction.commit()
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 384, in commit
self._prepare_impl()
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 364, in _prepare_impl
self.session.flush()
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 1985, in flush
self._flush(objects)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2103, in _flush
transaction.rollback(_capture_exception=True)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/util/langhelpers.py", line 60, in __exit__
compat.reraise(exc_type, exc_value, exc_tb)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/session.py", line 2067, in _flush
flush_context.execute()
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 372, in execute
rec.execute(self)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/unitofwork.py", line 526, in execute
uow
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 55, in save_obj
table, states_to_update)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/persistence.py", line 385, in collect_update_commands
state, state_dict)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/attributes.py", line 589, in get
value = callable_(state, passive)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/state.py", line 433, in __call__
self.manager.deferred_scalar_loader(self, toload)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/loading.py", line 613, in load_scalar_attributes
only_load_props=attribute_names)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/loading.py", line 235, in load_on_ident
return q.one()
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2398, in one
ret = list(self)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2441, in __iter__
return self._execute_and_instances(context)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/orm/query.py", line 2456, in _execute_and_instances
result = conn.execute(querycontext.statement, self._params)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 841, in execute
return meth(self, multiparams, params)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/sql/elements.py", line 322, in _execute_on_connection
return connection._execute_clauseelement(self, multiparams, params)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 938, in _execute_clauseelement
compiled_sql, distilled_params
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1070, in _execute_context
context)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1271, in _handle_dbapi_exception
exc_info
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/util/compat.py", line 199, in raise_from_cause
reraise(type(exception), exception, tb=exc_tb)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1063, in _execute_context
context)
File "/opt/zato/2.0.7/eggs/SQLAlchemy-0.9.9-py2.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 442, in do_execute
cursor.execute(statement, parameters)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py", line 132, in execute
result = self._query(query)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/cursors.py", line 271, in _query
conn.query(q)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 725, in query
self._execute_command(COM_QUERY, sql)
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 888, in _execute_command
self._write_bytes(prelude + sql[:chunk_size-1])
File "/opt/zato/2.0.7/eggs/PyMySQL-0.6.2-py2.7.egg/pymysql/connections.py", line 848, in _write_bytes
raise OperationalError(2006, "MySQL server has gone away (%r)" % (e,))
OperationalError: (OperationalError) (2006, "MySQL server has gone away (error(32, 'Broken pipe'))") 'SELECT cluster.id AS cluster_id, cluster.name AS cluster_name, cluster.description AS cluster_description, cluster.odb_type AS cluster_odb_type, cluster.odb_host AS cluster_odb_host, cluster.odb_port AS cluster_odb_port, cluster.odb_user AS cluster_odb_user, cluster.odb_db_name AS cluster_odb_db_name, cluster.odb_schema AS cluster_odb_schema, cluster.broker_host AS cluster_broker_host, cluster.broker_port AS cluster_broker_port, cluster.lb_host AS cluster_lb_host, cluster.lb_port AS cluster_lb_port, cluster.lb_agent_port AS cluster_lb_agent_port, cluster.cw_srv_id AS cluster_cw_srv_id, cluster.cw_srv_keep_alive_dt AS cluster_cw_srv_keep_alive_dt \nFROM cluster \nWHERE cluster.id = %s' (1,)
]
2015-11-25 10:43:38,433 - ERROR - 25683:Dummy-6416 - zato.server.service.store:22 - Exception while visit mod:[<module 'syncdata' from '/opt/zato/genscript_esb/server2/work/hot-deploy/current/syncdata.py'>], is_internal:[False], fs_location:[/opt/zato/genscript_esb/server2/work/hot-deploy/current/syncdata.py], e:[Traceback (most recent call last):
File "/opt/zato/2.0.7/zato-server/src/zato/server/service/store.py", line 261, in _visit_module
name, impl_name, is_internal, timestamp, dumps(str(depl_info)), si)
TypeError: 'NoneType' object is not iterable
]

I read http://docs.sqlalchemy.org/en/latest/core/engines.html?highlight=pool_timeout and googled around, and found that the pool_recycle parameter matters, so I set it in server.conf under [odb]:
[odb]
extra=pool_recycle=600
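
For reference, pool_recycle acts at the SQLAlchemy connection-pool level roughly as in the minimal sketch below - the connection URL and credentials are placeholders, not Zato's actual wiring:

    # Minimal sketch of pool_recycle; the URL and credentials are placeholders.
    from sqlalchemy import create_engine

    engine = create_engine(
        'mysql+pymysql://zato:password@localhost/zato',
        pool_size=5,       # mirrors the [odb] pool_size setting
        pool_recycle=600,  # reopen connections that have been idle > 600s
    )

    # A connection checked out after sitting idle longer than pool_recycle
    # is transparently replaced, so it should never hit MySQL's wait_timeout.
    # (Plain-string execute is fine on the SQLAlchemy 0.9.x line used here.)
    conn = engine.connect()
    conn.execute('SELECT 1')
    conn.close()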

But the problem still exists and I am very puzzled. What should I do to solve it?

Thank you all very much.
wangxi

=====================================================================

It is 8 hours;

mysql> show variables like '%timeout%';
+----------------------------+----------+
| Variable_name              | Value    |
+----------------------------+----------+
| connect_timeout            | 10       |
| delayed_insert_timeout     | 300      |
| innodb_lock_wait_timeout   | 50       |
| innodb_rollback_on_timeout | OFF      |
| interactive_timeout        | 28800    |
| lock_wait_timeout          | 31536000 |
| net_read_timeout           | 30       |
| net_write_timeout          | 60       |
| slave_net_timeout          | 3600     |
| wait_timeout               | 28800    |
+----------------------------+----------+
10 rows in set (0.01 sec)

Thanks,
wangxi

=====================================================================

On 26/11/15 11:39, Xi Wang wrote:

raise OperationalError(2006, "MySQL server has gone away (%r)" % (e,))

Hi there,

Zato deals with MySQL reconnections as follows:

  • In web-admin the connection pool’s TTL is set to 600 seconds.

  • In servers there is a scheduler job called
    'zato.outgoing.sql.auto-ping' which, every 180 seconds, pings all the
    outgoing SQL connections to keep them alive (a conceptual sketch of
    such a ping follows below).
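
For illustration, such a ping conceptually boils down to the loop below - a sketch only, not Zato's actual implementation; the 'engines' mapping of names to SQLAlchemy engines is hypothetical:

    # Conceptual sketch of a periodic SQL auto-ping: issue a trivial query
    # on every outgoing SQL connection pool so the idle timer on the MySQL
    # side never reaches wait_timeout. Not Zato's actual code.
    import time

    def auto_ping(engines, interval=180):
        while True:
            for name, engine in engines.items():
                try:
                    conn = engine.connect()
                    conn.execute('SELECT 1')
                    conn.close()
                except Exception:
                    # A failed ping is not fatal - the pool simply
                    # reconnects on the next checkout.
                    pass
            time.sleep(interval)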

Can you please send information on what the value of wait_timeout is in
your MySQL installation?

thanks,

On 26/11/15 12:19, Xi Wang wrote:

mysql> show variables like '%timeout%';
+----------------------------+----------+
| Variable_name              | Value    |
+----------------------------+----------+
| connect_timeout            | 10       |
| delayed_insert_timeout     | 300      |
| innodb_lock_wait_timeout   | 50       |
| innodb_rollback_on_timeout | OFF      |
| interactive_timeout        | 28800    |
| lock_wait_timeout          | 31536000 |
| net_read_timeout           | 30       |
| net_write_timeout          | 60       |
| slave_net_timeout          | 3600     |
| wait_timeout               | 28800    |
+----------------------------+----------+
10 rows in set (0.01 sec)

Ok, can you please do the following?

  • Log in to web-admin
  • Browse around, for instance, list all of the services and scheduler jobs
  • Leave the web-admin
  • Wait 15 minutes (but not 20)
  • Run SHOW FULL PROCESSLIST
  • Send the command’s output here?

https://dev.mysql.com/doc/refman/5.7/en/show-processlist.html

I’m trying to understand if the pools get recycled as they should be in
180 or 600 seconds for servers and web-admin, respectively.
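
One way to watch this without refreshing anything by hand is a small throwaway monitor along the lines of the sketch below; the host and credentials are placeholders:

    # Polls SHOW FULL PROCESSLIST and prints (Id, Time) for the zato user's
    # connections, to see whether they are being recycled/pinged in time.
    import time
    import pymysql

    conn = pymysql.connect(host='localhost', user='root', passwd='secret')
    cur = conn.cursor()
    while True:
        cur.execute('SHOW FULL PROCESSLIST')
        zato_rows = [row for row in cur.fetchall() if row[1] == 'zato']
        print([(row[0], row[5]) for row in zato_rows])  # (Id, Time) pairs
        time.sleep(60)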

On 26.11.2015 at 12:19, Xi Wang wrote:

It is 8 hours;

mysql> show variables like '%timeout%';

That infamous error can also be caused by sending packets that are too
large when max_allowed_packet is set too low. If you google it you will
find many tips for dealing with it (besides switching to Postgres :wink:)
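
To rule that out, the current value can be checked with the same PyMySQL driver the servers already use - a quick sketch with placeholder connection parameters:

    # Prints the current max_allowed_packet value; credentials are placeholders.
    import pymysql

    conn = pymysql.connect(host='localhost', user='zato', passwd='secret', db='zato')
    cur = conn.cursor()
    cur.execute("SHOW VARIABLES LIKE 'max_allowed_packet'")
    print(cur.fetchone())  # e.g. ('max_allowed_packet', '4194304')
    cur.close()
    conn.close()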

mysql> SHOW FULL PROCESSLIST;
+-------+------+-----------------+------+---------+-------+-------+-----------------------+
| Id    | User | Host            | db   | Command | Time  | State | Info                  |
+-------+------+-----------------+------+---------+-------+-------+-----------------------+
| 11371 | zato | localhost:39615 | zato | Sleep   | 907   |       | NULL                  |
| 11723 | zato | localhost:38035 | zato | Sleep   | 15780 |       | NULL                  |
| 11725 | zato | localhost:38127 | zato | Sleep   | 15779 |       | NULL                  |
| 11727 | zato | localhost:38383 | zato | Sleep   | 15780 |       | NULL                  |
| 11728 | zato | localhost:38406 | zato | Sleep   | 15780 |       | NULL                  |
| 11855 | root | localhost       | NULL | Query   | 0     | NULL  | SHOW FULL PROCESSLIST |
| 11939 | zato | localhost:54312 | zato | Sleep   | 22    |       | NULL                  |
| 11940 | zato | localhost:54454 | zato | Sleep   | 92    |       | NULL                  |
| 11941 | zato | localhost:54455 | zato | Sleep   | 92    |       | NULL                  |
| 11942 | zato | localhost:54640 | zato | Sleep   | 4     |       | NULL                  |
| 11943 | zato | localhost:54641 | zato | Sleep   | 4     |       | NULL                  |
| 11944 | zato | localhost:54747 | zato | Sleep   | 22    |       | NULL                  |
| 11945 | zato | localhost:54826 | zato | Sleep   | 5     |       | NULL                  |
+-------+------+-----------------+------+---------+-------+-------+-----------------------+
13 rows in set (0.00 sec)

Ok, the above is the process info.

In fact, I have watched how the process info changes by using the "show processlist" command.
And I found that:
1. When I refresh the web-admin, the first process's "Time" is reset to 0
2. When the second to fifth processes' "Time" goes beyond 28800, those four processes are removed, and the exception occurs again when I hot-deploy service code.

It seems that the web-admin's recycling and the scheduler job zato.outgoing.sql.auto-ping don't work normally. Is that a bug? Or what should I do to fix it?

Thanks,
wangxi

=====================================================================

On 26/11/15 13:53, Xi Wang wrote:

In fact, I have watched how the process info changes by using the "show processlist" command.
And I found that:
1. When I refresh the web-admin, the first process's "Time" is reset to 0
2. When the second to fifth processes' "Time" goes beyond 28800,
those four processes are removed, and the exception occurs again when I hot-deploy service code.

Ok, thanks, I think I know what it is.

You are using more than 1 gunicorn_worker per server, right? (As found
in main.gunicorn_workers in server.conf)

There is an edge case where the auto-ping service that takes care of
SQL timeouts will not run on all of the workers - this is because,
being invoked from the scheduler, there is no guarantee which worker
will execute it.
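
In other words, a scheduler-triggered ping only keeps alive the pools of whichever worker happens to execute it. The general shape of the workaround is for every worker to run its own keep-alive loop, as in the sketch below (an illustration of the idea only - it is not the contents of the gist linked underneath, and it uses a plain thread where Zato has its own machinery):

    # Per-worker keep-alive: each worker starts its own background loop at
    # boot instead of relying on a scheduler job that may land elsewhere.
    import threading
    import time

    def start_keepalive(engine, interval=30):
        def ping_forever():
            while True:
                try:
                    conn = engine.connect()
                    conn.execute('SELECT 1')
                    conn.close()
                except Exception:
                    pass  # the pool reconnects on the next checkout
                time.sleep(interval)
        t = threading.Thread(target=ping_forever)
        t.daemon = True
        t.start()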

Can you please do the following:

  • Make sure all servers are started

  • Download and deploy this service
    https://gist.github.com/dsuch/2599ac624460ce76ad11

  • Run this service from web-admin once only

  • Observe in SHOW FULL PROCESSLIST how 'Time' goes from 0 to 30, then
    restarts from 0 ad infinitum and there are no 'MySQL server has gone
    away' messages anymore.

thanks a lot,

Hi Dariusz,

The following is the process info in mysql after I run the service which you supplied

mysql> show processlist;
+-------+------+-----------------+------+---------+------+-------+------------------+
| Id    | User | Host            | db   | Command | Time | State | Info             |
+-------+------+-----------------+------+---------+------+-------+------------------+
| 12451 | zato | localhost:45296 | zato | Sleep   | 273  |       | NULL             |
| 12482 | zato | localhost:45948 | zato | Sleep   | 1098 |       | NULL             |
| 12484 | zato | localhost:45950 | zato | Sleep   | 1099 |       | NULL             |
| 12486 | zato | localhost:45961 | zato | Sleep   | 1098 |       | NULL             |
| 12488 | zato | localhost:45963 | zato | Sleep   | 1099 |       | NULL             |
| 12510 | zato | localhost:46636 | zato | Sleep   | 3    |       | NULL             |
| 12511 | zato | localhost:46637 | zato | Sleep   | 3    |       | NULL             |
| 12512 | zato | localhost:46657 | zato | Sleep   | 3    |       | NULL             |
| 12513 | zato | localhost:46827 | zato | Sleep   | 3    |       | NULL             |
| 12514 | zato | localhost:46828 | zato | Sleep   | 3    |       | NULL             |
| 12516 | zato | localhost:46888 | zato | Sleep   | 3    |       | NULL             |
| 12517 | zato | localhost:46890 | zato | Sleep   | 3    |       | NULL             |
| 12518 | root | localhost       | NULL | Query   | 0    | NULL  | show processlist |
+-------+------+-----------------+------+---------+------+-------+------------------+
13 rows in set (0.00 sec)


It seems the service works for all processes except the first five. The following is the five processes' info on Linux.
It shows that the first process is "/opt/zato/2.0.7/bin/python /opt/zato/2.0.7/bin/py -m zato.admin.main" and the others are "gunicorn: worker [gunicorn]". Has anyone else met this problem?

root@dev-zato-bj:/# netstat -anpl | egrep '(45296|45948|45950|45961|45963)'
tcp 0 0 127.0.0.1:45296 127.0.0.1:3306 ESTABLISHED 25686/python
tcp 0 0 127.0.0.1:45963 127.0.0.1:3306 ESTABLISHED 25683/gunicorn: wor
tcp 0 0 127.0.0.1:45961 127.0.0.1:3306 ESTABLISHED 25674/gunicorn: wor
tcp 0 0 127.0.0.1:45948 127.0.0.1:3306 ESTABLISHED 25682/gunicorn: wor
tcp 0 0 127.0.0.1:45950 127.0.0.1:3306 ESTABLISHED 25675/gunicorn: wor

root@dev-zato-bj:/# ps -efww | egrep '(25683|25686|25674|25682|25675)'
zato 25674 25650 0 Nov23 pts/0 00:15:05 gunicorn: worker [gunicorn]
zato 25675 25650 0 Nov23 pts/0 00:09:10 gunicorn: worker [gunicorn]
zato 25682 25662 0 Nov23 pts/0 00:15:45 gunicorn: worker [gunicorn]
zato 25683 25662 0 Nov23 pts/0 00:03:46 gunicorn: worker [gunicorn]
zato 25686 1 0 Nov23 pts/0 00:01:03 /opt/zato/2.0.7/bin/python /opt/zato/2.0.7/bin/py -m zato.admin.main

Thanks a lot,
wangxi

=====================================================================

Two servers and each server has two gunicorn workers

Thanks,
wangxi

-----Original Message-----
From: Dariusz Suchojad [mailto:dsuch@zato.io]
Sent: 27 November 2015 18:26
To: Xi Wang xi.wang@genscript.com; zato-discuss@lists.zato.io
Subject: Re: Re: Re: Re: [Zato-discuss] About problem "(2006, 'MySQL server has gone away')"

On 27/11/15 03:05, Xi Wang wrote:

The following is the process info in mysql after I run the service which you supplied

Hi,

how many servers do you have and how many gunicorn workers does each have?

thanks,


Dariusz Suchojad

https://zato.io
ESB, SOA, REST, APIs and Cloud Integrations in Python



In server.conf, "pool_size" is set to 5; I think that is related to those 12 connections

[odb]
pool_size=5

Thanks,
wangxi

=====================================================================

On 27/11/15 03:05, Xi Wang wrote:

The following is the process info in mysql after I run the service which you supplied

Hi,

how many servers do you have and how many gunicorn workers does each have?

thanks,

On 27/11/15 11:36, Xi Wang wrote:

Two servers and each server has two gunicorn workers

Ok - but why are there 12 connections to MySQL in the output from SHOW FULL
PROCESSLIST?

On 27/11/15 11:46, Xi Wang wrote:

In server.conf, "pool_size" is set to 5; I think that is related to those 12 connections

[odb]
pool_size=5

Thanks, can you change it to 1, restart servers, make sure they are all
running and invoke that service again?

It’s very rarely needed to up this value - it’s an internal pool of
connections used when you reconfigure servers, i.e. when adding new
scheduler jobs.

Hi Dariusz,

I set each server's "pool_size" to 1, but I found there are still 12 processes. Then I ran the sql_auto_ping.py service and found that:

  1. There are four "gunicorn worker"s, and each of them produces two MySQL processes, and one of those processes' "Time" is reset to 0
  2. The following processes also exist on Linux (found with the "ps -efww" command); each of them produces one MySQL process, and none of their "Time" values changes to 0 when I run the "sql_auto_ping.py" service:
    zato 11838 11837 0 19:09 pts/0 00:00:02 /opt/zato/2.0.7/bin/python /opt/zato/2.0.7/bin/py /opt/zato/2.0.7/zato-server/src/zato/server/connection/amqp/outgoing.py
    zato 11836 11834 0 19:09 pts/0 00:00:02 /opt/zato/2.0.7/bin/python /opt/zato/2.0.7/bin/py /opt/zato/2.0.7/zato-server/src/zato/server/connection/amqp/channel.py
    zato 11835 11833 0 19:09 pts/0 00:00:02 /opt/zato/2.0.7/bin/python /opt/zato/2.0.7/bin/py /opt/zato/2.0.7/zato-server/src/zato/server/connection/amqp/channel.py
    zato 11787 1 0 19:07 pts/0 00:00:03 /opt/zato/2.0.7/bin/python /opt/zato/2.0.7/bin/py -m zato.admin.main

the four " gunicorn woker "'s process info in linux is:
zato 11777 11755 1 19:07 pts/0 00:00:26 gunicorn: worker [gunicorn]
zato 11778 11755 1 19:07 pts/0 00:00:23 gunicorn: worker [gunicorn]
zato 11789 11765 1 19:07 pts/0 00:00:24 gunicorn: worker [gunicorn]
zato 11790 11765 1 19:07 pts/0 00:00:22 gunicorn: worker [gunicorn]

Is that normal?

Thanks,
wangxi

=====================================================================