I am investigating how to test an application, possibly using
zato-apitest. If so, the setup would look like this:
[zato-apitest] ======> [zato app under test] ======> [stub backend services]
I have three points of concern or areas where I’m looking for best practice.
(1) How can I stub out the services to which my Zato application makes
outbound calls?
I want to be able to check that the app invokes the backend service with
the appropriate data, return a canned reply suitable for that request,
and then check that the Zato app processes the reply correctly.
Options I can see:
a. I could build new Zato services to simulate the remote systems, and
reconfigure the outgoing connections to point to these stub services on
a given port/URL.
When they are called, I would have to log the request somewhere (e.g. in
a database table), and I would return a canned reply from a stack
(possibly also from a database table).
It looks like the zato-apitest process would have to prepare the canned
replies prior to each test cycle, and at the end of the cycle check that
all the requests were received as expected, rather than validating each
request as it arrives.
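For illustration, a rough sketch of what one such stub might look like.
I am assuming Zato 2.x, where self.kvdb.conn exposes a Redis client, and
I am using Redis lists instead of a database table purely for brevity;
the key names are invented:

    from zato.server.service import Service

    class StubCRM(Service):
        """Stands in for the real CRM system while under test."""
        def handle(self):
            # Record the incoming request so the test cycle can inspect it later.
            self.kvdb.conn.rpush('test:crm:requests', self.request.raw_request)
            # Return the next canned reply prepared before the test cycle started.
            reply = self.kvdb.conn.lpop('test:crm:replies')
            self.response.payload = reply or '{"error": "no canned reply prepared"}'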
b. Add an HTTP listener to the zato-apitest process, running in its own
thread, and point the Zato outgoing connections at it.
This HTTP listener could communicate with the main thread via queues, so
the main thread can wait until an incoming HTTP message arrives,
validate it, and say what response to return.
This means you might be able to write a test like this:
Given ...
When the URL is invoked
Then an external request is made to another service
# Checking what request was sent
And the external request contains "foo"
# Defining how the remote service responded
And the external reply status is 200
And the external reply body is "<bar/>"
And the reply is sent back to Zato
# From here we're talking about the reply which Zato sends back to
the apitest client
And JSON Pointer "/xxx" is "bar"
And status is 200
But this looks like a lot of work to implement. Perhaps it could be done
using wsgiref, but ultimately it means that zato-apitest grows a bunch
of functionality for receiving inbound HTTP requests, which doesn't feel
right.
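That said, a minimal version of the listener can be built with nothing
but the standard library. A sketch only (Python 3; the port and the
queue protocol are invented for the example), but it shows the
thread-plus-queues shape described above:

    import threading
    from queue import Queue
    from wsgiref.simple_server import make_server

    requests_q = Queue()  # listener -> test code: what arrived
    replies_q = Queue()   # test code -> listener: what to send back

    def stub_app(environ, start_response):
        length = int(environ.get('CONTENT_LENGTH') or 0)
        body = environ['wsgi.input'].read(length)
        requests_q.put((environ['PATH_INFO'], body))
        # Block until a step definition decides what the "remote system" says.
        status, reply_body = replies_q.get()
        start_response(status, [('Content-Type', 'application/xml')])
        return [reply_body]

    server = make_server('127.0.0.1', 8088, stub_app)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # In a step definition:
    #   path, body = requests_q.get(timeout=5)  # "the external request contains ..."
    #   replies_q.put(('200 OK', b'<bar/>'))    # "the external reply body/status ..."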
c. Perhaps the stub services could be written in the real Zato, but
coordinate with the zato-apitest process via some sort of IPC (a Redis
queue, perhaps?). The stub would tell zato-apitest that a request had
been received, and then wait to be told what response to return.
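On the zato-apitest side this could reduce to a pair of blocking list
operations. Another sketch, assuming the stub service RPUSHes each
request onto one Redis list and BLPOPs its reply from another (key names
and timeout invented):

    import redis

    r = redis.StrictRedis(host='localhost', port=6379, db=0)

    # Wait for the stub service to report that a request arrived.
    # blpop returns None on timeout, so a missing request fails the test.
    item = r.blpop('test:stub:requests', timeout=5)
    assert item is not None, 'stub service was never called'
    _, raw_request = item
    assert b'foo' in raw_request

    # Tell the stub (blocked on this list inside Zato) what to reply with.
    r.rpush('test:stub:replies', '<bar/>')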
d. I could refactor the code so that all outbound requests are
implemented in their own services, and then install stub versions of
those services.
Before

    class GetClientDetails(Service):
        def handle(self):
            crm = self.outgoing.plain_http.get('CRM')
            payments = self.outgoing.plain_http.get('Payments')
            response = crm.conn.send(self.cid, self.request.input)
            cust = response.data
            response = payments.conn.send(self.cid, self.request.input)
            last_payment = response.data
            ...
After

    class GetClientDetails(Service):
        def handle(self):
            # Indirect through services
            response = self.invoke('foo.get-crm-payments')
            cust = response.data
            response = self.invoke('foo.get-last-payment')
            last_payment = response.data
            ...
Then create separate services: in live, these services would use
self.outgoing.plain_http.get(…).conn.send(…), and under test they would
be fake (stub) services, for example:
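Sketching both versions (names follow the code above; how the stub ends
up deployed under the same service name is hand-waved here):

    from zato.server.service import Service

    class GetCRMPayments(Service):
        """Live version of 'foo.get-crm-payments': makes the real outbound call."""
        def handle(self):
            crm = self.outgoing.plain_http.get('CRM')
            self.response.payload = crm.conn.send(self.cid, self.request.payload).data

    class StubGetCRMPayments(Service):
        """Test version, installed in place of the live one: returns canned data."""
        def handle(self):
            self.response.payload = {'cust': {'id': 1, 'name': 'Test Customer'}}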
However, I don't think this helps much: although it would get rid of the
outbound HTTP calls and avoid the need for an HTTP listener, the stub
services would still run in a different process from zato-apitest and
therefore be unable to interact with it.
(2) Creating a reproducible test environment
It looks like this could be scripted using some combination of
- a script to create an empty database
- alembic
- zato-qs-*.sh
but it would be quite a multi-step setup.
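Multi-step, but it could collapse into a single command. A possible
wrapper, where the database commands (PostgreSQL shown) and the
quickstart path are assumptions about the environment:

    import subprocess

    def rebuild_test_environment():
        # 1. Recreate an empty database.
        subprocess.run(['dropdb', '--if-exists', 'zato_test'], check=True)
        subprocess.run(['createdb', 'zato_test'], check=True)
        # 2. Apply the application's schema migrations.
        subprocess.run(['alembic', 'upgrade', 'head'], check=True)
        # 3. Restart the quickstart cluster so it picks up the fresh database.
        subprocess.run(['/opt/zato/env/qs-1/zato-qs-restart.sh'], check=True)

    if __name__ == '__main__':
        rebuild_test_environment()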
(3) Preparing fixtures and checking database state after tests. This
should be fairly straightforward, as zato-apitest can use SQLAlchemy to
open its own independent connection to the database. This assumes that
the tests run complete transactions which commit - which in turn rules
out "fast" tests that use database rollback to reset state.
Regards,
Brian.