(Migrated) Testing strategy

(This message has been automatically imported from the retired mailing list)

I am investigating how to test an application, possibly using
zato-apitest. If so, it would look like this:

 [zato-apitest] =======> [zato app under test] ======> [stub backend services]

I have three points of concern or areas where I’m looking for best practice.

(1) How can I stub out the services which my zato application makes
outbound calls to?

I want to be able to check that the app invokes the backend service with
the appropriate data, have the stub return a canned reply suitable for
that request, and then check that the zato app processes the reply correctly.

Options I can see:

a. I could build new zato services to simulate the remote systems, and
reconfigure the outbound channels to point to these stub services on a
given port/URL.

When they are called, I would have to log the request somewhere (e.g. in
a database table), and I would return a canned reply from a stack
(possibly also from a database table).

It looks like the zato-apitest application would have to prepare the
canned replies prior to each test cycle, and at the end of the test
cycle check that all the requests were received as expected, rather than
testing as each reply is received.
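
A very rough sketch of what one of those stub services might look like -
the table names, the connection URL and the "pop the next canned reply"
convention below are all placeholders I have made up, not anything zato or
zato-apitest provides:

    from sqlalchemy import create_engine, text
    from zato.server.service import Service

    # Hypothetical test-only database that zato-apitest also connects to
    engine = create_engine('postgresql://test:test@localhost/stubdb')

    class StubCRM(Service):
        def handle(self):
            with engine.begin() as conn:
                # Record what we were sent, so zato-apitest can assert on it
                # at the end of the test cycle
                conn.execute(
                    text('INSERT INTO stub_requests (service, body) VALUES (:s, :b)'),
                    {'s': 'stub.crm', 'b': self.request.raw_request})
                # Pop the next canned reply prepared before the cycle started
                row = conn.execute(
                    text('SELECT id, body FROM canned_replies '
                         'WHERE service = :s ORDER BY id LIMIT 1'),
                    {'s': 'stub.crm'}).fetchone()
                if row:
                    conn.execute(text('DELETE FROM canned_replies WHERE id = :id'),
                                 {'id': row[0]})
            self.response.payload = row[1] if row else '<no-canned-reply/>'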

b. Add an HTTP listener to the zato-apitest process, running in its own
thread, and point the zato outbound channels to it.

This HTTP listener could communicate with the main thread via queues, so
the main thread can wait until an incoming HTTP message arrives,
validate it, and say what response to return.

This means you might be able to write a test like this:

 Given ...
 When the URL is invoked
 Then an external request is made to another service
 # Checking what request was sent
 And the external request contains "foo"
 # Defining how the remote service responded
 And the external reply status is 200
 And the external reply body is "<bar/>"
 And the reply is sent back to zato
 # From here we're talking about the reply which zato sends back to the apitest client
 And JSON Pointer "/xxx" is "bar"
 And status is 200

But this looks like a bunch of work to implement. Perhaps it can be done
using wsgiref, but ultimately it means that zato-apitest grows a bunch
of functionality for receiving inbound HTTP requests, which doesn’t feel
right.
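
For what it's worth, the wsgiref core of it would be fairly small;
everything below (port, queue protocol, the one-request-at-a-time
assumption) is just a sketch of the idea, not something zato-apitest has
today:

    import threading
    from queue import Queue
    from wsgiref.simple_server import make_server

    requests_q = Queue()   # listener -> test thread: (path, body)
    responses_q = Queue()  # test thread -> listener: (status, body)

    def stub_app(environ, start_response):
        length = int(environ.get('CONTENT_LENGTH') or 0)
        requests_q.put((environ['PATH_INFO'], environ['wsgi.input'].read(length)))
        status, body = responses_q.get()  # wait for the test to say what to return
        start_response(status, [('Content-Type', 'application/xml')])
        return [body]

    server = make_server('127.0.0.1', 8088, stub_app)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # In the test itself the canned reply has to be queued *before* the
    # channel under test is invoked (that call blocks until zato answers,
    # so the listener cannot wait on the main thread at that point), e.g.:
    #
    #   responses_q.put(('200 OK', b'<bar/>'))
    #   ... invoke the zato channel under test ...
    #   path, body = requests_q.get(timeout=5)
    #   assert b'foo' in body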

c. Perhaps there's some way that the stub services could be written in the
real zato, but coordinate with the zato-apitest process via some sort of
IPC (a Redis queue, perhaps?). The stub would tell zato-apitest that a
request had been received, and then wait to be told what response to return.
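
Since zato already requires Redis, the stub side of this could be quite
small; the key names and the "queue the reply before invoking the channel"
convention are only guesses:

    from zato.server.service import Service

    class StubCRM(Service):
        def handle(self):
            # Tell zato-apitest what the app actually sent us
            self.kvdb.conn.rpush('stub.crm.requests', self.request.raw_request)
            # Wait (up to 5s) to be told what to reply with; in practice the
            # reply would be queued by zato-apitest before invoking the channel
            reply = self.kvdb.conn.blpop('stub.crm.replies', timeout=5)
            self.response.payload = reply[1] if reply else '<no-reply/>'

    # zato-apitest side, with plain redis-py:
    #
    #   r.rpush('stub.crm.replies', '<bar/>')
    #   ... invoke the channel under test ...
    #   assert b'foo' in r.lpop('stub.crm.requests')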

d. I could refactor the code so that all outbound requests are
implemented in their own services, and then install stub versions of
those services.

Before

    from zato.server.service import Service

    class GetClientDetails(Service):
        def handle(self):
            crm = self.outgoing.plain_http.get('CRM')
            payments = self.outgoing.plain_http.get('Payments')

            response = crm.conn.send(self.cid, self.request.input)
            cust = response.data
            response = payments.conn.send(self.cid, self.request.input)
            last_payment = response.data
            ...

After

    from zato.server.service import Service

    class GetClientDetails(Service):
        def handle(self):
            # Indirect through services instead of calling the outgoing
            # connections directly
            response = self.invoke('foo.get-crm-payments')
            cust = response.data
            response = self.invoke('foo.get-last-payment')
            last_payment = response.data
            ...

Then create separate services: for live these services would use
self.outgoing.plain_http.get(…).send(…), and under test these would
be fake (stub) services.
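
To make that concrete, the live and stub variants might look roughly like
this (two modules sharing one service name; the canned payload and the file
layout are invented):

    from zato.server.service import Service

    # live/get_crm_payments.py - deployed in production; the only place
    # that talks to the CRM system over HTTP
    class GetCrmPayments(Service):
        def handle(self):
            crm = self.outgoing.plain_http.get('CRM')
            self.response.payload = crm.conn.send(self.cid, self.request.raw_request).data

    # stub/get_crm_payments.py - hot-deployed instead of the live module in
    # the test environment, keeping the same service name so callers of
    # self.invoke() need no changes
    class GetCrmPayments(Service):
        def handle(self):
            self.response.payload = '{"client": "foo", "last_payment": "2015-04-01"}'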

However I don’t think this helps much, since although this would get rid
of the outbound HTTP calls and avoid the need for a HTTP listener, the
stub services would still be running in a different process to
zato-apitest and therefore unable to interact.

(2) Creating a reproducible test environment

It looks like this could be scripted using some combination of

  • service-sources
  • a script to create an empty database
  • alembic
  • zato-qs-*.sh

but it would be quite a multi-step setup.
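
As a sketch, those steps could be wrapped in one small driver script; the
paths, database names and the choice of Postgres below are nothing more
than placeholders:

    import subprocess

    ENV = '/opt/zato/qs-1'  # hypothetical quickstart environment directory

    def rebuild_test_environment():
        subprocess.check_call([ENV + '/zato-qs-stop.sh'])
        subprocess.check_call(['dropdb', '--if-exists', 'app_test'])  # empty application DB
        subprocess.check_call(['createdb', 'app_test'])
        subprocess.check_call(['alembic', 'upgrade', 'head'])         # application schema
        subprocess.check_call([ENV + '/zato-qs-start.sh'])
        # finally hot-deploy the services from service-sources, e.g. by
        # copying them into the server's pickup directory

    if __name__ == '__main__':
        rebuild_test_environment()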

(3) Preparing fixtures and checking database state after tests

This should be fairly straightforward as zato-apitest can use SQLAlchemy to
open its own independent connection to the database. This assumes that the
tests do complete transactions which commit - this in turn means no "fast"
tests which use database rollback to reset state.

Regards,

Brian.

Brian, this may be a bit out of “left-field” but our team is using
http://learnyousomeerlang.com/common-test-for-uncommon-tests for
testing SOA (which includes Zato). Could be a nice option for some
complex scenarios.


On 14/04/2015 14:20, Brian Candler wrote:

However I don’t think this helps much, since although this would get
rid of the outbound HTTP calls and avoid the need for a HTTP listener,
the stub services would still be running in a different process to
zato-apitest and therefore unable to interact.

e. It would be great if I could use something like the Mock library to
stub out particular outbound requests with a temporary python function
which checks the parameters and returns a canned reply:

 def stub_crm(body):
     assert "foo" in body
     return "<bar/>"
 monkeypatch.setitem(self.outgoing.plain_http, 'crm', stub_crm)
 ... now invoke the service ...

Aside: monkeypatch is a feature of the amazing py.test framework:
http://pytest.org/latest/monkeypatch.html

However, again the problem I have is that the test is being orchestrated
from a different process than where the service under test is running
(indeed, the service could be running in multiple processes behind a
load balancer, although testing could require only a single process).
Furthermore, self.outgoing doesn’t exist until the request has actually
been triggered.

So the test running in process ‘A’ doesn’t have the ability to mock out
a function in process ‘B’.

I could replace the service in ‘B’ with a stub service, but then the
assert would have to communicate assertion failures back to ‘A’.

Regards,

Brian.

On 14/04/15 15:20, Brian Candler wrote:

But this looks like a bunch of work to implement. Perhaps it can be done
using wsgiref, but ultimately it means that zato-apitest grows a bunch
of functionality for receiving inbound HTTP requests, which doesn’t feel
right.

Actually, this would not be that bad an idea methinks, but then again,
it would work with HTTP only whereas mocking could be employed with
other protocols as well (it could be difficult to start an embedded AMQP
or any other arbitrary server on demand).

However, what could work is extending the platform with a means to
stub out calls in certain well-defined places, such as the moment the
requests library gets called with an outgoing HTTP request.

That could be paired up with zato-apitest so that the stubs were used only
when needed, i.e. while a given service was under test.

Something like:

Given Zato credentials user:password
Given Zato server 'http://localhost:17010'

Stub out HTTP 'My Connection Name' with '/path/in/fs/foo.py'
Stub out AMQP 'My Connection Name' with '/path/in/fs/foo.py'

(Call an HTTP channel here)

The path would need to point to a Python module whose 'def mock’
function would get called instead of what Zato would have called normally.
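
As a guess at what such a module could contain - the 'def mock' signature
has not been defined anywhere yet, so both the argument and the return
value below are assumptions:

    # /path/in/fs/foo.py
    def mock(request):
        # 'request' stands in for whatever Zato would otherwise have handed
        # to the underlying HTTP (or AMQP) client
        assert 'foo' in request
        # Return what the stubbed-out connection should pretend the remote
        # side sent back
        return '<bar/>'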

That would mean everything is contained in zato-apitest, which is a
general-purpose tool used by people who don't use Zato itself, but I see
no problem with making it contain Zato-specific features as well.