API and data caching

Hello,

I have started working on a new feature for Zato - caching.

Here is how it will work:

  • Data will be kept in RAM and/or in persistent storage

  • Persistent storage is SQL-based, but the API is designed to be extended with custom backends such as Redis, Memcached and others
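
A custom backend could then boil down to implementing a small interface - the class and method names below are illustrative assumptions, not a final design:

    class CacheBackend(object):
        """ Hypothetical base class for persistent storage backends.
        """
        def get(self, key):
            raise NotImplementedError()

        def set(self, key, value, ttl=None):
            raise NotImplementedError()

        def delete(self, key):
            raise NotImplementedError()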

  • Caches will be added for HTTP channels, HTTP outgoing connections and services (self.cache)

  • self.cache caches will be named or default, i.e. self.cache will be equal to self.cache['default'], but more caches can be defined, e.g. self.cache['customer'], self.cache['invoices'] etc. Each cache can have its own configuration and is managed independently
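
For instance, in a service it could look like the snippet below - a sketch only; .set is mentioned in this post but the rest of the usage is an assumption:

    from zato.server.service import Service

    class GetCustomer(Service):
        def handle(self):

            # The default cache - self.cache and self.cache['default'] are the same object
            self.cache.set('foo', 'bar')

            # A named cache, configured and managed independently of the default one
            customer_cache = self.cache['customer']
            customer_cache.set('cust-123', 'Jane Doe')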

  • As with most things in Zato, caches will be subject to hot-deployment and reconfiguration without server restarts

  • Cache entries will be subject to LRU and TTL rules: least recently used entries will be evicted when a cache is full and, optionally, a time-to-live may be set for entries to remove them automatically
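
To illustrate the semantics - this is not Zato's actual implementation - a minimal LRU cache with per-entry TTL can be sketched in pure Python like this:

    import time
    from collections import OrderedDict

    class LRUTTLCache(object):

        def __init__(self, max_size=1000):
            self.max_size = max_size
            self.entries = OrderedDict()  # Maps key -> (value, expires_at or None)

        def set(self, key, value, ttl=None):
            expires_at = time.time() + ttl if ttl else None
            if key in self.entries:
                del self.entries[key]
            elif len(self.entries) >= self.max_size:
                self.entries.popitem(last=False)  # Cache full - evict the least recently used entry
            self.entries[key] = (value, expires_at)

        def get(self, key, default=None):
            if key not in self.entries:
                return default  # A cache miss
            value, expires_at = self.entries.pop(key)
            if expires_at and time.time() >= expires_at:
                return default  # TTL reached - the entry stays removed
            self.entries[key] = (value, expires_at)  # Re-insert to mark it most recently used
            return value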

  • Caches will reject entries bigger than N megabytes/kilobytes so as not to overload servers

  • Replication in RAM-based caches will be synchronous or asynchronous, i.e. self.cache.set('foo', 'bar') will be able to block until all servers confirm that they have stored the entry
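
For example - note that the parameter name below is an assumption, the actual API is not settled yet:

    # Synchronous - blocks until every server confirms it has stored the entry
    self.cache.set('foo', 'bar')

    # Asynchronous - returns at once and lets replication complete in the background
    self.cache.set('foo', 'bar', sync=False)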

  • It will also be possible to turn off replication in RAM-based caches altogether - in this case their contents across servers will be approximate

  • Statistics for each cache will show the hits/misses ratio, cache utilization (how much free space remains) and cache effectiveness (a recommendation on whether a cache is big enough or could be made bigger or smaller)

  • User-defined functions will be able to indicate which parts of requests or responses are to be taken into account when deciding whether to cache data. For instance, consider this request:

    {
      "action": "get-customer-data",
      "customer_id": 123,
      "request_id": "abcdef"
    }

We would now like to cache the response of get-customer-data for 5 minutes. However, we cannot use the whole request as the cache key because request_id is unique for each request and will never repeat - we would be caching data under keys that never match again, which defeats the purpose of using a cache in the first place.

That's why a user function will optionally be invoked to extract only the relevant pieces - here it could return a string of 'action:get-customer-data, customer_id:123'. Hence, subsequent requests for the same customer and action, but with different values of request_id, would still be served from the cache.
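
In Python, such a function could look like the one below - a sketch only, the hook's exact signature is yet to be settled:

    def extract_cache_key(request):
        # Skip request_id - it never repeats, so it must not be part of the key
        return 'action:{}, customer_id:{}'.format(request['action'], request['customer_id'])

With this in place, the request above and an otherwise identical one with a request_id of, say, 'xyz123' both map to the same key, so the second one is answered straight from the cache.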

  • External applications will have REST/SOAP and AMQP endpoints to notify Zato about changes in their data sources so that Zato can update its caches - in the scenario above, if the customer's data changes before the 5-minute TTL is reached, an external CRM will be able to let Zato know about it and force an update of the cache entry
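
For instance, a CRM could send a notification along these lines to a REST endpoint - the payload below is purely hypothetical, meant only to show the idea:

    {
      "cache": "customer",
      "key": "action:get-customer-data, customer_id:123",
      "operation": "delete"
    }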

  • A GUI in web-admin will be added for everything above, including cache browsing and means to update individual entries in caches

  • What we won't have for now are Bloom filters or Golomb-coded sets for checking whether any outliers skew cache usage - ultimately, we should be able to decide that an entry is stored in a cache only if it has been requested at least N times; otherwise we may be saving data that is used once only yet still occupies cache space. Here is a PDF paper on how Akamai use this technique ('one-hit wonders'). It's not super-difficult, but I'd like to have the core built first.
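
For illustration, the admission rule itself can be sketched with an exact counter - Akamai use a Bloom filter instead so that memory use stays bounded, but the decision is the same:

    from collections import Counter

    request_counts = Counter()

    def should_cache(key, min_hits=2):
        # Admit an entry into the cache only once it has been requested min_hits times,
        # filtering out one-hit wonders that would take up space without ever being read back
        request_counts[key] += 1
        return request_counts[key] >= min_hits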

  • I expect most of it to be implemented in the upcoming weeks. If you are building Zato from source, the work is being done in the dsuch-f-gh681-caches branch

  • Implementation-wise, a lot of it will be written in hand-tuned Cython, which means it will be fast

That's the general idea - I will be happy to hear how the feature can fulfil all your caching needs and whether there is anything else in particular that you'd like to see added.