Cache entries with multiple keys and same content

When using the cache API to store record (e.g. a booking), I have this situation in which a booking has a primary key called id and also a unique 6-character locator (a hash), and a booking can be looked up via either of them (i.e. there is a Get service and a Locate service in the booking module).

In other scenarios (users, rooms, guests, etc.), where I do not have this duplicity, I have always created cache keys based on the id, à la id-1. Should we use a UUID instead of an id, the same pattern would/could be followed.

So in this case I am using the get_by_prefix and get_by_suffix methods to find a key such as id-1-locator-a783er, as per this article on the blog, although I have not been able to find them in the Zato 3.0 Public API. There is also a get_by_regexp method.
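To make the pattern concrete, here is a minimal, dict-backed illustration of the compound-key approach (the helper functions are stand-ins for illustration, not the actual Zato cache API):

```python
# Illustration only: a plain dict standing in for the cache, showing how a
# single compound key serves both the Get and the Locate lookup paths.
cache = {"id-1-locator-a783er": {"guest": "A. Smith"}}

def get_by_prefix(prefix):
    """Return values whose key starts with the given prefix."""
    return [value for key, value in cache.items() if key.startswith(prefix)]

def get_by_suffix(suffix):
    """Return values whose key ends with the given suffix."""
    return [value for key, value in cache.items() if key.endswith(suffix)]

by_id = get_by_prefix("id-1")         # Get service: look up by primary key
by_locator = get_by_suffix("a783er")  # Locate service: look up by locator
```

Both calls find the same entry because both identifiers are embedded in the one key.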

And I was wondering if this is the recommended way to go or whether some sort of key aliases (e.g. id-1 having aliases locator-a783er and, maybe, uuid-53cf06187b6f45e7a79ad1c6ca748de7) would be a better, and possible, way.

Or maybe this situation is not really that common?


This is an intriguing scenario and it does not seem overly difficult to introduce a notion of aliases, e.g.

self.cache.set('key', 'value', alias=['alias1', 'alias2'])

Now ‘value’ will be returned by all of:

self.cache.get('key')
self.cache.get('alias1')
self.cache.get('alias2')
All of the aliases would be simple keys with a special prefix pointing to the original one. Some consideration would be needed around expiration mechanisms to make sure such aliases are also taken into account when the main key expires or the other way around.
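As a rough sketch of that idea (a hypothetical design, not the actual Zato implementation), the alias layer could store each alias as a regular key under a reserved prefix whose value is the main key:

```python
# Hypothetical alias layer over a plain key-value store. Aliases are regular
# keys under a reserved prefix; their value is the main key they point to.
ALIAS_PREFIX = "zato.alias."

class AliasCache:
    def __init__(self):
        self._data = {}

    def set(self, key, value, alias=None):
        self._data[key] = value
        for name in (alias or []):
            self._data[ALIAS_PREFIX + name] = key  # alias -> main key

    def get(self, key):
        # Resolve through the alias pointer if one exists
        target = self._data.get(ALIAS_PREFIX + key)
        if target is not None:
            key = target
        return self._data.get(key)

    def delete(self, key):
        # Deleting (or expiring) the main key must clean up its aliases too,
        # which is the expiration consideration mentioned above.
        self._data.pop(key, None)
        stale = [k for k, v in self._data.items()
                 if k.startswith(ALIAS_PREFIX) and v == key]
        for k in stale:
            self._data.pop(k, None)
```

With this in place, c.set('key', 'value', alias=['alias1', 'alias2']) makes c.get('alias1') return 'value', and deleting 'key' invalidates both aliases as well.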

What I do not know is how to make it work with set_by_* functions - I think aliases would not really make sense in this case? For instance, what would it signify to have a list of aliases in set_by_prefix?

As a side note - from your description, it looks like it would also be good to have a limit parameter in the get_by_* functions - if you know that you have at most N keys with a particular suffix or prefix, then there is no need to scan them all once N of them have already been found.

self.cache.get_by_prefix('my_prefix', limit=2)
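A sketch of how such a limit could short-circuit the scan (again an illustration over a plain dict, not the actual cache internals):

```python
# Hypothetical limit parameter: stop scanning as soon as enough matches
# have been collected, instead of walking every key in the cache.
def get_by_prefix(cache, prefix, limit=None):
    out = []
    for key, value in cache.items():
        if key.startswith(prefix):
            out.append(value)
            if limit is not None and len(out) >= limit:
                break  # early exit once the limit is reached
    return out
```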

Yes, that would be one (good) way to do it (through aliases). And yes, a key and its aliases would have to have a quick and simple mechanism to know that they are, actually, one thing.

Now I have not been able to find anything similar in Memcached so, in order to keep compatibility with it, this mechanism may need to be implemented as a layer inside Zato. How does that sound?

In the case I have described, I agree that it does not make sense. But I have only gone this far (multiple ids for the same model class), which is a very common pattern in the tourism industry.

Yes, I think that it always makes sense. Since you are the one customising the cache collection, you know exactly what the case is :+1:

Nonetheless, from your experience, what do you think about my current “solution”, with keys such as:

id-1-locator-a783er
Is it just fine or good enough and, thus, renders the whole alias thing not worth the hassle?


Apart from the most common cases of a simple .get or .set and similar, I do not mean to maintain API compatibility with Memcached, so it is fine that the APIs will be different or that the feature sets will be distinct.

Now that you have suggested it, I think the idea of aliases is a useful one and can be added.

In your particular case, a compound key that works both ways, as a suffix and a prefix for .get, will continue to work, but this is just a specialization of a more general case - e.g. with 5 aliases it would not really be convenient.

Out of curiosity, in this industry, why is it common to have both keys pointing to the same value?

Yes, of course. :+1:

Because one is your internal, private id of the record and the other is the external, public id that you let your customer know about and even instruct them to use when arriving at the desk. For example, a flight locator, which I am sure you use very often :wink:

Please open a ticket or tickets in GitHub for alias and limit parameters and this will be done. Thanks.

Done. Can be found here. Thanks!