I have a client who implemented the following caching strategy:
In this strategy, each server maintains its own local, in-memory cache, preventing repeated requests to the original data source.
The beauty of this strategy is that it is simple to understand and to implement. Of course, it has some drawbacks: for example, a request for the very same data is sent to the data source once per server, since each edge cache is filled independently. Still, in the long run, this caching strategy is better than no cache at all.
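To make the idea concrete, here is a minimal sketch of such a per-server cache. The names (`LocalCache`, the `loader` callback standing in for whatever fetches from the original data source) are hypothetical, not from the client's actual codebase:

```python
import time


class LocalCache:
    """A per-server, in-memory cache with a fixed time-to-live (TTL)."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self._entries = {}  # key -> (value, expires_at)

    def _expires_at(self):
        return time.monotonic() + self.ttl

    def get(self, key, loader):
        """Return the cached value, or call `loader` and cache the result."""
        entry = self._entries.get(key)
        if entry is not None:
            value, expires_at = entry
            if time.monotonic() < expires_at:
                return value  # cache hit: no trip to the data source
        # Cache miss or expired entry: fetch from the original data source.
        value = loader(key)
        self._entries[key] = (value, self._expires_at())
        return value
```

Usage is a one-liner, e.g. `cache.get("user:42", fetch_user_from_db)`, where `fetch_user_from_db` is whatever function hits the real data source.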
One interesting fact here is that, if all caches use the same expiration time, then all servers will probably ask for the same data at the same time, stressing the source. That is a simple version of the thundering herd problem.
A simple and creative solution is to introduce jitter: randomizing (a little) the expiration time for each cache entry. This solution was presented by YouTube (the video does not talk only about the thundering herd problem, and it is a little dated, but it is still relevant).
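Building on the sketch above, jitter amounts to adding a small random offset to each entry's expiration. The class name and the `jitter_seconds` parameter are illustrative assumptions:

```python
import random
import time


class JitteredCache(LocalCache):
    """Same cache, but each entry's TTL is randomized slightly so the
    servers' entries do not all expire (and refresh) at the same moment."""

    def __init__(self, ttl_seconds=60, jitter_seconds=10):
        super().__init__(ttl_seconds)
        self.jitter = jitter_seconds

    def _expires_at(self):
        # Spread expirations over a small random window so the servers
        # do not all hit the data source for the same key at once.
        return time.monotonic() + self.ttl + random.uniform(0, self.jitter)
```

The jitter window only needs to be a fraction of the TTL; even a few seconds of spread is enough to break the synchronized refresh.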
As another excellent resource on solving the thundering herd problem in a much more complex scenario, I recommend this short video from the Facebook Engineering team.