Memcached Horizontal Scaling

Memcached is a distributed in-memory object caching server designed to significantly speed up request serving. It achieves this by caching resource-intensive data instead of regenerating it from scratch on every request. For more details on the memory allocation approach used by Memcached, see the linked guide.

Note that each new Memcached server created as a result of horizontal scaling contains the default set of data and configurations, without any of your customizations.


Tip: You can also automate Memcached horizontal scaling based on the incoming load with the help of tunable triggers.

Adding several Memcached instances to your environment improves the application's failover capabilities. For example, you can designate each node to serve a particular portion of the application's data, or, even more beneficially, configure your application to store its cache in all Memcached servers simultaneously. In the latter case, each server holds a full copy of the cache, which eliminates the risk of application downtime or cached data loss if a particular Memcached server fails.
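The two approaches above can be sketched as follows. This is a minimal illustration, not a real Memcached client: the `FakeClient` class and node addresses are assumptions standing in for an actual client library (such as pymemcache) pointed at your caching nodes.

```python
import hashlib

class FakeClient:
    """Stand-in for a Memcached client; stores values in a local dict.
    A real deployment would use a client library connected to each node."""
    def __init__(self, address):
        self.address = address
        self._data = {}
    def set(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

# Three caching nodes (illustrative addresses).
nodes = [FakeClient(("10.0.0.%d" % i, 11211)) for i in range(1, 4)]

# Strategy 1: sharding -- each key lives on exactly one node, chosen by hash.
def shard_set(key, value):
    idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    nodes[idx].set(key, value)

def shard_get(key):
    idx = int(hashlib.md5(key.encode()).hexdigest(), 16) % len(nodes)
    return nodes[idx].get(key)

# Strategy 2: replication -- every node holds a full copy, so the cache
# survives the failure of any single node.
def replicated_set(key, value):
    for node in nodes:
        node.set(key, value)

def replicated_get(key, skip=()):
    # Read from the first reachable node; 'skip' models failed nodes.
    for node in nodes:
        if node not in skip:
            value = node.get(key)
            if value is not None:
                return value
    return None
```

The trade-off: sharding spreads the data and costs one write per key, while replication multiplies every write by the number of nodes but lets reads succeed as long as any one node is up.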

In addition, you can use your Memcached cluster as storage for user sessions, which is especially useful when working with multiple clustered application servers. With such a solution configured for your Java, PHP, Ruby, Python, Node.js, or .NET application, every handled session is backed up to Memcached. If the application server that originally processed a session fails, any other server in the cluster can fetch and reuse that session, so your customers will not notice anything.

With several caching nodes added to the environment, you can configure sessions to be copied to each of them, ensuring that sessions remain accessible as long as at least one Memcached server is running.
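A minimal sketch of this session-failover pattern, using plain dicts in place of real Memcached nodes; the `SessionStore` class and its method names are illustrative assumptions, not part of any actual session-clustering library.

```python
class SessionStore:
    """Backs every session up to all caching nodes and restores it from
    the first node that still answers."""
    def __init__(self, nodes):
        # Each 'node' here is a dict standing in for one Memcached server.
        self.nodes = nodes

    def save(self, session_id, data):
        # Replicate the session to every caching node.
        for node in self.nodes:
            node[session_id] = data

    def load(self, session_id, failed=frozenset()):
        # 'failed' models unreachable nodes; read from any surviving one.
        for i, node in enumerate(self.nodes):
            if i not in failed and session_id in node:
                return node[session_id]
        return None

# Two caching nodes, each holding a full copy of every session.
store = SessionStore([{}, {}])
store.save("sess-abc", {"user": "alice", "cart": [101, 102]})

# Even with node 0 down, any application server in the cluster can still
# restore the session from node 1, so the user notices nothing.
restored = store.load("sess-abc", failed={0})
```

A real setup would delegate this to the platform's session-clustering support (for example, a Memcached-backed session handler), but the read-from-any-replica logic is the same.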