How to Distribute Java Application on Multiple JVMs
Overview
Distributing your Java application across multiple JVMs allows it to process more user requests simply by adding servers. This article explains how to do it.
Introduction
As a business grows, its applications must process an increasing number of user requests. The amount of load that a single-JVM application can handle has a hard limit, set by the capacity of the server the application runs on. Once this limit is reached, the application cannot process more requests.
Distributing a Java application across multiple JVMs allows it to handle the increasing load by using the combined power of multiple servers.
Benefits of Distributing Java Applications
Distributing a Java application on multiple JVMs has three main benefits: processing more user requests just by adding servers, increasing application availability, and reducing request latency.
A Java application that runs on multiple servers, or a cluster, can process more requests because each server handles its own share of the load. The combined processing power of the cluster is much higher than the capacity of a single server, and it increases as more servers are added to the cluster.
Distributed Java applications provide better availability by running multiple application instances. Even if a single server fails, applications running on other servers continue to process user requests.
Java applications running in a cluster offer reduced latency because each JVM handles less load than a single large instance would, and because smaller heaps lead to shorter garbage collection pauses.
Architecture for Distributed Java Applications
To distribute a Java application on multiple JVMs, three things are necessary:
1. A single source of truth such as a database
2. A clustered cache such as Cacheonix
3. A load balancer
Database
A single source of truth such as a database ensures that application data is stored reliably even if the application servers fail. In addition to durable storage, the database provides applications with shared access to the persistent application state.
Though the database plays the important role of reliable transactional storage, this role also makes it the main bottleneck in a distributed application. As the number of application servers grows, they spend more and more time waiting for responses from the database.
Clustered Cache
A clustered cache allows the application to scale horizontally by removing the database bottleneck. It significantly reduces the number of requests that applications have to make to the database by keeping frequently accessed data in memory.
The clustered cache also ensures that all JVMs have a consistent view of the shared data. Without consistency, JVMs end up operating on conflicting data, which is usually a serious problem. When data is updated on one JVM, the clustered cache enforces consistency by reliably distributing the update to all JVMs in the cluster.
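The read path described above is commonly called the cache-aside pattern: check the cache first and fall back to the database only on a miss. The sketch below illustrates the idea; it is not Cacheonix's actual API. A plain ConcurrentHashMap stands in for the clustered cache, and the loader function stands in for a database query.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Minimal single-JVM sketch of the cache-aside pattern. A real clustered
// cache would replace the ConcurrentHashMap with a structure that
// replicates updates to every JVM in the cluster.
public class CacheAside {
    private final Map<String, String> cache = new ConcurrentHashMap<>();

    // The loader stands in for a database query against the single
    // source of truth; it only runs when the key is not cached.
    public String get(String key, Function<String, String> loader) {
        return cache.computeIfAbsent(key, loader);
    }

    // On update, a production system would write to the database first,
    // then refresh the cache; a clustered cache would also propagate
    // this put to the other JVMs.
    public void put(String key, String value) {
        cache.put(key, value);
    }
}
```

With this pattern in place, repeated reads of the same key never touch the database, which is exactly how the cache relieves the bottleneck described above.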
Load Balancer
A load balancer in front of the cluster makes sure that all servers receive a fair share of user requests. A hardware load balancer is usually the best option; companies such as F5 and Cisco are known for good hardware load balancers.
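The "fair share" policy most load balancers apply by default is round-robin: requests are handed to each server in turn. Real hardware balancers implement this in the network path along with health checks and session affinity; the short sketch below only illustrates the selection logic.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin selection: each call to pick() returns the
// next server in the list, wrapping around, so every server receives
// an equal share of requests over time.
public class RoundRobinBalancer {
    private final List<String> servers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = servers;
    }

    public String pick() {
        // floorMod keeps the index valid even after the counter wraps
        // past Integer.MAX_VALUE under heavy traffic.
        int i = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(i);
    }
}
```

For example, a balancer over three servers would route four consecutive requests to server 1, 2, 3 and then server 1 again.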
Conclusion
An architecture combining a load balancer, a clustered cache, and a database makes it possible to build distributed, scalable, and fast Java applications.