Cluster Synchronization

The CloudHSM service is built around clusters of synchronized Hardware Security Modules (HSMs) for high availability. These HSMs work together to safeguard your sensitive data and keep operations running without interruption. Depending on the chosen CloudHSM service package, such as the ECO Global Cluster, your key data can be replicated up to four times across different geographical locations. This adds a layer of protection against hardware failures or outages, providing exceptional redundancy and peace of mind.

One of the key advantages of CloudHSM is its automated key synchronization. Subscribers don't need to manually update keys or manage complex configurations. The service handles synchronization behind the scenes, ensuring that subscribers' keys are always consistent and readily available across all synchronized HSMs. This not only simplifies the subscriber's workload but also minimizes the risk of human error, further enhancing security.

Adding new encryption keys within a partition requires the master HSM of the cluster to be operational. This master HSM plays a crucial role in coordinating key distribution and maintaining data integrity. While temporary unavailability of the master HSM won't impact existing keys or ongoing cryptographic operations, it will prevent the generation of new keys until the master HSM comes back online.
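This behavior can be sketched in code: existing keys remain usable while the master HSM is down, but key creation fails until it returns. The sketch below is a minimal illustrative model; `HsmClient`, `create_key`, and `MasterUnavailableError` are hypothetical names, not the actual CloudHSM API.

```python
"""Toy model of master-HSM-dependent key creation (hypothetical API)."""
import time


class MasterUnavailableError(Exception):
    """Raised when the master HSM cannot be reached for key creation."""


class HsmClient:
    def __init__(self, master_online=False):
        self.master_online = master_online
        self.keys = {}

    def sign(self, key_label, data):
        # Existing keys keep working even while the master HSM is down.
        return b"signature-over-" + data

    def create_key(self, label):
        # New keys can only be created while the master HSM is reachable.
        if not self.master_online:
            raise MasterUnavailableError(label)
        self.keys[label] = object()
        return label


def create_key_with_retry(client, label, attempts=3, delay=0.01):
    """Retry key creation until the master HSM is back online."""
    last_error = None
    for _ in range(attempts):
        try:
            return client.create_key(label)
        except MasterUnavailableError as err:
            last_error = err
            time.sleep(delay)  # back off, then try again
    raise last_error
```

An application that may create keys at arbitrary times would typically wrap creation in retry logic like this, while signing and decryption paths need no such guard.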

One of the advantages of this architecture is that it enables global enterprises to deploy their applications across different regions, as illustrated in the scenario below. Each application instance can interact with its preferred CloudHSM cluster instance, and keys created by one instance are automatically replicated to the other sites. This makes them available for use in any region and significantly reduces latency to the CloudHSM service.
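The replication model can be illustrated with a small sketch: a key created through one region's cluster instance becomes usable from every other region. The `GlobalCluster` and `Region` classes below are hypothetical stand-ins for the service's internal behavior, not a real API.

```python
"""Toy model of cross-region key replication in a global cluster."""


class Region:
    """One cluster site; holds its local copy of the replicated keys."""

    def __init__(self, name):
        self.name = name
        self.keys = {}


class GlobalCluster:
    def __init__(self, region_names):
        self.regions = {name: Region(name) for name in region_names}

    def create_key(self, region_name, label):
        # A key created via any region is replicated to all member regions.
        for region in self.regions.values():
            region.keys[label] = f"material-for-{label}"
        return label

    def use_key(self, region_name, label):
        # Each app instance reads from its nearest (lowest-latency) site.
        return self.regions[region_name].keys[label]
```

For example, an application in one region can create a signing key and a peer instance in another region can use it immediately, without any manual synchronization step.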

ECO Global Cluster deployment scenario

The ECO Global Cluster in this scenario not only minimizes latency but also enhances redundancy. Data is inherently replicated across more than three data centers, protecting against data loss and making the impact of local network issues highly unlikely. Furthermore, each application instance can be configured to fail over to secondary sites, as the Primus API provider configuration allows either random failover or failover based on priorities.
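The two failover policies can be sketched as client-side endpoint selection: priority-based failover tries sites in a fixed preference order, while random failover shuffles them. This is an illustrative sketch only; the actual configuration keys and behavior are defined by the Primus API provider documentation.

```python
"""Toy sketch of priority-based vs. random failover across cluster sites."""
import random


def order_endpoints(endpoints, policy="priority", rng=random):
    """Return the sites in the order they should be tried.

    endpoints: list of (hostname, priority) pairs, lower priority = preferred.
    policy: "priority" tries sites in preference order; "random" shuffles them.
    """
    if policy == "random":
        shuffled = list(endpoints)
        rng.shuffle(shuffled)
        return shuffled
    return sorted(endpoints, key=lambda e: e[1])


def connect_with_failover(endpoints, connect, policy="priority"):
    """Try each site in turn until one connection attempt succeeds."""
    last_error = None
    for host, _priority in order_endpoints(endpoints, policy):
        try:
            return connect(host)
        except ConnectionError as err:
            last_error = err  # fall through to the next site
    raise last_error
```

With a priority policy, the application normally talks to its nearest site and only moves to a secondary site when the primary is unreachable; a random policy instead spreads connections across all sites.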