Configuring Peer Persistence (aka Sync Rep) with HPE Nimble Storage dHCI

dHCI sync rep diagram

You may have noticed that I have been posting a series of technical blog posts on HPE Nimble Storage dHCI installation and operation. Shortly after the second post went up, I had a question come in via DM:

“Is there support for Peer Persistence under Nimble dHCI? Or, when can we expect it?”

Background: Peer Persistence is HPE’s marketing name for Synchronous Replication with Application Transparent Failover. It was released as part of NimbleOS 5.1 in April 2019 and has seen good adoption around the world since then.

The answer? Why YES, Peer Persistence is supported under dHCI v1.0, and like everything else with dHCI, it’s super easy to configure!

If you’re interested in learning how Peer Persistence works, I have written an overview blog here, and one of our HPE partners wrote an in-depth blog here. The same methodologies apply to dHCI.

Here’s how you do it:

Step 1: Configure your first dHCI Nimble array as per my initial blog post, and add your servers into the cluster.

Step 2: Jump into the Nimble array UI and discover your second, downstream dHCI Nimble array (which at this stage will be unconfigured and broadcasting on the network for discovery).

A new, shiny, unconfigured Nimble AFA ready for installation

Step 3: Give your new array its own IP addressing, and specify that the new array live in a new pool of storage (as synchronous replication occurs between pools within a single Nimble group).

Configuring the second dHCI array to live inside the same Nimble dHCI group

From here, the second array will show up in the single Nimble UI as two separate pools. You will also see the two systems listed as “Pool Partners” for synchronous replication.

To mirror this, jump into the dHCI Management plane, and you will now see two arrays configured with a replication partnership between them.
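If you prefer to sanity-check the result outside the UI, the state above can also be queried programmatically. This is an illustrative sketch only: the port (5392), the `X-Auth-Token` header, and the `pools` / `replication_partners` resource names are my assumptions based on the NimbleOS REST API, not something shown in this post, and the group IP is hypothetical.

```python
# Hedged sketch: query a Nimble group's REST API to confirm that two
# pools and a replication partnership exist after adding the second array.
# Endpoint names, port, and auth header are assumptions (NimbleOS REST API).
import json
import urllib.request

NIMBLE_API_PORT = 5392  # assumed default NimbleOS REST API port


def nimble_endpoint(group_ip: str, resource: str) -> str:
    """Build a NimbleOS v1 REST URL for a resource collection."""
    return f"https://{group_ip}:{NIMBLE_API_PORT}/v1/{resource}"


def list_resource(group_ip: str, token: str, resource: str) -> list:
    """Fetch a resource collection; after Step 3 you would expect
    two entries under 'pools' and one under 'replication_partners'."""
    req = urllib.request.Request(
        nimble_endpoint(group_ip, resource),
        headers={"X-Auth-Token": token},  # token obtained via /v1/tokens
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]


# Example usage (hypothetical group IP; token elided):
# pools = list_resource("10.18.120.10", token, "pools")
# partners = list_resource("10.18.120.10", token, "replication_partners")
print(nimble_endpoint("10.18.120.10", "pools"))
```

Nothing here replaces the dHCI UI workflow; it's just a way to script a post-install check under the assumptions above.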

From here on, set up the Nimble Sync Rep Witness to give you transparent failover, and you’re ready to rock with Peer Persistence and automated failover. You’ll also want to add your servers on the downstream site using the “Add Servers” feature I showed in blog post two.

Edit: With our new NimbleOS 5.2 release, we now support a maximum of 32 compute nodes in dHCI. This extends your potential scale-out reach to 16 nodes per site for Peer Persistence 🙂

One thing to consider: Nimble dHCI 1.0 has a limit of 20 servers in the cluster. In a dHCI Peer Persistence configuration, this limit spans both sites. If you’d like to mirror resources across sites, that means a 1.0 maximum of 10 servers per site (for a total of 20 across both sites). Expect to see this number bump up considerably in future releases to allow more sync-rep’d hosts in your environment.
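The per-site arithmetic above is simple enough to capture in a couple of lines. The cluster-wide maxima (20 for dHCI 1.0, 32 with NimbleOS 5.2) come from this post; the helper function itself is just illustrative.

```python
# Per-site server limits when compute is mirrored evenly across two sites.
# Cluster maxima (20 for dHCI 1.0, 32 for NimbleOS 5.2) are from the post.

def per_site_limit(cluster_max: int, sites: int = 2) -> int:
    """Servers per site when the cluster limit is split evenly."""
    return cluster_max // sites


print(per_site_limit(20))  # dHCI 1.0: 10 servers per site
print(per_site_limit(32))  # NimbleOS 5.2: 16 servers per site
```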

So there we have it – synchronous replication and app-transparent failover on your dHCI cluster for ultimate business continuity, all managed from a single place in vCenter 🙂

