I’ve managed to carve out some time in the lab over the past few days, and one of my main requirements here was to test & configure VMware Virtual Volume (vVol) based replication within VMware vSphere using HPE Nimble Storage.
If you’re new to VMware vVols and want a quick overview, take a look at my lightboard video here:
OK, back on topic. Firstly, you’re going to need the following:
- Two Nimble arrays – doesn’t matter the generation or drive type – All Flash or Hybrid is perfectly fine.
- Two ESXi clusters – one for Production and one for DR.
- Two vCenters – again, one for Production and one for DR.
Depending on your version of vSphere, you'll want to ensure the NimbleOS version running on the arrays is supported for features such as the vCenter Plugin, Nimble Connection Manager, etc. You can check this in the InfoSight Validated Configuration Matrix (VCM).
If you have all of the above, then it’s time to start provisioning!
Here are the steps we’re going to take as part of the workflow:
- Create replication partnerships between the Nimble arrays within the GUI or CLI.
- Connect the VASA provider & vCenter plugin from each array to the vCenter server.
- Provision vVol datastores to each ESXi cluster – one for prod, one for DR.
- Create vSphere Storage Policies on each site that mirror each other.
- Provision VMs.
- Relax 🙂
STEP 1 – Creating array based replication partnerships
The first thing we'll want to do is make sure our arrays can replicate to each other. This is simple to do, and takes a few minutes for both systems.
Log in to your primary array, head to Manage->Data Protection->Replication Partners, and click the (+) button.
Fill out the details for creating a Replication Partner, including your destination Partner Name, IP Address (or Hostname), and a shared secret password between the two systems.
You can also configure replication traffic for this partnership over a specific network if you wish – Management, or a data network / VLAN of your choice. You'll notice that I don't get that option, as I was lazy and used a single subnet for management & data – I'd never recommend doing that in a production environment!
IMPORTANT: make sure you click “Use Same Pool and Folder as the source location”. Otherwise your vVol policies will fail later on.
Once configured, do the same thing on the DR array, so it can speak to the production system.
To finish, hit the "test" button on each system to ensure they can speak to each other and that your shared secret password is correct. All being well, the status should turn GREEN and show as "Alive".
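Conceptually, that "test" button is checking two things: can the arrays reach each other over the replication network, and do the shared secrets match. Here's a toy Python model of that logic (the function and return strings are my own naming, not the array's internals):

```python
def partnership_test(reachable: bool, local_secret: str, partner_secret: str) -> str:
    """Toy model of the checks behind the replication partner 'test' button."""
    if not reachable:
        return "Unreachable"            # network/VLAN problem between the arrays
    if local_secret != partner_secret:
        return "Authentication failed"  # shared secret mismatch
    return "Alive"                      # what you want to see on BOTH systems

print(partnership_test(True, "s3cret!", "s3cret!"))  # Alive
```

Remember the test has to pass on both arrays – each side independently verifies it can reach its partner.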
STEP 2 – Provisioning VMware Integration
Now this is SUPER easy, and quite possibly my favourite thing: it takes an administrator seconds to configure, yet the sheer complexity and number of steps it masks – end-to-end provisioning of VASA providers, Protocol Endpoints and all the other plumbing – is very, very clever indeed.
Head to Administration->VMware Integration in each array GUI, and from there, add a vCenter. Fill out the credentials, and ensure you click the appropriate options for Web Client and VASA Provider.
The array will auto-provision the full Nimble integration into vCenter for you, as well as register a single, highly available VASA provider – which runs natively HA within the array controllers.
Do the same thing for your DR array too.
Verify the additions by jumping into the vCenter Web Client (if you’re on vSphere 6.5 or below) or HTML5 client (if you’re on vSphere 6.7 or above):
VASA Provider: found in the Configure->Storage Providers menu when selecting the vCenter instance at the top of the directory tree.
When interrogating the VASA provider, it shows as a provider provisioned from Nimble, using VASA 3.0 and fully supporting SPBM and vVol replication.
Note: it shows active/standby status as "–" because the Nimble array itself handles VASA high availability for you automatically – nothing for you to worry about. Other vendors require you to manually provision both active AND standby VASA providers and register them as such, which is a clunkier way of doing it.
vCenter Plugin: Found in Menu->HPE Nimble Storage. Here you can see the plugins provisioned to your vCenter server.
Sweet new feature: if you have Linked vCenters, you're also able to jump from one to another to view the plugin provisioned to the other vCenter! This is new as of NimbleOS 126.96.36.199.
Alright, this took us all of 5 minutes to do end-to-end. Tiring work right?
Next, we’re onto the fun stuff. Creating shiz!
STEP 3: Provision vVol Datastores
OK, now we’re going to want to provision some new vVol datastores – one on the Production array, and another on the DR array.
Note: you’ll want to keep these named the same for compliance & automation reasons!
Head to the Nimble vCenter Plugin and get yourself into the vVol datastore view.
Note: if you’re using Nimble dHCI then you’ll need to click into the “Datastores” menu first.
Then provision a new vVol datastore by clicking the (+) button.
Give your vVol datastore a name (remember to keep this the same on both Production and DR systems), a good description, and if you’re in a scale-out group, you can designate which Pool you wish for this to reside in.
You can provision as much space as you like into the datastore (up to 62TB which is VMware’s maximum) and also assign QoS for either IOPS or Throughput on the datastore – useful if this is for a multi-tenanted environment, or even for Dev/Test workloads.
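Since vVol datastores are thin logical containers rather than pre-allocated space, the only hard constraint on the size field is the 62TB VMware maximum mentioned above. A quick pre-flight sketch of the form's validation (hypothetical helper names; sizes in TB):

```python
VMWARE_MAX_DATASTORE_TB = 62  # the VMware maximum noted above

def preflight_vvol_datastore(size_tb: float, iops_limit: int = 0,
                             mbps_limit: int = 0) -> list[str]:
    """Hypothetical pre-flight checks before submitting a vVol datastore request."""
    problems = []
    if not 0 < size_tb <= VMWARE_MAX_DATASTORE_TB:
        problems.append(f"size must be between 0 and {VMWARE_MAX_DATASTORE_TB}TB, got {size_tb}")
    if iops_limit < 0 or mbps_limit < 0:
        problems.append("QoS limits cannot be negative")
    return problems

print(preflight_vvol_datastore(62))    # [] – a full 62TB request is fine
print(preflight_vvol_datastore(100))   # flags the size as over the maximum
```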
Note: All vVol datastores are 100% thin provisioned by design. This has no performance overhead.
What I absolutely LOVE about our vCenter integration is that you can select the host(s) or even clusters you wish this vVol datastore to be provisioned to, and the Nimble vCenter integration will auto-provision an initiator group on the array with those servers, and auto-configure your servers to connect to the new datastore! Amazing stuff – it cuts hours (or even days) of work and troubleshooting down to a few minutes. It will do this for iSCSI and/or Fibre Channel presentation. (I only have iSCSI.)
PROTIP: if you get a failure here and you're on Ethernet/iSCSI, it's most likely a network configuration issue. Head over to "Configuration Checks" to see if you've misconfigured something like VLANs, Jumbo Frames/MTU or other settings – normally a major headache to track down! Here's an example of some bad misconfigurations spotted by Nimble's vSphere Configuration Checker 🙂
Verify the creation of the vVol datastore by looking at the vVol Datastore Tab. You can also click through into the VMware Datastore view, which will also give you some nice information from the Nimble plugin with regards to the settings such as QoS, pool designation & space information.
STEP 4: Create Storage Policies
Storage Policies are the way you create custom vVol deployment models based on application, data protection or performance needs. They're a core part of vSphere's Storage Policy Based Management (SPBM). And of course, Nimble is integrated directly into it 🙂
Firstly, head to Menu->Policies and Profiles, and then jump into VM Storage Policies.
You'll notice there is a "VVol No Requirements Policy" already available to use. This is a VMware default, and you should ignore it. Instead, click Create VM Storage Policy.
First, I'm going to create a policy for my Windows C:\ Operating System drives.
I want to ensure they all have the same Performance Policy, as well as daily Snapshot, Replication & Retention requirements. I also want deduplication ON. I could also provision QoS (IOPS/Throughput) if I wanted to, or turn Encryption ON/OFF (I have this defaulted to ON array-wide anyway).
Another cool Nimble perk – the array will auto-configure the replication configuration & ruleset for VMware vVols for you; nothing manual to do. Other vendors need lots of manual work here 🙂
Once that's confirmed, you can verify it will work correctly, as vCenter validates the policy against your datastores and tells you whether it'll be compatible or not.
You must create the same policies on each site – they must match. If you have your vCenters in Linked Mode, they will both show up under VM Storage Policies. Sadly, there's no way to clone a policy from one vCenter to another (that would be cool!).
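Since there's no built-in clone between vCenters, it's worth periodically auditing that the two sites haven't drifted apart. A simple sketch of that audit, diffing policy definitions pulled from each site (the policy names and settings here are illustrative placeholders, not real exports):

```python
# Hypothetical policy exports from each site: name -> settings
prod = {
    "Win-OS": {"perf_policy": "Windows File Server", "dedupe": True,
               "snapshot": "daily", "replicate": "daily"},
    "SQL-DB": {"perf_policy": "SQL Server", "dedupe": True,
               "snapshot": "hourly", "replicate": "hourly"},
}
dr = dict(prod)                                           # what DR *should* look like
dr["SQL-DB"] = {**prod["SQL-DB"], "replicate": "daily"}   # deliberate drift example

def policy_drift(a: dict, b: dict) -> list[str]:
    """Report policies that are missing on one site or differ between sites."""
    issues = []
    for name in sorted(set(a) | set(b)):
        if name not in a or name not in b:
            issues.append(f"{name}: missing on one site")
        elif a[name] != b[name]:
            issues.append(f"{name}: settings differ")
    return issues

print(policy_drift(prod, dr))   # ['SQL-DB: settings differ']
```

If the list comes back empty, your sites mirror each other and policy-driven provisioning will behave identically on failover.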
I also created the following policies dedicated for SQL databases, where I want to ensure they have better local & remote data protection, as well as confirmed encryption & assigned the correct Performance Policies:
SQL DB Drives:
SQL Server Policy (8K, Compression On, Caching On)
AES-256-XTS Encryption ON
Hourly Snapshots locally (retain 24)
Hourly Snapshots replicated (retain 96)
SQL Log Drives:
SQL Server Log Policy (4K, Compression On, Caching On)
AES-256-XTS Encryption ON
Hourly Snapshots locally (retain 24)
Hourly Snapshots replicated (retain 96)
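Those retention counts translate directly into protection windows: hourly snapshots retaining 24 copies give one day of local rollback, while 96 replicated copies give four days of history at the DR site. The arithmetic is trivial:

```python
def protection_window_hours(interval_hours: int, retained: int) -> int:
    """How far back you can roll, given snapshot cadence and retention count."""
    return interval_hours * retained

local_days = protection_window_hours(1, 24) / 24    # 1.0 day of local snapshots
remote_days = protection_window_hours(1, 96) / 24   # 4.0 days at the DR site
print(local_days, remote_days)
```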
Once all that was done, we can see the same policies on both my Production & DR sites.
This is easy, huh? Now we’re onto the final bit – creating VMs!
STEP 5: Provision Workloads
Every VM admin will know this process extremely well, so I’m not going to labour the point of creating VMs. The only difference now is that you can select Storage Profiles when you’re deploying your Virtual Disks.
Firstly, you can assign your new OS vVol Storage Policy when selecting the storage for your config & first disk – selecting my Windows OS policy automatically selected my vVol datastore:
Note: leave "Replication Group" as automatic. Nimble will once again create this for you, in the form of its Volume Collection on the array.
I then provisioned two more drives – a 20GB drive dedicated to my SQL DB, and another 10GB drive for my SQL log. This is done under the "7 – Customise Hardware" tab.
With that provisioned and powered on, let’s go check out what’s going on. Firstly – vCenter is now reporting that the VM itself is using all three of our storage policies and is compliant:
The Nimble array itself now shows some new volumes provisioned (5 volumes in fact – 3 for data, and 2 for the VMware configuration & vswap).
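That count lines up with how vVols decompose a VM: one config vVol, one swap vVol while the VM is powered on, and one data vVol per virtual disk. A quick sanity check (the function is mine, just modelling the arithmetic):

```python
def expected_vvol_count(data_disks: int, powered_on: bool = True) -> int:
    """Config vVol + one data vVol per disk + a swap vVol when powered on."""
    return 1 + data_disks + (1 if powered_on else 0)

# Our VM has 3 disks (OS, SQL DB, SQL log) and is powered on:
print(expected_vvol_count(3))  # 5 – matching the five volumes seen on the array
```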
There is a new volume collection created on the array specifically for the vVol workflow we demanded, and the array has auto-configured local & remote replication for us, with the required retention policies that we’ve specified.
We can also see which volumes are part of the policy and which application policies they belong to on the array 🙂
Checking out the DR array (aka downstream, or destination) we can see the volumes have been created with remote replica schedules created. Again, all automatically with no manual creation at all.
So there we have it! End to end Nimble replication, vVol provisioning, Storage Policy designation and VM creation in the space of about 60 minutes or so 🙂
Hope this was helpful to you! If it was, give me a comment below. If for any reason you get stuck at any stage, don’t forget that your friends at Nimble Support are available to assist should you ever need it 🙂
For now, have a great one – and BE NIMBLE!