In the first post, I covered the initial setup and deployment of the stack, using a purpose-built wizard to deploy a whole VMware vSphere environment from scratch. In this post, I want to go beyond initial deployment and talk about what happens after the setup.
At the end of the last blog, I finished with a screenshot of the “success” screen from deploying the dHCI cluster. However, I really want to highlight just how deep the dHCI configuration goes: it’s all about ensuring the whole stack is deployed exactly to the best practices that Nimble Support consider ‘best of breed’, based on their years of troubleshooting issues and gremlins within vSphere environments.

The above is subtle, but it would save any admin a few days of work, so I’ve broken out exactly what this first step does (a rough sketch of one of these steps follows the list):
- Configure & set up iLO on each host.
- Configure & set up ESXi on each host.
- Configure ESXi networking on all hosts to the required best practice:
  - Same vNIC selection, IP address, subnet, etc. on each host.
  - Same vSwitch naming, configuration & consistency on each host.
  - Same iSCSI VMkernel/vSwitch/port binding config on each host.
- Jumbo frame & firewall setup for iSCSI.
- iSCSI software initiator creation, with iSCSI discovery addresses attached.
- NTP configured.
- SLP deployed.
- A final check to make sure everything is exactly as it should be.
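dHCI does all of this for you, but to give a feel for how much manual work is being absorbed, here’s a minimal sketch of just one item from that list – enabling the software iSCSI initiator and attaching a discovery address – using the open-source pyVmomi SDK. This is my own illustration, not how dHCI does it internally; the hostname, credentials and discovery IP are all placeholders:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Placeholder address/credentials for one ESXi host in the cluster.
si = SmartConnect(host="esx-01.lab.local", user="root", pwd="***",
                  sslContext=ssl._create_unverified_context())
# Connected directly to ESXi, the single host sits under "ha-datacenter".
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
storage = host.configManager.storageSystem

# Enable the software iSCSI initiator, then find its HBA device
# (a storage refresh/rescan may be needed before it appears).
storage.UpdateSoftwareInternetScsiEnabled(enabled=True)
hba = next(h for h in storage.storageDeviceInfo.hostBusAdapter
           if isinstance(h, vim.host.InternetScsiHba))

# Attach the array's discovery address (placeholder IP) as a send target.
storage.AddInternetScsiSendTargets(
    iScsiHbaDevice=hba.device,
    targets=[vim.host.InternetScsiHba.SendTarget(address="192.168.20.10",
                                                 port=3260)])
```

Now multiply that by every bullet above, across every host, and the time saving is obvious.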
Next, it either deploys a new vCenter Server (staged on the local drive of a server) or connects to an existing vCenter in order to do the next steps of the configuration:
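For the ‘connect to an existing vCenter’ path, the equivalent first step in any home-grown automation would look something like this pyVmomi sketch (hostname and credentials are placeholders; dHCI’s own implementation isn’t public):

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect

# Lab-only shortcut: skip certificate verification. Verify certs in production.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcsa.lab.local",
                  user="administrator@vsphere.local",
                  pwd="***", sslContext=ctx)
print("Connected to:", si.content.about.fullName)
Disconnect(si)
```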

Here it builds our Datacenter and cluster, pulls the hosts in with HA and DRS enabled, and ensures the entire process completes as expected.
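As a rough illustration of what that vCenter step amounts to (again a pyVmomi sketch with placeholder names, not dHCI’s actual code):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())

# Build a Datacenter, then a cluster with both HA and DRS switched on.
dc = si.content.rootFolder.CreateDatacenter(name="dHCI-DC")
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(enabled=True,           # vSphere HA
                                        hostMonitoring="enabled"),
    drsConfig=vim.cluster.DrsConfigInfo(enabled=True,           # DRS
                                        defaultVmBehavior="fullyAutomated"))
cluster = dc.hostFolder.CreateClusterEx(name="dHCI-Cluster", spec=spec)
```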

From here, the Nimble array deploys the new Nimble dHCI management plane directly into vCenter, configures vVols (deploying the VASA provider) and presents the iSCSI shared storage to all hosts, scanning, mounting and formatting the VMFS volumes.
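The scanning/mounting/formatting part, done by hand, would look roughly like this pyVmomi sketch (the datastore name is a placeholder, and I’m formatting the first free disk purely for illustration):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

# Rescan every host so the newly presented LUN shows up.
for host in hosts:
    host.configManager.storageSystem.RescanAllHba()
    host.configManager.storageSystem.RescanVmfs()

# Format the LUN as VMFS on one host; the others then mount it on rescan.
ds_sys = hosts[0].configManager.datastoreSystem
disks = ds_sys.QueryAvailableDisksForVmfs()
if disks:
    option = ds_sys.QueryVmfsDatastoreCreateOptions(
        devicePath=disks[0].devicePath)[0]
    option.spec.vmfs.volumeName = "dhci-ds01"   # placeholder name
    ds_sys.CreateVmfsDatastore(spec=option.spec)
```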
All of this is done within 5 minutes! Pretty impressive stuff.
However… a few minutes of a slick setup manager, whilst looking nice, is only the first part of the experience. HPE Nimble Storage dHCI is designed to be fully automated with a single management plane for the whole cluster, so the full experience of storage, host and VM management is controlled via a single interface found within VMware vCenter. This also includes things such as physical server information found within iLO.
Nimble has always had an ever-evolving vCenter storage plugin to empower vSphere admins to deploy/snapshot/restore/clone VMs or vVols, and this plugin has long been seen as best of breed within the industry. dHCI offers a complete redesign of the vSphere integration to bring a single domain – allowing full-stack management, monitoring and deployment of additional features, services or even hardware.
The management domain appears as a snap-in to vCenter, is directly controlled via the VMware HTML5 vSphere client, and is deployed natively by the storage platform into the cluster – no additional configuration required. The management plane also wraps vVol deployment, management & troubleshooting, with automatic VASA provider setup as well as Protocol Endpoints. Nimble’s vVol deployment is already industry leading; dHCI takes that innovation to the next level, as it’s all done in the wizard for us.
Once into the plugin, we can see that the layout is very similar to an HPE Nimble Storage array – but rather than presenting storage-centric information, it displays information for the entire stack under the dHCI domain control.
At the top of the screen, we are presented with the various areas of the stack for visibility, further configuration or troubleshooting using AI.
For example, in the Inventory screen I can see the physical ProLiant DL360 Gen10 servers we initially deployed into our dHCI cluster, showing the status not just of the ESXi layer, but also the physical host information pulled from the iLO interfaces within the servers themselves. We can also add more servers to the cluster from this screen (which we’ll do later).
A quick peruse of the Storage tab shows me the physical storage I have connected to dHCI, with usage, savings, performance and health information, and even replication partnerships with either another on-premises array or the public cloud with HPE Cloud Volumes. In dHCI 1.0 there’s not much to do in this screen as far as additional configuration goes – but expect new features and options to pop up here as the platform evolves.
Next, we can jump into the storage presentation layer and check out our volumes presented via VMFS datastores or Virtual Volumes (vVols). Current Nimble customers will recognise this screen, as it’s similar to the revamped HTML5 vSphere plugin we delivered in NimbleOS 5.1. Check out these blogs for a summary as well as a deep dive into end-to-end vVol coolness.
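If you want to pull the same VMFS-vs-vVol view programmatically, a quick pyVmomi sketch like this (placeholder vCenter details) lists every datastore with its type and capacity:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Walk every datastore; summary.type is 'VMFS' or 'VVOL' (among others).
for ds in content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True).view:
    s = ds.summary
    print(f"{s.name:30} {s.type:6} "
          f"{s.freeSpace / 2**30:7.1f} GiB free / {s.capacity / 2**30:7.1f} GiB")
```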
My favourite and perhaps the pièce de résistance is the “dHCI Configuration Checker”. And this is why:
Think to yourself: how many times have you deployed something new into your environment, yet it doesn’t work… and no matter how hard you try, for some reason you just can’t figure out WHY it doesn’t work. To top it off, you’re utterly scared that by making any configuration changes you could just make it 100000000x worse! My personal example: a few colleagues and I spent two days troubleshooting a Cisco UCS environment, which turned out to be a Jumbo Frame mismatch inside the Server Profile (grrrr).
Well, that is EXACTLY what the dHCI Configuration Checker is for! It’s 54 independent full-stack system checks embedded by Nimble Support to proactively spot all of the known, problematic yet potentially subtle things that typically cause elevated blood pressure in an IT deployment project – whether in the host, VMware, the network or the storage platform.

You can see in the above example that I have an issue with my ESXi hosts not being in sync time-wise, which can cause huge problems in a vVol environment. I’ve also got some issues with my iLO configuration (I’ll let you in on a secret here – these are nested virtual machines, hence they don’t have iLO!). You can also see that the ESXi hosts don’t have the BIOS profile correctly set to “Virtualisation”.
But it goes much deeper – it will check the networking design for all servers (including Jumbo Frames, firewall rules, etc.), it will check whether you have snapshot issues within VMware that need consolidating, and it will check whether you have unbalanced or incorrect MPIO configurations – the lot. Super cool stuff, designed to give you visibility into the entire stack without the need to call Support!
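You obviously get all 54 checks built in, but to show how approachable a couple of them are to reason about, here’s a toy pyVmomi sketch of two – host clock drift and iSCSI jumbo frames. This is my own illustration; the ‘iscsi’ name match and the 60-second/9000-byte thresholds are my assumptions, not Nimble’s actual rules:

```python
import ssl
from datetime import datetime, timezone
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
hosts = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True).view

for host in hosts:
    # Check 1: host clock drift vs this machine's UTC clock
    # (assumes the machine running the script is itself NTP-synced).
    host_time = host.configManager.dateTimeSystem.QueryDateTime()
    drift = abs((host_time - datetime.now(timezone.utc)).total_seconds())
    if drift > 60:
        print(f"{host.name}: clock drift of {drift:.0f}s - check NTP")

    # Check 2: jumbo frames on anything that looks like an iSCSI vSwitch.
    for vsw in host.config.network.vswitch:
        if "iscsi" in vsw.name.lower() and vsw.mtu != 9000:
            print(f"{host.name}: {vsw.name} MTU is {vsw.mtu}, expected 9000")
```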

So what else can we do in the plugin? Well, we can create more datastores for the dHCI cluster (VMFS/vVols) and perform data management on the storage as a whole, but most interestingly we can add new physical servers into this cluster very easily by clicking the “+” button.
Here we’re greeted with a popup, which scans the IP network and presents new dHCI-enabled ESXi hosts that can be added to the cluster. You’ll also notice there are now “3 Unsupported Servers Found” (rather than the one we saw during the initial setup). On hovering over, we can see that the two hosts already inside the dHCI cluster (.61 and .63) are no longer valid candidates to add – which makes sense.

By clicking the ‘next’ tab, we’re then asked for similar information to what the Stack Setup requested in Part 1 – namely IP addresses and passwords.
You’ll notice that there is no ‘next’ button, just ‘Add’. This is all the configuration information dHCI requires – from here, it will do the following (a rough manual equivalent of the vCenter step is sketched after the list):
- Deploy the dHCI config on the ESXi host
- Configure the vSwitches, VMkernel ports, iSCSI initiators, firewall, etc.
- Add the ESXi host to vCenter and into the cluster, with HA and DRS enabled
- Add the created VMFS datastores and/or vVol containers to the new server
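For context on the vCenter part of that list, adding a host to an existing cluster by hand looks roughly like this in pyVmomi (placeholder inventory path, IP and credentials; dHCI wraps this plus all the Nimble-side work into the single ‘Add’ click):

```python
import ssl
from pyVim.connect import SmartConnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="***", sslContext=ssl._create_unverified_context())
cluster = si.content.searchIndex.FindByInventoryPath(
    "dHCI-DC/host/dHCI-Cluster")          # placeholder inventory path

spec = vim.host.ConnectSpec(hostName="192.168.1.65",   # the new ESXi host
                            userName="root", password="***")
try:
    WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
except vim.fault.SSLVerifyFault as fault:
    # First attempt fails with the host's cert thumbprint; trust it and retry.
    spec.sslThumbprint = fault.thumbprint
    WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
```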
Pretty slick, huh? 🙂 Within a few clicks I have a full ESXi cluster node ready to go in my environment. And my dHCI GUI now shows the newly integrated host in the cluster, as does vCenter.

That’s it for the second part of this blog series on HPE Nimble Storage dHCI. As I keep playing with the solution, I’ll be sure to add more articles to my website to spread the knowledge!
I’d love to hear from you and your thoughts, so please do leave questions or comments here.
For now, be safe and stay Nimble 🙂
Borat love this. We’re looking at dHCI at the moment, and we’re wondering about the flexibility of the dvSwitch setup. Do you have to subscribe to the way HPE/Nimble set them up, or can you alter them – the number of dvSwitches, uplinks per dvSwitch? For example, can you define two dvSwitches, one for storage and one for customer/mgmt/everything else, then have Nimble/dHCI use that as the template for new hosts/environment?
It is good, yes.
Cheers,
Jonathan
Hey Jonathan!
dHCI requires some core networking elements – which are fixed – based on standard vSwitches; it was the simplest thing to support in the first release. These are management, prod, iSCSI 1 & iSCSI 2. You are absolutely free to create _additional_ vSwitches for the networking requirements of your business, and these can be standard or distributed vSwitches. They will of course need additional NIC ports.
The good news is full dvSwitch support is coming in 2021 and will be an option for configuration on initial deployment as well as new host additions later on.
Hello,
1. Are you also ‘free’ to ADD VMkernel ports / port groups to vSwitch0 after deploying the Nimble dHCI / ProLiant DL380 setup – of course, in other VLANs?
2. If you’re talking about dvSwitch support in 2021, any timing? Is it possible to start with a dvSwitch instead of vSwitch0?
Thanks for any feedback
Eddy
Hi Eddy,
You can certainly add more VMkernel ports/port groups to vSwitch0, as long as the core dHCI networking is kept intact without any modification.
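For anyone who’d rather script it, a minimal pyVmomi sketch of adding an extra port group (and an optional VMkernel port) to vSwitch0 would look something like this – the names, VLAN and addresses are placeholders, and as above, don’t touch the core dHCI port groups:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="esx-01.lab.local", user="root", pwd="***",
                  sslContext=ssl._create_unverified_context())
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

# An extra VM port group on vSwitch0, tagged with its own VLAN.
netsys.AddPortGroup(portgrp=vim.host.PortGroup.Specification(
    name="Backup-VLAN50", vlanId=50, vswitchName="vSwitch0",
    policy=vim.host.NetworkPolicy()))

# An optional VMkernel port on that new port group.
netsys.AddVirtualNic(portgroup="Backup-VLAN50",
                     nic=vim.host.VirtualNic.Specification(
                         ip=vim.host.IpConfig(dhcp=False,
                                              ipAddress="10.50.0.61",
                                              subnetMask="255.255.255.0"),
                         mtu=1500))
```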
dHCI can support dvSwitches today for additional networking requirements (which would require additional NIC ports/cards). Later in 2021, dvSwitches will be supported for the core dHCI network stack – i.e. for mgmt/prod/iSCSI-1/iSCSI-2.
Hope that helps!