Non-Disruptive Data Migration using HPE Nimble Storage Scale-Out Tech

Many years ago, back in the days of NimbleOS 2.0, the engineering teams introduced the ability to non-disruptively cluster Nimble arrays together in a scale-out fabric using the concept of groups and pools. Since then, customers have successfully used scale-out to perform online data migrations within their Nimble estate with no application disruption. Customers and partners often ask exactly how this works, so in this blog I’ll walk through how to perform this in your own environment.

Update: I was able to get into the studio recently and recorded a lightboard discussing this concept. Watch here if you don’t fancy reading text 🙂

NimbleOS Scale-Out – Concepts

Every Nimble array running NimbleOS 2.0 or later contains the concept of a Group, which is created as part of the initial system deployment. A group is the construct in which the management domain resides; all administration tasks (such as IP/network configuration, Active Directory user configuration, and plugin integrations such as vCenter full-stack analytics) function at the group level.

Inside the group, a Nimble array (and all of its capacity expansion shelves) forms a logical entity called a Storage Pool. This is where all data services such as volumes, snapshots and clones operate, as well as data reduction features such as deduplication.

[Diagram: a single group containing one array and its storage pool]

Here is an example from my environment. I have a group called “vnmbl-group” which contains a single, older Nimble iSCSI array (it could equally be FC – the functionality is the same). Inside my array group, I have a storage pool called “default” (the enforced default name prior to NimbleOS 5.1 – check my blog here on how to change it).

Inside my pool, there could be numerous volumes, with their snapshot policies, as well as zero-copy clones.
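
If you prefer the CLI, you can sanity-check this layout over SSH to the group management IP. This is a rough sketch from memory of the NimbleOS CLI – the exact options vary a little between releases, so confirm with --help on yours:

# List the arrays in the group and the pool each one backs
array --list
pool --list

# Volumes report which pool they currently live in
vol --list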

[Screenshot: Array UI showing vnmbl-group with a single array]

Whilst this array has done a fantastic job, I now wish to replace it with a newer Nimble platform. Perhaps I want to migrate my environment to All Flash, or perhaps I want to repurpose the older array for Disaster Recovery, or even for another project elsewhere. Most importantly, I want to do this as easily as possible, with minimal application disruption! Storage array data migrations are typically the bugbear of the enterprise IT industry.

And this is exactly what we can use Nimble Scale-Out technology to do for us, with minimal app disruption or downtime.

Let’s Scale-Out!

Before you start, it’s important to ensure:

  • Both old and new platforms reside on the same data fabric – i.e. iSCSI or Fibre Channel.
  • Both systems run the same NimbleOS software release.
  • Nimble Connection Manager is installed on your Windows, VMware or Linux servers. This is a critical part, and should not be overlooked.

If you’ve done all of the above, then we’re ready to get going.
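
A quick way to confirm the software versions line up is over SSH to each system – again a sketch from memory, so verify the exact command with software --help on your release. Both arrays must report the same NimbleOS version before you try to join them:

# Show the NimbleOS version installed (and any version downloaded) on this system
software --list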

Once you’ve racked, cabled and powered on your new system, the array will start sending broadcast packets onto your network so it can be discovered. Jump into your current Nimble array UI, head to the “Hardware” tab, and click Actions -> Add Array To Group.

[Screenshot: Hardware tab with the Actions -> Add Array To Group option]

This discovers any Nimble arrays currently on your layer 2 network domain and returns details to identify them. As you can see in my screenshot, it shows the Serial Number, the Technology Category (All Flash or Hybrid), Model and Software version. It also flags any issues it detects with the system that would cause problems if you were to scale out!

[Screenshot: discovered array listed with serial number, category, model and software version]

We wish to add this new array to the group. Clicking “Add” starts the integration process and asks for some details for the new system to use as part of the group. Here, it asks me to assign an array name, some interface IP addresses, and a name for the new Storage Pool.
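
The same discovery can be done from the CLI of the existing group. I’m writing this from memory, so treat it as a sketch – and the add itself takes a number of networking options I won’t guess at here (see array --help):

# List unconfigured Nimble arrays visible on the network, ready to be added to the group
array --discover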

[Screenshot: Add Array dialog requesting the array name, interface IP addresses and new pool name]

Once this has been done, click “OK” to finish the process. If all goes as expected, the Hardware screen now shows two arrays in my group, each residing within its own pool.

[Screenshot: Hardware view of two arrays, two pools, one group]
[Screenshot: Software upgrade screen allowing single-click firmware upgrades of the whole group]

Heading to the “Software” screen, we can now see that it’s possible to upgrade both Nimble arrays in the group to the same NimbleOS release. This is a fully managed process that rolls the upgrade across all arrays in the group – it starts with the “group leader” (typically the first array in the group) and then concludes with the other members. As always with NimbleOS upgrades, this is a 100% non-disruptive process.

Checking out the “Volumes” screen, you’ll see that we have two pools in the group, with aggregated capacity, performance, data reduction and volume counts.

[Screenshot: Volumes screen showing both pools with aggregated capacity, performance, data reduction and volume counts]

It’s perfectly possible to keep running my environment in this scale-out configuration if required (the maximum we support is up to 4 arrays in a group). However, this blog is about migrating my workloads from the old array to the new one with minimal downtime! So let’s do that 🙂

Non-Disruptive Data Migration Between Pools

In this example, there could be many applications (VMs, physical servers, containers etc.) all connected and having their I/O served by the array group. Nimble Connection Manager (mentioned above as a critical component) abstracts the I/O away from the iSCSI or FC persona of each array – instead it virtualises the connections to the group itself and can re-map and direct I/O to each array on the fly, without the host needing to hop between systems. This ensures latency stays consistently low whilst data migrations are in flight.
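
On a VMware host you can verify that NCM is actually in the data path before you start moving anything. This is a rough check from memory of the VMware flavour of NCM, which on the releases I’ve used registers its own path selection policy called NIMBLE_PSP_DIRECTED:

# List the Path Selection Policies registered on this ESXi host – NCM adds its own entry
esxcli storage nmp psp list

# List devices and the PSP each one is claimed by – look for your Nimble volumes here
esxcli storage nmp device list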

What’s also cool here is that deduplication on Nimble occurs at the Pool level rather than the volume level. Thus, if you have an older array which isn’t capable of dedupe, and you’re replacing it with a new shiny one that CAN dedupe, the systems will deduplicate the data as it transfers from the old to the new – on the fly!

[Diagram: a two-array group serving I/O to connected VMs]

Let’s kick off a data migration from old to new! To do this, in the Volumes screen, select your volume(s) or folder and select “Move”.

[Screenshot: Move my volume please!]

Then, it asks you to confirm the destination pool for the volumes:

[Screenshot: Where would you like to move it to?]

Finally, it asks you to confirm that you want this action to happen. There are a few points to note here:

  • It will move the volumes, along with their family of snapshots and clones, to the target pool you selected in a single operation.
  • It will move your data very slowly in order to protect front-end I/O performance for your applications. It could therefore take a while for the move to complete – but it has no performance overhead, nor does it introduce any data integrity risk whilst it runs.
[Screenshot: Are you sure? 🙂]
[Screenshot: volumes reporting data migration progress, including predicted finish time]

You’ll also notice a new tab under Monitor – an overall “Data Migration” pane, which shows the progress and predicted completion time of each volume family.
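
The same move can be kicked off from the CLI. The volume and pool names below are placeholders for my environment, and the flags are from memory – check vol --help on your release before relying on them:

# Move a volume (and its snapshot/clone family) from its current pool to the new AFA pool
vol --move myvolume --dest_pool afa-pool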

The volume in this example has now been successfully migrated, with all associated snapshots and clones, and with the catalog 100% intact.

[Screenshot: the AFA pool contains not just the storage volumes, but also the historic snapshots and clones]

Evacuating The Old Array From The Group

Once the group has finished migrating the data non-disruptively to the new system, you can easily delete the old pool and evacuate the old array from the group, as all I/O is now being serviced by the new system. Evacuating the array also serves as a factory reset, as all group information is wiped from it.

[Diagram: removing the old pool and array from the group]

To do this, first head to the Volumes panel and delete the old pool; the AFA pool then takes over as the group’s default. This should only take a second or two.

[Screenshot: deleting the old “default” named pool from the group]
[Screenshot: confirmation of pool deletion]

If you’re on NimbleOS 5.1 or above: the group management services can now be migrated automatically within the group without a call to support, as this capability forms part of the Peer Persistence feature.

To move your Group Leader from your old system to your shiny new platform, SSH into your Nimble array group and type:

group --migrate

If you’re on NimbleOS 5.0 or below (you should really upgrade!!), you may instead see the following warning that the management services cannot be migrated, along with a request to call Nimble Support to perform this action – do give them a call, and they can do this quickly for you.

[Screenshot: warning that management services cannot be migrated – contact Nimble Support]

Finally, you can remove the array from the group via Hardware -> “Remove From Group”.

[Screenshot: Hardware screen with the “Remove From Group” action]
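
For reference, the whole evacuation can also be driven over SSH. The pool and array names here are placeholders, and apart from group --migrate (shown above) the flags are from memory – verify with --help before running anything:

# 1. Delete the old, now-empty pool (placeholder name)
pool --delete default

# 2. Move the group leader / management services to the new array (NimbleOS 5.1+)
group --migrate

# 3. Remove the evacuated old array from the group – this wipes its group configuration
array --remove old-array-name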

Conclusion

So there we have it – how to use Nimble Scale-Out technology to perform a non-disruptive data migration. Of course, this isn’t the only way to skin this particular cat, and in some instances it may make more sense to do this via host-based methods. The nice thing about this technique is that it’s built into Nimble, has no performance overhead for application I/O, and takes only a few clicks to complete.

Let me know if you have any further questions on how scale-out or the data migration works. But for now, be safe and stay Nimble! 🙂

15 thoughts on “Non-Disruptive Data Migration using HPE Nimble Storage Scale-Out Tech”

  1. Thanks, great article. Question: after adding the new unit to the group, will it pull all iSCSI initiators and initiator groups from the existing old unit to the new unit?
    Or is this a manual process to add all iSCSI initiators to the new unit?

    1. Hi! Thanks for the comment.

      All objects such as iSCSI initiators and initiator groups reside at the group level within Nimble, so when creating a scale-out group, those initiators and groups are visible and transferable across the different array pools and volumes. Nothing manual to do!

      1. Thanks for the quick response. I swapped out my CS220 five years ago for our current CS700 and am trying to brush up on this. We will be replacing our CS700 with an HF40 in the next month or so and moving the CS700 to our DR site. Just trying to remember the steps to get this done.

  2. Great article. Question: if we want to migrate volumes from the old array which are part of replication, do we need to do any configuration on the new array prior to, or during, adding it to the group?

    1. Hello! Excuse the late reply, I’ve been off on hols for the last few weeks.

      If your array is participating in replication, this takes place at the group level rather than at the physical system level. This means you are able to move your replication relationship across to the new system without any disruption.

    1. Hello,

      The best thing is always to use tools like vMotion / Live Migration for VM data.

      For application data, you can use RMC Peer Copy to copy/move data between 3PAR, Nimble and Primera.

      Here’s a video showing you how to do it for 3PAR->Primera, and we’ll have a video posted for 3PAR->Nimble soon. https://www.youtube.com/watch?v=JL5L1maBMmA

  3. If you have 2 arrays in a group, can you still run replication from one of the members to an offsite target Nimble? I was told replication can’t be run while moving volumes between arrays in a group.

    thx,
    Dave

    1. Hi David, you absolutely can have async replication running to another location whilst that system is also part of a scale-out group.

  4. Great write up!
    I have one question about iSCSI-attached volumes using the Nimble Windows Toolkit. We have a small number of servers running an older version of the NWT. Is there a minimum version required in order to achieve a seamless migration from one array to another?

    1. Hello! The best thing to do is to call Nimble Support, who will verify this for you. There is also a section in the compatibility matrices on InfoSight which tells you the minimum NWT version required for different NimbleOS releases. I’m not associated with Nimble anymore, so I can’t give any newer accurate information 🙂
