
Storage Old, New and Past Due? (Updated)

It’s been an interesting day of contrasts in the world of storage, one that shows storage is a diverse and wide-ranging segment of IT.

The Old

Tape has been part of the discussion on the twitterverse and, despite everyone’s best attempts, is not dead yet.  Tape and backup may not be seen as cool, but data protection is an essential requirement of sustainable data management, and tape still provides one of the most cost-effective methods of data protection and, of course, archive.  This is because tape continues to innovate.  Tape drive speeds and media capacities continue to push upwards to meet demand, driving the effective cost per GB down and keeping tape a player in the long-term data retention market.  Tape will be around for a long time to come.
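As a rough illustration of the economics (the capacity is LTO-5’s native figure; the cartridge price is an assumed street price for the sake of the example, not a vendor quote):

```python
# Illustrative tape cost-per-GB calculation; price is an assumption
capacity_gb = 1500        # LTO-5 native capacity: 1.5 TB per cartridge
cartridge_cost = 30.0     # assumed street price in dollars
cost_per_gb = cartridge_cost / capacity_gb
print(f"${cost_per_gb:.2f} per GB")  # $0.02 per GB
```

Each new LTO generation roughly doubles the capacity per cartridge, so this figure keeps falling.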

The New

Flash storage is all the rage and today Pure Storage announced they have finally gone GA with their all-flash storage arrays.  They have also produced a pretty funny video, taking a sideswipe at traditional storage arrays built on spinning disk.

[Embedded video]

Whilst this was a clever piece of marketing, it’s more useful to understand how this technology is implemented and why flash in a traditional array was only a stopgap.  At the recent Storage Field Day, Pure Storage presented a technical deep dive of their architecture, explaining some of the thinking that led to their second-generation array, available from today.  There was some pretty amazing detail presented, including a discussion of maintaining I/O latency when an SSD decides to falter: the Pure Storage array can choose to recreate the data from parity rather than wait for the slow I/O to complete, and so maintain low latency.  This is how solid state should be designed into storage arrays.  Here are the videos from the Pure Storage presentation.

[Embedded videos: Pure Storage’s Storage Field Day presentation, in three parts]
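The latency-hiding trick described above (serve a read by rebuilding the chunk from its RAID peers and parity instead of waiting on a stalled SSD) can be sketched in a few lines. This is a toy illustration of the general technique, not Pure Storage’s actual implementation; the 5ms latency budget and the simple XOR-parity layout are my own assumptions.

```python
import concurrent.futures as cf
import functools
import operator
import time

LATENCY_BUDGET = 0.005  # 5 ms read budget (assumed, for illustration)

def read_chunk(ssd):
    """Simulate a chunk read; a misbehaving SSD stalls here."""
    time.sleep(ssd.get("delay", 0.0))
    return ssd["data"]

def hedged_read(target, peers, parity):
    """Return target's chunk, rebuilding it from the surviving
    stripe members XORed with parity if the read misses its budget."""
    with cf.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(read_chunk, target)
        try:
            return future.result(timeout=LATENCY_BUDGET)
        except cf.TimeoutError:
            # XOR of the other chunks with parity recovers the missing one
            return functools.reduce(operator.xor,
                                    (p["data"] for p in peers), parity)

# Three data chunks and their XOR parity
a, b, c = 0b1010, 0b0110, 0b1111
parity = a ^ b ^ c
stalled = {"data": c, "delay": 0.05}  # the SSD holding c has gone slow
print(hedged_read(stalled, [{"data": a}, {"data": b}], parity) == c)  # True
```

The key design point is that the array never blocks on the slowest device: parity gives it a second, bounded-latency path to the same data.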

 

Past Due

The Register carried two articles today discussing EMC VMAX, which is due for a refresh and expected to be announced at EMC World next week.  The first talks about EMC scaling VMAX to 4PB of storage and/or 3,200 drives, a massive amount of information to keep in a single chassis.  Symmetrix will probably go down as one of the most successful and pivotal storage arrays in history; however, I think it is coming close to the end of its useful life because:

  • flash will be the dominant technology for high performance applications
  • bulk capacity can be done cheaper and easier
  • vendors are building technology towers, not centralising storage in the way they did 10 years ago
  • intelligence is being pushed up to the hypervisor
One of the issues with placing such a large quantity of data into a single chassis is the ability to migrate to and from the array, especially when the device is due to be decommissioned.  Perhaps this is one of the reasons why EMC has also chosen to implement storage virtualisation, which was the subject of The Register’s other article today.  Yes, it’s true: EMC are finally admitting storage virtualisation is cool and that HDS and IBM were right all along.  One of the easiest ways to migrate data in and out of large arrays is to virtualise.  What’s ironic is the way EMC (and their Symmetrix strategist Barry Burke) have parodied the idea of storage virtualisation in so many blog posts.  Here are just a few to savour:
The fact is, EMC had to have some technology built into VMAX to enable migration.  Otherwise, building 4PB arrays creates a world of pain for the customer.
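To put some numbers on that pain (the sustained throughput is my own assumption, and a generous one for a production migration window): even at 1GB/s, emptying a full 4PB array takes the best part of seven weeks.

```python
# Back-of-the-envelope migration time for a full 4PB array
PB = 10**15               # treating 1 PB as 10^15 bytes
capacity = 4 * PB         # a fully populated 4PB VMAX
throughput = 10**9        # assumed 1 GB/s sustained migration rate
days = capacity / throughput / 86400
print(f"{days:.0f} days")  # 46 days
```

And that assumes the migration runs flat out around the clock with no impact on production workloads, which is rarely the case.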
I have no details, but if the technology EMC is using here is RecoverPoint, then it’s hardly a native solution; rather, it’s a sticking plaster until the arrival of XtremIO finally puts Symmetrix out to pasture.

 

 

