First, let's establish my cred before I start jumping up and down screaming that File Virtualization is relevant.
I am speaking for myself here – not my employer. Yes, I work for a company that sells file virtualization technology, but I want to be clear that this position is mine – it's personal this time. Sure, I want my employer to succeed, but I am speaking now from the heart, and from my 30 years of storage industry experience.
In those 30 years, I have created, instigated, and perpetuated dozens of "breakthrough, first, best, only" market-changing PR-bubbles. I admit that.
Some of them were complete BS. I admit that, too.
For a long time, creating PR-bubbles put bread on the table in the OSG(N) home.
(And for you techno-snooty, "marketing is all BS and I only want the technical documentation" types...well, all I can say is you can't handle the truth. Because the truth is, without me and mine, you'd still be wrapping 9-track tapes around capstans and working on 3270 terminals. New technology gets to market through people like me – we who are willing to do what it takes to break through the noise, get somebody's attention, and get you techno-snoots to try something new.)
For the sake of this argument, I am the (still alive, for now) Billy Mays of storage. OK – maybe that's a stretch – let's make it the Ron Popeil of storage…
Sure, some of my PR-bubbles were BS – but, ah, some of them – like modular storage arrays, RAID, multi-vendor storage, and storage services (now cloud storage) – were eventually, in fact, very important, if not game-changing, to the storage industry.
And let me tell ya, folks, file virtualization is gonna change your life in ways you can't even predict…it's gonna cut your storage costs, reduce user disruption, cut your backups in half, and best of all, folks, it's so easy you can…"SET IT, and FORGET IT!"
I am actually sorry we, at Acopia, settled on promoting the term "File Virtualization." It was probably lazy of us (me). We allowed ourselves to describe the product market we were creating by the function of the technology that delivered it. That is dumb. Ford did not create the motorized wagon market and then sit back and let people argue about whether a steam engine was better at replacing a horse than a gas engine.
Unfortunately, that is exactly what we did. Take a lesson, whippersnapper.
What is now called File Virtualization should be more accurately thought of as the concept of decoupling servers from file storage, and then applying intelligence in between. Same as RAID controllers decoupled DASD (look it up, newbie) from processors. Same as load balancers decoupled servers from the network. Decoupling (OK, virtualizing) the connection between file storage and application servers has massive advantages. Cost, flexibility, scale, availability – the list of 'ities' is endless, as is the market opportunity.
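For the code-minded among you, here's a toy sketch of that decoupling in Python – every name in it is invented for illustration, and nobody's actual product works exactly this way. Clients see one logical namespace; a layer in between maps each path to whichever physical filer currently holds the file, so files can move between devices without anyone noticing:

```python
class FileVirtualizer:
    """Toy virtualization layer: one logical namespace, many backends."""

    def __init__(self, backends):
        self.backends = backends   # name -> dict standing in for a filer
        self.placement = {}        # logical path -> backend name

    def write(self, path, data, backend):
        self.placement[path] = backend
        self.backends[backend][path] = data

    def read(self, path):
        # Clients ask for the logical path; the layer picks the device.
        return self.backends[self.placement[path]][path]

    def migrate(self, path, new_backend):
        # Move the bytes between devices; the logical path is unchanged,
        # so applications and users never see the move.
        old = self.placement[path]
        self.backends[new_backend][path] = self.backends[old].pop(path)
        self.placement[path] = new_backend


vfs = FileVirtualizer({"fast_filer": {}, "cheap_filer": {}})
vfs.write("/projects/q3_report.doc", b"...", backend="fast_filer")
vfs.migrate("/projects/q3_report.doc", "cheap_filer")  # invisible move
assert vfs.read("/projects/q3_report.doc") == b"..."
```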
File Virtualization – this intelligent decoupling – gives us the ability to intermediate between different types of storage technology and seamlessly, intelligently position data on those storage types (a toy sketch of such a placement rule follows the list below). That capability is more relevant now than ever before:
- Some vendors may claim file virtualization and tiering are dead, when in fact they are just hiding this function inside their arrays.
- Others have outright co-opted the value proposition – automated storage tiering now seems to be part of the industry's marketing repertoire (Google "storage tiering"). If tiering is file-based – and it should be – it's FV at heart.
- Still other vendors use different names for this ability to intelligently decouple storage and servers – like ILM and HSM.
- Even caching from the new cloud storage gateway start-ups like Avere and Nasuni is a form of this intelligent decoupling.
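And here's the promised toy placement rule, to make "intelligently position data" concrete. The tier names and thresholds are mine, invented purely for illustration – real policies also weigh file size, type, owner, and more – but the shape of the intelligence is this:

```python
import time

DAY = 24 * 3600

def choose_tier(last_access_epoch, now=None):
    # The intelligence layer looks at how long a file has sat idle
    # and decides which class of storage should hold it.
    idle_days = ((now or time.time()) - last_access_epoch) / DAY
    if idle_days < 90:
        return "tier1_fc"               # active data stays on fast FC disk
    if idle_days < 365:
        return "tier2_sata"             # inactive data drops to cheap SATA
    return "tier3_dedupe_or_cloud"      # cold data: dedupe box or cloud

# A file last touched two years ago:
print(choose_tier(time.time() - 730 * DAY))  # -> tier3_dedupe_or_cloud
```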
At a high level, the customer use cases for file virtualization haven't changed in six years – what's interesting, however, is how each of these use cases continues to evolve in response to developments in the storage ecosystem:
- Storage tiering
  - Past – tiering started out as a way to migrate inactive data from FC to SATA disk.
  - Present – as data deduplication has become more mainstream, customers are using storage tiering as a way to move appropriate files onto deduplicated storage systems.
  - Future – customers will use file virtualization to help them move their data into the cloud. I personally know of several live projects with Nirvanix that will require this functionality, and have heard of similar projects involving Iron Mountain, Rackspace, and other cloud storage providers. I'd argue you can't have effective cloud storage in the enterprise without file virtualization.
- Capacity balancing (a toy sketch follows this list)
  - Past – capacity balancing started out as a way to aggregate the performance of multiple storage devices.
  - Present – customers are now using capacity balancing in a couple of new ways:
    - To support applications that are scaling faster than the storage devices behind them. One user I know needed a workspace larger than their storage system supported for their video-on-demand application.
    - To improve backups by presenting large file systems to applications while breaking them up into smaller file systems behind the virtualization layer. I know another user who reduced backup times by up to 20x doing this.
    - To overcome the scale limitations of physical devices. File virtualization can help customers better utilize NetApp A-SIS volumes, for example, which have small maximum volume sizes on the smaller devices, by aggregating them into larger virtual volumes.
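Here's that capacity-balancing sketch – the volume names and numbers are hypothetical. New files simply land on whichever backing volume has the most free room, so the logical namespace can outgrow any single device while each physical file system stays small enough to back up in a sane window:

```python
def pick_volume(volumes):
    """volumes maps a volume name to (used_bytes, capacity_bytes)."""
    # Place new files on the volume with the most free space.
    return max(volumes, key=lambda name: volumes[name][1] - volumes[name][0])

volumes = {
    "vol_a": (800e9, 1000e9),  # 200 GB free
    "vol_b": (100e9, 1000e9),  # 900 GB free
    "vol_c": (500e9, 1000e9),  # 500 GB free
}
print(pick_volume(volumes))  # -> vol_b
```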
A great way to understand the value of this very relevant technology is to spend some time checking out the stories and case studies available at TechValidate.
So, enough – I think my point is made. Call it whatever you want – we chose to call it File Virtualization – it's critical to the future of file-based storage. It's critical to storage customers, and it's critical to the storage industry.
And, anyone who tries to tell you different is a big, fat, dummy.