Earlier this month Tintri ran a survey asking customers about the challenges they faced with virtualization and more. The full article can be found here. The survey made it clear that the most significant hurdles to virtualization expansion were with storage, storage performance and storage cost in particular, as shown in the graph below.
I’d have to echo those findings based on my own experience with virtualization. I’d steer more toward the storage cost aspect, but really storage performance drives storage cost to a large degree. For the most part, the tried-and-true way to guarantee storage performance has been to implement a Fibre Channel based solution. That means fibre switches, expensive storage arrays, and everything else that comes with them, like HBAs, ports, and cabling, all of which gets even more expensive as that type of environment grows.
In the environments I work in today, I consistently see projects delayed because the data center can’t keep up with the “Fast and Furious” storage demands driven by virtualization. Striking a balance between staying ahead of that demand, reducing the cost associated with virtualization, and continuing to deliver adequate performance to satisfy Service Level Agreements (SLAs) is crucial.
There are, of course, things like thin provisioning, linked clones, IP-based storage protocols, solid state drives (SSDs), and the vStorage APIs for Array Integration (VAAI) that help reduce storage capacity requirements and improve storage performance, but is it enough? I wonder if virtualization’s dependency on storage has surpassed our dependency on oil. Either way, I’m sure people will be looking at new ways of reducing storage costs and improving storage performance going forward.
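To make the thin provisioning point concrete, here is a minimal sketch of the capacity math. The VM sizes are hypothetical numbers I made up for illustration; real arrays and hypervisors track this at the block level, but the back-of-the-envelope accounting looks the same:

```python
# Sketch of thick vs. thin provisioning capacity accounting.
# All figures are hypothetical; (provisioned_gb, actually_written_gb) per VM.
vms = [
    (100, 30),
    (200, 45),
    (500, 120),
]

# Thick provisioning reserves the full provisioned size up front.
thick_used = sum(prov for prov, _ in vms)

# Thin provisioning only consumes what the guests have actually written.
thin_used = sum(written for _, written in vms)

print(f"Thick provisioning consumes: {thick_used} GB")   # 800 GB
print(f"Thin provisioning consumes:  {thin_used} GB")    # 195 GB
print(f"Capacity deferred: {thick_used - thin_used} GB")  # 605 GB
```

The saved capacity is really deferred spend: you still have to monitor growth and buy ahead of actual usage, which is exactly the balancing act described above.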