Today a brief conversation on Twitter caught my attention:
Ops complexity is not only a hidden cost for (flash) arrays but also for acceleration platforms as David Owens states:
I checked with David to see if he had tested PernixData and unfortunately he hasn’t (yet). His tweet might be more explicit by excluding FVP ;).
The FVP user experience is designed from the ground up to be as low touch as possible. With a minimal number of simple choices you should be able to accelerate any workload you are running. We believe that management of the platform itself should be minimal, allowing administrators to spend their time more productively.
Accelerating a workload in FVP
In FVP you can choose to accelerate a virtual machine or a datastore. A virtual machine is accelerated in its entirety; there is no need to select a specific virtual hard disk or other metrics. We believe it is better to accelerate all I/O generated by the virtual machine than to single out a subset of I/Os. Just select the virtual machine and choose the acceleration policy: write through or write back. When selecting write back you specify the level of write redundancy, and that’s it.
From that point on the I/Os are accelerated. There is no need to change the virtual machine configuration, move it to a different datastore, or install an agent or driver inside the guest.
Accelerating a datastore allows you to create a default acceleration policy for that datastore and follows the same workflow as accelerating a virtual machine. Select Add datastore in the Flash Cluster, select the datastore and choose the appropriate write policy. When selecting write back, select the level of write redundancy and click OK to accelerate all existing and future workloads on that datastore.
The write policy is automatically assigned to existing and newly provisioned virtual machines that land on this datastore. A virtual-machine-level acceleration policy overrides the default datastore policy, allowing you to differentiate specific workloads in a commonly accelerated system.
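The precedence rule can be sketched in a few lines of code. This is purely illustrative (FVP's internals are not public), and the function name and policy strings are hypothetical; it only models the behavior described above, where a VM-level policy wins over the datastore default:

```python
# Illustrative sketch only, not PernixData code. Hypothetical names model
# how a VM-level acceleration policy overrides the datastore default.

def effective_policy(vm_policy, datastore_policy):
    """Return the acceleration policy that applies to a virtual machine.

    vm_policy: policy set directly on the VM, or None if unset.
    datastore_policy: default policy of the datastore the VM lives on.
    """
    # A VM-level policy always takes precedence over the datastore default.
    return vm_policy if vm_policy is not None else datastore_policy

# A VM without its own policy inherits the datastore default ...
assert effective_policy(None, "write-through") == "write-through"
# ... while an explicit VM-level policy wins.
assert effective_policy("write-back", "write-through") == "write-back"
```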
Datastore acceleration is ideal for environments that require a similar set of services, for example VDI landscapes. Another great use case is a Storage DRS cluster. A best practice for a datastore cluster is to group datastores that are serviced by storage resources with the same set of data services and characteristics, such as spindle speed and spindle count. By applying a common acceleration policy to all datastores in the datastore cluster, you further reduce the number of operations in the provisioning process.
The virtual machine is provisioned, DRS selects an appropriate host based on compute resources, and Storage DRS recommends the most suitable datastore in the datastore cluster. As all datastores are accelerated, no further configuration is needed: FVP automatically assigns the acceleration policy when the virtual machine is provisioned on the datastore. Because all datastores in the datastore cluster are accelerated, migrations performed by Storage DRS do not impact acceleration. A virtual-machine-level acceleration policy is unaffected by a migration as well, as the VM-level policy overrides any datastore-level acceleration policy.
This even works with vCloud Director environments. Connect the provider vDC to the DRS and Storage DRS cluster, and when your customer provisions a vApp in their org vDC, it is automatically accelerated by the datastore-level acceleration policy set in your vSphere environment.
A change to a datastore-level policy is automatically pushed to the virtual machines, which allows the administrator to easily change the level of acceleration for a group of virtual machines. Again, VM-level acceleration policies override datastore-level policies and are therefore not affected by the datastore acceleration change.
Operational simplicity goes beyond the provisioning process, and FVP is designed to be as self-sustaining as possible. Take fault-tolerant write acceleration, for example. If an error occurs, such as a hardware failure or network loss, FVP protects the data by transitioning any write-back virtual machine to write-through mode for as long as the error persists. FVP transitions to this state automatically when an error is detected, and transitions back to the requested write policy once the error is resolved. This gives the architecture an acceleration platform whose highest priority is data availability and integrity, while reducing the amount of contact between the administrator and the platform itself. More info on fault-tolerant write acceleration
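The automatic transition described above behaves like a small state machine. The sketch below is illustrative only (not PernixData code, and all class and method names are hypothetical): on a fault, a write-back virtual machine falls back to write-through; once the fault clears, it returns to its requested policy without administrator intervention.

```python
# Illustrative state-machine sketch, not PernixData code. Hypothetical
# names model the described fault-tolerant write acceleration behavior.

class AcceleratedVM:
    def __init__(self, requested_policy):
        self.requested_policy = requested_policy  # policy the admin chose
        self.active_policy = requested_policy     # policy currently in effect

    def on_fault(self):
        # Protect data: fall back to write-through while the error persists.
        if self.requested_policy == "write-back":
            self.active_policy = "write-through"

    def on_fault_cleared(self):
        # Automatically restore the requested policy; no admin action needed.
        self.active_policy = self.requested_policy

vm = AcceleratedVM("write-back")
vm.on_fault()
assert vm.active_policy == "write-through"   # protected while the fault is current
vm.on_fault_cleared()
assert vm.active_policy == "write-back"      # restored automatically
```

Note that a write-through virtual machine never needs this fallback, since its writes are already acknowledged by the backing storage.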
Just install FVP at the infrastructure layer, add virtual machines to the flash platform, and see how the performance of your virtual machines increases without any interruption of service.