When installing PernixData FVP or VMware vFlash Read Cache, you are introducing an acceleration layer that (drumroll)… accelerates data. Although this might seem very obvious, it has an impact on how you test the new environment. Let's start off by reviewing the traditional architecture and the new architecture.
The traditional centralized storage system
Until recently, when you designed a virtual infrastructure, the storage system was sized to provide two distinct elements: data services and I/O performance. Testing this layer is rather straightforward. Open up your favorite performance tool, choose your workload simulator, and run the test. Let's use IOmeter as an example. In IOmeter you prepare the drive, and a file (iobw.tst) is created on the persistent data layer. This persistent data layer is a datastore provided by your storage array. Any read or write I/O issued to this file reflects the performance you can expect when a similar pattern is fired at the array by a real-life application.
In the following scenario, IOmeter is set to do a random read test. IOmeter prepares the drive and starts to read random blocks from the iobw.tst file. The VM reads three blocks and retrieves blocks A, C, and B, in that order. The latency of this process is easy to compute, as you just have to measure how long a single I/O request, or an average of many I/O requests, takes to complete.
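To make that latency computation concrete, here is a minimal sketch of the measurement idea in Python, not IOmeter itself. The file name, block size, and sample count are illustrative assumptions; against a traditional array, this is essentially what any workload simulator is doing under the hood.

```python
import os
import random
import time

BLOCK = 4096           # 4 KiB reads, a common access size in these tests
PATH = "iobw.tst"      # the test file the tool prepares on the datastore
SAMPLES = 1000         # number of random reads to time

size = os.path.getsize(PATH)
blocks = size // BLOCK
latencies = []

# Note: without O_DIRECT, the OS page cache can serve repeated reads,
# so on a plain filesystem this measures cache hits as well.
with open(PATH, "rb", buffering=0) as f:
    for _ in range(SAMPLES):
        f.seek(random.randrange(blocks) * BLOCK)
        start = time.perf_counter()
        f.read(BLOCK)
        latencies.append(time.perf_counter() - start)

avg_ms = sum(latencies) / len(latencies) * 1000
print("average read latency: %.3f ms" % avg_ms)
```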
Testing the acceleration platform
Let's see what happens if we run the same test on an acceleration platform.
Did you benefit from the acceleration layer? If you run a test in which you read the data only once, you will actually see a degradation in performance. Not a shocking performance drop, but the platform will not accelerate the data, because you just introduced a false write into the data completion path. In essence, you didn't accelerate the data access; you just introduced more data movement without reaping the benefit of ever using that data again. For more info on false writes, I recommend reading the article "Write-Back and Write-Through policies in FVP".
The obvious statement in the introduction needs to be expanded a little: an acceleration platform accelerates writes (FVP) and subsequent read access. Therefore you need to change your default test patterns when testing the read performance of your architecture. You need to make sure the application reads the same data multiple times, as sketched below. This simulates real-life scenarios, and I know FVP is optimized for real-life performance requirements, not to shine in IOmeter "dance-offs".
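As an illustration, a read pattern that revisits the same data looks roughly like the following sketch. The working-set size and pass count are arbitrary assumptions; the point is simply that the same block offsets come back in later passes.

```python
import random

BLOCK = 4096
WORKING_SET_BLOCKS = 25600   # ~100 MiB working set (assumed for illustration)
PASSES = 3                   # read the same data several times

def reuse_pattern():
    """Yield block offsets that revisit the same working set on every pass.

    Pass 1 pulls the blocks from the array and populates the flash layer;
    passes 2 and later should be served from flash, which is the access
    you actually want to measure.
    """
    offsets = [i * BLOCK for i in range(WORKING_SET_BLOCKS)]
    for _ in range(PASSES):
        random.shuffle(offsets)  # random order within each pass
        yield from offsets
```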
It's for this reason that we recommend using real-life applications for testing. Use the applications you run in your environment, talk to the application owners, and identify the bottlenecks. Then, when you test the application on your acceleration platform, you can measure the benefit of having the data closer to the application.
However, if you want to use IOmeter, we recommend warming the platform by running the same test twice in succession. This ensures that most or all data from the iobw.tst file is on the flash resource and that you are testing flash access instead of going all the way down to the storage array again. In addition, size the iobw.tst file correctly so that it is served from disk instead of from the cache of the storage controller. If you want to simulate real-life performance, you cannot expect all your data to sit in the storage array controller cache.
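Putting those two recommendations together, a warm-then-measure run could look like the sketch below. The controller cache size and the sizing multiplier are assumptions you would replace with your array's actual numbers; the path and sample count are likewise illustrative.

```python
import os
import random
import time

BLOCK = 4096
PATH = "iobw.tst"                  # hypothetical path to the prepared test file
CONTROLLER_CACHE = 8 * 1024**3     # assumed 8 GiB of array controller cache
SIZE_MULTIPLIER = 4                # size the file well past the cache

def random_read_pass(path, samples=10000):
    """Time one pass of random reads; return the average latency in ms."""
    blocks = os.path.getsize(path) // BLOCK
    total = 0.0
    with open(path, "rb", buffering=0) as f:
        for _ in range(samples):
            f.seek(random.randrange(blocks) * BLOCK)
            start = time.perf_counter()
            f.read(BLOCK)
            total += time.perf_counter() - start
    return total / samples * 1000

# Size check: keep the file large enough that it cannot fit in controller cache.
assert os.path.getsize(PATH) >= SIZE_MULTIPLIER * CONTROLLER_CACHE, \
    "test file too small; reads may be served from the array cache"

warm = random_read_pass(PATH)      # first pass warms the flash layer
measured = random_read_pass(PATH)  # second pass measures flash access
print(f"pass 1 (warming): {warm:.3f} ms, pass 2 (flash): {measured:.3f} ms")
```

The two-pass structure mirrors the advice above: the first identical run populates the flash resource, and only the second run tells you what the acceleration layer actually delivers.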
Chethan will soon publish his first article, explaining IOmeter settings in more detail. This article will be updated with the link as soon as possible.