
frankdenneman.nl


Do you use vApps?

February 26, 2013 by frankdenneman

We’re interested in learning more about how you use vApps for workload provisioning today and how you envision it evolving in the future.
If you have a couple of spare minutes, please fill out these 15 questions: http://www.surveymethods.com/EndUser.aspx?FFDBB7AEFDB5AAAAFB
Thanks!

Filed Under: DRS

Reserve all guest memory (all locked)

February 21, 2013 by frankdenneman

Some applications do not perform well when memory is reclaimed from the virtual machine. Most users set a virtual machine memory reservation to prevent memory reclamation and to ensure stable performance levels.
Memory reservation settings are static, meaning that when you change the memory configuration of the virtual machine itself, the memory reservation remains the same. If you want to keep the reservation equal to the virtual machine memory configuration, the UI (in both the vSphere client and the web client) offers the setting: “Reserve all guest memory (all locked)”.
This setting is linked to the virtual machine memory configuration. The memory reservation is immediately readjusted when the memory configuration changes. Increase the memory size and the memory reservation is automatically increased as well. Reduce the memory size of a virtual machine, and the reservation is immediately reduced.
This behavior is extremely useful when using the vSphere client as the management tool. Within the vSphere client the memory configuration and the memory reservation settings do not share the same screen. While changing the memory configuration, one can easily forget to adjust the memory reservation.
[Screenshot: virtual machine memory configuration in the vSphere client]
[Screenshot: the “Reserve all guest memory (all locked)” setting in the vSphere client]
The web client is redesigned and shows the memory configuration and the reservation in a single screen. Yet having a setting that automates and controls the alignment of memory configuration and reservation reduces the chance of human error.
[Screenshot: the “Reserve all guest memory (all locked)” setting in the web client]
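For those who prefer to script this, the same option is exposed in the vSphere API as the memoryReservationLockedToMax flag of the virtual machine configuration. The snippet below is a minimal pyVmomi sketch; the vCenter address, credentials and virtual machine name are placeholders you need to replace with your own values.

# Minimal pyVmomi sketch: enable "Reserve all guest memory (all locked)".
# vCenter address, credentials and the VM name are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="***",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "VM1")  # placeholder VM name

    # memoryReservationLockedToMax is the API counterpart of the
    # "Reserve all guest memory (all locked)" checkbox: the reservation
    # automatically follows the configured memory size.
    spec = vim.vm.ConfigSpec(memoryReservationLockedToMax=True)
    WaitForTask(vm.ReconfigVM_Task(spec=spec))
finally:
    Disconnect(si)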

Filed Under: Memory

PernixData Flash Virtualization Platform will revolutionize virtual infrastructure ecosystem design

February 20, 2013 by frankdenneman

A couple of weeks ago I was fortunate enough to attend a tech preview of PernixData Flash Virtualization Platform (FVP). Today PernixData exited stealth mode, so we can finally talk about FVP. Duncan already posted a lengthy article about PernixData and FVP and I recommend you read it.
At this moment a lot of companies are focusing on flash-based solutions. PernixData distinguishes itself in today’s flash-focused world by providing a new flash-based technology that is neither a storage-array-based solution nor a server-bound service. I’ll expand on what FVP does in a bit; first, let’s take a look at the aforementioned solutions and their drawbacks. A storage-array-based flash solution is plagued by simple physics: the distance between the workload and the fast medium (flash) generates a higher latency than when the flash is placed near the workload. Placing the flash inside a server provides the best performance, but it must be shared between the hosts in the cluster to become a true enterprise solution. If the solution breaks important functions such as DRS and vMotion, then the use case of this technology remains limited.
FVP solves these problems by providing a flash-based data tier that becomes a cluster-based resource. FVP virtualizes server-side flash devices such as SSD drives or PCIe flash devices (or both) and pools these resources into a data tier that is accessible to all the hosts in the cluster. One feature that stands out is remote access. By allowing access to remote devices, FVP allows the cluster to migrate virtual machines around while still offering performance acceleration. Therefore cluster features such as HA, DRS and Storage DRS are fully supported when using FVP.
Unlike other server-based flash solutions, FVP accelerates both read and write operations, turning the flash pool into a “data-in-motion” tier. All hot data exists in this tier, turning the compute layer into an all-IOPS-providing platform. Data that is at rest is moved to the storage array level, turning this layer into the capacity platform. By keeping the I/O operations as close to the source (the virtual machines) as possible, performance is increased while the traffic load to the storage platform is reduced. By filtering out read I/Os, the traffic pattern to the array is changed as well, allowing the array to focus more on writes.
Another great option is the ability to configure multiple protection levels when using write-back: data is synchronously replicated to remote flash devices. During the tech preview Satyam and Poojan provided some insights into the available protection levels, however I’m not sure if I’m allowed to share these publicly. For more information about FVP, visit Pernixdata.com.
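To make the read/write path a bit more tangible, here is a small conceptual sketch of a write-back flash tier that acknowledges a write once the local flash device and a replica on a remote device both hold the data, and destages to the array in the background. This is purely an illustration of the concept, not PernixData code; all names are made up.

# Conceptual illustration only - not PernixData's implementation.
# Reads are served from flash when possible, writes are acknowledged once the
# local flash and a replica peer hold the data, and dirty blocks are destaged
# to the storage array asynchronously.
class FlashTier:
    def __init__(self, local_flash, peer_flash, array):
        self.local = local_flash   # dict acting as the host-local flash device
        self.peer = peer_flash     # dict acting as a remote flash device (replica)
        self.array = array         # dict acting as the backing storage array
        self.dirty = set()         # blocks not yet destaged to the array

    def read(self, block):
        if block in self.local:    # hot data: served from flash, array untouched
            return self.local[block]
        data = self.array[block]   # cold data: fetched from the array once...
        self.local[block] = data   # ...then promoted into the flash tier
        return data

    def write(self, block, data):
        self.local[block] = data   # write to local flash
        self.peer[block] = data    # synchronous replication to a remote device
        self.dirty.add(block)      # acknowledged here; the array is updated later

    def destage(self):
        for block in list(self.dirty):  # background: move data at rest to the array
            self.array[block] = self.local[block]
            self.dirty.discard(block)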
The beauty of FVP is that it’s not a virtual appliance and that it does not require any agents installed in the guest OS. FVP is embedded inside the hypervisor. This, for me, is the key reason to believe that this “data-in-motion” tier is only the beginning for PernixData. By having insight into the hypervisor and understanding the data flow of the virtual machines, FVP can become a true platform that accelerates all types of I/O. I do not see any reason why FVP would not be able to replicate/encrypt/duplicate any type of input and output of a virtual machine. 🙂
As you can see, I’m quite excited by this technology. I believe FVP is as revolutionary/disruptive as vMotion. It might not be as “flashy” (forgive the pun) as vMotion, but it sure is exciting to know that the limitation of use cases is actually the limitation of your imagination. I truly believe this technology will revolutionize virtual infrastructure ecosystem design.

Filed Under: Miscellaneous

Voting for the 2013 top virtualization blogs – A year in review

February 20, 2013 by frankdenneman

When Eric Siebert opens up the voting for the top VMware & virtualization blogs you know another (blogging) year has passed. First of all I want to thank Eric for organizing this year in year out. I know he spends an awful lot of time on this. Thanks Eric!
It’s amazing to see that there are more than 200 blogs dedicated to virtualization and that each month new blogs appear. Unfortunately I don’t have the time to read them all, but I do want to show my appreciation for the blog sites that I usually visit. Best newcomer is an easy one: Cormac Hogan. The content is absolutely great and he should be in the top 10. Then we have the usual suspects, my technical marketing colleagues and buddies: Alan Renouf, Rawlinson Rivera and William Lam. I start the day by making coffee, checking my email and logging into yellow-bricks.com. It’s the de facto standard of the virtualization blogs. Duncan’s blog provides not only technical in-depth articles, but also insights into the industry. Who else? Eric Sloof of course! Always nice to find out from his site that your white paper is published before you get the official word through company channels. 😉 Two relatively unknown blog sites with quality content: Erik Bussink and Rickard Nobel. These guys create awesome material. One blog that I’m missing in the list is the one from Josh Odgers. Great content. Hope to be able to vote for him next year.
When reviewing content from others, you end up reviewing the stuff you did yourself, and 2012 was a very busy year for me. During the year I published and co-authored a couple of white papers such as the vSphere Metro Cluster Case Study, Storage DRS interoperability guide and vCloud Director Resource Allocation Models.
I presented at a couple of VMUGs and at both VMworld San Francisco and Europe. The resource pool best practices session was voted one of the top 10 presentations of VMworld. And of course Duncan and I released the vSphere 5.1 Clustering Deepdive, also known as 50 Shades of Orange. ☺ I believe it’s the best one of the series.
[Image: vSphere 5.1 Clustering Deepdive (“50 Shades of Orange”) book cover]
In the meantime I ended up writing for the vSphere blog, appearing on a couple of podcasts and writing a little over 100 blog articles on frankdenneman.nl. I tend to focus on DRS, Storage DRS, SIOC and vMotion, but once in a while I like to write about something that gives a little peek into my life, such as the whiteboard desk or the documentaries I like to watch. It seems you like these articles as well, as they are frequently visited.
In my articles I try to give insight into the behavior of the features of vSphere, to help you understand the impact of these features. Understanding the behavior allows you to match your design to the requirements and constraints of the project/virtual infrastructure you’re working on. During my years in the field I was always looking for this type of information; by providing this material I hope to help out my fellow architects.
When you publish more than 100 articles you tend to like some more than others. While it’s very difficult to choose individual articles, I enjoyed spending time on writing a series of articles on the same topic, such as the series Architecture and design of datastore clusters (5 posts) and Designing your (Multi-NIC) vMotion network (5 posts). But I also like these individual posts:
• vSphere 5.1 vMotion Deepdive
• A primer on Network I/O Control
• vSphere 5.1 Storage DRS load balancing and SIOC threshold enhancements
• HA admission control is not a capacity management tool
• Limiting the number of storage vMotions
I hope you can spare a couple of minutes to cast your vote and show your appreciation for the effort these bloggers put into their work. Instead of picking the customary names, please look back and review last year; think about the cool articles you read that helped you or sparked your interest to dive into the technology yourself. Thanks!
I can’t wait to watch the Top 25 countdown show Eric, John and Simon did in the previous years.

Filed Under: Miscellaneous

Implicit anti-affinity rules and DRS placement behavior

February 19, 2013 by frankdenneman

Yesterday I had an interesting conversation with a colleague about affinity rules and whether DRS reviews the complete state of the cluster and the affinity rules when placing a virtual machine. The following scenario was used to illustrate the question:

The following affinity rules are defined:
1. VM1 and VM2 must stay on the same host
2. VM3 and VM4 must stay on the same host
3. VM1 and VM3 must NOT stay on the same host
If VM1 and VM3 are deployed first, everything will be fine, because VM1 and VM3 will be placed on two different hosts, and VM2 and VM4 will also be placed accordingly.
However, if VM1 is deployed first and then VM4, there isn’t an explicit rule that says these two need to be on separate hosts; this is implied by looking into the dependencies of the three rules created above. Would DRS be intelligent enough to recognize this? Or will it place VM1 and VM4 on the same host, so that by the time VM3 needs to be placed, there is a clear deadlock?

The situation where it’s not logical to place VM4 and VM1 on the same host can be deemed an implicit anti-affinity rule. It’s not a real rule, but if all virtual machines are operational, VM4 should not be on the same host as VM1. DRS doesn’t react to these implicit rules. Here’s why:
When provisioning a virtual machine, DRS first sorts the available hosts on utilization. Then it goes through a series of checks, such as the compatibility between the virtual machine and the host: does the host have a connection to the datastore? Is the vNetwork available on the host? Finally it checks whether placing the virtual machine violates any constraints. A constraint can be a VM-VM affinity/anti-affinity rule or a VM-Host affinity/anti-affinity rule.
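Purely as an illustration of this order of operations (this is not the actual DRS code and all names are invented for clarity), the placement logic boils down to something like this:

# Illustrative sketch of the placement steps described above - not DRS source code.
def select_host(vm, hosts, active_rules):
    # Hosts are considered from least to most utilized.
    for host in sorted(hosts, key=lambda h: h.utilization):
        # Compatibility checks: datastore and network connectivity.
        if vm.datastore not in host.datastores:
            continue
        if vm.network not in host.networks:
            continue
        # Constraint check: only currently active rules are evaluated;
        # there is no look-ahead to future power-on operations.
        if any(rule.violated_by(vm, host) for rule in active_rules):
            continue
        return host  # least utilized compatible host without rule violations
    return None      # no valid placement available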
In the scenario where VM1 is running, DRS is safe to place VM4 on the same host, as it does not violate any affinity rule. When DRS wants to place VM3, it determines that placing VM3 on the host VM4 is running on would violate the anti-affinity rule between VM1 and VM3. Therefore it will migrate VM4 the moment VM3 is deployed.
During placement, DRS only checks the currently active affinity rules and determines whether placement violates any of them. If not, then the host with the most connections and the lowest utilization is selected. DRS cannot be aware of any future power-on operations; there is no vCrystal ball. The next power-on operation might be 1 minute away or might be 4 days away. By allowing DRS to select the best possible placement, the virtual machine is provided an operating environment that has the most resources available at that time. If DRS took all the possible placement configurations into account, it could either end up in gridlock or place the virtual machine on a higher-utilized host for a long time in order to prevent a vMotion operation of another virtual machine to satisfy the affinity rule. All that time the virtual machine could have been performing better if it had been placed on a lower-utilized host. In the long run, dealing with constraints the moment they occur is far more economical.
Similar behavior occurs when creating a rule. DRS will not display a warning when you create a collection of rules that conflict once all virtual machines are turned on. As DRS is unaware of the intentions of the user, it cannot throw a warning. Maybe the virtual machines will not be powered on in the current cluster state, or maybe the rule set is in preparation for new hosts that will be added to the cluster shortly. Also understand that if a host is in maintenance mode, it is considered to be external to the cluster: it does not count as a valid destination and its resources are not used in the equation. However, we as users still see the host as part of the cluster. If those rule sets are created while a host is in maintenance mode, then according to the previous logic DRS would have to throw an error, while the user assumes the rules are correct because the cluster provides enough placement options. As clusters can grow and shrink dynamically, DRS deals with violations only when the rules become active, and that is during power-on operations (DRS placement).
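As a side note, VM-VM anti-affinity rules such as rule 3 in the scenario above can also be created through the vSphere API. The snippet below is a minimal pyVmomi sketch; the cluster object and the vm1/vm3 references are assumed to have been looked up elsewhere (for example via a container view) and the rule name is a placeholder.

# Minimal pyVmomi sketch: add a VM-VM anti-affinity rule to a DRS cluster.
# 'cluster', 'vm1' and 'vm3' are managed object references obtained elsewhere.
from pyVmomi import vim

rule_info = vim.cluster.AntiAffinityRuleSpec(
    name="separate-vm1-vm3",   # placeholder rule name
    enabled=True,
    vm=[vm1, vm3],             # VM1 and VM3 must NOT run on the same host
)
rule_spec = vim.cluster.RuleSpec(info=rule_info, operation="add")
config_spec = vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec])

# Note: vCenter accepts conflicting rule sets without a warning;
# violations only surface at power-on (DRS placement).
cluster.ReconfigureComputeResource_Task(spec=config_spec, modify=True)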

Filed Under: DRS
