This question regularly pops up on Twitter and the community forums. And yes, it works, but VMware does not support vMotion interfaces in different subnets.
The reason is that this can break functionality in higher-level features that rely on vMotion to work.
If you think Routed vMotion (vMotion interfaces in different subnets) is something that should be available in the modern datacenter, please fill out a feature request. The more feature requests we receive, the more priority can be applied to the development process of the feature.
vMotion over layer 3?
Hey mate, assuming the prerequisites for vMotion are met – a round-trip latency of 5 ms or better (I thought I read somewhere this has been bumped up to 10 ms?) – then will this work (unsupported or not) in any scenario with different subnets? What scenarios/topologies could you see it not working in? Also, do you think routed vMotion has a limit on the number of hops it can handle, or does it also come back to latency/response time?
Andre, yes, the 10ms “Metro vMotion” capability is possible with Enterprise+ licensing.
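Those two latency limits are easy to mix up, so here is a minimal sketch of a pre-check against them. The helper function and its name are hypothetical, not part of any VMware tooling; the 5 ms standard limit and the 10 ms Enterprise Plus "Metro vMotion" limit are the figures discussed above.

```python
# Hypothetical pre-check: compare a measured round-trip time against the
# vMotion latency limits discussed above (5 ms standard, 10 ms with the
# Enterprise Plus "Metro vMotion" capability).
def vmotion_latency_ok(rtt_ms: float, metro_vmotion: bool = False) -> bool:
    limit_ms = 10.0 if metro_vmotion else 5.0
    return rtt_ms <= limit_ms

print(vmotion_latency_ok(7.5))                      # 7.5 ms fails the 5 ms limit
print(vmotion_latency_ok(7.5, metro_vmotion=True))  # but passes the 10 ms Metro limit
```

A 7.5 ms link is a good illustration of why the licensing distinction matters: it is out of bounds for standard vMotion but acceptable under Metro vMotion.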
I agree – I don’t think the complexity has much to do with *technically* supporting L3 vs. L2 for vMotion so much as the options L3 gives for people to attempt vMotion over all kinds of “weird” infrastructure. For example, the current L2 restriction tends to discourage people from attempting to configure vMotion over low-bandwidth WAN links, since they don’t typically try to span L2 networks over those same links. It’s one of those “it depends” things: the bandwidth and latency need to support transferring X MB of data between locations faster than those X MB are being changed on the source machine. If not, the target will never catch up — or will need to be suspended/throttled too much to be practical.
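The "transfer X MB faster than X MB is being changed" condition above can be sketched numerically. This is a rough back-of-the-envelope model of iterative pre-copy, not VMware's actual algorithm: each pass re-sends the pages dirtied during the previous pass, and the migration converges only if the link outruns the dirty rate. All figures are illustrative.

```python
def precopy_converges(memory_mb, dirty_rate_mbps, link_mbps, max_passes=30):
    """Rough pre-copy convergence check (simplified model, not VMware's
    real algorithm). Each pass transfers the data dirtied during the
    previous pass; convergence requires link_mbps > dirty_rate_mbps.
    Rates are in MB/s. Returns (converged, passes, final_dirty_mb)."""
    remaining = float(memory_mb)
    for passes in range(1, max_passes + 1):
        transfer_time = remaining / link_mbps        # seconds for this pass
        remaining = dirty_rate_mbps * transfer_time  # data dirtied meanwhile
        if remaining < 1:  # small enough for the final stun-and-send
            return True, passes, remaining
    return False, max_passes, remaining

# 8 GB VM dirtying 50 MB/s over a ~1 Gbps (125 MB/s) vMotion link: converges.
print(precopy_converges(8192, 50, 125))
# Same VM over a 25 MB/s WAN link: the dirty rate outruns the link,
# so the target never catches up.
print(precopy_converges(8192, 50, 25))
```

The key ratio is dirty rate over link speed: below 1, the outstanding data shrinks geometrically each pass; at or above 1, it grows, which is exactly the "never catch up" case described above.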
For L3 vMotion, I think a lot of people would give that a go just because it is now ‘supported.’ The support matrix just became exponentially more complex, and the number of support calls would likely increase the same way.
Beyond the bandwidth and latency issues, we have to look at security: that traffic carries the VM’s in-memory data and is not protected by the vSphere layer. Encryption would have to come into play if that traffic were to leave the datacenter — at least IMHO.