A Quick Understanding of Full Virtualization and Paravirtualization


  • Service Provider

    Full virtualization and paravirtualization (or PV, as it is commonly called) are two different approaches to virtualizing a system.

    Full virtualization is easier to conceptualize because it is truly a software representation of an existing computer. In the PC world this means building an entire PC "in software" so that an OS installed on top of it sees a full CPU, memory, disks, etc., all completely represented in software. The OS has no way to determine whether this is hardware or software, because everything is visible exactly as it would be on a physical machine. There is no modification to the OS being installed because there is no need; the virtual PC is identical to a physical one as far as the OS is concerned.

    Full virtualization generally uses hardware-assisted acceleration, where the CPU does some of the work of virtualization for performance reasons. This is not always available and is a recent addition in the PC world, arriving only with the move to the AMD64 architecture. Full virtualization can also be done completely in software, which is how VMware ESX was able to be on the market some time before hardware virtualization acceleration was available.
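
    For anyone who wants to check whether a given box has this hardware assistance, Linux exposes the relevant CPU feature flags. Here is a minimal sketch, assuming a Linux host; the flags "vmx" (Intel VT-x) and "svm" (AMD-V) are the standard markers:

    ```python
    # Minimal sketch: scan /proc/cpuinfo for the CPU feature flags that
    # indicate hardware virtualization assistance on a Linux host.
    # "vmx" = Intel VT-x, "svm" = AMD-V.

    def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
        """Return 'Intel VT-x', 'AMD-V', or None if no hardware assist."""
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    if "vmx" in flags:
                        return "Intel VT-x"
                    if "svm" in flags:
                        return "AMD-V"
        return None  # full virtualization would have to be done in software

    if __name__ == "__main__":
        print(hardware_virt_support() or "No hardware virtualization support found")
    ```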

    Paravirtualization is different in that it makes no attempt to hide the fact that the system is not truly physical; it actually presents itself as a slightly different architecture to the OS being installed. This means that the OS has to be PV-aware, with full support for the PV architecture built in, which in turn means that any OS that will run paravirtualized must have this support compiled in from the beginning. So this is anything but a casual or transparent process. The only major OS family with PV support baked in today is Linux, and the only enterprise paravirtualization platform is Xen. Paravirtualization allows for faster speeds and better stability because there is less overhead, fewer lines of code, and better communication between the platform and the guest. (A guest can actually tell which mode it is running in; see the sketch after the list below.)

    Common Virtualization Platforms for PC: ESXi, Xen, KVM, Hyper-V, VirtualBox, Virtual PC
    Common Paravirtualization Platform for PC: Xen
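
    Since a PV guest knows it is virtualized, a Linux guest can actually report which mode it is running in. A rough sketch, assuming a Linux kernel built with Xen support; the sysfs paths shown are what current kernels expose, but they vary by kernel version:

    ```python
    # Rough sketch: report whether this Linux guest is running under Xen
    # and, where the kernel exposes it, in which mode (PV, HVM, or PVH).
    # Assumes a kernel with Xen support; sysfs paths vary by version.

    import os

    def xen_guest_mode():
        if not os.path.exists("/sys/hypervisor/type"):
            return "no hypervisor interface exposed"
        with open("/sys/hypervisor/type") as f:
            if f.read().strip() != "xen":
                return "running under a non-Xen hypervisor"
        try:
            # Newer kernels expose the guest type directly.
            with open("/sys/hypervisor/guest_type") as f:
                return "Xen guest, mode: " + f.read().strip()
        except FileNotFoundError:
            return "Xen guest, mode not exposed by this kernel"

    print(xen_guest_mode())
    ```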


  • Service Provider

    To give full virtualization platforms improved speed and stability, a hybrid approach is commonly used by all major full virtualization platforms: take key drivers (such as the NIC and storage devices) and make paravirtualized drivers for those specific needs. These PV drivers don't mimic physical devices; they are completely custom-designed software on both sides that talk directly, without the overhead of pretending to be a physical device.

    This method is flexible because OSes for which drivers have been created get most of the benefits of paravirtualization (but not all), and OSes that lack PV drivers can fall back to fully virtualized devices and their drivers.
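
    As an illustration, you can see which PV drivers a Linux guest has picked up by looking at its loaded modules. This is only a sketch; the module names below are the common Xen and KVM front-end drivers, and drivers compiled directly into the kernel won't show up this way:

    ```python
    # Illustrative sketch: look for paravirtualized drivers loaded in a
    # Linux guest. xen_netfront / xen_blkfront are the Xen PV NIC and
    # disk frontends; virtio_net / virtio_blk are KVM's rough
    # equivalents. Built-in (non-module) drivers won't appear here.

    PV_MODULES = {
        "xen_netfront": "Xen PV network frontend",
        "xen_blkfront": "Xen PV block frontend",
        "virtio_net": "virtio (KVM) network driver",
        "virtio_blk": "virtio (KVM) block driver",
    }

    with open("/proc/modules") as f:
        loaded = {line.split()[0] for line in f}

    found = [desc for name, desc in PV_MODULES.items() if name in loaded]
    print("\n".join(found) if found else "No PV driver modules loaded")
    ```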


  • Service Provider

    Xen is unique in that it supports full PV, PV drivers and full virtualization. It is the only platform for PC that offers this. (It is also the only platform for ARM.)

    This is a driving factor in the choice of Xen for large cloud platforms (Amazon, Rackspace, IBM / SoftLayer, etc.) because it provides higher deployment density, higher performance, better stability, etc. for the bulk of workloads (as Linux is the vast majority of workloads). But it is flexible and can run any OS via full virtualization, and most with the benefit of PV drivers.


  • Service Provider

    When do you choose PV, virtualization with PV drivers, or straight virtualization?

    This is actually pretty simple. PV is by far the best choice and you want it whenever possible. Lowest overhead, most stability. If it exists, this is what you want.

    If you cannot do a full PV install (for example, if you don't have Xen or you are not installing Linux) then you are forced to go with regular virtualization and use PV drivers if they are available to you.

    You always want as much PV as you can get: full PV when possible, PV drivers when not, and no PV at all purely as a fallback for when there is simply no other choice.
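
    That preference order can be written down as a tiny decision function. This is just a toy sketch, and the parameter names are invented for illustration:

    ```python
    # Toy sketch of the decision logic above; parameter names invented.

    def choose_virt_mode(platform_supports_pv: bool,
                         guest_is_pv_aware: bool,
                         pv_drivers_available: bool) -> str:
        if platform_supports_pv and guest_is_pv_aware:
            return "full PV"  # best: lowest overhead, most stability
        if pv_drivers_available:
            return "full virtualization with PV drivers"  # next best
        return "full virtualization"  # pure fallback, no other choice

    # Example: a Windows guest on Xen -- no full PV, but PV drivers exist.
    print(choose_virt_mode(platform_supports_pv=True,
                           guest_is_pv_aware=False,
                           pv_drivers_available=True))
    ```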



  • I've seen a couple articles that state you get better performance with PVHVM vs PV. Is this true in your experience?



  • I also read this in an article; they said Debian under PVHVM is 2 or 3 times faster than Debian PV:
    https://xen-orchestra.com/blog/debian-pvhvm-vs-pv/
    So based on your experience, Mr. Scott, that is not true, right?


  • Service Provider

    @johnhooks said:

    I've seen a couple articles that state you get better performance with PVHVM vs PV. Is this true in your experience?

    Should not even be possible. I've only seen that from places that seemed to be trying to show that other platforms were keeping up with Xen's speed.


  • Service Provider

    @IT-ADMIN said:

    I also read this in an article; they said Debian under PVHVM is 2 or 3 times faster than Debian PV:
    https://xen-orchestra.com/blog/debian-pvhvm-vs-pv/
    So based on your experience, Mr. Scott, that is not true, right?

    I've seen rS making this claim. The only reason that I can imagine that this has value is if the hardware assistance has become that highly performant, which would indicate that there is simply more tweaking to be done to the PV model to leverage the hardware fully.



  • Ok thanks. Here's one article, if you wanted it. http://www.frederickding.com/posts/2014/07/pvhvm-centos-7-xenserver-312113/



  • Does providing this level of access to the hardware remove any of the separation strengths of a VM? In other words, would viruses, etc. have an easier time breaking out of their containers because they are PV instead of HVM?


  • Service Provider

    @Dashrender said:

    Does providing this level of access to the hardware remove any of the separation strengths of a VM? In other words, would viruses, etc. have an easier time breaking out of their containers because they are PV instead of HVM?

    Theoretically, yes. But the separation is still complete. It's good to wonder, but there are no threats known (to me) that use either.



  • @scottalanmiller said:

    @Dashrender said:

    Does providing this level of access to the hardware remove any of the separation strengths of a VM? In other words, would viruses, etc. have an easier time breaking out of their containers because they are PV instead of HVM?

    Theoretically, yes. But the separation is still complete. It's good to wonder, but there are no threats known (to me) that use either.

    There was a threat recently found in, I think it was, ESXi, through the floppy drive controller, that would allow one VM to break out of its container and access the hypervisor and the other VMs. Though I don't believe it was ever found in the wild.


  • Service Provider

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.



  • @scottalanmiller said:

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?


  • Service Provider

    @Dashrender said:

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?

    That's not a useful example; one anecdote of an undiscovered bug leads us nowhere. You can't assume that "more" and "better" mean perfect, nor that a lack of perfection suggests that more and better are not true. Open source is a more secure model. Just because a model is secure does not mean that every project using that model is secure, either; it's the approach that is more secure.

    Also, the fact that open source projects announce when a flaw is found (and that this one was patched in hours) actually reflects just how secure they are and should be counted in their favour, not against them. Closed source projects have decades-old flaws too, but rarely announce them and rarely have a good way to prove how long a flaw has been around.

    And that code actually is audited all of the time. Companies like Red Hat, SUSE, Canonical, IBM, the big banks, big governments, the Linux Foundation, etc. are constantly auditing code. No code is or ever will be perfect; we are looking for better, not perfect.


  • Service Provider

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.



  • @Dashrender said:

    @scottalanmiller said:

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?

    If I remember correctly, a lot of open source companies (ones that sell support) actually have bug bounties where you can be paid for finding bugs in the open source software, as opposed to certain closed source vendors who have threatened lawsuits against people who reveal vulnerabilities.



  • @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

    I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time with it today in the sea of Open Source software (again I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, and why did they know - it's just as likely that they stumbled upon it as it is that they scoured the code and found it.



  • Now, does this mean you should stop using Open Source code? No, of course not. Just be aware.


  • Service Provider

    @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

    I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time with it today in the sea of Open Source software (again I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, and why did they know - it's just as likely that they stumbled upon it as it is that they scoured the code and found it.

    Completely not relevant. Open source will ALWAYS be more secure than closed source code, simply because it IS reviewed by a third party.

    Want an example of how bad closed source code is? EverQuest.
    EQ was reverse engineered in less than a year, to the point that people were able to begin injecting their own packet data into the stream. After 2 years it was almost completely exposed, and things like MacroQuest were developed.

    At the same time, other people actually made use of that hacked information and began to create the first EverQuest emulators.

    Today, there are emulator servers with hack detection so good that it is (so far as is publicly known) not possible to play on those emulated servers with hacks, compared to Sony's servers, where use of MacroQuest is a given for almost the entire population.


  • Service Provider

    @Dashrender said:

    Now, does this mean you should stop using Open Source code? No, of course not. Just be aware.

    You need to be aware of ALL software, but Open Source the least. Every concern you have with open source you also have with closed source, plus more. The logic is that simple. Open source introduces no risk from being open but eliminates many and encourages better practice. Closed source literally introduces some types of risks and encourages others.

    So yes, you need to be aware. But stating it like this is misleading. You need to be less aware.


  • Service Provider

    @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

    I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time with it today in the sea of Open Source software (again I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, and why did they know - it's just as likely that they stumbled upon it as it is that they scoured the code and found it.

    You would argue it based on what reasoning? Pointing to OpenSSL means nothing. Like I said, that case showed how good open source was, not what you are implying. Also, it is an anecdote and means nothing, literally nothing. We find issues like this in closed source too, every day. But you aren't pointing to those, only to OpenSSL. Why?


  • Service Provider

    @coliver said:

    @Dashrender said:

    @scottalanmiller said:

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?

    If I remember correctly, a lot of open source companies (ones that sell support) actually have bug bounties where you can be paid for finding bugs in the open source software, as opposed to certain closed source vendors who have threatened lawsuits against people who reveal vulnerabilities.

    Yes, it is very common for there to be bug and security bounties, not just from the groups that make the code but also from companies that use it.


  • Service Provider

    Open source can also be fixed by anyone, not only by the company that owns the rights to fix it.



  • @scottalanmiller said:

    @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

    I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time with it today in the sea of Open Source software (again I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, and why did they know - it's just as likely that they stumbled upon it as it is that they scoured the code and found it.

    You would argue it based on what reasoning? Pointing to OpenSSL means nothing. Like I said, that case showed how good open source was, not what you are implying. Also, it is an anecdote and means nothing, literally nothing. We find issues like this in closed source too, every day. But you aren't pointing to those, only to OpenSSL. Why?

    I'm pointing to this in OpenSSL because it is open source - i.e., anyone and their dog can read the source code and find this problem, and frankly someone should have ages ago (and I'm sure that someone actually did, but didn't report it). At least with closed source software you must decompile it and trudge through the machine code to find mistakes.

    Also, don't take this as my saying that closed source is better - I'm not. I'm just saying that anyone who isn't already familiar with this situation needs to be aware that just because something is open source in no way implies that anyone has ever done an audit, let alone a security audit, of the code.

    The audit is something that can be easily done because it's open source (but who is going to pay for it?). Obviously you can't do this with closed source unless you're the owner and you hire an audit team.


  • Service Provider

    @Dashrender said:

    I'm pointing to this in OpenSSL because it is open source - i.e., anyone and their dog can read the source code and find this problem, and frankly someone should have ages ago (and I'm sure that someone actually did, but didn't report it). At least with closed source software you must decompile it and trudge through the machine code to find mistakes.

    This is not how vulnerabilities are found, however, and many have gone through the code without finding the flaw. It isn't as if you just look at code and see the flaw; if people could, it wouldn't have happened in the first place. Being able to look through code finds some flaws, but not most. What it specifically finds is intentional back doors, secret holes, bad ideas, fragility, etc.

    Open source is always very, very easy to fix whereas closed source is not.

    Remember this?

    http://mangolassi.it/topic/5692/old-smb-security-flaw-still-exists

    They found that too. But unlike the OpenSSL flaw, which was fixed the same day, this one has just remained. So reading the code didn't play a major role in this one set of cases, but being closed source has left one vulnerable while the other was patched the same day.


  • Service Provider

    @Dashrender said:

    The audit is something that can be easily done because it's open source (but who is going to pay for it?).

    You state this in a leading way to try to make a point that doesn't exist (that audits do not happen). Lots of companies audit this code, as we have already discussed. Few small businesses do this (nor should they), but many large companies and organizations do perform audits. It might feel like something you would not pay to do, but those seriously concerned about security and/or stability certainly do perform audits.


  • Service Provider

    @Dashrender said:

    Also, don't take this as my saying that closed source is better - I'm not. I'm just saying that anyone who isn't already familiar with this situation needs to be aware that just because something is open source in no way implies that anyone has ever done an audit, let alone a security audit, of the code.

    No, but you are implying that open source is equal or worse, and it is not. It is better (or equal). It literally has no downsides compared to closed source for the end users (obviously, what is bad for the customers might be good for the vendor), though it does require some customers (not every customer) to leverage the openness for it to remain beneficial for all: one enterprise doing an audit and checking or improving code helps everyone. The same code will always be better or equal open source than closed source.

    You are completely correct that no one should think that the nature of a license for code visibility would mean that it is magic and that audits are automatic - but I've never heard of anyone implying or believing such a thing. I think we were all assuming that no one thought that open sourcing code was doing anything like that.

    But we also have the vast majority of enterprise open source software being audited all the time. So on one hand, we have to be aware of basics, like the fact that a source license does not imply an audit. On the other hand, we have to understand that major companies certainly do audit core code, especially security code, regularly, and that there is a level of auditing going on with enterprise open source that exists nowhere else.


