A Quick Understanding of Full Virtualization and Paravirtualization


  • Service Provider

    @Dashrender said:

Does providing this level of access to the hardware remove any of the separation strengths of a VM? In other words, would viruses, etc. have an easier time breaking out of their containers because they are PV instead of HVM?

Theoretically, yes. But the separation is still complete. It's good to wonder, but there are no known (to me) threats that exploit either.
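The PV/HVM distinction being discussed here can be sketched conceptually. Below is a minimal Python model (all class and method names are invented for illustration; real hypervisors work at the instruction level, not in Python): an HVM guest runs unmodified and its privileged instructions trap to the hypervisor, while a PV guest's modified kernel makes explicit hypercalls. Either way the hypervisor mediates every privileged operation, which is why the separation remains intact in both modes.

```python
# Conceptual sketch (not real hypervisor code): contrasts how an HVM guest
# and a PV guest reach privileged functionality. Names are illustrative.

class Hypervisor:
    def __init__(self):
        self.handled = []  # log of every privileged operation it mediated

    # HVM path: the guest executes an unmodified privileged instruction;
    # the CPU traps it and the hypervisor emulates the effect.
    def trap_and_emulate(self, instruction):
        self.handled.append(("trap", instruction))
        return f"emulated {instruction}"

    # PV path: the modified guest kernel calls the hypervisor directly
    # (a "hypercall") instead of executing the privileged instruction.
    def hypercall(self, operation):
        self.handled.append(("hypercall", operation))
        return f"performed {operation}"

class HVMGuest:
    """Unmodified OS: issues raw privileged instructions, unaware of the hypervisor."""
    def update_page_table(self, hv):
        return hv.trap_and_emulate("mov cr3")  # reached via CPU trap, not a direct call

class PVGuest:
    """Modified OS: knows it is virtualized and asks the hypervisor explicitly."""
    def update_page_table(self, hv):
        return hv.hypercall("set_page_table")

hv = Hypervisor()
print(HVMGuest().update_page_table(hv))
print(PVGuest().update_page_table(hv))
print(hv.handled)  # both paths were mediated by the hypervisor
```

The point of the sketch: PV's closer cooperation with the hypervisor changes *how* the guest reaches privileged operations, not *whether* the hypervisor mediates them.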



  • @scottalanmiller said:

    @Dashrender said:

Does providing this level of access to the hardware remove any of the separation strengths of a VM? In other words, would viruses, etc. have an easier time breaking out of their containers because they are PV instead of HVM?

Theoretically, yes. But the separation is still complete. It's good to wonder, but there are no known (to me) threats that exploit either.

There was a threat recently found in, I think it was, ESXi, through the floppy drive controller, that would allow one VM to break out of its container and access the hypervisor and the other VMs. Though I don't believe it was ever found in the wild.


  • Service Provider

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.



  • @scottalanmiller said:

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?


  • Service Provider

    @Dashrender said:

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?

That's not a useful example. One anecdote of an undiscovered bug leads us nowhere. You can't assume that "more" and "better" mean perfect, nor that a lack of perfection suggests that more and better are not true. Open source is a more secure model. Just because a model is secure does not mean that every project using that model is secure, either. It's the approach that is more secure.

Also, the fact that open source projects announce when a flaw is found (and patched within hours) actually reflects just how secure they are and should be counted in their favour, not against them. Closed source projects have decades-old flaws too, but rarely announce them and rarely have a good way to prove how long a flaw has been around.

And the code actually is audited all of the time. Companies like Red Hat, SUSE, Canonical, IBM, the big banks, big governments, the Linux Foundation, etc. are constantly auditing code. No code is or ever will be perfect. We are looking for better, not perfect.


  • Service Provider

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.



  • @Dashrender said:

    @scottalanmiller said:

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?

    If I remember correctly a lot of open source companies (ones that sell support) actually have bug bounties where you can be paid for finding bugs in the open source software. As opposed to certain closed source vendors who have threatened lawsuits against people who reveal vulnerabilities.



  • @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time today in the sea of open source software (again, I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, or how they knew? It's just as likely that they stumbled upon it as it is that they scoured the code and found it.



Now, does this mean to stop using open source code? No, of course not. Just be aware.


  • Service Provider

    @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time today in the sea of open source software (again, I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, or how they knew? It's just as likely that they stumbled upon it as it is that they scoured the code and found it.

Completely not relevant. Open source will ALWAYS be more secure than closed source code, simply because it IS reviewed by third parties.

Want an example of how bad closed source code is? EverQuest.
EQ was reverse engineered in less than a year, to the point that people were able to begin injecting their own packet data into the stream. After two years it was almost completely exposed. Tools like MacroQuest were developed.

At the same time, other people began to make actual use of that hacked information and created the first EverQuest emulators.

Today, there are emulator servers with hack detection so good that it is (as far as is publicly known) not possible to play on those emulated servers with hacks. Compare that to Sony's servers, where use of MacroQuest is a given for almost the entire population.


  • Service Provider

    @Dashrender said:

Now, does this mean to stop using open source code? No, of course not. Just be aware.

You need to be aware of ALL software, but of open source the least. Every concern you have with open source you also have with closed source, plus more. The logic is that simple. Open source introduces no risk from being open but eliminates many risks and encourages better practice. Closed source literally introduces some types of risk and encourages others.

    So yes, you need to be aware. But stating it like this is misleading. You need to be less aware.


  • Service Provider

    @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time today in the sea of open source software (again, I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, or how they knew? It's just as likely that they stumbled upon it as it is that they scoured the code and found it.

You would argue it based on what reasoning? Pointing to OpenSSL means nothing. Like I said, that case showed how good open source was, not what you are implying. Also, it is an anecdote and means nothing, literally nothing. We find issues like this in closed source too, every day. But you aren't pointing to those, only to OpenSSL. Why?


  • Service Provider

    @coliver said:

    @Dashrender said:

    @scottalanmiller said:

    I think the biggest threat is the oversight of the code rather than the technology approach. If security is the top concern then the open source aspect of Xen is a big deal for security. Far more important than PV vs FV.

    Traditionally I'd agree with you. But the reality is that the code is rarely audited. How long was that bug in OpenSSL before it was discovered? A decade, wasn't it?

    If I remember correctly a lot of open source companies (ones that sell support) actually have bug bounties where you can be paid for finding bugs in the open source software. As opposed to certain closed source vendors who have threatened lawsuits against people who reveal vulnerabilities.

    Yes, it is very common for there to be bug and security bounties, not just from the groups that make the code but also from companies that use it.


  • Service Provider

    Open source can also be fixed by anyone, not only by the company that owns the rights to fix it.



  • @scottalanmiller said:

    @Dashrender said:

    @scottalanmiller said:

    @Dashrender said:

    But the reality is that the code is rarely audited.

    Again, you are looking for perfect. Rarely audited is still better than never audited.

I'd argue this point at this time. If you really want to find something to exploit, you might have an easier time today in the sea of open source software (again, I point to OpenSSL). Who knows how long the NSA knew about that problem and just didn't report it, or how they knew? It's just as likely that they stumbled upon it as it is that they scoured the code and found it.

You would argue it based on what reasoning? Pointing to OpenSSL means nothing. Like I said, that case showed how good open source was, not what you are implying. Also, it is an anecdote and means nothing, literally nothing. We find issues like this in closed source too, every day. But you aren't pointing to those, only to OpenSSL. Why?

I'm pointing to this in OpenSSL because it is open source - i.e. anyone and their dog can read the source code and find this problem, and frankly someone should have ages ago (and I'm sure that someone actually did, but didn't report it). At least with closed source software you must decompile it and trudge through the machine code to find mistakes.

Also, don't take this as my saying that closed source is better - I'm not. I'm just saying that anyone who isn't already familiar with this situation needs to be aware that just because something is open source in no way implies that anyone has ever done an audit, let alone a security audit, of the code.

    The audit is something that can be easily done because it's open source (but who is going to pay for it?). Obviously you can't do this with closed source unless you're the owner and you hire an audit team.


  • Service Provider

    @Dashrender said:

I'm pointing to this in OpenSSL because it is open source - i.e. anyone and their dog can read the source code and find this problem, and frankly someone should have ages ago (and I'm sure that someone actually did, but didn't report it). At least with closed source software you must decompile it and trudge through the machine code to find mistakes.

This is not how vulnerabilities are found, however, and many have gone through it and not found the flaw. It isn't as if you just look at code and see the flaw; if you could, it wouldn't have happened in the first place. Being able to look through code finds some flaws, but not most. What it specifically finds is intentional back doors, secret holes, bad ideas, fragility, etc.

    Open source is always very, very easy to fix whereas closed source is not.

    Remember this?

    http://mangolassi.it/topic/5692/old-smb-security-flaw-still-exists

They found that too. But unlike the OpenSSL flaw, which was fixed the same day, this one has just remained. So reading the code didn't play a major role in either case, but being closed source has left one vulnerable while the other was patched the same day.


  • Service Provider

    @Dashrender said:

    The audit is something that can be easily done because it's open source (but who is going to pay for it?).

You state this in a leading way to try to make a point that doesn't exist (that audits do not happen). Lots of companies audit this code, as we have already discussed. Few small businesses do this (nor should they), but many large companies and organizations do perform audits. It might feel like something you would not pay to do, but those seriously concerned about security and/or stability certainly do run audits.


  • Service Provider

    @Dashrender said:

Also, don't take this as my saying that closed source is better - I'm not. I'm just saying that anyone who isn't already familiar with this situation needs to be aware that just because something is open source in no way implies that anyone has ever done an audit, let alone a security audit, of the code.

No, but you are implying that open source is equal or worse, and it is not. It is better (or equal). It literally has no downsides compared to closed source for the end users (obviously, what is bad for the customers might be good for the vendor), though it does require some customers (not every customer) to leverage the openness for it to remain beneficial for all: one enterprise auditing and improving the code helps everyone. The same code made open will always be better than, or equal to, the same code kept closed.

You are completely correct that no one should think that a code-visibility license is magic and makes audits automatic - but I've never heard of anyone implying or believing such a thing. I think we were all assuming that no one thought that open sourcing code did anything like that.

But we also have the vast majority of enterprise open source software being audited all the time. So in one way, we have to be aware of basics, like the fact that a source license does not imply an audit. At the same time, we have to understand that major companies certainly do audit core code, especially security code, regularly, and that there is a level of auditing going on around enterprise open source that exists nowhere else.