How Are Containers Different?

Containers, discussed in depth in our recent article on OpenVZ, offer a different take on virtualization. These solutions are not as easily categorized as hypervisors, as they tend to be much more varied: each is built around a specific operating system.

Containers "partition" an existing operating system in order to provide multiple isolated environments to its users. This isolation is achieved through a number of changes to the existing kernel, allowing for more controlled resource management. Essentially, the adapted kernel takes on the role of "hypervisor and base kernel in one", handling the isolation of each separate container itself.

The methods used to do this vary from one solution to the next, but the fundamentals remain the same: each container is locked into its own part of the host's file system, has its own users and processes, has its own network address, and is almost fully customizable software-wise. To the actual user, it is for all intents and purposes a completely separate system.
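
As a rough illustration of the kind of kernel primitives involved, the C sketch below uses the namespace flags of Linux's clone() call together with chroot() to give a child process its own hostname, PID space, mount table, and filesystem root. This is only a minimal sketch of the general mechanism, not how OpenVZ or any other container solution is actually implemented, and the guest root path /srv/guest is a placeholder for a prepared directory tree.

/* mini_container.c - a minimal sketch of kernel-level isolation.
 * NOT how OpenVZ/Virtuozzo are implemented; it only shows how a single
 * kernel can present what looks like a separate system to a process group.
 * /srv/guest is a placeholder for a prepared guest filesystem tree.
 * Build: gcc -o mini_container mini_container.c   (run as root, Linux >= 2.6.24)
 */
#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define STACK_SIZE (1024 * 1024)

static int guest_main(void *arg)
{
    /* Inside the new namespaces: a private hostname, PID space and root. */
    sethostname("guest", 5);
    if (chroot("/srv/guest") != 0 || chdir("/") != 0) {
        perror("chroot");
        return 1;
    }
    /* The shell started here sees itself as PID 1 of its own "system". */
    char *const argv[] = { "/bin/sh", NULL };
    execv("/bin/sh", argv);
    perror("execv");
    return 1;
}

int main(void)
{
    char *stack = malloc(STACK_SIZE);
    if (stack == NULL) { perror("malloc"); return 1; }

    /* New hostname (UTS), PID and mount namespaces for the child. */
    int flags = CLONE_NEWUTS | CLONE_NEWPID | CLONE_NEWNS | SIGCHLD;
    pid_t pid = clone(guest_main, stack + STACK_SIZE, flags, NULL);
    if (pid == -1) { perror("clone"); return 1; }

    waitpid(pid, NULL, 0);   /* the host simply waits for the "container" to exit */
    free(stack);
    return 0;
}

Real container solutions of course go much further, adding per-container resource accounting, user management, and network isolation, but the principle is the same: one kernel, many isolated views of the system.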

Cutting out the kernel gives a virtual environment a much smaller footprint: with no separate kernel supporting it, the environment is essentially just a separate group of processes on the host OS. It follows that, on the exact same hardware, a container solution is likely to provide a larger number of separate systems than a hypervisor solution. Nonetheless, nothing prevents users from combining the best of both worlds: a container-supporting kernel will still run perfectly well on top of a hypervisor solution, making systems even more flexible.

More information on how containers actually work can be found in our article on the subject, where we dig deep into the inner workings of the adapted OpenVZ kernel.

Examples of popular container-based solutions are Solaris Containers, OpenVZ, Parallels Virtuozzo, and Linux VServer.

When Are Hypervisors a Good Solution? When Should We Use Containers?

Comments

  • Ralphik - Wednesday, October 29, 2008 - link

    Hello everybody,

    I have installed a virtual Win98 on my computer, which is running WinXP. The problem I have is that there are no GeForce7 and higher drivers available for such old Windows platforms - has anyone got a tip or a cracked driver that I could use? It now has a completely useless S3 Virge driver installed . . .
  • Jovec - Friday, October 31, 2008 - link

    Unless I'm missing something (new), Win98 running in your VM will not see your GeForce video card, or indeed any of the actual hardware in your computer. It just sees the virtual hardware provided by your VM software - typically an emulated basic VGA video adapter and AC'97 sound. VM software emulates an entire virtual computer on your host PC, but does not use the physical hardware natively.

    In short, you are not going to get Geforce level graphics power in your Win98 VM.
  • stmok - Wednesday, October 29, 2008 - link

    "Could it be that these two pieces of software are using related techniques for their 3D acceleration? Stay tuned, as we will definitely be looking into this in further research!"

    => Parallels took Wine's 3D acceleration component. More specifically, they took the translator that allowed one to translate OpenGL calls to DirectX and vice versa.

    There was a minor issue about this when Parallels was not compliant with Wine's open source license, but that was settled when Parallels complied with the LGPL two weeks later.
    => http://parallelsvirtualization.blogspot.com/2007/0...
    => http://en.wikipedia.org/wiki/Parallels_Desktop_for...

    What annoys me is that they never bothered to add 3D acceleration support to the Linux version of Parallels. The only option is the very latest release of VMware Workstation. (Version 6.5 incorporates technology from their VMware Fusion product.)
  • duploxxx - Tuesday, October 28, 2008 - link

    btw is this a teaser for the long announced virtualization performance review?
  • Vidmo - Tuesday, October 28, 2008 - link

    I was hoping this article would get into some of the latest hardware technologies designed for better virtualization. It's still quite confusing trying to determine which hardware platforms and CPUs support VT-d, for example.

    The article is a nice software overview, but seems incomplete without getting into the hardware side of the issues.
  • solusstultus - Tuesday, October 28, 2008 - link

    Hardware VT support is not used by most (any?) commercial hypervisors (VMware doesn't use it), and has been shown to actually have lower performance than binary translation in many cases:

    http://www.vmware.com/pdf/asplos235_adams.pdf
  • duploxxx - Tuesday, October 28, 2008 - link

    Unfortunately, your link is two years old.

    The current guidance for VMware ESX is to use the hardware virtualization layer whenever you run a 64-bit OS, and whenever second-level hardware virtualization, aka NPT from AMD (EPT when Intel launches Nehalem next year), is available.
  • solusstultus - Wednesday, October 29, 2008 - link

    While I don't claim to be an expert, that's the most recent study that I have seen that actually lists performance results from both techniques.

    If you have seen more recent results, do you have a link? I would be interested in reading it.

    From what I have seen, NPT addresses the overhead of switching from the guest to the VMM during page table updates (which can occur frequently when using small pages). However, the other main source of overhead cited in the paper I referenced was traps into the VMM on system calls, which binary translation can replace with less expensive direct links to VMM routines in translated code. So unless the newer hardware-assisted virtualization implementations address this (they might, I haven't looked at the documentation), it seems translation could still be faster for some apps, and that an ideal implementation would use both in different situations.
  • Vidmo - Tuesday, October 28, 2008 - link

    Ahh I somehow missed the link to your hardware article.
    http://it.anandtech.com/IT/showdoc.aspx?i=3263&...

    Very well done. Would it be possible to update that article to reflect VT-d and possibly VT-i technologies as well?
  • LizVD - Tuesday, October 28, 2008 - link

    Thanks for the input!

    The real purpose of this article was to provide a "beginner-safe" intro to the topics we have been discussing on AnandTech IT for the past couple of months, so we deliberately avoided in-depth discussion of each of the technologies, to keep the focus on the basic differences without getting carried away.

    Your question is an interesting one, however, and of the sort we'd like to properly address in our blogs, so keep an eye on them, as we'll be looking into it.
