Virtualization Overview

So many options today… Let us start at a high level.

We have to distinguish between

  • the container hypervisor and
  • the virtual machine hypervisor.

Container hypervisors can be separated into

  • application containers, e.g., Docker (runc, built on libcontainer, is the reference runtime from the OCI, implemented in Go) versus
  • host machine containers, e.g., LXD

Note: LXD is the container hypervisor, and LXC is the user space tooling for the Linux container primitives.

Host machine containers like LXD run multiple processes inside the same container, while application containers like Docker run only one. This means that if a use case requires multiple concurrent processes, you have to run multiple containers. Application containers (typically) do not support persistent storage, but they are much more portable. The reason is that networking, storage, and the operating system are loosely coupled. That is why Docker is “platform-independent” and can run on Linux as well as Windows.

A small command comparison:

  Task                                 LXC                                       Docker
  Create a container                   lxc launch <image>                        docker run <image>
  Execute command inside a container   lxc exec <container name> -- /bin/bash    docker container exec -it <container name> sh
  Stop a container                     lxc stop <container name>                 docker stop <container name>
  Remove the container                 lxc delete <container name>               docker container rm <container name>
  Image repository                     https://uk.images.linuxcontainers.org/    https://hub.docker.com

Let's have a small LXC example session.

Install all dependencies:

sudo apt-get install lxc lxctl lxc-templates

Check the kernel configuration and which host machines (aka systems) we have as templates:

sudo lxc-checkconfig
ls -l /usr/share/lxc/templates/

Let's start, e.g., an Ubuntu system:

sudo lxc-create -n testcontainer -t ubuntu
sudo lxc-ls --fancy
sudo lxc-start -n testcontainer -d
sudo lxc-console -n testcontainer

Let's get some info and tear down everything:

sudo lxc-info -n testcontainer
sudo lxc-stop -n testcontainer
sudo lxc-destroy -n testcontainer

Now let’s have a look at virtual machine hypervisors such as KVM.

KVM (Kernel-based Virtual Machine) runs in kernel space, allowing a user space program to access the hardware virtualization features of various processors. Such hypervisors are called type 1.

  • Type 1 hypervisors, e.g., KVM, run in kernel space and manage parallel access to the same hardware by managing processor and memory state.
  • Type 2 hypervisors, e.g., QEMU, run in user space and perform hardware emulation, e.g., for an ARM processor on an Intel chipset, by emulating the processor and memory management.

Notes:

  • With version 4.0, the LXD frontend can also be used to manage virtual machines, e.g., KVM-based ones, in addition to containers.
  • The kvm-qemu executable works like normal QEMU.

The Virtio features make KVM quite interesting: the IOMMU (input/output memory management unit) driver maps real DMA (direct memory access) addresses to virtualized addresses, so direct device access becomes possible, which brings near bare-metal (native) performance.

QEMU example for bare-metal software execution. Bare-metal programming means proceeding without a scheduler of any sort: the base loop, aka control loop, is just that, and all activity is either polled or interrupt-driven. In comparison, an RTOS (real-time operating system) or a GPOS (general-purpose operating system) includes a scheduler of some sort.

A nice explanation, marked in blue, that I found on the internet. I don’t know from where; happy to get your feedback so I can credit the source:

  • Interrupt handlers would call the scheduler at the end of the handler, and
  • the scheduler would examine the “run queue” and could checkpoint a running task on the stack and
  • launch another task with the return-from-interrupt instruction.

The following bare-metal code example is based on the following two articles:

In addition to the GCC cross-toolchain (arm-none-eabi-*), we have to install the following package:

sudo apt install qemu-system-arm

Now let us write the programs.

The test program file: test.c

volatile unsigned int * const UART0DR = (unsigned int *)0x101f1000;

void print_uart0(const char *s) {
    while (*s != '\0') {               /* Loop until end of string */
        *UART0DR = (unsigned int)(*s); /* Transmit char */
        s++;                           /* Next char */
    }
}

void c_entry() {
    print_uart0("Hello world!\n");
}

Assembly file for booting (aka the boot loader): startup.s

.global _Reset
_Reset:
 LDR sp, =stack_top
 BL c_entry
 B .

And this is the linker script test.ld:

ENTRY(_Reset)
SECTIONS
{
 . = 0x10000;
 .startup . : { startup.o(.text) }
 .text : { *(.text) }
 .data : { *(.data) }
 .bss : { *(.bss COMMON) }
 . = ALIGN(8);
 . = . + 0x1000; /* 4kB of stack memory */
 stack_top = .;
}

Now compile and link all together:

arm-none-eabi-as -mcpu=arm926ej-s -g startup.s -o startup.o
arm-none-eabi-gcc -c -mcpu=arm926ej-s -g test.c -o test.o
arm-none-eabi-ld -T test.ld test.o startup.o -o test.elf
arm-none-eabi-objcopy -O binary test.elf test.bin

And run it (to exit QEMU in -nographic mode, press Ctrl-A, then X):

qemu-system-arm -M versatilepb -m 128M -nographic -kernel test.bin
