MIT License Forks Stargazers Contributors Issues LinkedIn
Explore the docs »
Web Site - Code Page - Gitbook - Report Bug - Request Feature
<a name="about-the-project"></a>
This project aims to help students and professionals learn the main concepts of GNU/Linux and free software. Some GNU/Linux distribution families, such as Debian-based and RPM-based systems, will be covered, along with the installation and configuration of several packages. By doing this you can give the whole community a chance to benefit from your changes; access to the source code is a precondition for this. Use Vagrant to bring up machines and run the labs and practice content in this article. The Vagrant folder contains a Vagrantfile with everything needed to spin up a study environment.
<a name="getting-started"></a>
To start learning, see the documentation above.
<a name="prerequisites"></a>
<a name="installation"></a>
Clone the repo
git clone https://github.com/marcossilvestrini/learning-lpic-3-305-300.git
cd learning-lpic-3-305-300
Customize a Vagrantfile-topic-XXX template. This file contains the VM configuration for the labs. Example:
vm.clone_directory = "<folder>\<to_machine>\#{VM_NAME}-instance-1"
Example: vm.clone_directory = "E:\Servers\VMWare\#{VM_NAME}-instance-1"
Customize the network configuration in the configs/network files.
<a name="usage"></a>
Use this repository to study for the LPIC-3 305-300 exam.
Choose a Vagrantfile-topic-xxx template and copy it to a new file named Vagrantfile.
cd vagrant && vagrant up
cd vagrant && vagrant destroy -f
cd vagrant && vagrant reload
Important: if you reboot the VMs without Vagrant, the shared folder will not be mounted after boot.
If you use the Windows platform, there are PowerShell scripts for bringing the VMs up and down:
vagrant/up.ps1
vagrant/destroy.ps1
<a name="roadmap"></a>
<a name="freedoms"></a>
0. The freedom to run the program as you wish, for any purpose (freedom 0).
1. The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
2. The freedom to redistribute copies so you can help others (freedom 2).
3. The freedom to distribute copies of your modified versions to others (freedom 3).
type COMMAND
apropos COMMAND
whatis COMMAND --long
whereis COMMAND
COMMAND --help, -h
man COMMAND
<a name="topic-351"></a>
<a name="topic-351.1"></a>
Weight: 6
Description: Candidates should know and understand the general concepts, theory and terminology of virtualization. This includes Xen, QEMU and libvirt terminology.
Key Knowledge Areas:
Hypervisor
Hardware Virtual Machine (HVM)
Paravirtualization (PV)
Emulation and Simulation
CPU flags
/proc/cpuinfo
Migration (P2V, V2V)
Type 1 (bare-metal) hypervisor: runs directly on the host's physical hardware, providing a base layer to manage VMs without the need for a host operating system.
Type 2 (hosted) hypervisor: runs on top of a conventional operating system, relying on the host OS for resource management and device support.
In the context of hypervisors, which are technologies used to create and manage virtual machines, the terms P2V migration and V2V migration are common in virtualization environments. They refer to processes of migrating systems between different types of platforms.
P2V migration refers to the process of migrating a physical server to a virtual machine. In other words, an operating system and its applications, running on dedicated physical hardware, are "converted" and moved to a virtual machine that runs on a hypervisor (such as VMware, Hyper-V, KVM, etc.).
V2V migration refers to the process of migrating a virtual machine from one hypervisor to another. In this case, you already have a virtual machine running in a virtualized environment (like VMware), and you want to move it to another virtualized environment (for example, to Hyper-V or to a new VMware server).
HVM leverages hardware extensions provided by modern CPUs to virtualize hardware, enabling the creation and management of VMs with minimal performance overhead.
VMware ESXi, Microsoft Hyper-V, KVM (Kernel-based Virtual Machine).
Paravirtualization involves modifying the guest operating system to be aware of the virtual environment, allowing it to interact more efficiently with the hypervisor.
Xen with paravirtualized guests, VMware tools in certain configurations, and some KVM configurations.
NUMA (Non-Uniform Memory Access) is a memory architecture used in multiprocessor systems to optimize memory access by processors. In a NUMA system, memory is distributed unevenly among processors, meaning that each processor has faster access to a portion of memory (its "local memory") than to memory that is physically further away (referred to as "remote memory") and associated with other processors.
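A quick way to inspect the NUMA layout of a host is sketched below; it assumes the numactl package is installed for the last command:

```sh
# show NUMA nodes, CPUs and memory per node
lscpu | grep -i numa
numactl --hardware
```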
Type | Description | Use cases | Examples |
---|---|---|---|
Server virtualization | Abstracts physical hardware to create virtual machines (VMs) that run separate operating systems and applications. | Data centers, cloud computing, server consolidation. | VMware ESXi, Microsoft Hyper-V, KVM. |
OS-level virtualization (containers) | Allows multiple isolated user-space instances (containers) to run on a single OS kernel. | Microservices architecture, development and testing environments. | Docker, Kubernetes, LXC. |
Network virtualization | Combines hardware and software network resources into a single, software-based administrative entity. | Software-defined networking (SDN), network function virtualization (NFV). | VMware NSX, Cisco ACI, OpenStack Neutron. |
Storage virtualization | Pools physical storage from multiple devices into a single virtual storage unit that can be managed centrally. | Data management, storage optimization, disaster recovery. | IBM SAN Volume Controller, VMware vSAN, NetApp ONTAP. |
Desktop virtualization | Allows a desktop operating system to run on a virtual machine hosted on a server. | Virtual desktop infrastructure (VDI), remote work solutions. | Citrix Virtual Apps and Desktops, VMware Horizon, Microsoft Remote Desktop Services. |
Application virtualization | Separates applications from the underlying hardware and operating system, allowing them to run in isolated environments. | Simplified application deployment, compatibility testing. | VMware ThinApp, Microsoft App-V, Citrix XenApp. |
Data virtualization | Integrates data from various sources without physically consolidating it, providing a unified view for analysis and reporting. | Business intelligence, real-time data integration. | Denodo, Red Hat JBoss Data Virtualization, IBM InfoSphere. |
Emulation involves simulating the behavior of hardware or software on a different platform than originally intended.
This process allows software designed for one system to run on another system that may have different architecture or operating environment.
While emulation provides versatility by enabling the execution of unmodified guest operating systems or applications, it often comes with performance overhead.
This overhead arises because the emulated system needs to interpret and translate instructions meant for the original system into instructions compatible with the host system. As a result, emulation can be slower than native execution, making it less efficient for resource-intensive tasks.
Despite this drawback, emulation remains valuable for running legacy software, testing applications across different platforms, and facilitating cross-platform development.
The systemd-machined service is dedicated to managing virtual machines and containers within the systemd ecosystem. It provides essential functionalities for controlling, monitoring, and maintaining virtual instances, offering robust integration and efficiency within Linux environments.
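A few machinectl commands illustrate how systemd-machined exposes registered VMs and containers (sketch; the machine name is hypothetical and output depends on what is registered on the host):

```sh
# list machines currently registered with systemd-machined
machinectl list
# show status details of a registered machine
machinectl status mycontainer
# open a shell inside a registered machine
machinectl shell mycontainer
```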
<a name="topic-351.2"></a>
Weight: 3
Description: Candidates should be able to install, configure, maintain, migrate and troubleshoot Xen installations. The focus is on Xen version 4.x.
Key Knowledge Areas:
Xen is an open-source type-1 (bare-metal) hypervisor, which allows multiple operating systems to run concurrently on the same physical hardware. Xen provides a layer between the physical hardware and virtual machines (VMs), enabling efficient resource sharing and isolation.
XenSource was the company founded by the original developers of the Xen hypervisor at the University of Cambridge to commercialize Xen. The company provided enterprise solutions based on Xen and offered additional tools and support to enhance Xen's capabilities for enterprise use.
Xen Project refers to the open-source community and initiative responsible for developing and maintaining the Xen hypervisor after its commercialization. The Xen Project operates under the Linux Foundation, with a focus on building, improving, and supporting Xen as a collaborative, community-driven effort.
Xen Store is a critical component of the Xen Hypervisor. Essentially, Xen Store is a distributed key-value database used for communication and information sharing between the Xen hypervisor and the virtual machines (also known as domains) it manages.
Here are some key aspects of Xen Store:
XAPI, or XenAPI, is the application programming interface (API) used to manage the Xen Hypervisor and its virtual machines (VMs). XAPI is a key component of XenServer (now known as Citrix Hypervisor) and provides a standardized way to interact with the Xen hypervisor to perform operations such as creating, configuring, monitoring, and controlling VMs.
Here are some important aspects of XAPI:
XAPI is the interface that enables control and automation of the Xen Hypervisor, making it easier to manage virtualized environments.
Domain0, or Dom0, is the control domain in a Xen architecture. It manages other domains (DomUs) and has direct access to hardware. Dom0 runs device drivers, allowing DomUs, which lack direct hardware access, to communicate with devices. Typically, it is a full instance of an operating system, like Linux, and is essential for Xen hypervisor operation.
DomUs are non-privileged domains that run virtual machines. They are managed by Dom0 and do not have direct access to hardware. DomUs can be configured to run different operating systems and are used for various purposes, such as application servers and development environments. They rely on Dom0 for hardware interaction.
PV-DomUs use a technique called paravirtualization. In this model, the DomU operating system is modified to be aware that it runs in a virtualized environment, allowing it to communicate directly with the hypervisor for optimized performance. This results in lower overhead and better efficiency compared to full virtualization.
HVM-DomUs are virtual machines that utilize full virtualization, allowing unmodified operating systems to run. The Xen hypervisor provides hardware emulation for these DomUs, enabling them to run any operating system that supports the underlying hardware architecture. While this offers greater flexibility, it can result in higher overhead compared to PV-DomUs.
Paravirtualised Network Devices
Bridging
Domain0 (Dom0), DomainU (DomU)
PV-DomU, HVM-DomU
/etc/xen/
xl
xl.cfg
xl.conf # Xen global configurations
xentop
oxenstored # Xenstore configurations
# Xen Settings
/etc/xen/
/etc/xen/xl.conf - Main general configuration file for Xen
/etc/xen/oxenstored.conf - Xenstore configurations
# VM Configurations
/etc/xen/xlexample.pvlinux
/etc/xen/xlexample.hvm
# Service Configurations
/etc/default/xen
/etc/default/xendomains
# xen-tools configurations
/etc/xen-tools/
/usr/share/xen-tools/
# docs
xl(1)
xl.conf(5)
xlcpupool.cfg(5)
xl-disk-configuration(5)
xl-network-configuration(5)
xen-tscmode(7)
# auto-start domains at boot
/etc/default/xendomains
XENDOMAINS_AUTO=/etc/xen/auto
/etc/xen/auto/
# set a domain to come up after the xen host reboots
## create folder auto
cd /etc/xen && mkdir -p auto && cd auto
# create symbolic link
ln -s /etc/xen/lpic3-pv-guest /etc/xen/auto/lpic3-pv-guest
# create a pv image
xen-create-image \
--hostname=lpic3-pv-guest \
--memory=1gb \
--vcpus=2 \
--lvm=vg_xen \
--bridge=xenbr0 \
--dhcp \
--pygrub \
--password=vagrant \
--dist=bookworm
# list images
xen-list-images
# delete a pv image
xen-delete-image lpic3-pv-guest --lvm=vg_xen
# list xenstore infos
xenstore-ls
# view xen information
xl info
# list Domains
xl list
xl list lpic3-hvm-guest
xl list lpic3-hvm-guest -l
# uptime Domains
xl uptime
# pause Domain
xl pause 2
xl pause lpic3-hvm-guest
# save state Domains
xl -v save lpic3-hvm-guest ~root/image-lpic3-hvm-guest.save
# restore Domain
xl restore /root/image-lpic3-hvm-guest.save
# get Domain name
xl domname 2
# view dmesg information
xl dmesg
# monitoring domain
xl top
xentop
xen top
# Limit mem Dom0
xl mem-set 0 2048
# Limit cpu (not permanent after boot)
xl vcpu-set 0 2
# create DomainU - virtual machine
xl create /etc/xen/lpic3-pv-guest.cfg
# create DomainU virtual machine and connect to guest
xl create -c /etc/xen/lpic3-pv-guest.cfg
##----------------------------------------------
# create DomainU virtual machine HVM
## create logical volume
lvcreate -l +20%FREE -n lpic3-hvm-guest-disk vg_xen
## create an ssh tunnel for vnc
ssh -l vagrant -L 5900:localhost:5900 192.168.0.130
## configure /etc/xen/lpic3-hvm-guest.cfg
## set boot for cdrom: boot = "d"
## create domain hvm
xl create /etc/xen/lpic3-hvm-guest.cfg
## open a vnc connection in your vnc client against localhost
## to view the installation details
## after the installation finishes, destroy the domain: xl destroy <id_or_name>
## set boot from hard disk in /etc/xen/lpic3-hvm-guest.cfg: boot = "c"
## create domain hvm
xl create /etc/xen/lpic3-hvm-guest.cfg
## access domain hvm
xl console <id_or_name>
##----------------------------------------------
# connect in domain guest
xl console <id>|<name> (press enter)
xl console 1
xl console lpic3-pv-guest
#How do I exit domU "xl console" session
#Press ctrl+] or if you're using Putty press ctrl+5.
# Poweroff domain
xl shutdown lpic3-pv-guest
# destroy domain
xl destroy lpic3-pv-guest
# reboot domain
xl reboot lpic3-pv-guest
# list block devices
xl block-list 1
xl block-list lpic3-pv-guest
# detach block devices
xl block-detach lpic3-hvm-guest hdc
xl block-detach 2 xvdc
# attach block devices
## hard disk devices
xl block-attach lpic3-hvm-guest-ubuntu 'phy:/dev/vg_xen/lpic3-hvm-guest-disk2,xvde,w'
## cdrom
xl block-attach lpic3-hvm-guest 'file:/home/vagrant/isos/ubuntu/seed.iso,xvdc:cdrom,r'
xl block-attach 2 'file:/home/vagrant/isos/ubuntu/seed.iso,xvdc:cdrom,r'
# insert and eject cdrom devices
xl cd-insert lpic3-hvm-guest-ubuntu xvdb /home/vagrant/isos/ubuntu/ubuntu-24.04.1-live-server-amd64.iso
xl cd-eject lpic3-hvm-guest-ubuntu xvdb
In Xen, "vif" stands for Virtual Interface and is used to configure networking for virtual machines (domains).
By specifying "vif" directives in the domain configuration files, administrators can define network interfaces, assign IP addresses, set up VLANs, and configure other networking parameters for virtual machines running on Xen hosts. For example: vif = ['bridge=xenbr0'], in this case, it connects the VM's network interface to the Xen bridge named "xenbr0".
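As an illustration, a guest configuration file might carry a vif line like the sketch below (the MAC address and vifname are hypothetical):

```sh
# /etc/xen/lpic3-pv-guest.cfg (network snippet)
vif = ['mac=00:16:3e:aa:bb:cc,bridge=xenbr0,vifname=vif-lpic3']
```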
<p align="right">(<a href="#topic-351.2">back to sub Topic 351.2</a>)</p>
<p align="right">(<a href="#topic-351">back to Topic 351</a>)</p>
<p align="right">(<a href="#readme-top">back to top</a>)</p>
---
<a name="topic-351.3"></a>
### 351.3 QEMU

**Weight:** 4
**Description:** Candidates should be able to install, configure, maintain, migrate and troubleshoot QEMU installations.
**Key Knowledge Areas:**
* Understand the architecture of QEMU, including KVM, networking and storage
* Start QEMU instances from the command line
* Manage snapshots using the QEMU monitor
* Install the QEMU Guest Agent and VirtIO device drivers
* Troubleshoot QEMU installations, including networking and storage
* Awareness of important QEMU configuration parameters
#### 351.3 Cited Objects
```sh
Kernel modules: kvm, kvm-intel and kvm-amd
/dev/kvm
QEMU monitor
qemu
qemu-system-x86_64
ip
brctl
tunctl
# check if kvm is enabled
egrep -o '(vmx|svm)' /proc/cpuinfo
lscpu |grep Virtualization
lsmod|grep kvm
ls -l /dev/kvm
hostnamectl
systemd-detect-virt
# check kernel infos
uname -a
# check root device
findmnt /
# mount a qcow2 image
## Example 1:
mkdir -p /mnt/qemu
guestmount -a os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 -i /mnt/qemu/
## Example 2:
sudo guestfish --rw -a os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2
run
list-filesystems
# run commands in qcow2 images
## Example 1:
virt-customize -a os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 --run-command 'echo hello >/root/hello.txt'
## Example 2:
sudo virt-customize -a os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 \
--run-command 'echo -e "auto ens3\niface ens3 inet dhcp" > /etc/network/interfaces.d/ens3.cfg'
# generate mac
printf 'DE:AD:BE:EF:%02X:%02X\n' $((RANDOM%256)) $((RANDOM%256))
# list links
ip link show
# create bridge
ip link add br0 type bridge
# create image
qemu-img create -f qcow2 vm-disk-debian-12.qcow2 20G
# convert vmdk to qcow2 image
qemu-img convert \
-f vmdk \
-O qcow2 os-images/Debian_12.0.0_VMM/Debian_12.0.0_VMM_LinuxVMImages.COM.vmdk os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 \
-p \
-m16
# check image
qemu-img info os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2
# create vm with ISO
qemu-system-x86_64 \
-name lpic3-debian-12 \
-enable-kvm -hda vm-disk-debian-12.qcow2 \
-cdrom /home/vagrant/isos/debian/debian-12.8.0-amd64-DVD-1.iso \
-boot d \
-m 2048 \
-smp cpus=2 \
-k pt-br
# create vm with ISO using vnc on headless (no gui) servers \ ssh connections
## create ssh tunnel on the host
ssh -l vagrant -L 5902:localhost:5902 192.168.0.131
## create vm
qemu-system-x86_64 \
-name lpic3-debian-12 \
-enable-kvm \
-m 2048 \
-smp cpus=2 \
-k pt-br \
-vnc :2 \
-device qemu-xhci \
-device usb-tablet \
-device ide-cd,bus=ide.1,drive=cdrom,bootindex=1 \
-drive id=cdrom,media=cdrom,if=none,file=/home/vagrant/isos/debian/debian-12.8.0-amd64-DVD-1.iso \
-hda vm-disk-debian-12.qcow2 \
-boot order=d \
-vga std \
-display none \
-monitor stdio
# create vm with OS Image - qcow2
## create vm
qemu-system-x86_64 \
-name lpic3-debian-12 \
-enable-kvm \
-m 2048 \
-smp cpus=2 \
-k pt-br \
-vnc :2 \
-hda os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2
## create vm with custom kernel params
qemu-system-x86_64 \
-name lpic3-debian-12 \
-kernel /vmlinuz \
-initrd /initrd.img \
-append "root=/dev/mapper/debian--vg-root ro fastboot console=ttyS0" \
-enable-kvm \
-m 2048 \
-smp cpus=2 \
-k pt-br \
-vnc :2 \
-hda os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2
## create vm and attach extra disks
qemu-system-x86_64 \
-name lpic3-debian-12 \
-enable-kvm \
-m 2048 \
-smp cpus=2 \
-vnc :2 \
-hda os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 \
-hdb vmdisk-debian12.qcow2 \
-drive file=vmdisk-extra-debian12.qcow2,index=2,media=disk,if=ide \
-netdev bridge,id=net0,br=qemubr0 \
-device virtio-net-pci,netdev=net0
## create vm network netdev user
qemu-system-x86_64 \
-name lpic3-debian-12 \
-enable-kvm \
-m 2048 \
-smp cpus=2 \
-vnc :2 \
-hda os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 \
-netdev user,id=mynet0,net=192.168.0.150/24,dhcpstart=192.168.0.155,hostfwd=tcp::2222-:22 \
-device virtio-net-pci,netdev=mynet0
## create vm network netdev tap (Private Network)
ip link add br0 type bridge ; ifconfig br0 up
qemu-system-x86_64 \
-name lpic3-debian-12 \
-enable-kvm \
-m 2048 \
-smp cpus=2 \
-vnc :2 \
-hda os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 \
-netdev tap,id=br0 \
-device e1000,netdev=br0,mac=DE:AD:BE:EF:1A:24
## create vm with public bridge
#create a public bridge : https://www.linux-kvm.org/page/Networking
qemu-system-x86_64 \
-name lpic3-debian-12 \
-enable-kvm \
-m 2048 \
-smp cpus=2 \
-hda os-images/Debian_12.0.0_VMM/Debian_12.0.0.qcow2 \
-k pt-br \
-vnc :2 \
-device qemu-xhci \
-device usb-tablet \
-vga std \
-display none \
-netdev bridge,id=net0,br=qemubr0 \
-device virtio-net-pci,netdev=net0
## get an ipv4 address - open ssh in the vm and run:
dhclient ens4
To start the QEMU monitor from the command line, use the -monitor stdio parameter with qemu-system-x86_64:
qemu-system-x86_64 -monitor stdio
Switch between the guest display and the QEMU monitor in the graphical console:
ctrl+alt+2 (monitor) / ctrl+alt+1 (guest)
# Managment
info status # vm info
info cpus # cpu information
info network # network informations
stop # pause vm
cont # start vm in status pause
system_powerdown # poweroff vm
system_reset # hard reset the vm
# Blocks
info block # block info
boot_set d # force boot iso
change ide1-cd0 /home/vagrant/isos/debian/debian-12.8.0-amd64-DVD-1.iso # attach cdrom
eject ide1-cd0 # detach cdrom
# Snapshots
info snapshots # list snapshots
savevm snapshot-01 # create snapshot
loadvm snapshot-01 # restore snapshot
delvm snapshot-01
To enable the QEMU Guest Agent channel, use:
qemu-system-x86_64 \
-chardev socket,path=/tmp/qga.sock,server=on,wait=off,id=qga0 \
-device virtio-serial \
-device virtserialport,chardev=qga0,name=org.qemu.guest_agent.0
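Inside the guest the agent still has to be installed and running; on a Debian/Ubuntu guest that could look like this (sketch; the package name assumes a Debian-based system):

```sh
# install and check the guest agent inside the vm
sudo apt-get install -y qemu-guest-agent
systemctl status qemu-guest-agent
```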
<a name="topic-351.4"></a>
Weight: 9
Description: Candidates should be able to manage virtualization hosts and virtual machines ("libvirt domains") using libvirt and related tools.
Key Knowledge Areas:
libvirtd
/etc/libvirt/
/var/lib/libvirt
/var/log/libvirt
virsh (including relevant subcommands)
# using env variable for set virsh uri (local or remotly)
export LIBVIRT_DEFAULT_URI=qemu:///system
export LIBVIRT_DEFAULT_URI=xen+ssh://vagrant@192.168.0.130
export LIBVIRT_DEFAULT_URI='xen+ssh://vagrant@192.168.0.130?keyfile=/home/vagrant/.ssh/skynet-key-ecdsa'
# COMMONS
# get helps
virsh help
virsh help pool-create
# view version
virsh version
# view system info
sudo virsh sysinfo
# view node info
virsh nodeinfo
# hostname
virsh hostname
# check vnc allocated port
virsh vncdisplay <domain_id>
virsh vncdisplay <domain_name>
virsh vncdisplay rocky9-server01
# HYPERVISOR
# view libvirt hypervisor connection
virsh uri
# validate host virtualization setup (per hypervisor)
virt-host-validate
virt-host-validate qemu
# test connection uri (test driver)
virsh -c test:///default list
# connect remotely
virsh -c xen+ssh://vagrant@192.168.0.130
virsh -c xen+ssh://vagrant@192.168.0.130 list
virsh -c qemu+ssh://vagrant@192.168.0.130/system list
# connect remotely without entering a password
virsh -c 'xen+ssh://vagrant@192.168.0.130?keyfile=/home/vagrant/.ssh/skynet-key-ecdsa'
# STORAGE
# list storage pools
virsh pool-list --details
# list all storage pools
virsh pool-list --all --details
# get a pool configuration
virsh pool-dumpxml default
# get pool info
virsh pool-info default
# create a storage pool
virsh pool-define-as --name default --type dir --target /var/lib/libvirt/images
# create a storage pool with dumpxml
virsh pool-create --overwrite --file configs/kvm/libvirt/pool.xml
# start storage pool
virsh pool-start default
# set storage pool for autostart
virsh pool-autostart default
# stop storage pool
virsh pool-destroy linux
# delete xml storage pool file
virsh pool-undefine linux
# edit storage pool
virsh pool-edit linux
# list volumes
virsh vol-list linux
# get volume infos
virsh vol-info Debian_12.0.0.qcow2 os-images
virsh vol-info --pool os-images Debian_12.0.0.qcow2
# get volume xml
virsh vol-dumpxml rocky9-disk1 default
# create volume
virsh vol-create-as default --format qcow2 disk1 10G
# delete volume
virsh vol-delete disk1 default
# DOMAINS \ INSTANCES \ VIRTUAL MACHINES
# list domain\instance\vm
virsh list
virsh list --all
# create domain\instance\vm
virsh create configs/kvm/libvirt/rocky9-server03.xml
# view domain\instance\vm info
virsh dominfo rocky9-server01
# view domain\instance\vm xml
virsh dumpxml rocky9-server01
# edit domain\instance\vm xml
virsh edit rocky9-server01
# stop domain\instance\vm
virsh shutdown rocky9-server01 # gracefully
virsh destroy 1
virsh destroy rocky9-server01
# suspend domain\instance\vm
virsh suspend rocky9-server01
# resume domain\instance\vm
virsh resume rocky9-server01
# start domain\instance\vm
virsh start rocky9-server01
# remove domain\instance\vm
virsh undefine rocky9-server01
# remove domain\instance\vm and storage volumes
virsh undefine rocky9-server01 --remove-all-storage
# save domain\instance\vm
virsh save rocky9-server01 rocky9-server01.qcow2
# restore domain\instance\vm
virsh restore rocky9-server01.qcow2
# list snapshots
virsh snapshot-list rocky9-server01
# create snapshot
virsh snapshot-create rocky9-server01
# restore snapshot
virsh snapshot-revert rocky9-server01 1748983520
# view snapshot xml
virsh snapshot-info rocky9-server01 1748983520
# dumpxml snapshot
virsh snapshot-dumpxml rocky9-server01 1748983520
# xml snapshot path
/var/lib/libvirt/qemu/snapshot/rocky9-server01/
# view snapshot info
virsh snapshot-info rocky9-server01 1748983671
# edit snapshot
virsh snapshot-edit rocky9-server01 1748983520
# delete snapshot
virsh snapshot-delete rocky9-server01 1748983520
# DEVICES
# list block devices
virsh domblklist rocky9-server01 --details
# add cdrom media
virsh change-media rocky9-server01 sda /home/vagrant/isos/rocky/Rocky-9.5-x86_64-minimal.iso
virsh attach-disk rocky9-server01 /home/vagrant/isos/rocky/Rocky-9.5-x86_64-minimal.iso sda --type cdrom --mode readonly
# remove cdrom media
virsh change-media rocky9-server01 sda --eject
# add new disk
virsh attach-disk rocky9-server01 /var/lib/libvirt/images/rocky9-disk2 vdb --persistent
# remove disk
virsh detach-disk rocky9-server01 vdb --persistent
# RESOURCES (CPU and Memory)
# get cpu infos
virsh vcpuinfo rocky9-server01 --pretty
virsh dominfo rocky9-server01 | grep 'CPU'
# get vcpu count
virsh vcpucount rocky9-server01
# set vcpus maximum config
virsh setvcpus rocky9-server01 --count 4 --maximum --config
virsh shutdown rocky9-server01
virsh start rocky9-server01
# set vcpu current config
virsh setvcpus rocky9-server01 --count 4 --config
# set vcpu current live
virsh setvcpus rocky9-server01 --count 3 --current
virsh setvcpus rocky9-server01 --count 3 --live
# configure vcpu afinity config
virsh vcpupin rocky9-server01 0 7 --config
virsh vcpupin rocky9-server01 1 5-6 --config
# configure vcpu afinity current
virsh vcpupin rocky9-server01 0 7
virsh vcpupin rocky9-server01 1 5-6
# set maximum memory config
virsh setmaxmem rocky9-server01 3000000 --config
virsh shutdown rocky9-server01
virsh start rocky9-server01
# set current memory config
virsh setmem rocky9-server01 2500000 --current
# NETWORK
# get network bridges
brctl show
# get iptables rules for libvirt
sudo iptables -L -n -t nat
# list network
virsh net-list --all
# set default network
virsh net-define /etc/libvirt/qemu/networks/default.xml
# get network infos
virsh net-info default
# get xml network
virsh net-dumpxml default
# xml file
cat /etc/libvirt/qemu/networks/default.xml
# dhcp config
sudo cat /etc/libvirt/qemu/networks/default.xml | grep -A 10 dhcp
sudo cat /var/lib/libvirt/dnsmasq/default.conf
# get domain ip address
virsh net-dhcp-leases default
virsh net-dhcp-leases default --mac 52\:54\:00\:89\:19\:86
# edit network
virsh net-edit default
# get domain network details
virsh domiflist debian-server01
# path for network filter files
/etc/libvirt/nwfilter/
# list network filters
virsh nwfilter-list
# create network filter - block icmp traffic
virsh nwfilter-define block-icmp.xml
# virsh edit Debian-Server
# <interface type='network'>
# ...
# <filterref filter='block-icmp'/>
# ...
# </interface>
# virsh destroy debian-server01
# virsh start debian-server01
# delete network filter
virsh nwfilter-undefine block-icmp
# get xml network filter
virsh nwfilter-dumpxml block-icmp
# list os variants
virt-install --os-variant list
osinfo-query os
# create domain\instance\vm with iso file
virsh vol-create-as default --format qcow2 rocky9-disk1 20G
virt-install --name rocky9-server01 \
--vcpus 2 \
--cpu host \
--memory 2048 \
--disk vol=default/rocky9-disk1 \
--cdrom /home/vagrant/isos/rocky/Rocky-9.5-x86_64-minimal.iso \
--os-variant=rocky9 \
--graphics vnc,listen=0.0.0.0,port=5905
# create debian domain\instance\vm with qcow2 file
virt-install --name debian-server01 \
--vcpus 2 \
--ram 2048 \
--disk vol=os-images/Debian_12.0.0.qcow2 \
--import \
--osinfo detect=on \
--graphics vnc,listen=0.0.0.0,port=5906 \
--network network=default \
--noautoconsole
# create rocky9 domain\instance\vm with qcow2 file
virt-install --name rocky9-server02 \
--vcpus 2 \
--ram 2048 \
--disk path=os-images/RockyLinux_9.4_VMG/RockyLinux_9.4.qcow2,format=qcow2,bus=virtio \
--import \
--osinfo detect=on \
--graphics vnc,listen=0.0.0.0,port=5907 \
--network bridge=qemubr0,model=virtio \
--noautoconsole
# open domain\instance\vm gui console
virt-viewer debian-server01
# check metadata domain\instance\vm file (if uri is qemu:///system)
less /etc/libvirt/qemu/debian-server01.xml
<a name="topic-351.5"></a>
Weight: 3
Description: Candidates should be able to manage virtual machines disk images. This includes converting disk images between various formats and hypervisors and accessing data stored within an image.
Key Knowledge Areas:
qemu-img
guestfish (including relevant subcommands)
guestmount
guestunmount
virt-cat
virt-copy-in
virt-copy-out
virt-diff
virt-inspector
virt-filesystems
virt-rescue
virt-df
virt-sparsify
virt-p2v
virt-p2v-make-disk
virt-v2v
# Display detailed information about a disk image
qemu-img info UbuntuServer_24.04.qcow2
# Create a new 22G raw disk image (default format is raw)
qemu-img create new-disk 22G
# Create a new 22G disk image in qcow2 format
qemu-img create -f qcow2 new-disk2 22G
# Convert a VDI image to raw format using 5 threads and show progress
qemu-img convert -f vdi -O raw Ubuntu-Server.vdi new-Ubuntu.raw -m5 -p
# Convert vmdk to qcow2 image
qemu-img convert \
-f vmdk \
-O qcow2 os-images/UbuntuServer_24.04_VM/UbuntuServer_24.04_VM_LinuxVMImages.COM.vmdk \
os-images/UbuntuServer_24.04_VM/UbuntuServer_24.04.qcow2 \
-p \
-m16
# Resize a raw image to 30G
qemu-img resize -f raw new-disk 30G
# Shrink the raw image from 30G back to 15G (shrinking requires --shrink)
qemu-img resize -f raw --shrink new-disk 15G
# Snapshots
# List all snapshots in the image
qemu-img snapshot -l new-disk2.qcow2
# Create a snapshot named SNAP1
qemu-img snapshot -c SNAP1 disk
# Apply a snapshot by ID or name
qemu-img snapshot -a 123456789 disk
# Delete the snapshot named SNAP1
qemu-img snapshot -d SNAP1 disk
# set environment variables for guestfish
export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
# Launch guestfish with a disk image
guestfish -a UbuntuServer_24.04.qcow2
#run
#list-partitions
# Run the commands in a script file
guestfish -a UbuntuServer_24.04.qcow2 -m /dev/sda -i < script.ssh
# Interactively run commands
guestfish --rw -a UbuntuServer_24.04.qcow2 <<'EOF'
run
list-filesystems
EOF
# Copy a file from the guest image to the host
export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
sudo guestfish --rw -a UbuntuServer_24.04.qcow2 -i <<'EOF'
copy-out /etc/hostname /tmp/
EOF
# Copy a file from the host into the guest image
echo "new-hostname" > /tmp/hostname
export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
sudo guestfish --rw -a UbuntuServer_24.04.qcow2 -i <<'EOF'
copy-in /tmp/hostname /etc/
EOF
# View contents of a file in the guest image
guestfish --ro -a UbuntuServer_24.04.qcow2 -i <<'EOF'
cat /etc/hostname
EOF
# List files in the guest image
export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
guestfish --rw -a UbuntuServer_24.04.qcow2 -i <<'EOF'
ls /home/ubuntu
EOF
# Edit a file in the guest image
export LIBGUESTFS_BACKEND_SETTINGS=force_tcg
guestfish --rw -a UbuntuServer_24.04.qcow2 -i <<'EOF'
edit /etc/hosts
EOF
# Mount a disk image to a directory
guestmount -a UbuntuServer_24.04.qcow2 -m /dev/ubuntu-vg/ubuntu-lv /mnt/ubuntu
# domain
guestmount -d rocky9-server02 -m /dev/ubuntu-vg/ubuntu-lv /mnt/ubuntu
# Mount a specific partition from a disk image
guestmount -a UbuntuServer_24.04.qcow2 -m /dev/sda2 /mnt/ubuntu
# domain
guestmount -d debian-server01 --ro -m /dev/debian-vg/root /mnt/debian
# Unmount a disk image from a directory
sudo guestunmount /mnt/ubuntu
# Show free and used space on virtual machine filesystems
virt-df UbuntuServer_24.04.qcow2 -h
virt-df -d rocky9-server02 -h
# List filesystems, partitions, and logical volumes in a VM disk image (disk image)
virt-filesystems -a UbuntuServer_24.04.qcow2 --all --long -h
# List filesystems, partitions, and logical volumes in a VM disk image (domain)
virt-filesystems -d debian-server01 --all --long -h
# Inspect and report on the operating system in a VM disk image
virt-inspector -a UbuntuServer_24.04.qcow2 #(disk)
virt-inspector -d debian-server01 #(domain)
# Display the contents of a file inside a VM disk image
virt-cat -a UbuntuServer_24.04.qcow2 /etc/hosts
virt-cat -d debian-server01 /etc/hosts #(domain)
# Show differences between two VM disk images
virt-diff -a UbuntuServer_24.04.qcow2 -A Rocky-Linux.qcow2
# Make a VM disk image smaller by removing unused space
virt-sparsify UbuntuServer_24.04.qcow2 UbuntuServer_24.04-sparse.qcow2
# Resize a VM disk image or its partitions
virt-filesystems -a UbuntuServer_24.04.qcow2 --all --long -h #(check size of partitions)
qemu-img create -f qcow2 UbuntuServer_24.04-expanded.qcow2 100G #(create new disk image with 100G)
virt-resize --expand /dev/ubuntu-vg/ubuntu-lv \
UbuntuServer_24.04.qcow2 UbuntuServer_24.04-expanded.qcow2
# Copy files from the host into a VM disk image
virt-copy-in -a UbuntuServer_24.04.qcow2 ~vagrant/test-virt-copy-in.txt /home/ubuntu
# Copy files from a VM disk image to the host
virt-copy-out -a UbuntuServer_24.04.qcow2 /home/ubuntu/.bashrc /tmp
# List files and directories inside a VM disk image
virt-ls -a UbuntuServer_24.04.qcow2 /home/ubuntu
# Launch a rescue shell on a VM disk image for recovery
virt-rescue -a UbuntuServer_24.04.qcow2
# Prepare a VM disk image for cloning by removing system-specific data
virt-sysprep -a UbuntuServer_24.04.qcow2
# Convert a VM from a foreign hypervisor to run on KVM
virt-v2v -i disk input-disk.img -o local -os /var/tmp
# Convert a physical machine to use KVM
# Create a bootable disk image for physical to virtual conversion
sudo virt-p2v-make-disk -o output.img
OVF: An open format that defines a standard for packaging and distributing virtual machines across different environments.
The generated package has the .ova extension and contains an .ovf descriptor file, the virtual disk images (usually .vmdk), and a manifest (.mf) with checksums.
<a name="topic-352"></a>
<a name="topic-352.1"></a>
timeline
title Time Line Containers Evolution
1979 : chroot
2000 : FreeBSD Jails
2002 : Linux Namespaces
2005 : Solaris Containers
2007 : cgroups
2008 : LXC
2013 : Docker
2015 : Kubernetes
Weight: 7
Description: Candidates should understand the concept of container virtualization. This includes understanding the Linux components used to implement container virtualization as well as using standard Linux tools to troubleshoot these components.
Key Knowledge Areas:
nsenter
unshare
ip (including relevant subcommands)
capsh
/sys/fs/cgroups
/proc/[0-9]+/ns
/proc/[0-9]+/status
Containers are a lightweight virtualization technology that package applications along with their required dependencies (code, libraries, environment variables, and configuration files) into isolated, portable, and reproducible units.
In simple terms: a container is a self-contained box that runs your application the same way, anywhere.
Unlike Virtual Machines (VMs), containers do not virtualize hardware. Instead, they virtualize the operating system. Containers share the same Linux kernel with the host, but each one operates in a fully isolated user space.
Containers vs Virtual Machines:
Feature | Containers | Virtual Machines |
---|---|---|
OS Kernel | Shared with host | Each VM has its own OS |
Startup time | Fast (seconds or less) | Slow (minutes) |
Image size | Lightweight (MBs) | Heavy (GBs) |
Resource efficiency | High | Lower |
Isolation mechanism | Kernel features (namespaces) | Hypervisor |
* Lightweight: Share the host OS kernel, reducing overhead and enabling fast startup.
* Portable: Run consistently across different environments (dev, staging, prod, cloud, on-prem).
* Isolated: Use namespaces for process, network, and filesystem isolation.
* Efficient: Enable higher density and better resource utilization than traditional VMs.
* Scalable: Perfect fit for microservices and cloud-native architecture.
System Containers
Application Containers
Runtime | Description |
---|---|
Docker | Most widely adopted CLI/daemon for building and running containers. |
containerd | Lightweight runtime powering Docker and Kubernetes. |
CRI-O | Kubernetes-native runtime for OCI containers. |
LXC | Traditional Linux system containers, closer to full OS. |
RKT | Security-focused runtime (deprecated). |
Component | Role |
---|---|
Namespaces | Isolate processes, users, mounts, networks. |
cgroups | Control and limit resource usage (CPU, memory, IO). |
Capabilities | Fine-grained privilege control inside containers. |
seccomp | Restricts allowed syscalls to reduce attack surface. |
AppArmor / SELinux | Mandatory Access Control enforcement at kernel level. |
chroot (short for change root) is a system call and command on Unix-like operating systems that changes the apparent root directory (/) for the current running process and its children. This creates an isolated environment, commonly referred to as a chroot jail.
The chroot environment must have its own essential files and structure:
/mnt/myenv/
├── bin/
│   └── bash
├── etc/
├── lib/
├── lib64/
├── usr/
├── dev/
├── proc/
└── tmp/
Use ldd to identify required libraries:
ldd /bin/bash
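Putting it together, a minimal chroot jail can be assembled by hand roughly like this (sketch; library paths vary per distribution, so copy whatever ldd reports):

```sh
# build a tiny jail containing only bash
mkdir -p /mnt/myenv/{bin,lib,lib64}
cp /bin/bash /mnt/myenv/bin/
# copy the libraries listed by ldd, preserving their paths
for lib in $(ldd /bin/bash | grep -o '/[^ ]*'); do
  cp --parents "$lib" /mnt/myenv/
done
# enter the jail
sudo chroot /mnt/myenv /bin/bash
```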
For stronger isolation, consider alternatives like containers (LXC, Docker) or full virtual machines.
# download debian base files with debootstrap
sudo debootstrap stable ~vagrant/debian http://deb.debian.org/debian
sudo chroot ~vagrant/debian bash
Use this script for lab: chroot.sh
Output:
Namespaces are a core Linux kernel feature that enable process-level isolation. They create separate "views" of global system resources, such as process IDs, networking, filesystems, and users, so that each process group believes it is running in its own system.
In simple terms: namespaces trick a process into thinking it owns the machine, even though it's just sharing it.
This is the foundation for container isolation.
Each namespace type isolates a specific system resource. Together, they make up the sandbox that a container operates in:
Namespace | Isolates... | Real-world example |
---|---|---|
PID | Process IDs | Processes inside a container see a different PID space |
Mount | Filesystem mount points | Each container sees its own root filesystem |
Network | Network stack | Containers have isolated IPs, interfaces, and routes |
UTS | Hostname and domain name | Each container sets its own hostname |
IPC | Shared memory and semaphores | Prevents inter-process communication between containers |
User | User and group IDs | Enables fake root (UID 0) inside the container |
Cgroup (v2) | Control group membership | Ties into resource controls like CPU and memory limits |
Imagine a shared office building: each company has its own locked floor with its own layout, keys, and phone extensions, while everyone shares the same building, power, and plumbing.
That's exactly how containers experience the system: isolated, yet efficient.
When you run a container (e.g., with Docker or Podman), the runtime creates a new set of namespaces:
docker run -it --rm alpine sh
This command gives the process its own PID, mount, network, UTS, and IPC namespaces (and optionally a user namespace).
The result: a lightweight, isolated runtime environment that behaves like a separate system.
Namespaces hide resources from containers. But to control how much they can use and what they can do, we need additional mechanisms:
Cgroups allow the kernel to limit, prioritize, and monitor resource usage across process groups.
Resource | Use case examples |
---|---|
CPU | Limit CPU time per container |
Memory | Cap RAM usage |
Disk I/O | Throttle read/write operations |
Network (v2) | Bandwidth restrictions |
Prevents the "noisy neighbor" problem by stopping one container from consuming all system resources.
Traditional Linux uses a binary privilege model: root (UID 0) can do everything, everyone else is limited.
Capability | Allows... |
---|---|
CAP_NET_BIND_SERVICE | Binding to privileged ports (e.g. 80, 443) |
CAP_SYS_ADMIN | A powerful catch-all for system admin tasks |
CAP_KILL | Sending signals to arbitrary processes |
By dropping unnecessary capabilities, containers can run with only what they need, reducing risk.
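With Docker this is done via --cap-drop/--cap-add; a sketch for the official nginx image (the exact capability set an image needs varies, the one below is illustrative):

```sh
# start from zero capabilities and add back only what the workload needs
docker run --rm -d --name web \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE --cap-add CHOWN --cap-add SETUID --cap-add SETGID --cap-add DAC_OVERRIDE \
  -p 8080:80 nginx
```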
Used in conjunction with namespaces and cgroups to lock down what a containerized process can do:
Feature | Description |
---|---|
seccomp | Whitelist or block Linux system calls (syscalls) |
AppArmor | Apply per-application security profiles |
SELinux | Enforce Mandatory Access Control with tight system policies |
* Namespaces isolate what a container can see
* Cgroups control what it can use
* Capabilities and security modules define what it can do
Together, these kernel features form the technical backbone of container isolation, enabling high-density, secure, and efficient application deployment without full VMs.
Use this script for lab: namespace.sh
Output:
Control Groups (cgroups) are a Linux kernel feature introduced in 2007 that allow you to limit, account for, and isolate the resource usage (CPU, memory, disk I/O, etc.) of groups of processes.
cgroups are heavily used by low-level container runtimes such as runc and crun, and leveraged by container engines like Docker, Podman, and LXC to enforce resource boundaries and provide isolation between containers.
Namespaces isolate, cgroups control.
Namespaces create separate environments for processes (like PID, network, or mounts), while cgroups limit and monitor resource usage (CPU, memory, I/O) for those processes.
Key Capabilities
Feature | Description |
---|---|
Resource Limiting | Impose limits on how much of a resource a group can use |
Prioritization | Allocate more CPU/IO priority to some groups over others |
Accounting | Track usage of resources per group |
Control | Suspend, resume, or kill processes in bulk |
Isolation | Prevent resource starvation between groups |
cgroups operate through controllers, each responsible for managing one type of resource:
Subsystem | Description |
---|---|
cpu | Controls CPU scheduling |
cpuacct | Generates CPU usage reports |
memory | Limits and accounts memory usage |
blkio | Limits block device I/O |
devices | Controls access to devices |
freezer | Suspends/resumes execution of tasks |
net_cls | Tags packets for traffic shaping |
ns | Manages namespace access (rare) |
cgroups are exposed through the virtual filesystem under /sys/fs/cgroup.
Depending on the version:
Mounted under:
/sys/fs/cgroup/
Typical cgroups v1 hierarchy:
/sys/fs/cgroup/
├── memory/
│   └── mygroup/
│       ├── tasks
│       └── memory.limit_in_bytes
├── cpu/
│   └── mygroup/
└── ...
In cgroups v2, all resources are managed under a unified hierarchy:
/sys/fs/cgroup/
├── cgroup.procs
├── cgroup.controllers
├── memory.max
├── cpu.max
└── ...
v1 - Create and assign a memory limit:
# Mount memory controller (if needed)
mount -t cgroup -o memory none /sys/fs/cgroup/memory
# Create group
mkdir /sys/fs/cgroup/memory/mygroup
# Set memory limit (100 MB)
echo 104857600 | tee /sys/fs/cgroup/memory/mygroup/memory.limit_in_bytes
# Assign a process (e.g., current shell)
echo $$ | tee /sys/fs/cgroup/memory/mygroup/tasks
v2 - Unified hierarchy:
# Create subgroup
mkdir /sys/fs/cgroup/mygroup
# Enable controllers
echo +memory +cpu > /sys/fs/cgroup/cgroup.subtree_control
# Move shell into group
echo $$ > /sys/fs/cgroup/mygroup/cgroup.procs
# Set limits
echo 104857600 > /sys/fs/cgroup/mygroup/memory.max
echo "50000 100000" > /sys/fs/cgroup/mygroup/cpu.max # 50ms quota per 100ms period
Process & Group Inspection
Command | Description |
---|---|
cat /proc/self/cgroup | Shows current cgroup membership |
cat /proc/PID/cgroup | cgroup of another process |
cat /proc/PID/status | Memory and cgroup info |
ps -o pid,cmd,cgroup | Show process-to-cgroup mapping |
Container engines like Docker, Podman, and containerd delegate resource control to cgroups (via runc or crun), allowing:
Docker example:
docker run --memory=256m --cpus=1 busybox
Behind the scenes, this creates cgroup rules for memory and CPU limits for the container process.
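One way to see those limits from the host is to read the container's cgroup files directly (sketch; the paths assume cgroups v2 with the systemd cgroup driver, adjust to your host):

```sh
# start a limited container and inspect its cgroup limits
CID=$(docker run -d --memory=256m --cpus=1 busybox sleep 300)
cat "/sys/fs/cgroup/system.slice/docker-${CID}.scope/memory.max"
cat "/sys/fs/cgroup/system.slice/docker-${CID}.scope/cpu.max"
docker rm -f "$CID"
```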
Concept | Explanation |
---|---|
Controllers | Modules like cpu , memory , blkio , etc. apply limits and rules |
Tasks | PIDs (processes) assigned to the control group |
Hierarchy | Cgroups are structured in a parent-child tree |
Delegation | Systemd and user services may manage subtrees of cgroups |
Use this script for lab: cgroups.sh
Output Soft limit memory:
What Are Linux Capabilities?
Traditionally in Linux, the root user has unrestricted access to the system. Linux capabilities were introduced to break down these all-powerful privileges into smaller, discrete permissions, allowing processes to perform specific privileged operations without requiring full root access.
This enhances system security by enforcing the principle of least privilege.
Capability | Description |
---|---|
CAP_CHOWN | Change file owner regardless of permissions |
CAP_NET_BIND_SERVICE | Bind to ports below 1024 (e.g., 80, 443) |
CAP_SYS_TIME | Set system clock |
CAP_SYS_ADMIN | Very powerful; includes mount, BPF, and more |
CAP_NET_RAW | Use raw sockets (e.g., ping, traceroute) |
CAP_SYS_PTRACE | Trace other processes (debugging) |
CAP_KILL | Send signals to any process |
CAP_DAC_OVERRIDE | Modify files and directories without permission |
CAP_SETUID | Change user ID (UID) of the process |
CAP_NET_ADMIN | Manage network interfaces, routing, etc. |
Some Linux Capability Types
Capability Type | Description |
---|---|
CapInh (Inherited) | Capabilities inherited from the parent process. |
CapPrm (Permitted) | Capabilities that the process is allowed to have. |
CapEff (Effective) | Capabilities that the process is currently using. |
CapBnd (Bounding) | Restricts the maximum set of effective capabilities a process can obtain. |
CapAmb (Ambient) | Allows a process to explicitly define its own effective capabilities. |
Capabilities in Containers and Pods
Containers typically do not run as full root, but instead receive a limited set of capabilities by default depending on the runtime.
Capabilities can be added or dropped in Kubernetes using the securityContext.
Kubernetes example:
securityContext:
capabilities:
drop: ["ALL"]
add: ["NET_BIND_SERVICE"]
This ensures the container starts with zero privileges and receives only what is needed.
Use this script for lab: capabilities.sh
Output:
What is it?
How does it work?
Quick commands
# Check support
docker info | grep Seccomp
# Disable for a container:
docker run --security-opt seccomp=unconfined ...
# Inspect running process:
grep Seccomp /proc/$$/status
Tools
# for analyzing
seccomp-tools
# Profiles
/etc/docker/seccomp.json
What is it?
How does it work?
Quick commands:
#Status
aa-status
# Put a program in enforce mode
sudo aa-enforce /etc/apparmor.d/usr.bin.foo
# Profiles
location: /etc/apparmor.d/
Tools:
aa-genprof, aa-logprof for generating/updating profiles
Logs
/var/log/syslog (search for apparmor)
What is it?
How does it work?
Quick commands:
#Status
sestatus
#Set to enforcing/permissive:
setenforce 1 # Enforcing
setenforce 0 # Permissive
#List security contexts:
ls -Z # Files
ps -eZ # Processes
Tools:
System | Focus | Complexity | Policy Location | Typical Use |
---|---|---|---|---|
Seccomp | Kernel syscalls | Medium | Per-process (via code/config) | Docker, sandboxes |
AppArmor | Per-program access | Easy | /etc/apparmor.d/ | Ubuntu, Snap, SUSE |
SELinux | Full-system MAC | Advanced | /etc/selinux/ + labels | RHEL, Fedora, CentOS |
Technology | Purpose / What It Does | Main Differences | Example in Containers |
---|---|---|---|
chroot | Changes the apparent root directory for a process. Isolates filesystem. | Simple filesystem isolation; does not restrict resources, privileges, or system calls. | Docker uses chroot internally for building minimal images, but not for strong isolation. |
cgroups | Controls and limits resource usage (CPU, memory, disk I/O, etc.) per group of processes. | Kernel feature; fine-grained resource control, not isolation. | Docker and Kubernetes use cgroups to limit CPU/mem per container/pod. |
namespaces | Isolate system resources: PID, mount, UTS, network, user, IPC, time. | Kernel feature; provides different kinds of isolation. | Each container runs in its own set of namespaces (PID, net, mount, etc). |
capabilities | Split root privileges into fine-grained units (e.g., net_admin, sys_admin). | More granular than all-or-nothing root/non-root; can drop or grant specific privileges. | Docker containers usually run with reduced capabilities (drop dangerous ones). |
seccomp | Filter/restrict which syscalls a process can make (whitelisting/blacklisting). | Very focused: blocks kernel syscalls; cannot block all actions. | Docker's default profile blocks dangerous syscalls (e.g., ptrace, mount). |
AppArmor | Mandatory Access Control (MAC) framework: restricts programs' file/network access via profiles. | Profile-based, easier to manage than SELinux; less fine-grained in some cases. | Ubuntu-based containers often use AppArmor for container process profiles. |
SELinux | More complex MAC framework, label-based, very fine-grained. Can confine users, processes, and files. | More powerful and complex than AppArmor; enforced on Fedora/RHEL/CentOS. | On OpenShift/Kubernetes with RHEL, SELinux labels are used to keep pods separate. |
Summary
OCI (Open Container Initiative)
A foundation creating open standards for container images and runtimes.
Defines how images are formatted, stored, and how containers are started/stopped (runtime spec).
runc
A universal, low-level, lightweight CLI tool that can run containers according to the OCI runtime specification.
"The engine" that turns an image + configuration into an actual running Linux container.
containerd
A core container runtime daemon for managing the complete container lifecycle: pulling images, managing storage, running containers (calls runc), networking plugins, etc.
Used by Docker, Kubernetes, nerdctl, and other tools as their main container runtime backend.
CRI (Container Runtime Interface)
A Kubernetes-specific gRPC API to connect Kubernetes with container runtimes.
Not used outside Kubernetes, but enables K8s to talk to containerd, CRI-O, etc.
CRI-O
A lightweight, Kubernetes-focused runtime that only runs OCI containers, using runc under the hood.
Mostly used in Kubernetes, but demonstrates how to build a minimal container runtime focused on open standards.
Component | What Is It? | Who Uses It? | Example Usage |
---|---|---|---|
OCI | Standards/specifications | Docker, Podman, CRI-O, containerd, runc | Ensures images/containers are compatible across tools |
runc | Container runtime (CLI) | containerd, CRI-O, Docker, Podman | Directly running a container from a bundle (e.g. runc run) |
containerd | Container runtime daemon | Docker, Kubernetes, nerdctl | Handles pulling images, managing storage/network, starts containers via runc |
CRI | K8s runtime interface (API) | Kubernetes only | Lets kubelet talk to containerd/CRI-O |
CRI-O | Lightweight container runtime for K8s | Kubernetes, OpenShift | Used as K8s container engine |
Building images:
Any tool (Docker, Podman, Buildah) can produce images following the OCI Image Spec so they're compatible everywhere.
Running containers:
Both Podman and Docker ultimately use runc (via containerd or directly) to create containers.
Managing many containers:
containerd can be used on its own (via ctr or nerdctl) or as a backend for Docker and Kubernetes.
Plug-and-play runtimes:
Thanks to OCI, you could swap runc for another OCI-compliant runtime (like Kata Containers for VMs, gVisor for sandboxing) without changing how you build or manage images.
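To see containerd working without Docker, its bundled ctr client can pull and run an image directly (sketch; assumes containerd is installed and running, container name is illustrative):

```sh
# pull an image and start a container straight on containerd
sudo ctr images pull docker.io/library/alpine:latest
sudo ctr run --rm -t docker.io/library/alpine:latest demo sh
```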
[User CLI / Orchestration]
|
[containerd / CRI-O]
|
[runc]
|
[Linux Kernel: namespaces, cgroups, etc]
graph TD
subgraph OCI_Standards
OCI1["OCI Image Spec"]
OCI2["OCI Runtime Spec"]
end
subgraph Orchestration_CLI
Docker["Docker CLI"]
Podman["Podman CLI"]
Kubelet["Kubelet"]
Nerdctl["nerdctl CLI"]
end
subgraph Container_Runtimes
containerd["containerd"]
crio["CRI-O"]
end
runc["runc"]
Kernel["Linux Kernel\n(namespaces, cgroups, seccomp, etc)"]
%% Connections
Docker --> containerd
Podman --> runc
Nerdctl --> containerd
Kubelet --> CRI[CRI API]
CRI --> containerd
CRI --> crio
containerd --> runc
crio --> runc
runc --> Kernel
OCI1 -.-> containerd
OCI1 -.-> crio
OCI2 -.-> runc
# create a new namespaces and run a command in it
unshare --mount --uts --ipc --user --pid --net --map-root-user --mount-proc --fork chroot ~vagrant/debian bash
# mount /proc for test
#mount -t proc proc /proc
#ps -aux
#ip addr show
#umount /proc
# show all namespaces
lsns
# show only pid namespace
lsns -s <pid>
lsns -p 3669
ls -l /proc/<pid>/ns
ls -l /proc/3669/ns
ps -o pid,pidns,netns,ipcns,utsns,userns,args -p <PID>
ps -o pid,pidns,netns,ipcns,utsns,userns,args -p 3669
# execute a command in namespace
sudo nsenter -t <PID> -n ip link show
sudo nsenter -t 3669 -n ip link show
# create a new network namespace
sudo ip netns add lxc1
# list network namespaces
ip netns list
# exec command in network namespace
sudo ip netns exec lxc1 ip addr show
# get cgroup version
stat -fc %T /sys/fs/cgroup
# get cgroups of system
systemctl status
systemd-cgls
cgcreate -g memory,cpu:lsf
cgclassify -g memory,cpu:lsf <PID>
# List capabilities of all processes
pscap
getcap /usr/bin/tcpdump
# add capabilities to tcpdump
sudo setcap cap_net_raw=ep /usr/bin/tcpdump
# remove capabilities from tcpdump
sudo setcap -r /usr/bin/tcpdump
sudo setcap '' /usr/bin/tcpdump
grep Cap /proc/<PID>/status
# use grep Cap /proc/<PID>/status to get the hexadecimal value (example: CapEff=0000000000002000)
capsh --decode=0000000000002000
# check AppArmor status
sudo aa-status
# unload all AppArmor profiles
aa-teardown
# loads AppArmor profiles into the kernel
apparmor_parser
# check SELinux status
sudo sestatus
# check SELinux mode
sudo getenforce
# set SELinux to enforcing mode
sudo setenforce 1
<a name="topic-352.2"></a>
Weight: 6
Description: Candidates should be able to use system containers using LXC and LXD. The version of LXC covered is 3.0 or higher.
Key Knowledge Areas:
lxd
lxc (including relevant subcommands)
foo
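While this section is still a stub, a minimal LXD/LXC workflow looks roughly like this (sketch; the container name and image alias are illustrative):

```sh
# first-time LXD setup (interactive)
sudo lxd init
# launch a system container from the official Ubuntu image server
lxc launch ubuntu:22.04 c1
# list, enter, stop and remove the container
lxc list
lxc exec c1 -- bash
lxc stop c1
lxc delete c1
```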
<a name="topic-352.3"></a>
Weight: 9
Description: Candidates should be able to manage Docker nodes and Docker containers. This includes understanding the architecture of Docker as well as understanding how Docker interacts with the node's Linux system.
Key Knowledge Areas:
dockerd
/etc/docker/daemon.json
/var/lib/docker/
docker
Dockerfile
# Examples of docker
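A few everyday docker commands, as a sketch of what this objective covers (image and container names are illustrative):

```sh
# daemon and node information
docker info
docker version
# run, list and remove containers
docker run -d --name web -p 8080:80 nginx
docker ps
docker logs web
docker stop web && docker rm web
# build an image from a Dockerfile in the current directory
docker build -t myapp:1.0 .
# images and cleanup
docker images
docker system prune -f
```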
<a name="topic-352.4"></a>
Weight: 3
Description: Candidates should understand the importance of container orchestration and the key concepts Docker Swarm and Kubernetes provide to implement container orchestration.
Key Knowledge Areas:
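A short hedged sketch of the kind of commands behind these concepts, Docker Swarm on one side and Kubernetes (kubectl) on the other; service and deployment names are illustrative:

```sh
# Docker Swarm: initialize a manager and scale a service
docker swarm init
docker service create --name web --replicas 3 -p 80:80 nginx
docker service ls
# Kubernetes: equivalent high-level view with kubectl
kubectl get nodes
kubectl create deployment web --image=nginx --replicas=3
kubectl get pods -o wide
```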
<a name="topic-353"></a>
<a name="topic-353.1"></a>
Weight: 2
Description: Candidates should understand common offerings in public clouds and have basic feature knowledge of commonly available cloud management tools.
Key Knowledge Areas:
IaaS, PaaS, SaaS
OpenStack
Terraform
# examples
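As a sketch of the tooling mentioned above (the Terraform workflow and the OpenStack CLI); no specific cloud or configuration files are implied:

```sh
# typical Terraform workflow in a directory containing *.tf files
terraform init
terraform plan
terraform apply
terraform destroy
# OpenStack CLI examples (require credentials in the environment)
openstack server list
openstack image list
```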
<a name="topic-353.2"></a>
Weight: 2
Description: Candidates should be able to use Packer to create system images. This includes running Packer in various public and private cloud environments as well as building container images for LXC/LXD.
Key Knowledge Areas:
packer
# examples
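A hedged sketch of the basic Packer workflow (the template file name is illustrative):

```sh
# initialize plugins, then validate and build an image from a Packer template
packer init .
packer validate template.pkr.hcl
packer build template.pkr.hcl
```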
<a name="topic-353.3"></a>
Weight: 3
Description: Candidates should able to use cloud-init to configure virtual machines created from standardized images. This includes adjusting virtual machines to match their available hardware resources, specifically, disk space and volumes. Additionally, candidates should be able to configure instances to allow secure SSH logins and install a specific set of software packages. Furthermore, candidates should be able to create new system images with cloud-init support.
Key Knowledge Areas:
cloud-init
user-data
/var/lib/cloud/
# examples
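Some commands that help when working with cloud-init on a booted instance (sketch; availability of subcommands varies with the cloud-init version):

```sh
# check whether cloud-init finished and with what result
cloud-init status --long
# inspect the metadata and user-data cached on disk
ls /var/lib/cloud/instance/
cat /var/lib/cloud/instance/user-data.txt
# re-run cloud-init from scratch on next boot (lab use only)
sudo cloud-init clean --logs
```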
<a name="topic-353.4"></a>
Weight: 3
Description: Candidate should be able to use Vagrant to manage virtual machines, including provisioning of the virtual machine.
Key Knowledge Areas:
vagrant
Vagrantfile
# examples
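A minimal Vagrant workflow sketch (the box name is illustrative):

```sh
# create a Vagrantfile for a box and manage the VM lifecycle
vagrant init debian/bookworm64
vagrant up
vagrant status
vagrant ssh
vagrant halt
vagrant destroy -f
```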
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
git checkout -b feature/AmazingFeature
git commit -m 'Add some AmazingFeature'
git push origin feature/AmazingFeature
Marcos Silvestrini - marcos.silvestrini@gmail.com
Project Link: https://github.com/marcossilvestrini/learning-lpic-3-305-300