Showing posts with label virtualization.

2019-07-24

Expose LXC/LXD Container Ports to Public

LXC/LXD is lightweight OS-level virtualization on Linux, much like OpenVZ, and it was used by early versions of Docker. The benefit of LXC/LXD is when you need virtualization but also need fast startup and near-baremetal performance (especially compared to full virtualization like KVM or VirtualBox). The difference between Docker and LXC is the level they target: Docker is more for application deployment, while LXC works at the machine level. LXD adds a REST API on top of LXC. Another main difference is that Docker has a copy-on-write file system built in. To start using LXD, just install and run:

sudo apt install lxc lxd libvirt-bin zfsutils-linux
sudo lxd init

# you will be asked questions like these:
Would you like to use LXD clustering? (yes/no) [default=no]: 
Do you want to configure a new storage pool? (yes/no) [default=yes]: 
Name of the new storage pool [default=default]: 
Name of the storage backend to use (dir, lvm, zfs) [default=zfs]: 
Create a new ZFS pool? (yes/no) [default=yes]: 
Would you like to use an existing block device? (yes/no) [default=no]: 
Size in GB of the new loop device (1GB minimum) [default=100GB]:    
Would you like to connect to a MAAS server? (yes/no) [default=no]: 
Would you like to create a new local network bridge? (yes/no) [default=yes]: 
What should the new bridge be called? [default=lxdbr0]: 
What IPv4 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
What IPv6 address should be used? (CIDR subnet notation, "auto" or "none") [default=auto]: 
Would you like LXD to be available over the network? (yes/no) [default=no]: yes
Address to bind LXD to (not including port) [default=all]: 127.0.0.1
Port to bind LXD to [default=8443]: 
Trust password for new clients: 
Again: 
Would you like stale cached images to be updated automatically? (yes/no) [default=yes] 
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]:
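
After init finishes, a quick sanity check (not part of the original steps, just optional) confirms the daemon, storage pool, and bridge were created:

# optional: verify the LXD daemon, storage pool, and bridge
lxc info | head
lxc storage list
lxc network list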

# or cache an image and run a container with the classic LXC tools
# (containers created this way only show up in lxc-ls, not in lxc list)
sudo lxc-create -t download -n container1 -- --dist ubuntu --release bionic --arch amd64
sudo lxc-start --name container1 --daemon
sudo lxc-info --name container1
sudo lxc-stop --name container1
sudo lxc-destroy --name container1

# or run one container
lxc launch ubuntu:18.04 container1


# run a shell inside, enable ssh root login with password, change the root password
lxc exec container1 bash
# append, don't overwrite the existing sshd config
echo '
PermitRootLogin yes
PasswordAuthentication yes
' >> /etc/ssh/sshd_config
systemctl restart ssh
passwd
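
Optionally, still inside the container, you can check that the config parses and sshd is listening before leaving (my own addition, not in the original steps):

# optional sanity check
sshd -t
ss -tlnp | grep :22
exit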

Then you'll need to expose (port forward) your container to the outside:

# get ip from your container
lxc list
+------------+---------+-----------------------+------------+-----------+
|    NAME    |  STATE  |         IPV4          |    TYPE    | SNAPSHOTS |
+------------+---------+-----------------------+------------+-----------+
| container1 | RUNNING | 10.123.126.200 (eth0) | PERSISTENT | 0         |
+------------+---------+-----------------------+------------+-----------+

# forward the host's port 2200 to the container's port 22
# note: DNAT happens in PREROUTING, so the FORWARD rule must match the already
# translated port 22, and the ACCEPT rule must come before the blanket DROP
iptables -A FORWARD -i eth0 -d 10.123.126.200 -p tcp --dport 22 -j ACCEPT
iptables -A FORWARD -i eth0 -j DROP
iptables -A FORWARD -i lxdbr0 -m state --state NEW,INVALID -j DROP
iptables -t nat -A PREROUTING -p tcp --dport 2200 -j DNAT --to 10.123.126.200:22
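
Note that these iptables rules are lost on reboot. One way to persist them (assuming the iptables-persistent package, which is not part of the original setup):

# save the current rules so they are restored on boot
sudo apt install iptables-persistent
sudo netfilter-persistent save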

You can test whether the port forwarding and SSH work using this command from another computer:

ssh -o PreferredAuthentications=keyboard-interactive,password -o PubkeyAuthentication=no root@thePublicIpAddress -p 2200

If you need to expose more ports, for example the container's port 80 on the host's port 8080, you can add rules like this:

# insert the ACCEPT above the earlier DROP rule; again match the translated port 80
iptables -I FORWARD -i eth0 -d 10.123.126.200 -p tcp --dport 80 -j ACCEPT
iptables -t nat -A PREROUTING -p tcp --dport 8080 -j DNAT --to 10.123.126.200:80

But for this case, I think it's better to use a reverse proxy instead.
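
For example, here's a minimal sketch of such a reverse proxy using nginx on the host (not part of the original setup; the container IP and site name are just the examples used above):

# install nginx on the host and proxy host port 8080 to the container's port 80
sudo apt install nginx
echo 'server {
    listen 8080;
    location / {
        proxy_pass http://10.123.126.200:80;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}' | sudo tee /etc/nginx/sites-available/container1
sudo ln -s /etc/nginx/sites-available/container1 /etc/nginx/sites-enabled/
sudo nginx -t && sudo systemctl reload nginx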

Here's the performance difference between a baremetal machine and LXC. First, baremetal:

CPU model:  Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz 
Number of cores: 8
CPU frequency:  2199.996 MHz
Total amount of RAM: 30151 MB
Total amount of swap:  MB
System uptime:   147 days, 20:48,    
I/O speed:  132 MB/s
Bzip 25MB: 8.01s
Download 100MB file: 69.2MB/s


I/O speed(1st run)   : 127 MB/s
I/O speed(2nd run)   : 107 MB/s
I/O speed(3rd run)   : 107 MB/s
Average I/O speed    : 113.7 MB/s

LXC (the higher I/O numbers are probably because writes are cached and not yet committed to disk?):

CPU model:  Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz 
Number of cores: 8
CPU frequency:  2199.996 MHz
Total amount of RAM: 30151 MB
Total amount of swap:  MB
System uptime:   20 min,    
I/O speed:  451 MB/s
Bzip 25MB: 9.40s
Download 100MB file: 63.7MB/s


I/O speed(1st run)   : 925 MB/s
I/O speed(2nd run)   : 1.2 GB/s
I/O speed(3rd run)   : 956 MB/s
Average I/O speed    : 1036.6 MB/s
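
The inflated LXC numbers most likely come from writes landing in the ZFS/page cache before they are flushed. If the I/O speed above was measured with a plain dd write (an assumption on my part), a fairer test would include the final flush in the timing:

# write 1 GB and count the final flush, so cached-but-uncommitted writes don't inflate the result
dd if=/dev/zero of=testfile bs=64k count=16k conv=fdatasync
rm testfile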

2016-10-01

LXC Web Panel

As you (probably) already know, LXC (Linux Containers) and OpenVZ, both operating-system-level virtualization, are much faster than hardware virtualization, see the comparison. For those who dislike the CLI, you can use a web interface called LXC Web Panel (for LXC 0.7 to 0.9, or the newer fork for 1.0+ here) to manage your containers:

wget https://lxc-webpanel.github.io/tools/install.sh -O - | sudo bash

This software only works on Ubuntu 12.04 or later. Despite its performance, there are of course limitations, such as: guests can only use the host's OS and architecture. You can find more info on their website or this blog post.
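
If I remember correctly the panel listens on port 5000 by default (login admin/admin), so a quick check after install:

# should return the panel's login page
curl -I http://localhost:5000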




So why LXC instead of Docker or full virtualization? Because it's simpler :3 Yes, they are different kinds of animals; don't forget to check out LXD and other alternatives too.



2014-08-23

How to Install VirtualBox 4.3.14 on ArchLinux

VirtualBox is one of many virtualization programs that can run on Linux, and the one with the slowest CPU performance according to this article. Installing VirtualBox is quite simple, but to make it run you must also install certain packages. To install VirtualBox, type:

yaourt --needed --noconfirm -S --force virtualbox virtualbox-host-modules linux-headers

To make it run, you must build the kernel module first; the easiest way is using dkms:

sudo dkms autoinstall

or

sudo dkms install vboxhost/4.3.14

then load the module using:

sudo modprobe vboxdrv
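
To confirm the module actually loaded (just a quick optional check):

lsmod | grep vbox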

Now you can start and run VirtualBox images without error.
If you want the modules to be rebuilt automatically when a new kernel is installed, use this command:

sudo systemctl enable dkms