Tue, 19 April 2016, 15:39

KVM live backup

Live backup of your VM running in KVM under Ubuntu using qcow2 disk images

 

# First, switch the AppArmor profiles to complain mode so libvirt can access the snapshot files

aa-status

aa-complain /usr/sbin/libvirtd

aa-complain /etc/apparmor.d/libvirt/libvirt-xxxxxxxxxxxxxxxxxx
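# When the backup is done, the profiles can be returned to enforce mode
# (a sketch; same placeholder profile name as above)

aa-enforce /usr/sbin/libvirtd

aa-enforce /etc/apparmor.d/libvirt/libvirt-xxxxxxxxxxxxxxxxxx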

 

# View disks

virsh domblklist test-backup
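# Example output (hypothetical paths; the same file names are used below):
#
#  Target     Source
# ------------------------------------------------
#  vda        /home/virtual/test_backup/drive-c.qcow2
#  vdb        /home/virtual/test_backup/drive-d.qcow2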

 

# Suspend the domain (optional; I do this so the state of the virtual machine stays unchanged during the backup)

virsh suspend test-backup

 

# Create an external, disk-only snapshot

virsh snapshot-create-as --domain test-backup test-snap1 \

--diskspec vda,file=/home/virtual/test_backup/test-c.img \

--diskspec vdb,file=/home/virtual/test_backup/test-d.img \

--disk-only --atomic

# or, letting libvirt create the overlay files next to the originals (named <disk>.<snapname>):

virsh snapshot-create-as --domain test-backup test-backup-snap1 --disk-only --atomic

 

# View snapshots

virsh snapshot-list test-backup

 

# View disks (the domain should now be running on the snapshot overlays)

virsh domblklist test-backup

 

# Copy the frozen base images to the backup dir

rsync -avh --progress drive-c.qcow2 export/drive-c.qcow2-copy

rsync -avh --progress drive-d.qcow2 export/drive-d.qcow2-copy
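# Optionally, sanity-check the copied images with qemu-img
# (file names from the rsync step above)

qemu-img check export/drive-c.qcow2-copy

qemu-img check export/drive-d.qcow2-copy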

 

# Perform an active blockcommit, live-merging the overlay contents back into the base images

virsh blockcommit test-backup vda --active --verbose --pivot

virsh blockcommit test-backup vdb --active --verbose --pivot

 

# View disks (the domain should be back on the original images)

virsh domblklist test-backup

 

# Resume the domain

virsh resume test-backup

 

# View snapshots again

virsh snapshot-list test-backup

 

# Delete the snapshot (metadata only; the overlay files are removed below)

virsh snapshot-delete test-backup test-backup-snap1 --metadata

 

# View snapshots to verify

virsh snapshot-list test-backup

 

# Delete snapshot files

 

rm -f /home/virtual/test_backup/drive-c.test-backup-snap1

rm -f /home/virtual/test_backup/drive-d.test-backup-snap1
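The whole procedure can be wrapped into a rough script. This is a minimal sketch under the same assumptions as above (hypothetical domain name, disk targets vda/vdb, and paths); there is no error handling beyond set -e:

#!/bin/sh
# Minimal live-backup sketch (hypothetical names and paths from this post)
set -e
DOM=test-backup
SNAP=test-backup-snap1
DIR=/home/virtual/test_backup

virsh suspend $DOM                                      # optional: freeze guest state
virsh snapshot-create-as --domain $DOM $SNAP --disk-only --atomic
rsync -avh $DIR/drive-c.qcow2 $DIR/export/drive-c.qcow2-copy
rsync -avh $DIR/drive-d.qcow2 $DIR/export/drive-d.qcow2-copy
virsh blockcommit $DOM vda --active --verbose --pivot   # merge overlays back
virsh blockcommit $DOM vdb --active --verbose --pivot
virsh resume $DOM
virsh snapshot-delete $DOM $SNAP --metadata
rm -f $DIR/drive-c.$SNAP $DIR/drive-d.$SNAP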

 

 

 

Fri, 9 October 2015, 11:19

Installing PostgreSQL BDR

Install PostgreSQL BDR

For those who get these errors:

apt-get install postgresql-bdr-9.4
...
The following packages have unmet dependencies:
postgresql-bdr-9.4: Depends: postgresql-bdr-client-9.4 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

Trying to install the dependency directly fails too:

apt-get install postgresql-bdr-client-9.4
...
The following packages have unmet dependencies:
postgresql-bdr-client-9.4: Depends: libpq5 (>= 9.4.4) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.

 

Here's my solution, based on https://wiki.postgresql.org/wiki/Apt and http://packages.2ndquadrant.com/bdr/apt/:

 

sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

sudo sh -c 'echo "deb http://packages.2ndquadrant.com/bdr/apt/ $(lsb_release -cs)-2ndquadrant main" > /etc/apt/sources.list.d/2ndquadrant.list.list'

sudo apt-get install wget ca-certificates

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

wget --quiet -O - http://packages.2ndquadrant.com/bdr/apt/AA7A6805.asc | sudo apt-key add -

sudo apt-get update

sudo apt-get upgrade

sudo apt-get install postgresql-bdr-9.4 postgresql-bdr-9.4-bdr-plugin
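To verify that the BDR build is what's running, a quick check (assuming the default 9.4 cluster was created and started automatically):

sudo -u postgres psql -c 'SELECT version();'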

 

 

 

Thu, 4 June 2015, 15:54

Building an LXC Server

Building an LXC Server on Ubuntu with ZFS and a container with public IP address

 

First update Ubuntu

 

apt-get update

apt-get dist-upgrade

 

Setup ZFS

 

apt-add-repository ppa:zfs-native/stable

apt-get update

apt-get install ubuntu-zfs

 

Install LXC

 

sudo apt-get install lxc

 

Configure ZFS

 

Create ZFS pool: 

sudo zpool create -f tank /dev/sdX
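Check that the pool came up healthy:

zpool status tank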

 

Keep in mind that deduplication takes much more memory and sometimes CPU.

The rule of thumb is 1 GB of RAM per TB of data; for deduplicated zpools you should actually have 5 GB of RAM per TB of data. I don't use it.

zfs set dedup=on tank

 

Turn on compression (lz4) and create the filesystems:

zfs set compression=on tank

zpool set feature@lz4_compress=enabled tank

zfs set compression=lz4 tank

 

zfs create tank/lxc

zfs create tank/lxc/containers
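Verify that the options took effect and the datasets exist:

zfs get compression,dedup tank

zfs list -r tank/lxc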

 

To configure LXC to use ZFS as the backing store and set the default LXC path, add the following to /etc/lxc/lxc.conf:

 

lxc.lxcpath = /tank/lxc/containers

lxc.bdev.zfs.root = tank/lxc/containers
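To confirm LXC picked up the new default path (lxc-config ships with the lxc package):

lxc-config lxc.lxcpath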

 

 

Creating a Container

 

Create the first container by doing:

 

lxc-create -t ubuntu -n node.name -B zfs
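If the ZFS backing store worked, the new container shows up as a dataset:

sudo zfs list -r tank/lxc/containers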

 

 

Setup Bridged Network

 

apt-get install bridge-utils

 

Important Commands

Show bridge interfaces:

 

brctl show

 

Simple Bridge

This setup can be used to connect multiple network interfaces. The bridge acts as a switch: each attached interface is connected directly to the physical network.

 

Edit /etc/network/interfaces: comment out eth0 and add br0.

 

For dynamic IP:

 

#auto eth0

#iface eth0 inet dhcp

auto br0

iface br0 inet dhcp

bridge_ports eth0

bridge_stp off

bridge_fd 0

bridge_maxwait 0

 

For static IP:

 

auto br0

iface br0 inet static

bridge_ports eth0

bridge_stp off

bridge_fd 0

bridge_maxwait 0

address 192.168.0.101

netmask 255.255.255.0

network 192.168.0.0

broadcast 192.168.0.255

gateway 192.168.0.254

dns-nameservers 8.8.8.8 8.8.4.4

 

 

 

Reboot the server and check that everything is OK.
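A quick way to verify that the bridge came up and got an address:

brctl show

ip addr show br0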

 

Edit /tank/lxc/containers/node.name/config

 

lxc.network.type = veth

lxc.network.flags = up

lxc.network.link = br0

lxc.network.hwaddr = 00:16:3e:30:fa:4a

 

 

Start the node:

lxc-start -n node.name -d

 

Connect to the node:

lxc-console -n node.name

 

In the node's /etc/network/interfaces:

 

auto eth0

iface eth0 inet static

address 192.168.0.102

netmask 255.255.255.0

network 192.168.0.0

broadcast 192.168.0.255

gateway 192.168.0.254

dns-nameservers 8.8.8.8 8.8.4.4
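After restarting networking inside the node (or rebooting it), a quick reachability check against the gateway and DNS from the example above:

ping -c 3 192.168.0.254

ping -c 3 8.8.8.8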

 

 

It's possible to set a static IP address in the node config and use DHCP inside the node; that works too.

But IPv6 didn't work inside the node, and when I disabled it the node stopped receiving an IP address at all.

I had to use a static IP.

I'm going to solve this problem later.

 

 

 

Tue, 29 July 2014, 10:47

SmartOS zone that will serve up SmartOS

PXE Booting SmartOS from SmartOS zone

Motivation

We bought a new Supermicro server: a chassis and four blades. The provider installed Ubuntu on one of them, and from it I have already set up SmartOS on the three other blades. As you know, a SmartOS host machine boots from a PXE server. But I don't need a separate blade running Linux, so for safety I decided that each blade should be able to act as a loader for the rest. It was possible to deploy Linux in KVM on each host, but I found a better solution: deploy the PXE server in a native SmartOS zone. Isn't it wonderful when SmartOS can boot SmartOS?

Here's how to set up a simple PXE server in a SmartOS zone that will serve up SmartOS.

 

imgadm update

imgadm import 8639203c-d515-11e3-9571-5bf3a74f354f

 

Create pxe-server.json with the following:
 

Zone Configuration

{
  "alias": "pxe-server",
  "hostname": "pxe-server",
  "brand": "joyent",
  "max_physical_memory": 64,
  "quota": 2,
  "image_uuid": "8639203c-d515-11e3-9571-5bf3a74f354f",
  "resolvers": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "nics": [
    {
      "nic_tag": "admin",
      "ip": "192.168.0.2",
      "netmask": "255.255.255.0",
      "gateway": "192.168.0.1",
      "dhcp_server": "1"
    }
  ]
}


vmadm create -f pxe-server.json

 

Setting up TFTP

Use zlogin to log into the zone:

zlogin <uuid>

In the zone:

pkgin -y install tftp-hpa

mkdir /tftpboot

echo "tftp dgram udp wait root /opt/local/sbin/in.tftpd in.tftpd -s /tftpboot" > /tmp/tftp.inetd

svcadm enable inetd

inetconv -i /tmp/tftp.inetd -o /tmp

svccfg import /tmp/tftp-udp.xml

svcadm restart tftp/udp


Setting up DHCP (using Dnsmasq)

pkgin -y install dnsmasq

 
Edit /opt/local/etc/dnsmasq.conf


dhcp-range=192.168.0.200,192.168.0.220,2h
dhcp-match=set:gpxe,175
dhcp-boot=tag:!gpxe,undionly.kpxe
dhcp-boot=smartos.ipxe
dhcp-leasefile=/etc/dnsmasq.leases



svcadm enable dnsmasq
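Both services should now be online:

svcs tftp/udp dnsmasq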

 

Setting up the tftpboot directory

Ben Rockwood provides a version of undionly.kpxe on his site. Run the following to get the PXE chainload binary in place:


cd /tftpboot

curl http://cuddletech.com/IPXE-100612_undionly.kpxe > undionly.kpxe
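A quick smoke test from inside the zone (assuming the tftp-hpa package also installed the tftp client):

cd /tmp

tftp 127.0.0.1 -c get undionly.kpxe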


At this point a generic PXE boot server is complete. iPXE will still expect smartos.ipxe, but that can be created with whatever content is needed. For those interested in booting SmartOS, what follows are the steps to provide SmartOS boot services on this server.


Providing SmartOS PXE Boot Services

A template iPXE config is useful both upfront and when updating to new platform releases. Create /tftpboot/smartos.ipxe.tpl with the following content (-B smartos=true is essential, otherwise logins will fail):

#!ipxe
# /tftpboot/smartos.ipxe.tpl
kernel /smartos/$release/platform/i86pc/kernel/amd64/unix -B smartos=true
initrd /smartos/$release/platform/i86pc/amd64/boot_archive
boot

 

cd /tftpboot

mkdir smartos

 

Deploy/Update to the latest SmartOS platform release

The steps in this section work for both initial deployment and upgrades as Joyent releases them.

Next get the latest SmartOS platform and massage it into a workable shape for our iPXE config:

 

cd /tftpboot/smartos

curl https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/platform-latest.tgz > /var/tmp/platform-latest.tgz
(At the time of writing, the old URL https://download.joyent.com/pub/iso/platform-latest.tgz returns a 404.)

cat /var/tmp/platform-latest.tgz | tar xz

directory=`ls | grep platform- | sort | tail -n1`

release=${directory:9}

mv $directory $release

cd $release

mkdir platform

mv i86pc platform

cd /tftpboot

cat smartos.ipxe.tpl | sed -e"s/\$release/$release/g" > smartos.ipxe
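A quick sanity check that $release was substituted into the final config:

grep kernel smartos.ipxe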


Make sure PXE boot is enabled on each blade and that it is first in the boot sequence.


Thanks

Thanks to Alain O'Dea for his notes on setting up Ubuntu Server 12.04.1 LTS as a PXE server to boot SmartOS, and big thanks to Ben Rockwood for creating and maintaining the PXE Booting SmartOS wiki page. Without their instructions I could not have done it.


Enjoy and stay tuned!