Tue, September 13, 11:17

Chrooted SSH/SFTP for a Linux user



In /etc/ssh/sshd_config, replace the stock sftp-server subsystem with the in-process internal-sftp (no extra binaries then need to exist inside the chroot):

#Subsystem sftp /usr/lib/openssh/sftp-server

Subsystem sftp internal-sftp


Match Group chrooted

    ChrootDirectory /www/sites

    X11Forwarding no

    AllowTcpForwarding no

    ForceCommand internal-sftp
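
To put a user under the chroot, something like this works (a sketch: the group name chrooted matches the Match block above, the user name and paths are just examples):

# create the group referenced by the Match block
groupadd chrooted

# SFTP-only user: home under the chroot, no login shell
useradd -d /www/sites/example -s /usr/sbin/nologin -G chrooted example
passwd example

# sshd requires the chroot target to be root-owned and not group/world-writable
chown root:root /www/sites
chmod 755 /www/sites

# apply the sshd_config changes
service ssh restart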





Tue, April 19, 15:39

kvm live backup

Live backup of your VM running in KVM under Ubuntu using qcow2 disk images


# First, switch AppArmor to complain mode so libvirt can reach the backup paths


aa-complain /usr/sbin/libvirtd

aa-complain /etc/apparmor.d/libvirt/libvirt-xxxxxxxxxxxxxxxxxx


# View disks

virsh domblklist test-backup


# Suspend the domain (optional; I do it so the state of the virtual machine stays unchanged during the backup)

virsh suspend test-backup


# Create snapshot

virsh snapshot-create-as --domain test-backup test-backup-snap1 \

--diskspec vda,file=/home/virtual/test_backup/test-c.img \

--diskspec vdb,file=/home/virtual/test_backup/test-d.img \

--disk-only --atomic

# or

virsh snapshot-create-as --domain test-backup test-backup-snap1 --disk-only --atomic


# View snapshots

virsh snapshot-list test-backup


# View disks (the domain should now be running on the snapshot overlay files)

virsh domblklist test-backup


# Copy drives to backup dir

rsync -avh --progress drive-c.qcow2 export/drive-c.qcow2-copy

rsync -avh --progress drive-d.qcow2 export/drive-d.qcow2-copy


# Perform an active blockcommit, live-merging the snapshot contents back into the base images

virsh blockcommit test-backup vda --active --verbose --pivot

virsh blockcommit test-backup vdb --active --verbose --pivot


# View disks (the domain should be back on the original images)

virsh domblklist test-backup


# Resume the domain

virsh resume test-backup


# View snapshots again

virsh snapshot-list test-backup


# Delete snapshots

virsh snapshot-delete test-backup test-backup-snap1 --metadata


# View snapshots to confirm the deletion

virsh snapshot-list test-backup


# Delete snapshot files


rm -f /home/virtual/test_backup/drive-c.test-backup-snap1

rm -f /home/virtual/test_backup/drive-d.test-backup-snap1
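
Strung together, the whole procedure fits in a small script. A rough sketch, assuming the same domain, disk targets and file names as above (the backup destination is a placeholder):

#!/bin/sh
set -e
DOM=test-backup
SNAP=test-backup-snap1
SRC=/home/virtual/test_backup
DEST=/backup/kvm   # placeholder destination

virsh suspend "$DOM"
# external disk-only snapshot: writes now go to overlay files, base images are stable
virsh snapshot-create-as --domain "$DOM" "$SNAP" --disk-only --atomic
rsync -avh "$SRC/drive-c.qcow2" "$DEST/"
rsync -avh "$SRC/drive-d.qcow2" "$DEST/"
# merge the overlays back and pivot the domain onto the base images
virsh blockcommit "$DOM" vda --active --verbose --pivot
virsh blockcommit "$DOM" vdb --active --verbose --pivot
virsh resume "$DOM"
# drop the snapshot metadata and the now-unused overlay files
virsh snapshot-delete "$DOM" "$SNAP" --metadata
rm -f "$SRC/drive-c.$SNAP" "$SRC/drive-d.$SNAP"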




Fri, October 9, 2015, 11:19

Installing PostgreSQL BDR

Install PostgreSQL BDR

For those who get errors like these:

apt-get install postgresql-bdr-9.4
The following packages have unmet dependencies:
postgresql-bdr-9.4: Depends: postgresql-bdr-client-9.4 but it is not going to be installed
E: Unable to correct problems, you have held broken packages.


apt-get install postgresql-bdr-client-9.4
The following packages have unmet dependencies:
postgresql-bdr-client-9.4: Depends: libpq5 (>= 9.4.4) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.


Here's my solution, based on https://wiki.postgresql.org/wiki/Apt and http://packages.2ndquadrant.com/bdr/apt/:


sudo sh -c 'echo "deb http://apt.postgresql.org/pub/repos/apt/ $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'

sudo sh -c 'echo "deb http://packages.2ndquadrant.com/bdr/apt/ $(lsb_release -cs)-2ndquadrant main" > /etc/apt/sources.list.d/2ndquadrant.list.list'

sudo apt-get install wget ca-certificates

wget --quiet -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | sudo apt-key add -

wget --quiet -O - http://packages.2ndquadrant.com/bdr/apt/AA7A6805.asc | sudo apt-key add -

sudo apt-get update

sudo apt-get upgrade

sudo apt-get install postgresql-bdr-9.4 postgresql-bdr-9.4-bdr-plugin
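
To confirm apt now resolves everything from the right repositories, a quick check with standard apt tools:

apt-cache policy postgresql-bdr-9.4 libpq5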





Migrating a KVM VM between SmartOS nodes, the manual way.

On the Source SmartOS Node:

zfs snapshot zones/d8944236-9d33-40ce-a112-5445cc40c412@migration
zfs snapshot zones/d8944236-9d33-40ce-a112-5445cc40c412-disk0@migration
zfs send zones/d8944236-9d33-40ce-a112-5445cc40c412@migration  | ssh root@ zfs recv zones/d8944236-9d33-40ce-a112-5445cc40c412
zfs send zones/d8944236-9d33-40ce-a112-5445cc40c412-disk0@migration  | ssh root@ zfs recv zones/d8944236-9d33-40ce-a112-5445cc40c412-disk0
scp /etc/zones/d8944236-9d33-40ce-a112-5445cc40c412.xml root@

On the Target SmartOS Node:

echo 'd8944236-9d33-40ce-a112-5445cc40c412:installed:/zones/d8944236-9d33-40ce-a112-5445cc40c412:d8944236-9d33-40ce-a112-5445cc40c412' >> /etc/zones/index
vmadm start d8944236-9d33-40ce-a112-5445cc40c412
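
Check the result on the target with the standard listing commands:

vmadm list | grep d8944236
zfs list | grep d8944236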


Non-Global Zone:

It's simple:

[root@]# vmadm send 1a666e33-f4ca-4c4e-a4c6-a47dd3d78096 | ssh vmadm receive

Successfully sent VM 1a666e33-f4ca-4c4e-a4c6-a47dd3d78096
Successfully received VM 1a666e33-f4ca-4c4e-a4c6-a47dd3d78096




Thu, June 4, 2015, 15:54

Building an LXC Server

Building an LXC Server on Ubuntu with ZFS and a container with public IP address


First update Ubuntu


apt-get update

apt-get dist-upgrade


Setup ZFS


apt-add-repository ppa:zfs-native/stable

apt-get update

apt-get install ubuntu-zfs


Install LXC


sudo apt-get install lxc


Configure ZFS


Create ZFS pool: 

sudo zpool create -f tank /dev/sdX


Keep in mind that deduplication takes much more memory and sometimes CPU.

The rule of thumb is 1 GB of RAM per 1 TB of data; for deduplicated pools you should actually have about 5 GB of RAM per 1 TB of data. I don't use it, but if you want to:

zfs set dedup=on tank


Turn on compression and create the filesystems:

zpool set feature@lz4_compress=enabled tank

zfs set compression=lz4 tank


zfs create tank/lxc

zfs create tank/lxc/containers


To configure LXC to use ZFS as the backing store and set the default LXC path, add the following to /etc/lxc/lxc.conf:


lxc.lxcpath = /tank/lxc/containers

lxc.bdev.zfs.root = tank/lxc/containers



Creating a Container


Create the first container by doing:


lxc-create -t ubuntu -n node.name -B zfs
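
If the container landed on ZFS as intended, it shows up both in LXC and as its own dataset:

lxc-ls --fancy

zfs list -r tank/lxc/containers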



Setup Bridged Network


apt-get install bridge-utils


Important Commands

Show bridge interfaces:


brctl show


Simple Bridge

This setup can be used to connect multiple network interfaces. The bridge acts as a switch: each interface attached to it (including container veth interfaces) sits directly on the physical network.


Edit /etc/network/interfaces: comment out eth0 and add br0.


For dynamic IP:


#auto eth0

#iface eth0 inet dhcp

auto br0

iface br0 inet dhcp

bridge_ports eth0

bridge_stp off

bridge_fd 0

bridge_maxwait 0


For static IP:


auto br0

iface br0 inet static

bridge_ports eth0

bridge_stp off

bridge_fd 0

bridge_maxwait 0
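
The address lines are omitted above. Filled in with placeholders (192.0.2.x is a documentation range; substitute your own addresses), the stanza would look like:

auto br0
iface br0 inet static
address 192.0.2.10
netmask 255.255.255.0
gateway 192.0.2.1
bridge_ports eth0
bridge_stp off
bridge_fd 0
bridge_maxwait 0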


Reboot the server and check that everything is OK.


Edit /tank/lxc/containers/node.name/config


lxc.network.type = veth

lxc.network.flags = up

lxc.network.link = br0

lxc.network.hwaddr = 00:16:3e:30:fa:4a



start the node:

lxc-start -n node.name -d


connect to the node:

lxc-console -n node.name
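
(To detach from the console without stopping the container, the lxc-console escape sequence is Ctrl+a q.)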


Inside the LXC node, edit /etc/network/interfaces:


auto eth0

iface eth0 inet static
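
The address lines are again omitted; with placeholder values (your container's public IP will differ) the stanza would be:

auto eth0
iface eth0 inet static
address 192.0.2.20
netmask 255.255.255.0
gateway 192.0.2.1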


It's also possible to set a static IP address in the node config and use DHCP inside the node; that works too.

But IPv6 didn't work inside the node, and when I disabled it the node stopped receiving an IP address at all, so I had to use a static IP.

I'm going to look into this problem later.




Tue, April 28, 2015, 16:28

Installing FreeSWITCH on SmartOS

OK, I've got it and tested it!

freeswitch@freeswitch:8021@internal> version 

FreeSWITCH Version 1.5.15b+git~20150425T191526Z~3058709a92~64bit (git 3058709 2015-04-25 19:15:26Z 64bit)


Update the system to the latest packages:

pkgin up    # refresh the package database

pkgin ug    # upgrade installed packages

Install the necessary software:

pkgin install git gcc47 gmake autoconf gettext gettext-m4 automake libtool pkg-config \

unixodbc speex libogg libvorbis libshout ldns editline libjpeg-turbo libpqxx yasm nasm

mkdir /opt/local/src

cd /opt/local/src

To build from the current release source code:

git clone -b v1.4 https://stash.freeswitch.org/scm/fs/freeswitch.git

(or if you want to build from Master, the latest source code: git clone https://freeswitch.org/stash/scm/fs/freeswitch.git )

cd freeswitch

./bootstrap.sh -j

If you want to add or remove modules from the build, edit modules.conf:

vi modules.conf 

./configure -C --prefix=/opt/local/freeswitch --enable-64 --enable-optimization --enable-core-pgsql-support


make install

make cd-sounds-install cd-moh-install

When finished, FreeSWITCH should be installed under /opt/local/freeswitch.
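
To give it a first spin (paths follow the --prefix above; -nc starts FreeSWITCH without a console, and fs_cli attaches to it over the event socket):

/opt/local/freeswitch/bin/freeswitch -nc

/opt/local/freeswitch/bin/fs_cli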


Enjoy and stay tuned!


Thu, January 22, 2015, 14:24

Problem with booting SmartOS

Recently we had a problem booting SmartOS.


Valentin Zaretsky described the following issue: 

« SmartOS hang strangely: smartos itself, native VM's and KVM's continued responding to ping on their IP's but nothing else worked. 

After hardware restart I cannot login to system: after getting root password it waits for something and does not show shell prompt. VM's are not running. But network interface comes up, ssh prints banner «SSH-2.0-Sun_SSH_1.5» and the same way as on console hangs after getting password from user.  on client ssh -v stops on the following:  debug1: kex: server->client aes128-ctr hmac-md5 none debug1: kex: client->server aes128-ctr hmac-md5 none debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<3072<8192) sent debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP   

When I boot with noimport=true, I'm able to log in with the default password and to do zpool import zones, and the pool seems to be in normal, healthy status. The system is rather old (20131128T230213Z) but had no problems in all its running time, so I did not upgrade it. »


Keith Wesolowski gave us the following advice:

« Most, but not all, instances like this where the system seems ok until you try to actually log in or do something with it are actually caused by problems in the disk subsystem.  These problems may be transient or persistent, and they may be caused by software bugs or by hardware or firmware issues; the latter are more common.  When you boot with noimport and then import, can you subsequently enable all services and then ssh in?  What does fmadm faulty show you?  If nothing, are there errors occurring that are precursors to fault diagnosis?  You can find that out via fmdump -e.  Anything in the logs (you'll need to import the pool first to read them, which is also the case with the FMA data).  Failing all of that, I would recommend booting with -m milestone=none. You should be able to log in using the *platform* default root password (which is not the same as the one you set at setup time).  From there, you should be able to set up DTrace probes to monitor the progress of startup, then do 'svcadm milestone all' to start all the services.  DO NOT LOG OUT OF THE CONSOLE!  You will need it to monitor and debug the problem.  

If all services (except of course console-login) seem to come up normally, you can then use your favourite tools — DTrace, truss, mdb, etc. — to debug the sshd server when you try to log in.  You'll likely need to iterate a few times to narrow your search for the problem as your understanding improves.  This is a naive brute-force approach to debugging that almost always yields progress of some kind, even if it's negative progress.  If you can't learn anything at all this way, a last-ditch option (which likely won't work if the problem is with the disks or HBA) is to generate an NMI, which will cause the system to panic and create a crash dump.  If you then boot and import the pool, you should be able to run savecore to grab the dump, which can then be analysed to better understand why things were hanging.  How to generate an NMI is hardware-specific, and most desktop or consumer-type systems don't support it.  Among those that do, the most common way is to issue the IPMI 'chassis power diag' command remotely using ipmitool.  We ship this tool, and it's widely available on all POSIX-type OSs.  If your system doesn't have a BMC, or that doesn't work, consult your vendor-supplied docs. »
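
Condensed into a rough checklist (these are the standard SmartOS/illumos commands he mentions; we haven't worked through all of them ourselves yet):

# boot with noimport=true at the loader, then import the pool manually
zpool import zones

# any diagnosed faults? any error telemetry that has not yet been diagnosed?
fmadm faulty
fmdump -e

# failing that: boot with -m milestone=none, log in on the console with the
# *platform* default root password, and start everything while watching it
svcadm milestone all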


We have yet to check everything he advised, but we now know far more about the SmartOS boot process than we ever did before.



Tue, July 29, 2014, 10:47

SmartOS zone that will serve up SmartOS

PXE Booting SmartOS from SmartOS zone


We've bought a new Supermicro server: a chassis and four blades. The provider installed Ubuntu on one of them, and from it I have already set up SmartOS on the other three blades. As you know, a SmartOS host machine boots from a PXE server. But I don't need a separate blade running Linux, so for resilience I decided that each blade should be able to act as a loader for the rest. It was possible to deploy Linux in KVM on each host, but I found a better solution: deploy the PXE server in a native SmartOS zone. Isn't it wonderful when SmartOS can boot SmartOS?

Here's how to set up a simple PXE server in a SmartOS zone that will serve up SmartOS:


imgadm update

imgadm import 8639203c-d515-11e3-9571-5bf3a74f354f


Create pxe-server.json with the following zone configuration (the IP values are elided; fill in your own):

  "alias": "pxe-server",
  "hostname": "pxe-server",
  "brand": "joyent",
  "max_physical_memory": 64,
  "quota": 2,
  "image_uuid": "8639203c-d515-11e3-9571-5bf3a74f354f",
  "resolvers": [

 "nics": [

      "nic_tag": "admin",
      "ip": "",
      "netmask": "",
      "gateway": "",
      "dhcp_server": "1"

vmadm create -f pxe-server.json


Setting up TFTP

Use zlogin to log into the zone:

zlogin <uuid>

In the zone:

pkgin -y install tftp-hpa

mkdir /tftpboot

echo "tftp dgram udp wait root /opt/local/sbin/in.tftpd in.tftpd -s /tftpboot" > /tmp/tftp.inetd

svcadm enable inetd

inetconv -i /tmp/tftp.inetd -o /tmp

svccfg import /tmp/tftp-udp.xml

svcadm restart tftp/udp

Setting up DHCP (using Dnsmasq)

pkgin -y install dnsmasq

Edit /opt/local/etc/dnsmasq.conf
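
The config contents are left out of the original note; here is a minimal sketch of what goes into dnsmasq.conf for PXE chainloading (the interface name and address range are placeholders for your admin network; the ipxe tag keeps undionly.kpxe from chainloading itself forever):

# listen only on the zone's admin NIC (net0 in a SmartOS zone)
interface=net0
# DHCP leases on the admin network (placeholder range)
dhcp-range=10.0.0.100,10.0.0.200,12h
# plain PXE clients get the iPXE chainloader...
dhcp-boot=undionly.kpxe
# ...while clients already running iPXE (they send option 175) get the boot script
dhcp-match=set:ipxe,175
dhcp-boot=tag:ipxe,smartos.ipxe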


svcadm enable dnsmasq


Setting up the tftpboot directory

Ben Rockwood provides a version of undionly.kpxe on his site. Run the following to get the PXE chainload binaries in place:

cd /tftpboot

curl http://cuddletech.com/IPXE-100612_undionly.kpxe > undionly.kpxe

At this point a generic PXE boot server is complete. iPXE will still expect smartos.ipxe, but that can be created with whatever content is needed. For those interested in booting SmartOS, what follows are the steps to provide SmartOS boot services on this server.

Providing SmartOS PXE Boot Services

A template iPXE config is useful both upfront and when updating to new platform releases. Create /tftpboot/smartos.ipxe.tpl with the following content (-B smartos=true is essential, otherwise logins will fail):

#!ipxe
# /tftpboot/smartos.ipxe.tpl
kernel /smartos/$release/platform/i86pc/kernel/amd64/unix -B smartos=true
initrd /smartos/$release/platform/i86pc/amd64/boot_archive
boot


cd /tftpboot

mkdir smartos


Deploy/Update to the latest SmartOS platform release

The steps in this section work for both initial deployment and upgrades as Joyent releases them.

Next get the latest SmartOS platform and massage it into a workable shape for our iPXE config:


cd /tftpboot/smartos

curl https://us-east.manta.joyent.com/Joyent_Dev/public/SmartOS/platform-latest.tgz > /var/tmp/platform-latest.tgz
(As of this writing, the old URL https://download.joyent.com/pub/iso/platform-latest.tgz is invalid: 404.)

cat /var/tmp/platform-latest.tgz | tar xz

directory=`ls | grep platform- | sort | tail -n1`

# the template expects just the timestamp, so strip the "platform-" prefix
release=${directory#platform-}

mv $directory $release

cd $release

mkdir platform

mv i86pc platform

cd /tftpboot

cat smartos.ipxe.tpl | sed -e"s/\$release/$release/g" > smartos.ipxe

Make sure PXE boot is enabled on each blade and that it comes first in the boot sequence.


Thanks to Alain O'Dea for his notes on setting up Ubuntu Server 12.04.1 LTS as a PXE server to boot SmartOS, and big thanks to Ben Rockwood for creating and maintaining the PXE Booting SmartOS wiki page. Without their instructions I could not have done it.

Enjoy and stay tuned!


Thu, July 24, 2014, 14:21

Very fast, urgent PPTP VPN server setup on Debian

A very fast, urgent setup of a PPTP VPN server on a Debian system.


sudo apt-get update
sudo apt-get upgrade

Add to /etc/network/interfaces

auto eth0:1
iface eth0:1 inet static
post-up /etc/nat

sudo service networking restart

Add to /etc/resolv.conf
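
(The nameserver values are elided; any working resolver does, e.g. this placeholder:)

nameserver 8.8.8.8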


And add to /etc/nat

echo 1 > /proc/sys/net/ipv4/ip_forward # Enable forwarding
iptables -t nat -A POSTROUTING -s  -o eth0 -j MASQUERADE

sudo chmod +x /etc/nat

sudo apt-get install pptpd

Edit /etc/pptpd.conf

option /etc/ppp/pptpd-options 
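
(pptpd.conf also needs the tunnel addresses, elided here; placeholder values would look like this, with the remoteip range matching the -s subnet in /etc/nat:)

localip 10.10.0.1
remoteip 10.10.0.100-150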

And edit /etc/ppp/pptpd-options

name pptpd

Add accounts to /etc/ppp/chap-secrets

# client server secret IP addresses
user pptpd password "*"

sudo service pptpd restart


That's all, folks! 

But it's best to spend more time and configure OpenVPN ;-)

Enjoy and stay tuned!


Mon, July 21, 2014, 16:51

Notes about work

At work we use SmartOS and OmniOS, both new systems for me. I've learned a lot of interesting things and decided to start my own blog. Stay tuned!