free software

Linux, open source software, tips and tricks.

Misleading VNC Server error – fontPath

I am working from home today, which involves a combination of working locally on my laptop, but also remotely accessing some resources at the office. My current tool of choice for accessing the PC on my desk is VNC.

A note for Windows users — a VNC server on Windows shares the desktop session that is on your monitor. The default VNC server on Linux does not do this. Instead, it creates a brand new session that is completely independent of what’s on the monitor. If a Linux user wants the same experience that Windows users are used to, they can run “x11vnc”, a VNC server that copies the contents of whatever’s on the monitor and shares that with the remote user.

So when I access my PC remotely, I shell in to my desktop PC and I start a VNC server. I can specify the virtual screen size and depth, which is useful if I want to make it large enough to fill most of my laptop screen, but still leave a margin for stuff like window decorations. The command I use looks like this:

vncserver :1 -depth 32 -geometry 1600x1000 -dpi 96

This morning, when I started my VNC session, it responded with some errors about my fonts. Huh?

Couldn't start Xtightvnc; trying default font path.
Please set correct fontPath in the vncserver script.
Couldn't start Xtightvnc process.

21/02/12 10:40:55 Xvnc version TightVNC-1.3.9
21/02/12 10:40:55 Copyright (C) 2000-2007 TightVNC Group
21/02/12 10:40:55 Copyright (C) 1999 AT&T Laboratories Cambridge
21/02/12 10:40:55 All Rights Reserved.
21/02/12 10:40:55 See for information on TightVNC
21/02/12 10:40:55 Desktop name 'X' (chutney:2)
21/02/12 10:40:55 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t
21/02/12 10:40:55 Listening for VNC connections on TCP port 5902

Fatal server error:
Couldn't add screen
21/02/12 10:40:56 Xvnc version TightVNC-1.3.9
21/02/12 10:40:56 Copyright (C) 2000-2007 TightVNC Group
21/02/12 10:40:56 Copyright (C) 1999 AT&T Laboratories Cambridge
21/02/12 10:40:56 All Rights Reserved.
21/02/12 10:40:56 See for information on TightVNC
21/02/12 10:40:56 Desktop name 'X' (chutney:2)
21/02/12 10:40:56 Protocol versions supported: 3.3, 3.7, 3.8, 3.7t, 3.8t
21/02/12 10:40:56 Listening for VNC connections on TCP port 5902

Fatal server error:
Couldn't add screen

I scratched around for a while, looking at font settings. Then I tried some experiments. I changed the dimensions from 1600×1000 to 800×600. Suddenly it worked.

Then I noticed that the VNC startup script makes some assumptions. If the VNC server cannot start, it assumes that the problem is with the fonts, and it tries again using the default font path. In my case, the VNC server failed because I had specified a large screen (1600x1000x32) and there was already another session on the physical monitor. VNC could not start because it had run out of video memory.
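The exact accounting is the X server’s business, but some back-of-the-envelope arithmetic (my own, not from any VNC documentation) shows how quickly the framebuffer grows with geometry and depth:

```python
def framebuffer_bytes(width, height, depth_bits):
    """Minimum memory for one virtual screen's framebuffer."""
    return width * height * depth_bits // 8

big = framebuffer_bytes(1600, 1000, 32)   # the session that failed
small = framebuffer_bytes(800, 600, 32)   # the one that worked
print(big, small)   # 6400000 vs 1920000 bytes
```

Stacked on top of whatever the session on the physical monitor was already using, the bigger framebuffer was enough to exhaust the video memory.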

So my solution? I had a choice. I could reduce the size or depth of the virtual screen, or I could kill the existing session that was running on my desktop.

Don’t believe everything you read. That error about the fontPath is misleading. The rest of the error message tells what’s going on: Xvnc could not add a screen.

Chrome and LVM snapshots

This is crazy.

I was trying to make an LVM snapshot of my home directory, and I kept getting this error:

$ lvcreate -s /dev/vg1/home --name=h2 --size=20G
/dev/mapper/vg1-h2: open failed: No such file or directory
/dev/vg1/h2: not found: device not cleared
Aborting. Failed to wipe snapshot exception store.
Unable to deactivate open vg1-h2 (254:12)
Unable to deactivate failed new LV. Manual intervention required.

I did some Googling around, and I saw a post from a guy at Red Hat saying this error has to do with “namespaces”. And by the way, Google Chrome uses namespaces, too. Followed by the strangest question…

I see:

“Uevent not generated! Calling udev_complete internally to avoid process lock-up.”

in the log. This looks like a problem we’ve discovered recently. See also:

That happens when namespaces are in play. This was also found while using the Chrome web browser (which makes use of the namespaces for sandboxing).

Do you use Chrome web browser?


This makes no sense to me. I am trying to make a disk partition, and my browser is interfering with it?

Sure enough, I exited Chrome, and the snapshot worked.

$ lvcreate -s /dev/vg1/home --name=h2 --size=20G
  Logical volume "h2" created

This is just so weird, I figured I should share it here. Maybe it’ll help if someone else runs across this bug.
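For the curious: you can see whether anything on a Linux box is running in its own mount namespace by comparing the /proc/&lt;pid&gt;/ns/mnt links. This grouping logic is my own sketch, not part of any LVM or Chrome tooling:

```python
import os

def mount_namespaces(proc="/proc"):
    """Group PIDs by mount namespace (Linux-only sketch)."""
    groups = {}
    if not os.path.isdir(proc):
        return groups                      # not Linux, or /proc unmounted
    for pid in filter(str.isdigit, os.listdir(proc)):
        try:
            ns = os.readlink(os.path.join(proc, pid, "ns", "mnt"))
        except OSError:
            continue                       # process exited, or no permission
        groups.setdefault(ns, []).append(int(pid))
    return groups

# More than one key means some process (like a Chrome sandbox) lives in
# its own mount namespace -- the situation that tripped up lvcreate.
```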

Suspend & Hibernate on Dell XPS 15

In November, I got a new laptop, a Dell XPS 15 (also known as model L501X).  It’s a pretty nice laptop, with a very crisp display and some serious horsepower.

I first installed Ubuntu 10.10 on it, and almost everything worked out of the box.  But it would not suspend (save to RAM) or hibernate (save to disk) properly.  I quickly found a solution to this problem on the internet.  It had to do with two drivers that need to be unloaded before suspending/hibernating: the USB3 driver and the Intel “Centrino Advanced-N 6200” wireless a/b/g/n driver.

The solution is to create two files, shown below, that will unload the driver before suspending and reload the driver when it starts back up.

In /etc/pm/config.d/custom-L501X:

# see
SUSPEND_MODULES="xhci-hcd iwlagn"

In /etc/pm/sleep.d/20_custom-L501X:

# see
case "${1}" in
    hibernate|suspend)
        echo -n "0000:04:00.0" | tee /sys/bus/pci/drivers/iwlagn/unbind
        echo -n "0000:05:00.0" | tee /sys/bus/pci/drivers/xhci_hcd/unbind
        ;;
    resume|thaw)
        echo -n "0000:04:00.0" | tee /sys/bus/pci/drivers/iwlagn/bind
        echo -n "0000:05:00.0" | tee /sys/bus/pci/drivers/xhci_hcd/bind
        ;;
esac

A little while later, I “upgraded” to Ubuntu 11.04, but then suspend/hibernate stopped working. Argh!! I tried a lot of work-arounds, but nothing worked.

What’s particularly frustrating about this sort of problem is that you end up finding a lot of posts on forums where some guy posts a very terse statement like “I put ‘foo’ in my /etc/bar and it worked”. So you end up stabbing a lot of configuration settings into a lot of files without actually understanding how any of it works.

From what I can piece together, it looks like the Linux suspend/hibernate process has been handled by several different subsystems over time. These days, it is handled by the “pm-utils” package and the kernel. pm-utils figures out what to do and where to save the disk image (your swap partition), and then it tells the kernel to do the work. There are a lot of other packages that are not used any more: uswsusp, s2disk/s2ram, and others (please correct me if I am wrong).

I never really liked what I saw in Ubuntu 11.04, and so I tried a completely new distro, Linux Mint Debian Edition. This distro combines the freshness of the Debian “testing” software repository with the nice desktop integration of Linux Mint. Admittedly, it may be a little too bleeding edge for some, it’s certainly not for everybody. But so far, I like it.

When I first installed LMDE, suspend and hibernate both worked. But then I updated to a newer kernel and it stopped working. So, just like with Ubuntu, something in the more recent kernel changed and that broke suspend/hibernate.

I tried a few experiments. If I left the two custom config files alone, the system would hang while trying to save to RAM or to disk. If I totally removed my custom config files, I could suspend (to RAM), and I could hibernate (to disk), but I could not wake up from hibernating (called “thawing”). I determined through lots of experiments that the USB3 driver no longer needed to be unloaded, but the wireless driver does need to be unloaded.

So with the newer 2.6.39 kernel, the config files change to this, with the xhci_hcd lines removed.

In /etc/pm/config.d/custom-L501X:

SUSPEND_MODULES="iwlagn"
In /etc/pm/sleep.d/20_custom-L501X:

case "${1}" in
    hibernate|suspend)
        echo -n "0000:04:00.0" | tee /sys/bus/pci/drivers/iwlagn/unbind
        ;;
    resume|thaw)
        echo -n "0000:04:00.0" | tee /sys/bus/pci/drivers/iwlagn/bind
        ;;
esac

At the risk of just saying “I put ‘foo’ in /etc/bar and it worked”, that’s what I did… and it did.

If you have insights on how the overall suspend/hibernate process works, please share. There seems to be a lot of discussion online about particular configuration files and options, but I could not find much info about the overall architecture and the roadmap.

South East Linux Fest

I enjoyed a “Geekin’ Weekend” at South East Linux Fest in Spartanburg SC.

Four of us from TriLUG (Kevin Otte, Jeff Shornick, Bill Farrow and myself) packed into the minivan and made a road trip down to South Carolina on Friday.  We got there in time to see some of the exhibits and a few of the Friday afternoon sessions.  There was some light mingling in the evening, and then we all wrapped it up to prepare for the big day ahead.

On Saturday, we had breakfast with Jon “Maddog” Hall before he gave the keynote on how using open source can help create jobs.  The day was filled with educational sessions.  I attended ones on SELinux, Remote Access and Policy, FreeNAS, Arduino hacking, and Open Source in the greater-than-software world.  We wrapped it up with a talk on the many ways that projects can FAIL (from a distro package maintainer’s view).  But the night was not over… we partied hard in the hotel lounge, rockin’ to the beats of nerdcore rapper “Dual Core”, and then trying to spend the complimentary drink tickets faster than sponsor Rackspace could purchase them.

Sunday was much slower paced, as many had left for home and many others were sleeping off the funk of the previous night’s party.  But if you knew where to be, there were door prizes to be scored.  I ended up with a book on Embedded Linux.

It was a memorable weekend, for sure.  We learned a lot of new tech tricks, and we enjoyed hanging out with the geeks.

Encrypting your entire hard disk (almost)

I have a small netbook that I use when I travel, one of the original Asus EeePC’s, the 900.  It has a 9″ screen and a 16GB flash drive.  It runs Linux, and it’s just about right for accessing email, some light surfing, and doing small tasks like writing blog posts and messing with my checkbook.  And since it runs Linux, I can do a lot of nice network stuff with it, like SSH tunneling, VPN’s, and I can even make it act like a wireless access point.

However, the idea of leaving my little PC in a hotel room while I am out having fun leaves me a little uneasy.  I am not concerned with the hardware… it’s not worth much.  But I am concerned about my files, and the temporary files like browser cookies and cache.  I’d hate for someone to walk away with my EeePC and also gain access to countless other things with it.

So this week, I decided to encrypt the main flash drive.  Before, the entire flash device was allocated as one device:

partition 1  –  16GB  –  the whole enchilada

Here’s how I made my conversion.

(0) What you will need:

  • a 1GB or larger USB stick (to boot off of)
  • an SD card or USB drive big enough to back up your root partition

(1) Boot the system using a “live USB stick” (you can create one in Ubuntu by going to “System / Administration / Startup Disk Creator”).  Open up a terminal and do “sudo -i” to become root.

ubuntu@ubuntu:~$ sudo -i
root@ubuntu:~$ cd /

(2) Install some tools that you’ll need… they will be installed in the Live USB session in RAM, not on your computer.  We’ll install them on your computer later.

root@ubuntu:/$ apt-get install cryptsetup

(3) Insert an SD card and format it. I formatted the entire card.  Sometimes, you might want to make partitions on it and format one partition.

root@ubuntu:/$ mkfs.ext4 /dev/sdb
root@ubuntu:/$ mkdir /mnt/sd
root@ubuntu:/$ mount /dev/sdb /mnt/sd

(4) Back up the main disk onto the SD card. The “numeric-owner” option causes the actual owner and group numbers to be stored in the tar file, rather than trying to match the owner/group names to the names from /etc/passwd and /etc/group (remember, we booted from a live USB stick).

root@ubuntu:/$ tar --one-file-system --numeric-owner -zcf /mnt/sd/all.tar.gz .

(5) Re-partition the main disk. I chose 128MB for /boot.  The rest of the disk will be encrypted.  The new layout looks like this:

partition 1  –  128MB  –  /boot, must remain unencrypted
partition 2  –  15.8GB  –  everything else, encrypted

root@ubuntu:/$ fdisk -l

Disk /dev/sda: 16.1 GB, 16139354112 bytes
255 heads, 63 sectors/track, 1962 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0002d507

Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        1          17      136521   83  Linux
/dev/sda2           18        1962    15623212+  83  Linux

(6) Make new filesystems on the newly-partitioned disk.

root@ubuntu:/$ mkfs.ext4 /dev/sda1
root@ubuntu:/$ mkfs.ext4 /dev/sda2

(7) Restore /boot to sda1. It will be restored into a “boot” subdirectory, because that’s the way it was on the original disk.  But since this is a stand-alone /boot partition, we need to move the files to that filesystem’s root.

root@ubuntu:/$ mkdir /mnt/sda1
root@ubuntu:/$ mount /dev/sda1 /mnt/sda1
root@ubuntu:/$ cd /mnt/sda1
root@ubuntu:/mnt/sda1$ tar --numeric-owner -zxf /mnt/sd/all.tar.gz ./boot
root@ubuntu:/mnt/sda1$ mv boot/* .
root@ubuntu:/mnt/sda1$ rmdir boot
root@ubuntu:/mnt/sda1$ cd /
root@ubuntu:/$ umount /mnt/sda1

(8) Make an encrypted filesystem on sda2. We will need a name for the unlocked volume, so I will call it “cryptoroot”.  You can choose anything here.

root@ubuntu:/$ cryptsetup luksFormat /dev/sda2

This will overwrite data on /dev/sda2 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter LUKS passphrase: ********
Verify passphrase: ********
root@ubuntu:/$ cryptsetup luksOpen /dev/sda2 cryptoroot
root@ubuntu:/$ mkfs.ext4 /dev/mapper/cryptoroot

(9) Restore the rest of the saved files to the encrypted filesystem that lives on sda2.  We can remove the extra files in /boot, since that will become the mount point for sda1.  We need to leave the empty /boot directory in place, though.

root@ubuntu:/$ mkdir /mnt/sda2
root@ubuntu:/$ mount /dev/mapper/cryptoroot /mnt/sda2
root@ubuntu:/$ cd /mnt/sda2
root@ubuntu:/mnt/sda2$ tar --numeric-owner -zxf /mnt/sd/all.tar.gz
root@ubuntu:/mnt/sda2$ rm -rf boot/*
root@ubuntu:/mnt/sda2$ cd /

(10) Determine the UUID’s of the sda2 device and the encrypted filesystem that sits on top of sda2.

root@ubuntu:/$ blkid
/dev/sda1: UUID="285c9798-1067-4f7f-bab0-4743b68d9f04" TYPE="ext4"
/dev/sda2: UUID="ddd60502-87f0-43c5-aa28-c911c35f9278" TYPE="crypto_LUKS"   << [UUID-LUKS]
/dev/mapper/cryptoroot: UUID="a613df67-3179-441c-8ce5-a286c16aa053" TYPE="ext4"   << [UUID-ROOT]
/dev/sdb: UUID="41745452-3f89-44f9-b547-aca5a5306162" TYPE="ext3"

Notice that you’ll also see sda1 (/boot) and sdb (the SD card), as well as some others, like the USB stick.  Below, I will refer to the actual UUID’s that we read here as [UUID-LUKS] and [UUID-ROOT].

(11) Do a “chroot” inside the target system. A chroot basically uses the kernel from the Live USB stick, but the filesystem from the main disk.  Notice that when you do this, the prompt changes to what you usually see when you boot that system.

root@ubuntu:/$ mount /dev/sda1       /mnt/sda2/boot
root@ubuntu:/$ mount --bind /proc    /mnt/sda2/proc
root@ubuntu:/$ mount --bind /dev     /mnt/sda2/dev
root@ubuntu:/$ mount --bind /dev/pts /mnt/sda2/dev/pts
root@ubuntu:/$ mount --bind /sys     /mnt/sda2/sys
root@ubuntu:/$ chroot /mnt/sda2

(12) Install cryptsetup on the target.

root@enigma:/$ apt-get install cryptsetup

(13) Change some of the config files on the encrypted drive’s /etc so it will know where to find the new root filesystem.

root@enigma:/$ cat /etc/crypttab
cryptoroot  UUID=[UUID-LUKS]  none  luks
root@enigma:/$ cat /etc/fstab
proc  /proc  proc  nodev,noexec,nosuid  0  0
# / was on /dev/sda1 during installation
# UUID=[OLD-UUID-OF-SDA1]  /  ext4  errors=remount-ro  0  1
UUID=[UUID-ROOT]  /  ext4  errors=remount-ro  0  1
/dev/sda1  /boot  ext4  defaults  0  0
# RAM disks
tmpfs   /tmp       tmpfs   defaults   0  0
tmpfs   /var/tmp   tmpfs   defaults   0  0
tmpfs   /var/log   tmpfs   defaults   0  0
tmpfs   /dev/shm   tmpfs   defaults   0  0

(14) Rebuild the GRUB bootloader, since the files have moved from sda1:/boot to sda1:/ .

root@enigma:/$ update-grub
root@enigma:/$ grub-install /dev/sda

(15) Update the initial RAM disk so it will know to prompt for the LUKS passphrase so it can mount the new encrypted root filesystem.

root@enigma:/$ update-initramfs -u -v

(16) Reboot.

root@enigma:/$ exit
root@ubuntu:/$ umount /mnt/sda2/sys
root@ubuntu:/$ umount /mnt/sda2/dev/pts
root@ubuntu:/$ umount /mnt/sda2/dev
root@ubuntu:/$ umount /mnt/sda2/proc
root@ubuntu:/$ umount /mnt/sda2/boot
root@ubuntu:/$ umount /mnt/sda2
root@ubuntu:/$ reboot

When it has shut down the Live USB system, you can remove the USB stick and let it boot the system normally.  If all went well, you will be prompted for the LUKS passphrase a few seconds into the bootup process.

A Tech Trifecta

Dropbox + KeePass + VeraCrypt

I would like to introduce three software packages that are profoundly useful in their own rights, but then go a step further and show how these three tools can be used together to revolutionize how you keep track of your secrets & finances in the digital age.


The first tool is Dropbox, a cloud service where you can store a folder of your own personal files. They offer a free service where you can keep 2GB of files, and you can pay for more storage.

You can access the files from anywhere using a web interface. Or better, you can install the Dropbox program, and it will automatically keep a local copy of all of your stuff up to date in a folder on your computer (Windows, Mac or Linux). If you’re offline for a bit, no problem, because you still have your local copy of the files.

The real magic happens when you connect multiple computers to the same Dropbox account — say, your home PC and your office PC. Dropbox keeps the files in sync for you.

There is even a Dropbox app for your iPhone, so you can access your files on the go.


If I asked you how many passwords you have, you might think that you have a dozen or so. One day, I decided to make a list of every password I had (including multiple accounts where I happened to use the same password). I was shocked to find that I had more than 100.

PRO TIP – Most people use the same five passwords over and over… they don’t bother remembering which ones they used for a particular web site… they just try all five of their “not so important” passwords. That means that if I want to steal something from you, I can just set up a service and invite you to log in. I make your next login attempt fail, so you’ll get confused and enter all five of your “not so important” passwords, and possibly your “good” ones, too. Then I can go to eBay and PayPal and Amazon with your username and all of your passwords, and I have a pretty good chance of logging into your account.

For this reason, you should use a different password at EVERY web site.

KeePass is an open source program that stores your passwords. The program stores all of that information in a single data file that you lock using a single “master password”. This should be the only password you ever remember. The rest of them should be long strings of gibberish characters.

KeePass has a nice “generate” feature which will come up with really good passwords like “an@aiph5Ph”.
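There is no magic in that feature; a password generator boils down to drawing characters from a strong random source. Here is a toy sketch in the same spirit (my own, not KeePass’s actual algorithm; the symbol set is an arbitrary choice):

```python
import secrets
import string

# An arbitrary alphabet: letters, digits, and a few symbols.
ALPHABET = string.ascii_letters + string.digits + "~!@#$%^&*"

def generate(length=10):
    """Return a random password, one strong draw per character."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate())   # something like "an@aiph5Ph"
```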

KeePass is available for Windows, Mac and Linux (KeePassX), and there are a couple of iPhone variants (I recommend “MyKeePass”).

KeePass stores four main fields: the URL, user name, password, and notes. That means that I can store an entry like this for my bank:

name = alan_porter
password = uG7pi~ji9a
notes = security question, pet’s name = teabag

These days, it’s really easy to get confused about the main web address that you’re supposed to use to log in to your bank. They make things confusing by using fancy URL’s like “” instead of “”. If you have not had your coffee one morning and you type “” by mistake (“Me” instead of “You”), you’ll find yourself logging into a phishing site, which is trying to steal your password.

With KeePass, there is no room for this sort of error. You double click on the URL and a browser opens on that site. You double click on the user name and it copies the user name to the clipboard. Paste it in the browser. Similarly, double click on the password and paste it in the browser. If your bank uses “security questions”, you can store your answers in the notes section.

PRO TIP – Don’t use truthful answers for “security” questions… treat these as extended passwords, just like you’d see in the old spy movies (he says “the sky looks like rain”, and you answer “that helps the bananas grow”). The answers do not even need to make sense… they just need to be remembered… in your head (bad) or in your KeePass file (good).

And no, I have never had a pet named “teabag”.


The last tool I would like to talk about is VeraCrypt*, a free package that allows you to create an encrypted volume on a file or a disk.

NOTE: the original version of this blog post recommended TrueCrypt, which has been discontinued — VeraCrypt is a drop-in replacement with several improvements, most notably it is open source and it has undergone very close scrutiny and security audits.

You can create a VeraCrypt file with a name like “” that you store on your desktop. You give it a size and a password. Then when you open that file, you’ll have to enter the password, and then it will appear as a Windows drive letter. On a Mac or Linux, it will show up as folder in /Volumes or in /media. When you close it, the drive letter is gone, and your secrets are stored, encrypted, in the “” file.

You can store this “” file anywhere you want… on your desktop, on a USB flash drive, or even on your Dropbox account.

You might also want to use VeraCrypt to encrypt an entire device, like a USB flash drive or SD card — nice.


Using these three tools together, suddenly we have a very secure way to do our online banking.

Regardless of what computer you’re using — Mac, Windows, Linux, desktop, laptop, at home, at work, whatever — you now have access to your passwords, and all of the other files that you need: Quicken or GnuCash, the PDF files of your bills, personal files, etc.


A typical online bill-paying session might look like this:

  • Look in your Dropbox for your KeePass password file, say D:\mypasswords.kdb. Open it with KeePass.
  • Look in your Dropbox for your VeraCrypt volume, say D:\ Open it with VeraCrypt. It will be mounted as drive V.
  • Use your normal accounting package (Quicken or Gnucash) to open your checkbook, say V:\checkbook.qdf or V:\checkbook.gc.
  • Find your bank password in KeePass. Click on the safe URL, copy the username and password from KeePass to the browser.
  • When you need to save a statement or a bill, save into the VeraCrypt volume on the V drive, like in V:\statements.


Say you’re out and about, and you want to use your iPhone to log into a web site that you don’t know the password for (of course you don’t… because you use GOOD and UNIQUE passwords like “taeYi#c6We”, right?).

  • First, you use the iPhone Dropbox app to locate your KeePass file. Click on the filename, and the app will say “I don’t know what kind of file this is”. But it will show you a little “chain link” icon in the bottom. Click this icon to get a web URL for that one file. The URL will be copied to your clipboard.
  • Next, run MyKeePass and click “+” to add a new KeePass file. Select “download from WWW” and paste the URL that you just got from Dropbox.

Now, you can open your KeePass file from anywhere! You’ll have to enter the file’s master password that you set up earlier. After that, you’ll be able to browse the information in the KeePass file.

Note that you can only READ the file using the “download from WWW” method. If you need to save a new password or some other information, you’ll need to write it down on a piece of paper until you are back at your PC (no biggie — that’s pretty rare).


Some readers might note that we just shared our KeePass password file with the entire world, when we told Dropbox to give us a URL. Someone else could use that same (random-ish) URL to download our KeePass file. Since this file is encrypted using the master password, it will look like gibberish to anyone but you. For this reason, we should make sure that we use a VERY STRONG password for the KeePass file… not your dog’s name (remember, mine is “teabag”).

Using SSH to tunnel web traffic

Every once in a while, I learn something new and magical that SSH can do.  For a while now, I have been using its “dynamic port forwarding” capability, and I wanted to share that here.

Dynamic port forwarding is a way to create a SOCKS proxy that listens on your local machine, and forwards all traffic through the SSH tunnel to the remote side, where it will be passed along to the remote network.

What this means is that you can tell your web browser to send some – or all – of your web requests through the tunnel, and the SSH daemon on the other side will send those requests out to the internet for you, and then it will send the responses back through the tunnel to your browser.

There are several practical uses for SSH tunneling.  Here are a few real-world examples of where I have used tunnels:

  • You may be behind a firewall that blocks web sites that you want to visit. I was at a company that had a web proxy that blocked some “hacking” web sites, when I really needed to look up some information on computer vulnerabilities, to make sure that their systems were properly patched.  Using the SSH tunnel bypassed that company’s web proxy.
  • You may not trust a hotel’s internet service. In fact, you SHOULD NOT trust a hotel’s internet service.  They keep records of sites that you visit, ostensibly to keep a customer profile of your interests.  Some also do deep packet inspection to observe words in the web pages and emails that you read.  Intrusive marketing!
  • You may want to work from home, and access work lab resources from home. If you can SSH into one machine at work, then you can access the web resources inside that network.
  • Working on a cluster of machines, where only one of them is directly accessible from your desk. It stinks to have to do your work in a noisy lab.  Why not shell into the gateway machine and tunnel your web traffic through that SSH connection?
  • You may want to access a web interface to something at home, like a printer… or an oven. You should open the SSH service to your router, and then you can SSH in and easily access web interfaces to everything in your home.
  • You may want a web interface on some service on an internet-facing machine, but you don’t want to expose it to everyone. Maybe you want phpMyAdmin on a server, but you don’t want everyone to see it. So run the web service on the local ( network only, and use an SSH tunnel to get to it from wherever you are.
  • Your ISP throttles web traffic from a particular site. In my case, it was an overseas movie streaming site, and my ISP would limit my traffic to 300kbps for 30 seconds, followed by nothing for 30 seconds (they do this to discourage you from watching movies over the internet).  Sending the traffic through an SSH tunnel means the ISP only sees an encrypted stream of something on port 22 — they have no idea that the content is actually a movie streamed from China on port 80.
  • You may live in India, but want to watch a show on Hulu (which only serves the US market).  By tunneling through a US-based server, it will appear to Hulu that you’re in the US.
  • You may be visiting China, and you want to bypass the Great Firewall of China. Believe it or not, my humble domain was not accessible directly from Shanghai.
  • You may get a “better” route by going through a proxy. This is speculation on my part… but I did this once when I was in a free-for-all online ticket purchase.  I wondered if I might get a faster route from my remote data center than I would get directly from my ISP.

So, how do you create a tunnel?  It’s super easy.

Create an SSH session with a tunnel.

On Linux or a Mac, open a shell and type “ssh -D 10000”.

On Windows*, you can run Putty (an SSH client).  In the “connection / SSH / tunnels” menu, click on “dynamic”, then enter source port = 10000, then click “add”.  You’ll see a “D10000” in the box that lists your forwarded ports.  (* Putty is also available for Linux.)

That’s it… you’re done.

Tell the browser to use the tunnel.

The easiest way to get started with SOCKS proxies is to set it in the “preferences / advanced / network / connection settings” menu in Firefox (or the “preferences / under the hood / change proxy settings” menu in Google Chrome).  In both cases, you want to set “manual proxy configuration” and SOCKS host = “localhost” and SOCKS port = 10000.  If there is a selection for SOCKS v5, then check it.

Now, all web requests will go to a proxy that is running on your local machine on port 10000.  If you do a “netstat -plnt | grep 10000”, you’ll see that your SSH or Putty session is listening on port 10000.  SSH then forwards the web request to the remote machine, where SSHD will send it on out to the internet.  Web responses will come back along that same path.
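Under the hood, the browser is speaking the SOCKS5 protocol (RFC 1928) to that local port. Just to make the handshake concrete, here is a sketch of the first bytes a client sends: a greeting offering “no authentication”, then a CONNECT request naming a host and port. This is illustration only, not something you need in order to use the tunnel:

```python
import struct

def socks5_connect_request(host, port):
    """Build the SOCKS5 greeting plus CONNECT request (RFC 1928)."""
    greeting = b"\x05\x01\x00"          # version 5, 1 method: "no auth"
    addr = host.encode("idna")
    request = (b"\x05\x01\x00\x03"      # version, CONNECT, reserved, domain
               + bytes([len(addr)]) + addr
               + struct.pack(">H", port))
    return greeting + request

# The proxy answers each part, then relays bytes both ways over SSH.
```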

Anyone on your local network will only be able to see one connection coming from your machine: an encrypted SSH session to the remote.  They will not see the web requests or responses.  This makes it a great tool for using on public networks, like coffee shops or libraries.

Test it

Point your browser to a site that tells you your IP address, such as  It should report the IP address of the remote server that you are connecting through, and not your local IP address.

Advanced – quickly selecting proxies using FoxyProxy or Proxy Switchy!

It can be a hassle to go into the preferences menu to turn the proxy on and off.  So there are browser plug-ins that allow you to quickly switch between a direct connection and using a SOCKS proxy.

For Firefox, there is the excellent “FoxyProxy”.  For Chrome, you’ll want “Proxy Switchy!”.  Both plug-ins allow you to select a proxy using a button on the main browser window.

But there’s more… they also allow you build a list of rules to automatically switch based on the URL that you are browsing.  For example, at work, I set up a rule to route all web requests for the lab network through my SSH session that goes to the lab gateway machine.

It works like this.  First, I set up a tunnel to the lab gateway machine:

  • ssh -D 10001 alan@ (lab gateway)

Then I set up the following FoxyProxy rules:

  • 192.168.100.* – SOCKS localhost:10001 (SSH to lab)
  • anything else – direct

Now, I can get to the lab machines using the lab proxy, and any other site that I access uses the normal corporate network.

Closing thoughts

If you like Putty or FoxyProxy or Proxy Switchy!, please consider donating to the projects to support and encourage the developers.

How to resize a VirtualBox disk

I primarily run Linux, but occasionally need to run something in Windows.  So I use the open source VirtualBox program to run Windows XP in a box (where it belongs).

Most of the time, I create a new VM for a specific task.  For example, around tax time, I might create a VM that has nothing in it except Windows XP and Turbo Tax or Tax Cut — or better yet — Tax Act.  To make this a little easier, I create a VM that has a fresh copy of Windows XP on a 10GB hard disk, and then I made a compressed copy of the disk image file.  It compresses nicely down to 700MB, just about right to fit on a CD.  So step one of doing my taxes is to copy this disk image file, uncompress it, and create a new VM that uses that disk.

That is, I treat the Windows OS as disposable — create a VM, use it, save my files somewhere else and then throw the VM away.

Every once in a while, though, I need more than the 10GB that I originally allocated to the VM.  For example, iTunes usually needs a lot more storage than that.  So I need to resize the Windows C: drive.  It turns out that this is very easy to do with the open source tool “gparted“.  This comes included in Ubuntu Live CD’s, so I just boot the VM into an Ubuntu Live CD session and prepare the new, larger disk.

Here’s the step-by-step, gleefully stolen from the VirtualBox support forums.

  • Make sure you have an Ubuntu Live CD, or a “System Rescue CD“.
  • Create a new hard disk image (*.vdi file) using Virtual Disk Manager.  In VirtualBox, go to File / Virtual Disk Manager.
  • Set your current VM to use the new disk image as its second hard disk and the Ubuntu Live CD (or System Rescue CD) ISO file as its CDROM device.
  • Boot the VM from the CDROM.
  • If you’re using the System Rescue CD, start a graphical “X-windows” session by typing startx at the command prompt.
  • When your X-windows starts up, open up a terminal and type gparted.
  • You’ll need to create a partition table on the new disk.  So in gparted, select the new disk and then Device / Create Partition Table.
  • Then select the Windows partition and choose copy.
  • Select the second hard disk, right click on the representation of the disk and click paste.
  • Gparted will prompt you for the size of the new partition; drag the slider to the maximum size.
  • Click apply and wait…
  • Important – when it is done, right click on the disk and choose Manage Flags, and select Boot.
  • Exit gparted and power off the VM.
  • Change the VM settings to only have one disk (the new bigger disk) and un-select the ISO as the CDROM.
  • Boot the VM into your Windows install on its new bigger disk!  The first time it boots up, Windows may do a disk check and reboot.

Once you’re happy with the new larger disk, you might want to delete the old, smaller one.
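The disk creation and the final cleanup can also be done from the command line with VBoxManage, which ships with VirtualBox. A sketch, with made-up file names:

```shell
# Shown commented, since these need a VirtualBox installation:
#
#   VBoxManage createhd --filename bigger.vdi --size 40960    # size is in MB
#   ... attach bigger.vdi as the second disk, do the gparted copy ...
#   VBoxManage closemedium disk old10gb.vdi --delete          # remove the old image
#
# createhd takes megabytes, so convert from GB first:
gb=40
size_mb=$(( gb * 1024 ))
echo "$size_mb"
```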

This method should work the same, regardless of whether the host OS is Linux, Windows or Mac OS.

Keep that Ubuntu Live CD around — it really comes in handy!

Disabling Linux user accounts – passwd and chage

On internet-facing machines, I like to disable the root password so that you can only log in as root using an SSH key.  This is done by setting the seemingly-scary option PermitRootLogin without-password in /etc/ssh/sshd_config .  This option means that you can ONLY use a key to log in as root… a password will never be accepted, and so you can not guess it by trial-and-error.

While I am at it, I go ahead and disable the root password completely.  That’ll be one less thing for me to remember, and one less thing to keep secret.

First, be sure that you have at least one “sudoer” user, or at least one SSH key in ~root/.ssh/authorized_keys . Otherwise, you’ll realize in an “ignisecond” that you’ve just locked yourself out.

Then, use the passwd command to “lock” the account.

$ passwd -l root

This command will put a ! character in the password field of the /etc/shadow file so that no password hash will ever match the string in the shadow file.
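You can see what the lock looks like directly. A sketch against a sample /etc/shadow-style entry (the hash here is made up):

```shell
# Second field of a shadow entry is the password hash; a leading '!'
# means the account's password is locked:
entry='root:!$6$salt$fakehash:15000:0:99999:7:::'
passfield=$(printf '%s' "$entry" | cut -d: -f2)
case "$passfield" in
  '!'*) state=locked ;;
  *)    state=unlocked ;;
esac
echo "$state"
```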

But on SOME SYSTEMS, it also does a second thing — it may change the account expiration date to January 2, 1970. This will prevent SSH access, even with a key.

While researching this article, I had this “world-shifting-under-me” feeling, as I distinctly remember having to work around this issue. However, on every system I tried, I could not reproduce the expiration-date-changing behavior. Then I found that it was an issue that flip-flopped in the Debian community between 2006 and 2008. In 2006, they made the passwd -l command lock the password and also expire the user account. But in 2008, they decided that the change affected too many people, where most had grown accustomed to the passwd command affecting the password only, and not touching the account expiration date.

If you want to change the account’s expiration date, you should use the chage command.

$ chage -E -1 root  # never expire
$ chage -E 1 root  # expire on Jan 2, 1970
$ chage -E 0 root  # not recommended, undefined behavior
$ chage -E 2010-12-25 root  # expire on a specific date

So here I was, all set to share a nugget of wisdom with the world, and instead, it ends up being a trip down memory lane. However, in the process, I learned a couple of things.

A very safe way to disable a user account’s password, while keeping the account open to SSH access is like this: passwd -l user && chage -E -1 user . The chage part is unnecessary on modern systems, but it does not hurt anything.

A quick way to check on whether a password is locked, or a user account is expired, is to use passwd -S user, like this.

$ passwd -S user
user P 09/11/2007 0 99999 7 -1

This says that user‘s password is set (P), it was changed way back in 2007, there is no minimum age (restriction on how often they can change their password), there’s a very long maximum age (time when they are forced to change their password), the warning period is one week, and no inactivity period is set for the account.
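Since the output is space-separated, the fields are easy to pull apart in a script. A sketch using the sample passwd -S line from above:

```shell
# Split the sample passwd -S output into its seven fields:
line='user P 09/11/2007 0 99999 7 -1'
set -- $line
user=$1 status=$2 last_change=$3 min_age=$4 max_age=$5 warn=$6 inactive=$7
echo "status=$status last_change=$last_change"
```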

So there you go, two for one, a Linux tip and a history lesson!


When it comes to searching, there seem to be two battling camps: those who prefer to index stuff in the middle of the night, and those who just want to search when they need to search. The problem is that, many times, “in the middle of the night” does not end up being “when you’re not using the computer”. The other problem is that this sort of indexing operation can often completely cripple a machine, by using a lot of RAM and completely slamming disk I/O.

As far back as Windows 3.1, with its FindFast disk indexing tool, I have been annoyed by indexing processes that wake up and chew your hard disk to shreds… just in case you might want to search for something later.

What a stupid idea.

The latest culprit in Ubuntu is apt-xapian-index, which digs through your package lists, assembling a treasure trove of information that was apparently already on the disk, if you ever needed to ask for it.


The quick fix:

sudo apt-get remove apt-xapian-index

A better long-term solution:

If you have information that you would like to be indexed for faster retrieval later, do the indexing upon insertion, not periodically. That is, when you apt-get install a package, set a trigger to update the relevant bits of your package index at that time.
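For apt specifically, that idea could be sketched as a hook that re-runs the indexer right after the package state changes, instead of on a timer. The file name below is hypothetical, and the indexer path may vary by release:

```
# /etc/apt/apt.conf.d/99-reindex (hypothetical file name)
# Re-run the package indexer right after dpkg finishes, instead of nightly:
DPkg::Post-Invoke { "/usr/sbin/update-apt-xapian-index --update 2>/dev/null || true"; };
```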

Installing Ubuntu on LVM

I am a big fan of LVM, the Linux Logical Volume Manager. It allows me to create, resize and delete disk partitions very easily. That comes in handy when you’re trying new things out. And it also has a nice snapshot feature, which I use when I am making a backup of a partition… while it is still in use.
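The snapshot backup I mention goes roughly like this. A sketch, with example volume and path names (the real commands need root, so they are shown commented):

```shell
# Snapshot, archive, then drop the snapshot:
#
#   lvcreate --snapshot --size 1G --name ubuntu-snap /dev/vglaptop/ubuntu
#   mount -o ro /dev/vglaptop/ubuntu-snap /mnt/snap
#   tar -C /mnt/snap -czf /backup/ubuntu.tar.gz .
#   umount /mnt/snap && lvremove -f /dev/vglaptop/ubuntu-snap
#
# LVM exposes each logical volume under /dev/<vg>/<lv>:
vg=vglaptop
lv=ubuntu
devpath="/dev/${vg}/${lv}"
echo "$devpath"
```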

If you use the Ubuntu alternate install CD, it gives you menu options for using and configuring the Logical Volume Manager. But if you’re using the normal Live installation CD, you have to do it manually. Fortunately, this is not too hard. Below, I will describe how I do it.

  • Boot the Live CD and open a shell.
  • Install the LVM package in the Live CD environment:
    apt-get install lvm2
  • Tell LVM to look for existing volumes, and make them active:
    vgchange -ay
  • You may need to mark one or more physical volumes as being LVM pv’s:
    pvcreate /dev/sdX1
  • You may need to create a volume group:
    vgcreate vglaptop /dev/sdX1
  • Create the volumes that you want to use:
    lvcreate vglaptop --name=ubuntu --size=10G
  • Put a filesystem on the new volume:
    mkfs.ext4 /dev/vglaptop/ubuntu
  • Run the installer and do “manual partitioning”.
  • Use the menus to match up the newly formatted filesystems with the different mount points (root, home, etc.).
  • When the installer finishes, tell it to “keep on testing”. Do not reboot.
  • Mount the new root filesystem:
    cd /mnt ; mkdir u ; mount /dev/vglaptop/ubuntu u
  • Mount any other partitions below that:
    mount /dev/sda1 u/boot ; mount /dev/vglaptop/home u/home

    (actually, you only need the root filesystem and /boot — you won’t need /home and others)

  • Mount the “special” filesystems:
    mount --bind /dev u/dev ; mount -t proc proc u/proc
  • Start up a “change root”:
    chroot u

    A new shell will start, using the new root filesystem as its root.

  • Install the LVM package inside the chroot:
    apt-get install lvm2
  • Re-build the initial RAM disk:
    update-initramfs -u -k all

    (this step may be done for you when you install LVM)

  • You may need to update grub:
    update-grub
  • Exit the chroot shell:
    exit
  • Cleanly unmount everything:
    umount u/dev ; umount u/proc ; umount u/home ; umount u
  • Reboot:
    reboot

And when it boots up, it will load the kernel and initramfs, and then it will mount the root filesystem, which is on your new logical volume!

Photo utility – renrot

When I take pictures with my digital cameras, they name the image files something like this:

  • Wife’s Nikon camera – dscn0115.jpg
  • My iPhone – img_0367.jpg
  • My Panasonic camera – p1070126.jpg

I tend to let the images pile up on the cameras for a while, and then I copy them all onto our file server. I start off by dumping them all into one big folder. But then I sort them into folders based on the date and the event, with names like “pictures/y2010/2010-02-14_chinese_new_year” for events and “pictures/y2010/2010-02” for the random shots. I don’t mind this process, and it’s actually kind of fun to review them as I am copying them to our file server.

I should note as an aside, when I worked for Ericsson’s research lab in Singapore, I was talking to one of the researchers who had studied the many ways that people could organize and categorize photographs, both paper and digital. It turns out that a huge majority of people they studied tended to associate photo sets together, based on relative dates. That is, if you asked for a particular photo, they would think “that was about the same time as Bob’s birthday party, so it must have been in August”.

I know that there are newer ways to organize photos, with databases and “tags” and what-not. Mac people really love to let the computer take care of those details. But I still like the idea of using folders with dates… old school.

Since I am dealing with dates and times of photos, it seems a little silly that all of my photos are named using dumb serial numbers. I find myself looking at the image’s EXIF properties, the information that the camera stores about the image — when and where the image was taken, what the camera settings were, etc. This seems a little tedious.

I recently found a utility called “renrot”. Its primary job is to read the EXIF data from a photo, and rotate the image to match the EXIF rotation flag. That is, it rewrites the image so that it will load from top-to-bottom, which makes it more compatible with less-than-intelligent viewers — like some digital picture frames. But while it’s doing that, it also renames the file based on the time and date of when the photo was taken.

So now I can start by going into the directory that contains my big pile of photos and doing this:

renrot --name-template %Y%m%d-%H%M%S --extension jpg *.JPG
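The template tokens are strftime-style, so GNU date can preview what a given capture time turns into (assuming GNU coreutils):

```shell
# Preview the file name that renrot's template would produce for a
# photo taken on 2010-02-14 at 13:05:22:
stamp=$(date -d '2010-02-14 13:05:22' +%Y%m%d-%H%M%S)
echo "$stamp"
```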

If I wanted to be careful not to mix the iPhone pictures with the Nikon pictures, I could do this:

renrot --name-template iphone-%Y%m%d-%H%M%S --extension jpg IMG_*.JPG
renrot --name-template nikon-%Y%m%d-%H%M%S --extension jpg DSCN*.JPG
renrot --name-template pan-%Y%m%d-%H%M%S --extension jpg P*.JPG

If I want to be a purist, and just rename the photos without actually rotating the image, I can do that, too.

renrot --no-rotate --name-template %Y%m%d-%H%M%S --extension jpg *.JPG

Pretty cool.

Go to Top