Tuesday, June 22, 2010

Whole-disk encryption on Ubuntu

I've had my wife's laptop running whole-disk encryption with TrueCrypt for a couple of years now, and I always wanted to get that level of security on my Ubuntu machine.  It really makes a lot of sense for the laptops to have as much privacy protection as possible, since we travel with them and therefore they are at higher risk of being stolen.  This week I finally got the opportunity to try it out.

The Plan

I knew I was going to take a performance hit, so I got myself an 8 gig SD card.  The root of my Ubuntu installation would go on it, with an unencrypted /boot partition on the hard drive and a 60 gig encrypted /home partition to store all of my files.  The home partition needs its own encryption key, so I decided on a key file stored on the already-encrypted root partition, allowing the OS to automatically unlock the home partition and mount it at boot time.

Installation

I downloaded the Ubuntu Alternate Install disk, since the standard installer does not include the option to set up logical volumes, which are necessary for things like encryption or software RAID.  The installation process was somewhat involved, as you need to manually configure your partitions.  There's simply no way around it, as your /boot directory needs to be unencrypted (the kernel and the software that performs on-the-fly encryption and decryption have to be loaded before any encrypted volume can be unlocked, so they can't themselves live on an encrypted partition).  I tried to configure my /home partition on the 60 gig partition on the hard drive like I wanted, but I kept running into weird problems with the installer, so I decided to take my chances installing /home on the SD card with everything else and seeing if I could move /home to the 60 gig encrypted partition later.

So I ended up with /boot on an unencrypted partition on the hard drive, my swap space on an encrypted partition on the hard drive, and the root directory with everything else (/etc, /usr, /bin, /home, etc) on the encrypted partition on the SD card.

Configuration of the /home partition on the hard drive

Less than 8 gigs of space was just not going to cut it for my /home partition, so I was anxious to get the 60 gig encrypted partition configured.

After much Googling, I learned that encrypted volumes in Linux are usually paired with LVM, the Logical Volume Manager.  This means creating a physical partition, configuring it as a physical volume, adding it to a volume group, creating a logical volume inside the group, and finally installing a file system inside the logical volume, but fortunately a handy program called cryptsetup takes care of the encryption side for you.  Encrypted volumes use LUKS, the Linux Unified Key Setup, along with dm-crypt.  One of the nice things about LUKS is that it provides 8 key "slots", meaning that you can have up to 8 passphrases or key files, any of which will unlock the volume.  This allows me to have a backup passphrase in case I need to reinstall the operating system or the key file becomes corrupted.

So to set up encryption on the 60 gig partition, I ran the following to initialize it as a LUKS volume:

    $ sudo cryptsetup -y --cipher aes-cbc-essiv:sha256 --key-size 256 luksFormat /dev/sda7

It asked me for a passphrase, which I supplied.  When it was finished, I unlocked the new encrypted partition, mapping it to the name pvHomeDir:

    $ sudo cryptsetup luksOpen /dev/sda7 pvHomeDir

Then I initialized the unlocked device as a physical volume:

    $ sudo pvcreate /dev/mapper/pvHomeDir

Next I created a volume group called vgHomeDir and added that physical volume to it:

    $ sudo vgcreate vgHomeDir /dev/mapper/pvHomeDir

I then created a logical volume called lvHomeDir that fills the volume group (the LUKS header takes a small bite out of the full 60 gigs, so I let LVM figure out the exact size):

    $ sudo lvcreate -n lvHomeDir -l 100%FREE vgHomeDir

Finally, I installed an ext4 file system in the new logical volume (note the name and location of the volume: it lives in /dev/mapper, and its name is the volume group's name joined by a dash to the name I gave the logical volume):

    $ sudo mkfs.ext4 /dev/mapper/vgHomeDir-lvHomeDir

OK, at this point the drive is all set up, but it needs that key file so that I don't have to enter two passphrases every time my computer boots.  I created a folder under /usr called keyfile, copied a picture of myself and my daughter into it, and renamed it "file".  To add the file as a key to the new partition, I ran the following (luksAddKey takes the device first and the new key file second, and prompts for an existing passphrase before accepting the new key):

    $ sudo cryptsetup luksAddKey /dev/sda7 /usr/keyfile/file
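
Since LUKS provides those extra key slots, this was also a good time to add the backup passphrase I mentioned earlier (again, cryptsetup prompts for an existing key first), and luksDump confirms which slots are occupied:

    $ sudo cryptsetup luksAddKey /dev/sda7
    $ sudo cryptsetup luksDump /dev/sda7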

Almost done!  The new volume just needs to be configured so that it will be used as my /home directory.  I edited 2 files:
    1. In /etc/crypttab I entered:
       
        pvHomeDir /dev/sda7 /usr/keyfile/file luks,retry=1,lvm=vgHomeDir

    2. In /etc/fstab I entered:
       
        /dev/mapper/vgHomeDir-lvHomeDir /home ext4 defaults 0 2

I created a directory called crypt in my home directory on the SD card and mounted the volume with the following command:

    $ sudo mount /dev/mapper/vgHomeDir-lvHomeDir /home/jizldrangs/crypt

I then restored my home directory to that folder, and when I rebooted, I had my old desktop background and all of my files! 

Drawbacks

    1.  I have noticed a performance hit when booting, and every so often when browsing the web in Firefox.
    2.  During the installation process, I chose the "random key" option for my swap space, so there is no way to do true hibernation, where the state of the machine in memory is saved to disk and restored later.  Suspend, which is where the computer turns off most components and uses minimal power, still works just fine.

Saturday, May 8, 2010

Browse securely from a public WiFi connection with SSH

If you are on the road and jump on your hotel's free WiFi, be afraid.  Be very afraid.  Why?  All of your network traffic is being broadcast in all directions with no security protection whatsoever, and it is insanely easy for anyone to read that traffic using freely available tools.  Using ARP spoofing, it is possible for a snooper to associate his MAC address with the network gateway's IP address, thereby routing all internet-bound traffic through his or her machine.  Even SSL is not entirely safe, as an attacker can abuse SSL's renegotiation capability to trick a server into giving the attacker access to your session.

What's a Linux geek to do?  Simple: SSH!

Using SSH you can create a secure tunnel from your laptop to your home computer, and pass all of your web traffic coming from your laptop through that connection to your home computer, then out to the internet.  It's simple; here's how it's done:
  1. Before you leave your home, make sure your SSH server is set up properly (see my instructions on how to harden it).  We'll say for simplicity that you have configured your SSH server and all of your clients to run on port 1234.  That means that your SSH server is listening for incoming connections on port 1234.
  2. Log into your router's configuration interface and configure it to forward all incoming traffic that arrives at port 1234 to your SSH server's IP address.
  3. Go to www.whatismyip.com and write down your public IP address.  For our example, we'll say that it is 200.200.200.200.
  4. When you arrive at your hotel, coffee shop, library, etc., pick a port above 1024 to use on your laptop.  The SSH client will listen on that port and forward all traffic to your home machine; for our example it will be port 4321. 
  5. Open a terminal and run the following command: ssh -D portNumber -N publicIPAddress (in our example, ssh -D 4321 -N 200.200.200.200).  The -D option tells the SSH client to listen on that port as a SOCKS proxy, and -N tells it not to open a shell on the remote machine.  If your SSH client is not already configured to use your custom port, add "-p 1234".  After you hit Enter, your cursor will move to the beginning of the next line and sit there blinking.  This means that the tunnel has been established and is ready to start forwarding traffic.  Leave the terminal open.
  6. Go into Firefox and go to Edit, then Preferences.  When the Preferences window pops up, click the Advanced icon at the top, flip to the Network tab, and hit Settings.  That will pop up yet another window.  Select "Manual Proxy Configuration", enter "localhost" in the box next to "SOCKS Host", and enter the port you selected in step 4 (in our example, 4321) into the Port box.  Below is a screenshot of what it should look like.
  7. Hit OK, then Close, and restart Firefox.
  8. Browse to www.whatismyip.com and verify that it detects your home IP address as the one you are browsing from.  If whatismyip.com displays your home IP address, you have successfully configured Firefox to tunnel through SSH!
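
If you plan to do this regularly, an entry in your SSH client configuration saves retyping the command; here's a sketch using the example numbers from above (the alias "home" is just a label I picked):

     # ~/.ssh/config
     Host home
         HostName 200.200.200.200
         Port 1234
         DynamicForward 4321

With that in place, running "ssh -N home" establishes the same tunnel.
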
Now you can browse in peace, and when you are finished, simply flip back over to your terminal window and press Ctrl+C to stop the SSH client.  Good luck!

Tuesday, April 20, 2010

Make web apps feel more like native apps with Prism

As I've mentioned before, I'm not crazy about the idea of web apps.  I much prefer the control and integrated experience that comes with desktop apps.  That's why I tried Prism, and I am loving it.

Prism is a "Site Specific" web browser built by the folks over at Mozilla.  They stripped out all of the menus, toolbars, navigation buttons, etc. that you would normally expect from a web browser because it is intended to be used with a single web application per window.  You basically give it a URL when you launch Prism, and that window is dedicated to that site.  Each web app gets its own window and all navigation is done through the links provided by the app itself.

This may not sound like a big deal, but there are several reasons why I like it:
  1. I am able to reclaim the extra screen real estate (which is especially important on a netbook).
  2. For web-based apps, I prefer to have a dedicated launcher.  If I am trying to check my Gmail, I don't want to launch Firefox, wait for my home page to load, then browse to Gmail; that is a lot of steps, and I can reduce it to one click with Prism.  This also allows me to keep my web browsing separate from my web apps.  I generally have about 25 tabs open at a time, so with Firefox dedicated to browsing, I can open and close tabs with reckless abandon and don't have to worry about making sure that "special" tab stays open because if I close it I have to sign in again.  Let's face it, Firefox can be a little unstable at times, so if one of my tabs is causing Firefox to flake out, it is nice to be able to close and restart Firefox without losing my Grooveshark music or Gmail.
  3. The extra window is really nice because it gets its own tab in the taskbar.  I know that no matter what I am doing, my web app is only a click away, just like every other app.  This offers a great window managing and usability benefit.  For a while I was listening to Grooveshark inside Firefox, and whenever someone came up to talk to me, I had to spend a painful 5 to 10 seconds finding the Firefox window on my taskbar, waiting for it to maximize, flipping to the Grooveshark tab, and clicking Pause.  It is downright rude to keep someone waiting that long while I pause my music, so I welcome the chance to cut down on that time.
 Here is what I ended up with.  You'll notice the Grooveshark and GMail icons in my Gnome panel at the top of the screen, along with the dedicated windows for each app, and the separate tabs at the bottom. 

If you like what you see, I recommend that you give it a shot.  You can get Prism from the Ubuntu repositories.  It comes with a Firefox extension: you simply browse to the web page and go to Tools, then Convert Website to Application.  That will pop up a window which allows you to specify the name, URL, and icon of the application.  Most of the time that stuff will be filled out for you, and all you have to do is check the Desktop box.

Once the Launcher is created on your desktop (unfortunately Prism does not give you the option of specifying where the launcher will be created) you can copy it to your Gnome panel or create a menu item pointing to it.

As always, I have a few caveats to share:
  1. Prism uses the same process name regardless of how many windows are open, so if you create separate launchers and add them to a dock like Avant Window Navigator, it might get kinda confused.
  2. The Firefox plugins are not available to Prism, so if you are accustomed to the luxury of such plugins as AdBlock Plus, NoScript, or any of the thousand other plugins for Firefox, you will have to decide how important they are to the particular web app you want to create.

Prism isn't for everyone; it is intended for people who are looking for a certain kind of control over their web browsing experience.  Hope this helps!

Sunday, April 4, 2010

Great Linux Games: Extreme Tux Racer

Since upgrading to the beta version of Ubuntu Lucid I've rediscovered a classic Linux game: Extreme Tux Racer.  This game features the Linux mascot, Tux, sliding down the side of a mountain, collecting herring, avoiding trees, and going off jumps on his way to the finish line at the bottom.  It is a timed race (no other contestants involved), and you can start a campaign and work your way through a series of races or simply "practice" on any track you want.

I absolutely love insanely fast, out-of-control speed racing games such as this (I was a big fan of Star Wars Racer for the Nintendo 64 back in the day), and this is the perfect outlet for me.  Trying to keep Tux under control as he is screaming along a half-pipe of solid ice at breakneck speed while collecting herring and racing against the clock is such a blast.  The versions that have shipped with the last few releases of Ubuntu have added some great tracks to choose from (I recommend "In Search of Vodka" and "Candy Lane").

There are several other games for Linux that I want to write about at some point, but this is the one I caught myself playing this evening.  If you haven't tried it, I highly recommend it (it is in the Ubuntu repos, and the website is here).  Have fun!
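
For the record, installing it should be a one-liner; if I remember right, the package is named extremetuxracer:

    sudo apt-get install extremetuxracer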

Sunday, March 28, 2010

The search for the right dock for Ubuntu on a netbook

The beta version of Ubuntu Lucid Lynx became available last week.  I usually upgrade at least one of my machines to the beta versions, so I went ahead and did so for my netbook. 

The first thing I noticed was that the upgrade removed the Netbook Remix enhancements, and I was looking at a plain vanilla Gnome desktop.  This isn't a terrible problem, but the stock desktop wastes screen space on a netbook, and I started wondering if I could do better.

I remembered hearing about several docks available for Ubuntu and decided to give them a try.  I decided that I really wanted to see the following features:
  1. One icon to launch an application or, if it were already running, bring the instance of it to the top.  With Gnome, you have your launchers at the top and your running programs at the bottom, so if you want to go to a website, you first have to check the bottom of the screen to see if Firefox is already running (which is especially hard with the new Lucid theme, since the tab for the selected window has white type on a light-gray background, making it almost impossible to read).  The thing that first made me interested in exploring the possibility of a dock was the idea that when I am ready to browse the web, I have one icon that will launch Firefox if it isn't open already or bring it to the top if it is.
  2. The one icon needs to have some visual indicator of whether the application is running.
  3. If I am to switch to a dock, it needs to replace all current panels and menus.  That means I want some kind of menu with full access to all of the programs and settings currently in my Applications, Places, and Settings menus.  I also need to be able to control my WiFi, see how much battery is remaining, adjust the volume, and shut down from the dock.
  4. It has to be fast and not too flashy.  I've seen some of these docks where the items grow as you hover over them, and I think I would really hate that.  When I am trying to click on an icon I do not want it growing or moving around as my mouse approaches it.  I also want the icons to be in the same position on the screen all the time, which means that I don't want the dock expanding and contracting as I launch applications (I found out later that a few docks supported "panel mode" which fits this requirement nicely).
 The Search

With these criteria in mind, I set off to find the best dock.  I found this blog posting and basically started going down the list.  Docky had a nice-looking panel mode but it wasn't easy enough to use (I couldn't figure out how to add a launcher).  Cairo-dock was nice and I almost stuck with it, but it didn't have the nice panel mode I wanted.  It was only available in the centered, expanding and collapsing mode, and the only theme that provided indication as to whether the application was already running under each launcher also had the icons growing in size when my mouse rolled over it.  So Cairo-dock was close but no cigar.  



I installed WBar but by the time I figured out how to launch it, I was already in the process of installing the dock I stuck with, Avant Window Navigator.


Avant Window Navigator (AWN)

To make a long story short, AWN met all of my specifications to a tee.  It is quite user-friendly and easy to configure, and the repos contain a lot of great applets (more on that later).  AWN has all of the common options I saw when testing other docks, but it also allowed me to configure it just the way I wanted.

I used Synaptic Package Manager to install the following packages:
  1. avant-window-navigator (with dependencies)
  2. awn-applets-c-extras
  3. awn-applets-python-extras
  4. python-awn-extras
When the setup was complete I ran avant-window-navigator from the command line and my dock popped up at the bottom of the screen.  I set the following settings on the Preferences tab:
  1. Size of Icons: 24 pixels
  2. Orientation: Bottom
  3. Style: None
  4. Behavior:  Panel Mode
  5. Icon effects:  None
  6. Checked the "Expand the Panel" checkbox
  7. Slid the "Position on the screen" bar all the way over to the left
  8. Checked "Start AWN automatically"
 Then I flipped over to the Applets tab and was blown away by the number of applets!  I found almost everything I could want there, including the battery meter, the volume control, a menu (I recommend the "Yet Another Menu applet" over the "AWN Main Menu" or the "Cairo Main menu"), the date/time, etc.  There were a few things I wasn't looking for but was happy to see:  media player controls (I added the play/pause, previous and next controls to my dock and now can control Rhythmbox with one click), an RSS feed reader, a system monitor, a hardware sensor, a weather indicator, etc.  It's perfect for me today, and has a lot of potential to make my life easier in the future.
It wasn't long before I was totally hooked and ready to abandon the default Gnome panels for good.  I right-clicked and deleted the bottom panel without a problem, but when I went to delete the top panel, I noticed that the Delete option was grayed out.  I guess they do that in order to stop newbies from accidentally deleting the panel, but I had made an informed decision to get rid of it! 

Long story short, here is how to turn off the top panel in Ubuntu: hit Alt+F2 and type "gconf-editor".  That will pop up a window that looks kinda like the Windows Registry Editor.  On the left-hand side, expand desktop, then gnome, then highlight session.  On the right-hand side, double-click "required_components_list".  That will pop open a new window.  Highlight "panel" and click Remove.  That's it!  The next time you log in you won't have any panels.
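
If you prefer the command line, gconftool-2 can make the same change.  This assumes your list contains the stock windowmanager, filemanager, and panel entries, so check what gconf-editor shows you before running it:

     gconftool-2 --type list --list-type string --set /desktop/gnome/session/required_components_list '[windowmanager,filemanager]'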

Here is what I ended up with.  Hope this helps!

Monday, March 15, 2010

Follow-up to "Using passphrase-protected SSH keys in Cron"

So I wrote a while back about automating SSH jobs in Cron when the keys were passphrase-protected.  I was very excited about being able to automate tasks over SSH and I was sure I had it working but a few days later I realized that it wasn't working at all.  It was failing without any error I could see.

I'll spare you the gory details of how many hours I spent and how many things I tried to get it working.  The bottom line is that this evening I realized that when my Cron job or script ran "ssh-agent -s", another ssh-agent process was being created!  I had figured that command simply exported the environment variables necessary for me to use the process created by the Gnome session (you may remember that I have seahorse set to automatically unlock my SSH key when I log in), but instead I was inadvertently creating a new ssh-agent process with no keys in it.  Then when I tried to run Unison, it found the wrong ssh-agent process.

To fix it, I needed to find a way to keep from starting another ssh-agent process, and instead gain access to the one seahorse starts every time I log in.  I made a change to my crontab that would search for the existing ssh-agent process ID and authentication socket and import them into the cron environment.  This is kind of a hack but it actually works (not like last time).  Just add the following to your script before trying to connect to your SSH server (or do what I did and put them right into the cron job, separated by semicolons):

    export SSH_AGENT_PID=`pgrep -u $LOGNAME ssh-agent | head -n 1`
    export SSH_AUTH_SOCK=`find /tmp/ -path '*keyring-*' -name '*ssh*' -print 2>/dev/null`

Sunday, February 21, 2010

Playing MIDI files in Ubuntu

Back in the winter of 2005 when I was using Debian (back then Etch was the "testing" version), I spent several frustrated days trying to get Linux to play MIDI files. The best I could come up with was running TiMidity from the command line and specifying the "dumb" interface. It was very clumsy because I kept forgetting the command-line arguments and would have to look them up whenever I wanted to play a song, and eventually I just decided that playing MIDI files was something I would have to boot into my Windows partition to do with a reasonable amount of ease.

Fast forward to about half an hour ago. I joined our local church's choir last fall and wanted to listen to some MIDI files to help me learn my parts. A quick Google search revealed the packages I needed, so I went to the command line and issued the following:

    sudo apt-get install timidity timidity-interfaces-extra freepats

And within a few minutes I was listening to my MIDI files through Totem movie player, with which Ubuntu had automatically associated the MIDI file type. After my last experience I couldn't believe how easy it was!
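
Incidentally, with those packages installed, timidity will also play files straight from the terminal (the file name here is just a placeholder):

    timidity mysong.mid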

So I want to send a big shout out to all the devs working on Ubuntu, Gnome, TiMidity, PulseAudio, etc. for getting this right. Playing MIDI files is definitely a niche need, since not too many people listen to them, so it would have been easy to overlook, but instead they did a great job of making it work. This is yet another example of how far Linux has come over the years. The Internet is peppered with stories about how various tasks that were recently very difficult to do on Linux are dead simple now, and I am happy to add mine to the pile. Great work, guys!

Sunday, February 7, 2010

XRDP, the best remote access software under Linux

My desktop PC is in my basement office, which is a nice place but somewhat inconvenient.  I really wanted a way to access it remotely from my laptop but had trouble finding a remote-access application that was fast enough.

Both my laptop and desktop are wireless, so there are 2 wireless "hops" between my machines, and when I'm logged into my desktop from my laptop, each command (a click or a keystroke, say) makes 4 wireless hops, 2 out and 2 back, before my screen displays the result.  So speed is key.

Remote access servers that didn't work out

I tried pretty much every remote access server I could find.  I was already familiar with VNC, and that was a perfectly viable option, but my desktop has dual monitors, so there was a pretty big difference in screen resolutions, and one of VNC's real weaknesses IMHO is that the client is forced to use the screen resolution of the server.  I didn't want to be scrolling around a lot, so I axed that idea.

Next I tried SSH with X forwarding enabled (using the -Y option).  That was nice and simple since all of the software and keys were already in place, but it was too slow, even with the -C option (which enables compression, effectively speeding up the rate of transfer).  FreeNX is a similar solution, and although it was somewhat faster some of the time, it didn't provide a consistently high level of performance so I had to reject both solutions.


Gnome also offers the option of logging into another machine via XDMCP.  When booting my laptop, I would stop at the login screen, hit Options, then "Remote Login via XDMCP".  Besides stopping me from running any applications or doing anything else on my laptop while logged into my desktop remotely, this proved to be too slow as well, so I canned the idea.


I was about to give up when I discovered xrdp.


XRDP

XRDP is a service that I installed on my desktop to enable it to accept RDP connections from my laptop.  Ubuntu already comes with an RDP client, tsclient, installed by default, and I've used Windows XP's Remote Desktop a lot at work, so I know how well the protocol works.  XRDP is in the Ubuntu repositories, so I installed it on my desktop, and now I can log into it from my laptop using the same program I use to log into my computer at work, and the speed is more than adequate.  It is really great to be able to program, run virtual machines, and do many other resource-intensive operations from my laptop.
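
Installation is a single apt-get line; I list tightvncserver explicitly because of the first quirk below:

     sudo apt-get install tightvncserver xrdp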

XRDP has a few quirks that I decided I could live with, but it is worth mentioning them here:
  1. It depends upon the package "tightvncserver" being installed on the server (the machine you are logging into), although it isn't configured as a dependency in the package (this was the case as of the last time I set up XRDP, which was a while ago so hopefully they have fixed this).  Install tightvncserver before you attempt to install XRDP.
  2. You may install the XRDP package and have it work just fine until you reboot the server.  This is because XRDP sometimes has difficulty figuring out whether it is running or not, so when the server is rebooting and it tries to start XRDP, XRDP thinks it is already running even though it isn't and refuses to start.  You can fix that by running this:
         sudo sesman --kill
         sudo xrdp --kill
         sudo rm /var/run/xrdp/xrdp.pid
         sudo sesman
         sudo xrdp
  3. There is no way to log into the console session.  So if I were working at my desktop and had to get up for something, I couldn't resume that session from my laptop.  Windows XP provides the "/console" command-line switch, which allows you to take over the session that is currently displayed on the machine's screen, but there is no way to do that with XRDP.  There might be a way to do it within Gnome; I'm not sure at the moment.
  4. If you do anything in the remote session that makes a sound, the sound will be played on the server's speakers, not your client's speakers.  So the first thing I hear when logging into my desktop is the sound of the Ubuntu login music being played on my desktop speakers, down the stairs and two rooms away.  This is a very minor annoyance to me but it is worth mentioning.
So now I have exactly what I set out to get: an easy-to-use remote access method that is fast and reliable.  It is really great to be able to open tsclient (you will find it on the Internet menu as "Terminal Server Client") and see my desktop.  I highly recommend it!

Saturday, February 6, 2010

Using passphrase-protected SSH keys in Cron

Just a few days ago I told you all about the glory that is SSH, and now I want to show you how to automate its use.

I'm sure you noticed while you were setting up your key that it asked you for a passphrase to protect the key.  If you opt not to set a passphrase, it will store your key in plain text, which means that if anyone gets access to your filesystem, they have access to your key.  They can copy it and use it on any computer to log into your server as you, so you are much better off protecting your key with a passphrase.  The key generator will use the passphrase to encrypt your key and give you protection from this threat.

Because the SSH key is encrypted, whenever you want to use it, it will ask you for your passphrase so that it can get access to the key.  This is exactly what you want to keep your key secure, but if you want to automate any of your SSH related tasks in Cron, you will be prompted for the passphrase every time SSH tries to establish a connection with your server.  One quick, easy way out of this situation is to simply keep your SSH key unencrypted and hope that no one who gets near your computer has malicious intentions, but there's a better way: ssh-agent.

Ssh-agent is a process that stores SSH keys in memory.  When you set up your SSH key with a passphrase, you may notice that Gnome gives you the option to "unlock this key automatically when I log in" (Ubuntu offers this option, YMMV); if you select that option, it will use Ssh-agent to store your SSH key in memory.

So at this point you have your SSH key encrypted on disk but unlocked in memory, so you can manually SSH into your server without entering the password for your account on the server or the passphrase for the SSH key, but you want Cron to have the same ability.  At this point, any SSH jobs in Cron will prompt you for your passphrase to unlock the SSH key.  I will show you how to fix that by making your unlocked SSH key (which is stored in memory by ssh-agent) accessible to Cron.

There are 2 steps involved in making your key available to your script running in the Cron environment: Saving the 2 SSH environment variables to disk and reading them into your environment within your Cron job.

Saving the SSH environment variables

Go into your crontab file by entering "crontab -e" at the command line.  Add the following on its own line:

     @reboot ssh-agent -s | grep -v echo > $HOME/.ssh-agent

The @reboot is a special token in cron that causes the command to be run once when the system boots (more precisely, when the cron daemon starts).  The command writes the SSH_AUTH_SOCK and SSH_AGENT_PID variable assignments to a file on disk; the grep strips out ssh-agent's "echo" line so that only the two assignments are saved.

Retrieving the SSH environment variables within a Cron job

Basically, in order to make the running instance of ssh-agent available to your cron job, the following line needs to be run:
      
     . ~/.ssh-agent

On my machine, I put it right into crontab just before the job I wanted to run, like this:

     # m h  dom mon dow   command
     30 * * * * . ~/.ssh-agent; unison default -ui text -batch

If you are running a script, you can alternatively put the command at the top of the script.
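
For instance, a minimal wrapper script built from the pieces above would look like this:

     #!/bin/sh
     . ~/.ssh-agent
     unison default -ui text -batch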

And that's all there is to it!  You should now be able to run an SSH command using an SSH key that is encrypted on disk but unencrypted in memory within a cron job. 

EDIT:  A few days after posting this, I discovered that it was actually not working for me at all.  The steps I just described probably would have worked just fine if I was trying to start my own ssh-agent process and manually add keys to it using ssh-add, but I was trying to do it using the ssh-agent process and key which seahorse sets up for me when I first log in, and this configuration required a slightly different solution.  See my follow-up post for details.
 

See also:  http://sial.org/howto/openssh/publickey-auth/

Wednesday, February 3, 2010

SSH, the Secure Shizzle

SSH, the "secure shell" is the ultimate exception to the "security is inversely proportional to convenience" rule.  It is so easy to set up and once you have it running, secure authentication and communication are totally transparent.  It's almost irresponsible not to use it.

In its most basic form, SSH is a secure remote console program, a secure replacement for Telnet.  However, it is very flexible and can handle such tasks as secure file copying between machines, proxying, and even tunneling.  Also, it's been around so long that it has become one of the de facto standards for secure authentication and communication on Linux.  Pretty much any network-enabled program worth its salt supports it.

There are many fine tutorials on how to use SSH; my favorite is by the incomparable Chess Griffin on the Linux Reality podcast.  If you aren't already familiar with SSH, stop reading this post and listen to that episode of Linux Reality right away.  I'll wait.

As awesome as it is to be able to type "ssh myserver", enter the password, and start entering commands to be run on your server, there's a better way.  It's called SSH keys, and they will save you the trouble of typing in your password every time you want to connect to the server and they will give your server extra protection from being hacked.  It's easier and more secure, which almost never happens, so you know you have to do it.

Creating an SSH key

Simply hop into the command line and type "ssh-keygen".  It will ask you to give a filename and location for the new keys, and it will ask you to give it a passphrase.  When it is finished, go into your home directory's .ssh folder, and you will see two files: id_rsa and id_rsa.pub.  The id_rsa file is your private key; leave it where it is and don't let anyone have access to it.  Copy the id_rsa.pub file to your server.  On the server, go into your home directory's .ssh folder.  Create a file called authorized_keys if it doesn't already exist and copy the contents of your id_rsa.pub file into it (append the contents if the authorized_keys file already exists).  Now, go back to your client machine and SSH into your server.  It should let you in without asking for a password.
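
By the way, most distributions ship a helper called ssh-copy-id that does the whole copy-and-append dance for you in one shot ("myserver" is whatever host or user@host you normally connect to):

     ssh-copy-id myserver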

More SSH hardening measures

Let's face it, half the point of SSH is the security.  Setting up keys as I described will protect communications between your computers and ensure that you are actually contacting the server you think you are contacting with almost no effort on your part.  There are a few extra steps that should be taken to gain the highest level of security:
  1. Disallow protocol 1: On the server, open /etc/ssh/sshd_config as root.  Find the line that says "Protocol 1,2".  Delete the 1 and the comma, so it only says "Protocol 2".  Don't close sshd_config yet...
  2. Also in sshd_config, find the line that says "#PasswordAuthentication yes".  Remove the hash mark from the beginning of the line and change "yes" to "no".  Keep sshd_config open...
  3. Find the line that says "Port 22" and change the 22 to another number between 1024 and 65535.  
  4. Save that file and restart the SSH daemon (on Debian and Ubuntu you can do this by running "/etc/init.d/ssh restart" as root; on other distributions the init script may be called sshd)
  5. On your client, open /etc/ssh/ssh_config as root
  6. Find the line that says "Port 22" and change it to the same port number you set in step 3 above.
  7. Find the line that says "Protocol 1,2" and change it to "Protocol 2", as you did on the server in step 1 above.
  8. Find the line that says "Ciphers aes128-cbc,3des-cbc,blowfish-cbc...".  Remove the hash mark at the beginning of the line if there is one, then remove all of the ciphers listed except aes256-cbc.  That line should end up reading "Ciphers aes256-cbc".
  9. Save that file
  10. Bookmark this blog post, because you will have to repeat steps 5-9 on all of your clients.
This may seem like a lot of work, but please keep in mind that SSH has been around for a very long time, and there are a lot of automated attacks against SSH servers that rely on the default settings, such as support for Protocol 1 and the daemon listening on port 22.  Changing these defaults takes only a few minutes but makes it dramatically harder for attackers to get in.  These steps are absolutely essential if you are considering exposing your SSH daemon to the public Internet.
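
For reference, after steps 1 through 3, the relevant lines of /etc/ssh/sshd_config should read something like this (1234 stands in for whatever port you picked):

     Protocol 2
     PasswordAuthentication no
     Port 1234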

At this point, you are able to communicate between machines with more ease and convenience than ever, and those communications are secured by strong, well-tested authentication and encryption.  Security and convenience will never be this close, so take advantage!

Sunday, January 10, 2010

Automate gadget-related tasks by setting up a udev rule

If you're reading this post, odds are good that you own a computer.  Since it is specifically this post that you're reading, odds are good that you also have some kind of peripheral, such as a camera, printer, mp3 player, webcam, usb flash drive, usb hard drive, etc, and that there are certain tasks that you need to perform every time you plug it in.

For instance, you may want your computer to automatically import your pictures when you plug in your camera.  You may need to send your printer its firmware (like I do) every time you plug it into your computer.  You may want to back up your phone or send files to your flash drive automatically.  Or perhaps you just want Skype to launch automatically whenever you have your webcam connected.  Whatever your situation, odds are good that there is some way that you can automate tasks and make your life easier by saving time with udev rules.

Udev is the new device manager for the Linux kernel, replacing the older manager, hotplug.  In this post I will show you how to configure a rule so that udev runs a bash script (which I will also show you how to create) whenever you plug in your device.

Creating the udev rule

The nice thing about udev is that you can make rules for individual devices.  This means that you can have a different procedure run when you plug in your external hard drive than the one that runs when you plug in your USB flash drive.  To get started, plug in the device for which you would like the rule created.  Then run lsusb.

You will get a list of devices, and you should see yours listed by manufacturer.  On that same line, just before the manufacturer's name, you should see two 4-digit hexadecimal values separated by a colon.  The first value is the Vendor ID and the second is the Product ID; you will need both for your udev rule.  As an example, when I created my rule for my mp3 player, this was the output of lsusb:
Bus 001 Device 012: ID 041e:4139 Creative Technology, Ltd Zen Nano Plus
Bus 001 Device 002: ID 0c45:62c0 Microdia Sonix USB 2.0 Camera
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub

You can see my mp3 player listed on the first line, and that the Vendor ID is 041e and the Product ID is 4139.  On to the next step.

Go into /etc/udev/rules.d and, as root, create a new file named 85-my_rule.rules (the two-digit prefix controls the order in which rule files are processed, and the name must end with ".rules"; what you put in the middle doesn't really matter) and open it in your favorite text editor.  Enter the following on a single line:

     ACTION=="add", ATTRS{idVendor}=="your-devices-vendor-id", ATTRS{idProduct}=="your-devices-product-id", RUN+="/path/to/your/bash-script"


...so the rule I created for my Zen Nano looked like this:

     ACTION=="add", ATTRS{idVendor}=="041e", ATTRS{idProduct}=="4139", RUN+="/home/jizldrangs/bin/sync-zennano"


This basically tells udev that whenever you plug in a device with the vendor ID and product ID you specified, it is to execute the bash script you specified.  With this setup, what you do when the device is plugged in is limited only by what can be scripted in Bash, so you can see how powerful this can be.

If for whatever reason you would like to manually mount or unmount the volume, you can tell udev to place a symlink in the /dev directory by using the SYMLINK keyword.  Just append SYMLINK+="my_symlink_name" to the end of the rule, and after it runs you can find a symlink to the device in /dev which you can use with the mount or umount commands.
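
For example, a rule that skips the script entirely and just creates a predictable device node for my Zen Nano would look like this (the symlink name is arbitrary):

     ACTION=="add", ATTRS{idVendor}=="041e", ATTRS{idProduct}=="4139", SYMLINK+="zennano"

After that rule fires, /dev/zennano will point at the device, ready for the mount or umount commands.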

When you are finished with that, restart udev:

     sudo service udev restart

Creating the script

The script that is run when the device is connected can do pretty much anything you want.  As long as it can be done from the command line, you can automate it with udev. 

As an example, and as a segue into my discussion on the various pitfalls of udev, here is what I ended up with:

#! /bin/sh

USER=jizldrangs
export HOME=/home/jizldrangs
SHELL=/bin/sh
DISPLAY=:0.0
MAILTO=jizldrangs
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/home/jizldrangs/bin
xhost local:jizldrangs
export DISPLAY=:0.0
LOGFILE="/usr/local/bin/sync-zennano.log"
TIMEFILE="/usr/local/bin/sync-zennano.time"

CURRENTLY=$( date +%s )
if [ -e $TIMEFILE ]; then
  
    echo "sync-zennano.time exists"
    FIRST_RUN_TIME=$( cat $TIMEFILE )
  
    echo $CURRENTLY
    echo $FIRST_RUN_TIME
    NUMOFSECONDS=$(($CURRENTLY - $FIRST_RUN_TIME))
    echo "num of seconds: ${NUMOFSECONDS}"
    if [ $NUMOFSECONDS -gt 60 ]; then
        echo "sync-zennano has not run in over 60 seconds"
        rm -f $TIMEFILE
    fi

fi

if ! [ -e $TIMEFILE ]; then

# Send the following block of commands to the background
{
    sleep 10

    notify-send -u normal -t 2000 -i info "Zen Nano Sync" "Starting synchronization.  Please wait."

    sudo -u jizldrangs unison zennano -auto -batch -terse -ui text

    notify-send -u normal -t 2500 -i info "Zen Nano Sync" "Synchronization of Zen Nano is complete"
} &
    touch $TIMEFILE
    chmod 666 $TIMEFILE
    date +%s > $TIMEFILE

fi

There are several udev "gotchas" that I'm working around here (I only really wanted to run 3 lines, but had to add a bunch of other stuff to get it working right).  It's worth going through each one...

Udev Gotchas

There are a relatively large number of things to watch out for:
  1.  For some reason, it is common for udev to run your script many times in quick succession.  Whenever I plug in my mp3 player, udev runs my script at least 10 times.  To get around that, I added a few lines to my script that would write the time to disk the first time it is run, then check for the existence of the file and not run the payload of the script if it had already been run within the last 60 seconds.  It's clunky but it works.
  2. Make sure you make the script executable with a "chmod 700 /path/to/script".  
  3. Keep in mind that your script will be run as root.  This means:
    1. For security reasons, keep it in a folder where it can't be accessed by anyone but root and remove read/write permissions from all other users, including yourself.
    2. Any environment variables or elements you may be relying on won't be available.  In my script, I am running a Unison profile located in my home directory, so in order to get it working I had to export the path to my home directory into the HOME variable in the first few lines of the script.  Then when it came time to run Unison, I added "sudo -u jizldrangs" at the beginning of the line, which causes that line to be run as me rather than as root.
  4. Udev will wait for your script to finish before mounting the device, so if you are doing any tasks that rely on the device being mounted, send that series of commands to the background by surrounding them with curly braces and adding a space and an ampersand after the close-curly-brace.  This will "detach" the commands, or cause udev to continue its work without waiting for those commands to finish.  Then add a "sleep 10" at the beginning of the commands being sent to the background, so that udev has time to mount the device before the rest of the commands are executed.
  5. Because udev runs the script in its own environment, it won't have information on your display.  I like to have notifications pop up, so I added the "export DISPLAY=:0.0" line towards the top.  The notifications I get using notify-send would not work without it.
It took me a while to get all this working, but it was worth the knowledge I gained along the way.  Now whenever I plug in my mp3 player, it automatically synchronizes my podcasts, and it displays notifications telling me when it begins the sync process and when it is finished.

See also:
http://reactivated.net/writing_udev_rules.html
http://ubuntuforums.org/archive/index.php/t-502864.html

Unify your files with Unison

If you are using multiple computers, you must have some way of keeping the files you want on each machine.  A lot of people use Dropbox or LiveMesh, and Canonical recently threw its hat into the ring with UbuntuOne, but all of these solutions provide a pretty small amount of storage (2 gigs in the case of Dropbox and UbuntuOne), and as with any online storage solution they also require that you trust them not to lose your data and to respect your privacy.

Because I prefer not to trust anyone with my data unless I have to, and partly because I'm a cheapskate, I prefer the DIY approach.  This is where Unison comes in.

Unison Explained

Unison is a file synchronization utility that keeps the contents of two directories synchronized.  In true Unix fashion, it doesn't reinvent the synchronization wheel; it uses the rsync algorithm to do the comparison and transfer, effectively turning an approach that could only handle one-way synchronization into a tool capable of two-way synchronization.  In addition to being able to sync local directories, it can also operate over SSH, so you don't have to spend a bunch of time providing Unison a protocol or dedicated port, or setting up a dedicated authentication mechanism.

Like all great Linux utilities, Unison has a graphical user interface but can also be used on the command line (this is important for automating synchronizations in cron; more on that later).  Either way, you are going to manage Unison tasks with profiles.  A profile is basically a pair of folders to sync (one local, the other local or remote) plus zero or more options; profiles can get rather elaborate, but don't let that scare you.  The Unison GUI makes the task of creating a new profile simple.

Creating Unison Profiles
  
After you've installed Unison (on Ubuntu use "sudo apt-get install unison unison-gtk"), fire it up from your menu or by typing "unison-gtk" at the command line.  You will be greeted with a screen where you can type in the name of your first "root", or local directory.  When you are finished with that, hit OK, and you will be prompted to enter the second directory.  This screen will give you the option to connect to a remote machine over SSH or a raw socket (not recommended).  If you decide to sync with another machine over SSH, be sure to type in the absolute path to the folder on that machine (i.e. "/home/jizldrangs/documents").  Fill in the host name, username to connect as, and port number as necessary, then hit OK and you're done!  Unison will take this information and create a new profile called "default".  You can create more profiles the next time you launch the Unison GUI.

The GUI will help you set up a profile and will allow you to set some of the basic options, but if you want to use any of the more advanced features (see the Unison man page for a list), you will need to edit the profile by hand.  Your profiles are stored in separate files in the .unison directory in your home directory (e.g. /home/jizldrangs/.unison; it is a hidden directory).  In that folder you will see a file for each of your profiles, all ending with ".prf".  Simply open the profile you want and start adding options.  You can do things like exclude certain file types, exclude certain directories, exclude files over a certain size, have it follow links, etc. 
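
To give you an idea of the format, here is a sketch of a hand-edited profile; the server name, paths, and patterns are made up for illustration:

     # /home/jizldrangs/.unison/default.prf
     root = /home/jizldrangs/documents
     root = ssh://jizldrangs@myserver//home/jizldrangs/documents
     ignore = Name *.tmp
     ignore = Path old-stuff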

I already have an old Pentium 3 laptop recommissioned as our file and print server, and it runs all the time, so it was the perfect place to store the master copies of all my files.  It receives updates from whichever of my computers I make changes on and distributes them to the rest of the machines, all with the security and ease of SSH.

To make sure that I always have the latest version of my files, I've tasked Cron, my personal Linux butler, with running the sync at the bottom of every hour.  Here is the relevant line from my crontab file:

     # m h  dom mon dow   command
     30 * * * * unison default -ui text -batch

The "ui text" option tells unison to use command-line mode and not to launch the gui, and the "batch" option tells it to accept default update options (basically, replace older files with newer files), so it doesn't prompt me for input on what to do with the files.  Now my file syncing is totally automated.

Unison Gotchas

If you are syncing files to an mp3 player or usb flash drive, add the option "perms = 0" to avoid getting the error "failed to set permissions" when you try to sync.

Happy syncing!  

Saturday, January 2, 2010

Ubuntu Netbook Remix

I received an Acer Aspire One netbook for Christmas, and after installing Ubuntu Netbook Remix it has replaced the Dell Inspiron 5100 laptop as my main day-to-day machine.   UNR is a very slick modification of Ubuntu that is well-suited to this type of computer.  Some initial thoughts:
  1. Hulu works!  Yay!
  2. The smaller screen has been harder to get used to than I thought it would be.  I always thought of the larger screen of my Dell Inspiron laptop as an asset, but I didn't appreciate just how much of one until I switched to this 10.1" screen.  Fortunately UNR has a neat little feature that helps you get the most out of your pixels: when a window is maximized, it merges the title bar with the menu at the top of the screen.
  3. The battery life on this thing is fantastic.  Thanks to the power-sipping Atom processor, the battery will last about 5 1/2 hours.  The Dell laptop's Pentium 4 would guzzle down the battery's juice in about 45 minutes (if I was lucky).
  4. Unfortunately the touchpad driver does not support multitouch, so 2-finger scrolling does not work.  UNR provides an "edge scrolling" option, which is what I'm using now, but it would be great to get 2-finger scrolling back.
  5. Like other netbook OSs that I've seen, the menu is integrated with the desktop.  It looks really slick, is easy to use, and does a good job of utilizing the netbook's limited screen real estate.
I love this little unit, but I'm not sure that I agree with the initial netbook vision.  Netbooks were supposed to be little more than a dedicated web-browser, with most of the applications, and therefore computing, done in the cloud.   Although most of what I do on this machine involves the web browser, there are plenty of client applications I use.  Sure, I am not going to be running VirtualBox on this machine any time soon, but this machine wouldn't have one-tenth the value to me if it couldn't run applications like Liferea, Zim, Unison, Empathy, and Rhythmbox in addition to Firefox. 

Furthermore, I'm not sure that the manufacturer of this device really believes in the original vision for the netbook either.  The original netbooks (and here I'm speaking of the Asus Eee) had a single-core Celeron processor, with a trimmed-down customized version of Xandros Linux, and 4 gigs of internal flash storage.  That was much more consistent with the web-browser-only ideal than today's netbooks.  This Acer Aspire One has a dual-core Intel Atom 1.6 GHz processor, 1 gig of RAM, and a 160 gig hard drive, and it shipped with Windows XP Home Edition.  It seems clear to me that the public liked netbooks but wanted a higher level of functionality than was available in the first generation, and Acer, along with the manufacturer of every other netbook I am aware of, has delivered.  I'm absolutely thrilled with the results.

It appears that some hardware manufacturers have picked up on the recent netbook trend and have decided to declare war on it.  Litl, LLC, along with a few other companies, is attempting to return to the original netbook vision with the "webbook", which is truly a web browser with a keyboard attached.  I wish litl the best of luck in their endeavours, but I fear for their sake that the days of terminal-mainframe topologies are behind us, and that people will always want their machines to have some modicum of capability.