Friday, 30 July 2010

Encrypted Backups Part 2

After a bit of playing with the encrypted backup setup described in my previous post I decided to expand on the idea and created a script that will look for "backup.lst" in the home directories of all users who are members of a backup group.

If the file is found then each line in it is treated as a file path to be backed up with rsync. The script follows; it's in the public domain so help yourself, and if it's useful please comment here :-)
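
For example, a backup.lst might contain one path per line, relative to that user's home directory (these entries are purely illustrative):


Documents
Pictures/holidays
.config/some-app
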


#!/bin/bash

# this script scans the home directories of all users in the
# group "backup" and looks for a file called "backup.lst". If the
# file exists, each path listed in it will be backed up remotely

DESTIN=/home/local-user-name/Crypt
REMOTE=/home/local-user-name/Remote/
SERVER=remote-server:
RUSER=remote-user-name

sshfs $RUSER@$SERVER $REMOTE
encfs --extpass=/home/local-user-name/extpass.encfs $REMOTE/crypt $DESTIN

IFS=$','
USER_LIST=`grep ^backup /etc/group | cut -d: -f4`

for USR in $USER_LIST; do
    if [ -f /home/${USR}/backup.lst ]; then
        LOGFILE=/home/${USR}/backup.log
        echo "Starting backup at `date`" >> $LOGFILE
        echo "Working for" /home/${USR}/backup.lst >> $LOGFILE
        if [ ! -d ${DESTIN}/${USR} ]; then
            mkdir ${DESTIN}/${USR}
        fi

        IFS=$'\n'
        for F in $(cat /home/${USR}/backup.lst); do
            rsync -v -a --delete /home/${USR}/${F} $DESTIN/$USR/ >> $LOGFILE
        done
        echo "Backup done at `date`" >> $LOGFILE
        chown ${USR}:users ${LOGFILE}
    fi
done

fusermount -u $DESTIN
fusermount -u $REMOTE


The script makes use of a second helper script that provides the password for encryption so that everything can be run automatically via cron.


#!/bin/sh
# extpass.encfs

echo "my-crypto-pass"


Summary results of the backup are written into a file called "backup.log" in the home folder of each user who had a "backup.lst" file.
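
Something along these lines in a crontab will then run it nightly; the script path here is just an example, adjust it to wherever you saved the script, and the account running it needs to be able to read the other users' home directories:


# run the encrypted backup every night at 2am
0 2 * * * /home/local-user-name/bin/encrypted-backup.sh
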

Sunday, 25 July 2010

Encrypted Backups with rsync and FUSE

Recently I set up a backup solution between my home server and a friend's. However, I decided that I really needed to keep my data as safe as possible when it's out of my direct control. Being the paranoid person I am, that meant encryption.

Introducing rsync.
rsync is a very handy command that works on its own as a capable backup solution. It's designed to copy only the minimum amount of data needed to represent changes to the files you wish to back up. Combined with ssh this allows a secure remote backup system that minimises bandwidth usage.
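
For example, a plain rsync-over-ssh backup (with no encryption at rest yet) is a one-liner; the paths and host names here are placeholders:


rsync -v -a --delete -e ssh /home/local_user/stuff_to_backup remote_user@remote_server:backup/


Only the changed portions of each file travel over the ssh link.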

Adding encryption.
Encryption by its very nature will try to obscure the data you're dealing with; a small change to part of a single file can result in every byte of that file being changed. If you're working with encrypted disk images, then this could mean that every part of the image is changed. This property is great for strong encryption but completely destroys rsync's ability to detect changes and minimise bandwidth. This can result in huge amounts of data being transmitted every time you back up even a minor change.


FUSE to the rescue.
FUSE ( Filesystem in USErspace ) is a fantastic project that makes it easy to implement new and interesting utility file systems, and it allows these file systems to be used by a regular user without the need for root level access. Two file systems built using FUSE are Sshfs and Encfs. Sshfs allows a remote file system to be mounted over an ssh link to the machine. Encfs is an encrypting file system; it mounts an encrypted source directory at some destination, and any files written into the destination directory are encrypted and stored in the source directory.

With these two components we have everything we need to use rsync with encryption effectively. First we use sshfs to mount the remote file system


sshfs remote_user@remote_server: /home/local_user/backup


then we use encfs to mount a folder within the remote file system to a second local folder.


encfs /home/local_user/backup/encrypted /home/local_user/clear


finally we tell rsync to back up files as if to a local folder and point it at our encfs mount point.


rsync -v -a --delete /home/local_user/stuff_to_backup /home/local_user/clear


and there we go. rsync will do its job and write only the minimum bytes needed to represent the changes, encfs will encrypt this, and finally sshfs will tunnel it all to the remote server. The exact bandwidth usage will depend on how encfs encrypts its files. After you're done, unmount sshfs and encfs like so


fusermount -u /path/to/mount_point


A better solution would be to mount the encfs folder on the server side and rsync to it directly over ssh, rather than going via sshfs. However, that would require FUSE and encfs to be installed on the target server.
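
A rough sketch of that alternative, assuming encfs really is installed on the remote server and using placeholder paths throughout:


# mount the encrypted view on the server itself (encfs prompts for its password over the tty)
ssh -t remote_user@remote_server 'encfs /home/remote_user/crypt /home/remote_user/clear'

# rsync straight over ssh into the server-side mount point
rsync -v -a --delete -e ssh /home/local_user/stuff_to_backup remote_user@remote_server:/home/remote_user/clear/

# unmount on the server when finished
ssh remote_user@remote_server 'fusermount -u /home/remote_user/clear'
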

Thursday, 22 April 2010

Linux Bluetooth Audio

Recently I've been playing with bluetooth and audio, thanks in part to finding a supply of very cheap dongles. Here is how I got my Linux computers to talk to a dog-tag style stereo bluetooth audio device; the same steps can be used with any bluetooth audio device, including mono hands-free kits.



I'm using bluez 4.63, the latest available in my distro's ( Arch Linux ) repositories. First we need to find our device, so put the audio device into pairing mode. Next we need to scan for our device.




# hcitool scan
Scanning ...
00:00:00:00:00:00 BTS-PHF41


Ok, once we have found the device we need to connect to it; for this we need the device MAC address returned from our scan.




# hciconfig hci0 up
# hcitool cc 00:00:00:00:00:00


At this point we will have an open connection to the device; however, we are not yet paired, so that's our next step. To do this I'm using one of the utility scripts provided by bluez. This requires Python.




# bluez-simple-agent hci0 00:00:00:00:00:00
RequestPinCode (/org/bluez/.... )
Enter PIN code: 0000
Release
New Device ( /org/bluez/ .... )


This script will ask for a PIN; enter the one matching your device, as given in the documentation that should come with it (0000 in the example above). Now we are all paired up and ready to send audio over to our device. To do this we will use ALSA and set up a bluetooth audio configuration. Open the file ~/.asoundrc in the home folder of your regular user account and add the following to it.




pcm.bluetooth {
    type bluetooth
}
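
Before going any further you can check that the new PCM works by pointing any ALSA-aware player at it, for example with aplay and a handy wav file (the file path here is just an example):


aplay -D bluetooth /usr/share/sounds/alsa/Front_Center.wav
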


That's that, we now have everything we need set up and ready for bluetooth audio. However, let's go one step further and configure a media player to use our new audio interface. I'm quite fond of MPD so I'll show a configuration for that. Edit ~/.mpdconf, scroll down to the audio output section and add the following.




audio_output {
    type "alsa"
    name "bluetooth output"
    device "bluetooth"
    format "44100:16:2"
    mixer_device "default"
    mixer_control "PCM"
    mixer_index "0"
}


Now fire up MPD and enjoy audio with no wires attached ;-) This could also be useful as part of a nice jukebox setup, possibly using PMix from an Android phone as a remote.... sounds like a future project there!

Sunday, 31 January 2010

Data Driven ContentProvider for Android

Recently I have been playing with Google's Android platform for mobile devices. The application I am working on required a data store, and thankfully Android provides access to the excellent SQLite database engine. Indeed, it even provides a selection of utility wrappers for working with databases without the need to craft any SQL statements, a handy feature for those with no prior knowledge of SQL, or no wish to use it directly. One such mechanism is the ContentProvider class. In Android, ContentProviders are used to provide shared access to various data stores that may exist on the device, or possibly to remote sources via a network link. Abstracting the specific data source behind a standardised interface results in a powerful tool for sharing or collecting data by an application.



As such a useful way of working with data is likely to crop up a lot, some kind of generic component seemed in order. Something that can be dropped into an application and configured to work in the way it needs with the minimum amount of hassle and time. Thankfully it seems this is entirely possible, at least for the most simple common case of database usage. That is, assuming that a single item will be described by a single row in a given table, such that each column holds one of that items properties. Quite a typical usage pattern and common enough to merit spending the time to write this over just hard coding access ( ah software engineers, how we love to spend 4 hours making a 3 hour job take only 2 hours ).



So after a weekend's playing with the code I came up with a GenericProvider class that can be configured entirely via XML data. This class makes two main assumptions about how the data will be presented. Firstly, it assumes one item per row, as described above. Secondly, it assumes that each element will have a unique primary key column called "_id"; this is recommended by the Android documentation so it's not really an issue.



How to use it: firstly, three entries need to be added to res/values/strings.xml:




<string name="db_name">MyShinyDB</string>
<string name="db_version">1</string>
<string name="authority">my.app.AppName.AppContent</string>


These strings give us the name of our database, its version number for upgrades, and finally the authority string for our provider (see the Android documentation for how this is used). Next, in the manifest we declare our provider class:




<provider android:name=".GenericProvider" android:authorities="@string/authority"></provider>


Unfortunately this won't work; I'm not sure why, but Eclipse flags it as a syntax error and forces me to explicitly type out the authority string rather than referencing it. So our actual provider declaration becomes:




<provider android:name=".GenericProvider" android:authorities="my.app.AppName.AppContent"></provider>


Note that the authority string is still referenced by the code so it needs to remain in strings.xml. Finally we create a new file called "database_schema.xml" under res/values/xml. This file describes how our database will look and gives the MIME type string for each item type.




<?xml version="1.0" encoding="utf-8"?>

<!-- a database schema defined in XML so I can change it easily -->
<database>
    <table name="record" >
        <column name="_id" type="INTEGER PRIMARY KEY AUTOINCREMENT"></column>
        <column name="date" type="DATETIME2"></column>
        <column name="name" type="VARCHAR"></column>
        <column name="tag1" type="VARCHAR"></column>
        <column name="tag2" type="VARCHAR"></column>
        <column name="tag3" type="VARCHAR"></column>
        <column name="value" type="VARCHAR"></column>
    </table>
    <table name="todo" >
        <column name="_id" type="INTEGER PRIMARY KEY AUTOINCREMENT"></column>
        <column name="date" type="DATETIME2"></column>
        <column name="due" type="DATETIME2"></column>
        <column name="name" type="VARCHAR"></column>
        <column name="tag1" type="VARCHAR"></column>
        <column name="tag2" type="VARCHAR"></column>
        <column name="tag3" type="VARCHAR"></column>
        <column name="value" type="VARCHAR"></column>
    </table>

    <mimes>
        <mime table="record" type="app.myapp.record" ></mime>
        <mime table="todo" type="app.myapp.todo" ></mime>
    </mimes>
</database>
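
Once the provider is in place, other parts of the application can talk to it through a ContentResolver in the usual way. The sketch below is only an illustration: it assumes tables are exposed as content://<authority>/<table>, so check the GenericProvider code for the exact URI layout it builds from this schema:


// inside an Activity (or anything else with a Context)
// assumes tables are exposed as content://<authority>/<table name>
Uri todoUri = Uri.parse("content://my.app.AppName.AppContent/todo");
Cursor cursor = getContentResolver().query(todoUri, null, null, null, "due ASC");
if (cursor != null) {
    while (cursor.moveToNext()) {
        String name = cursor.getString(cursor.getColumnIndex("name"));
        String due = cursor.getString(cursor.getColumnIndex("due"));
        // ... use the row ...
    }
    cursor.close();
}
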



That's that, a simple content provider can now be added to an application in minutes. The code for GenericProvider is available via the final link in this post. Please note this is early code and may be incomplete in places; as always, comments welcome.



http://code.google.com/p/android-bits

Friday, 17 October 2008

Python Profiler

Recently I had to profile some Python code, but unfortunately the cProfile module didn't seem to be returning valid results. For some reason it suggested a crazy amount of time was spent sleeping, and indeed it was, but not in all threads of the application. After a bit of thought I came up with the following code snippet, designed to time the particular functions we were interested in.


"""
The PyProf utility profiler
a pure python profiler for simple interigation of function times

Author: Tim Kelsey, tim.callidus@gmail.com
This code is in the public domain
"""

import time as Time

global callStack
callStack = []

global callStatsDict
callStatsDict = {}

class CallStats( object ):
""" object used to record per-callable stats """
def __init__( self ):
self.callCount = 0
self.acumTime = 0.0


def pushCallStack( info ):
""" push """
stats = None
if info not in callStatsDict:
stats = CallStats()
callStatsDict[ info ] = stats
else:
stats = callStatsDict[ info ]

stats.callCount += 1
callStack.append( ( stats, Time.time() ) )


def popCallStack( ):
""" pop """
info = callStack.pop()
info[0].acumTime += Time.time() - info[1]


def addProfiledCall( ns, funcName ):
"""
this replaces a callable in a give namespace with a wrapped version that
suports profiling
"""

def _PyProfFunc( *args, **kwds ):
""" wrapper function for insertion into the namespace """
pushCallStack( ns.__name__ + "." + funcName )
ret = ns.__dict__[ newFunc ]( *args, **kwds )
popCallStack()
return ret

newFunc = "__PyProf__" + funcName
ns.__dict__[ newFunc ] = ns.__dict__[ funcName ]
ns.__dict__[ funcName ] = _PyProfFunc


def getResults():
for name, val in callStatsDict.iteritems():
yield ( name, val.callCount, val.acumTime )
return



The power of Python's mutable namespaces and dynamic functions really shines through here. Maybe someone else will find this code handy, comments welcome ;-)
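
For completeness, a minimal usage sketch; it assumes the code above has been saved as pyprof.py, and the module and function names are just placeholders:


import pyprof
import mymodule

# wrap the functions we are interested in
pyprof.addProfiledCall( mymodule, "load_data" )
pyprof.addProfiledCall( mymodule, "process" )

# exercise the code under test
mymodule.run()

# print the collected stats
for name, calls, seconds in pyprof.getResults():
    print "%s called %d times, %.3f seconds total" % ( name, calls, seconds )
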

Wednesday, 3 September 2008

Castle Harmondale......

Oh well, first post.
Having just moved into a new flat last weekend I found myself reminded of an old computer game, Might and Magic VII: For Blood and Honor. Ah, they don't make 'em like they used to! Maybe that's a good thing, but I still have fond memories of many hours lost battling through its 2D denizens.

However, what I was particularly reminded of in this instance was Castle Harmondale, a run-down derelict castle filled with rubbish and filth. You "win" this questionable relic at the start of the game and behold a scene not so very different to the one encountered when first entering my flat (although regrettably, it's somewhat smaller and less medieval).

Still sometimes it pays to make a noise, and just as with the lowly Castle Harmondale the journey of upgrades and repairs begins in earnest.