Nov 30 2014


2015-01-22:  Released InstaTower 1.5.6 (Forge-1.7.10).  Structures are built using comparable blocks from the current dimension (overworld [default], nether, or end).  Enclosed exposed tunnels.  Eliminated lava and blocks above the structure in the nether to avoid lava rain.  Made recipes much more expensive for better balance.  Made recipes accept log, wool, and brick variants.


My children love to play Minecraft!  The endless variety is one of the big reasons they never get bored.  Just when the game begins to feel stale, they download a new mod or install a new texture pack, and it’s a brand new game!  It never gets old.

They always ask me to play with them, which I happily do when I can, but being an old gamer, I like to follow the “rules” and enhance my character’s inventory, armor, and weapons, which takes time.  But just about the time I build up a decent house with well-stocked chests, they move on to a new mod, which means a new profile and a new world.   😥   There goes all of my hard work …


So, I decided to make my own mod, InstaTower, which can be installed alongside other mods.  It provides fairly cheap recipes to quickly place a nice base, so we can get back to exploring and having fun!   🙂

The core InstaTower item contains plenty of beds and well-stocked chests (diamonds, arrows, steak, rails, etc.).  Plus, it has a beacon on top, so you can easily power up and find your way home.  The basement level has a tunnel, which descends to the lower levels of the world, where diamonds and other rare ores are more plentiful.  The tunnel contains stairs, torches, and a circuitous rail line, which explores the mining tunnel and lateral run, and then returns.  The crafting recipe is comparatively cheap, although it requires items from later in the game (glowstone, diamond block, emerald block, iron block, gold block, log, and stone brick), so it is not too easy to craft.

The tower will always be rotated, so that the player is looking into the entry door of the tower, and the block clicked to place the tower will always be the “red-carpet entrance” into the tower.

The InstaGarden provides a complete garden, enclosed in a fence with gates, and is well lit with glowstone to prevent hostile mobs from spawning inside.  It can be placed adjacent to the InstaTower to provide a safe, sustainable food supply for the adventurous explorer.  It also contains a few animals to start a farm or provide transportation.

When placed correctly, InstaTowers may be connected with an InstaWall item, which provides protected access between 2 towers as well as a connected rail line at the basement level.

Ultimately, an InstaCastle may be crafted using 4 InstaTowers, 4 InstaWalls, and an InstaGarden.  This item correctly places 4 InstaTowers connected by 4 InstaWalls to create a castle with 4 independent mining tunnels and a large, single connected rail line that connects all 4 towers and explores the mining tunnels and shafts created by all 4 tunnels.  The courtyard of the castle contains two InstaGardens and animals to completely satisfy dietary and transportation needs.

All structures adapt to their dimension.  In the overworld, stone bricks, glass, and wooden fences are used.  In the nether, nether bricks, iron bars, netherrack, and nether fences are used.  In the end dimension, obsidian is used.

For many people, this mod provides far too much structure and far too many items for too little cost.  To remedy this, many of the items and structures can be disabled server-side via an editable configuration file, instatower.cfg, which is initialized by the mod.  Default settings may be restored by simply deleting the file and allowing the mod to recreate it with default values.  Virtually all of the structures’ features (beacon, enchantment table, diamonds, food, anvil, rail lines, the entire basement level, etc.) except stones, lighting, and ladders can be disabled via this config file.  The crafting recipes for each item may also be disabled, making the items available only to users in Creative mode.

An optional item, the Divining Rod, is also included; it reports miscellaneous block data about any block right-clicked.  This item and the info it provides are handy for mod developers but little else.


  1. Download and install the recommended 1.7.10 version of Forge Mod Loader (1.7.10-Forge10.13.2.1230 or higher), which enables many mods.  (This is not the place to get support for Forge, if you are new to it.  Sorry.)
  2. Download InstaTower 1.5.6 (Forge-1.7.10) and install it (the JAR file inside the ZIP) into the mods directory of your Minecraft profile.  (I assume you know how to do this.)
  3. The mod will initialize a config file, instatower.cfg, upon first run, which can be edited to disable features and control tunnel depth and length.
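For reference, step 2 boils down to a single copy into the profile's mods directory.  A minimal shell sketch follows; the JAR file name and the default profile path (~/.minecraft) are assumptions, so adjust both for your own launcher setup:

```shell
# Minimal sketch of the install step.  The file name and profile path
# are assumptions; adjust for your own setup.
touch instatower-1.5.6.jar          # stand-in for the JAR extracted from the ZIP
MODS_DIR="${HOME}/.minecraft/mods"
mkdir -p "${MODS_DIR}"
cp instatower-1.5.6.jar "${MODS_DIR}/"
```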


  • 1.5.6 (2015-01-22) – Structures are built using comparable blocks from the current dimension (overworld [default], nether, or end).  Enclosed exposed tunnels.  Eliminated lava and blocks above the structure in the nether to avoid lava rain.  Made recipes much more expensive for better balance.  Made recipes accept log, wool, and brick variants.
  • 1.5.4 (2015-01-07) – Added control for wall length in castle.  Cleaned up comments in config file.
  • 1.5.3 (2015-01-06) – Added many more switches in the config file, so that most of the features (including beacon, library, anvil, brewing stand, stocked chests, rail lines, basement level, etc.) can now be disabled via the config file.
  • 1.5.2 (2015-01-05) – Separated InstaTower into the new InstaGarden and an updated version of InstaTower.  Created new item, InstaWall, which creates a long wall with a basement.  Created new item, InstaCastle, which places 4 InstaTowers, connected by InstaWalls, with an InstaGarden in the middle courtyard.  Added config-file control of tunnel presence, depth, length, and whether valuable ores are cleared or not.
  • 1.2.0 (2014-12-31) – Added basement with descending tunnel and long lateral runs at the bottom of the tunnel, complete with rails, torches, and carts.
  • 1.1.0 (2014-12-28) – Added rotation of structure based on player’s facing direction.  (When placed, the player will always be looking into the entry door of the tower, and the structure’s location will be offset such that the red carpet will lie upon the block clicked to place the tower.)  Fixed all glitches associated with chests, doors, and crops.  Added new item, Divining Rod, which reports block id, name, coordinates, and metadata to the log, which is useful for mod developers.
  • 1.0.2 (2014-12-23) – Moved config file inside JAR file.  Deprecated external config file that specified tower layout.
  • 1.0.1 (2014-12-01) – Initial release.



Available on GitHub:


Have fun!

Oct 07 2013



Compiling Python on one type of system to run on another, known as “cross-compiling”, is a notoriously difficult task, because the Python build process is very complicated, and it was not designed with cross-compilation in mind.  However, a simple recipe had been devised for versions up to Python-2.7.3:

The gist of this process is summarized in 2 steps:

  1. Compile python and Parser/pgen to run on the build system, because they are used later in the cross-compilation process.
  2. Compile everything else, including python and Parser/pgen again, using the cross-compiler tool-chain.

Thanks to recent work (Issue 17086), Python’s cross-compilation process has been enhanced to use and accept an external python interpreter, which is used during the build process.  This simplifies some of the patching provided by Paul’s blog post; however, it also changes the underlying code dramatically, which may explain why there have not been any new posts documenting the updated process with new patches since Python-2.7.3.

An Updated Patch and Build Process for Python-2.7.5

Using this patch and a suitable cross-compilation tool-chain (gcc, c++, ar, ranlib, etc.), the following process can be used to build Python-2.7.5 on one Linux system to run on another Linux system:

Prepare Sources

$ export RFS=/path_to_embedded_root_file_system
$ wget
$ tar -jxf Python-2.7.5.tar.bz2
$ cp <wherever_you_saved_it>/Python-2.7.5-xcompile.patch Python-2.7.5/
$ cd Python-2.7.5

Most embedded projects will have a directory on the build system, which contains all of the directories and files to be placed on the root file system of the embedded device. In this script, the full path to this directory is defined as RFS. The remaining steps retrieve the source code, unpack it, copy the above patch into place, and change into the build directory.

Step #1 – Compile Programs Used by Build System During Build

The first major step compiles 2 binaries to run on the build system: the python interpreter and Parser/pgen:

$ ./configure  # for build-system's native tool-chain
$ make python Parser/pgen
$ mv python python_for_build
$ mv Parser/pgen Parser/pgen_for_build

After building these 2 binaries, they are moved aside for later use during the full cross-compilation and installation process.

Patch Build Files and Prepare for Cross-Compilation

In preparation for the next step, all of the compiled files – except for the 2 moved aside – are deleted, the build files are re-configured using the cross-compilation toolchain, and the full Python suite is built including modules, such as ssl, zlib, and ctypes:

$ export PATH="/opt/freescale/usr/local/gcc-4.3.74-eglibc-2.8.74-dp-2/powerpc-none-linux-gnuspe/bin:${PATH}"
$ make distclean
$ ./configure --host=powerpc-none-linux-gnuspe --build=i586-linux-gnu --prefix=/ \
    --disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no \
    ac_cv_have_long_long_format=yes

I have included the full path to my current tool-chain as an example. (I have used this process on both x86 and x86_64 build systems to produce a Python installation for a Freescale PowerPC embedded Linux system.) Please notice how the tool-chain path corresponds to the “host” entry in the cross-compilation configuration step. I had to specify a few extra switches (disable-ipv6, ac_cv_file__dev_ptmx, ac_cv_file__dev_ptc, and ac_cv_have_long_long_format) to help the configure script resolve some tests that require execution on the host. These will vary depending upon your embedded host system’s architecture.
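As an aside, those ac_cv_* answers can also be preset in a config.site file, which autoconf-generated configure scripts read whenever the CONFIG_SITE environment variable is set.  This is standard autoconf behavior rather than part of the original recipe, but it keeps the configure command line shorter:

```shell
# Preset the autoconf cache answers in a config.site file instead of
# passing them on the configure command line (standard autoconf feature).
cat > config.site <<'EOF'
ac_cv_file__dev_ptmx=no
ac_cv_file__dev_ptc=no
ac_cv_have_long_long_format=yes
EOF
export CONFIG_SITE="${PWD}/config.site"
```

With CONFIG_SITE exported, the subsequent ./configure invocation picks up the presets automatically.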

For reference, here is the directory listing of the above cross-compilation tool-chain:

$ ls -la /opt/freescale/usr/local/gcc-4.3.74-eglibc-2.8.74-dp-2/powerpc-none-linux-gnuspe/bin
-rwxrwxrwx 1 root root  590485 Mar  5  2012 powerpc-none-linux-gnuspe-addr2line
-rwxrwxrwx 1 root root  614647 Mar  5  2012 powerpc-none-linux-gnuspe-ar
-rwxrwxrwx 1 root root  897161 Mar  5  2012 powerpc-none-linux-gnuspe-as
-rwxrwxrwx 1 root root  235382 Mar  5  2012 powerpc-none-linux-gnuspe-c++
-rwxrwxrwx 1 root root  589227 Mar  5  2012 powerpc-none-linux-gnuspe-c++filt
-rwxrwxrwx 1 root root  234277 Mar  5  2012 powerpc-none-linux-gnuspe-cpp
-rwxrwxrwx 1 root root    8503 Mar  5  2012 powerpc-none-linux-gnuspe-embedspu
-rwxrwxrwx 1 root root  235382 Mar  5  2012 powerpc-none-linux-gnuspe-g++
-rwxrwxrwx 1 root root  233126 Mar  5  2012 powerpc-none-linux-gnuspe-gcc
-rwxrwxrwx 1 root root  233126 Mar  5  2012 powerpc-none-linux-gnuspe-gcc-4.3.2
-rwxrwxrwx 1 root root   16512 Mar  5  2012 powerpc-none-linux-gnuspe-gccbug
-rwxrwxrwx 1 root root   28017 Mar  5  2012 powerpc-none-linux-gnuspe-gcov
-rwxrwxrwx 1 root root  655127 Mar  5  2012 powerpc-none-linux-gnuspe-gprof
-rwxrwxrwx 1 root root 1036372 Mar  5  2012 powerpc-none-linux-gnuspe-ld
-rwxrwxrwx 1 root root  603678 Mar  5  2012 powerpc-none-linux-gnuspe-nm
-rwxrwxrwx 1 root root  750617 Mar  5  2012 powerpc-none-linux-gnuspe-objcopy
-rwxrwxrwx 1 root root  895336 Mar  5  2012 powerpc-none-linux-gnuspe-objdump
-rwxrwxrwx 1 root root  614647 Mar  5  2012 powerpc-none-linux-gnuspe-ranlib
-rwxrwxrwx 1 root root  264063 Mar  5  2012 powerpc-none-linux-gnuspe-readelf
-rwxrwxrwx 1 root root  593901 Mar  5  2012 powerpc-none-linux-gnuspe-size
-rwxrwxrwx 1 root root  591853 Mar  5  2012 powerpc-none-linux-gnuspe-strings
-rwxrwxrwx 1 root root  750617 Mar  5  2012 powerpc-none-linux-gnuspe-strip

As you can see, each of the binaries is prefixed with “powerpc-none-linux-gnuspe-”, which corresponds directly with the “host” variable in the configuration step.  (Please ignore the fact that these binaries are world-writable!  This is not good, but this is a secure build system, and it is an artifact of some unrelated build-system workarounds.  Your binaries should only be writable by their owner.)

Step #2 – Cross-Compilation

Having re-configured the sources for cross-compilation, the next major step is to actually cross-compile the full Python suite, including modules:

$ make --jobs=8 \
    CFLAGS="-g0 -Os -s -I${RFS}/usr/include -fdata-sections -ffunction-sections" \
    LDFLAGS='-L${RFS}/usr/lib -L${RFS}/lib'

The linked patch hard-codes additional search paths, so that the compiler and linker can find header and library dependencies, like so:

diff --git a/packages/Python-2.7.5/ b/packages/Python-2.7.5/
index 716f08e..ca8b141 100644
--- a/packages/Python-2.7.5/
+++ b/packages/Python-2.7.5/

@@ -552,6 +556,11 @@ class PyBuildExt(build_ext):
         if host_platform in ['darwin', 'beos']:
             math_libs = []

+        # Insert libraries and headers from embedded root file system (RFS)
+        if 'RFS' in os.environ:
+            lib_dirs += [os.environ['RFS'] + '/usr/lib']
+            inc_dirs += [os.environ['RFS'] + '/usr/include']
         # XXX Omitted modules: gl, pure, dl, SGI-specific modules


However, they are also embedded in the above CFLAGS and LDFLAGS for good measure. 🙂 Please notice that the patch depends on the RFS environment variable serving as a flag to trigger this behavior.

Optional:  Reduce Binary Size

Additional compilation flags were also set in the above step, which help the cross-compilation strip tool reduce the binary size, like so:

powerpc-none-linux-gnuspe-strip --strip-unneeded python

This is entirely optional. Removing this step and the data-sections and function-sections compiler flags may help simplify the process when troubleshooting.

Install into Root File System for Embedded Target

Lastly, the suite can be installed into the embedded device’s root file system, like so:

$ sudo make install DESTDIR=${RFS} PATH="${PATH}"

Often, but not always, embedded root-file-system directories on the build system are owned by root; consequently, sudo is required to modify any files in the RFS directory. Since sudo also disregards the PATH environment variable, it must be passed explicitly on the command line.
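This environment-scrubbing behavior is easy to demonstrate without sudo.  Here, env -i clears the environment much like sudo's default secure_path handling effectively does, so the child process sees only what is passed explicitly:

```shell
# env -i starts the child with an empty environment; only the PATH we
# pass explicitly survives, which is why the install step above forwards
# PATH="${PATH}" on the make command line.
env -i PATH="/usr/bin:/bin" sh -c 'echo "$PATH"'    # prints: /usr/bin:/bin
```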


Although patches and similar processes have been provided by others for Python-2.7.3 and below, I have not been able to find any recent updates or tutorials for 2.7.4 or 2.7.5 on the internet. Most likely, this is because of the significant cross-compilation changes and the daunting complexity of the Python build process. The above process and linked patch enable cross-compilation for Python-2.7.5 and, hopefully, future versions.  Hopefully, this patch set will eventually disappear as more cross-compilation capability is integrated into Python’s own build process.

Although I don’t have time to support this patch, if you have any suggestions for improving it, or if you find any bugs, please let me know in the comments below.

A more elaborate build script with error checking and comments is available here for download.

Feb 22 2011


Sometimes, you may only be able to connect to a Microsoft SQL (MS-SQL, or MSSQL) server through its SQL port, so you cannot use RDP to access the Enterprise Manager or other graphical tools on the host.  Other times, you may simply want to leverage the power of a Linux box.  😉  Regardless of the reason, if you want to connect to a MS-SQL server from a Linux box, read on…  As in previous posts, any installation or configuration instructions pertain to Gentoo.  Please adapt as necessary. … Also, these instructions were tested on MS-SQL Server 2000, so some instructions may need to be adapted depending on your version of MS-SQL.


FreeTDS offers an open-source command-line client, tsql.  This is comparable to using Microsoft’s OSQL command-line interface, although the arguments to launch the client are different.  Although it has various options, you typically launch tsql like so:

tsql -S <sql_server_name> -U <user_name> [-P <password>]

If you are comfortable with OSQL, you will have no problem using this basic SQL CLI client.


Packages are available for Perl, PHP, Python, and several other scripting languages, which provide an extensive, programmatic interface to the remote MS-SQL server.


Feb 22 2011


Suppose you have access to a remote MS-SQL database, and you need to analyze it.  Why?  Let’s pretend that you intend to migrate the contents to another server or database, like MySQL.  😉  Anyway, how do you discover various essentials about the database? … These instructions are for Microsoft SQL Server 2000.  The following instructions and results will probably vary wildly for other versions of MS-SQL.

Connect with TSQL

First, let’s connect to the remote MS-SQL using the tsql command line client:

tsql -S <sql_server_name> -U <user_name>

You may have another favorite method to gain command line access to the SQL database, but this is my current favorite.  🙂

Listing All The Tables in the Database

In our open SQL CLI, one site suggested doing something like:

EXEC sp_tables

Unfortunately, this does not list all the tables for some unknown reason.  I could see several more tables listed in MS SQL Enterprise Manager.  Most of my “user” tables were not reported by the above method, although some were reported correctly.

Another site suggested listing special system-table contents, which I think is actually specific to Oracle and newer versions of MS-SQL.  Maybe it will work for you?

SELECT * FROM sys.tables

The same site also suggested the following, but it also returned partial results for some reason unknown to me.


Others recommend using “sp_help” with no arguments, like so:

EXEC sp_help

However, I found that it returned far too much information, much more than just the list of all tables, although that did seem to be included…

Ultimately, I used this:

SELECT name, xtype FROM sysobjects WHERE xtype = 'S' OR xtype = 'U'

This returned a filtered list of system objects that had a type of either “system” or “user” table.  Very nice! 😀

Listing Table Structure

The structure of a particular table can be listed, like so:

sp_help <table_name>

Listing All Columns

Try this:

SELECT name, object_name(id) FROM syscolumns

Determining DB Size

Try this:

EXEC sp_spaceused [tablename]
Feb 22 2011


Imagine you have an MS-SQL table that has LOTS of columns.  Some of the column names are known, but others are dynamically generated, and their names are not known at run time.  Now imagine that you suspect most of those unknown columns are empty.  How do you determine if any rows contain values in the columns, whose names are unknown?

Incremental Solutions

You could always just show all the values for every column of every row:

SELECT * FROM myTable

However, if your table is large, this may return too much data.  So, you could look for unique values, like so:

SELECT DISTINCT * FROM myTable

This works better.  However, if the known columns contain lots of unique data, the above T-SQL command may still return too much data.  Let’s try to focus on returning the unique values of just the unknown columns!

Final Solution

In our problem, all of the unknown columns begin with a known prefix, “UDA_”.  So, we need to compile the column names into a comma-separated list, suitable for a second SELECT statement.  We can do this like so:

DECLARE @myColumnNames VARCHAR(8000)
SELECT @myColumnNames=COALESCE(@myColumnNames + ',', '') + COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'myTable' AND COLUMN_NAME LIKE 'UDA[_]%'
SELECT @myColumnNames

These statements declare a local variable, @myColumnNames, and then COALESCE, or compile, all the column names from myTable that begin with the prefix “UDA_” into a comma-separated list.  The last statement prints the variable value for debugging purposes.

Finally, a simple SELECT statement can be used to return the distinct values of these columns from our table.  The only problem is that you cannot substitute a variable directly into a T-SQL statement, so you have to build the necessary statement string and execute it, like so:

EXEC('SELECT DISTINCT ' + @myColumnNames + ' FROM myTable')

Putting It All Together

DECLARE @myColumnNames VARCHAR(8000)
SELECT @myColumnNames=COALESCE(@myColumnNames + ',', '') + COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'myTable' AND COLUMN_NAME LIKE 'UDA[_]%'
EXEC('SELECT DISTINCT ' + @myColumnNames + ' FROM myTable')

If the above statements yield a single row with nothing but NULLs, then you know those columns are all empty!  Anything else indicates that at least one of the table rows, for at least one of your columns of unknown name, contains a value!

Jun 17 2009


I recently switched from using Linux’s KVM to using Sun’s VirtualBox for virtualizing a Windows XP guest on a Gentoo Linux host, and I have been quite pleased. The only feature that I don’t have working is clipboard-sharing, but that’s a problem for another day. Previously, I had major problems with sound using my EchoAudio Mia card. Ultimately, I used ALSA’s dmix plug-in to mix audio streams in software, before feeding it to the card. That worked great! However, VirtualBox was complaining about the new setup, and start-up was defaulting the audio component to “null”. In other words, my VM guest had no audio support!

Two Problems, Two Solutions

The primary problem was that I had enabled the “esd” USE flag for my Gentoo system, which enables Enlightenment’s Enlightened Sound Daemon (ESD, or ESounD). Apparently, the latest version of SDL (libsdl) does not function properly in the ALSA environment, if the esd USE flag is set. Well, VirtualBox depends on SDL for audio support; consequently, my VM guest could not produce audio. The solution was to add the “-esd” USE flag to /etc/make.conf and rebuild the dependent packages (emerge -uDN world).

The second problem stems from my use of ALSA’s dmix plug-in. Although this works fine for playback, the dmix plug-in cannot be used for capture. Therefore, an environment variable must be set to specify an alternative capture source, like so:

# Unset any output customizations, use default
$ unset VBOX_ALSA_DAC_DEV
# Specify capture (analog-to-digital converter) device - can also be added to ~/.bashrc
$ export VBOX_ALSA_ADC_DEV="hw:0,0"
# Launch my Windows guest VM
$ VBoxManage startvm "Windows XP SP2"
# wait for it...
$ sleep 3
# check log for results:
$ grep -Pi '(alsa|audio)' VBox.log
VirtualBox Command Line Management Interface Version 2.2.4
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
Waiting for the remote session to open...
Remote session has been successfully opened.
00:00:00.808 [/Devices/AudioSniffer/] (level 2)
00:00:00.808 [/Devices/AudioSniffer/0/] (level 3)
00:00:00.808 [/Devices/AudioSniffer/0/Config/] (level 4)
00:00:00.808 [/Devices/AudioSniffer/0/LUN#0/] (level 4)
00:00:00.808   Driver <string>  = "MainAudioSniffer" (cch=17)
00:00:00.808 [/Devices/AudioSniffer/0/LUN#0/Config/] (level 5)
00:00:00.808   Driver <string>  = "AUDIO" (cch=6)
00:00:00.808   AudioDriver <string>  = "alsa" (cch=5)
00:00:01.090 Audio: Trying driver 'alsa'.
00:00:01.093 Audio: set_record_source ars=0 als=0 (not implemented)
00:00:01.095 ALSA: ADC frequency 44100Hz, period size 1024, buffer size 4096
00:00:01.099 ALSA: DAC frequency 44100Hz, period size 940, buffer size 3763

Now I could hear audio output from my VM guest, simultaneously while listening to audio from host applications, and I could also capture audio!

Incidentally, I extended my default ~/.asoundrc file a little, although it did not seem to make a difference:

pcm.!default {
    type plug
    slave.pcm "dmix"
}
pcm.dsp0 {
    type plug
    slave.pcm "dmix"
}
ctl.mixer0 {
    type hw
    card 0
}


Jun 16 2009


Previously, I blogged about using ALSA with EchoAudio’s Mia 96-kHz / 24-bit prosumer audio card. However, I have since experienced one nagging problem: only one application can use the sound card at a given time! It is impossible for two or more applications to share the sound card. For a second application to use the Mia, the first app must be closed before the second is opened; otherwise, the first one will maintain a lock, blocking the second.

Many newer sound cards include integrated mixers, which allow multiple applications to produce sound simultaneously. Older sound cards depended upon software to perform the mixing. ALSA includes the “dmix” plug-in, which performs this very task.

ALSA DMIX Configuration

Typically, dmix is used automatically by ALSA, whenever it detects a sound card without built-in mixing. Unfortunately, the ALSA driver for the Mia reports the card as having mixing capability, which it does not. Therefore, we must manually direct ALSA to use the dmix plug-in by default. ALSA is generally controlled through the “asoundrc” file, which can exist at the system level (/etc/asound.conf) or the user level (~/.asoundrc):


Several tutorials and HOWTOs exist for setting up ALSA’s dmix plug-in; however, these proved overly complicated, as I simply use:

$ cat /home/my_user/.asoundrc
pcm.!default {
    type plug
    slave.pcm "dmix"
}

This directs all applications to use the dmix plug-in by default. Now I can simultaneously play sound from multiple applications, without having to open and close each application in sequence!

System ALSA Configuration

For completeness, my system’s ALSA configuration appears to set up dmix by default; however, it has never worked for me. I am including it here, just for reference:

$ cat /etc/asound.conf
pcm.swmix {
    type dmix
    # any unique number here
    ipc_key 313
    slave {
        pcm "hw:0,0"
        # these settings may require tweaking for different sound
        # cards; this is for the Powerbook's built-in snd-powermac
        # probably not required at all for well-behaved cards...
        period_time 0
        period_size 1024
        buffer_size 8192
        # mentioning rate fixes wrong speed/pitch in native ALSA stuff
        rate 44100
    }
}

# this makes OSS emulation via aoss default to using dmix, allegedly
pcm.dsp0 {
    type plug
    slave.pcm "swmix"
}

ctl.mixer0 {
    type hw
    card 0
}

# this makes native ALSA apps default to using dmix
pcm.!default {
    type plug
    slave.pcm "swmix"
}

Conversation with Giuliano Pochini

Giuliano Pochini is the author of the ALSA driver for the EchoAudio Mia. Recently, I posed my problem to him, as follows:

I am using your ALSA driver for the EchoAudio Mia PCI card. You have done a fantastic job! I really appreciate it.

One question: I am only able to use one application at a time to produce sound. If I want to produce sound in another application, I must close the first application, and then open the second.

Is it possible to have more than one application simultaneously produce sound through the Mia?

Should I use ALSA’s dmix plug-in, or is there a better solution?

To which, he kindly replied:

That card has 8 voices and 4 outputs. The vmixer controls how the voices are sent to the outputs. If your application uses default: or hw:x.0.0 or plughw:x.0.0 then it can use a single 1 to 8 – channels output. You have to use dmix to make alsa-lib mix the sound coming from differents apps. Otherwise you can manually configure your apps to use plughw:0.0.0, plughw:0.0.2, plughw:0.0.4 and plughw:0.0.6 and use the vmixer to route them to the ouptus as you prefer.

There is another way which requires some non trivial changes to the driver: make it automatically select the first free stereo pair when an application opens a substream. Drawbacks are that apps wouldn’t be able to open non stereo substreams anymore and that changing the volume would become problematic because you couldn’t know what channels have been assigned to each app. And, of course, there are only 4 stereo pairs available, all with the same sample rate, so dmix and resample are likely to be necessary anyway.


Apr 14 2009


Some time ago, I blogged about missing or broken icons in Gnome’s Evolution, when used in the KDE desktop environment running on a Gentoo box.  Since then, I have migrated to KDE 4.2 and have once again experienced the same problem.  I tried each of the icon themes installed on my system, but none of them was complete.  Each one was missing icons – some more, others less.  Incidentally, the new “oxygen” theme seemed the most complete.

I submitted a bug report to Gentoo’s bug tracking site and received the following answer.


Here’s the workaround provided by Timo Gurr:

$ rm ~/.gtkrc-2.0
# emerge gtk-engines
# cp /usr/share/themes/Clearlooks/gtk-2.0/gtkrc /etc/gtk-2.0/gtkrc
# DIY:
     Edit /etc/gtk-2.0/gtkrc
     Add: 'gtk-fallback-icon-theme = "gnome"' as the first line.

Then restart Evolution.

This worked great for me! Hopefully, Gentoo will update portage to include this procedure for the evolution, gtk, or gtk-engines packages in the future.

Apr 14 2009


For some time, I have casually noticed that Gnome’s Evolution was not spell-checking my email messages.  Most other email clients, text editors, and even modern web-browsers, at least check your spelling as you type, underlining misspelled words with red wavy lines.  Evolution has this capability, but it was not enabled on my system.  Here is how I enabled spell-check on my Gentoo system for Evolution.


Gnome’s Evolution is dependent upon external spell-checkers to perform this function.  Popular open-source checkers include ispell, aspell, and hunspell.  Aspell is newer than ispell and was designed to replace it, at least according to aspell’s web-site.  Hunspell is an up-and-coming spell-checker, based on MySpell and backwards-compatible with its dictionaries.  Apparently, Hunspell is used in many popular projects, such as OpenOffice, Mozilla Firefox, Mozilla Thunderbird, Opera, and Google Chrome.

All of the above spell-checkers are designed to be language-independent; therefore, you must also install a language-specific “library” or “dictionary” to complete the installation.

Evolution appears to use aspell.  However, I installed all 3 spell-checkers and English dictionaries for each checker, plus a few other utilities, like so:

# emerge -pvt app-text/aspell app-text/gnome-spell app-text/ispell \
app-text/spellutils app-vim/vim-spell-en dev-perl/Text-Aspell dev-perl/gtk2-spell \
dev-python/gtkspell-python app-dicts/aspell-en app-dicts/myspell-en

Check to make sure you are satisfied with the proposed installations, and then commit:

# emerge app-text/aspell app-text/gnome-spell app-text/ispell \
app-text/spellutils app-vim/vim-spell-en dev-perl/Text-Aspell dev-perl/gtk2-spell \
dev-python/gtkspell-python app-dicts/aspell-en app-dicts/myspell-en

After emerging the above packages and restarting Evolution, I was able to select dictionaries and enable the “Check as you type” feature by navigating here:

Evolution -> Edit -> Preferences -> Composer Preferences -> Spell Checking

Now I had nice red wavy lines everywhere! 🙂

BTW, I did encounter a problem where some of my dictionaries or checkers became corrupted, so I had to re-install (re-emerge) all of the above packages to fix the problem, even though they had been installed previously.

Feb 12 2009


There are just a few applications that keep me tied to Microsoft Windows.  For work, there is primarily one, and it’s a big one:  Microsoft Office, especially Outlook.  We have previously looked at a few solutions that mitigate this problem by running Office on top of either Wine (a Windows compatibility layer) or KVM (a virtual machine with a Windows installation).  However, I wanted something more complete.  I wanted a native client that could access my corporate email and calendar on a Microsoft Exchange server.  As far as I know, at this point in time, there is only one client that claims such functionality – Gnome’s Evolution.

Installing Evolution with the Microsoft Exchange Connector plug-in is not so bad.  Under the hood, it’s a bit of a kludge, accessing your email through Exchange’s HTTP interface, but hey, it works!  🙂  The big question is, “How do I import all my Outlook data, archived in local PST files?”

Unfortunately, PST is a proprietary Microsoft format (I know, you are shocked), and the only way to access it is through a proprietary Microsoft library (mapi32.dll or mapi.dll) that only exists on a Microsoft Windows installation.  (I know, again, you are flabbergasted.)

There are 3 types of solutions that I found to migrate and liberate your email.  None of them are trivial.  Yuck!  😥  All of them are potentially time-consuming, especially if you have lots of data, but here are a few pointers.  🙂

Solution #1:  Migrate Through Mozilla Thunderbird on Windows

Assuming you still have access to your Windows machine that is running Outlook, you can install Mozilla Thunderbird on the same machine.  The advantage being that Thunderbird can use the proprietary library to import your Outlook email!

The import process is fairly simple.  Roughly, it is:

  1. Install Thunderbird, and use its import wizard to suck in your Microsoft Outlook email and contacts.
  2. Close Thunderbird.  Find where it stored the email, usually in a place like:
    C:\Documents And Settings\<USERNAME OR DEFAULT>\Application Data\Mozilla\Profiles\default\XXXX\Mail\imported.mail
    where, XXXX is some random collection of letters and numbers.
  3. In Evolution, import each email folder (named after the email folder, saved as an extensionless file by Thunderbird in the MBOX format) — one at a time!  Yikes!

As you can see, there are a few problems here:

  1. You have to process each folder one at a time!  Very time consuming, tedious, and boring…
  2. Nested folders are problematic, especially folders that contain both messages and sub-folders.  Thunderbird’s storage format (MBOX) is standard, but old.  Support for nested folders was “hacked” on later; it was never part of the original design.  So, things can get messy if you have lots of folders, especially nested folders holding both messages and sub-folders.
  3. Furthermore, this does not really address your calendar.
  4. Some people have reported that Thunderbird drops some of their email!

I gave up on this method somewhere between problem #1 and #2.  … This could still work for you, and it does for many people, provided you either don’t have many folders or you don’t mind lots of pointy-clicky.  😉  If you go this route, make sure you count the number of messages in each folder inside Outlook and verify those counts once you import your email into Evolution.  You don’t want to lose any email.
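If you want to automate those folder counts on the Linux side, Python’s standard mailbox module can read Thunderbird’s MBOX exports directly.  Here’s a quick sketch (the file path is just an example — point it at your own exported folders):

```python
import mailbox

def count_mbox_messages(path):
    """Count the messages in an MBOX file (e.g. a Thunderbird export)."""
    return sum(1 for _ in mailbox.mbox(path))

# Example usage -- compare against the counts you noted inside Outlook:
#   print(count_mbox_messages("imported.mail/Inbox"))
```
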

Solution #2:  Migrate Through Outport on Windows

This solution is very similar to the above solution, except it uses an open source tool, Outport, to migrate your email on a Windows machine.  Consequently, I believe it has the same limitations.  It may be able to migrate calendar, but I still don’t like the idea of having to import my email into Evolution one MBOX file at a time.

Plus, Outport seems to have not been updated since 2003.  That’s worrisome, but maybe it’s no big deal…  Anyway, I wanted something easier.

Solution #3:  Migrate via an IMAP server

You may be thinking, “What?  This is easier?”   Well, it was to me.  🙂

An IMAP email server not only supports providing messages, but it also supports uploading them.  In most email clients, you can add an IMAP based email account and then simply “drag-and-drop” your email or folders from your local computer to the IMAP server, or vice-versa.

The basic idea of this method is roughly this:

  1. Set up or procure access to an IMAP server. (You have friends, right?  Gmail provides IMAP support too.  ;-))  You will need to know the machine name, possibly the port number, account id, password, and authentication method (plain text, SSL, TLS, etc.).
  2. Add this server as an IMAP account to Outlook.
  3. Drag and drop all your email to the IMAP account.  (You may not be able to put folders inside the “Inbox”, but you can add them to the “root” of the account.)  I know this is still “pointy-clicky”, but it’s much less than the other methods.
  4. Close Outlook.
  5. Open Evolution.
  6. Add the same IMAP server as an account in Evolution.
  7. Drag and drop all your email from the IMAP account to your local store on Evolution.  Done!  Tada!!!  🙂

If you are worried about setting up an IMAP server, I have some basic instructions listed below.
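If even that much drag-and-drop sounds tedious, the upload half can be scripted with Python’s standard imaplib and mailbox modules.  This is only a sketch under assumptions — the host name, credentials, and folder names below are placeholders, not anything from my setup:

```python
import imaplib
import mailbox
import time

def upload_mbox(imap, mbox_path, folder):
    """Append every message in an MBOX file to an IMAP folder."""
    imap.create(folder)  # harmless if the folder already exists
    for msg in mailbox.mbox(mbox_path):
        # flags=None, and stamp each message with the current time
        imap.append(folder, None, imaplib.Time2Internaldate(time.time()),
                    msg.as_bytes())

# To run it for real, against your own server (placeholder values!):
#   conn = imaplib.IMAP4("localhost")      # or imaplib.IMAP4_SSL for port 993
#   conn.login("me", "secret")
#   upload_mbox(conn, "imported.mail/Inbox", "Migrated")
#   conn.logout()
```

One quirk to watch for: some servers reject sub-folders under “Inbox” (just as in the drag-and-drop case), so append to a top-level folder name instead.
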

Migrating Contacts

You can export your contacts in Outlook to CSV files or other formats, which you can read into Evolution.  Thunderbird can also read them and export them into other file formats, if you wish.  No biggie here.
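If you would rather hand Evolution vCards instead of raw CSV, a few lines of Python can do the conversion.  A minimal sketch — note that the column names below (“First Name”, “Last Name”, “E-mail Address”) are assumptions, since Outlook’s CSV headers vary by version and locale, so check the header row of your own export:

```python
import csv

def csv_to_vcards(csv_file):
    """Convert an Outlook-style CSV contact export into vCard 3.0 text."""
    cards = []
    for row in csv.DictReader(csv_file):
        first = row.get("First Name", "")
        last = row.get("Last Name", "")
        email = row.get("E-mail Address", "")
        cards.append("\n".join([
            "BEGIN:VCARD",
            "VERSION:3.0",
            f"N:{last};{first};;;",
            f"FN:{first} {last}".strip(),
            f"EMAIL:{email}",
            "END:VCARD",
        ]))
    return "\n".join(cards)

# Example usage:
#   with open("contacts.csv") as src, open("contacts.vcf", "w") as dst:
#       dst.write(csv_to_vcards(src))
```
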

Migrating Calendar Entries

This can be more problematic.  If you are like me, you may not have many entries; I simply moved my calendar entries back up to the Exchange server, using it as a temporary staging area.

Others have suggested exporting the calendar entries up to Google Calendar and then pulling them back down into Evolution in iCal format.  Sounds good, but I can’t offer any firsthand experience with this method.

Setting Up A Temporary IMAP Server

I did not want to transfer all my email via Gmail’s IMAP server for lots of reasons (bandwidth, privacy, possible sub-folder complications, etc.).  So, I decided to set up my own local IMAP server on my Gentoo Linux box.  Here are a few quick pointers, if you decide to do the same.

Install dovecot’s IMAP server:

# emerge dovecot

Verify the configuration is set up to use the “maildir” format, not MBOX.  The “maildir” format was created by the qmail folks, and it has several advantages over the older MBOX format, including native sub-folder support.  Here are the important configuration changes that I had to make in /etc/dovecot/dovecot.conf. You may want to verify or research these further:

# Config file = /etc/dovecot/dovecot.conf
# This is required to enable network access - otherwise, it may be restricted to localhost only
listen = *, [::]
# This enables plain text passwords.
# This is not good on a public network, but works fine for private LAN's.
# And, it simplifies the login process in Outlook and Evolution.
disable_plaintext_auth = no
# This sets the mailbox format to 'maildir', and it stores the mail
# in your user's homespace on the IMAP server, avoiding
# various file permission problems on the remote box.
mail_location = maildir:~/.maildir

Then you fire up the dovecot server:

# /etc/init.d/dovecot start

And, away you go!


It’s not a pretty process, but if you bang your head on it long enough, you’ll get through it!  🙂  Just remember, freedom awaits all those who persist!  😀  Good luck!