Sep 18 2018


For many years, I have been using Nightingale to play music on my local desktop computer.  Back in the day, it was one of the few Linux music players that was able to import Apple iTunes media libraries, which contained all of my music – and more importantly – my song ratings, which I use to dynamically create my playlists (e.g., 3-star, 4-star, 5-star, etc.).  (After investing lots of time into listening and rating all of my songs, I could not afford to start using a new application, resetting my ratings and playlists.)

Even though it is no longer maintained, Nightingale did everything I wanted – rate songs, update playlists automatically based on ratings, automatically add new songs dropped into the watched folder, export playlists to my phone, etc.  Recently, however, Nightingale started segfaulting (i.e., crashing with no useful explanation).  This is often indicative of incompatible package dependencies – a fate that eventually befalls every unmaintained project with package dependencies, because no one keeps the project current with the latest versions of those dependencies.  How could this be fixed?

Docker Solution

Docker provides a convenient method for running lightweight containers of single applications within an isolated root file system.  Using Docker, I constructed an image that starts from an Ubuntu base of the era when Nightingale was last supported, providing era-appropriate package dependencies, using this Dockerfile:

FROM ubuntu:14.04
RUN apt update && apt install -y \
    autoconf \
    build-essential \
    cmake \
    firefox \
    g++ \
    git \
    gstreamer-1.0 \
    gstreamer1.0-alsa \
    gstreamer1.0-plugins-bad \
    gstreamer1.0-plugins-base \
    gstreamer1.0-plugins-good \
    gstreamer1.0-plugins-ugly \
    gstreamer1.0-pulseaudio \
    libasound2-dev \
    libdbus-glib-1-dev \
    libgnomevfs2-dev \
    libgstreamer-plugins-base1.0-dev \
    libgtk2.0-dev \
    libidl-dev \
    libnspr4-dev \
    libsqlite0-dev \
    libtag1-dev \
    pulseaudio \
    pulseaudio-utils \
    unzip \
    vim \
    xterm \
    zip \
    && \
    mkdir -p /local && \
    cd /local && \
    git clone --single-branch -b gstreamer-1.0 https://github.com/nightingale-media-player/nightingale-hacking.git && \
    cd /local/nightingale-hacking && \
    sed -i 's/sed \(.*\) nightingale/sed \1 compiled\/dist\/nightingale/g' && \
    ./ && \
    make install
#RUN cd /local/nightingale-hacking/debian && dpkg-buildpackage -uc -us
CMD groupadd -o -g $GID $GIDN; \
    groupadd -o -g $AID audio; \
    groupadd -o -g $AID audio2; \
    useradd -u $UID -g $GID -G sudo,audio,audio2,video,plugdev,staff,games,users -M -N -o -s /bin/bash $UIDN && \
    cd /home/$UIDN && \
    start-pulseaudio-x11 && \
    su $UIDN -c /usr/bin/nightingale

Building this Dockerfile does the following:

  • Start with a minimal, popular Ubuntu 14.04 base from the 2014 era.
  • Install all the packages necessary to build Nightingale from source code and run it.
  • Download the latest Nightingale source code, which uses GStreamer 1.0 instead of the old 0.10 version.
  • Make a small bug-fix to the build script.
  • Build Nightingale from source and install it inside the Docker image.
  • Define the command (CMD) to be executed whenever an active container is started based on this image.

This Dockerfile can be built locally using the following command, executed from the directory containing the above Dockerfile:

docker build --tag=nightingale .

This takes about 8 minutes to build on my box.

Please note there are no hard-coded references to user names, IDs, directories, etc.  This makes the solution fully portable and reusable for other users on different computers.
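For example, all of the user-specific values consumed by the Dockerfile's CMD ($UID, $GID, $AID, etc.) can be derived at launch time with standard tools.  A quick sketch (nothing here is hard-coded; the audio group may not exist on every system, in which case the last value is simply empty):

```shell
# All user-specific values are computed at run time, never hard-coded.
echo "uid=$(id -u) user=$(id -un) gid=$(id -g) group=$(id -gn)"
echo "audio_gid=$(getent group audio | cut -d: -f3)"
```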

Running Nightingale in Docker – Exporting Video and Audio

Because Docker isolates its containers, in many ways any running processes are as accessible – or inaccessible – as processes running on remote machines.  In fact, many older solutions to sharing the audio and video of a local computer with Docker containers are built around using SSH and VNC to export displays just like a remote machine.  However, this is a fairly heavyweight solution with unnecessary overhead, considering that Docker can map local resources (files and directories) into its containers.

Several solutions have been posted on the web and StackOverflow detailing how to export audio and video from a Docker container to its host.  The solution I adapted for this case looks like:

alias music='xhost +local:$(docker inspect --format="{{ .Config.Hostname }}" \
    $(docker run -it --rm \
        --env="DISPLAY" \
        --env="QT_X11_NO_MITSHM=1" \
        --env="UID=$(id -u)" \
        --env="UIDN=$(id -un)" \
        --env="GID=$(id -g)" \
        --env="GIDN=$(id -gn)" \
        --env="AID=$(getent group audio | cut -d: -f3)" \
        --volume="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
        --volume /etc/resolv.conf:/etc/resolv.conf:ro \
        --volume /home/$(id -un):/home/$(id -un) \
        --volume /run/dbus/:/run/dbus/ \
        --volume /dev/shm:/dev/shm \
        --volume /dev/snd:/dev/snd \
        --volume /etc/machine-id:/etc/machine-id \
        --volume /run/user/$(id -u)/pulse:/run/user/$(id -u)/pulse \
        --volume /var/lib/dbus:/var/lib/dbus \
        --volume /run/media:/run/media \
        --privileged \
        --group-add $(getent group audio | cut -d: -f3) \
        --group-add $(getent group video | cut -d: -f3) \
        -d \
        --name=nightingale \
        nightingale))'

Please notice this is wrapped in a bash alias (stored in my .bashrc file), which allows me to execute this monstrosity simply as:

music

Briefly, the alias contains a few levels of complex nested commands, explained here:

  • docker run ... – This launches the docker container built using the above build command.
  • --env and -e ... – These arguments pass environment variables defined by the host into the running Docker container, which define and configure parts of the host’s graphical environment.  They also create new environment variables that are used by the container’s command, explained further below.
  • --volume and -v ... – These volume arguments map directory and file resources from the host into the container, which allow the container to manipulate the host system’s audio and video resources.
  • --privileged – This elevates the permissions of the running container, allowing it to manipulate resources that normally only root has the capability to read or edit.
  • --group-add ... – Probably redundant with the groupadd calls in the CMD, but these add the container’s root user to the audio and video groups of the host system.
  • --name=nightingale – This labels the launched container with a name, in this case, nightingale, which is convenient for accessing it later.

The docker command launches a daemonized container and reports its container id.  That id is captured and used to inspect the container’s hostname, which in turn is passed to xhost to open permissions so that only that container may modify the host’s audio and video resources.
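The capture-and-nest pattern can be sketched with stand-in commands (echo replaces the real docker calls, which require a running daemon; the id is a made-up example).  Docker derives a container's default hostname from the first 12 characters of its id:

```shell
# Hypothetical stand-ins: each inner command's output feeds the next level out,
# exactly as in the alias above.
container_id=$(echo "f1d2d2f924e9aaab")            # stands in for: docker run -d ...
host_name=$(echo "${container_id}" | cut -c1-12)   # stands in for: docker inspect --format='{{ .Config.Hostname }}'
echo "xhost +local:${host_name}"                   # open X access for that container only
```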


My keyboard provides previous, next, and pause/play buttons, which I can still use – assuming the Docker container is already launched and running – if their bindings are defined similar to:

  • play/pause – docker exec nightingale nightingale -pause
  • previous – docker exec nightingale nightingale -previous
  • next – docker exec nightingale nightingale -next

Please note that all of the above commands assume that the container “nightingale” contains the running instance of my music player.

Also, I believe that I have a command line plug-in installed which interprets the additional command line switches to perform an action instead of trying to spin up another instance of nightingale.
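A tiny wrapper (hypothetical, not part of the original setup) makes key bindings like the ones above safe to fire even when the container is not running:

```shell
# Hypothetical helper for media-key bindings: only forward the action when a
# container named "nightingale" is actually up; otherwise do nothing.
ngctl() {
    docker ps --format '{{.Names}}' 2>/dev/null | grep -qx nightingale \
        && docker exec nightingale nightingale "-$1"
}
# bind the keys to: ngctl pause / ngctl previous / ngctl next
```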


For users running some other audio system besides PulseAudio on their host box, both the Dockerfile and the container launch command will need to be adjusted.




Enjoy your music again!  🙂

Nov 30 2014


2015-01-22:  Released InstaTower 1.5.6 (Forge-1.7.10).  Structures are built using comparable blocks from the current dimension (overworld [default], nether, or end).  Enclosed exposed tunnels.  Eliminated lava and blocks above the structure in the nether to avoid lava rain.  Made recipes much more expensive for better balance.  Made recipes accept log, wool, and brick variants.


My children love to play Minecraft!  The endless variety is one of the big reasons they never get bored.  Just when the game begins to feel stale, they download a new mod or install a new texture pack, and it’s a brand new game!  It never gets old.

They always ask me to play with them, which I happily do when I can, but being an old gamer, I like to follow the “rules” and enhance my character’s inventory, armor, and weapons, which takes time.  But, just about the time I build up a decent house with well-stocked chests, they move on to a new mod, which means a new profile and a new world.   😥   There goes all of my hard work …


So, I decided to make my own mod, InstaTower, which could be added with other mods.  It provides fairly cheap recipes to more quickly place a nice base, so we could get back to exploring and having fun!   🙂

The core InstaTower item contains plenty of beds, well-stocked chests (diamonds, arrows, steak, rails, etc.).  Plus, it has a beacon on top, so you can easily power-up and find your way home.  The basement level has a tunnel, which descends to the lower levels of the world, where diamonds and other rare ores are more plentiful.  The tunnel contains stairs, torches, and a circuitous  rail line, which explores the mining tunnel and lateral run, and then returns.  … The crafting recipe is comparatively cheap, although it requires items from later in the game (glow-stone, diamond block, emerald block, iron block, gold block, log, and stone-brick), so it is not too easy to craft.

The tower will always be rotated, so that the player is looking into the entry door of the tower, and the block clicked to place the tower will always be the “red-carpet entrance” into the tower.

The InstaGarden provides a complete garden, enclosed in a fence with gates and is well lit with glowstone to prevent hostile mobs from spawning in the garden.  It can be placed adjacent to the InstaTower to provide a safe provision of sustainable food for the adventurous explorer.  It also contains a few animals to start a farm or provide transportation.

When placed correctly, InstaTowers may be connected with an InstaWall item, which provides protected access between 2 towers as well as a connected rail line at the basement level.

Ultimately, an InstaCastle may be crafted using 4 InstaTowers, 4 InstaWalls, and an InstaGarden.  This item correctly places 4 InstaTowers connected by 4 InstaWalls to create a castle with 4 independent mining tunnels and a large, single connected rail line that connects all 4 towers and explores the mining tunnels and shafts created by all 4 tunnels.  The courtyard of the castle contains two InstaGardens and animals to completely satisfy dietary and transportation needs.

All structures adapt to their dimension.  In the overworld, stone bricks, glass, and wooden fences are used.  In the nether, nether bricks, iron bars, netherrack, and nether fences are used.  In the end dimension, obsidian is used.

For many people, this MOD provides far too much structure and far too many items for too little cost.  To remedy this, many of the items and structures can be disabled server-side by an editable configuration file, instatower.cfg, which is initialized by the mod.  Default settings may be retrieved by simply deleting the file and allowing the mod to recreate it with default values.  Virtually all of the structures’ features (beacon, enchantment table, diamonds, food, anvil, rail lines, the entire basement level, etc.) except stones, lighting, and ladders can be disabled via this config file.  The crafting recipes for each item may also be disabled, making the items only available to users in Creative mode.

An optional item is also included, the Divining Rod, which reports miscellaneous block data about any block right-clicked.  This item and the info it provides are handy for mod-developers, but nothing more.


  1. Download and install the recommended 1.7.10 version of Forge Mod Loader (1.7.10-Forge10.13.2.1230 or higher), which enables a lot of mods.  (This is not the place to get support for Forge, if you are new to it.  Sorry.)
  2. Download and install the InstaTower 1.5.6 (Forge-1.7.10) (JAR file inside ZIP) into the mods directory of your minecraft profile.  (I assume you know how to do this.)
  3. The mod will initialize a config file upon first run, instatower.cfg, which can be edited to disable features and control tunnel depth and lengths.


  • 1.5.6 (2015-01-22) – Structures are built using comparable blocks from the current dimension (overworld [default], nether, or end).  Enclosed exposed tunnels.  Eliminated lava and blocks above the structure in the nether to avoid lava rain.  Made recipes much more expensive for better balance.  Made recipes accept log, wool, and brick variants.
  • 1.5.4 (2015-01-07) – Added control for wall length in castle.  Cleaned up comments in config file.
  • 1.5.3 (2015-01-06) – Added many more switches in the config file, so that most of the features (including beacon, library, anvil, brewing stand, stocked chests, rail lines, basement level, etc.) can now be disabled via the config file.
  • 1.5.2 (2015-01-05) – Separated InstaTower into a new InstaGarden and an updated version of InstaTower.  Created new item, InstaWall, which creates a long wall with basement.  Created new item, InstaCastle, which places 4 InstaTowers, connected by InstaWalls, with an InstaGarden in the middle courtyard.  Added config-file control of tunnel presence, depth, length, and whether valuable ores are cleared or not.
  • 1.20 (2014-12-31) – Added basement with descending tunnel and long lateral runs at bottom of tunnel, complete with rails, torches, and carts.
  • 1.10 (2014-12-28) – Added rotation of structure based on player’s facing direction.  (When placed, player will always be looking into entry door of tower, and structure’s location will be offset such that the red-carpet will lie upon block clicked to place tower.)  Fixed all glitches associated with chests, doors, and crops.  Added new item, Divining Rod, which reports block id, name, coordinates, and metadata to log, which is useful for MOD developers.
  • 1.02 (2014-12-23) – Moved config file inside JAR file. Deprecated external config file that specified tower layout.
  • 1.0.1 (2014-12-01) – Initial release.



Available on GitHub:


Have fun!

Oct 07 2013



Compiling Python on one type of system to run on another type, known as “cross-compiling”, is a notoriously difficult task, because the Python build process is very complicated, and it was not designed with cross-compilation in mind.  However, a simple recipe has been devised for versions up to Python-2.7.3:

The gist of this process is summarized in 2 steps:

  1. Compile python and Parser/pgen to run on the build-system, because they are used later in the cross-compilation process.
  2. Compile everything else and python and Parser/pgen again using the cross-compiler tool-chain.

Thanks to recent work (Issue 17086), Python’s cross-compilation process has been enhanced to use and accept an external python interpreter, which is used during the build process.  This simplifies some of the patching provided by Paul’s blog post; however, it also changes the underlying code dramatically, which may explain why there have not been any new posts documenting the updated process with new patches since Python-2.7.3.

An Updated Patch and Build Process for Python-2.7.5

Using this patch and a suitable cross-compilation tool-chain (gcc, c++, ar, ranlib, etc.), the following process can be used to build Python-2.7.5 on one Linux system to run on another Linux system:

Prepare Sources

$ export RFS=/path_to_embedded_root_file_system
$ wget
$ tar -jxf Python-2.7.5.tar.bz2
$ cp <wherever_you_saved_it>/Python-2.7.5-xcompile.patch Python-2.7.5/
$ cd Python-2.7.5

Most embedded projects will have a directory on the build system, which contains all of the directories and files to be placed on the root file system of the embedded device. In this script, the full path to this directory is defined as RFS. The remaining steps retrieve the source code, unpack it, copy the above patch into place, and change into the build directory.

Step #1 – Compile Programs Used by Build System During Build

The first major step compiles 2 binaries to run on the build system: the python interpreter and Parser/pgen:

$ ./configure  # for build-system's native tool-chain
$ make python Parser/pgen
$ mv python python_for_build
$ mv Parser/pgen Parser/pgen_for_build

After building these 2 binaries, they are moved aside for later use during the full cross-compilation and installation process.

Patch Build Files and Prepare for Cross-Compilation

In preparation for the next step, all of the compiled files – except for the 2 moved aside – are deleted, the build files are re-configured using the cross-compilation toolchain, and the full Python suite is built including modules, such as ssl, zlib, and ctypes:

$ export PATH="/opt/freescale/usr/local/gcc-4.3.74-eglibc-2.8.74-dp-2/powerpc-none-linux-gnuspe/bin:${PATH}"
$ make distclean
$ ./configure --host=powerpc-none-linux-gnuspe --build=i586-linux-gnu --prefix=/ \
    --disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no                    \
    ac_cv_have_long_long_format=yes

I have included the full path to my current tool-chain as an example. (I have used this process on both x86 and x86_64 build systems to produce a Python installation for a Freescale PowerPC embedded Linux system.) Please notice how the path contains, and corresponds to, the “host” entry in the cross-compilation configuration step. I had to specify a few extra switches (disable-ipv6, ac_cv_file__dev_ptmx, ac_cv_file__dev_ptc, and ac_cv_have_long_long_format) to help the configure script resolve some tests that would otherwise require execution on the embedded system. These will vary depending upon your embedded system’s architecture.

For reference, here is the directory listing of the above cross-compilation tool-chain:

$ ls -la /opt/freescale/usr/local/gcc-4.3.74-eglibc-2.8.74-dp-2/powerpc-none-linux-gnuspe/bin
-rwxrwxrwx 1 root root  590485 Mar  5  2012 powerpc-none-linux-gnuspe-addr2line
-rwxrwxrwx 1 root root  614647 Mar  5  2012 powerpc-none-linux-gnuspe-ar
-rwxrwxrwx 1 root root  897161 Mar  5  2012 powerpc-none-linux-gnuspe-as
-rwxrwxrwx 1 root root  235382 Mar  5  2012 powerpc-none-linux-gnuspe-c++
-rwxrwxrwx 1 root root  589227 Mar  5  2012 powerpc-none-linux-gnuspe-c++filt
-rwxrwxrwx 1 root root  234277 Mar  5  2012 powerpc-none-linux-gnuspe-cpp
-rwxrwxrwx 1 root root    8503 Mar  5  2012 powerpc-none-linux-gnuspe-embedspu
-rwxrwxrwx 1 root root  235382 Mar  5  2012 powerpc-none-linux-gnuspe-g++
-rwxrwxrwx 1 root root  233126 Mar  5  2012 powerpc-none-linux-gnuspe-gcc
-rwxrwxrwx 1 root root  233126 Mar  5  2012 powerpc-none-linux-gnuspe-gcc-4.3.2
-rwxrwxrwx 1 root root   16512 Mar  5  2012 powerpc-none-linux-gnuspe-gccbug
-rwxrwxrwx 1 root root   28017 Mar  5  2012 powerpc-none-linux-gnuspe-gcov
-rwxrwxrwx 1 root root  655127 Mar  5  2012 powerpc-none-linux-gnuspe-gprof
-rwxrwxrwx 1 root root 1036372 Mar  5  2012 powerpc-none-linux-gnuspe-ld
-rwxrwxrwx 1 root root  603678 Mar  5  2012 powerpc-none-linux-gnuspe-nm
-rwxrwxrwx 1 root root  750617 Mar  5  2012 powerpc-none-linux-gnuspe-objcopy
-rwxrwxrwx 1 root root  895336 Mar  5  2012 powerpc-none-linux-gnuspe-objdump
-rwxrwxrwx 1 root root  614647 Mar  5  2012 powerpc-none-linux-gnuspe-ranlib
-rwxrwxrwx 1 root root  264063 Mar  5  2012 powerpc-none-linux-gnuspe-readelf
-rwxrwxrwx 1 root root  593901 Mar  5  2012 powerpc-none-linux-gnuspe-size
-rwxrwxrwx 1 root root  591853 Mar  5  2012 powerpc-none-linux-gnuspe-strings
-rwxrwxrwx 1 root root  750617 Mar  5  2012 powerpc-none-linux-gnuspe-strip

As you can see, each of the binaries is prefixed with “powerpc-none-linux-gnuspe-“, which corresponds directly with the “host” variable in the configuration step.  (Please ignore the fact that these binaries are world-writable!  This is not good, but this is a secure build system, and it is an artifact of some unrelated build-system workarounds.  Your binaries should only be writable by their owner.)
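A quick sanity check (my addition, not from the original recipe) confirms the prefix convention before configuring – every tool the build needs should resolve with the host triple as a prefix:

```shell
# Verify that each required tool exists with the --host triple as a prefix;
# print any that are missing from PATH.
HOST=powerpc-none-linux-gnuspe
for tool in gcc g++ ar ranlib strip; do
    command -v "${HOST}-${tool}" >/dev/null 2>&1 || echo "missing: ${HOST}-${tool}"
done
```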

Step #2 – Cross-Compilation

Having re-configured the sources for cross-compilation, the next major step is to actually cross-compile the full Python suite, including modules:

$ make --jobs=8 \
    CFLAGS="-g0 -Os -s -I${RFS}/usr/include -fdata-sections -ffunction-sections" \
    LDFLAGS='-L${RFS}/usr/lib -L${RFS}/lib'

The linked patch hard-codes additional search paths, so the compiler and linker can find header and library dependencies, like so:

diff --git a/packages/Python-2.7.5/ b/packages/Python-2.7.5/
index 716f08e..ca8b141 100644
--- a/packages/Python-2.7.5/
+++ b/packages/Python-2.7.5/

@@ -552,6 +556,11 @@ class PyBuildExt(build_ext):
         if host_platform in ['darwin', 'beos']:
             math_libs = []

+        # Insert libraries and headers from embedded root file system (RFS)
+        if 'RFS' in os.environ:
+            lib_dirs += [os.environ['RFS'] + '/usr/lib']
+            inc_dirs += [os.environ['RFS'] + '/usr/include']
         # XXX Omitted modules: gl, pure, dl, SGI-specific modules


However, they are also embedded in the above CFLAGS and LDFLAGS for good measure. 🙂 Please notice that the patch depends on the RFS environment variable serving as a flag to trigger this behavior.
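The flag behavior can be sketched standalone (plain shell here; the real patch does the equivalent in Python inside unset RFS leaves the search paths untouched, while any non-empty value appends the extra directories:

```shell
# RFS unset: search paths unchanged; RFS set: paths appended.
LIB_DIRS="/usr/lib"
RFS="/path_to_embedded_root_file_system"
if [ -n "${RFS}" ]; then
    LIB_DIRS="${LIB_DIRS} ${RFS}/usr/lib"
    INC_DIRS="${INC_DIRS:-/usr/include} ${RFS}/usr/include"
fi
echo "${LIB_DIRS}"
```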

Optional:  Reduce Binary Size

Optionally, additional compilation flags are also set in the above step, which help the cross-compilation strip tool reduce the binary size, like so:

powerpc-none-linux-gnuspe-strip --strip-unneeded python

This is entirely optional. Removing this step and the data-sections and function-sections compiler flags may help simplify the process, when troubleshooting.

Install into Root File System for Embedded Target

Lastly, the suite can be installed into the embedded device’s root file system, like so:

$ sudo make install DESTDIR=${RFS} PATH="${PATH}"

Often but not always, embedded root file system directories on the build system are owned by root; consequently, sudo is required to modify any files in the RFS directory. Since sudo also disregards the PATH environment variable, it must be passed explicitly via the command line.


Although patches and similar processes have been provided by others for Python-2.7.3 and below, I have not been able to find any recent updates or tutorials for 2.7.4 or 2.7.5 on the internet.  Most likely, this gap is because of the significant cross-compilation changes and the daunting complexity of the Python build process.  The above process and linked patches enable cross-compilation for Python-2.7.5 and hopefully future versions.  Ideally, this patch set will eventually disappear as more cross-compilation capability is integrated into Python’s own internal build process.

Although I don’t have time to support this patch, if you have any suggestions on improving it, or if you find any bugs, please let me know in comments below.

A more elaborate build script with error checking and comments is available here for download.

Feb 22 2011


Sometimes, you may only be able to connect to a Microsoft SQL (MS-SQL, or MSSQL) server through its SQL port, so you cannot use RDP to access the Enterprise Manager or other graphical tools on the host.  Other times, you may simply want to leverage the power of a Linux box.  😉  Regardless of the reason, if you want to connect to a MS-SQL server from a Linux box, read on…  As in previous posts, any installation or configuration instructions pertain to Gentoo.  Please adapt as necessary. … Also, these instructions were tested on MS-SQL Server 2000, so some instructions may need to be adapted depending on your version of MS-SQL.


FreeTDS offers an open-source command-line client, tsql.  It is comparable to Microsoft’s OSQL command-line interface, although the arguments to launch the client are different.  Although it has various options, you typically launch tsql like so:

tsql -S <sql_server_name> -U <user_name> [-P <password>]

If you are comfortable with OSQL, you will have no problem using this basic SQL CLI client.


Packages are available for Perl, PHP, Python, and several other scripting languages, which provide an extensive, programmatic interface to the remote MS-SQL server.


Feb 22 2011


Suppose you have access to a remote MS-SQL database, and you need to analyze it.  Why?  Let’s pretend that you intend to migrate the contents to another server or database, like MySQL.  😉  Anyway, how do you discover various essentials about the database? … These instructions are for Microsoft SQL Server 2000.  The following instructions and results will probably vary wildly for other versions of MS-SQL.

Connect with TSQL

First, let’s connect to the remote MS-SQL using the tsql command line client:

tsql -S <sql_server_name> -U <user_name>

You may have another favorite method to gain command line access to the SQL database, but this is my current favorite.  🙂

Listing All The Tables in the Database

From our open SQL CLI, one site suggested listing the tables with something like:

EXEC sp_tables

Unfortunately, this does not list all the tables for some unknown reason.  I could see several more tables listed in MS SQL Enterprise Manager.  Most of my “user” tables were not reported by the above method, although some were reported correctly.

Another site indicated listing special system table contents, which I think is actually specific to ORACLE and newer versions of MS-SQL.  Maybe it will work for you?

SELECT * FROM sys.tables

The same site also suggested the following, but it also returned partial results for some reason unknown to me.


Others recommend using “sp_help” with no arguments, like so:

EXEC sp_help

However, I found that it returned far too much information, much more than just the list of all tables, although that did seem to be included…

Ultimately, I used this:

SELECT name, xtype FROM sysobjects WHERE xtype IN ('S', 'U') ORDER BY name

This returned a filtered list of system objects that had a type of either “system” or “user” table.  Very nice! 😀

Listing Table Structure

The structure of a particular table can be listed, like so:

sp_help <table_name>

Listing All Columns

Try this:

SELECT name, object_name(id) FROM syscolumns

Determining DB Size

Try this:

EXEC sp_spaceused [tablename]
Feb 22 2011


Imagine you have an MS-SQL table that has LOTS of columns.  Some of the column names are known, but others are dynamically generated, and their names are not known at run time.  Now imagine that you suspect most of those unknown columns are empty.  How do you determine if any rows contain values in the columns, whose names are unknown?

Incremental Solutions

You could always just show all the values for every column of every row:

SELECT * FROM myTable

However, if your table is large, this may return too much data.  So, you could look for unique values, like so:

SELECT DISTINCT * FROM myTable

This works better.  However, if the known columns contain lots of unique data, the above T-SQL command may still return too much data.  Let’s try to focus on returning the unique values of just the unknown columns!

Final Solution

In our problem, all of the unknown columns begin with a known prefix, “UDA_”.  So, we need to get a list of the column names compiled in a comma separated list, suitable for a second SELECT statement.  We can do this, like so:

DECLARE @myColumnNames VARCHAR(8000)
SELECT @myColumnNames=COALESCE(@myColumnNames + ',', '') + COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME='myTable' AND COLUMN_NAME LIKE 'UDA_%'
SELECT @myColumnNames

These statements initialize a local variable, @myColumnNames, and then COALESCE, or compile, all the column names from myTable which begin with the prefix, “UDA_”, into a comma-separated list.  The last statement prints the variable value for debugging purposes, which might look like:

Finally, a simple select statement can be used to return the distinct values of these columns from our table.  The only problem is that you cannot substitute a variable directly into a T-SQL statement, so you have to build the necessary statement string and execute it, like so:

EXEC('SELECT DISTINCT ' + @myColumnNames + ' FROM myTable')

Putting It All Together

DECLARE @myColumnNames VARCHAR(8000)
SELECT @myColumnNames=COALESCE(@myColumnNames + ',', '') + COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME='myTable' AND COLUMN_NAME LIKE 'UDA_%'
EXEC('SELECT DISTINCT ' + @myColumnNames + ' FROM myTable')

If the above statements yield a single row with nothing but NULLs, then you know those columns are all empty!  Anything else indicates that at least one of the table rows, for at least one of your columns of unknown name, contains a value!
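Incidentally, the COALESCE accumulation trick above can be mimicked in plain shell to see how the list builds up (the column names here are hypothetical):

```shell
# Start with an empty list; each iteration prepends a comma only when the
# list is already non-empty - the same effect as COALESCE(@v + ',', '').
names=""
for col in UDA_ONE UDA_TWO UDA_THREE; do
    names="${names:+${names},}${col}"
done
echo "${names}"   # UDA_ONE,UDA_TWO,UDA_THREE
```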

Jun 17 2009


I recently switched from using Linux’s KVM to using Sun’s VirtualBox for virtualizing a Windows XP guest on a Gentoo Linux host, and I have been quite pleased. The only feature that I don’t have working is clipboard-sharing, but that’s a problem for another day. Previously, I had major problems with sound using my EchoAudio Mia card. Ultimately, I used ALSA’s dmix plug-in to mix audio streams in software, before feeding it to the card. That worked great! However, VirtualBox was complaining about the new setup, and start-up was defaulting the audio component to “null”. In other words, my VM guest had no audio support!

Two Problems, Two Solutions

The primary problem was that I had enabled the “esd” USE flag for my Gentoo system, which enables Enlightenment’s Enlightened Sound Daemon (ESD, or ESounD). Apparently, the latest version of SDL (libsdl) does not function properly in the ALSA environment, if the esd USE flag is set. Well, VirtualBox depends on SDL for audio support; consequently, my VM guest could not produce audio. The solution was to add the “-esd” USE flag to /etc/make.conf and rebuild the dependent packages (emerge -uDN world).

The second problem stems from my use of ALSA’s dmix plug-in. Although this works fine for playback, the dmix plug-in cannot be used for capture. Therefore, an environment variable must be set to specify an alternative capture source, like so:

# Unset any output customizations, use default
$ unset VBOX_ALSA_DAC_DEV
# Specify capture (analog-to-digital converter) device - can also be added to ~/.bashrc
$ export VBOX_ALSA_ADC_DEV="hw:0,0"
# Launch my Windows guest VM
$ VBoxManage startvm "Windows XP SP2"
# wait for it...
$ sleep 3
# check log for results:
$ grep -Pi '(alsa|audio)' VBox.log
VirtualBox Command Line Management Interface Version 2.2.4
(C) 2005-2009 Sun Microsystems, Inc.
All rights reserved.
Waiting for the remote session to open...
Remote session has been successfully opened.
00:00:00.808 [/Devices/AudioSniffer/] (level 2)
00:00:00.808 [/Devices/AudioSniffer/0/] (level 3)
00:00:00.808 [/Devices/AudioSniffer/0/Config/] (level 4)
00:00:00.808 [/Devices/AudioSniffer/0/LUN#0/] (level 4)
00:00:00.808   Driver <string>  = "MainAudioSniffer" (cch=17)
00:00:00.808 [/Devices/AudioSniffer/0/LUN#0/Config/] (level 5)
00:00:00.808   Driver <string>  = "AUDIO" (cch=6)
00:00:00.808   AudioDriver <string>  = "alsa" (cch=5)
00:00:01.090 Audio: Trying driver 'alsa'.
00:00:01.093 Audio: set_record_source ars=0 als=0 (not implemented)
00:00:01.095 ALSA: ADC frequency 44100Hz, period size 1024, buffer size 4096
00:00:01.099 ALSA: DAC frequency 44100Hz, period size 940, buffer size 3763

Now I could hear audio output from my VM guest, simultaneously while listening to audio from host applications, and I could also capture audio!

Incidentally, I extended my default ~/.asoundrc file a little, although it did not seem to make a difference:

pcm.!default {
    type plug
    slave.pcm "dmix"
}
pcm.dsp0 {
    type plug
    slave.pcm "dmix"
}
ctl.mixer0 {
    type hw
    card 0
}


Jun 16 2009


Previously I blogged about using ALSA with EchoAudio’s Mia 96-kHz / 24-bit prosumer audio card. However, I have since experienced one nagging problem: Only one application can use the sound card at a given time! It is impossible for two or more applications to share the sound card. For a second application to use the Mia, the first app must be closed before the second is opened; otherwise, the first one will maintain a lock, blocking the second.

Many newer sound cards include integrated hardware mixers, which allow multiple applications to produce sound simultaneously. Older sound cards depend upon software to perform the mixing. ALSA includes the “dmix” plug-in, which performs exactly this task.

ALSA DMIX Configuration

Typically, ALSA uses dmix automatically whenever it detects a sound card without built-in mixing. Unfortunately, the ALSA driver for the Mia reports the card as having mixing capability, which it does not. Therefore, we must manually direct ALSA to use the dmix plug-in by default. ALSA is generally configured through an “asoundrc” file, which can exist at the system level (/etc/asound.conf) or the user level (~/.asoundrc).


Several tutorials and HOWTOs exist for setting up ALSA’s dmix plug-in; however, they proved overly complicated for my needs, as I simply use:

$ cat /home/my_user/.asoundrc
pcm.!default {
    type plug
    slave.pcm "dmix"
}

This directs all applications to use the dmix plug-in by default. Now I can simultaneously play sound from multiple applications, without having to open and close each one in sequence!

System ALSA Configuration

For reference, my system-level ALSA configuration appears to use dmix by default; however, it has never worked for me. I am including it here anyway:

$ cat /etc/asound.conf
pcm.swmix {
    type dmix
    # any unique number here
    ipc_key 313
    slave {
        pcm "hw:0,0"
        # these settings may require tweaking for different sound
        # cards; this is for the Powerbook's built-in snd-powermac
        # probably not required at all for well-behaved cards...
        period_time 0
        period_size 1024
        buffer_size 8192
        # mentioning rate fixes wrong speed/pitch in native ALSA stuff
        rate 44100
    }
}

# this makes OSS emulation via aoss default to using dmix, allegedly
pcm.dsp0 {
    type plug
    slave.pcm "swmix"
}

ctl.mixer0 {
    type hw
    card 0
}

# this makes native ALSA apps default to using dmix
pcm.!default {
    type plug
    slave.pcm "swmix"
}

Conversation with Giuliano Pochini

Giuliano Pochini is the author of the ALSA driver for the EchoAudio Mia. Recently, I posed my problem to him, as follows:

I am using your ALSA driver for the EchoAudio Mia PCI card. You have done a fantastic job! I really appreciate it.

One question: I am only able to use one application at a time to produce sound. If I want to produce sound in another application, I must close the first application, and then open the second.

Is it possible to have more than one application simultaneously produce sound through the Mia?

Should I use ALSA’s dmix plug-in, or is there a better solution?

To which, he kindly replied:

That card has 8 voices and 4 outputs. The vmixer controls how the voices are sent to the outputs. If your application uses default: or hw:x.0.0 or plughw:x.0.0 then it can use a single 1 to 8 – channels output. You have to use dmix to make alsa-lib mix the sound coming from differents apps. Otherwise you can manually configure your apps to use plughw:0.0.0, plughw:0.0.2, plughw:0.0.4 and plughw:0.0.6 and use the vmixer to route them to the ouptus as you prefer.

There is another way which requires some non trivial changes to the driver: make it automatically select the first free stereo pair when an application opens a substream. Drawbacks are that apps wouldn’t be able to open non stereo substreams anymore and that changing the volume would become problematic because you couldn’t know what channels have been assigned to each app. And, of course, there are only 4 stereo pairs available, all with the same sample rate, so dmix and resample are likely to be necessary anyway.
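Pochini’s first suggestion, routing each application to its own stereo pair, can be sketched with PCM aliases in ~/.asoundrc. The alias names here are hypothetical, and the hw:0,0,N subdevice numbers follow his plughw:0.0.0 / 0.0.2 numbering:

```
# hypothetical aliases for two of the Mia's four stereo pairs
pcm.mia_pair0 {
    type plug
    slave.pcm "hw:0,0,0"
}
pcm.mia_pair1 {
    type plug
    slave.pcm "hw:0,0,2"
}
```

One player can then open mia_pair0 (e.g., aplay -D mia_pair0) while another opens mia_pair1, with the card’s vmixer blending the pairs at the outputs.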


Apr 14, 2009


Some time ago, I blogged about missing or broken icons in Gnome’s Evolution, when used in the KDE desktop environment running on a Gentoo box.  Since then, I have migrated to KDE 4.2 and have once again experienced the same problem.  I tried each of the icon themes installed on my system, but none of them was complete.  Each was missing icons – some more, others less.  Incidentally, the new “oxygen” theme seemed the most complete.

I submitted a bug report to Gentoo’s bug tracking site and received the following answer.


Here’s the workaround provided by Timo Gurr:

$ rm ~/.gtkrc-2.0
# emerge gtk-engines
# cp /usr/share/themes/Clearlooks/gtk-2.0/gtkrc /etc/gtk-2.0/gtkrc
# DIY:
     Edit /etc/gtk-2.0/gtkrc
     Add: 'gtk-fallback-icon-theme = "gnome"' as the first line.

Then restart Evolution.
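The manual “DIY” edit above can also be scripted; this is just a sketch using GNU sed’s in-place insert (run as root, and it assumes /etc/gtk-2.0/gtkrc already exists from the cp step):

```shell
# Insert the fallback-icon-theme setting as the first line of the file
sed -i '1i gtk-fallback-icon-theme = "gnome"' /etc/gtk-2.0/gtkrc
```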

This worked great for me! Hopefully, Gentoo will update portage to include this procedure in the evolution, gtk, or gtk-engines packages in the future.

Apr 14, 2009


For some time, I had casually noticed that Gnome’s Evolution was not spell-checking my email messages.  Most other email clients, text editors, and even modern web browsers at least check your spelling as you type, underlining misspelled words with red wavy lines.  Evolution has this capability, but it was not enabled on my system.  Here is how I enabled spell-checking for Evolution on my Gentoo system.


Gnome’s Evolution depends upon external spell-checkers to perform this function.  Popular open-source checkers include ispell, aspell, and hunspell.  Aspell is newer than ispell and was designed to replace it, at least according to aspell’s web site.  Hunspell is an up-and-coming spell-checker, based on MySpell and backwards-compatible with its dictionaries.  Apparently, Hunspell is used in many popular projects, such as OpenOffice, Mozilla Firefox, Mozilla Thunderbird, Opera, and Google Chrome.

All of the above spell-checkers are designed to be language-independent; therefore, you must also install a language-specific “library” or “dictionary” to complete the installation.

Evolution appears to use aspell.  However, I installed all three spell-checkers and an English dictionary for each, plus a few other utilities, like so:

# emerge -pvt app-text/aspell app-text/gnome-spell app-text/ispell \
app-text/spellutils app-vim/vim-spell-en dev-perl/Text-Aspell dev-perl/gtk2-spell \
dev-python/gtkspell-python app-dicts/aspell-en app-dicts/myspell-en

Check to make sure you are satisfied with the proposed installations, and then commit:

# emerge app-text/aspell app-text/gnome-spell app-text/ispell \
app-text/spellutils app-vim/vim-spell-en dev-perl/Text-Aspell dev-perl/gtk2-spell \
dev-python/gtkspell-python app-dicts/aspell-en app-dicts/myspell-en

After emerging the above packages and restarting Evolution, I was able to select dictionaries and enable the “Check as you type” feature by navigating here:

Evolution -> Edit -> Preferences -> Composer Preferences -> Spell Checking

Now I had nice red wavy lines everywhere! 🙂

BTW, I later encountered a problem where some of my dictionaries or checkers became corrupted; re-installing (re-emerging) all of the above packages fixed it.