Java Secure HTTP Client Key Management

My current project at Novell involves the development of a ReSTful web service for submission of audit records from security applications. The server is a Jersey servlet within an embedded Tomcat 6 container.

One of the primary reasons for using a ReSTful web service for this purpose is to alleviate the need to design and build a heavy-weight audit record submission client library. Such client libraries need to be orthogonally portable across both hardware platforms and languages in order to be useful to Novell’s customers. Just maintaining the portability of this client library in one language is difficult enough, without adding multiple languages to the matrix.

Regardless of our motivation, we still felt the need to provide a quality reference implementation of a typical audit client library to our customers. They may incorporate as much or as little of this code as they wish, but a good reference implementation is worth a thousand pages of documentation. (Don’t get me wrong, however–this is no excuse for not writing good documentation! The combination of quality concise documentation and a good reference implementation is really the best solution.)

The idea here is simple: Our customers won’t have to deal with difficulties that we stumble upon and then subsequently provide solutions for. Additionally, it’s just plain foolish to provide a server component for which you’ve never written a client. It’s like publishing a library API that you’ve never written to. You don’t know if the API will even work the way you originally intended until you’ve at least tried it out.

Since we’re already using Java in the server, we’ve decided that our initial client reference implementation should also be written in Java. Yesterday found my code throwing one exception after another while simply trying to establish the TLS connection to the server from the client. All of these problems ultimately came down to my lack of understanding of the Java key store and trust store concepts.

You see, the establishment of a TLS connection from within a Java client application depends heavily on the proper configuration of a client-side trust store. If you’re using mutual authentication, as we are, then you also need to properly configure a client-side key store for the client’s private key. The level at which we are consuming Java network interfaces also demands that we specify these stores in system properties. More on this later…

Using Curl as an Https Client

We based our initial assumptions about how the Java client needed to be configured on our use of the curl command line utility in order to test the web service. The curl command line looks something like this:

  curl -k --cert client.cer --cert-type DER --key client-key.pem
    --key-type PEM --header "Content-Type: application/audit+json"
    -X POST --data @test-event.json

The important aspects of this command line include the use of the --cert, --cert-type, --key and --key-type parameters, as well as the fact that we specified a protocol scheme of “https” in the URL.

With one exception, the remaining options are related to which http method to use (-X), what data to send (--data), and which message properties to send (--header). The exception is the -k option, and therein lay most of our problems with this Java client.

The curl man page indicates that the -k/--insecure option allows the TLS handshake to succeed without verifying the server certificate against the client’s CA (Certificate Authority) trust store. This option was added because several releases of the curl package shipped with a terribly outdated trust store, and people were getting tired of having to manually add certificates to their trust stores every time they hit a newer site.

Doing it in Java

But this really isn’t the safe way to access any secure public web service. Without server certificate verification, your client can’t really know that it’s not communicating with a server that just says it’s the right server. (“Trust me!”)

During the TLS handshake, the server’s certificate is passed to the client. The client should then verify the subject name of the certificate. But verify it against what? Well, let’s consider–what information does the client have access to, outside of the certificate itself? It has the fully qualified URL that it used to contact the server, which usually contains the DNS host name. And indeed, a client is supposed to compare the CN (Common Name) portion of the subject DN (Distinguished Name) in the server certificate to the DNS host name in the URL, according to section 3.1 “Server Identity” of RFC 2818 “HTTP over TLS”.
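The comparison itself is mechanical. As a toy illustration (my own sketch, ignoring the wildcard and subjectAltName rules that RFC 2818 also specifies), matching the CN of a subject DN against a host name might look like this:

```java
public class CnCheck {
    // Extract the CN attribute from a string-form subject DN.
    static String extractCn(String subjectDn) {
        for (String part : subjectDn.split(",")) {
            part = part.trim();
            if (part.startsWith("CN=")) {
                return part.substring(3);
            }
        }
        return null;
    }

    // RFC 2818-style identity check, minus wildcard handling:
    // the CN must match the host name from the URL, case-insensitively.
    static boolean matchesHost(String subjectDn, String host) {
        String cn = extractCn(subjectDn);
        return cn != null && cn.equalsIgnoreCase(host);
    }

    public static void main(String[] args) {
        String dn = "CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US";
        System.out.println(matchesHost(dn, "audit-server"));   // true
        System.out.println(matchesHost(dn, "10.0.0.1"));       // false
    }
}
```

Java’s built-in verification does considerably more than this, but the essence is the same string comparison.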

Java’s HttpsURLConnection class strictly enforces the advice given in RFC 2818 regarding peer verification. You can override these constraints, but you have to basically write your own version of HttpsURLConnection, or sub-class it and override the methods that verify peer identity.

Creating Java Key and Trust Stores

Before even attempting a client connection to our server, we had to create three key stores:

  1. A server key store.
  2. A client key store.
  3. A client trust store.

The server key store contains the server’s self-signed certificate and private key. This store is used by the server to sign messages and to return credentials to the client.

The client key store contains the client’s self-signed certificate and private key. This store is used by the client for the same purpose–to send client credentials to the server during the TLS mutual authentication handshake. It’s also used to sign client-side messages for the server during the TLS handshake. (Note that once authentication is established, encryption happens using a secret or symmetric key encryption algorithm, rather than public/private or asymmetric key encryption. Symmetric key encryption is a LOT faster.)

The client trust store contains the server’s self-signed certificate. Client-side trust stores normally contain a set of CA root certificates. These root certificates come from various widely known certificate vendors, such as Entrust and VeriSign. Presumably, almost all publicly visible servers have purchased a certificate from one of these CAs. Thus, when your web browser connects to such a public server over a secure HTTP connection, the server’s certificate can be verified as having come from one of these well-known certificate vendors.

I generated my server key store first, but this key store also contains the server’s private key. I didn’t want the private key in my client’s trust store, so I extracted the server certificate into a stand-alone certificate file. Then I imported that certificate into a new trust store. Finally, I generated the client key store:

  $ keytool -genkey -alias server -keyalg RSA \
  > -storepass changeit -keystore server-keystore.jks
  What is your first and last name?
    [Unknown]:  audit-server
  What is the name of your organizational unit?
    [Unknown]:  Eng
  What is the name of your organization?
    [Unknown]:  Novell
  What is the name of your City or Locality?
    [Unknown]:  Provo
  What is the name of your State or Province?
    [Unknown]:  Utah
  What is the two-letter country code for this unit?
    [Unknown]:  US
  Is CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US correct?
    [no]:  yes

  Enter key password for <server>
          (RETURN if same as keystore password):  
  $ keytool -exportcert -keystore server-keystore.jks \
  > -file server.der -alias server -storepass changeit
  Certificate stored in file <server.der>
  $ keytool -importcert -trustcacerts -alias server \
  > -keystore server-truststore.jks -storepass changeit \
  > -file server.der
  Owner: CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Issuer: CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Serial number: 491cad67
  Valid from: Thu Nov 13 15:42:47 MST 2008 until: Wed Feb 11 15:42:47 MST 2009
  Certificate fingerprints:
           MD5:  EE:FA:EE:78:A8:42:2B:F2:3A:04:50:37:D3:94:B3:C0
           SHA1: 4E:BA:9B:2F:FC:84:10:5A:2E:62:D2:5B:B3:70:70:B5:2F:03:E1:CD
           Signature algorithm name: SHA1withRSA
           Version: 3
  Trust this certificate? [no]:  yes
  Certificate was added to keystore
  $ keytool -genkey -alias client -keyalg RSA -storepass changeit \
  > -keystore client-keystore.jks
  What is your first and last name?
    [Unknown]:  audit-client
  What is the name of your organizational unit?
    [Unknown]:  Eng
  What is the name of your organization?
    [Unknown]:  Novell
  What is the name of your City or Locality?
    [Unknown]:  Provo
  What is the name of your State or Province?
    [Unknown]:  Utah
  What is the two-letter country code for this unit?
    [Unknown]:  US
  Is CN=audit-client, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US correct?
    [no]:  yes

  Enter key password for <client>
          (RETURN if same as keystore password):  
  $ ls -1

Telling the Client About Keys

There are various ways of telling the client about its key and trust stores. One method involves setting system properties on the command line. This is commonly used because it avoids the need to enter absolute paths directly into the source code, or to manage separate configuration files.

  $ java ...
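A sketch of what that command line might look like, using the standard javax.net.ssl property names and the store files created above (the class name and relative paths are illustrative):

```shell
java \
  -Djavax.net.ssl.keyStore=client-keystore.jks \
  -Djavax.net.ssl.keyStorePassword=changeit \
  -Djavax.net.ssl.trustStore=server-truststore.jks \
  -Djavax.net.ssl.trustStorePassword=changeit \
  AuditRestClient
```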

Another method is to set the same system properties inside the code itself, like this:

  public class AuditRestClient
  {
    public AuditRestClient() 
    {
      // configure key and trust stores via the standard system properties
      System.setProperty("javax.net.ssl.keyStore", "client-keystore.jks");
      System.setProperty("javax.net.ssl.keyStorePassword", "changeit");
      System.setProperty("javax.net.ssl.trustStore", "server-truststore.jks");
      System.setProperty("javax.net.ssl.trustStorePassword", "changeit");
    }

I chose the latter, as I’ll eventually extract the strings into property files loaded as needed by the client code. I don’t really care for the fact that Java makes me specify these stores in system properties. This is especially a problem for our embedded client code, because our customers may have other uses for these system properties in the applications in which they will embed our code. Here’s the rest of the simple client code:

    public void send(JSONObject event) 
    {
      byte[] bytes = event.toString().getBytes();
      HttpURLConnection conn = null; 

      try
      {
        // establish connection parameters
        URL url = new URL("");
        conn = (HttpURLConnection)url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.addRequestProperty("Content-Length", "" + bytes.length);
        conn.addRequestProperty("Content-Type", "application/audit1+json");

        // send POST data
        OutputStream out = conn.getOutputStream();
        out.write(bytes);
        out.close();

        // get response code and data
        BufferedReader read = new BufferedReader(
            new InputStreamReader(conn.getInputStream()));
        String line = null;
        while((line = read.readLine()) != null)
          System.out.println(line);
        read.close();
      }
      catch(MalformedURLException e) { e.printStackTrace(); }
      catch(ProtocolException e) { e.printStackTrace(); }
      catch(IOException e) { e.printStackTrace(); }
      finally { if(conn != null) conn.disconnect(); }
    }

Getting it Wrong…

I also have a static test “main” function so I can send some content. But when I tried to execute this test, I got an exception indicating that the server certificate didn’t match the host name. I was using a hard-coded IP address in the URL, but my certificate contained the name “audit-server”.

It turns out that the HttpsURLConnection class uses an algorithm to determine if the server that sent the certificate really belongs to the server on the other end of the connection. If the URL contains an IP address, then it attempts to locate a matching IP address in the “Alternate Names” portion of the server certificate.

Did you notice a keytool prompt to enter alternate names when you generated your server certificate? I didn’t–and it turns out there isn’t one. The Java keytool utility doesn’t provide a way to enter alternate names–a standardized extension of the X.509 certificate format. To enter an alternate name containing the requisite IP address, you’d have to generate your certificate using the openssl utility, or some other more functional certificate generation tool, and then find a way to import these foreign certificates into a Java key store.
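For the record, one way to do that with a reasonably recent OpenSSL (1.1.1 or later, which supports -addext) might look like the following; the IP address, file names and password here are illustrative:

```shell
# generate a self-signed certificate whose SAN carries an IP address
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -keyout server-key.pem -out server-cert.pem \
  -subj "/CN=audit-server/OU=Eng/O=Novell/L=Provo/ST=Utah/C=US" \
  -addext "subjectAltName=IP:192.168.0.1"

# bundle key and certificate into PKCS#12 so Java tools can read them
openssl pkcs12 -export -in server-cert.pem -inkey server-key.pem \
  -name server -out server.p12 -passout pass:changeit

# import the PKCS#12 bundle into a Java key store
keytool -importkeystore -srckeystore server.p12 -srcstoretype PKCS12 \
  -srcstorepass changeit -destkeystore server-keystore.jks \
  -deststorepass changeit
```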

…And then Doing it Right

On the other hand, if the URL contains a DNS name, then HttpsURLConnection attempts to match the CN portion of the Subject DN with the DNS name. This means that your server certificates have to contain the DNS name of the server as the CN portion of the subject. Returning to keytool, I regenerated my server certificate and stores using the following commands:

  $ keytool -genkey -alias server -keyalg RSA \
  > -storepass changeit -keystore server-keystore.jks
  What is your first and last name?

  ... (the rest is the same) ...
  $ keytool -exportcert -keystore server-keystore.jks \
  > -file server.der -alias server -storepass changeit
  Certificate stored in file <server.der>
  $ keytool -importcert -trustcacerts -alias server \
  > -keystore server-truststore.jks -storepass changeit \
  > -file server.der
  Owner:, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Issuer:, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Serial number: 491cad67
  Valid from: Thu Nov 13 15:42:47 MST 2008 until: Wed Feb 11 15:42:47 MST 2009
  Certificate fingerprints:
           MD5:  EE:FA:EE:78:A8:42:2B:F2:3A:04:50:37:D3:94:B3:C0
           SHA1: 4E:BA:9B:2F:FC:84:10:5A:2E:62:D2:5B:B3:70:70:B5:2F:03:E1:CD
           Signature algorithm name: SHA1withRSA
           Version: 3
  Trust this certificate? [no]:  yes
  Certificate was added to keystore

Of course, I also had to change the way I was specifying my URL in the client code:

  URL url = new URL("");
  conn = (HttpURLConnection)url.openConnection();

At this point, I was finally able to connect to my server and send the message. Is this reasonable? Probably not for my case. Both my client and server are within a corporate firewall, and controlled by the same IT staff, so to force this sort of gyration is really unreasonable. Can we do anything about it? Well, one thing that you can do is to provide a custom host name verifier like this:

  URL url = new URL("");
  conn = (HttpsURLConnection)url.openConnection();
  conn.setHostnameVerifier(new HostnameVerifier()
  {
    public boolean verify(String hostname, SSLSession session) 
        { return true; }
  });

When you do this, however, you should be aware that you give up the right to treat the connection as anything but an https connection. Note that we had to change the type of “conn” to HttpsURLConnection from its original type of HttpURLConnection. This means, sadly, that this code will now only work with secure http connections. I chose to use the DNS name in my URL, although a perfectly viable option would also have been the creation of a certificate containing the IP address as an “Alternate Name”.

Is This Okay?!

Ultimately, our client code will probably be embedded in some fairly robust and feature-rich security applications. Given this fact, we can’t really expect our customers to be okay with our sample code taking over the system properties for key and trust store management. No, we’ll have to rework this code to do the same sort of thing that Tomcat does–manage our own lower-level SSL connections, and thereby import certificates and CA chains ourselves. In a future article, I’ll show you how to do this.
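As a preview, here is a minimal sketch of that direction, assuming the store names and password used earlier: load the stores explicitly, build an SSLContext from key and trust manager factories, and install its socket factory on each connection instead of relying on global system properties.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class SslSetup {
    static SSLContext buildContext() throws Exception {
        char[] pass = "changeit".toCharArray();

        // the client's private key and certificate
        KeyStore keyStore = KeyStore.getInstance("JKS");
        keyStore.load(new FileInputStream("client-keystore.jks"), pass);
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(
                KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, pass);

        // the server certificate(s) we trust
        KeyStore trustStore = KeyStore.getInstance("JKS");
        trustStore.load(new FileInputStream("server-truststore.jks"), pass);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(
                TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return ctx;
    }
}
```

With this in hand, the client calls conn.setSSLSocketFactory(buildContext().getSocketFactory()) on each HttpsURLConnection, leaving the system properties untouched for the embedding application.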


On Personal Internet Security…

I have over a hundred accounts on the Internet. I do a lot of online shopping for everything from books to computer hardware to toys for my kids. I also have accounts on various social networking sites like linked-in, facebook, myspace, plaxo, pulse and naymes. I use online authoring sites like wordpress, freesoftwaremagazine, digg, technorati and others. I like to personalize google news and various product support sites to my own tastes. I like it when sites like this allow me to create a profile – essentially a login account.

I also work in the software industry and write a fair amount of open source code, so I have accounts at project-hosting sites that manage separate authentication materials for mailing list accounts, primary site access, shell access, etc.–and often for each project they support. This means literally dozens of passwords for a single site.

These places are all pretty benign as far as security issues are concerned. Frankly, I don’t really care if someone knows my middle name, or my phone number for that matter. But I do most of my banking online, and some web-based store fronts keep track of my credit card information these days. I have the option of not giving it to them, but if I trust them, I like to use this feature, and that presents a real security problem for me. Some of these sites have fairly good identity security–others do not. I don’t know which ones do and which ones don’t.

I used to use the same password everywhere – so I wouldn’t forget it. When I started doing online banking and storing credit card information at various store fronts, I used one password for these places, and another one for everywhere else. But lately the number of security classifications I use has increased significantly, making it difficult to remember all of the passwords I use.

If a hacker can break into one of these weaker sites, and capture account information and passwords, they can then access more sensitive personal information at many other sites where I have accounts. Now, I’m not a conspiracy theorist. I don’t believe there are groups of people out to get me personally. But I do believe in bad guys. And I know for a fact that there are bad guys out there “phishing” for random authentication materials. If they find a way to access one (like mine) and if they then find that I use the same password at my bank, I really do believe they’ll go after my cash. After all, they don’t really care whose money they take.


Recently, I was introduced to the KeePass project. What a gem of a little project! KeePass allows you to store passwords and other account information in an easily accessible hierarchical format within an encrypted database on your hard drive. You only need to remember a single master password to get into the database.

Some people might balk at the idea of another layer of indirection between themselves and their online banking web site. I’d agree myself, if it weren’t for some of the really cool usability features in KeePass. For instance, KeePass can copy a password to the clipboard from an entry in its database, which means you need only click on the password entry field and press Ctrl-V to paste it in. If you care to take this to the next level, KeePass will also fill in login forms automatically with a configurable hot-key press on the login page of your sites.

KeePass also contains a small area in each password entry for notes and such. I have an AT&T cell phone account which allows me to connect to the Internet on my laptop through my phone over a high-speed connection. But configuring this connection initially was a real pain in the neck! Once I got it figured out, I wrote down the steps for configuration so I wouldn’t forget them. The next time I needed to reconfigure my laptop, I forgot where I’d written down these instructions. Now, I have them in the notes section for my AT&T wireless account in KeePass.

Another nice feature is that KeePass will automatically generate a high-security password for you, with a single click. When I create a new account on a web-site these days, I just pull up KeePass and create the account and the KeePass entry at the same time. When the site asks me for a password, I don’t waste time thinking about what I should use–I just tell KeePass to give me a good one, then cut and paste it in.

Finally, KeePass will stay resident on your Windows machine, adding a little icon to the system tray while it’s running. Click the icon and you have instant access to your password database. With highly configurable security policy tailored to your personal tastes, you can decide how often you want to type in your master password: Once at login, each time you click the system tray icon, only when you lock it, when you lock your computer screen, etc. You can also configure it to minimize to the tray, or to close to the tray.

Taking It With You

This is all well and good if you only work on one machine. I work on multiple machines. I have one at home where I spend time shopping, and I have one at work where I access my accounts. I have a laptop that I take with me to sneak in some work or play while I’m waiting at the repair shop for my car to be fixed. Sometimes I use my wife’s laptop–just because it’s handy. Sometimes I use a kiosk computer at the airport or at the library. Sometimes I use a colleague’s computer in another office at work.

KeePass has a solution for this problem as well. If you wish, you can store the database on a removable media device, like a USB drive. You can pick up a 1G USB drive these days for 10 to 20 bucks. And this is 100 times as much memory as you need for a password store.

But the database does you little good if you can’t access it with the KeePass program when you need a password. The designers of KeePass understood this. You can store a portable version of the program itself on the USB drive. Portable, in this context, means written in such a way that the software requires no explicit installation: it creates no registry entries or special file system objects. This means you can access your password database from any Windows machine with a USB port. Just plug it in and run the program right from the USB drive.

What, Now Linux Too?!

What more could I ask for? Well…recently, I installed Linux on my desktop machine at work. Since moving to OpenSUSE 10.3, I’ve been very satisfied with what I’ve been able to accomplish using only free software. It’s been a whirlwind romance, and I’ve loved every minute of it, but it’s the first time I’ve been without a Windows machine handy to…you know, do the stuff I can only do on Windows. Sad to consider it that way, but it’s been true for me, so I’m guessing it’s true for most everyone else, as well.

Unfortunately, KeePass is a Windows program. “Well, I’m in love with the concept, not the program”, I told myself. So I went looking for a more portable alternative–one that was perhaps not as functional as KeePass, but at least ran on both Windows and Linux. And I found it: KeePassX, a spin-off of the original open source Windows program.

KeePassX is written using Qt and compiled under MinGW on Windows, so its interfaces on both platforms are nearly identical. The people who did the port stayed true to the original KeePass look and feel as much as they could in this portable version. I’m very pleased, because now I can carry copies of KeePassX for Windows and Linux, as well as the database which, of course, both versions will open and process.

The only glitch I ran into with KeePassX was that it requires mingwm10.dll–a fact not advertised anywhere on the KeePassX web site that I could find–and the win32 package didn’t ship with this library. In fact, the only reference to it that I could find was a forum entry by a user indicating that the requirement should probably be mentioned somewhere. Personally, I think it’s an oversight, and that the Windows bundle should just install it.

To get the library, I just did a Google search for mingwm10 and found a myriad of places from which I could download it. I did that, placed the library in the same directory as the executable and all was well again.

Setting It All Up

To set all this up, I first formatted the USB key under Windows (Linux has no problem reading FAT-formatted drives, while Windows typically reads only its own file systems). Then I created a directory structure like this on the USB key:

   ...unpacked files from KeePassX Win32 bundle
   ...unpacked files from KeePassX Linux bundle
   ...bundles for both platforms, plus mingwm10 bundle, still packed

Now, I like to do things up right. On Windows XP, when you insert a USB key, it acts like a removable drive–a CDROM or a USB hard drive. On these types of media, you can place a file at the root of the volume called Autorun.inf, which describes for Windows some things you’d like to have happen when the volume is mounted. I added the following text to an Autorun.inf file on the root of my USB key:

action="Run KeePassX"

The “action” keyword allows Windows to display an option called “Run KeePassX” in the list of stuff to do when a drive is mounted that contains mixed media. Unfortunately, the graphic files (icons, bitmaps, etc.) of a Qt application are stored separately from the binary, so Windows interprets them as picture files. Since there are both pictures AND executables on the USB key, Windows doesn’t know what you really want to do, so it asks you every time you insert the USB key.

On Vista, you have a few more options. You can add entries under a “[Contents]” section that tell Vista exactly what to do in the case of a conflict. To me, it’s a no-brainer to have done this in XP, but that’s not the way things came out, so we have to put up with the confusion. Most often, CD-ROMs that contain executables designed to be run when the disk is inserted are installation discs for software you purchase. These have all sorts of media, but they often come packaged up in CAB or ZIP files, so Windows is not confused. There are only executables, so there’s no ambiguity. Windows just runs the setup.exe or install.exe program, as specified in the “open” tag.

When specifying an “action”, the “open” option tells Windows what to do if you select the “Run KeePassX” option in the pop up menu when the key is inserted. The “icon” option is really neat because it not only tells Windows what icon to display next to the action in the pop up, but also what icon to display in file explorer when the drive is mounted. The “shell” option is used to add a context menu option to the menu that comes up when you right-click on the drive in file explorer.
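Putting those options together, a complete Autorun.inf might look something like the following (the win32 subdirectory and executable name are assumptions, standing in for wherever the Windows KeePassX binary lives on the key):

```
[Autorun]
action=Run KeePassX
open=win32\KeePassX.exe
icon=win32\KeePassX.exe
shell\keepassx=&Run KeePassX
shell\keepassx\command=win32\KeePassX.exe
```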

Microsoft’s Autorun documentation explains more about Autoplay on the various Windows platforms.

Now, I’ve got the best of both worlds, and access to my password database from either place. Could I be any happier about the state of my personal Internet security? I don’t think so.

[Edit: I lost my password database the other day – it was corrupted when I pulled the USB key out of my Linux machine while the program was open. I think the corruption occurred because I popped it into a Windows machine, opened the database, and then put the key BACK into the Linux USB socket, and saved the database. In any case, I HIGHLY recommend you backup your password database once in a while. Luckily, I had a recent copy saved off somewhere, and I was able to get back about 95 percent of my data. Now, I keep a backup of the database on the same USB key in a “Backup” directory, which I overwrite quite often. I also keep a backup on another disk that I backup once a week or so, if I’ve made changes during the interim.

One person I know stores his database in a subversion repository, and updates it on any of his machines. That’s nice to get the latest version on any of your own machines, but it doesn’t help you when you want to access your store on a machine that’s not yours. Still, it’s a good idea to keep it in a repository like this.]

Autotools: The Learning Curve Lessens – Finally!

The Long and Winding Road

I’ve been waiting a LONG time to write this blog entry–over 5 years. Yesterday, after a final couple of eye-opening epiphanies, I think I finally got my head around the GNU Autotools well enough to explain them properly to others. This blog entry begins a series of articles on the use of Autotools. The hope is that others will not have to suffer the same painstaking climb.

If the articles are well received, there may be a book in it in the end. Believe me, it’s about time for a really good book on the subject. The only book I’m aware of (besides the GNU software manuals) is the New Riders’ 2000 publication of GNU AUTOCONF, AUTOMAKE and LIBTOOL, affectionately known in the community as “The Goat Book”, and so named for the picture on the front cover.

The authors, Gary Vaughan, Ben Elliston, Tom Tromey and Ian Lance Taylor, are well-known in the industry, to say the least – indeed, they’re probably the best people I know of to write such a book. However, as fast as open source software moves these days, a book published in 2000 might as well have been published in 1980. Nevertheless, because of the absolute need for any book on this subject, it’s still being sold new in bookstores. In fairness to the authors, they’ve maintained an online copy through February of 2006 (as of the last time I checked). Regardless, even two years is too long in this business.

As well written as it is, the biggest gripe I have with the Goat Book is the same one I have with the GNU manuals themselves: the sheer number of bits of information that are simply assumed to be understood by the reader. The situation is excusable–even reasonable–in the case of the manuals, due to the limited scope of a software manual. My theory is that these guys have been in the business for so long (decades, actually) that many of these topics have become second nature to them.

The problem, as I see it, is that a large percentage of their readership today are young people just starting out with Unix and Linux. You see, most of these “missing bits” are centered around Unix itself. Sed, for example: What a dream of a tool to work with – I love it! More to the point, however: A solid understanding of the basic functionality of sed is important to grasping the proper use of Autotools. This is true because much of the proper use of Autotools truly involves the proper extension of Autotools.
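For the uninitiated: sed applies editing commands to each line of a text stream, and the substitute command is its workhorse. A one-line taste:

```shell
# replace every occurrence of 'foo' with 'bar' on each input line
echo "foo and foo" | sed 's/foo/bar/g'
# prints: bar and bar
```

That single idea–pattern-driven, line-by-line rewriting–is the machinery behind much of what Autoconf-generated scripts do to your files.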

Another problem is that existing documentation is more reference material than solution-oriented information. I’ll try to make these articles solve real problems, rather than just find new ways to regurgitate the same old reference material found in the manuals.

As you’ve no doubt gathered by now, I’m not an expert on this topic. I don’t have decades of experience in Unix or Linux – well, no more than one decade anyway. But I am a software engineer with significant experience in systems software design and development on multiple hardware and software platforms. As I mentioned in my opening paragraph, I’ve worked extensively with Autotools for about 5 years now. Most of that time was spent in trying to get these tools to do things the wrong way – before finally discovering the way others were doing it.

Claiming not to be an expert gives me a bit of literary – and technical – latitude. To put it bluntly, I’m hoping to gather comments on these articles. So I state here and now: Please comment. I welcome all comments on methods, techniques, tradeoffs and even personal experiences.

I make this statement right up front for the sake of my integrity. I seem to recall a few years ago that Herb Sutter posted a series of articles on the C++ moderated newsgroup entitled GotW – an acronym for “Guru of the Week”. Each article presented a problem in object-oriented software design, specifically related to C++, and elicited responses from the community at large. In and of itself, it was a great idea, and the response was overwhelming. I very much enjoyed reading the GotW threads. But I must say that it surprised me a bit when I saw a book published a year later – Exceptional C++ – that contained most of the information in these threads. Well, I say, good for Herb. And in fairness, perhaps he didn’t plan to write the book until after he’d received such great response. But it feels more comfortable to me to indicate my intentions up front.

Who Should Use Autotools?

I’m going to make a rather broad and sweeping statement here: If you’re writing open source software targeting Unix or Linux systems, then you should be using GNU Autotools. I’m sure I sound a bit biased here. I shouldn’t be, given the number of long nights I’ve spent working around what appeared to be shortcomings in the Autotools system. Normally, I would have been angry enough to toss the entire thing out the window and write a good hand-coded Makefile. But the one truth that I always came back to was the fact that there are literally thousands of projects out there that are apparently very successfully using Autotools. That was too much for me. My pride wouldn’t let me give up.

What if you don’t work on open source software? What if you’re writing proprietary software for Unix or Linux systems? Then, I say, you should still be using Autotools. Even if you only ever intend to target a single distribution of Linux, Autotools will provide you with a build environment that is flexible enough to allow your project to build successfully on future versions or distributions with virtually no changes to the build scripts. This fact, in and of itself, is enough to warrant my statement.

In fact, about the only scenario where it makes sense NOT to use GNU Autotools is the one in which you are writing software for non-Unix platforms only – Microsoft Windows comes to mind. Some people will tell you that Autotools can be successfully used on Windows as well, but my opinion is that the POSIX-based approach to software configuration management is just too alien for Windows development. While it can be done, the tradeoffs are too significant to justify the use of an unmodified version of Autotools on Windows.

I’ve seen some project managers develop a custom version of Autotools that allows the use of all native Windows tools. These projects were maintained by people who spent much of their time reconfiguring Autotools to do things it was never intended to do in a totally hostile and foreign environment. Quite frankly, Microsoft has some of the best tools on the planet for Windows software development. If I were developing a Windows software package, I’d use Microsoft’s tools exclusively. In fact, I often write portable software that targets both Linux and Windows. In these cases, I maintain two separate build environments – one for Windows, and one based on Autotools for everything else.

An Overview of Autoconf

If you’ve ever downloaded, built and installed software from a “tarball” (a gzipped or bzipped tar archive, often sporting one of the common extensions, .tar.gz, .tgz or .tar.bz2), then you’re well aware of the fact that there is a common theme to this process. It usually looks something like this:

$ gzip -cd hackers-delight-1.0.tar.gz | tar -xvf -
$ cd hackers-delight-1.0
$ ./configure
$ make all
$ sudo make install

NOTE: I have to assume some level of knowledge on your part, and I’m stating right now that this is it. If you’ve performed this sequence of commands before, and you know what it means, then you’ll have no trouble following these articles.

Most developers know and understand the purpose of the make utility. But what’s this “configure” thing? The use of configuration scripts (often named simply “configure”) started a long time ago on Unix systems because of the variety imposed by the fast-growing and divergent set of Unix platforms. It’s interesting to note that while Unix systems have generally followed the de facto standard Unix kernel interface for decades, most software that does anything significant generally has to stretch outside these boundaries. I call it a de facto standard because POSIX wasn’t actually standardized until recently. POSIX as a standard was more a documentation effort than a standardization effort, although it is a true standard today. It was designed around the existing set of Unix code bases, and for good reason – it takes a long time to incorporate significant changes into a well-established operating system kernel. It was easier to say, “Here’s how it’s currently being done by most,” than to say, “Here’s how it should be done – everyone change!” Even so, most systems don’t implement all facets of POSIX. So configure scripts are designed to find out what capabilities your system has, and to let your Makefiles know about them.
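To make that concrete, here is a minimal sketch of the sort of check a configure script performs – hand-rolled, assuming only a C compiler reachable as cc (or via $CC); the HAVE_UNISTD_H name simply follows the common convention for such defines:

```shell
#!/bin/sh
# Hand-rolled feature check of the kind configure scripts automate: try to
# compile a trivial program that includes a header, and record the result
# in config.h for the project's source code to consult.
: > config.h
cat > conftest.c <<'EOF'
#include <unistd.h>
int main(void) { return 0; }
EOF
if ${CC:-cc} -o conftest conftest.c 2>/dev/null; then
    echo '#define HAVE_UNISTD_H 1' >> config.h
fi
rm -f conftest conftest.c    # clean up the scratch files
```

Multiply this by every header, library, utility and compiler quirk a portable project cares about, and you can see why these scripts grew so large.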

This approach worked well for literally decades. In the last 15 years, however, with the advent of dozens of Linux distributions, the explosion of feature permutations has made writing a decent configure script very difficult – much more so than writing the Makefiles for a new project. Most people have generated configure scripts for their projects using a common technique: copy and modify a similar project’s configure script.

Autoconf changed this paradigm almost overnight. A quick glance at the AUTHORS file in the Savannah Autoconf project repository will give you an idea of the number of people who have had a hand in the making of autoconf. The original author was David MacKenzie, who started the autoconf project as far back as 1991. Now, instead of modifying, debugging and losing sleep over literally thousands of lines of supposedly portable shell script, developers can write a short meta-script file, using a concise macro API language, and let autoconf generate the configure script.

A generated configure script is more portable, more correct, and more maintainable than a hand-coded version of the same script. In addition, autoconf often catches semantic or logic errors that the author would have spent days debugging. Before autoconf, it was not uncommon for a project developer to literally spend more time on the configure script for the build environment than on the project code itself!

What’s in a Configure Script?

The primary tasks of a typical configure script are:

  • Generate an include file (often called config.h) for inclusion by project source code.
  • Set environment variables so that make can quietly select major build options.
  • Set user options for a particular make environment – such as debug flags, etc.

For more complex projects, configure scripts often generated the project Makefile(s) from one or more templates maintained by the project manager. A Makefile template would contain configuration variables in an easily recognized format. The configure script would replace these variables with values determined during configuration – either from command line options specified by the user, or from a thorough analysis of the platform environment. Often this analysis would entail such things as checking for the existence of certain include files and libraries, searching various file system paths for required utilities, and even running small programs designed to indicate the feature set of the shell or C compiler. The tool of choice here for variable replacement was sed. A simple sed command can replace all of the configuration variables in a Makefile template in a single pass through the file.
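That single sed pass can be sketched like this, using the @-delimited variable format that autoconf later standardized on (the variable names and values here are illustrative):

```shell
#!/bin/sh
# One pass of sed over a Makefile template: every @-delimited
# configuration variable is replaced with its configured value,
# producing the final Makefile.
cat > Makefile.in <<'EOF'
PACKAGE = @PACKAGE@
VERSION = @VERSION@
EOF
sed -e 's|@PACKAGE@|my-project|g' \
    -e 's|@VERSION@|1.0|g' Makefile.in > Makefile
cat Makefile
# prints:
#   PACKAGE = my-project
#   VERSION = 1.0
```

The values on the right-hand side of each sed expression are exactly what a configure script computes – from user options or from platform analysis.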

Autoconf to the Rescue

Praise to David MacKenzie for having the foresight to – metaphorically speaking – stop and sharpen the axe! Otherwise we’d still be writing (copying) and maintaining long, complex configure scripts today.

The input to autoconf is … (drum roll please) … shell script. Man, what an anti-climax! Okay, so it’s not pure shell script. That is, it’s shell script with macros, plus a bunch of macro definition files – both those that ship with an autoconf distribution, as well as those that you write. The macro language used is called m4. “m-what?!”, you ask? The m4 utility is a general purpose macro language processor that was originally written by none other than Brian Kernighan and Dennis Ritchie in 1977. (The name m4 means “m plus 4 more letters” or the word “macro” – cute, huh?).

Some form of the m4 macro language processor is found on every Unix and Linux variant (as well as other systems) in use today. In fact, this proliferation is the primary reason for its use in autoconf. The design goals of autoconf included primarily that it should run on all systems without the addition of complex tool chains and utility sets. Autoconf depends on the existence of relatively few tools, including m4, sed and some form of the Bourne shell, as well as many of the standard Unix utilities such as chmod, chown, mkdir, rm, ln and others. Autoconf generates somewhere around 15 thousand lines of portable shell script code that is unrelated to any additional code that you add to its main input file! This overhead is boilerplate functionality that existed in most of the well-designed configure scripts that were written (copied) and maintained in the days before autoconf.

Autoconf in Action

Probably the easiest way to get started with autoconf is to use the autoscan utility to scan your project directory from the root down, and generate the necessary configure.ac script – the primary input file to autoconf. If you’d rather do it manually, you can start with as few as three macro calls, as follows:

# configure: generated from configure.ac by autoconf
AC_INIT([my-project], [1.0])
AC_CONFIG_FILES([Makefile])
AC_OUTPUT

echo "Configuration for package ${PACKAGE}, version ${VERSION} complete."
echo "Now type 'make' to continue."

In future articles, I’ll build on this initial script by adding additional macros and shell script to solve various problems that I’ve run into in the past. I believe these to be common problems related to build environments, and I expect others will feel the same.

AC_INIT actually takes three parameters: the package name, the package version and an email address for reporting bugs. The email address is optional; m4 allows trailing parameters (and their separating commas) to simply be omitted, as shown in the example. AC_INIT sets some project definitions that are used throughout the rest of the generated configure script. These may be referenced later in the script as the environment variables ${PACKAGE} and ${VERSION}, as indicated by the echo statements at the bottom of the script.
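For completeness, here’s what the same call looks like with the optional third parameter supplied (the address is just a placeholder):

```
AC_INIT([my-project], [1.0], [project-bugs@example.org])
```

The square brackets are m4 quotation marks; they keep autoconf from expanding anything inside the parameter text.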

This example assumes you have a template for your Makefile called Makefile.in in your top-level project directory (next to the configure.ac script). This file should look exactly like your Makefile, with one exception: any text you want autoconf to replace should be marked with autoconf replacement variables, like this:

# Makefile: generated from Makefile.in by autoconf

PACKAGE = @PACKAGE@
VERSION = @VERSION@

all : $(PACKAGE)

$(PACKAGE) : main.c
    echo "Building $(PACKAGE), version $(VERSION)."
    gcc main.c -o $@

In fact, any file you list in AC_CONFIG_FILES (separated by white space) will be generated from a file of the same name with a .in extension, found in the same directory. Autoconf generates sed commands into the configure script that perform this simple string replacement when the configure script is executed. Sed is a Stream EDitor, which is a fancy way of saying that it doesn’t require an entire source file to be loaded into memory while it’s doing its thing. Rather, it watches a stream of bytes as they go by, replacing text in the stream with other text, as specified on its command line. The expression list passed to sed by the configure script is built by autoconf from a list of variables defined by various autoconf macros, many of which we’ll cover in greater detail later.

Note in these example scripts that we’ve used three different kinds of variables: autoconf replacement variables are text surrounded by ‘@’ signs; environment variables are indicated by normal shell syntax, like this: ${variable}; and make variables, which are almost the same as shell variables, except that parentheses are used instead of curly braces: $(variable). In fact, we set make variables from the text replaced by autoconf at the top of Makefile.in. If you were to look at the contents of the generated Makefile, this is what you’d see:

# Makefile: generated from Makefile.in by autoconf

PACKAGE = my-project
VERSION = 1.0

all : $(PACKAGE)

$(PACKAGE) : main.c
    echo "Building $(PACKAGE), version $(VERSION)."
    gcc main.c -o $@

The important thing to notice here is that the autoconf variables are the ONLY items replaced in Makefile.in while generating the Makefile. The reason this is important to understand is that it helps you to realize the flexibility you have when allowing autoconf to generate a file from a template. This flexibility will become more apparent as we get into various use cases for the pre-defined autoconf macros, and even later when we delve into the topic of writing your own autoconf macros.


It would be a great learning experience to take an existing project and just apply autoconf to the task of generating your configure script. Forget about the rest of the Autotools right now. Just focus on autoconf. There are actually a fair number of popular open source projects in the world that only use autoconf.

I’ll continue my treatise on autoconf in the next article. In the meantime, please use the following short reading list to put you to sleep at night for the next month or so! You really do need the background information. I’ll try to cover the basics of each of these next time, but you’ll want to be at least familiar with them before then.

  1. The GNU Autoconf Manual
  2. The GNU M4 Manual
  3. The General Electric SED Tutorial
  4. Additional Reading on Unix Shell Script, etc

Experiences with Linux Hardware Config

I got a new laptop at work in late August. The date is especially significant because it’s now mid-October and I’m still sorting out issues with video drivers, network cards, and bluetooth functionality. It’s a Lenovo T60p – a nice machine. Speedy. Sleek. Full-featured – it even comes with some built-in biometric features. Oddly, these are the sorts of hardware features that open source geeks love to play with. And it’s a good thing for us users, because otherwise they’d just be extra baggage on an otherwise nice machine. Manufacturers just don’t spend a lot of time yet on Linux drivers.

For example, the package I ordered came with a Logitech M-RBB93 bluetooth wireless mouse. Now, this is a nice piece of hardware. I’ve had wireless mice before, and they all come with a USB fob that takes most of the joy out of using a wireless mouse. This mouse has an on/off switch on the bottom. That’s the extent of the init/shutdown process. I love it! But it didn’t work with my laptop out of the box as it should have. I didn’t even bother to open the CD that came with it. I would only have found myself disappointed by the instructions, as they nearly always begin with “Press the Start button, and select the Run option…” I hope this will change in the near future – not everyone runs Windows these days.


My new laptop came with SLED 10 (SuSE Linux Enterprise Desktop, version 10) pre-installed. Now, this Linux variant is clearly designed for non-technical people, as it comes completely configured to work well with several hardware configurations, including my Lenovo T60p. But it still had troubles with unforeseen additions such as the bluetooth mouse. It turns out that SLED 10 does work pretty well with the mouse, but you need to perform a few command-line gymnastic stunts in order to get the laptop to connect. Secretaries and executives will probably just toss the mouse in the garbage can, assuming it’s broken. Geeks like me know better.

I didn’t even hope that I could get the fingerprint reader working. But it turns out that there’s an entire community surrounding this little built-in device, known affectionately as the ThinkFinger. There’s a wiki site for configuration that contains very complete information on getting the fingerprint reader working on a variety of platforms, and Ubuntu actually has its own ThinkFinger community. I must say, it works very well. Sometimes it requires more than one pass to get a good reading, but usually it works on the first try – much of the quality of experience involves training yourself to swipe your finger in just the right way, but it’s an easy habit to pick up.

ATI is making big advances in coming to terms with the open source world, but they still have a long way to go. Five years ago, I installed a version of Mandrake Linux on my home computer (dual boot) just to play with it. I had an ASUS NVidia card installed. The desktop graphics came up out of the box in 800 x 600, 8-bit color. I was a bit disappointed at first, but then I decided to dig a bit deeper. I went to NVidia’s website, downloaded their latest Linux drivers, ran the installer and rebooted. When it came back up, I was viewing my X desktop in 1280 x 1024, 24-bit color – perfect! I didn’t even have to select the resolution and color depth (although I had plenty of choices).

I can only wish for the same experience with ATI drivers. Five years ago, ATI drivers for Linux were unheard of. The answer to your question was simply this: You bought the wrong video card. Since AMD acquired ATI, things have changed. Video drivers for ATI cards can now be downloaded from AMD’s web site. They even (usually) work – if you have the patience and technical prowess to mess with them for long enough. But to the average user, my answer to your question is generally still the same: You bought the wrong video card. Give it a couple of years, and ATI will have caught up to where NVidia was five years ago. Now, don’t get me wrong. ATI cards are wonderful, but if you want to take full advantage of them, you’d better stick to Windows.

OpenSuSE 10.2

SLED 10, being designed for non-technical folks, has been tweaked and tested such that many of the processes that have to be done manually in even later opensuse offerings are well integrated and much more automated. However, SLED 10 is old. I’m sorry, but anything older than a year in this industry is out of date. I’m a developer, so I need the latest tools and libraries, and many of these just won’t install on SLED 10, so the first thing I did was upgrade to opensuse 10.2.

SLED 10 is actually ahead of opensuse 10.2 when it comes to integration. While the software may be older, the amount of integration testing and tuning is much greater with enterprise-level offerings. Frankly, given what I know about opensuse 10.2, I can’t wait for the next version of SLED. I still won’t use it, but for my non-technical co-workers, it will be a wonderful improvement.

Well, the devil (as they say) is in the details, so here they are:

Bluetooth Mouse

The bluetooth subsystem on Linux is called bluez, and the bluez project is hosted at bluez.org. The trouble with the bluez web site and packages is (like many free software offerings) a woeful lack of both technical and non-technical documentation. The maintainers have done a great job of making it easy to build and install: unpack the tarball, type “./configure; make; sudo make install” and you’re done. The makefile installs a dozen tools and libraries, and even man pages for most of them. The trouble is that there’s no overarching documentation that describes WHY you’d want to use any of them.

Most of the tools are fairly low-level, designed to be configured and consumed by system integrators to provide a good automated end-user experience. The problem, of course, is that system integrators in the Linux world generally stop short of the finish line.

Bluetooth is designed to work with a wide variety of devices. Most of these fit into a few categories. Bluetooth mice, for instance, are classified as input devices. The bluez tool that deals with human interface devices is known as hidd – human interface device daemon. This daemon is a system service that is supposed to be started by your system init scripts at boot time. It can also be called by a user logged in as root in order to configure it to bind to your mouse.

If you look on the bottom of your mouse, you’ll see what looks like an ethernet MAC address – a six-part, colon-separated set of values, two hexadecimal digits each (mine is 00:07:61:6b:92:13). You can tell hidd to bind to your mouse by using a command like this:

>sudo hidd --connect 00:07:61:6b:92:13

Another way of doing this is to tell hidd to just search for all devices it can see:

>sudo hidd --search

But if you happen to have more than one mouse lying around, it may connect to the wrong one.

The trouble with opensuse 10.2 is that it’s about 89 percent there with respect to bluez integration. Sometimes this works; other times you have to resort to tricks like adding the above hidd --connect command to your initialization startup scripts, so that it will connect every time. The hidd daemon is designed to remember connections, and the latest offerings really do work, but you may have to play with it for a while to get it to work consistently.
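That startup-script trick can be sketched as follows – appending the connect command to a local boot script, and only once. On SUSE-family systems the customary file is /etc/init.d/boot.local, but that’s an assumption; this sketch writes to a stand-in file (and uses my mouse’s address) so it’s safe to try anywhere:

```shell
#!/bin/sh
# Sketch: make sure the hidd connect command runs at every boot by
# appending it to a local startup script. BOOT_LOCAL defaults to a
# stand-in demo file rather than the real /etc/init.d/boot.local.
BOOT_LOCAL=${BOOT_LOCAL:-./boot.local.demo}
MOUSE_ADDR=00:07:61:6b:92:13    # the address printed on the mouse

# append the command only if it isn't already present
if ! grep -q "hidd --connect $MOUSE_ADDR" "$BOOT_LOCAL" 2>/dev/null; then
    echo "hidd --connect $MOUSE_ADDR" >> "$BOOT_LOCAL"
fi
cat "$BOOT_LOCAL"
```

Point BOOT_LOCAL at the real boot script (as root) to apply it for real.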

Fingerprint Reader

Download the latest version of the thinkfinger package (0.3 at the time of this writing) from the thinkfinger project site. The package is easily compiled and installed. From the root of the directory into which you extracted the package, just run the following sequence of commands:

# ./configure; make
# make install

Next, you’ll want to configure the PAM module that comes with the package so that you can log into your desktop using your fingerprint. PAM modules are configured using the /etc/pam.d/common-auth file. Edit this file with your favorite editor and add the following line BEFORE the line containing the reference to pam_unix2.so:

auth  sufficient  pam_thinkfinger.so

This will cause the PAM (Pluggable Authentication Modules) library to query the fingerprint reader each time a password is requested. But you’re only half done. You have to supply credentials in the form of .bir files. For this, you use the tf-tool command (as root):

#tf-tool --add-user jcalcote
ThinkFinger 0.3 (http://thinkfinger.sourceforge.net/)
Copyright (C) 2006, 2007 Timo Hoenig

Initializing... done.
Please swipe your finger (successful swipes 3/3, failed swipes: 1)... done.
Storing data (/etc/pam_thinkfinger/jcalcote.bir)...done

Note that your .bir file was stored as username.bir in the /etc/pam_thinkfinger directory. Now, if all has gone well, the login prompt should say, “Password or swipe finger:”, instead of simply “Password:”. You’ll also get a prompt like this at the command line when you type “su”.

ATI Video Drivers

The Lenovo T60p comes with an integrated ATI Mobility FireGL V5250 video card. The “Mobility” part means it’s for laptops, the FireGL part means it’s one of their high-end offerings (alongside, but slightly lower than, the Radeon series), and the V5250 part means it’s close enough to a V5200 that it works with any drivers designed for the V5200 – and that’s a good thing, because the drivers don’t actually recognize the card model number.

At the time of this writing, the latest driver available on ATI’s web site was 8.41.7. The most difficult issue to deal with here is that ATI’s website driver guide will not lead you to the latest drivers for your card unless it’s a fairly late Radeon series card. This doesn’t mean the driver won’t work with your FireGL card – it just means that ATI hasn’t spent the testing resources on your card with that driver, so they aren’t going to lead you to it. Here’s the deal: Most of ATI’s drivers will work with most of their cards just fine – they’re all based on the same or similar chip sets, so the drivers can’t really tell the difference. If you want the latest features, you’ll have to just get the latest driver and see if it works with your card. In fact, I’ve found that the 8.41.7 driver does NOT work with my card, but the previous 8.40.4 driver works fine.

Drivers come from ATI in the form of an executable that runs either from the command line, or as a GUI-based application. This application is actually designed to build a variety of driver installation packages for several different flavors and versions of Linux. For instance, it can build an rpm package for opensuse or redhat. It can also build .deb packages for debian.

To use the driver generator, use the following command-line syntax:

> su
# ./ati-driver-installer-8.40.4-x86.x86_64.run --help          (optional)
# ./ati-driver-installer-8.40.4-x86.x86_64.run --listpkg       (optional)
# ./ati-driver-installer-8.40.4-x86.x86_64.run --buildpkg SuSE/SUSE102-IA32

This will generate an rpm installer package for your system. Note that the first two ati* commands are only for your information. The --listpkg option will display a list of all packages that CAN be generated by the package generator. Choose the one that’s closest to your system type. After this command has completed, you’ll find an rpm package named fglrx_7_1_0_SUSE102-8.40.4-1.i386.rpm in the same directory.

Here’s the tricky part. This installer is complicated. It actually builds a kernel module as part of the installation process, which means that you’ll have to have kernel source and development packages installed in order to install this package. So ensure that you have the appropriate kernel development libraries installed on your system.
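A quick way to sanity-check that prerequisite before running the rpm – a small sketch, assuming the conventional /lib/modules/$(uname -r)/build location for the running kernel’s build tree:

```shell
#!/bin/sh
# Sketch: verify that a kernel build tree is present before installing a
# driver package that compiles its own kernel module.
have_kernel_build_tree() {
    # $1 (optional): directory to check; defaults to the running kernel's
    _dir=${1:-/lib/modules/$(uname -r)/build}
    [ -d "$_dir" ]
}

if have_kernel_build_tree; then
    echo "kernel build tree found"
else
    echo "install the kernel source/development packages first" >&2
fi
```

If the check fails, install your distribution’s kernel-source and kernel development packages, then retry the rpm.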

The opensuse community and ATI itself provide a YUM repository for various flavors of the opensuse 10.2 kernel (note that this repository is not accessible via a web browser – only through YUM). This package is NOT the same as the one you just generated. It’s pre-configured to run against a specific kernel version with a specific set of patches. Personally, I like the approach taken by this ATI package generator better. If you install kernel patches, you’ll either have to get matching updated community drivers, or simply reinstall from the rpm you just generated. It will rebuild the kernel module against the latest libraries and headers installed with those patches. If the kernel changes too much, then you’ll need to get a later ATI driver that’s designed to work with the latest kernel.

Now install the drivers with this set of commands (It’s best if you do this from a tty console – press Ctrl-Alt-F1):

Login: root
#init 3
#rpm -ivh fglrx_7_1_0_SUSE102-8.40.4-1.i386.rpm
#sax2 -r -m 0=fglrx
#init 5

You should now be running with your ATI drivers. To test your configuration, open a terminal window, and type:

>glxinfo
You should see the following lines among the output:

client glx vendor string: ATI
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: ATI MOBILITY FireGL V5250
OpenGL version string: 1.2 (2.0.6747 (8.40.4))

ATI drivers are not as well integrated as they could be. They don’t hook into sax2 so that you can toggle settings and enable or disable 3D mode. However, they do allow you to configure clone and xinerama modes from sax2, if you want. This situation can cause some frustration until you understand it. Basically, 3D hardware acceleration can’t be disabled, regardless of what sax2 tells you its current state is.

To prove this to yourself, run the fgl_glxgears program that’s installed with the firegl drivers. You should see a spinning cube whose faces each contain a set of gears spinning within the plane of the face. You can’t do this in software, so if you have any sort of smooth performance in this demo, then you’re definitely running with hardware acceleration enabled. Note that there’s a more basic program called glxgears. This one shows a simple set of gears spinning in one plane.

Compiz to Beryl to Compiz-Fusion

Of course, after getting your 3D accelerated drivers installed, you’ll want to do something with your system that will prove the worth of all that effort every minute that you use your computer. This is where compiz comes in. You know all that yummy eye candy that Mac OSX provides for its user experience? Well, Linux isn’t that far behind. To quote the home page:

“Compiz is a compositing window manager that uses 3D graphics acceleration via OpenGL. It provides various new graphical effects and features on any desktop environment, including Gnome and KDE.”

To enable the required OpenGL features, you’ll have to switch your display manager server from xorg to xgl. The default display manager server is the one that comes with the xorg system. It’s tried and true, and doesn’t often have a problem. In the vernacular, it’s stable. The xgl display manager server uses OpenGL to do everything done manually by the xorg server. The community calls xgl “experimental”, but the fact is it’s pretty good lately.

To change from xorg to xgl, you need to use your system configuration editor (YaST | System | etc/sysconfig Editor). From the menu on the left, choose Desktop | Display Manager | DISPLAYMANAGER_XSERVER. Change the setting on the right from “Xorg” to “Xgl”.

Close your applications and restart XWindows by pressing Ctrl-Alt-Backspace. If everything comes up as before, then you’re set. Now go to your main menu and from the section entitled “Look and Feel”, select “Desktop Effects”. The information in this dialog is a bit disconcerting. It tries to tell you that you can’t enable desktop effects (compiz) because your hardware is not recognized. It also tries to tell you that 3D acceleration is not enabled. Don’t forget that ATI drivers have bypassed the sax2 hooks for this feature. So applications that use sax data to determine 3D acceleration state are going to be misled into believing it’s not enabled. But just select “Enable Desktop Effects” at the bottom anyway (if it’s not already done for you). When you exit this dialog, you should see your windows doing cool stunts (sometimes without your aid or approval).

Probably because of the “experimental” nature of Xgl, occasionally you will lose your window manager. The effects of this are simple – the title bars on all of the windows on your screen will disappear, making it difficult at best to accomplish anything. Easily remedied however – just restart XWindows (Ctrl-Alt-Backspace). Unless you’ve really hosed things up, it should restart the window manager correctly.

OpenSuSE 10.3

After all of that (and that took me a month of research), opensuse 10.3 was released on the 3rd of October. I’m a bleeding edge sort of guy (if you couldn’t tell by now), so I immediately upgraded my 10.2 system. Believe it or not, nearly everything worked without a lot of tweaking and configuring in opensuse 10.3.

The only problem I’m having at this point is with my wireless network card. I got myself into a situation where there were two entries for my wireless card in the Network Devices dialog. One of them came from the udev hardware detection subsystem, and was listed as “unconfigured”. The other was a copy of the detected card that was listed as “DHCP” (meaning, configured to use DHCP). When I would try to delete the configured entry and then configure the detected entry, it would look good until I closed the dialog and re-entered it, whereupon it would look as it did before. The solution to this problem finally presented itself accidentally, as I tried to do what I’d tried before, but in reverse order. That is, I first configured the detected card, and THEN deleted the originally configured card. For some reason, this worked.

One other problem I’m having with wi-fi is that I can’t seem to connect to my wireless network at work. At work, we have a wireless network with an unadvertised or “hidden” ssid – the network identifier. In order to connect to a network with a hidden ssid, you have to know the value of the ssid and specify it when you attempt to connect. I still can’t connect, and I haven’t got a clue why not. I can only assume there’s some sort of bug in the wireless drivers for 10.3, because I can connect just fine at home, where my ssid is advertised. I’ve googled this one for hours, but apparently no one else has had this problem, or they’re not speaking up. In truth, I did find some references to a problem like this last November – nearly a year ago – but it was quickly resolved with a patch to the wlan sub-system. Apparently, the bug is back with a vengeance – at least on my system.

But these things tend to sort themselves out in fairly quick order. People don’t like to go without network access, and whether or not they’re talking about it, this sort of defect is often more wide-spread than it appears at first glance.

Regarding ATI video drivers and opensuse 10.3: ATI provides a web-based repository for 10.3 drivers alongside their 10.2 repository, but be aware that it provides an rpm package with the 8.41.7 drivers. You’ll perhaps recall that these drivers didn’t work with my V5250 FireGL card, and this repository version is no exception. I tried them, and then had to back off to the 8.40.4 drivers. YMMV.

The bluetooth subsystem is substantially enhanced on 10.3. I was able to bind to my mouse using (get this) a GUI interface! If you’re coming from a Windows background, you’re no doubt laughing, but then I’m not talking to you, am I? 🙂

The fingerprint reader actually has a YaST panel plugin in 10.3. The installer detects the fingerprint reader and ensures that the appropriate packages are installed, so you don’t have to go looking for it.

Despite the fact that both bluetooth and biometric hardware integration are much better in 10.3, I still upgraded these two packages from bluez and thinkfinger – probably because I’m a glutton for punishment. But the latest packages do provide some small bit of extended functionality.

All in all, I’m so pleased with my opensuse 10.3 laptop that my co-workers think I’m weirder than I really am, walking around the office with a big grin on my face. But they’re not laughing at me when I show them something cool that my machine can do that theirs can’t.