Reconsider Your Definition of Unreadable

I get really tired of reading blog posts and comments about programming language and library features that are dismissed as “unreadable”, “confusing”, or “obfuscating”. The simple fact is, what makes a language or library feature unreadable or confusing is ignorance and lack of familiarity.

Recently I read a Medium article comparing the Java Lombok library with the Java Manifold project. A commenter claimed that both of them (especially Lombok) were confusing and hard to understand, making the code more difficult to read. This is perhaps true for people who haven’t used Lombok for more than a few days. In fact, if they took the time to think back to when they were new to Java, they might remember that everything was confusing and hard to understand for a while.

Another place where I’ve encountered this sentiment is in newer language features. For example, back when Java streams were new, many developers disliked them because they were too hard to read. Today, I can’t conceive of a reasonably complex Java program that doesn’t use streams to enhance readability. It all depends on how familiar you are with the construct. A new developer coming into the team may throw up his hands in despair after running into one complex use of streams after another, or he may love you for using this elegant language feature to manipulate data structures, rather than several dozen lines of more traditional code to accomplish the same thing.
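To make the comparison concrete, here is a small sketch (the Order type, its getters, and the orders collection are invented for illustration) showing the same summary written as a stream pipeline and as a traditional loop:

Map<String, Double> totals = orders.stream()
    .filter(o -> o.isShipped())
    .collect(Collectors.groupingBy(Order::getCustomer,
             Collectors.summingDouble(Order::getAmount)));

// The same thing written the traditional way:
Map<String, Double> totals2 = new HashMap<>();
for (Order o : orders) {
    if (o.isShipped()) {
        totals2.merge(o.getCustomer(), o.getAmount(), Double::sum);
    }
}

Whether the first form reads better than the second is, again, mostly a question of familiarity.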

I’ve also read comments about Spring and other such frameworks indicating that they “totally abuse annotations”. This is an inaccurate assessment, at best. Most of the annotations used by Spring are not even defined by Spring, but rather by more fundamental Java standards. Spring took proper advantage, where possible, of existing generic standards precisely because they wanted people to hit the ground running with their first Spring application. Nicely done, Spring developers. Complainers should dig a little deeper before forming opinions.

If you think Java is hard to understand, with tools like IntelliJ that write half the code for you, try using C++ for a while. I’m not complaining about C++. I love the language, along with all of its idiosyncrasies. But the nature of VM-based languages like Java and C# is that tools can be written that use the compiler and language specification to greatly benefit a new developer’s learning process. IntelliJ, for instance, will write a lot of code for you. I can’t count the number of times I’ve learned something interesting about Java simply by noticing and clicking on an IntelliJ highlight.

A lot of effort by a lot of people smarter than you and me goes into designing a programming language. I have yet to see one that I couldn’t appreciate for elegance within its target domain.

Developers who are not curious by nature should probably have chosen another profession. They may do well financially and, for the most part, get the job done, but they won’t get any real joy from their career. To them, it’s just a job. For those who are curious and passionate about programming, but are reluctant to accept new things, just give it a little more time, and you’ll come around in the end.

Cool Stuff I Learned on the Job

During the last week, I had the opportunity (after 5 years of Java) to return to writing some C code – yep, you read that right – not even C++, but straight C. I’m sure glad we have C99 and C11 now because I don’t know what I’d do if I couldn’t even declare variables within a for statement.

During this time, I realized that there are some really cool things I learned while working my first engineering job years ago at Novell. I was part of the Novell Directory Services team (later renamed eDirectory). I made a lot of friends in that job and a few of them taught me some really great techniques.

For example, I needed to use a singly linked list and (C being what it is, and all) there just aren’t any library routines for it, so you have to write it yourself. Yes, yes, there are myriad libraries out there at places like GitHub that supply all the code you could want – I’m talking about STANDARD library routines. The most complex function in the C standard library is qsort. After working in the Java world for the last 7 years, I find it hard to go back to a language where you have to write your own data structures.

I needed a way to traverse the list and remove an element matching some search criteria. Now, this is an old problem – people have solved it already, right? So I headed over to my favorite best-solution site, StackOverflow. If you need to know how the majority of folks do stuff, that’s where you go. I found… nothing… well, nothing elegant anyway. There were any number of solutions involving multiple if statements checking various boundary conditions to handle everything perfectly. Then I remembered what my old mentor Dale Olds taught me about this topic – or, rather, I remembered MOST of it. It took me the better part of an hour to remember in enough detail to actually reimplement it.

This blog post will be a running post, updated as I remember little techniques like this. Hopefully, by the time I’m done in a few years, I’ll have a nice library to rely on.

Remove an Element from a Singly Linked List

Let’s consider what we have in a singly linked list. Well, we have a head pointer and nodes with a next pointer. Perhaps a tail pointer, but only if you’re using it like a FIFO queue or something, where you can append to the tail and remove from the head. Let’s just stick to the basics – a head pointer and a next pointer in each node. To traverse from the head to a node matching some search criteria and remove that node, you must keep track of your current node and the previous node. Or do you? In fact, you only need to keep track of the current node and the address of the previous node’s next pointer, and herein lies the elegance. Here’s how we did it back in the day:

static struct node_t *head;
...
struct node_t **pprev, *cur;
for (pprev = &head, cur = *pprev; cur; pprev = &cur->next, cur = *pprev)
    if (is_match(cur)) {
        *pprev = cur->next; /* unlink cur */
        break;
    }
return cur;

The really elegant thing about this technique is that head pointer maintenance is automatic. There’s no need to do anything special if the matching node is the one pointed to by head. Because you’re only ever dealing with the address of the previous node’s next pointer, and head looks just like one of those when you’re dealing with its address rather than its value, you can treat it just like you would any other next pointer. Cool, huh?

Heads or Tails

This technique loses a bit of its elegance when you decide to maintain a tail pointer. Now you really do need to keep track of the address of the previous node; if the node you decide to remove is the current tail then you need to repoint the tail at the previous node. In short, the address of the previous node’s next pointer is no longer sufficient. Here’s a possible implementation:

static struct node_t *head, *tail;
...
struct node_t **pprev, *prev, *cur;
for (pprev = &head, prev = NULL, cur = *pprev; cur; 
        pprev = &cur->next, prev = cur, cur = *pprev)
    if (is_match(cur)) {
        if (cur == tail)
            tail = prev;
        *pprev = cur->next; /* unlink cur */
        break;
    }
return cur;

In this case, we’re tracking all the same information, plus a pointer to the previous node (which is NULL if we’re removing the first node).

Maven code generation – ftw!

Have you ever wanted to generate some of the code you use in a project from a data file?

Recently, I’ve been working on a class that provides configuration options to the application. The options come in from a file or are hard-coded, or simply come from defaults, but it’s nice to be able to access configuration options from a single class with type-safe ‘getters’ for each option. The trouble starts when the set becomes larger than, say, a dozen or so options. At that point the class starts to become a maintenance nightmare. Such classes usually access the property name and type a dozen times or more within the class definition. This is the perfect chance to try out template-based code generation.
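To illustrate the kind of repetition I mean, here is a hand-written sketch (the option names are invented for this example, and the options arrive via java.util.Properties). Each option’s name and type show up in the field, in the constructor, and in the getter, and that duplication grows with every new option:

public class Configuration {
    private final Integer maxConnections;
    private final String listenAddress;

    public Configuration(Properties props) {
        // each option's name and type repeated yet again here
        this.maxConnections = Integer.valueOf(
            props.getProperty("maxConnections", "10"));
        this.listenAddress = props.getProperty("listenAddress", "0.0.0.0");
    }

    public Integer getMaxConnections() { return maxConnections; }
    public String getListenAddress() { return listenAddress; }
}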

Maven

My first thought was to look for a maven plugin that provided such template-based, data-driven code generation. Hmmm – there is the replacer plugin, but the data source is the maven pom file. Well, ok, but I had in mind a more formal definition of the dictionary. I didn’t really want to bury my properties in some obscure pom file in the middle of a multi-module project. In fact, it would be nice if there existed some template-based file generator that would accept data from multiple sources, and even from multiple types of sources – xml, json, csv, whatever.

I’ve had some limited experience using Apache Velocity for this purpose in a past life. The trouble is my experience with it really was limited. I wasn’t the author of the system – just the consumer – all the work of integrating the Velocity engine into the maven build life cycle had already been done by someone else and I never bothered to see how they did it. A quick discussion with a friend who still worked there enlightened me to the fact that it took a LOT of custom code to get it working right.

Now it seems to me that this is not an uncommon thing to want to do – someone must have worked out all the kinks by now in a nicely integrated solution, right?

Freemarker

I looked at several code generation packages and finally landed on Apache Freemarker (the Apache foundation seems to be the final resting place for a lot of projects – many of which completely overlap in functionality – once their originators tire of them). Freemarker sports a powerful data-driven template language so it was a perfect fit for my needs, but – honestly – the main reason I went with it is the fmpp (freemarker pre-processor) project, which is a command-line front-end for the Freemarker template engine. While I could have used the Freemarker engine the same way – it’s also a jar – the engine requires data to be fed to it through its API, and the exec-maven-plugin is just not that sophisticated. No, I needed a command-line tool that would allow me to specify a data file as a command-line argument to the jar.

fmpp

The fmpp command-line is also very powerful and functionally complete. Additionally, it’s written in Java and, thus, comes packaged in a jar file which can be executed by the exec-maven-plugin‘s “java” goal:

<plugins>
  ...
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>exec-maven-plugin</artifactId>
    <version>1.6.0</version>
    <executions>
      <execution>
        <phase>generate-sources</phase>
        <goals>
          <goal>java</goal>
        </goals>
      </execution>
    </executions>
    <configuration>
      <includePluginDependencies>true</includePluginDependencies>
      <mainClass>fmpp.tools.CommandLine</mainClass>
      <sourceRoot>${basedir}/target/generated-sources/</sourceRoot>
      <arguments>
        <argument>-C</argument>
        <argument>${basedir}/src/main/templates/config.fmpp</argument>
        <argument>-S</argument>
        <argument>${basedir}/src/main/templates/com/</argument>
        <argument>-O</argument>
        <argument>${basedir}/target/generated-sources/com/</argument>
      </arguments>
    </configuration>
    <dependencies>
      <dependency>
        <groupId>net.sourceforge.fmpp</groupId>
        <artifactId>fmpp</artifactId>
        <version>0.9.15</version>
      </dependency>
      <dependency>
        <groupId>org.freemarker</groupId>
        <artifactId>freemarker</artifactId>
        <version>2.3.23</version>
      </dependency>
    </dependencies>
  </plugin>
  ...
</plugins>

A few aspects of this snippet deserve attention:

  1. I’m sure it’s obvious to every other maven user out there, but it’s never been obvious to me how you get the exec plugin to run as part of your build. There appear to be several ways of doing this, but the most succinct, and simplest, in my opinion, is to specify a life cycle phase in an execution. In my case, I wanted to generate source code to be used in the build process, so the “generate-sources” phase seemed appropriate.
  2. Being able to add a “dependencies” section to a plugin made sense to me. What did not make sense was having to explicitly tell the plugin to add those dependencies to the plugin’s class path. I mean – what would be the point of adding dependencies to the exec plugin unless you wanted those dependencies to be accessible by the plugin when you told it to run something?! Well, this is apparently not obvious to the authors of the plugin, because unless you set the “includePluginDependencies” tag to true, your attempts to execute any java code in the fmpp package will be in vain.
  3. Finally, I needed to upgrade the Freemarker engine to a newer version than the one that last shipped with fmpp. Adding that as a separate dependency allowed my specified version to override the one defined in the fmpp pom.

The fmpp command-line is driven by a combination of command-line arguments and a configuration file. You may use all command-line arguments, or all configuration file options, or any mix of the two. I chose to use a mix. My configuration file is minimal, just specifying the data source and where to find it, as well as what to do with the file names during translation:

removeExtensions: ftl
dataRoot: .
data: {
  properties: json(data.json, UTF-8)
}

The first line tells fmpp to remove the ftl extension from the templates as it generates source files. A template file might be named Configuration.java.ftl, while the corresponding generated source file would be Configuration.java.

The second line says where to look for the data file, relative to the configuration file (they’re in the same directory).

The third option – “data:” – tells fmpp where to look for data sources. I have one data source, a json file named data.json, and it uses UTF-8 text encoding. The term “properties” indicates the name of the root-level hash that will store the data in the json file. Here’s an example of some json data:

[
  {
    "type": "Integer",
    "section": "someSection",
    "name": "someProperty",
    "defaultValue": "1",
  },
  ...
]

With this data set, you can generate a java class from a template like this:

package com.example;

import ...

/**
 * NOTE: Generated code !! DO NOT EDIT !!
 * Reference: http://freemarker.org
 * See template: ${pp.sourceFile}
 */
public class Configuration implements Cloneable {
<#list properties as property>
    private final ${property.type} ${property.name};
</#list>
...

The possibilities are endless, really. The Freemarker template language is very powerful, allowing you to transform the input data in any way you can think of.
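For the sample data entry shown earlier, and assuming the template also emits a getter for each property (my real template does a bit more than the fragment above shows), the generated Configuration.java would contain roughly this:

public class Configuration implements Cloneable {
    private final Integer someProperty;
    ...
    public Integer getSomeProperty() {
        return someProperty;
    }
    ...
}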

IntelliJ

I use an IDE – I love my IDE – it does half my work for me. I remember a math teacher from middle school who was upset at the thought of tools that might cause you to forget some of the basics. But, let’s face it, software developers have so much to remember these days, it’s nice when a tool can take some of the load off of us.

One thing we can agree on about maven is that directory structure matters. Maven is all about using reasonable defaults for the 10,000 configuration options in a build, allowing you to override the few that need to change for your special needs. Thus, directory structure matters. It is, therefore, important to choose wisely where template files go and where generated sources are placed.

Referring back up to the pom file plugin definition, note that the fmpp “-O” command line option specifies where to place generated files. And maven has a place for them – in the target/generated-sources directory. IntelliJ will recognize these files as sources and index them properly. Additionally, since IntelliJ recognizes the generated-sources directory as the proper maven location for generated sources, it will also warn you when you try to edit one of these files.

The fmpp “-S” option specifies where the source templates can be found. These I placed in the src/main/templates directory. Now, the templates subdirectory of src/main is not a standard maven location but, not finding a better one, I figured it made sense to mirror the package directory structure used under src/main/java. If you know of a better location for such templates, please feel free to comment.
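For reference, the layout I ended up with looks roughly like this (matching the -C, -S, and -O arguments in the pom snippet above, with com/example standing in for the real package):

src/main/templates/config.fmpp
src/main/templates/data.json
src/main/templates/com/example/Configuration.java.ftl
target/generated-sources/com/example/Configuration.java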

Managing a Dynamic Java Trust Store

This is the latest (and probably last) in my series of client-side Java key and trust store management articles, and a good summary article for the topic, I hope.

It’s clear from the design of SSLContext in the JSSE that Java key and trust stores are meant to contain static data. Yet browsers regularly display the standard security warning dialog when connecting to sites whose certificates have expired or whose administrators haven’t bothered to purchase a CA-signed certificate. This dialog generally offers you three choices:

  • Get me out of here!
  • I understand the risks: add certificate for this session only
  • I understand the risks: add certificate permanently

In this article, I’d like to elaborate on what it means to “add certificate” – either temporarily or permanently.

Let’s start with a simple Java http(s) client:

public byte[] getContentBytes(URI uri, SSLContext ctx)
    throws Exception {
  URL url = uri.toURL();
  URLConnection conn = url.openConnection();
  if (conn instanceof HttpsURLConnection && ctx != null) {
    ((HttpsURLConnection)conn).setSSLSocketFactory(
        ctx.getSocketFactory());
  }
  InputStream is = conn.getInputStream();
  int bytesRead, bufsz = Math.max(is.available(), 4096);
  ByteArrayOutputStream os = new ByteArrayOutputStream(bufsz);
  byte[] buffer = new byte[bufsz];
  while ((bytesRead = is.read(buffer)) > 0)
    os.write(buffer, 0, bytesRead);
  byte[] content = os.toByteArray();
  os.close(); is.close();
  return content;
}

This client opens a URLConnection, reads the input stream into a byte buffer, and then closes the connection. If the connection is https – that is, an instance of HttpsURLConnection – it applies the SocketFactory from the supplied SSLContext.

NOTE: I’m purposely ignoring exception management in this article to keep it short.

This code is simple and concise, but clearly there’s no way to affect what happens during application of the SSL certificates and keys at this level of the code. Certificate and key management is handled by the SSLContext so if we want to modify the behavior of the SocketFactory relative to key management, we’re going to have to do something with SSLContext before we pass it to the client. The simplest way to get an SSLContext is to call SSLContext.getDefault in this manner:

byte[] bytes = getContentBytes(
    URI.create("https://www.example.com/"), 
    SSLContext.getDefault());

The default SSLContext is fairly limited in functionality. It uses either default key and trust store files (and passwords!) or else ones specified in system properties – often via the java command line in this manner:

$ java -Djavax.net.ssl.keyStore=/path/to/keystore.jks \
 -Djavax.net.ssl.keyStorePassword=changeit \
 -Djavax.net.ssl.trustStore=/path/to/truststore.jks \
 -Djavax.net.ssl.trustStorePassword=changeit ...

In reality, there is no default keystore, which is fine for normal situations, as most websites don’t require X.509 client authentication (more commonly referred to as mutual auth). The default trust store is $JAVA_HOME/jre/lib/security/cacerts, and the default trust store password is changeit. The cacerts file contains several dozen certificate authority (CA) root certificates and will validate any server whose public key certificate is signed by one of these CAs.
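If you’re curious about what ships in that default trust store, you can list it with keytool (the exact path varies a little between JDK and JRE installations):

$ keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit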

More importantly, however, the default SSLContext simply fails to connect to a server in the event that a trust certificate is missing from the default trust store. But that’s not what web browsers do. Instead, they display the aforementioned dialog presenting the user with options to handle the situation in the manner that suits him or her best.

Assume the simple client above is a part of a larger application that adds certificates to the trust store during execution of other code paths and then expects to be able to use this updated trust store later during the same session. This dynamic reload functionality requires some SSLContext customization.

Let’s explore. SSLContext is a great example of a composite design. It’s built from several other classes, each of which may be specified by the user when initializing a context object. This practically eliminates the need to sub-class SSLContext in order to define custom behavior. The default context is eschewed in favor of a user-initialized instance of SSLContext like this:

public SSLContext getSSLContext(String tspath) 
    throws Exception {
  TrustManager[] trustManagers = new TrustManager[] { 
    new ReloadableX509TrustManager(tspath) 
  };
  SSLContext sslContext = SSLContext.getInstance("SSL");
  sslContext.init(null, trustManagers, null);
  return sslContext;
}

At the heart of this method is the instantiation of a new ReloadableX509TrustManager. The init method of SSLContext accepts a reference to an array of TrustManager objects. Passing null tells the context to use the default trust manager array which exhibits the default behavior mentioned above.

The init method also accepts two other parameters, to which I’ve passed null. The first parameter is a KeyManager array and the third is an implementation of SecureRandom. Passing null for any of these three parameters tells SSLContext to use the default. Here’s one implementation of ReloadableX509TrustManager:

class ReloadableX509TrustManager 
    implements X509TrustManager {
  private final String trustStorePath;
  private X509TrustManager trustManager;
  private List<Certificate> tempCertList
      = new ArrayList<>();

  public ReloadableX509TrustManager(String tspath)
      throws Exception {
    this.trustStorePath = tspath;
    reloadTrustManager();
  }

  @Override
  public void checkClientTrusted(X509Certificate[] chain, 
      String authType) throws CertificateException {
    trustManager.checkClientTrusted(chain, authType);
  }

  @Override
  public void checkServerTrusted(X509Certificate[] chain, 
      String authType) throws CertificateException {
    try {
      trustManager.checkServerTrusted(chain, authType);
    } catch (CertificateException cx) {
      addServerCertAndReload(chain[0], true);
      trustManager.checkServerTrusted(chain, authType);
    }
  }

  @Override
  public X509Certificate[] getAcceptedIssuers() {
    X509Certificate[] issuers 
        = trustManager.getAcceptedIssuers();
    return issuers;
  }

  private void reloadTrustManager() throws Exception {

    // load keystore from specified cert store (or default)
    KeyStore ts = KeyStore.getInstance(
        KeyStore.getDefaultType());
    InputStream in = new FileInputStream(trustStorePath);
    try { ts.load(in, null); }
    finally { in.close(); }

    // add all temporary certs to KeyStore (ts)
    for (Certificate cert : tempCertList) {
      ts.setCertificateEntry(UUID.randomUUID().toString(), cert);
    }

    // initialize a new TMF with the ts we just loaded
    TrustManagerFactory tmf
        = TrustManagerFactory.getInstance(
            TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(ts);

    // acquire X509 trust manager from factory
    TrustManager tms[] = tmf.getTrustManagers();
    for (int i = 0; i < tms.length; i++) {
      if (tms[i] instanceof X509TrustManager) {
        trustManager = (X509TrustManager)tms[i];
        return;
      }
    }

    throw new NoSuchAlgorithmException(
        "No X509TrustManager in TrustManagerFactory");
  }

  private void addServerCertAndReload(Certificate cert, 
      boolean permanent) {
    try {
      if (permanent) {
        // import the cert into file trust store
        // Google "java keytool source" or just ...
        Runtime.getRuntime().exec("keytool -importcert ...");
      } else {
        tempCertList.add(cert);
      }
      reloadTrustManager();
    } catch (Exception ex) { /* ... */ }
  }
}

NOTE: Trust stores often have passwords but for validation of credentials the password is not needed because public key certificates are publicly accessible in any key or trust store. If you supply a password, the KeyStore.load method will use it when loading the store but only to validate the integrity of non-public information during the load – never during actual use of public key certificates in the store. Thus, you may always pass null in the second argument to KeyStore.load. If you do so, only public information will be loaded from the store.

A full implementation of X509TrustManager is difficult and only sparsely documented but, thankfully, not necessary. What makes this implementation simple is that it delegates to the default trust manager. There are two key bits of functionality in this implementation: The first is that it loads a named trust store other than cacerts. If you want to use the default trust store, simply assign $JAVA_HOME/jre/lib/security/cacerts to trustStorePath.

The second bit of functionality is the call to addServerCertAndReload during the exception handler in the checkServerTrusted method. When a certificate presented by a server is not found in the trust manager’s in-memory database, ReloadableX509TrustManager assumes that the trust store has been updated on disk, reloads it, and then redelegates to the internal trust manager.

A more functional implementation might display a dialog box to the user before calling addServerCertAndReload. If the user selects Get me out of here!, the method would simply rethrow the exception instead of calling that routine. If the user selects It’s cool: add permanently, the method would add the certificate to the file-based trust store, reload from disk, and then reissue the delegated request. If the user selects I’ll bite: add temporarily, the certificate would be added to a list of temporary certificates in memory.
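As a sketch of that idea (askUser and its return values are hypothetical – they stand in for whatever dialog mechanism the application provides), checkServerTrusted might look something like this:

  @Override
  public void checkServerTrusted(X509Certificate[] chain,
      String authType) throws CertificateException {
    try {
      trustManager.checkServerTrusted(chain, authType);
    } catch (CertificateException cx) {
      // askUser() is a hypothetical callback that shows the
      // three-option dialog and returns the user's choice
      switch (askUser(chain[0])) {
        case REJECT:
          throw cx;                                 // "Get me out of here!"
        case ADD_PERMANENTLY:
          addServerCertAndReload(chain[0], true);   // update trust store file
          break;
        case ADD_FOR_SESSION:
          addServerCertAndReload(chain[0], false);  // in-memory only
          break;
      }
      trustManager.checkServerTrusted(chain, authType);
    }
  }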

The way I’ve implemented the latter case is to add the certificate to a temporary list and then reload from disk. Strictly speaking, reloading from disk isn’t necessary in this case since no changes were made to the disk file but the KeyStore built from the disk image would have to be kept around for reloading into the trust manager (after the new cert was added to it), so some modifications would have to be made to avoid reloading from disk.

This same code could just as well be used in a server-side setting, but then the checkClientTrusted method would be the one to modify, rather than checkServerTrusted as in this example.

RESTful Authentication

My last post on RESTful transactions sure seemed to attract a lot of attention. There are a number of REST discussion topics that tend to get a lot of hand-waving by the REST community, but no real concrete answers seem to be forthcoming. I believe the most fundamental reasons for this include the fact that the existing answers are unpalatable – both to the web services world at large, and to REST purists. Once in a while when they do mention a possible solution to a tricky REST-based issue, the web services world responds violently – mostly because REST purists give answers like “just don’t do that” to questions like “How do I handle session management in a RESTful manner?”

I recently read an excellent treatise on the subject of melding RESTful web services concepts with enterprise web service needs. Benjamin Carlyle’s Sound Advice blog entry, entitled The REST Statelessness Constraint hits the mark dead center. Rather than try to persuade enterprise web service designers not to do non-RESTful things, Benjamin instead tries to convey the purposes behind REST constraints (in this case, specifically statelessness), allowing web service designers to make rational tradeoffs in REST purity for the sake of enterprise goals, functionality, and performance. Nice job Ben!

The fact is that the REST architectural style was designed with one primary goal in mind: to create web architectures that would scale well to the Internet. The Internet is large, representing literally billions of clients. To make a web service scale to a billion-client network, you have to make hard choices. For instance, http is connectionless. Connectionless protocols scale very well to large numbers of clients. Can you imagine a web server that had to manage 500,000 simultaneous long-term connections?

Server-side session data is a difficult concept to shoehorn into a RESTful architecture, and it’s the subject of this post. Lots of web services – I’d venture to say 99 percent of them – manage authentication using SSL/TLS and the HTTP “basic auth” authentication scheme. They use SSL/TLS to keep from exposing a user’s name and password over the wire, essentially in clear text. They use basic auth because it’s trivial. Even banking institutions use this mechanism because, for the most part, it’s secure. Those who try to go beyond SSL/TLS/basic auth often do so because they have special needs, such as identity federation of disparate services.

To use SSL/TLS effectively, however, these services try hard to use long-term TCP connections. HTTP 1.0 had no built-in mechanism for allowing long-term connections, but Netscape hacked in an add-on mechanism in the form of the “Connection: keep-alive” header, and most web browsers support it, even today. HTTP 1.1 specifies that connections remain open by default. If an HTTP 1.1 client sends the “Connection: close” header in a request then the server will close the connection after sending the response, but otherwise, the connection remains open.

This is a nice enhancement, because it allows underlying transport-level security mechanisms like SSL/TLS to optimize transport-level session management. Each new SSL/TLS connection has to be authenticated, and this process costs a few round-trips between client and server. By allowing multiple requests to occur over the same authenticated session, the cost of transport-level session management is amortized over several requests.

In fact, by using SSL/TLS mutual authentication as the primary authentication mechanism, no application state need be maintained by the server at all for authentication purposes. For any given request, the server need only ask the connection layer who the client is. If the service requires SSL/TLS mutual auth, and the client has made a request, then the server knows that the client is authenticated. Authorization (resource access control) must still be handled by the service, but authorization data is not session data, it’s service data.

However, SSL/TLS mutual auth has an inherent deployment problem: key management. No matter how you slice it, authentication requires that the server know something about the client in order to authenticate that client. For SSL/TLS mutual auth, that something is a public key certificate. Somehow, each client must create a public key certificate and install it on the server. Thus, mutual auth is often reserved for the enterprise, where key management is done by IT departments for the entire company. Even then, IT departments cringe at the thought of key management issues.

User name and password schemes are simpler, because often web services will provide users a way of creating their account and setting their user name and password in the process. Credential management done. Key management can be handled in the same way, but it’s not as simple. Some web services allow users to upload their public key certificate, which is the SSL/TLS mutual-auth equivalent of setting a password. But a user has to create a public/private key pair, and then generate a public key certificate from this key pair. Java keytool makes this process as painless as possible, but it’s still far from simple. No – user name and password is by far the simpler solution.

As I mentioned above, the predominant solution today is a combination of CA-based transport-layer certificate validation for server authentication, and HTTP basic auth for client authentication. The web service obtains a public/private key pair that’s been generated by a well-known Certificate Authority (CA). This is done by generating a certificate signing request using either openssl or the Java keytool utility (or by using less mainstream tools provided by the CA). Because most popular web browsers today ship well-known CA certificates in their truststores, and because clients implicitly trust services that provide certificates signed by these well-known CA’s, people tend to feel warm and fuzzy because no warning messages pop up on the screen when they connect to one of these services. Should they fear? Given the service verification process used by CAs like Entrust and Verisign, they probably should, but that problem is very difficult to solve, so most people just live with this stop-gap solution.

On the server side, the web service needs to know the identity of the client in order to know what service resources that client should have access to. If a client requests a protected resource, the server must be able to validate that client’s right to the resource. If the client hasn’t authenticated yet, the server challenges the client for credentials using a response header and a “401 Unauthorized” response code. Using the basic auth scheme, the client base64-encodes his user name and password and sends this string back in the Authorization request header. Now, base64 encoding is not encryption, so the client is essentially passing his user name and password in what amounts to clear text. This is why SSL/TLS is used. By the time the server issues the challenge, the SSL/TLS encrypted channel is already established, so the user’s credentials are protected from even non-casual snoopers.
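In Java, the client side of that exchange amounts to a few lines. This is just a sketch – username, password, and conn (an HttpURLConnection) are assumed to already exist – using java.util.Base64, which has been available since Java 8:

String credentials = username + ":" + password;
String encoded = Base64.getEncoder()
    .encodeToString(credentials.getBytes(StandardCharsets.UTF_8));
conn.setRequestProperty("Authorization", "Basic " + encoded);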

When the proper credentials arrive in the next attempt to request the protected resource, the server decodes the user name and password, verifies them against its user database, and either returns the requested resource, or fails the request with “401 Unauthorized” again, if the user doesn’t have the requisite rights to the requested resource.

If this was the extent of the matter, there would be nothing unRESTful about this protocol. Each subsequent request contains the user’s name and password in the Authorization header, so the server has the option of using this information on each request to ensure that only authorized users can access protected resources. No session state is managed by the server here. Session or application state is managed by the client, using a well-known protocol for passing client credentials on each request – basic auth.

But things don’t usually stop there. Web services want to provide a good session experience for the user – perhaps a shopping cart containing selected items. Servers typically implement shopping carts by keeping a session database, and associating collections of selected items with users in this database. How long should such session data be kept around? What if the user tires of shopping before she checks out, goes for coffee, and gets hit by a car? Most web services deal with such scenarios by timing out shopping carts after a fixed period – anywhere from an hour to a month. What if the session includes resource locks? For example, items in a shopping cart are sometimes made unavailable to others for selection – they’re locked. Companies like to offer good service to customers, but keeping items locked in your shopping cart for a month while you’re recovering in the hospital just isn’t good business.

REST principles dictate that keeping any sort of session data is not viable for Internet-scalable web services. One approach is to encode all session data in a cookie that’s passed back and forth between client and server. While this approach allows the server to be completely stateless with respect to the client, it has its flaws. Even though the cookie holds application state data, that data is still owned by the server, not the client. Most clients don’t even try to interpret it; they just hand it back to the server on each successive request. But application state belongs to the client, so the client should manage it, not the server.

There are no good answers to these questions yet. What it comes down to is that service design is a series of trade-offs. If you really need your web service to scale to billions of users, then you’d better find ways to make your architecture compliant with REST principles. If you’re only worried about servicing a few thousand users at a time, then perhaps you can relax the constraints a bit. The point is that you should understand the constraints, and then make informed design decisions.

RESTful Transactions

I was reading recently in RESTful Web Services (Leonard Richardson & Sam Ruby, O’Reilly, 2007) about how to implement transactional behavior in a RESTful web service. Most web services today do this with an overloaded POST operation, but the authors assert that this isn’t necessary.

Their example (in Chapter Eight) uses the classic bank account transaction scenario, where a customer wants to transfer 50 dollars from checking to savings. I’ll recap here for your benefit. Both accounts start with 200 dollars. So after a successful transaction, the checking account should contain 150 dollars and the savings account should contain 250 dollars. Let’s consider what happens when two clients operate on the same resources:

Client A -> Read account: 200 dollars
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars

Client B -> Read account: 150 dollars
Client B -> Withdraw 50 dollars: 150 - 50 = 100 dollars
Client B -> Write account: 100 dollars

This is all well and good until you consider that the steps in these operations might not be atomic. Transactions protect against the following situation, wherein the separate steps of these two Clients’ operations are interleaved:

Client A -> Read account: 200 dollars
Client B -> Read account: 200 dollars
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client B -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars
Client B -> Write account: 150 dollars

After both operations, the account should contain 100 dollars, but because no account locking was in effect during the two updates, the second withdrawal is lost. Thus 100 dollars was physically removed from the account, but the account balance reflects only a 50 dollar withdrawal. Transaction semantics would cause the following series of steps to occur:

Client A -> Begin transaction
Client A -> Read account: 200 dollars
Client B -> Begin Transaction (block)
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars
Client A -> Commit transaction
Client B -> (unblock) Read account: 150 dollars
Client B -> Withdraw 50 dollars: 150 - 50 = 100 dollars
Client B -> Write account: 100 dollars
Client B -> Commit transaction

Web Transactions

The authors’ approach to RESTful web service transactions involves using POST against a “transaction factory” URL. In this case /transactions/account-transfer represents the transaction factory. The checking account is represented by /accounts/checking/11 and the savings account by /accounts/savings/55.

Now, if you recall from my October 2008 post, PUT or POST: The REST of the Story, POST is designed to be used to create new resources whose URL is not known in advance, whereas PUT is designed to update or create a resource at a specific URL. Thus, POSTing against a transaction factory should create a new transaction and return its URL in the Location response header.

A user might make the following series of web requests:

GET /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com
...
200 Ok

balance=200
---
GET /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com
...
200 Ok

balance=200

The fact that the client reads the account balances before beginning is implied by the text, rather than stated explicitly. At some later time (hopefully not much later) the transaction is started:

POST /transaction/account-transfer HTTP/1.1
Host: example.com
...
201 Created
Location: /transaction/account-transfer/11a5
---
PUT /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com

balance=150
...
200 Ok
---
PUT /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com

balance=250
...
200 Ok
---
PUT /transaction/account-transfer/11a5 HTTP/1.1
Host: example.com

committed=true
...
200 Ok

At first glance, this appears to be a nice design, until you begin to consider the way such a system might be implemented on the back end. The authors elaborate on one approach. They state that documents PUT to resources within the transaction might be serialized during building of the transaction. When the transaction is committed the entire set of serialized operations could then be executed by the server within a server-side database transaction. The result of committing the transaction is then returned to the client as the result of the client’s commit on the web transaction.

However, this can’t work properly, as the server would have to have the client’s view of the original account balances in order to ensure that no changes had slipped in after the client had read the accounts, but before the transaction was committed (or even begun!). As it stands, changes could be made by a third-party to the accounts before the new balances are written and there’s no way for the server to ensure that these other modifications are not overwritten by outdated state provided by the transaction log. It is, after all, the entire purpose of a transaction to protect a database against this very scenario.

Fixing the Problem

One way to make this work is to include account balance read (GET) operations within the transaction, like this:

POST /transaction/account-transfer HTTP/1.1
Host: example.com
...
201 Created
Location: /transaction/account-transfer/11a5
---
GET /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com
...
200 Ok

balance=200
---
PUT /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com

balance=150
...
200 Ok
---
GET /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com
...
200 Ok

balance=200
---
PUT /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com

balance=250
...
200 Ok
---
PUT /transaction/account-transfer/11a5 HTTP/1.1
Host: example.com

committed=true
...
200 Ok

The GET operations would, of course, return real data in real time. But the fact that the accounts were read within the transaction would give the server a reference point for later comparison during the execution of the back-end database transaction. If the values of either account balance are modified before the back-end transaction is begun, then the server would have to abort the transaction and the client would have to begin a new transaction.

This mechanism is similar in operation to lock-free data structure semantics. Lock-free data structures are found in low-level systems programming on symmetric multi-processing (SMP) hardware. A lock-free data structure allows multiple threads to make updates without the aid of concurrency locks such as mutexes and spinlocks. Essentially, the mechanism guarantees that an attempt to read, update and write a data value will either succeed or fail in a transactional manner. The implementation of such a system usually revolves around the concept of a machine-level compare-and-swap (test-and-set) operation. The would-be modifier reads the data element, updates the read copy, and then performs a conditional write, wherein the condition is that the value is still the same as the originally read value. If the value is different, the operation is aborted and retried. Even under circumstances of high contention the update will likely eventually occur.
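Java exposes this primitive through the java.util.concurrent.atomic classes; here’s a minimal sketch of the read-modify-conditionally-write loop using AtomicInteger:

AtomicInteger balance = new AtomicInteger(200);

void withdraw(int amount) {
    while (true) {
        int current = balance.get();            // read
        int updated = current - amount;         // update the local copy
        if (balance.compareAndSet(current, updated))
            return;                             // conditional write succeeded
        // another thread changed the value first; loop and retry
    }
}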

How this system applies to our web service transaction is simple: If the values of either account are modified outside of the web transaction before the back-end database transaction is begun (at the time the committed=true document is PUT), then the server must abort the transaction (by returning “500 Internal server error” or something). The client must then retry the entire transaction. This pattern must continue until the client is lucky enough to make all of the modifications within the transaction that need to be made before anyone else touches any of the affected resources. This may sound nasty, but as we’ll see in a moment, the alternatives have less desirable effects.

Inline Transaction Processing

Another approach is to actually have the server begin a database transaction at the point where the transaction resource is created with the initial POST operation above. Again, the client must read the resources within the transaction. Now the server can guarantee atomicity — and data integrity.

As with the previous approach, this approach works whether the database uses global- or resource-level locking. All web transaction operations happen in real time within a database transaction, so reads return real data and writes happen during the write requests, but of course the writes aren’t visible to other readers until the transaction is committed.

A common problem with this approach is that the database transaction is now exposed as a “wire request”, which means that a transaction can be left outstanding by a client that dies in the middle of the operation. Such transactions have to be aborted when the server notices the client is gone. Since HTTP is a stateless, connectionless protocol, it’s difficult for a server to tell when a client has died. At the very least, database transactions begun by web clients should be timed out. Unfortunately, while such a transaction waits to be timed out, no one else can write to the locked resources, which can be a real problem if the database uses global locking. Additional writers are blocked until the transaction is either committed or aborted. Locking a highly contended resource over a series of network requests can significantly impact scalability, as the time frame for a given lock has just gone through the roof.

It’s clear that creating proper RESTful transaction semantics is a tricky problem.

Java Https Key Setup

In my last article, I showed how to remove all security from a secure web (https) transaction by installing dummy trust manager and host name verifier objects into an SSLSocketFactory. Today, I’m going to take it to the next level by demonstrating how to create a private key and self-signed certificate in a JKS keystore, export the public key certificate to a client-side trust store, and configure our client to use the trust store to verify our server.

I’ll be using a Tomcat 6 server – mainly because it’s almost trivial to install and configure for SSL traffic. On my OpenSuSE 11.1 64-bit GNU/Linux machine, I’ve installed the tomcat6 package, and then I’ve gone into YaST’s service management panel and enabled the tomcat6 service.

Self-Signed Certificates

Let’s start by generating the proper keys. First, we’ll generate the server’s self-signed certificate, with embedded public/private key pair. For the common name (CN) field, I’ll make sure to enter the fully qualified domain name of my server (jmc-linux-64.provo.novell.com). This will ensure that my Java client code will properly compare the hostname used in my URL with the server’s certificate. Using any other value here would cause my client to fail with an invalid hostname exception. Here’s the Java keytool command line to create a self-signed certificate in a JKS key store called jmc-linux-64.keystore.jks:

$ keytool -genkey -alias jmc-linux-64 \
 -keyalg RSA -keystore jmc-linux-64.keystore.jks
Enter keystore password: password
Re-enter new password: password
What is your first and last name?
  [Unknown]:  jmc-linux-64.provo.novell.com
What is the name of your organizational unit?
  [Unknown]:  Engineering
What is the name of your organization?
  [Unknown]:  Novell, Inc.
What is the name of your City or Locality?
  [Unknown]:  Provo
What is the name of your State or Province?
  [Unknown]:  Utah
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=jmc-linux-64.provo.novell.com, OU=Engineering,
 O="Novell, Inc.", L=Provo, ST=Utah, C=US correct?
  [no]:  yes

Enter key password for <jmc-linux-64>
         (RETURN if same as keystore password): <CR>
		
$

To view the new certificate and key pair, just use the -list option, along with the -v (verbose) option, like this:

$ keytool -list -v -keystore jmc-linux-64.keystore.jks
Enter keystore password: password

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: jmc-linux-64
Creation date: Jun 19, 2009
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3


*******************************************
*******************************************

$

Server Configuration

Okay, now we have a server certificate with public and private key pair in a JKS keystore. The next step is to configure Tomcat to listen for https requests. The default configuration for Tomcat is to run a bare http server on port 8080. To enable the https server on port 8443, I edited the /usr/share/tomcat6/conf/server.xml file and uncommented the default entry for SSL that was already in place as a comment:

...
<!-- Define a SSL HTTP/1.1 Connector on port 8443
     This connector uses the JSSE configuration, when using APR, the
     connector should be using the OpenSSL style configuration
     described in the APR documentation -->

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/jmc-linux-64.keystore.jks" 
           keystorePass="password"
           clientAuth="false" sslProtocol="TLS" />
...

Make sure the sslProtocol is set to at least “SSLv3” – I just used “TLS” here. The important fields, however, are the keystoreFile and keystorePass fields, which I’ve set to the keystore we created in the previous step, and its password. You can put the keystore file anywhere on your file system accessible by the user running the tomcat service. On my system, the tomcat6 service is executed as root by default, so I just copied my keystore to the root of my file system.

After editing the file, I had to restart the tomcat6 service:

# rctomcat6 restart
Shutting down Tomcat (/usr/share/tomcat6)	... done
Starting Tomcat (/usr/share/tomcat6)		... done
#

Client-Side Trust Store

So much for server configuration. Now we have to configure the client’s trust store with the server’s self-signed certificate. This is done by exporting the certificate and public key from the server’s keystore, and then importing it into a client trust store. A trust store is just a JKS keystore that contains only trust certificates:

$ keytool -export -alias jmc-linux-64 \
 -keystore jmc-linux-64.keystore.jks -rfc \
 -file jmc-linux-64.cert
Enter keystore password: password
Certificate stored in file <jmc-linux-64.cert>
$
$ cat jmc-linux-64.cert
-----BEGIN CERTIFICATE-----
MIICezCCAeSgAwIBAgIESjwAbzANBgkqhkiG9w0BAQUFADCBgTELMAkGA1UEBhMCVVMxDTALBgNV
BAgTBFV0YWgxDjAMBgNVBAcTBVByb3ZvMRUwEwYDVQQKEwxOb3ZlbGwsIEluYy4xFDASBgNVBAsT
C0VuZ2luZWVyaW5nMSYwJAYDVQQDEx1qbWMtbGludXgtNjQucHJvdm8ubm92ZWxsLmNvbTAeFw0w
OTA2MTkyMTE3MzVaFw0wOTA5MTcyMTE3MzVaMIGBMQswCQYDVQQGEwJVUzENMAsGA1UECBMEVXRh
aDEOMAwGA1UEBxMFUHJvdm8xFTATBgNVBAoTDE5vdmVsbCwgSW5jLjEUMBIGA1UECxMLRW5naW5l
ZXJpbmcxJjAkBgNVBAMTHWptYy1saW51eC02NC5wcm92by5ub3ZlbGwuY29tMIGfMA0GCSqGSIb3
DQEBAQUAA4GNADCBiQKBgQCOwb5migz+c1mmZS5eEhBQ5wsYFuSmp6bAL7LlHARQxhZg62FEVBFL
Y2klPoCGfUoXUFegnhCV5I37M0dAQtNLSHiEPj0NjAvWuzagevE6Tq+0zXEBw9fKoVV/ypEsAxEX
6JQ+a1WU2W/vdL+x0lEbRpRCk9t6yhxLw16M/VD/GwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAC5E
kniYYFxwZUqg9ToFlF0LKjGZfttkXJoTMfOFwA6OXrO6cKdzS04srxhoDzkD8V4RskPxttt0pbKr
iAoGKT/9P4hpDb0Ej4urek9TxlrnoC8g0rOYaDfE57SMStDrCg2ha4IuJFtJOh1aMcl4pm/sk+JW
7U/cWyW9B7InJinZ
-----END CERTIFICATE-----

$
$ keytool -import -alias jmc-linux-64 \
 -file jmc-linux-64.cert \
 -keystore jmc-linux-64.truststore.jks
Enter keystore password: trustpass
Re-enter new password: trustpass
Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3
Trust this certificate? [no]:  yes
Certificate was added to keystore

$

We now have a file called jmc-linux-64.truststore.jks, which contains only the server’s public key and certificate. You can show the contents of the truststore JKS file with the -list option, like this:

$ keytool -list -v -keystore jmc-linux-64.truststore.jks
Enter keystore password: trustpass

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: jmc-linux-64
Creation date: Jun 19, 2009
Entry type: trustedCertEntry

Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3


*******************************************
*******************************************

$

A Simple Https Client

We have several options for how to consume this trust store in client code. I’ll take the easy route today, but watch for another article that describes more complex mechanisms that provide more flexibility. Today, I’ll just show you how to set system properties on our client application. This client is very simple. All it does is connect to the server and display the contents of the web page in raw html to the console:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;

public class HttpsClient
{
  private final String serverUrl;

  public HttpsClient(String serverUrl) 
  {
    this.serverUrl = serverUrl;
  }

  public void connect() 
  {
    try
    {
      HttpURLConnection conn = null;
      URL url = new URL(serverUrl);

      try
      {
        conn = (HttpURLConnection)url.openConnection();
        conn.setRequestMethod("GET");
        conn.setDoOutput(false);
        conn.setDoInput(true);
        conn.connect();
        InputStream is = conn.getInputStream();

        int bytes;
        byte [] buffer = new byte[512];
        while ((bytes = is.read(buffer, 0, 512)) > 0)
          System.out.write(buffer, 0, bytes);
      }
      catch (IOException e) { e.printStackTrace(); }
    }
    catch(MalformedURLException e) { e.printStackTrace(); }
  }

  public static void main(String[] args) 
  {
    HttpsClient client = new HttpsClient(
        "https://jmc-linux-64.provo.novell.com:8443");
    client.connect();
  }
}

Executing this client as is, without an assigned trust store, will cause it to use the default trust store ($JAVA_HOME/lib/security/cacerts), which doesn’t contain our server’s public certificate, so it will fail with an exception:

javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...
Caused by: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...
Caused by: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...  

Configuring the Client Trust Store

The quick way to get this client to work properly is to assign our client’s trust store (containing the server’s public key and self-signed certificate) to JSSE system properties in this manner:

$ java -Djavax.net.ssl.trustStore=jmc-linux-64.truststore.jks \
  -Djavax.net.ssl.trustStorePassword=trustpass HttpsClient

If you get the path to the trust store file wrong, you’ll get a different cryptic exception:

javax.net.ssl.SSLException: 
java.lang.RuntimeException: Unexpected error: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...
Caused by: java.lang.RuntimeException: Unexpected error: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...
Caused by: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...

And if you get the password wrong, you’ll get yet another (somewhat less) cryptic exception:

java.net.SocketException: 
java.security.NoSuchAlgorithmException: 
Error constructing implementation 
(algorithm: Default, provider: SunJSSE, 
class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl)
... stack trace ...
Caused by: java.security.NoSuchAlgorithmException: 
Error constructing implementation 
(algorithm: Default, provider: SunJSSE, 
class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl)
... stack trace ...
Caused by: java.io.IOException: 
Keystore was tampered with, or password was incorrect
... stack trace ...
Caused by: java.security.UnrecoverableKeyException: 
Password verification failed
... stack trace ...

In these examples, my client is using my server’s fully qualified domain name in the URL, which is the common name we used when we created the self-signed certificate:

  ...
  public static void main(String[] args) 
  {
    HttpsClient client = new HttpsClient(
        "https://jmc-linux-64.provo.novell.com:8443");
    client.connect();
  }
}

This is the only name that will work with this trust store. In my next article I’ll show you how to generate certificates that work with aliases like the IP address. I’ll also show you how to add a hostname verifier to allow our client code to be a bit more intelligent about which aliases it rejects out of hand.

Java HTTPS Client Issues

I’ve written in the last several months about creating a client for a RESTful web-based auditing service. In that client, I had to implement client-side authentication, which is much more involved (or it should be anyway) than writing a simple secure web client that accesses content from secure public web servers.

Such a simple secure web client has only a little more functionality than a non-secure (http) web client. Essentially, it must perform a check after each connection to the secure web server to ensure that the server certificate is valid and trustworthy. This involves basically two steps:

  1. Verifying the server’s certificate chain.
  2. Verifying the server’s host name against that certificate.

Verifying the Certificate

The purpose of step 1 is to ensure that the service you’re attempting to use is not trying to pull something shady on you. That is, the owner of the service was willing to put his or her name on the line with a Certificate Authority (CA) like Entrust or VeriSign. When you purchase a CA-signed certificate, you have to follow various procedures that document who you are and why you’re setting up the service. But don’t worry: the CA doesn’t get to determine whether your service is worthy of public consumption, only that you are who you say you are. The CA verifies actual existence, names, addresses, phone numbers, and so on. If there’s any question about the service later, a consumer may contact that CA to find out the details of the service provider. This is dangerous for scam artists because they can be tracked and subsequently prosecuted, so they’d rather not deal with Certificate Authorities if they don’t have to.

The client’s verification process (step 1) usually involves following the certificates in the certificate chain presented by the server back to a CA-signed certificate installed in its own trust store. A normal Sun JRE comes with a standard JKS truststore in $JAVA_HOME/lib/security/cacerts. This file contains a list of several dozen world-renowned public Certificate Authority certificates. By default, the SSLContext object associated with a normal HttpsURLConnection object refers to a TrustManager object that will compare the certificates in the certificate chain presented by servers with the list of public CA certificates in the cacerts trust store file.
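Incidentally, if you’re curious about exactly which authorities your JRE trusts out of the box, a minimal sketch like the following will list the aliases in that cacerts file. This assumes the stock "changeit" password and the default JKS format, both of which could differ on a customized installation:

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.KeyStore;
import java.util.Collections;

public class ListDefaultTrustStore
{
  public static void main(String[] args) throws Exception
  {
    // Locate the default trust store under the running JRE
    File cacerts = new File(System.getProperty("java.home"),
        "lib/security/cacerts");

    // Load it as a JKS keystore; "changeit" is the stock password
    KeyStore ks = KeyStore.getInstance("JKS");
    InputStream istream = new FileInputStream(cacerts);
    try { ks.load(istream, "changeit".toCharArray()); }
    finally { istream.close(); }

    // Print the alias of each trusted CA certificate
    for (String alias : Collections.list(ks.aliases()))
      System.out.println(alias);
  }
}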

If you have an older cacerts file that doesn’t happen to contain a certificate for a site to which you are connecting, or if you’ve set up the site yourself using a self-signed certificate, then you’ll encounter an exception when you attempt to connect:

javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target

Ouch! Does this mean you can’t connect to your test server while writing your client code? Can you only test against public servers? No, of course not, but unfortunately, it does mean a bit more work for you. You have basically two options. The first is to install your test server’s self-signed certificate into your default trust store. I first learned about this technique from a blog entry by Andreas Sterbenz in October of 2006. Nice article, Andreas. Thanks!

However, there is another way. You can write some temporary code in the form of your own sort of dumb trust manager that accepts any certificate from any server. Of course, you don’t want to ship your client with this code in place, but for testing and debugging, it’s nice not to have to mess up your default trust store with temporary certs that you may not want in there all the time. Writing DumbX509TrustManager is surprisingly simple. As with most well-considered Java interfaces, the number of required methods for the X509TrustManager interface is very small:

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;

import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.X509TrustManager;

public class MyHttpsClient
{
  private boolean isSecure;
  private String serverURL;
  private SSLSocketFactory sslSocketFactory;

  private class DumbX509TrustManager 
      implements X509TrustManager 
  {
    public void checkClientTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public void checkServerTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public X509Certificate[] getAcceptedIssuers() 
        { return new X509Certificate[] {}; }
  }
  ...

To make use of this trust manager, simply obtain an SSLSocketFactory object in your client’s constructor that you can configure with your dumb trust manager. Then, as you establish https connections to your test server, install your preconfigured SSLSocketFactory object, like this:

  ...
  private SSLSocketFactory getSocketFactory()
  {
    SSLSocketFactory socketFactory = null;
    try
    {
      SSLContext context = SSLContext.getInstance("SSLv3");
      context.init(null, new X509TrustManager[] 
          { new DumbX509TrustManager() }, null);
      socketFactory = context.getSocketFactory();
    }
    catch (Exception e) { e.printStackTrace(); }
    return socketFactory;
  }

  public MyHttpsClient(String serverURL)
  {
    this.serverURL = serverURL;
    if (isSecure = serverURL.startsWith("https:"))
      sslSocketFactory = getSocketFactory();
  }

  public void process() 
  {
    try
    {
      HttpURLConnection conn = null;
      URL url = new URL(serverURL);
      try
      {
        conn = (HttpURLConnection)url.openConnection();
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
          sconn.setSSLSocketFactory(sslSocketFactory);
        }
        conn.setRequestMethod("GET");
        conn.setDoOutput(false);
        conn.setDoInput(true);
        conn.connect();
        ...

That’s it. Warning: Don’t ship your client with DumbX509TrustManager in place. You don’t need it for public secure web servers anyway. If you know your client will only ever be used against properly configured public secure web servers, then you can rely on the default trust manager in the default socket factory associated with HttpsURLConnection.
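For that public-server case, a minimal sketch might look like the following. The URL is just a placeholder for whatever CA-signed public server you actually care about; the point is that no custom socket factory or trust manager is involved:

import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

public class DefaultTrustDemo
{
  public static void main(String[] args) throws Exception
  {
    // The default trust manager validates the server's CA-signed
    // certificate chain against the cacerts trust store automatically.
    URL url = new URL("https://www.example.com/");
    HttpsURLConnection conn = (HttpsURLConnection)url.openConnection();
    conn.connect();
    System.out.println("Response code: " + conn.getResponseCode());
    conn.disconnect();
  }
}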

If you think your client may be expected to work with non-public secure web servers using self-signed or company-signed certificates, then you have more work to do. Here, you have two options. You can write code similar to that found in browsers, wherein the client displays a dialog box upon connection, asking whether you would like to connect to this “unknown” server just this once or forever (whereupon the client imports the server’s certificate into the default trust store). Or you can allow your customer to pre-configure the default trust store with certificates from non-public servers that he or she knows about in advance. But these are topics for another article.

Verifying the Server

Returning to the original two-step process, the purpose of step 2 (host name verification) is to ensure that the certificate you received from the service to which you connected was not stolen by a scammer.

When a CA-signed certificate is generated, the information sent to the Certificate Authority by the would-be service provider includes the fully qualified domain name of the server for which the new cert is intended. This FQDN is embedded in a field of the certificate, which the client uses to ensure that the server is really the owner of the certificate that it’s presenting.

As I mentioned in a previous article, Java’s keytool utility won’t let you generate self-signed certs containing the FQDN in the proper field, so the default host name verification code will always fail with self-signed certs generated by keytool. Again, a simple dummy class comes to the rescue, this time in the form of the DumbHostnameVerifier class. Just implement the HostnameVerifier interface, which has one required method, verify. Have it return true all the time, and you won’t see any more Java exceptions like this:

HTTPS hostname wrong:  
should be <jmc-linux-64.provo.novell.com>

Here’s an example:

  ...
  private class DumbHostnameVerifier 
      implements HostnameVerifier
  {
    public boolean verify(String arg0, SSLSession arg1) 
        { return true; }
  }
  ...
  public void process() 
  {
        ...
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
          sconn.setSSLSocketFactory(sslSocketFactory);
          sconn.setHostnameVerifier(new DumbHostnameVerifier());
        }
        ...

Scoping the Changes

A final decision you should make is the proper scope for setting the dummy trust manager and hostname verifier objects. The JSSE framework is extremely flexible. You can set these on a per-request basis, or as class defaults, so that whenever a new HttpsURLConnection object is created, your objects are automatically assigned to it internally. For instance, you can use the following code to set up class default values:

public class MyHttpsClient
{
  private static class DumbX509TrustManager 
      implements X509TrustManager 
  {
    public void checkClientTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public void checkServerTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public X509Certificate[] getAcceptedIssuers() 
        { return new X509Certificate[] {}; }
  }

  private static class DumbHostnameVerifier 
      implements HostnameVerifier
  {
    public boolean verify(String arg0, SSLSession arg1) 
        { return true; }
  }

  private static SSLSocketFactory getSocketFactory()
  {
    SSLSocketFactory socketFactory = null;
    try
    {
      SSLContext context = SSLContext.getInstance("SSLv3");
      context.init(null, new X509TrustManager[] 
          { new DumbX509TrustManager() }, null);
      socketFactory = context.getSocketFactory();
    }
    catch (Exception e) { e.printStackTrace(); }
    return socketFactory;
  }

  static
  {
    HttpsURLConnection.setDefaultHostnameVerifier(
        new DumbHostnameVerifier());
    HttpsURLConnection.setDefaultSSLSocketFactory(
        getSocketFactory());
  }

  private String serverURL;
  
  public MyHttpsClient(String serverURL)
  {
    this.serverURL = serverURL;
  }
  ...

You can now remove the isSecure check in the process routine, because new instances of HttpsURLConnection will automatically be assigned objects of your new trust manager and hostname verifier classes: the default objects you stored with the MyHttpsClient class’s static initializer.
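For instance, with those class defaults in place, a trimmed-down process routine might look something like this sketch (the request setup mirrors the earlier example, and the elided response handling is assumed to be unchanged):

  public void process() 
  {
    try
    {
      // No isSecure check or HttpsURLConnection cast is needed here; the
      // defaults installed by the static initializer apply to every new
      // https connection automatically.
      HttpURLConnection conn =
          (HttpURLConnection)new URL(serverURL).openConnection();
      conn.setRequestMethod("GET");
      conn.setDoOutput(false);
      conn.setDoInput(true);
      conn.connect();
      // ... read and display the response as before ...
    }
    catch (Exception e) { e.printStackTrace(); }
  }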

With that, you’re set to connect to any https server. Here’s a little insight for you: The difficult part – the real work – of writing https clients involves writing real code for these classes. I’ll write a future article that provides details on these processes. Again, I remind you: Don’t accidentally ship your clients with DumbHostnameVerifier in place! (Unless, of course, you want to. After all, it’s your code…)

Effective Communications and Apache Ant

Nothing bothers me more, when searching for the solution to a software problem that I’ve encountered, than to find someone with similar problems asking questions on various message boards, only to have response after response sound something like this:

“What’s changed in your code?”

“Look for the problem in your code.”

“You’ve messed something up in your code.”

“Your environment is hosed.”

Recently, I had a problem building a very large Java project with Apache Ant. I kept getting “Error starting modern compiler” about a third of the way into the build (5 to 10 minutes in). Not getting any help from the core project team, I did what I usually do: I turned to a Google search and immediately found a few people with the same problem. Unfortunately, most of them were using Ant in conjunction with Eclipse. I was getting the same error message from the command line.

By now, I can usually judge the age of a problem by the number and quality of responses a Google search turns up. This was clearly a fairly recent issue. One link I found was in reference to the Apache build itself, wherein a bug had been filed against Ant for this very issue (or one very nearly like it).

But it irks me to no end when people feel the need to respond to queries on issues like this, without having anything useful to say. If you haven’t encountered the problem before, or you don’t have any particular insight into what’s causing it, then please don’t respond with silly accusations about how the original poster’s environment must be hosed, or how his code must be at fault.

The fact is, software tools have bugs. It’s that simple. Even a tool as revered in the Java world as Ant has defects. The solutions I eventually found included either changing my project build script in such a way as to cause Ant to fork a new process when it gets to a particularly large compile, or increasing the heap memory available to the Ant JVM. I chose the latter, setting ANT_OPTS=-Xmx512m in my environment before executing Ant (mainly because I disagree in principle with project-specific solutions to general tool problems).

As it turns out, the root cause of this problem seems to be related more to Ant being unable to spawn a child process than to the wrong compiler being referenced by some environment variable. Java 1.6 exhibits the problem more often than Java 1.5, probably because 1.6 is larger and more resource intensive than 1.5. The inaccuracy of the message (“modern compiler”??) leads us to believe that the problem is with the compiler itself. But that’s an entirely different problem in effective communications…

Java Secure HTTP Keys, Part II

In my last article, I described the process of configuring client-side key and trust stores within a Java web client application. To keep it simple, I purposely used the built-in functionality of HttpsURLConnection to read certain System properties to obtain references to these credential stores, along with their passwords.

However, for an embedded client, as would be the case with library code, you wouldn’t want to rely on any System properties, because those belong to your user and her application, not to your library. But manually configuring the key and trust stores for a client-side https connection is a little more involved.

In this article, I’d like to show you how it’s done, and I’d like to begin by suggesting some required reading for a solid understanding of the way it all works. I’m referring to the Java Secure Socket Extensions (JSSE) Reference Guide. Since JSSE was introduced in Java 1.4, and hasn’t really changed much since then, this document is officially up to date–even in Java SE 6.

Getting Started…

Note that the process for setting up the key and trust stores hasn’t changed, so I’ll simply refer you to my previous article for this information.

To summarize, the goal here is to associate our key and trust stores with our client-side connections without specifying them in System properties. And it’s amazing the amount of extra work we have to go through in order to accomplish this seemingly simple task.

The first thing we’ll do is remove the calls to System.setProperty in our AuditRestClient constructor. We still need the values we wrote to those properties, so we’ll just convert them to constants in the AuditRestClient class. At some later point, these should undoubtedly be converted to properties that we read from our own configuration file, but for now, these constants will do:

  public class AuditRestClient
  {
    // URL components (should be configured variables)
    private static final String HTTP = "HTTP";
    private static final String HTTPS = "HTTPS";
    private static final String HOSTNAME = "10.0.0.1";
    private static final Integer PORT = 9015;

    // secure channel key material stores (should be configured)
    private static final String keystore = "/tmp/keystore.jks";
    private static final String truststore = "/tmp/truststore.jks";
    private static final String keypass = "changeit";
    private static final String trustpass = "changeit";

    // secure channel variables
    private Boolean isSecure = true;
    private SSLSocketFactory sslSocketFactory = null;

    public AuditRestClient()
    {
      setupSocketFactory();
    }
    ...

Building Your Own Socket Factory

The new version of the AuditRestClient constructor calls a private method called setupSocketFactory, which configures an SSLSocketFactory object for use later when we configure our HttpsURLConnection object. Here’s the code:

    ...
    private void setupSocketFactory()
    {
      try
      {
        String protocol = "TLS";
        String type = "JKS";
        String algorithm = KeyManagerFactory.getDefaultAlgorithm();
        String trustAlgorithm =
            TrustManagerFactory.getDefaultAlgorithm();

        // create and initialize an SSLContext object
        SSLContext sslContext = SSLContext.getInstance(protocol);
        sslContext.init(getKeyManagers(type, algorithm),
            getTrustManagers(type, trustAlgorithm),
            new SecureRandom());

        // obtain the SSLSocketFactory from the SSLContext
        sslSocketFactory = sslContext.getSocketFactory();
      }
      catch (Exception e) { e.printStackTrace(); }
    }
    ...

This private helper method calls two other private methods, getKeyManagers and getTrustManagers, to build the key and trust managers. Each of these, in turn, calls a routine named getStore to load the corresponding key or trust store from disk before using it to initialize its manager factory. Again, here’s the code for all three of these methods:

    ...
    private KeyStore getStore(String type,
        String filename, String pwd) throws Exception
    {
      KeyStore ks = KeyStore.getInstance(type);
      InputStream istream = null;

      try
      {
        File ksfile = new File(filename);
        istream = new FileInputStream(ksfile);
        ks.load(istream, pwd != null? pwd.toCharArray(): null);
      }
      finally { if (istream != null) istream.close(); }

      return ks;
    }

    private KeyManager[] getKeyManagers(String type,
        String algorithm) throws Exception
    {
      KeyStore ks = getStore(type, keystore, keypass);
      KeyManagerFactory kmf =
          KeyManagerFactory.getInstance(algorithm);

      kmf.init(ks, keypass.toCharArray());

      return kmf.getKeyManagers();
    }

    private TrustManager[] getTrustManagers(String type,
        String algorithm) throws Exception
    {
      KeyStore ts = getStore(type, truststore, trustpass);
      TrustManagerFactory tmf =
          TrustManagerFactory.getInstance(algorithm);

      tmf.init(ts);

      return tmf.getTrustManagers();
    }
    ...

The getStore method calls KeyStore.getInstance to obtain an instance of the key store associated with the specified type–in this case, “JKS”. It should be noted that if you wish to specify your own provider, you may do so by calling the other version of KeyStore.getInstance, which accepts a string provider name, as well.
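For example, requesting the JKS implementation from Sun’s provider explicitly (assuming the standard "SUN" provider name) would look like this; the call throws NoSuchProviderException if no provider by that name is installed:

      // Same as getStore above, but naming the provider explicitly
      KeyStore ks = KeyStore.getInstance("JKS", "SUN");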

Using Your New Socket Factory

Now that you have your socket factory built (whew!), it’s time to look at how it’s used by the rest of the AuditRestClient code. Here’s the context for the use of the new object:

    public void send(JSONObject event)
    {
      byte[] bytes = event.toString().getBytes();
      HttpURLConnection conn = null;

      try
      {
        URL url = new URL(isSecure? HTTPS: HTTP,
            HOSTNAME, PORT, "/audit/log/test");
        conn = (HttpURLConnection)url.openConnection();
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
          sconn.setSSLSocketFactory(sslSocketFactory);
        }
        conn.setRequestMethod("POST");
        conn.addRequestProperty("Content-Length", "" + bytes.length);
        conn.addRequestProperty("Content-Type", CTYPE);
        conn.addRequestProperty("Connection", "Keep-Alive");
        conn.setDoOutput(true);
        conn.setDoInput(true);
        conn.connect();
        ...

Now, this code is completely independent of application-owned System properties. Additionally, it’s portable between secure and non-secure HTTP channels. This protocol portability requires a type cast of the connection from HttpURLConnection to HttpsURLConnection in just one place: the cast inside the isSecure block in the example above.

You may have also noticed that I converted the previous version of send to use the other popular form of the URL constructor. This form accepts the constituent parts of the URL as separate parameters, rather than as a single string. It’s a bit more efficient under the covers, as the constructor doesn’t need to parse these components from the URL string. It also made more sense on my end, since I’m parameterizing several of these parts now anyway. Attributes like HOSTNAME and PORT will eventually be read from a library configuration file.
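For illustration, the two forms below build equivalent URL objects using the sample host and port from the constants above; the first has to parse the pieces out of the string, while the second is handed them directly:

      // Single-string form: protocol, host, port, and path must be parsed
      URL parsed = new URL("https://10.0.0.1:9015/audit/log/test");

      // Component form: no parsing of the URL string is required
      URL composed = new URL("https", "10.0.0.1", 9015, "/audit/log/test");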