Java Https Key Setup

In my last article, I showed how to remove all security from a secure web (https) transaction by installing dummy trust manager and host name verifier objects into an SSLSocketFactory. Today, I’m going to take it to the next level by demonstrating how to create a private key and self-signed certificate in a JKS keystore, export the public key certificate to a client-side trust store, and configure our client to use that trust store to verify our server.

I’ll be using a Tomcat 6 server – mainly because it’s almost trivial to install and configure for SSL traffic. On my OpenSuSE 11.1 64-bit GNU/Linux machine, I’ve installed the tomcat6 package, and then I’ve gone into YaST’s service management panel and enabled the tomcat6 service.

Self-Signed Certificates

Let’s start by generating the proper keys. First, we’ll generate the server’s self-signed certificate, with embedded public/private key pair. For the common name (CN) field, I’ll make sure to enter the fully qualified domain name of my server (jmc-linux-64.provo.novell.com). This will ensure that my Java client code will properly compare the hostname used in my URL with the server’s certificate. Using any other value here would cause my client to fail with an invalid hostname exception. Here’s the Java keytool command line to create a self-signed certificate in a JKS key store called jmc-linux-64.keystore.jks:

$ keytool -genkey -alias jmc-linux-64 \
 -keyalg RSA -keystore jmc-linux-64.keystore.jks
Enter keystore password: password
Re-enter new password: password
What is your first and last name?
  [Unknown]:  jmc-linux-64.provo.novell.com
What is the name of your organizational unit?
  [Unknown]:  Engineering
What is the name of your organization?
  [Unknown]:  Novell, Inc.
What is the name of your City or Locality?
  [Unknown]:  Provo
What is the name of your State or Province?
  [Unknown]:  Utah
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=jmc-linux-64.provo.novell.com, OU=Engineering,
 O="Novell, Inc.", L=Provo, ST=Utah, C=US correct?
  [no]:  yes

Enter key password for <jmc-linux-64>
         (RETURN if same as keystore password): <CR>
		
$

To view the new certificate and key pair, just use the -list option, along with the -v (verbose) option, like this:

$ keytool -list -v -keystore jmc-linux-64.keystore.jks
Enter keystore password: password

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: jmc-linux-64
Creation date: Jun 19, 2009
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3


*******************************************
*******************************************

$

Server Configuration

Okay, now we have a server certificate with public and private key pair in a JKS keystore. The next step is to configure Tomcat to listen for https requests. The default configuration for Tomcat is to run a bare http server on port 8080. To enable the https server on port 8443, I edited the /usr/share/tomcat6/conf/server.xml file and uncommented the default entry for SSL that was already in place as a comment:

...
<!-- Define a SSL HTTP/1.1 Connector on port 8443
     This connector uses the JSSE configuration, when using APR, the
     connector should be using the OpenSSL style configuration
     described in the APR documentation -->

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/jmc-linux-64.keystore.jks" 
           keystorePass="password"
           clientAuth="false" sslProtocol="TLS" />
...

Make sure the sslProtocol attribute is set to at least “SSLv3” – I just used “TLS” here. The important attributes, however, are keystoreFile and keystorePass, which I’ve set to the keystore we created in the previous step, and its password. You can put the keystore file anywhere on your file system that’s accessible to the user running the tomcat service. On my system, the tomcat6 service is executed as root by default, so I just copied my keystore to the root of my file system.

After editing the file, I had to restart the tomcat6 service:

# rctomcat6 restart
Shutting down Tomcat (/usr/share/tomcat6)	... done
Starting Tomcat (/usr/share/tomcat6)		... done
#

Client-Side Trust Store

So much for server configuration. Now we have to configure the client’s trust store with the server’s self-signed certificate. This is done by exporting the certificate and public key from the server’s keystore, and then importing it into a client trust store. A trust store is just a JKS keystore that contains only trust certificates:

$ keytool -export -alias jmc-linux-64 \
 -keystore jmc-linux-64.keystore.jks -rfc \
 -file jmc-linux-64.cert
Enter keystore password: password
Certificate stored in file <jmc-linux-64.cert>
$
$ cat jmc-linux-64.cert
-----BEGIN CERTIFICATE-----
MIICezCCAeSgAwIBAgIESjwAbzANBgkqhkiG9w0BAQUFADCBgTELMAkGA1UEBhMCVVMxDTALBgNV
BAgTBFV0YWgxDjAMBgNVBAcTBVByb3ZvMRUwEwYDVQQKEwxOb3ZlbGwsIEluYy4xFDASBgNVBAsT
C0VuZ2luZWVyaW5nMSYwJAYDVQQDEx1qbWMtbGludXgtNjQucHJvdm8ubm92ZWxsLmNvbTAeFw0w
OTA2MTkyMTE3MzVaFw0wOTA5MTcyMTE3MzVaMIGBMQswCQYDVQQGEwJVUzENMAsGA1UECBMEVXRh
aDEOMAwGA1UEBxMFUHJvdm8xFTATBgNVBAoTDE5vdmVsbCwgSW5jLjEUMBIGA1UECxMLRW5naW5l
ZXJpbmcxJjAkBgNVBAMTHWptYy1saW51eC02NC5wcm92by5ub3ZlbGwuY29tMIGfMA0GCSqGSIb3
DQEBAQUAA4GNADCBiQKBgQCOwb5migz+c1mmZS5eEhBQ5wsYFuSmp6bAL7LlHARQxhZg62FEVBFL
Y2klPoCGfUoXUFegnhCV5I37M0dAQtNLSHiEPj0NjAvWuzagevE6Tq+0zXEBw9fKoVV/ypEsAxEX
6JQ+a1WU2W/vdL+x0lEbRpRCk9t6yhxLw16M/VD/GwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAC5E
kniYYFxwZUqg9ToFlF0LKjGZfttkXJoTMfOFwA6OXrO6cKdzS04srxhoDzkD8V4RskPxttt0pbKr
iAoGKT/9P4hpDb0Ej4urek9TxlrnoC8g0rOYaDfE57SMStDrCg2ha4IuJFtJOh1aMcl4pm/sk+JW
7U/cWyW9B7InJinZ
-----END CERTIFICATE-----

$
$ keytool -import -alias jmc-linux-64 \
 -file jmc-linux-64.cert \
 -keystore jmc-linux-64.truststore.jks
Enter keystore password: trustpass
Re-enter new password: trustpass
Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3
Trust this certificate? [no]:  yes
Certificate was added to keystore

$

We now have a file called jmc-linux-64.truststore.jks, which contains only the server’s public key and certificate. You can show the contents of the truststore JKS file with the -list option, like this:

$ keytool -list -v -keystore jmc-linux-64.truststore.jks
Enter keystore password: trustpass

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: jmc-linux-64
Creation date: Jun 19, 2009
Entry type: trustedCertEntry

Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3


*******************************************
*******************************************

$

A Simple Https Client

We have several options for how to consume this trust store in client code. I’ll take the easy route today, but watch for another article that describes more flexible (and more involved) mechanisms. For now, I’ll just show you how to set system properties on our client application. This client is very simple. All it does is connect to the server and dump the raw HTML of the web page to the console:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;

public class HttpsClient
{
  private final String serverUrl;

  public HttpsClient(String serverUrl) 
  {
    this.serverUrl = serverUrl;
  }

  public void connect() 
  {
    try
    {
      HttpURLConnection conn = null;
      URL url = new URL(serverUrl);

      try
      {
        conn = (HttpURLConnection)url.openConnection();
        conn.setRequestMethod("GET");
        conn.setDoOutput(false);
        conn.setDoInput(true);
        conn.connect();
        InputStream is = conn.getInputStream();

        int bytes;
        byte [] buffer = new byte[512];
        while ((bytes = is.read(buffer, 0, 512)) > 0)
          System.out.write(buffer, 0, bytes);
      }
      catch (IOException e) { e.printStackTrace(); }
    }
    catch(MalformedURLException e) { e.printStackTrace(); }
  }

  public static void main(String[] args) 
  {
    HttpsClient client = new HttpsClient(
        "https://jmc-linux-64.provo.novell.com:8443");
    client.connect();
  }
}

Executing this client as is, without an assigned trust store, will cause it to use the default trust store ($JAVA_HOME/lib/security/cacerts), which doesn’t contain our server’s public certificate, so the connection will fail with an exception:

javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...
Caused by: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...
Caused by: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...  

Configuring the Client Trust Store

The quick way to get this client to work properly is to assign our client’s trust store (containing the server’s public key and self-signed certificate) to JSSE system properties in this manner:

$ java -Djavax.net.ssl.trustStore=jmc-linux-64.truststore.jks \
  -Djavax.net.ssl.trustStorePassword=trustpass HttpsClient

If you get the path to the trust store file wrong, you’ll get a different cryptic exception:

javax.net.ssl.SSLException: 
java.lang.RuntimeException: Unexpected error: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...
Caused by: java.lang.RuntimeException: Unexpected error: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...
Caused by: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...

And if you get the password wrong, you’ll get yet another (somewhat less) cryptic exception:

java.net.SocketException: 
java.security.NoSuchAlgorithmException: 
Error constructing implementation 
(algorithm: Default, provider: SunJSSE, 
class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl)
... stack trace ...
Caused by: java.security.NoSuchAlgorithmException: 
Error constructing implementation 
(algorithm: Default, provider: SunJSSE, 
class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl)
... stack trace ...
Caused by: java.io.IOException: 
Keystore was tampered with, or password was incorrect
... stack trace ...
Caused by: java.security.UnrecoverableKeyException: 
Password verification failed
... stack trace ...

In these examples, my client is using my server’s fully qualified domain name in the URL, which is the common name we used when we created the self-signed certificate:

  ...
  public static void main(String[] args) 
  {
    HttpsClient client = new HttpsClient(
        "https://jmc-linux-64.provo.novell.com:8443");
    client.connect();
  }
}

This is the only name that will work with this trust store. In my next article I’ll show you how to generate certificates that work with aliases like the IP address. I’ll also show you how to add a hostname verifier to allow our client code to be a bit more intelligent about which aliases it rejects out of hand.

Java HTTPS Client Issues

I’ve written in the last several months about creating a client for a RESTful web-based auditing service. In that client, I had to implement client-side authentication, which is much more involved (or it should be anyway) than writing a simple secure web client that accesses content from secure public web servers.

Such a simple secure web client has only a little more functionality than a non-secure (http) web client. Essentially, it must perform a check after each connection to the secure web server to ensure that the server certificate is valid and trustworthy. This involves basically two steps:

  1. Verifying the server’s certificate chain.
  2. Verifying the server’s host name against that certificate.

Verifying the Certificate

The purpose of step 1 is to ensure that the service you’re attempting to use is not trying
to pull something shady on you. That is, the owner of the service was willing to put his or her name on the line with a Certificate Authority (CA) like Entrust or VeriSign. When you purchase a CA-signed certificate, you have to follow various procedures that document who you are, and why you’re setting up the service. But don’t worry – the CA doesn’t get to determine if your service is worthy of public consumption. Rather, only that you are who you say you are. The CA verifies actual existence, names, addresses, phone numbers, etc. If there’s any question about the service later, a consumer may contact that CA to find out the details of the service provider. This is dangerous for scam artists because they can be tracked and subsequently prosecuted. Thus, they don’t want to deal with Certificate Authorities if they don’t have to.

The client’s verification process (step 1) usually involves following the certificates in the certificate chain presented by the server back to a CA-signed certificate installed in its own trust store. A normal Sun JRE comes with a standard JKS truststore in $JAVA_HOME/lib/security/cacerts. This file contains a list of several dozen world-renowned public Certificate Authority certificates. By default, the SSLContext object associated with a normal HTTPSURLConnection object refers to a TrustManager object that will compare the certificates in the certificate chain presented by servers with the list of public CA certificates in the cacerts trust store file.
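
If you’re curious what that default trust manager is actually consulting, you can crack open the cacerts file with the same KeyStore API it uses under the covers. Here’s a minimal diagnostic sketch, assuming the stock file location and the well-known default password of “changeit” (both of which an administrator may have changed):

import java.io.FileInputStream;
import java.security.KeyStore;
import java.util.Enumeration;

public class ListCacerts
{
  public static void main(String[] args) throws Exception
  {
    // the stock JRE trust store and its conventional default password
    String path = System.getProperty("java.home")
        + "/lib/security/cacerts";
    KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
    FileInputStream in = new FileInputStream(path);
    try { ks.load(in, "changeit".toCharArray()); }
    finally { in.close(); }

    // print every trusted CA alias the default trust manager will honor
    for (Enumeration<String> aliases = ks.aliases(); aliases.hasMoreElements(); )
      System.out.println(aliases.nextElement());
  }
}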

If you have an older cacerts file that doesn’t happen to contain a certificate for a site to which you are connecting, or if you’ve set up the site yourself using a self-signed certificate, then you’ll encounter an exception when you attempt to connect:

javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target

Ouch! Does this mean you can’t connect to your test server while writing your client code? Can you only test against public servers? No, of course not, but unfortunately, it does mean a bit more work for you. You have basically two options. The first is to install your test server’s self-signed certificate into your default trust store. I first learned about this technique from a blog entry by Andreas Sterbenz in October of 2006. Nice article, Andreas. Thanks!

However, there is another way. You can write some temporary code in the form of your own sort of dumb trust manager that accepts any certificate from any server. Of course, you don’t want to ship your client with this code in place, but for testing and debugging, it’s nice not to have to mess up your default trust store with temporary certs that you may not want in there all the time. Writing DumbX509TrustManager is surprisingly simple. As with most well-considered Java interfaces, the number of required methods for the X509TrustManager interface is very small:

public class MyHttpsClient
{
  private Boolean isSecure;
  private String serverURL;
  private SSLSocketFactory sslSocketFactory;

  private class DumbX509TrustManager 
      implements X509TrustManager 
  {
    public void checkClientTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public void checkServerTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public X509Certificate[] getAcceptedIssuers() 
        { return new X509Certificate[] {}; }
  }
  ...

To make use of this trust manager, simply obtain an SSLSocketFactory object in your client’s constructor that you can configure with your dumb trust manager. Then, as you establish https connections to your test server, install your preconfigured SSLSocketFactory object, like this:

  ...
  private SSLSocketFactory getSocketFactory()
  {
    SSLSocketFactory socketFactory = null;
    try
    {
      SSLContext context = SSLContext.getInstance("SSLv3");
      context.init(null, new X509TrustManager[] 
          { new DumbX509TrustManager() }, null);
      socketFactory = context.getSocketFactory();
    }
    catch (Exception e) { e.printStackTrace(); }
    return socketFactory;
  }

  public MyHttpsClient(String serverURL)
  {
    this.serverURL = serverURL;
    if (isSecure = serverURL.startsWith("https:"))
      sslSocketFactory = getSocketFactory();
  }

  public void process() 
  {
    try
    {
      HttpURLConnection conn = null;
      URL url = new URL(serverURL);
      try
      {
        conn = (HttpURLConnection)url.openConnection();
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
              sconn.setSSLSocketFactory(sslSocketFactory);
        }
        conn.setRequestMethod(verb);
        conn.setDoOutput(false);
        conn.setDoInput(true);
        conn.connect();
        ...

That’s it. Warning: Don’t ship your client with DumbX509TrustManager in place. You don’t need it for public secure web servers anyway. If you know your client will only ever be used against properly configured public secure web servers, then you can rely on the default trust manager in the default socket factory associated with HttpsURLConnection.

If you think your client may be expected to work with non-public secure web servers with self-signed or company-signed certificates, then you have more work to do. Here, you have two options. You can write some code similar to that found in browsers, wherein the client displays a dialog box upon connection, asking if you would like to connect to this “unknown” server just this once, or forever (whereupon the client imports the server’s certificate into the default trust store). Or you can allow your customer to pre-configure the default trust store with certificates from non-public servers that he or she knows about in advance. But these are topics for another article.
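
As a rough preview of that import step, here’s a sketch of how client code might add a server’s certificate file to a JKS trust store at runtime. The file names, alias, and password are placeholders, not part of any shipping API:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class TrustStoreImporter
{
  // adds a DER- or PEM-encoded certificate to an existing JKS trust store
  public static void importCert(String storePath, char[] storePass,
      String certPath, String alias) throws Exception
  {
    CertificateFactory cf = CertificateFactory.getInstance("X.509");
    FileInputStream certIn = new FileInputStream(certPath);
    X509Certificate cert;
    try { cert = (X509Certificate)cf.generateCertificate(certIn); }
    finally { certIn.close(); }

    KeyStore ts = KeyStore.getInstance("JKS");
    FileInputStream storeIn = new FileInputStream(storePath);
    try { ts.load(storeIn, storePass); }
    finally { storeIn.close(); }

    // record the certificate as a trusted entry and write the store back out
    ts.setCertificateEntry(alias, cert);
    FileOutputStream storeOut = new FileOutputStream(storePath);
    try { ts.store(storeOut, storePass); }
    finally { storeOut.close(); }
  }
}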

Verifying the Server

Returning to the original two-step process, the purpose of step 2 (host name verification) is to ensure that the certificate you received from the service to which you connected was not stolen by a scammer.

When a CA-signed certificate is generated, the information sent to the Certificate Authority by the would-be service provider includes the fully qualified domain name of the server for which the new cert is intended. This FQDN is embedded in a field of the certificate, which the client uses to ensure that the server is really the owner of the certificate that it’s presenting.

As I mentioned in a previous article, Java’s keytool utility won’t let you put alternate names (such as an IP address) into a self-signed cert, so the default host name verification will fail whenever the host in your URL doesn’t exactly match the certificate’s CN. Again, a simple dummy class comes to the rescue, in the form of the DumbHostnameVerifier class. Just implement the HostnameVerifier interface, which has one required method, verify. Have it return true all the time, and you won’t see any more Java exceptions like this:

HTTPS hostname wrong:  
should be <jmc-linux-64.provo.novell.com>

Here’s an example:

  ...
  private class DumbHostnameVerifier 
      implements HostnameVerifier
  {
    public boolean verify(String arg0, SSLSession arg1) 
        { return true; }
  }
  ...
  public void process() 
  {
        ...
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
          sconn.setSSLSocketFactory(sslSocketFactory);
          sconn.setHostnameVerifier(new DumbHostnameVerifier());
        }
        ...

Scoping the Changes

A final decision you should make is the proper scope for setting the dummy trust manager and hostname verifier objects. The JSSE framework is extremely flexible. You can set these on a per-request basis, or as the class defaults, so that whenever a new HttpsURLConnection object is created, your objects are automatically assigned to it internally. For instance, you can use the following code to set up class default values:

public class MyHttpsClient
{
  private static class DumbX509TrustManager 
      implements X509TrustManager 
  {
    public void checkClientTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public void checkServerTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public X509Certificate[] getAcceptedIssuers() 
        { return new X509Certificate[] {}; }
  }

  private static class DumbHostnameVerifier 
      implements HostnameVerifier
  {
    public boolean verify(String arg0, SSLSession arg1) 
        { return true; }
  }

  private static SSLSocketFactory getSocketFactory()
  {
    SSLSocketFactory socketFactory = null;
    try
    {
      SSLContext context = SSLContext.getInstance("SSLv3");
      context.init(null, new X509TrustManager[] 
          { new DumbX509TrustManager() }, null);
      socketFactory = context.getSocketFactory();
    }
    catch (Exception e) { e.printStackTrace(); }
    return socketFactory;
  }

  static
  {
    HttpsURLConnection.setDefaultHostnameVerifier(
        new DumbHostnameVerifier());
    HttpsURLConnection.setDefaultSSLSocketFactory(
        getSocketFactory());
  }

  private String serverURL;
  
  public MyHttpsClient(String serverURL)
  {
    this.serverURL = serverURL;
  }
  ...

You can now remove the isSecure check in the process routine, because new instances of HttpsURLConnection will automatically be assigned objects of your new trust manager and hostname verifier classes – the default objects you installed in the MyHttpsClient class’s static initializer.
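
For illustration, here’s roughly what a trimmed-down process might look like once those defaults are in place. This is just a sketch – it leans on the java.net and java.io imports from the earlier HttpsClient listing, and I’ve hard-coded a GET for simplicity:

  public void process()
  {
    try
    {
      URL url = new URL(serverURL);

      // no HttpsURLConnection cast and no isSecure branch needed: the
      // default SSLSocketFactory and HostnameVerifier installed in the
      // static initializer apply to any https connection automatically
      HttpURLConnection conn = (HttpURLConnection)url.openConnection();
      conn.setRequestMethod("GET");
      conn.setDoInput(true);
      conn.connect();

      InputStream is = conn.getInputStream();
      int bytes;
      byte[] buffer = new byte[512];
      while ((bytes = is.read(buffer, 0, 512)) > 0)
        System.out.write(buffer, 0, bytes);
    }
    catch (IOException e) { e.printStackTrace(); }
  }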

With that, you’re set to connect to any https server. Here’s a little insight for you: The difficult part – the real work – of writing https clients involves writing real code for these classes. I’ll write a future article that provides details on these processes. Again, I remind you: Don’t accidentally ship your clients with DumbHostnameVerifier in place! (Unless, of course, you want to. After all, it’s your code…)

Effective Communications and Apache Ant

Nothing bothers me more, when searching for the solution to a software problem that I’ve encountered, than to find someone with similar problems asking questions on various message boards, only to have response after response sound something like this:

“What’s changed in your code?”

“Look for the problem in your code.”

“You’ve messed something up in your code.”

“Your environment is hosed.”

Recently, I had a problem building a very large Java project with Apache Ant. I kept getting “Error starting modern compiler” about a third of the way into the build (5-10 minutes in). Not getting any help from the core project team, I did what I usually do – I turned to a Google search and immediately found a few people with the same problem. Unfortunately, most of them were using Ant in conjunction with Eclipse. I was getting the same error message from the command line.

By now I can usually judge the age of a problem by the number and quality of responses I find in a Google search. This was clearly a fairly recent issue. One link I found was in reference to the Apache build itself, wherein a bug had been filed against Ant for this very issue (or one very nearly like it).

But it irks me to no end when people feel the need to respond to queries on issues like this, without having anything useful to say. If you haven’t encountered the problem before, or you don’t have any particular insight into what’s causing it, then please don’t respond with silly accusations about how the original poster’s environment must be hosed, or how his code must be at fault.

The fact is, software tools have bugs. It’s that simple. Even a tool as revered in the Java world as Ant will have defects. The solutions I eventually found included either changing my project build script in such a way as to cause Ant to fork a new process when it gets to a particularly large compile, or increasing the amount of memory allocated to Ant’s JVM. I chose to set ANT_OPTS=-Xmx512m in my environment before executing Ant (mainly because I disagree in principle with project-specific solutions to general tool problems).

As it turns out, the root cause of this problem seems to be related more to the fact that Ant can’t spawn a child process, rather than that the wrong compiler was referred to by some environment variable. Java 1.6 has more problems than Java 1.5, probably because 1.6 is larger and more resource intensive than 1.5. The inaccuracy of the message (“modern compiler”??) leads us to believe that the problem is with the compiler itself. But that’s an entirely different problem in effective communications…

Java Secure HTTP Keys, Part II

In my last article, I described the process of configuring client-side key and trust stores within a Java web client application. To keep it simple, I purposely used the built-in functionality of HttpsURLConnection to read certain System properties to obtain references to these credential stores, along with their passwords.

However, for an embedded client–as would be the case with library code–you’d not want to rely on any System properties, because these belong to your user and her application, not to your library. But, manually configuring the key and trust stores for a client-side https connection is a little more involved.

In this article, I’d like to show you how it’s done, and I’d like to begin by suggesting some required reading for a solid understanding of the way it all works. I’m referring to the Java Secure Socket Extensions (JSSE) Reference Guide. Since JSSE was introduced in Java 1.4, and hasn’t really changed much since then, this document is officially up to date–even in Java SE 6.

Getting Started…

Note that the process for setting up the key and trust stores hasn’t changed, so I’ll simply refer you to my previous article for this information.

To summarize, the goal here is to associate our key and trust stores with our client-side connections without specifying them in System properties. And it’s amazing the amount of extra work we have to go through in order to accomplish this seemingly simple task.

The first thing we’ll do is remove the calls to System.setProperty in our AuditRestClient constructor. We still need the values we wrote to those properties, so we’ll just convert them to constants in the AuditRestClient class. At some later point, these should undoubtedly be converted to properties that we read from our own configuration file, but for now, these constants will do:

  public class AuditRestClient
  {
    // URL components (should be configured variables)
    private static final String HTTP = "HTTP";
    private static final String HTTPS = "HTTPS";
    private static final String HOSTNAME = "10.0.0.1";
    private static final Integer PORT = 9015;
    private static final String CTYPE = "application/audit1+json";

    // secure channel key material stores (should be configured)
    private static final String keystore = "/tmp/keystore.jks";
    private static final String truststore = "/tmp/truststore.jks";
    private static final String keypass = "changeit";
    private static final String trustpass = "changeit";

    // secure channel variables
    private Boolean isSecure = true;
    private SSLSocketFactory sslSocketFactory = null;

    public AuditRestClient()
    {
      setupSocketFactory();
    }
    ...

Building Your Own Socket Factory

The new version of the AuditRestClient constructor calls a private method called setupSocketFactory, which configures an SSLSocketFactory object for use later when we configure our HttpsURLConnection object. Here’s the code:

    ...
    private void setupSocketFactory()
    {
      try
      {
        String protocol = "TLS";
        String type = "JKS";
        String algorithm = KeyManagerFactory.getDefaultAlgorithm();
        String trustAlgorithm =
            TrustManagerFactory.getDefaultAlgorithm();

        // create and initialize an SSLContext object
        SSLContext sslContext = SSLContext.getInstance(protocol);
        sslContext.init(getKeyManagers(type, algorithm),
            getTrustManagers(type, trustAlgorithm),
            new SecureRandom());

        // obtain the SSLSocketFactory from the SSLContext
        sslSocketFactory = sslContext.getSocketFactory();
      }
      catch (Exception e) { e.printStackTrace(); }
    }
    ...

This private helper method calls two other private methods, getKeyManagers and getTrustManagers, to set up the key and trust managers. Each of these routines in turn calls a routine named getStore to load the corresponding key or trust store from disk before handing it to the appropriate manager factory. Again, here’s the code for all three of these methods:

    ...
    private KeyStore getStore(String type,
        String filename, String pwd) throws Exception
    {
      KeyStore ks = KeyStore.getInstance(type);
      InputStream istream = null;

      try
      {
        File ksfile = new File(filename);
        istream = new FileInputStream(ksfile);
        ks.load(istream, pwd != null? pwd.toCharArray(): null);
      }
      finally { if (istream != null) istream.close(); }

      return ks;
    }

    private KeyManager[] getKeyManagers(String type,
        String algorithm) throws Exception
    {
      KeyStore ks = getStore(type, keystore, keypass);
      KeyManagerFactory kmf =
          KeyManagerFactory.getInstance(algorithm);

      kmf.init(ks, keypass.toCharArray());

      return kmf.getKeyManagers();
    }

    private TrustManager[] getTrustManagers(String type,
        String algorithm) throws Exception
    {
      KeyStore ts = getStore(type, truststore, trustpass);
      TrustManagerFactory tmf =
          TrustManagerFactory.getInstance(algorithm);

      tmf.init(ts);

      return tmf.getTrustManagers();
    }
    ...

The getStore method calls KeyStore.getInstance to obtain an instance of the key store associated with the specified type–in this case, “JKS”. It should be noted that if you wish to specify your own provider, you may do so by calling the other version of KeyStore.getInstance, which accepts a string provider name, as well.
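
For example, pinning the store to the standard “SUN” provider (the one that ships the JKS implementation in Sun JREs) would look something like this – just a sketch, since the provider-less form above is usually all you need:

      // same JKS type as before, but pinned explicitly to the "SUN"
      // provider; throws NoSuchProviderException if that provider
      // isn't registered with the running JRE
      KeyStore ks = KeyStore.getInstance("JKS", "SUN");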

Using Your New Socket Factory

Now that you have your socket factory built (whew!), it’s time to look at how it’s used by the rest of the AuditRestClient code. Here’s the context for the use of the new object:

    public void send(JSONObject event)
    {
      byte[] bytes = event.toString().getBytes();
      HttpURLConnection conn = null;

      try
      {
        URL url = new URL(isSecure? HTTPS: HTTP,
            HOSTNAME, PORT, "/audit/log/test");
        conn = (HttpURLConnection)url.openConnection();
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
          sconn.setSSLSocketFactory(sslSocketFactory);
        }
        conn.setRequestMethod("POST");
        conn.addRequestProperty("Content-Length", "" + bytes.length);
        conn.addRequestProperty("Content-Type", CTYPE);
        conn.addRequestProperty("Connection", "Keep-Alive");
        conn.setDoOutput(true);
        conn.setDoInput(true);
        conn.connect();
        ...

Now, this code is completely independent of application-owned System properties. Additionally, it’s portable between secure and non-secure HTTP channels. This protocol portability requires a type cast of the connection from HttpURLConnection to HttpsURLConnection in just one place – the isSecure block in the example above.

You may have also noticed that I converted the previous version of send to use the other popular form of the URL constructor. This form accepts the constituent parts of the URL as separate parameters, rather than as a single string. It’s a bit more efficient under the covers, as the constructor doesn’t need to parse these components from the URL string. It also made more sense on my end, since I’m parameterizing several of these parts now anyway. Attributes like HOSTNAME and PORT will eventually be read from a library configuration file.
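
To make the comparison concrete, here are the two constructor forms side by side, using the same sample address and path as above (both belong inside the existing try block, since either can throw MalformedURLException):

        // single-string form: the constructor has to parse out the pieces
        URL parsed = new URL("https://10.0.0.1:9015/audit/log/test");

        // multi-argument form: the pieces arrive already separated
        URL direct = new URL("https", "10.0.0.1", 9015, "/audit/log/test");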

Java Secure HTTP Client Key Management

My current project at Novell involves the development of a ReSTful web service for submission of audit records from security applications. The server is a Jersey servlet within an embedded Tomcat 6 container.

One of the primary reasons for using a ReSTful web service for this purpose is to alleviate the need to design and build a heavy-weight audit record submission client library. Such client libraries need to be orthogonally portable across both hardware platforms and languages in order to be useful to Novell’s customers. Just maintaining the portability of this client library in one language is difficult enough, without adding multiple languages to the matrix.

Regardless of our motivation, we still felt the need to provide a quality reference implementation of a typical audit client library to our customers. They may incorporate as much or as little of this code as they wish, but a good reference implementation is worth a thousand pages of documentation. (Don’t get me wrong, however–this is no excuse for not writing good documentation! The combination of quality concise documentation and a good reference implementation is really the best solution.)

The idea here is simple: Our customers won’t have to deal with difficulties that we stumble upon and then subsequently provide solutions for. Additionally, it’s just plain foolish to provide a server component for which you’ve never written a client. It’s like publishing a library API that you’ve never written to. You don’t know if the API will even work the way you originally intended until you’ve at least tried it out.

Since we’re already using Java in the server, we’ve decided that our initial client reference implementation should also be written in Java. Yesterday found my code throwing one exception after another while simply trying to establish the TLS connection to the server from the client. All of these problems ultimately came down to my lack of understanding of the Java key store and trust store concepts.

You see, the establishment of a TLS connection from within a Java client application depends heavily on the proper configuration of a client-side trust store. If you’re using mutual authentication, as we are, then you also need to properly configure a client-side key store for the client’s private key. The level at which we are consuming Java network interfaces also demands that we specify these stores in system properties. More on this later…

Using Curl as an Https Client

We based our initial assumptions about how the Java client needed to be configured on our use of the curl command line utility in order to test the web service. The curl command line looks something like this:

  curl -k --cert client.cer --cert-type DER --key client-key.pem
    --key-type PEM --header "Content-Type: application/audit+json"
    -X POST --data @test-event.json https://10.0.0.1:9015/audit/log/test

The important aspects of this command-line include the use of the --cert, --cert-type, --key and --key-type parameters, as well as the fact that we specified a protocol scheme of “https” in the URL.

With one exception, the remaining options are related to which http method to use (-X), what data to send (–data), and which message properties to send (–header). The exception is the -k option, and therein lay most of our problems with this Java client.

The curl man-page indicates that the -k/--insecure option allows the TLS handshake to succeed without verifying the server certificate against the client’s CA (Certificate Authority) trust store. This option was added because several releases of the curl package shipped with a terribly out-dated trust store, and people were getting tired of manually adding certificates to their trust stores every time they hit a newer site.

Doing it in Java

But this really isn’t the safe way to access any secure public web service. Without server certificate verification, your client can’t really know that it’s not communicating with a server that just says it’s the right server. (“Trust me!”)

During the TLS handshake, the server’s certificate is passed to the client. The client should then verify the subject name of the certificate. But verify it against what? Well, let’s consider–what information does the client have access to, outside of the certificate itself? It has the fully qualified URL that it used to contact the server, which usually contains the DNS host name. And indeed, a client is supposed to compare the CN (Common Name) portion of the subject DN (Distinguished Name) in the server certificate to the DNS host name in the URL, according to section 3.1 “Server Identity” of RFC 2818 “HTTP over TLS”.
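
You can see exactly what the client has to work with by asking an HttpsURLConnection for the server’s certificate chain after connecting, and printing the subject DN of the leaf certificate. This is only a diagnostic sketch – the host name is a placeholder, and it assumes the server’s certificate already validates against your trust store:

import java.net.URL;
import java.security.cert.Certificate;
import java.security.cert.X509Certificate;
import javax.net.ssl.HttpsURLConnection;

public class ShowServerCert
{
  public static void main(String[] args) throws Exception
  {
    URL url = new URL("https://www.example.com/");
    HttpsURLConnection conn = (HttpsURLConnection)url.openConnection();
    conn.connect();

    // the first certificate in the chain is the server's own certificate;
    // its subject DN carries the CN the client compares to the host name
    Certificate[] chain = conn.getServerCertificates();
    X509Certificate server = (X509Certificate)chain[0];
    System.out.println(server.getSubjectX500Principal().getName());

    conn.disconnect();
  }
}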

Java’s HttpsURLConnection class strictly enforces the advice given in RFC 2818 regarding peer verification. You can override these constraints, but you have to basically write your own version of HttpsURLConnection, or sub-class it and override the methods that verify peer identity.

Creating Java Key and Trust Stores

Before even attempting a client connection to our server, we had to create three key stores:

  1. A server key store.
  2. A client key store.
  3. A client trust store.

The server key store contains the server’s self-signed certificate and private key. This store is used by the server to sign messages and to return credentials to the client.

The client key store contains the client’s self-signed certificate and private key. This store is used by the client for the same purpose–to send client credentials to the server during the TLS mutual authentication handshake. It’s also used to sign client-side messages for the server during the TLS handshake. (Note that once authentication is established, encryption happens using a secret or symmetric key encryption algorithm, rather than public/private or asymmetric key encryption. Symmetric key encryption is a LOT faster.)

The client trust store contains the server’s self-signed certificate. Client-side trust stores normally contain a set of CA root certificates. These root certificates come from various widely-known certificate vendors, such as Entrust and Verisign. Presumably, almost all publicly visible servers have a purchased certificate from one of these CA’s. Thus, when your web browser connects to such a public server over a secure HTTP connection, the server’s certificate can be verified as having come from one of these well-known certificate vendors.

I first generated my server key store, but this keystore contains the server’s private key also. I didn’t want the private key in my client’s trust store, so I extracted the certificate into a stand-alone certificate file. Then I imported that server certificate into a trust store. Finally, I generated the client key store:

  $ keytool -genkey -alias server -keyalg RSA \
  > -storepass changeit -keystore server-keystore.jks
  What is your first and last name?
    [Unknown]:  audit-server
  What is the name of your organizational unit?
    [Unknown]:  Eng
  What is the name of your organization?
    [Unknown]:  Novell
  What is the name of your City or Locality?
    [Unknown]:  Provo
  What is the name of your State or Province?
    [Unknown]:  Utah
  What is the two-letter country code for this unit?
    [Unknown]:  US
  Is CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US correct?
    [no]:  yes

  Enter key password for <server>
          (RETURN if same as keystore password):  
  $
  $ keytool -exportcert -keystore server-keystore.jks \
  > -file server.der -alias server -storepass changeit
  Certificate stored in file <server.der>
  $
  $ keytool -importcert -trustcacerts -alias server \
  > -keystore server-truststore.jks -storepass changeit \
  > -file server.der
  Owner: CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Issuer: CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Serial number: 491cad67
  Valid from: Thu Nov 13 15:42:47 MST 2008 until: Wed Feb 11 15:42:47 MST 2009
  Certificate fingerprints:
           MD5:  EE:FA:EE:78:A8:42:2B:F2:3A:04:50:37:D3:94:B3:C0
           SHA1: 4E:BA:9B:2F:FC:84:10:5A:2E:62:D2:5B:B3:70:70:B5:2F:03:E1:CD
	   Signature algorithm name: SHA1withRSA
           Version: 3
  Trust this certificate? [no]:  yes
  Certificate was added to keystore
  $
  $ keytool -genkey -alias client -keyalg RSA -storepass changeit \
  > -keystore client-keystore.jks
  What is your first and last name?
    [Unknown]:  audit-client
  What is the name of your organizational unit?
    [Unknown]:  Eng
  What is the name of your organization?
    [Unknown]:  Novell
  What is the name of your City or Locality?
    [Unknown]:  Provo
  What is the name of your State or Province?
    [Unknown]:  Utah
  What is the two-letter country code for this unit?
    [Unknown]:  US
  Is CN=audit-client, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US correct?
    [no]:  yes

  Enter key password for <client>
          (RETURN if same as keystore password):  
  $
  $ ls -1
  client-keystore.jks
  server.der
  server-keystore.jks
  server-truststore.jks
  $

Telling the Client About Keys

There are various ways of telling the client about its key and trust stores. One method involves setting system properties on the command line. This is commonly used because it avoids the need to enter absolute paths directly into the source code, or to manage separate configuration files.

  $ java -Djavax.net.ssl.keyStore=/tmp/keystore.jks ...

Another method is to set the same system properties inside the code itself, like this:

  public class AuditRestClient
  {
    public AuditRestClient() 
    {
      System.setProperty("javax.net.ssl.keyStore", 
          "/tmp/keystore.jks");
      System.setProperty("javax.net.ssl.keyStorePassword", 
          "changeit");
      System.setProperty("javax.net.ssl.trustStore", 
          "/tmp/truststore.jks");
      System.setProperty("javax.net.ssl.trustStorePassword", 
          "changeit");
    }
    ...

I chose the latter, as I’ll eventually extract the strings into property files loaded as needed by the client code. I don’t really care for the fact that Java makes me specify these stores in system properties. This is especially a problem for our embedded client code, because our customers may have other uses for these system properties in the applications in which they will embed our code. Here’s the rest of the simple client code:

    ...
    public void send(JSONObject event) 
    {
      byte[] bytes = event.toString().getBytes();
      HttpURLConnection conn = null; 
		
      try
      {
        // establish connection parameters
        URL url = new URL("https://10.0.0.1:9015/audit/log/test");
        conn = (HttpURLConnection)url.openConnection();
        conn.setRequestMethod("POST");
        conn.addRequestProperty("Content-Length", "" + bytes.length);
        conn.addRequestProperty("Content-Type", "application/audit1+json");
        conn.setDoOutput(true);
        conn.setDoInput(true);
        conn.connect();

        // send POST data
        OutputStream out = conn.getOutputStream();
        out.write(bytes);
        out.flush();
        out.close(); 		

        // get response code and data
        System.out.println(conn.getResponseCode());
        BufferedReader read = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String query = null;
        while((query = read.readLine()) != null)
          System.out.println(query);
      }
      catch(MalformedURLException e) { e.printStackTrace(); }
      catch(ProtocolException e) { e.printStackTrace(); }
      catch(IOException e) { e.printStackTrace(); }
      finally { if (conn != null) conn.disconnect(); }
    }
  }

Getting it Wrong…

I also have a static test “main” function so I can send some content. But when I tried to execute this test, I got an exception indicating that the server certificate didn’t match the host name. I was using a hard-coded IP address (10.0.0.1), but my certificate contained the name “audit-server”.

It turns out that the HttpsURLConnection class uses an algorithm to determine if the server that sent the certificate really belongs to the server on the other end of the connection. If the URL contains an IP address, then it attempts to locate a matching IP address in the “Alternate Names” portion of the server certificate.
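
You can check for yourself whether a certificate carries any alternate names by asking the X509Certificate object directly. Here’s a small diagnostic sketch, assuming the server-truststore.jks file and “server” alias created with the keytool commands above:

import java.io.FileInputStream;
import java.security.KeyStore;
import java.security.cert.X509Certificate;
import java.util.Collection;
import java.util.List;

public class ShowAltNames
{
  public static void main(String[] args) throws Exception
  {
    // load the server certificate from the trust store built earlier
    KeyStore ts = KeyStore.getInstance("JKS");
    FileInputStream in = new FileInputStream("server-truststore.jks");
    try { ts.load(in, "changeit".toCharArray()); }
    finally { in.close(); }

    X509Certificate cert = (X509Certificate)ts.getCertificate("server");

    // each entry is a two-element list: an Integer type tag and the value;
    // type 2 is a DNS name and type 7 is an IP address
    Collection<List<?>> names = cert.getSubjectAlternativeNames();
    if (names == null)
      System.out.println("no alternate names in this certificate");
    else
      for (List<?> name : names)
        System.out.println(name.get(0) + ": " + name.get(1));
  }
}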

Did you notice a keytool prompt to enter alternate names when you generated your server certificate? I didn’t–and it turns out there isn’t one. The Java keytool utility doesn’t provide a way to enter alternate names–a standardized extension of the X509 certificate format. To enter an alternate name containing the requisite IP address, you’d have to generate your certificate using the openssl utility, or some other more functional certificate generation tool, and then find a way to import these foreign certificates into a Java key store.

…And then Doing it Right

On the other hand, if the URL contains a DNS name, then HttpsURLConnection attempts to match the CN portion of the Subject DN with the DNS name. This means that your server certificates have to contain the DNS name of the server as the CN portion of the subject. Returning to keytool, I regenerated my server certificate and stores using the following commands:

  $ keytool -genkey -alias server -keyalg RSA \
  > -storepass changeit -keystore server-keystore.jks
  What is your first and last name?
    [Unknown]:  jmc-test.provo.novell.com

  ... (the rest is the same) ...
  
  $ keytool -exportcert -keystore server-keystore.jks \
  > -file server.der -alias server -storepass changeit
  Certificate stored in file <server.der>
  $
  $ keytool -importcert -trustcacerts -alias server \
  > -keystore server-truststore.jks -storepass changeit \
  > -file server.der
  Owner: CN=jmc-test.provo.novell.com, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Issuer: CN=jmc-test.provo.novell.com, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Serial number: 491cad67
  Valid from: Thu Nov 13 15:42:47 MST 2008 until: Wed Feb 11 15:42:47 MST 2009
  Certificate fingerprints:
           MD5:  EE:FA:EE:78:A8:42:2B:F2:3A:04:50:37:D3:94:B3:C0
           SHA1: 4E:BA:9B:2F:FC:84:10:5A:2E:62:D2:5B:B3:70:70:B5:2F:03:E1:CD
	   Signature algorithm name: SHA1withRSA
           Version: 3
  Trust this certificate? [no]:  yes
  Certificate was added to keystore
  $

Of course, I also had to change the way I was specifying my URL in the client code:

  ...
  URL url = new URL("https://jmc-test.provo.novell.com:9015/audit/log/test");
  conn = (HttpURLConnection)url.openConnection();
  ...

At this point, I was finally able to connect to my server and send the message. Is this reasonable? Probably not for my case. Both my client and server are within a corporate firewall, and controlled by the same IT staff, so to force this sort of gyration is really unreasonable. Can we do anything about it? Well, one thing that you can do is to provide a custom host name verifier like this:

  ...
  URL url = new URL("https://jmc-sentinel.dnsdhcp.provo.novell.com:9015/audit/AuditLog/test");
  conn = (HttpsURLConnection)url.openConnection();
  conn.setHostnameVerifier(new HostnameVerifier()
  { 
    public boolean verify(String hostname, SSLSession session) 
        { return true; }
  });
  conn.setRequestMethod("POST");
  ...

When you do this, however, you should be aware that you give up the right to treat the connection as anything but an https connection. Note that we had to change the type of “conn” to HttpsURLConnection from its original type of HttpURLConnection. This means, sadly, that this code will now only work with secure http connections. I chose to use the DNS name in my URL, although a perfectly viable option would also have been the creation of a certificate containing the IP address as an “Alternate Name”.
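
If you’d rather keep the variable typed as HttpURLConnection so the same code path still handles plain http URLs, one variation is to apply the verifier only when the connection actually turns out to be secure. A sketch of that approach, using the same URL and anonymous verifier as above:

  ...
  URL url = new URL("https://jmc-sentinel.dnsdhcp.provo.novell.com:9015/audit/AuditLog/test");
  HttpURLConnection conn = (HttpURLConnection)url.openConnection();

  // relax host name verification only when this really is an https
  // connection; plain http URLs pass through this code untouched
  if (conn instanceof HttpsURLConnection)
  {
    ((HttpsURLConnection)conn).setHostnameVerifier(new HostnameVerifier()
    {
      public boolean verify(String hostname, SSLSession session)
          { return true; }
    });
  }
  conn.setRequestMethod("POST");
  ...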

Is This Okay?!

Ultimately, our client code will probably be embedded in some fairly robust and feature-rich security applications. Given this fact, we can’t really expect our customers to be okay with our sample code taking over the system properties for key and trust store management. No, we’ll have to rework this code to do the same sort of thing that Tomcat does–manage our own lower-level SSL connections, and thereby import certificates and CA chains ourselves. In a future article, I’ll show you how to do this.

PUT or POST: The REST of the Story

Web service designers have tried for some time now to correlate CRUD (Create, Retrieve, Update and Delete) semantics with the Representational State Transfer (REST) verbs defined by the HTTP specification–GET, PUT, POST, DELETE, HEAD, etc.

So often, developers will try to correlate these two concepts–CRUD and REST–using a one-to-one mapping of verbs from the two spaces, like this:

  • Create = PUT
  • Retrieve = GET
  • Update = POST
  • Delete = DELETE

“How to Create a REST Protocol” is an example of a very well-written article about REST, but which makes this faulty assumption. (In fairness to the author, he may well have merely “simplified REST for the masses”, as his article doesn’t specifically state that this mapping is the ONLY valid mapping. And indeed, he makes the statement that the reader should not assume the mapping indicates a direct mapping to SQL operations.)

In the article “I don’t get PUT versus POST,” the author clearly understands the semantic differences between PUT and POST, but fails to appreciate the benefits (derived from the HTTP protocol) of the proper REST semantics. Ultimately, he promotes the simplified CRUD-to-REST mapping as laid out above.

But such a trivial mapping is inaccurate at best. The semantics of these two verb spaces have no direct correlation. This is not to say you can’t create a CRUD client that can talk to a REST service. Rather, you need to add some additional higher-level logic to the mapping to complete the transformation from one space to the other.

While Retrieve really does map to an HTTP GET request, and likewise Delete really does map to an HTTP DELETE operation, the same cannot be said of Create and PUT or Update and POST. In some cases, Create means PUT, but in other cases it means POST. Likewise, in some cases Update means POST, while in others it means PUT.

The crux of the issue comes down to a concept known as idempotency. An operation is idempotent if a sequence of two or more of the same operation results in the same resource state as would a single instance of that operation. According to the HTTP 1.1 specification, GET, HEAD, PUT and DELETE are idempotent, while POST is not. That is, a sequence of multiple attempts to PUT data to a URL will result in the same resource state as a single attempt to PUT data to that URL, but the same cannot be said of a POST request. This is why a browser always pops up a warning dialog when you back up over a POSTed form. “Are you sure you want to purchase that item again!?” (Would that the warning was always this clear!)

After that discussion, a more realistic mapping would seem to be:

  • Create = PUT iff you are sending the full content of the specified resource (URL).
  • Create = POST if you are sending a command to the server to create a subordinate of the specified resource, using some server-side algorithm.
  • Retrieve = GET.
  • Update = PUT iff you are updating the full content of the specified resource.
  • Update = POST if you are requesting the server to update one or more subordinates of the specified resource.
  • Delete = DELETE.

NOTE: “iff” means “if and only if”.

Analysis

Create can be implemented using an HTTP PUT, if (and only if) the payload of the request contains the full content of the exactly specified URL. For instance, assume a client issues the following Create OR Update request:

   PUT /GrafPak/Pictures/1000.jpg HTTP/1.1
   ...

   <full content of 1000.jpg ... >

This command is idempotent because sending the same command once or five times in a row will have exactly the same effect; namely that the payload of the request will end up becoming the full content of the resource specified by the URL, “/GrafPak/Pictures/1000.jpg”.

On the other hand, the following request is NOT idempotent because the results of sending it either once or several times are different:

   POST /GrafPak/Pictures HTTP/1.1
   ...

   <?xml version="1.0" encoding="UTF-8"?> 
   <GrafPak operation="add" type="jpeg">
     <![CDATA[ <full content of some picture ... > ]]>
   </GrafPak>

Specifically, sending this command twice will result in two “new” pictures being added to the Pictures container on the server. According to the HTTP 1.1 specification, the server’s response should be something like “201 Created” with Location headers for each response containing the resource (URL) references to the newly created resources–something like “/GrafPak/Pictures/1001.jpg” and “/GrafPak/Pictures/1002.jpg”.

The value of the Location response header allows the client application to directly address these new picture objects on the server in subsequent operations. In fact, the client application could even use PUT to directly update these new pictures in an idempotent fashion.

What it comes down to is that PUT must create or update a specified resource by sending the full content of that same resource. POST operations, on the other hand, tell a web service exactly how to modify the contents of a resource that may be considered a container of other resources. POST operations may or may not result in additional directly accessible resources.
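
To tie this back to code, here’s a hedged sketch of both styles using HttpURLConnection. The GrafPak URLs are the hypothetical ones from the examples above (with a made-up host), and a real service would define its own media types and error handling:

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class GrafPakClient
{
  // idempotent: the payload becomes the full content of exactly this URL
  static void putPicture(byte[] jpeg) throws Exception
  {
    URL url = new URL("http://example.com/GrafPak/Pictures/1000.jpg");
    HttpURLConnection conn = (HttpURLConnection)url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);
    conn.addRequestProperty("Content-Type", "image/jpeg");
    OutputStream out = conn.getOutputStream();
    out.write(jpeg);
    out.close();
    // 200/204 for an update, 201 if the PUT created the resource
    System.out.println(conn.getResponseCode());
  }

  // not idempotent: each call asks the server to create a new subordinate
  static void postPicture(byte[] addCommand) throws Exception
  {
    URL url = new URL("http://example.com/GrafPak/Pictures");
    HttpURLConnection conn = (HttpURLConnection)url.openConnection();
    conn.setRequestMethod("POST");
    conn.setDoOutput(true);
    conn.addRequestProperty("Content-Type", "application/xml");
    OutputStream out = conn.getOutputStream();
    out.write(addCommand);
    out.close();
    // expect 201 Created plus the URL of the newly created resource
    System.out.println(conn.getResponseCode());
    System.out.println(conn.getHeaderField("Location"));
  }
}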


Getting the Most Out of Your HTPC

Well, now that you have this wonderful Home Theater PC (HTPC), what do you do with it? In this article, I’ll provide some insight on how to configure your HTPC for maximum enjoyment. You paid a lot for this fancy piece of hardware. In fact, I paid as much for my HTPC as I did for my Denon 7.1 channel digital decoder and amplifier, and about half as much as I paid for my 720p projector. There’d better be a good reason for spending that much. Let’s explore…

Watching on the Big Screen

The first thing to consider is how you’ve connected your HTPC. Mine is not connected physically to my video system. That is, I have my HTPC sitting in another room of my home that I currently use as a den or study. It allows me the peace and quiet that I need to continue the on-going process of converting my movie collection into streamable media that I can serve from my HTPC.

I recognize that some people may want to connect their HTPC directly into their home theater system. Eventually, I’ll do this myself. But I have a problem, and you may also. Unless you have the latest video and audio equipment in your home theatre, you’re probably facing a physical connection issue that can’t simply be ignored. By this I mean that your slightly older video display (projector or TV) probably accepts, at most, analog component video (YPbPr) inputs. But the connection on the back of your HTPC (if you purchased the Gigabyte motherboard I mentioned in that first HTPC article) has only VGA, DVI-D and HDMI outputs.

Direct Digital Connection

If you’re lucky enough to have newer home theater equipment–a 1080p projector with HDMI inputs and a newer 5.1 channel digital decoder/amp with HDMI video switching capability–then you’re really set. Just plug the HDMI output from the onboard ATI video circuitry into one of the free HDMI inputs on your amplifier, and start watching!

The nice thing about an end-to-end digital connection is that you’ll be able to watch your Blu-ray content in the highest resolution available to your display device. Such a connection between your HTPC and your TV will provide exactly the same home theater experience you’d get from your 350 dollar Blu-ray player.

Direct Analog Connection

As mentioned, to get the most out of a direct connection, I really need to use the digital (either DVI-D or HDMI) outputs. But, short of upgrading my amp and projector, I have little recourse here. My somewhat older Denon amplifier has component video switching for up to three inputs switched to one output, which is really nice for older devices and monitors. But unfortunately, none of this is compatible with modern digital signals. Until I come into some spare cash, I’m going to have to settle for an end-to-end analog signal between my HTPC and my projector.

To make matters worse, VGA has nothing whatsoever to do with component video, except that they’re both analog signals. Unfortunately, the two signals operate in different color spaces (RGB versus YPbPr), so there’s no ad-hoc wiring harness you can solder together that will generate component video from the VGA signal at the back of your HTPC.

The solution to this problem is an inexpensive video transcoder. There are various devices available for reasonable prices that will actively convert from one color space to the other. Some of them have more capabilities–and are thus more expensive–than others. I’ve mentioned these devices briefly in my first HTPC article, but I’ll cover them in more detail here.

The device I’ve found that seems to be the best compromise between price and performance is manufactured by Audio Authority, called the 9A60 VGA to Component Video Transcoder. This is a sweet little device–and the sweetest aspect of it is the price. It does exactly what you want it to do, no more and no less: it converts an RGB signal from a VGA connector to YPbPr component video via the standard three RCA jacks, with no video scaling or dimensional transformations.

Incidentally, the best price I’ve found on the 9A60 is at mythic.tv for 105 dollars.

Setting Video Card Resolution

Regardless of the type of connection you establish, you’ll have to configure your HTPC’s video card to provide the exact resolution and format expected by your projector or television. An HDMI connection will make setting the computer’s resolution a bit easier, but it has to be done nonetheless.

The resolution expected by your viewing device of choice often depends on how you’ve configured it. For televisions, the resolution is more or less hard-coded into the device, but projectors can usually be configured to display at different resolutions. Both types of devices can automatically handle a small range of resolutions regardless of configuration; how well they render those resolutions depends on the quality of the display circuitry in the device.

You also need to understand the correlation between TV industry display resolutions and computer display resolutions. In the television industry, resolutions are defined in terms of the number of scan lines and whether the signal is progressive or interlaced. Thus, you’ll often hear of TVs that can display 480p, 720p, 1080i, or 1080p. The numeric value indicates the number of horizontal scan lines displayed, and the letter is either “i” for interlaced or “p” for progressive (non-interlaced).

The number of scan lines directly corresponds to the vertical resolution on your HTPC. Thus, to generate a 1080p signal to your HD television, you’re going to have to configure your HTPC’s video card to display a resolution of (‘something’ x 1080). The ‘something’ is determined by back-calculating the horizontal resolution from the aspect ratio of your television.

The aspect ratio of US televisions (I mean NTSC/ATSC, rather than the European PAL standard) is either 4:3 or 16:9. So, on a wide-screen (16:9) US television, you would use the following formula to determine the horizontal resolution of your video card:

   Hr = Vr * 16 / 9

where ‘Hr’ stands for Horizontal resolution, and ‘Vr’ stands for Vertical resolution. Thus, the proper horizontal resolution for a 1080p display is 1080 * 16 / 9, or 1920.
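If you like to see the arithmetic spelled out, here’s a trivial, purely illustrative Java snippet that back-calculates the horizontal resolution for a few common cases:

   public class Resolution {
       // Back-calculate the horizontal resolution (Hr) from the vertical
       // resolution (Vr) and the display's aspect ratio (e.g., 16:9 or 4:3).
       static int horizontal(int vertical, int aspectW, int aspectH) {
           return vertical * aspectW / aspectH;
       }

       public static void main(String[] args) {
           System.out.println("1080p, 16:9: " + horizontal(1080, 16, 9) + " x 1080"); // 1920 x 1080
           System.out.println(" 720p, 16:9: " + horizontal(720, 16, 9) + " x 720");   // 1280 x 720
           System.out.println(" 480p,  4:3: " + horizontal(480, 4, 3) + " x 480");    //  640 x 480
       }
   }
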

The biggest problem you’re likely to run into in this process is actually finding a conforming resolution in the list handed to you by the Microsoft Windows video card configuration dialogs. Windows queries the monitor to find out what it can handle and then transforms that information into a set of resolutions compatible with your monitor. But when your “monitor” is effectively the Audio Authority 9A60, the query yields very little useful information, and Windows responds by giving you a minimal set of choices.

Fortunately, there is free software available in the form of an application called PowerStrip, by a Taiwanese company called Entech, which allows you to manually choose your horizontal and vertical resolution, as well as color depth and horizontal and vertical sync rates. These values must be chosen carefully, or you can damage your display device, though most TVs and projectors are much more resilient than computer monitors. PowerStrip is pretty self-explanatory, and guides abound on the Internet, so I’ll forgo the details here.

Extender Technology

Before I’m ready to connect my HTPC directly to my home theater system, I’m going to use it for several months to convert my video collection, so I’ll want to use “Windows Media Center Extender Technology” and my home network to display my Media Center console on my home theater projector remotely.

Microsoft sells an extender device designed explicitly for this purpose; however, I already have an XBox 360 that I got for my family for Christmas last year, and the 360 has built-in WMC extender functionality. You activate it through the 360 console’s Media page; look for the option to connect to a Windows Media Center PC.

When you select this “connect” option, the 360 displays an 8-digit random number on the screen and tells you to use this number at the appropriate point when setting up the extender on your HTPC. In the Media Center setup menu of your HTPC, you’ll find an option for setting up an extender. During the setup wizard, an entry dialog will be displayed where you’ll be asked to enter this two-part, 8-digit value. Once you’ve entered it, the rest is trivial, and your 360 will display your Windows Media Center console.

You can use your game controller to move about the WMC menus and select various options. There’s a cheat-sheet provided by Microsoft that will help you understand how the controller buttons map to Media Center functionality.

Watching Digital Television

TV cards–even digital TV cards–are so inexpensive these days that it would be a shame to forgo that expense. I dare say a TV card costs less than the memory in your HTPC. With that TV card, you get the ability to watch digital TV in full definition.

Of course, if you’d rather spend 400 dollars on a stand-alone digital broadcast tuner, feel free. I much prefer the 80 dollar Hauppauge WinTV solution. In fact, it’s so cheap that it’s worth considering purchasing two such tuners. Windows Media Center will recognize and use both units. You can then use one of them to record from one channel while you’re watching another channel on the other. You can even enjoy picture-in-picture using both tuners–want to watch a movie while not missing the big game (or vice-versa)? Hmmmm. 400 dollars for a single stand-alone tuner, or 160 dollars for a couple of tuner cards? Not a tough choice.

In fact, the Hauppauge WinTV 1800 card is actually two tuners in one: an analog tuner and a digital tuner. So even one card will let you do some of the fancy stuff–like recording a digital program while watching an analog program, each on a different channel. But if you’re hooked on the realistic quality of digital TV, you’ll probably almost forget that you have an analog tuner in your TV card. I didn’t even bother connecting the analog tuner to the antenna wire.

This does bring up an interesting side issue for me. The TV card has four antenna inputs on the back: TV, DTV, FM, and QAM. Okay, I can understand separate inputs for FM radio and satellite or cable input, but was it really necessary to separate the inputs for analog and digital TV? I can get a really nice analog picture by connecting my digital antenna to my analog antenna input. I suppose it’s conceivable that your area has digital and analog broadcast towers set up in different locations, which would require aiming the TV and DTV antennas in different directions… What I’d really like to see is some sort of software switch or hardware jumper that bridges the DTV input to the TV input, so I don’t have to use an input cable splitter to connect my DTV antenna wire to both inputs.

Time-Shifting and the Media Center Programming Guide

One of the nicest features of Windows Media Center is the ability to easily record a program for later viewing. I can sit down on Saturday afternoon and check out the schedule for the coming week. In a few minutes, and with just a few clicks, I can schedule the recording of broadcast movies or shows I want to watch. If you always schedule tuner B to record, then you know you can always watch tuner A without worrying about bumping into a recording session.

Remember when you had to get out the manual for your VCR whenever you wanted to record a program on TV? It was a fairly complex and time-consuming process to configure your VCR to record a program at a later time. If you just wanted to record something right now, it wasn’t too bad; you could almost figure it out without the manual (just press the red record button and the play button at the same time–often this combination was highlighted on the remote for exactly this purpose). But if you wanted to record a program scheduled to start when you were not home, now that was a different matter. How’d I do that last time? Dang! Where’s that VCR manual?!

Windows Media Center comes with an online programming guide for the United States. If you live in the US, you simply supply your zip code when you configure your tuner card (and, of course, agree to the online content use license), and Media Center will configure your TV viewing experience with an online programming guide. Recording any program is as simple as finding the upcoming program in the guide and pressing the record button at the bottom of the screen. This isn’t perfect–it never has been. Last-minute programming changes will always be a source of heartburn, but the media providers understand this and try more than ever to ensure that the content is accurate.

You even have the option of recording an entire season of a program with one button. Do you like a particular television program but forget to record it half the time, so there are gaps in your understanding of the plot? No problem. Let Media Center do the remembering for you. Just tell it to record the entire season, and then forget about it. If you become busy with life and stuff (who doesn’t?) and are unable to watch your program for a few weeks, don’t worry–when the load lightens up again, the missed episodes will be there for you to watch.

You can also watch a program while it’s being recorded. Now, why would you want to do that?! Okay, you might wish to save the program and watch it again later, but most people who record while watching do so for one reason: they want to skip commercials on the fly. Just start recording a program you want to watch, then go away for 15 minutes or so. When you come back, you’ll have enough recorded material that when a commercial starts, you can fast-forward over it and back to the show. By the time you get to the next commercial, enough material has been recorded to allow you to skip that one as well. This is a common feature on 200 dollar Personal Video Recorder (PVR) devices; PVR functionality comes built into a Media Center PC with a tuner card.

DVD and Blu-ray Movies

My system includes a Blu-ray disc player, so I can watch my Blu-ray discs on my HTPC. At the time of this writing, Blu-ray players (not recorders) can be had for between 100 and 150 dollars, and they’re coming down in price fast. It won’t be long before, like internal DVD players, you can pick one up for about 20 bucks.

But Blu-ray players can play DVDs and CDs as well. This shouldn’t be too surprising, since DVD players can also play CDs. Thus, for about 100 bucks, I have an HTPC-based replacement for my stand-alone Blu-ray/DVD player–a device that would normally cost 350 dollars or more in today’s market. (It’s becoming easier and easier to justify the 1000 dollar cost of my HTPC!)

For complete instructions on how to create a playable archive of your purchased movie content, see my previous article, entitled, “Creating a Disk-Based Movie Archive”.

NetFlix Streaming Media

One of my favorite services (and a primary motivation for me to build an HTPC in the first place) is NetFlix streaming video. I’ve had a NetFlix subscription for a couple of years now. Last year when NetFlix came out with free streaming video for current subscribers, I thought Christmas had come early for me.

If you’ve got your Media Center PC connected directly to your television, then you have several options. The most obvious is to open a browser window from your HTPC desktop and navigate to netflix.com. In this case, you’re accessing NetFlix streaming video just as you always have (if, that is, you’ve used NetFlix streaming video in the past), except that now you’re watching it on your television instead of your computer monitor.

If you’re using a WMC extender, or if you simply want to configure WMC as your only desktop (by making it non-minimizable), then you have fewer options. Since you can’t access your browser application from the extender console, you’ll have to find a way to access NetFlix streaming video through WMC itself. There are two approaches you can take.

One of these is a free software project hosted on Google Code, called VMCNetFlix. VMCNetFlix is basically a Windows Media Center application that makes the NetFlix web API available through the Windows Media Center interface. To use VMCNetFlix, you must be running Windows Vista Media Center (thus the ‘VMC’ portion of the name), which comes packaged with the Windows Vista Home Premium, Business, and Ultimate editions. Assuming you are, simply go to the VMCNetFlix project download page and download the package appropriate for your hardware architecture (32- or 64-bit).

Install the package by double-clicking on it, and then bring up Windows Media Center. Navigate up or down to the “Online Media” menu, and select the “Program Library” option. If you’ve seen this screen before, then you should see a new item in the list with the familiar NetFlix motif. Select the NetFlix program, and the VMCNetFlix application will help you configure your Media Center to access your NetFlix account.

I like this option because it’s easy to use, fully functional, and best of all–free. In sharp contrast, the other option for accessing NetFlix streaming content through WMC is just plain stupid. I’m sorry, but I just don’t understand how normally intelligent people can conceive of what they deem to be viable business models that fly in the face of reality. If you’re using an XBox 360 as a Media Center extender, then you can also access NetFlix streaming content through your XBox Live! account, if you have one. This would be fine, except that you have to have a Gold account, which means you’ll be charged a monthly fee to use a service that you already pay a monthly fee to use. Now, of course, if you’re an avid gamer, and you already pay for an XBox Live! Gold account, then this requirement probably won’t bother you (much).

The sad part about the XBox Live! method is that it’s the only officially sanctioned way of accessing NetFlix streaming content from the Media Center console. To be sure, there’s nothing illegal about using VMCNetFlix. It’s just that it’s a bit of a hack, which means that anytime NetFlix decides to change their web API, VMCNetFlix will have to be updated to accommodate the modifications.

Additional Features

You’re not limited to using your HTPC as a media center; you can also play games and run other PC-based software. Unless you’ve configured WMC to be non-minimizable, you can simply click the usual minimize button and you’re looking at the Windows Vista desktop on your TV. This means that any software you have installed is available from your TV. There are also a few Windows games that can be played through the “Online Media/Program Library” menu.

There is online content available through Windows Media Center. Most of it is subscription-based, and you’ll have to decide whether it’s worth the price. And finally, you can do most of the usual things with Media Center that you can do with media on your PC, including playing music. If you have a really nice stereo, this could be a great way to use your PC-based music collection.

Since PC’s are naturally extensible, having a PC as a component in your home theater makes your home theater extensible. Whenever a new PC-based media experience becomes available, you’ll be ready to take full advantage of it.