Java HTTPS Key Setup

In my last article, I showed how to remove all security from a secure web (https) transaction by installing dummy trust manager and host name verifier objects into an SSLSocketFactory. Today, I’m going to take it to the next level by demonstrating how to create a private key and self-signed certificate in a JKS keystore, export the public key certificate to a client-side trust store, and configure our client to use that trust store to verify our server.

I’ll be using a Tomcat 6 server – mainly because it’s almost trivial to install and configure for SSL traffic. On my OpenSuSE 11.1 64-bit GNU/Linux machine, I’ve installed the tomcat6 package, and then I’ve gone into YaST’s service management panel and enabled the tomcat6 service.

Self-Signed Certificates

Let’s start by generating the proper keys. First, we’ll generate the server’s self-signed certificate, with embedded public/private key pair. For the common name (CN) field, I’ll make sure to enter the fully qualified domain name of my server (jmc-linux-64.provo.novell.com). This will ensure that my Java client code will properly compare the hostname used in my URL with the server’s certificate. Using any other value here would cause my client to fail with an invalid hostname exception. Here’s the Java keytool command line to create a self-signed certificate in a JKS key store called jmc-linux-64.keystore.jks:

$ keytool -genkey -alias jmc-linux-64 \
 -keyalg RSA -keystore jmc-linux-64.keystore.jks
Enter keystore password: password
Re-enter new password: password
What is your first and last name?
  [Unknown]:  jmc-linux-64.provo.novell.com
What is the name of your organizational unit?
  [Unknown]:  Engineering
What is the name of your organization?
  [Unknown]:  Novell, Inc.
What is the name of your City or Locality?
  [Unknown]:  Provo
What is the name of your State or Province?
  [Unknown]:  Utah
What is the two-letter country code for this unit?
  [Unknown]:  US
Is CN=jmc-linux-64.provo.novell.com, OU=Engineering,
 O="Novell, Inc.", L=Provo, ST=Utah, C=US correct?
  [no]:  yes

Enter key password for <jmc-linux-64>
         (RETURN if same as keystore password): <CR>

$

To view the new certificate and key pair, just use the -list option, along with the -v (verbose) option, like this:

$ keytool -list -v -keystore jmc-linux-64.keystore.jks
Enter keystore password: password

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: jmc-linux-64
Creation date: Jun 19, 2009
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3


*******************************************
*******************************************

$

Server Configuration

Okay, now we have a server certificate with public and private key pair in a JKS keystore. The next step is to configure Tomcat to listen for https requests. The default configuration for Tomcat is to run a bare http server on port 8080. To enable the https server on port 8443, I edited the /usr/share/tomcat6/conf/server.xml file and uncommented the default entry for SSL that was already in place as a comment:

...
<!-- Define a SSL HTTP/1.1 Connector on port 8443
     This connector uses the JSSE configuration, when using APR, the
     connector should be using the OpenSSL style configuration
     described in the APR documentation -->

<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           keystoreFile="/jmc-linux-64.keystore.jks" 
           keystorePass="password"
           clientAuth="false" sslProtocol="TLS" />
...

Make sure the sslProtocol attribute is set to at least “SSLv3” – I just used “TLS” here. The important attributes, however, are keystoreFile and keystorePass, which I’ve set to the keystore we created in the previous step and its password. You can put the keystore file anywhere on your file system that’s accessible to the user running the tomcat service. On my system, the tomcat6 service is executed as root by default, so I just copied my keystore to the root of my file system.

After editing the file, I had to restart the tomcat6 service:

# rctomcat6 restart
Shutting down Tomcat (/usr/share/tomcat6)	... done
Starting Tomcat (/usr/share/tomcat6)		... done
#
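
Before moving on, it’s worth a quick smoke test of the connector. The curl utility works well for this; the -k flag skips certificate verification, which is fine for the moment since we haven’t yet configured any client to trust our self-signed certificate:

$ curl -k https://jmc-linux-64.provo.novell.com:8443/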

Client-Side Trust Store

So much for server configuration. Now we have to configure the client’s trust store with the server’s self-signed certificate. This is done by exporting the certificate and public key from the server’s keystore, and then importing it into a client trust store. A trust store is just a JKS keystore that contains only trusted certificates:

$ keytool -export -alias jmc-linux-64 \
 -keystore jmc-linux-64.keystore.jks -rfc \
 -file jmc-linux-64.cert
Enter keystore password: password
Certificate stored in file <jmc-linux-64.cert>
$
$ cat jmc-linux-64.cert
-----BEGIN CERTIFICATE-----
MIICezCCAeSgAwIBAgIESjwAbzANBgkqhkiG9w0BAQUFADCBgTELMAkGA1UEBhMCVVMxDTALBgNV
BAgTBFV0YWgxDjAMBgNVBAcTBVByb3ZvMRUwEwYDVQQKEwxOb3ZlbGwsIEluYy4xFDASBgNVBAsT
C0VuZ2luZWVyaW5nMSYwJAYDVQQDEx1qbWMtbGludXgtNjQucHJvdm8ubm92ZWxsLmNvbTAeFw0w
OTA2MTkyMTE3MzVaFw0wOTA5MTcyMTE3MzVaMIGBMQswCQYDVQQGEwJVUzENMAsGA1UECBMEVXRh
aDEOMAwGA1UEBxMFUHJvdm8xFTATBgNVBAoTDE5vdmVsbCwgSW5jLjEUMBIGA1UECxMLRW5naW5l
ZXJpbmcxJjAkBgNVBAMTHWptYy1saW51eC02NC5wcm92by5ub3ZlbGwuY29tMIGfMA0GCSqGSIb3
DQEBAQUAA4GNADCBiQKBgQCOwb5migz+c1mmZS5eEhBQ5wsYFuSmp6bAL7LlHARQxhZg62FEVBFL
Y2klPoCGfUoXUFegnhCV5I37M0dAQtNLSHiEPj0NjAvWuzagevE6Tq+0zXEBw9fKoVV/ypEsAxEX
6JQ+a1WU2W/vdL+x0lEbRpRCk9t6yhxLw16M/VD/GwIDAQABMA0GCSqGSIb3DQEBBQUAA4GBAC5E
kniYYFxwZUqg9ToFlF0LKjGZfttkXJoTMfOFwA6OXrO6cKdzS04srxhoDzkD8V4RskPxttt0pbKr
iAoGKT/9P4hpDb0Ej4urek9TxlrnoC8g0rOYaDfE57SMStDrCg2ha4IuJFtJOh1aMcl4pm/sk+JW
7U/cWyW9B7InJinZ
-----END CERTIFICATE-----

$
$ keytool -import -alias jmc-linux-64 \
 -file jmc-linux-64.cert \
 -keystore jmc-linux-64.truststore.jks
Enter keystore password: trustpass
Re-enter new password: trustpass
Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3
Trust this certificate? [no]:  yes
Certificate was added to keystore

$

We now have a file called jmc-linux-64.truststore.jks, which contains only the server’s public key and certificate. You can show the contents of the truststore JKS file with the -list option, like this:

$ keytool -list -v -keystore jmc-linux-64.truststore.jks
Enter keystore password: trustpass

Keystore type: JKS
Keystore provider: SUN

Your keystore contains 1 entry

Alias name: jmc-linux-64
Creation date: Jun 19, 2009
Entry type: trustedCertEntry

Owner: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Issuer: CN=jmc-linux-64.provo.novell.com, OU=Engineering, O="Novell, Inc.", L=Provo, ST=Utah, C=US
Serial number: 4a3c006f
Valid from: Fri Jun 19 15:17:35 MDT 2009 until: Thu Sep 17 15:17:35 MDT 2009
Certificate fingerprints:
         MD5:  E5:37:9F:85:C9:76:60:FC:DC:01:81:AD:5F:FC:F4:9A
         SHA1: FD:E3:47:6C:AE:9B:75:3B:9C:6C:05:7B:C9:A4:B4:E6:07:F6:B5:FB
         Signature algorithm name: SHA1withRSA
         Version: 3


*******************************************
*******************************************

$

A Simple HTTPS Client

We have several options for how to consume this trust store in client code. I’ll take the easy route today, but watch for another article that describes more complex mechanisms that provide more flexibility. Today, I’ll just show you how to set system properties on our client application. This client is very simple. All it does is connect to the server and display the contents of the web page as raw HTML on the console:

import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;

public class HttpsClient
{
  private final String serverUrl;

  public HttpsClient(String serverUrl) 
  {
    this.serverUrl = serverUrl;
  }

  public void connect() 
  {
    try
    {
      HttpURLConnection conn = null;
      URL url = new URL(serverUrl);

      try
      {
        conn = (HttpURLConnection)url.openConnection();
        conn.setRequestMethod("GET");
        conn.setDoOutput(false);
        conn.setDoInput(true);
        conn.connect();
        InputStream is = conn.getInputStream();

        int bytes;
        byte[] buffer = new byte[512];
        while ((bytes = is.read(buffer, 0, 512)) != -1)
          System.out.write(buffer, 0, bytes);
        is.close();
      }
      catch (IOException e) { e.printStackTrace(); }
    }
    catch(MalformedURLException e) { e.printStackTrace(); }
  }

  public static void main(String[] args) 
  {
    HttpsClient client = new HttpsClient(
        "https://jmc-linux-64.provo.novell.com:8443");
    client.connect();
  }
}

Executing this client as is, without an assigned trust store, will cause it to use the default trust store ($JAVA_HOME/lib/security/cacerts), which doesn’t contain our server’s public certificate, so the connection will fail with an exception:

javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...
Caused by: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...
Caused by: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target
... stack trace ...  

Configuring the Client Trust Store

The quick way to get this client to work properly is to assign our client’s trust store (containing the server’s public key and self-signed certificate) to JSSE system properties in this manner:

$ java -Djavax.net.ssl.trustStore=jmc-linux-64.truststore.jks \
  -Djavax.net.ssl.trustStorePassword=trustpass HttpsClient

If you get the path to the trust store file wrong, you’ll get a different cryptic exception:

javax.net.ssl.SSLException: 
java.lang.RuntimeException: Unexpected error: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...
Caused by: java.lang.RuntimeException: Unexpected error: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...
Caused by: 
java.security.InvalidAlgorithmParameterException: 
the trustAnchors parameter must be non-empty
... stack trace ...

And if you get the password wrong, you’ll get yet another (somewhat less) cryptic exception:

java.net.SocketException: 
java.security.NoSuchAlgorithmException: 
Error constructing implementation 
(algorithm: Default, provider: SunJSSE, 
class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl)
... stack trace ...
Caused by: java.security.NoSuchAlgorithmException: 
Error constructing implementation 
(algorithm: Default, provider: SunJSSE, 
class: com.sun.net.ssl.internal.ssl.DefaultSSLContextImpl)
... stack trace ...
Caused by: java.io.IOException: 
Keystore was tampered with, or password was incorrect
... stack trace ...
Caused by: java.security.UnrecoverableKeyException: 
Password verification failed
... stack trace ...

In these examples, my client is using my server’s fully qualified domain name in the URL, which is the common name we used when we created the self-signed certificate:

  ...
  public static void main(String[] args) 
  {
    HttpsClient client = new HttpsClient(
        "https://jmc-linux-64.provo.novell.com:8443");
    client.connect();
  }
}

This is the only name that will work with this trust store. In my next article I’ll show you how to generate certificates that work with aliases like the IP address. I’ll also show you how to add a hostname verifier to allow our client code to be a bit more intelligent about which aliases it rejects out of hand.

Java HTTPS Client Issues

I’ve written in the last several months about creating a client for a RESTful web-based auditing service. In that client, I had to implement client-side authentication, which is much more involved (or it should be anyway) than writing a simple secure web client that accesses content from secure public web servers.

Such a simple secure web client has only a little more functionality than a non-secure (http) web client. Essentially, it must perform additional checks when connecting to the secure web server to ensure that the server certificate is valid and trustworthy. This involves basically two steps:

  1. Verifying the server’s certificate chain.
  2. Verifying the server’s host name against that certificate.
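
In JSSE terms, step 1 is handled by the TrustManager wired into the connection’s SSLSocketFactory, and step 2 by the HostnameVerifier attached to the connection. Here’s a minimal sketch of where the two hooks attach (the factory and verifier variables are illustrative placeholders):

  HttpsURLConnection sconn = (HttpsURLConnection)url.openConnection();
  sconn.setSSLSocketFactory(mySocketFactory);    // step 1: certificate chain
  sconn.setHostnameVerifier(myHostnameVerifier); // step 2: host name check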

Verifying the Certificate

The purpose of step 1 is to ensure that the service you’re attempting to use is not trying to pull something shady on you. That is, the owner of the service was willing to put his or her name on the line with a Certificate Authority (CA) like Entrust or VeriSign. When you purchase a CA-signed certificate, you have to follow various procedures that document who you are, and why you’re setting up the service. But don’t worry – the CA doesn’t get to determine whether your service is worthy of public consumption, only that you are who you say you are. The CA verifies actual existence, names, addresses, phone numbers, etc. If there’s any question about the service later, a consumer may contact that CA to find out the details of the service provider. This is dangerous for scam artists because they can be tracked and subsequently prosecuted. Thus, they don’t want to deal with Certificate Authorities if they don’t have to.

The client’s verification process (step 1) usually involves following the certificates in the certificate chain presented by the server back to a CA-signed certificate installed in its own trust store. A normal Sun JRE comes with a standard JKS trust store in $JAVA_HOME/lib/security/cacerts. This file contains several dozen world-renowned public Certificate Authority certificates. By default, the SSLContext object associated with a normal HttpsURLConnection object refers to a TrustManager object that will compare the certificates in the certificate chain presented by servers with the list of public CA certificates in the cacerts trust store file.

If you have an older cacerts file that doesn’t happen to contain a certificate for a site to which you are connecting, or if you’ve set up the site yourself using a self-signed certificate, then you’ll encounter an exception when you attempt to connect:

javax.net.ssl.SSLHandshakeException: 
sun.security.validator.ValidatorException: 
PKIX path building failed: 
sun.security.provider.certpath.SunCertPathBuilderException: 
unable to find valid certification path to requested target

Ouch! Does this mean you can’t connect to your test server while writing your client code? Can you only test against public servers? No, of course not, but unfortunately, it does mean a bit more work for you. You have basically two options. The first is to install your test server’s self-signed certificate into your default trust store. I first learned about this technique from a blog entry by Andreas Sterbenz in October of 2006. Nice article, Andreas. Thanks!
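
If you go that route, the import itself is a one-liner with keytool. This sketch assumes the stock cacerts password of “changeit”, and reuses the certificate file and alias from the earlier examples:

$ keytool -import -trustcacerts -alias jmc-linux-64 \
   -file jmc-linux-64.cert \
   -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit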

However, there is another way. You can write some temporary code in the form of your own sort of dumb trust manager that accepts any certificate from any server. Of course, you don’t want to ship your client with this code in place, but for testing and debugging, it’s nice not to have to mess up your default trust store with temporary certs that you may not want in there all the time. Writing DumbX509TrustManager is surprisingly simple. As with most well-considered Java interfaces, the number of required methods for the X509TrustManager interface is very small:

import java.security.cert.CertificateException;
import java.security.cert.X509Certificate;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.X509TrustManager;

public class MyHttpsClient
{
  private Boolean isSecure;
  private String serverURL;
  private SSLSocketFactory sslSocketFactory;

  private class DumbX509TrustManager 
      implements X509TrustManager 
  {
    // accept any client certificate chain
    public void checkClientTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    // accept any server certificate chain
    public void checkServerTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public X509Certificate[] getAcceptedIssuers() 
        { return new X509Certificate[] {}; }
  }
  ...

To make use of this trust manager, simply obtain an SSLSocketFactory object in your client’s constructor that you can configure with your dumb trust manager. Then, as you establish https connections to your test server, install your preconfigured SSLSocketFactory object, like this:

  ...
  private SSLSocketFactory getSocketFactory()
  {
    SSLSocketFactory socketFactory = null;
    try
    {
      SSLContext context = SSLContext.getInstance("SSLv3");
      context.init(null, new X509TrustManager[] 
          { new DumbX509TrustManager() }, null);
      socketFactory = context.getSocketFactory();
    }
    catch (Exception e) { e.printStackTrace(); }
    return socketFactory;
  }

  public MyHttpsClient(String serverURL)
  {
    this.serverURL = serverURL;
    if (isSecure = serverURL.startsWith("https:")) // assignment intended
      sslSocketFactory = getSocketFactory();
  }

  public void process() 
  {
    try
    {
      HttpURLConnection conn = null;
      URL url = new URL(serverURL);
      try
      {
        conn = (HttpURLConnection)url.openConnection();
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
              sconn.setSSLSocketFactory(sslSocketFactory);
        }
        conn.setRequestMethod("GET");
        conn.setDoOutput(false);
        conn.setDoInput(true);
        conn.connect();
        ...

That’s it. Warning: Don’t ship your client with DumbX509TrustManager in place. You don’t need it for public secure web servers anyway. If you know your client will only ever be used against properly configured public secure web servers, then you can rely on the default trust manager in the default socket factory associated with HttpsURLConnection.

If you think your client may be expected to work with non-public secure web servers with self-signed, or company-signed certificates, then you have more work to do. Here, you have two options. You can write some code similar to that found in browsers, wherein the client displays a dialog box upon connection, asking if you would like to connect to this “unknown” server just this once, or forever (whereupon the client imports the server’s certificate into the default trust store). Or you can allow your customer to pre-configure the default trust store with certificates from non-public servers that he or she knows about in advance. But these are topics for another article.

Verifying the Server

Returning to the original two-step process, the purpose of step 2 (host name verification) is to ensure that the certificate you received from the service to which you connected was not stolen by a scammer.

When a CA-signed certificate is generated, the information sent to the Certificate Authority by the would-be service provider includes the fully qualified domain name of the server for which the new cert is intended. This FQDN is embedded in a field of the certificate, which the client uses to ensure that the server is really the owner of the certificate that it’s presenting.

As I mentioned in a previous article, Java’s keytool utility provides no way to add subject alternative names to the self-signed certs it generates, so the default host name verification code will always fail with keytool-generated certs whenever the name in the URL doesn’t match the certificate’s CN. Again, a simple dummy class comes to the rescue in the form of the DumbHostnameVerifier class. Just implement the HostnameVerifier interface, which has one required method, verify. Have it return true all the time, and you won’t see any more Java exceptions like this:

HTTPS hostname wrong:  
should be <jmc-linux-64.provo.novell.com>

Here’s an example:

  ...
  private class DumbHostnameVerifier 
      implements HostnameVerifier
  {
    public boolean verify(String arg0, SSLSession arg1) 
        { return true; }
  }
  ...
  public void process() 
  {
        ...
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
          sconn.setSSLSocketFactory(sslSocketFactory);
          sconn.setHostnameVerifier(new DumbHostnameVerifier());
        }
        ...

Scoping the Changes

A final decision you should make is the proper scope for setting the dummy trust manager and hostname verifier objects. The JSSE framework is extremely flexible. You can set these on a per-request basis, or as the class defaults, so that whenever a new HttpsURLConnection object is created, your objects are automatically assigned to it internally. For instance, you can use the following code to set up class default values:

public class MyHttpsClient
{
  private static class DumbX509TrustManager 
      implements X509TrustManager 
  {
    public void checkClientTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public void checkServerTrusted(X509Certificate[] chain, 
        String authType) throws CertificateException {}

    public X509Certificate[] getAcceptedIssuers() 
        { return new X509Certificate[] {}; }
  }

  private static class DumbHostnameVerifier 
      implements HostnameVerifier
  {
    public boolean verify(String arg0, SSLSession arg1) 
        { return true; }
  }

  private static SSLSocketFactory getSocketFactory()
  {
    SSLSocketFactory socketFactory = null;
    try
    {
      SSLContext context = SSLContext.getInstance("SSLv3");
      context.init(null, new X509TrustManager[] 
          { new DumbX509TrustManager() }, null);
      socketFactory = context.getSocketFactory();
    }
    catch (Exception e) { e.printStackTrace(); }
    return socketFactory;
  }

  static
  {
    HttpsURLConnection.setDefaultHostnameVerifier(
        new DumbHostnameVerifier());
    HttpsURLConnection.setDefaultSSLSocketFactory(
        getSocketFactory());
  }

  private String serverURL;
  
  public MyHttpsClient(String serverURL)
  {
    this.serverURL = serverURL;
  }
  ...

You can now remove the isSecure check in the process routine, because new instances of HttpsURLConnection will automatically be assigned objects of your new trust manager and hostname verifier classes – the default objects you stored with the MyHttpsClient class’s static initializer.
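
With those defaults in place, the connection code collapses to something like this sketch (a GET request is assumed, and error handling is elided):

  public void process() 
  {
    try
    {
      HttpURLConnection conn = 
          (HttpURLConnection)new URL(serverURL).openConnection();
      conn.setRequestMethod("GET");
      conn.connect();
      // ... read and display the response as before ...
    }
    catch (Exception e) { e.printStackTrace(); }
  }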

With that, you’re set to connect to any https server. Here’s a little insight for you: The difficult part – the real work – of writing https clients involves writing real code for these classes. I’ll write a future article that provides details on these processes. Again, I remind you: Don’t accidentally ship your clients with DumbHostnameVerifier in place! (Unless, of course, you want to. After all, it’s your code…)

Effective Communications and Apache Ant

Nothing bothers me more, when searching for the solution to a software problem that I’ve encountered, than to find someone with similar problems asking questions on various message boards, only to have response after response sound something like this:

“What’s changed in your code?”

“Look for the problem in your code.”

“You’ve messed something up in your code.”

“Your environment is hosed.”

Recently, I had a problem building a very large Java project with Apache Ant. I kept getting “Error starting modern compiler” about a third of the way into the build (5-10 minutes in). Not getting any help from the core project team, I did what I usually do – I turned to Google and immediately found a few people with the same problem. Unfortunately, most of them were using Ant in conjunction with Eclipse; I was getting the same error message from the command line.

I can usually judge by now the age of a problem by the number and quality of responses I find in a Google search. This was clearly a fairly recent issue. One link I found was in reference to the Apache build itself, wherein a bug was filed against Ant for this very issue (or one very nearly like it).

But it irks me to no end when people feel the need to respond to queries on issues like this, without having anything useful to say. If you haven’t encountered the problem before, or you don’t have any particular insight into what’s causing it, then please don’t respond with silly accusations about how the original poster’s environment must be hosed, or how his code must be at fault.

The fact is, software tools have bugs. It’s that simple. Even a tool as revered in the Java world as Ant will have defects. The solutions I eventually found included either changing my project build script in such a way as to cause Ant to fork a new process when it gets to a particularly large compile, or increasing the maximum heap size of the JVM running Ant. I chose to set ANT_OPTS=-Xmx512m in my environment before executing Ant (mainly because I disagree in principle with project-specific solutions to general tool problems).
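
For reference, the environment-variable fix looks like this from a shell (the build target name is illustrative):

$ export ANT_OPTS=-Xmx512m
$ ant compile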

As it turns out, the root cause of this problem seems to be related more to the fact that Ant can’t spawn a child process, rather than that the wrong compiler was referred to by some environment variable. Java 1.6 has more problems than Java 1.5, probably because 1.6 is larger and more resource intensive than 1.5. The inaccuracy of the message (“modern compiler”??) leads us to believe that the problem is with the compiler itself. But that’s an entirely different problem in effective communications…

Java Secure HTTP Keys, Part II

In my last article, I described the process of configuring client-side key and trust stores within a Java web client application. To keep it simple, I purposely used the built-in functionality of HttpsURLConnection to read certain System properties to obtain references to these credential stores, along with their passwords.

However, for an embedded client–as would be the case with library code–you’d not want to rely on any System properties, because these belong to your user and her application, not to your library. But, manually configuring the key and trust stores for a client-side https connection is a little more involved.

In this article, I’d like to show you how it’s done, and I’d like to begin by suggesting some required reading for a solid understanding of the way it all works. I’m referring to the Java Secure Socket Extensions (JSSE) Reference Guide. Since JSSE was introduced in Java 1.4, and hasn’t really changed much since then, this document is officially up to date–even in Java SE 6.

Getting Started…

Note that the process for setting up the key and trust stores hasn’t changed, so I’ll simply refer you to my previous article for this information.

To summarize, the goal here is to associate our key and trust stores with our client-side connections without specifying them in System properties. And it’s amazing the amount of extra work we have to go through in order to accomplish this seemingly simple task.

The first thing we’ll do is remove the calls to System.setProperty in our AuditRestClient constructor. We still need the values we wrote to those properties, so we’ll just convert them to constants in the AuditRestClient class. At some later point, these should undoubtedly be converted to properties that we read from our own configuration file, but for now, these constants will do:

  public class AuditRestClient
  {
    // URL components and content type (should be configured variables)
    private static final String HTTP = "http";
    private static final String HTTPS = "https";
    private static final String HOSTNAME = "10.0.0.1";
    private static final Integer PORT = 9015;
    // content type used by send() below (value borrowed from the earlier client)
    private static final String CTYPE = "application/audit1+json";

    // secure channel key material stores (should be configured)
    private static final String keystore = "/tmp/keystore.jks";
    private static final String truststore = "/tmp/truststore.jks";
    private static final String keypass = "changeit";
    private static final String trustpass = "changeit";

    // secure channel variables
    private Boolean isSecure = true;
    private SSLSocketFactory sslSocketFactory = null;

    public AuditRestClient()
    {
      setupSocketFactory();
    }
    ...

Building Your Own Socket Factory

The new version of the AuditRestClient constructor calls a private method called setupSocketFactory, which configures an SSLSocketFactory object for use later when we configure our HttpsURLConnection object. Here’s the code:

    ...
    private void setupSocketFactory()
    {
      try
      {
        String protocol = "TLS";
        String type = "JKS";
        String algorithm = KeyManagerFactory.getDefaultAlgorithm();
        String trustAlgorithm =
            TrustManagerFactory.getDefaultAlgorithm();

        // create and initialize an SSLContext object
        SSLContext sslContext = SSLContext.getInstance(protocol);
        sslContext.init(getKeyManagers(type, algorithm),
            getTrustManagers(type, trustAlgorithm),
            new SecureRandom());

        // obtain the SSLSocketFactory from the SSLContext
        sslSocketFactory = sslContext.getSocketFactory();
      }
      catch (Exception e) { e.printStackTrace(); }
    }
    ...

This private helper method calls two other private methods, getKeyManagers and getTrustManagers, to configure the key and trust stores. Each of these two routines in turn calls a routine named getStore to load the key or trust store from its file. Again, here’s the code for all three of these methods:

    ...
    private KeyStore getStore(String type,
        String filename, String pwd) throws Exception
    {
      KeyStore ks = KeyStore.getInstance(type);
      InputStream istream = null;

      try
      {
        File ksfile = new File(filename);
        istream = new FileInputStream(ksfile);
        ks.load(istream, pwd != null? pwd.toCharArray(): null);
      }
      finally { if (istream != null) istream.close(); }

      return ks;
    }

    private KeyManager[] getKeyManagers(String type,
        String algorithm) throws Exception
    {
      KeyStore ks = getStore(type, keystore, keypass);
      KeyManagerFactory kmf =
          KeyManagerFactory.getInstance(algorithm);

      kmf.init(ks, keypass.toCharArray());

      return kmf.getKeyManagers();
    }

    private TrustManager[] getTrustManagers(String type,
        String algorithm) throws Exception
    {
      KeyStore ts = getStore(type, truststore, trustpass);
      TrustManagerFactory tmf =
          TrustManagerFactory.getInstance(algorithm);

      tmf.init(ts);

      return tmf.getTrustManagers();
    }
    ...

The getStore method calls KeyStore.getInstance to obtain an instance of the key store associated with the specified type–in this case, “JKS”. It should be noted that if you wish to specify your own provider, you may do so by calling the other version of KeyStore.getInstance, which accepts a string provider name, as well.
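
That variant is a one-line change; note that this form can additionally throw NoSuchProviderException:

      KeyStore ks = KeyStore.getInstance("JKS", "SUN");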

Using Your New Socket Factory

Now that you have your socket factory built (whew!), it’s time to look at how it’s used by the rest of the AuditRestClient code. Here’s the context for the use of the new object:

    public void send(JSONObject event)
    {
      byte[] bytes = event.toString().getBytes();
      HttpURLConnection conn = null;

      try
      {
        URL url = new URL(isSecure? HTTPS: HTTP,
            HOSTNAME, PORT, "/audit/log/test");
        conn = (HttpURLConnection)url.openConnection();
        if (isSecure)
        {
          HttpsURLConnection sconn = (HttpsURLConnection)conn;
          sconn.setSSLSocketFactory(sslSocketFactory);
        }
        conn.setRequestMethod("POST");
        conn.addRequestProperty("Content-Length", "" + bytes.length);
        conn.addRequestProperty("Content-Type", CTYPE);
        conn.addRequestProperty("Connection", "Keep-Alive");
        conn.setDoOutput(true);
        conn.setDoInput(true);
        conn.connect();
        ...

Now, this code is completely independent of application-owned System properties. Additionally, it’s portable between secure and non-secure HTTP channels. This protocol portability requires a type cast of the connection from HttpURLConnection to HttpsURLConnection in just one place–the isSecure block in the example above.

You may have also noticed that I converted the previous version of send to use the other popular form of the URL constructor. This form accepts the constituent parts of the URL as separate parameters, rather than as a single string. It’s a bit more efficient under the covers, as the constructor doesn’t need to parse these components from the URL string. It made more sense on my end as well, since I’m parameterizing several of these parts now anyway. Attributes like HOSTNAME and PORT will eventually be read from a library configuration file.
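
Side by side, the two constructor forms look like this; the component form skips the string-parsing step (both throw MalformedURLException):

  URL parsed = new URL("https://10.0.0.1:9015/audit/log/test");
  URL direct = new URL("https", "10.0.0.1", 9015, "/audit/log/test");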

Java Secure HTTP Client Key Management

My current project at Novell involves the development of a RESTful web service for submission of audit records from security applications. The server is a Jersey servlet within an embedded Tomcat 6 container.

One of the primary reasons for using a RESTful web service for this purpose is to alleviate the need to design and build a heavy-weight audit record submission client library. Such client libraries need to be orthogonally portable across both hardware platforms and languages in order to be useful to Novell’s customers. Just maintaining the portability of this client library in one language is difficult enough, without adding multiple languages to the matrix.

Regardless of our motivation, we still felt the need to provide a quality reference implementation of a typical audit client library to our customers. They may incorporate as much or as little of this code as they wish, but a good reference implementation is worth a thousand pages of documentation. (Don’t get me wrong, however–this is no excuse for not writing good documentation! The combination of quality concise documentation and a good reference implementation is really the best solution.)

The idea here is simple: Our customers won’t have to deal with difficulties that we stumble upon and then subsequently provide solutions for. Additionally, it’s just plain foolish to provide a server component for which you’ve never written a client. It’s like publishing a library API that you’ve never written to. You don’t know if the API will even work the way you originally intended until you’ve at least tried it out.

Since we’re already using Java in the server, we’ve decided that our initial client reference implementation should also be written in Java. Yesterday found my code throwing one exception after another while simply trying to establish the TLS connection to the server from the client. All of these problems ultimately came down to my lack of understanding of the Java key store and trust store concepts.

You see, the establishment of a TLS connection from within a Java client application depends heavily on the proper configuration of a client-side trust store. If you’re using mutual authentication, as we are, then you also need to properly configure a client-side key store for the client’s private key. The level at which we are consuming Java network interfaces also demands that we specify these stores in system properties. More on this later…

Using Curl as an HTTPS Client

We based our initial assumptions about how the Java client needed to be configured on our use of the curl command line utility in order to test the web service. The curl command line looks something like this:

  curl -k --cert client.cer --cert-type DER --key client-key.pem
    --key-type PEM --header "Content-Type: application/audit+json"
    -X POST --data @test-event.json https://10.0.0.1:9015/audit/log/test

The important aspects of this command-line include the use of the --cert, --cert-type, --key and --key-type parameters, as well as the fact that we specified a protocol scheme of “https” in the URL.

With one exception, the remaining options are related to which HTTP method to use (-X), what data to send (--data), and which message properties to send (--header). The exception is the -k option, and therein lay most of our problems with this Java client.

The curl man-page indicates that the -k/--insecure option allows the TLS handshake to succeed without verifying the server certificate against the client’s CA (Certificate Authority) trust store. The reason this option was added is that several releases of the curl package shipped with a terribly outdated trust store, and people were getting tired of having to manually add certificates to their trust stores every time they hit a newer site.
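
For what it’s worth, the safer alternative to -k is to hand curl the trusted certificate explicitly via its --cacert option. A sketch, assuming the server certificate has been converted to PEM form in server.pem (note that curl, like Java, will still insist that the host name in the URL match the certificate):

  curl --cacert server.pem --cert client.cer --cert-type DER
    --key client-key.pem --key-type PEM
    --header "Content-Type: application/audit+json"
    -X POST --data @test-event.json https://10.0.0.1:9015/audit/log/test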

Doing it in Java

But this really isn’t the safe way to access any secure public web service. Without server certificate verification, your client can’t really know that it’s not communicating with a server that just says it’s the right server. (“Trust me!”)

During the TLS handshake, the server’s certificate is passed to the client. The client should then verify the subject name of the certificate. But verify it against what? Well, let’s consider–what information does the client have access to, outside of the certificate itself? It has the fully qualified URL that it used to contact the server, which usually contains the DNS host name. And indeed, a client is supposed to compare the CN (Common Name) portion of the subject DN (Distinguished Name) in the server certificate to the DNS host name in the URL, according to section 3.1 “Server Identity” of RFC 2818 “HTTP over TLS”.

Java’s HttpsURLConnection class strictly enforces the advice given in RFC 2818 regarding peer verification. You can override these constraints, but you have to basically write your own version of HttpsURLConnection, or sub-class it and override the methods that verify peer identity.

Creating Java Key and Trust Stores

Before even attempting a client connection to our server, we had to create three key stores:

  1. A server key store.
  2. A client key store.
  3. A client trust store.

The server key store contains the server’s self-signed certificate and private key. This store is used by the server to sign messages and to return credentials to the client.

The client key store contains the client’s self-signed certificate and private key. This store is used by the client for the same purpose–to send client credentials to the server during the TLS mutual authentication handshake. It’s also used to sign client-side messages for the server during the TLS handshake. (Note that once authentication is established, encryption happens using a secret or symmetric key encryption algorithm, rather than public/private or asymmetric key encryption. Symmetric key encryption is a LOT faster.)

The client trust store contains the server’s self-signed certificate. Client-side trust stores normally contain a set of CA root certificates. These root certificates come from various widely-known certificate vendors, such as Entrust and VeriSign. Presumably, almost all publicly visible servers have purchased a certificate from one of these CAs. Thus, when your web browser connects to such a public server over a secure HTTP connection, the server’s certificate can be verified as having come from one of these well-known certificate vendors.

I first generated my server key store, but this keystore contains the server’s private key also. I didn’t want the private key in my client’s trust store, so I extracted the certificate into a stand-alone certificate file. Then I imported that server certificate into a trust store. Finally, I generated the client key store:

  $ keytool -genkey -alias server -keyalg RSA \
  > -storepass changeit -keystore server-keystore.jks
  What is your first and last name?
    [Unknown]:  audit-server
  What is the name of your organizational unit?
    [Unknown]:  Eng
  What is the name of your organization?
    [Unknown]:  Novell
  What is the name of your City or Locality?
    [Unknown]:  Provo
  What is the name of your State or Province?
    [Unknown]:  Utah
  What is the two-letter country code for this unit?
    [Unknown]:  US
  Is CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US correct?
    [no]:  yes

  Enter key password for <server>
          (RETURN if same as keystore password):  
  $
  $ keytool -exportcert -keystore server-keystore.jks \
  > -file server.der -alias server -storepass changeit
  Certificate stored in file <server.der>
  $
  $ keytool -importcert -trustcacerts -alias server \
  > -keystore server-truststore.jks -storepass changeit \
  > -file server.der
  Owner: CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Issuer: CN=audit-server, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Serial number: 491cad67
  Valid from: Thu Nov 13 15:42:47 MST 2008 until: Wed Feb 11 15:42:47 MST 2009
  Certificate fingerprints:
           MD5:  EE:FA:EE:78:A8:42:2B:F2:3A:04:50:37:D3:94:B3:C0
           SHA1: 4E:BA:9B:2F:FC:84:10:5A:2E:62:D2:5B:B3:70:70:B5:2F:03:E1:CD
           Signature algorithm name: SHA1withRSA
           Version: 3
  Trust this certificate? [no]:  yes
  Certificate was added to keystore
  $
  $ keytool -genkey -alias client -keyalg RSA -storepass changeit \
  > -keystore client-keystore.jks
  What is your first and last name?
    [Unknown]:  audit-client
  What is the name of your organizational unit?
    [Unknown]:  Eng
  What is the name of your organization?
    [Unknown]:  Novell
  What is the name of your City or Locality?
    [Unknown]:  Provo
  What is the name of your State or Province?
    [Unknown]:  Utah
  What is the two-letter country code for this unit?
    [Unknown]:  US
  Is CN=audit-client, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US correct?
    [no]:  yes

  Enter key password for <client>
          (RETURN if same as keystore password):  
  $
  $ ls -1
  client-keystore.jks
  server.der
  server-keystore.jks
  server-truststore.jks
  $

Telling the Client About Keys

There are various ways of telling the client about its key and trust stores. One method involves setting system properties on the command line. This is commonly used because it avoids the need to enter absolute paths directly into the source code, or to manage separate configuration files.

  $ java -Djavax.net.ssl.keyStore=/tmp/keystore.jks ...

Another method is to set the same system properties inside the code itself, like this:

  public class AuditRestClient
  {
    public AuditRestClient() 
    {
      System.setProperty("javax.net.ssl.keyStore", 
          "/tmp/keystore.jks");
      System.setProperty("javax.net.ssl.keyStorePassword", 
          "changeit");
      System.setProperty("javax.net.ssl.trustStore", 
          "/tmp/truststore.jks");
      System.setProperty("javax.net.ssl.trustStorePassword", 
          "changeit");
    }
    ...

I chose the latter, as I’ll eventually extract the strings into property files loaded as needed by the client code. I don’t really care for the fact that Java makes me specify these stores in system properties. This is especially a problem for our embedded client code, because our customers may have other uses for these system properties in the applications in which they will embed our code. Here’s the rest of the simple client code:

    ...
    public void send(JSONObject event) 
    {
      byte[] bytes = event.toString().getBytes();
      HttpURLConnection conn = null; 
		
      try
      {
        // establish connection parameters
        URL url = new URL("https://10.0.0.1:9015/audit/log/test");
        conn = (HttpURLConnection)url.openConnection();
        conn.setRequestMethod("POST");
        conn.addRequestProperty("Content-Length", "" + bytes.length);
        conn.addRequestProperty("Content-Type", "application/audit1+json");
        conn.setDoOutput(true);
        conn.setDoInput(true);
        conn.connect();

        // send POST data
        OutputStream out = conn.getOutputStream();
        out.write(bytes);
        out.flush();
        out.close(); 		

        // get response code and data
        System.out.println(conn.getResponseCode());
        BufferedReader read = new BufferedReader(new InputStreamReader(conn.getInputStream()));
        String line = null;
        while((line = read.readLine()) != null)
          System.out.println(line);
      }
      catch(MalformedURLException e) { e.printStackTrace(); }
      catch(ProtocolException e) { e.printStackTrace(); }
      catch(IOException e) { e.printStackTrace(); }
      finally { if (conn != null) conn.disconnect(); }
    }
  }

Getting it Wrong…

I also have a static test “main” function so I can send some content. But when I tried to execute this test, I got an exception indicating that the server certificate didn’t match the host name. I was using a hard-coded IP address (10.0.0.1), but my certificate contained the name “audit-server”.

It turns out that the HttpsURLConnection class uses an algorithm to determine if the server that sent the certificate really belongs to the server on the other end of the connection. If the URL contains an IP address, then it attempts to locate a matching IP address in the “Subject Alternative Names” portion of the server certificate.

Did you notice a keytool prompt to enter alternate names when you generated your server certificate? I didn’t–and it turns out there isn’t one. The Java keytool utility doesn’t provide a way to enter subject alternative names–a standardized extension of the X.509 certificate format. To enter an alternative name containing the requisite IP address, you’d have to generate your certificate using the openssl utility, or some other more functional certificate generation tool, and then find a way to import these foreign certificates into a Java key store.
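
For the curious, here’s a rough openssl sketch of what that might look like. I haven’t verified this exact incantation, and the config-file approach shown is just one way to feed openssl a subjectAltName extension:

  $ cat san.cnf
  [ req ]
  distinguished_name = dn
  [ dn ]
  [ v3_req ]
  subjectAltName = IP:10.0.0.1
  $ openssl req -x509 -newkey rsa:1024 -nodes \
  > -keyout server-key.pem -out server-cert.pem \
  > -subj "/CN=audit-server" -config san.cnf -extensions v3_req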

…And then Doing it Right

On the other hand, if the URL contains a DNS name, then HttpsURLConnection attempts to match the CN portion of the Subject DN with the DNS name. This means that your server certificates have to contain the DNS name of the server as the CN portion of the subject. Returning to keytool, I regenerated my server certificate and stores using the following commands:

  $ keytool -genkey -alias server -keyalg RSA \
  > -storepass changeit -keystore server-keystore.jks
  What is your first and last name?
    [Unknown]:  jmc-test.provo.novell.com

  ... (the rest is the same) ...
  
  $ keytool -exportcert -keystore server-keystore.jks \
  > -file server.der -alias server -storepass changeit
  Certificate stored in file <server.der>
  $
  $ keytool -importcert -trustcacerts -alias server \
  > -keystore server-truststore.jks -storepass changeit \
  > -file server.der
  Owner: CN=jmc-test.provo.novell.com, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Issuer: CN=jmc-test.provo.novell.com, OU=Eng, O=Novell, L=Provo, ST=Utah, C=US
  Serial number: 491cad67
  Valid from: Thu Nov 13 15:42:47 MST 2008 until: Wed Feb 11 15:42:47 MST 2009
  Certificate fingerprints:
           MD5:  EE:FA:EE:78:A8:42:2B:F2:3A:04:50:37:D3:94:B3:C0
           SHA1: 4E:BA:9B:2F:FC:84:10:5A:2E:62:D2:5B:B3:70:70:B5:2F:03:E1:CD
           Signature algorithm name: SHA1withRSA
           Version: 3
  Trust this certificate? [no]:  yes
  Certificate was added to keystore
  $

Of course, I also had to change the way I was specifying my URL in the client code:

  ...
  URL url = new URL("https://jmc-test.provo.novell.com:9015/audit/log/test");
  conn = (HttpURLConnection)url.openConnection();
  ...

At this point, I was finally able to connect to my server and send the message. Is this reasonable? Probably not for my case. Both my client and server are within a corporate firewall, and controlled by the same IT staff, so to force this sort of gyration is really unreasonable. Can we do anything about it? Well, one thing that you can do is to provide a custom host name verifier like this:

  ...
  URL url = new URL("https://jmc-sentinel.dnsdhcp.provo.novell.com:9015/audit/AuditLog/test");
  conn = (HttpsURLConnection)url.openConnection();
  conn.setHostnameVerifier(new HostnameVerifier()
  { 
    public boolean verify(String hostname, SSLSession session) 
        { return true; }
  });
  conn.setRequestMethod("POST");
  ...

When you do this, however, you should be aware that you give up the right to treat the connection as anything but an https connection. Note that we had to change the type of “conn” to HttpsURLConnection from its original type of HttpURLConnection. This means, sadly, that this code will now only work with secure http connections. I chose to use the DNS name in my URL, although a perfectly viable option would also have been the creation of a certificate containing the IP address as an “Alternate Name”.

Is This Okay?!

Ultimately, our client code will probably be embedded in some fairly robust and feature-rich security applications. Given this fact, we can’t really expect our customers to be okay with our sample code taking over the system properties for key and trust store management. No, we’ll have to rework this code to do the same sort of thing that Tomcat does–manage our own lower-level SSL connections, and thereby import certificates and CA chains ourselves. In a future article, I’ll show you how to do this.

PUT or POST: The REST of the Story

Web service designers have tried for some time now to correlate CRUD (Create, Retrieve, Update and Delete) semantics with the Representational State Transfer (REST) verbs defined by the HTTP specification–GET, PUT, POST, DELETE, HEAD, etc.

So often, developers will try to correlate these two concepts–CRUD and REST–using a one-to-one mapping of verbs from the two spaces, like this:

  • Create = PUT
  • Retrieve = GET
  • Update = POST
  • Delete = DELETE

“How to Create a REST Protocol” is an example of a very well-written article about REST, but one which makes this faulty assumption. (In fairness to the author, he may well have merely “simplified REST for the masses”, as his article doesn’t specifically state that this mapping is the ONLY valid mapping. And indeed, he makes the statement that the reader should not assume the mapping indicates a direct mapping to SQL operations.)

In the article “I don’t get PUT versus POST”, the author clearly understands the semantic differences between PUT and POST, but fails to understand the benefits (derived from the HTTP protocol) of the proper REST semantics. Ultimately, he promotes the simplified CRUD-to-REST mapping laid out above.

But such a trivial mapping is inaccurate at best. The semantics of these two verb spaces have no direct correlation. This is not to say you can’t create a CRUD client that can talk to a REST service. Rather, you need to add some additional higher-level logic to the mapping to complete the transformation from one space to the other.

While Retrieve really does map to an HTTP GET request, and likewise Delete really does map to an HTTP DELETE operation, the same cannot be said of Create and PUT or Update and POST. In some cases, Create means PUT, but in other cases it means POST. Likewise, in some cases Update means POST, while in others it means PUT.

The crux of the issue comes down to a concept known as idempotency. An operation is idempotent if a sequence of two or more of the same operation results in the same resource state as would a single instance of that operation. According to the HTTP 1.1 specification, GET, HEAD, PUT and DELETE are idempotent, while POST is not. That is, a sequence of multiple attempts to PUT data to a URL will result in the same resource state as a single attempt to PUT data to that URL, but the same cannot be said of a POST request. This is why a browser always pops up a warning dialog when you back up over a POSTed form. “Are you sure you want to purchase that item again!?” (Would that the warning were always this clear!)

After that discussion, a more realistic mapping would seem to be:

  • Create = PUT iff you are sending the full content of the specified resource (URL).
  • Create = POST if you are sending a command to the server to create a subordinate of the specified resource, using some server-side algorithm.
  • Retrieve = GET.
  • Update = PUT iff you are updating the full content of the specified resource.
  • Update = POST if you are requesting the server to update one or more subordinates of the specified resource.
  • Delete = DELETE.

NOTE: “iff” means “if and only if”.

Analysis

Create can be implemented using an HTTP PUT, if (and only if) the payload of the request contains the full content of the exactly specified URL. For instance, assume a client issues the following Create OR Update request:

   PUT /GrafPak/Pictures/1000.jpg HTTP/1.1
   ...

   <full content of 1000.jpg ... >

This command is idempotent because sending the same command once or five times in a row will have exactly the same effect; namely that the payload of the request will end up becoming the full content of the resource specified by the URL, “/GrafPak/Pictures/1000.jpg”.

On the other hand, the following request is NOT idempotent because the results of sending it either once or several times are different:

   POST /GrafPak/Pictures HTTP/1.1
   ...

   <?xml version="1.0" encoding="UTF-8"?> 
   <GrafPak operation="add" type="jpeg">
     <![CDATA[ <full content of some picture ... > ]]>
   </GrafPak>

Specifically, sending this command twice will result in two “new” pictures being added to the Pictures container on the server. According to the HTTP 1.1 specification, the server’s response should be something like “201 Created” with Location headers for each response containing the resource (URL) references to the newly created resources–something like “/GrafPak/Pictures/1001.jpg” and “/GrafPak/Pictures/1002.jpg”.

The value of the Location response header allows the client application to directly address these new picture objects on the server in subsequent operations. In fact, the client application could even use PUT to directly update these new pictures in an idempotent fashion.
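
For example, having received “/GrafPak/Pictures/1001.jpg” in a Location header, the client could later replace that picture’s content idempotently:

   PUT /GrafPak/Pictures/1001.jpg HTTP/1.1
   ...

   <new full content of 1001.jpg ... >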

What it comes down to is that PUT must create or update a specified resource by sending the full content of that same resource. POST operations, on the other hand, tell a web service exactly how to modify the contents of a resource that may be considered a container of other resources. POST operations may or may not result in additional directly accessible resources.

Getting the Most Out of Your HTPC

Well, now that you have this wonderful Home Theater PC (HTPC), what do you do with it? In this article, I’ll provide some insight on how to configure your HTPC for maximum enjoyment. You paid a lot for this fancy piece of hardware. In fact, I paid as much for my HTPC as I did for my Denon 7.1 channel digital decoder and amplifier, and about half as much as I paid for my 720p projector. There’d better be a good reason for spending that much. Let’s explore…

Watching on the Big Screen

The first thing to consider is how you’ve connected your HTPC. Mine is not connected physically to my video system. That is, I have my HTPC sitting in another room of my home that I currently use as a den or study. It allows me the peace and quiet that I need to continue the on-going process of converting my movie collection into streamable media that I can serve from my HTPC.

I recognize that some people may want to connect their HTPC directly into their home theater system. Eventually, I’ll do this myself. But I have a problem, and you may also. Unless you have the latest video and audio equipment in your home theater, you’re probably facing a physical connection issue that can’t simply be ignored. By this I mean that your slightly older video display (projector or TV) probably accepts, at most, analog component video (YPbPr) inputs. But the connection on the back of your HTPC (if you purchased the Gigabyte motherboard I mentioned in that first HTPC article) has only VGA, DVI-D and HDMI outputs.

Direct Digital Connection

If you’re lucky enough to have newer home theater equipment–a 1080p projector with HDMI inputs, and a newer 5.1 channel digital decoder/amp with HDMI video switching capability–then you’re really set. Just plug the HDMI output from the onboard ATI video circuitry into one of the free HDMI inputs on your amplifier, and start watching!

The nice thing about an end-to-end digital connection is that you’ll be able to watch your Blu-ray content in the highest resolution available to your display device. Such a connection between your HTPC and your TV will provide exactly the same home theater experience you’d get from your 350 dollar Blu-ray player.

Direct Analog Connection

As mentioned, to get the most out of a direct connection, I really need to use the digital (either DVI-D or HDMI) outputs. But, short of upgrading my amp and projector, I have little recourse here. My somewhat older Denon amplifier has component video switching capability for up to three inputs switched to one output, which is really nice for older devices and monitors. But unfortunately, none of this is compatible with modern digital signals. Until I come into some spare cash, I’m going to have to settle for an end-to-end analog signal between my HTPC and my projector.

To make matters worse, VGA has nothing whatsoever to do with component video, except that they’re both analog signals. Unfortunately, the two operate in different color spaces–VGA carries RGB, while component video carries YPbPr–so there’s no ad-hoc wiring harness you can solder together that will allow you to generate component video from the VGA signal at the back of your HTPC.

The solution to this problem is an inexpensive video transcoder. There are various devices available for reasonable prices that will actively convert from one color space to the other. Some of them have more capabilities–and are thus more expensive–than others. I’ve mentioned these devices briefly in my first HTPC article, but I’ll cover them in more detail here.

The device I’ve found that seems to be the best compromise between price and performance is one manufactured by Audio Authority called the 9A60 VGA to Component Video Transcoder. This is a sweet little device–the sweetest aspect of which is the price. In the first place, it does exactly what you want it to do, no more and no less. It converts an RGB signal from a VGA connector to YPbPr Component Video via the standard 3 RCA jacks, with no video scaling or dimensional transformations.

Incidentally, the best price I’ve found on the 9A60 is at mythic.tv for 105 dollars.

Setting Video Card Resolution

Regardless of the type of connection you establish, you’ll have to configure your HTPC’s video card to provide the exact resolution and format expected by your projector or television. An HDMI connection will make setting the computer’s resolution a bit easier, but it has to be done nonetheless.

The resolution expected by your viewing device of choice often depends on how you’ve configured it. For televisions, the resolution is somewhat hard-coded into the device, but projectors can usually be configured to display in different resolutions. Both types of device can automatically handle a slightly varying range of resolutions, regardless of configuration; how well they render those resolutions depends on the quality of the display circuitry in the device.

You also need to understand the correlation between TV industry display resolutions and computer display resolutions. In the television industry, resolutions are defined in terms of number of scan lines and whether the signal is progressive or interlaced. Thus, you’ll often hear of TV’s that can display 480p, 720p, 1080i, or 1080p. The numeric values indicate the number of horizontal scan lines displayed, and the letter is either “i” for interlaced, or “p” for progressive (non-interlaced).

The number of scan lines directly corresponds to the vertical resolution on your HTPC. Thus, to generate a 1080p signal to your HD television, you’re going to have to configure your HTPC’s video card to display a resolution of (‘something’ x 1080). The ‘something’ is determined by back-calculating the horizontal resolution from the aspect ratio of your television.

The aspect ratio of US televisions (I mean NTSC/ATSC, rather than the European PAL standard) is either 4:3 or 16:9. So, on a wide-screen (16:9) US television, you would use the following formula to determine the horizontal resolution of your video card:

   Hr = Vr * 16 / 9

where ‘Hr’ stands for Horizontal resolution, and ‘Vr’ stands for Vertical resolution. Thus, the proper horizontal resolution for a 1080p display is 1080 * 16 / 9, or 1920.
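
If you want to sanity-check the arithmetic, here’s a throwaway Java sketch of the back-calculation for the common scan-line counts. Nothing here is specific to any video card; it’s just the formula above:

   public class Resolutions {
       public static void main(String[] args) {
           int[] scanLines = { 480, 720, 1080 };
           for (int vr : scanLines) {
               int wide = vr * 16 / 9;  // 16:9 wide-screen
               int std  = vr * 4 / 3;   // 4:3 standard
               // Note: 480 doesn't divide evenly by 9, so integer math
               // gives 853 here where you'll usually see 854 quoted.
               System.out.println(vr + "p: " + wide + " x " + vr
                       + " (16:9), " + std + " x " + vr + " (4:3)");
           }
       }
   }

Running it prints the familiar numbers: 1280 x 720 and 1920 x 1080 for wide-screen, and 640 x 480 for standard 4:3.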

The biggest problem you’re likely to run into in this process is actually finding a conforming resolution in the list handed to you by the Microsoft Windows video card configuration dialogs. Windows queries the monitor to find out what it can handle, and then transforms this information into a set of resolutions compatible with your monitor. But when your monitor is effectively the Audio Authority 9A60, Windows finds it quite uninformative about what it can handle, and responds by giving you a minimal set of choices.

Fortunately, there is free software available in the form of an application called PowerStrip, by a Taiwanese company called EnTech, which allows you to manually choose your horizontal and vertical resolution, as well as color depth and horizontal and vertical sync rates. These values must be chosen carefully, or you can damage your display device, but most TV’s and projectors are much more resilient than computer monitors. PowerStrip is pretty self-explanatory, and guides abound on the Internet, so I’ll forgo the details here.

Extender Technology

Before I’m ready to connect my HTPC directly to my home theater system, I’m going to use it for several months to convert my video collection, so I’ll want to use “Windows Media Center Extender Technology” and my home network to display my Media Center console on my home theater projector remotely.

Microsoft sells an extender device designed explicitly for this purpose; however, I already have an XBox 360 that I got for my family for Christmas last year, and the 360 has built-in WMC extender functionality. You activate it through the 360 console’s Media page. Look for the option to connect to a Windows Media Center PC.

When you select this “connect” option, the 360 displays an 8-digit random number on the screen, and tells you to use this number at the appropriate location when setting up the extender on your HTPC. In the Media Center setup menu of your HTPC, you’ll find an option for setting up an extender. During this setup wizard, an entry dialog will be displayed, where you’ll be asked to enter this 2-part, 8-digit value. Once you’ve entered this value, the rest is trivial, and your 360 will display your Windows Media Center console.

You can use your game controller to move about the WMC menus and select various options. There’s a cheat-sheet provided by Microsoft that will help you understand how the controller buttons map to Media Center functionality.

Watching Digital Television

TV cards–even digital TV cards–are so inexpensive these days that it would be a shame to do without one. I dare say a TV card costs less than the memory in your HTPC. With that TV card, you get the ability to watch digital TV in full definition.

Of course, if you’d rather spend 400 dollars on a stand-alone digital broadcast tuner, feel free. I much prefer the 80 dollar Hauppauge WinTV solution. In fact, it’s so cheap, it’s worth considering purchasing two such tuners. Windows Media Center will recognize and use both units. You can then use one of them to record from one channel, while you’re watching another channel on the other. You can even enjoy picture-in-picture features using both tuners–want to watch a movie while not missing the big game (or vice-versa)? Hmmmm. 400 dollars for a single stand-alone tuner, or 160 dollars for a couple of tuner cards? Not a tough choice.

In fact, the Hauppauge WinTV 1800 card is actually two tuners in one: an analog tuner and a digital tuner. So even one card will let you do some of the fancy stuff–like recording a digital program while watching an analog program, each on different channels. But if you’re hooked on the realistic quality of digital TV, then you’ll probably almost forget that you have an analog tuner in your TV card. I didn’t even bother connecting the analog tuner to the antenna wire.

This does bring up an interesting side issue for me. The TV card has four antenna inputs on the back: TV, DTV, FM, and QAM. Okay, I can understand separate inputs for FM radio and satellite or cable input, but was it really necessary to separate the inputs for analog and digital TV? I can get a really nice analog picture by connecting my digital antenna to my analog antenna input. I suppose it’s conceivable that your area has digital and analog broadcast towers set up in different locations, which would require aiming the TV and DTV antennas in different directions… What I’d really like to see is some sort of software switch or hardware jumper that bridges the DTV input to the TV input, so I don’t have to use an input cable splitter to connect my DTV antenna wire to both inputs.

Time-Shifting and the Media Center Programming Guide

One of the nicest features of Windows Media Center is the ability to easily record a program for later viewing. I can sit down on Saturday afternoon, and check out the schedule for the coming week. In a few minutes, and with just a few clicks, I can schedule the recording of broadcast movies or shows I want to watch. If you always schedule tuner B to record, then you know you can always watch tuner A without worrying about bumping into a recording session.

Remember when you had to get out the manual for your VCR whenever you wanted to record a program on TV? It was a fairly complex and time-consuming process to configure your VCR to record a program at a later time. If you just wanted to record something now, it wasn’t too bad. You could almost figure it out without the manual (just press the red record button and the play button at the same time–often this combination was highlighted on the remote for this purpose). But if you wanted to record a program that was scheduled to start when you were not home, now that was a different matter. How’d I do that last time? Dang! Where’s that VCR manual?!

Windows Media Center comes with an online programming guide for the United States. If you live in the US, you simply supply your zip code when you configure your tuner card (and, of course, agree to the online content use license), and Media Center will configure your TV viewing experience around the guide. Recording any program is as simple as finding the upcoming program in the guide, and pressing the record button at the bottom of the screen. This isn’t perfect–it never has been. Last-minute programming changes will always be sources of heartburn, but the media providers understand this, and try more than ever to ensure that the content is accurate.

You even have the option of recording an entire season of a program with one button. Do you like a particular television program, but you forget to record it half the time, so there are gaps in your understanding of the program plot? No problem. Let Media Center do the remembering for you. Just tell it to record the entire season, and then forget it. If you become busy with life and stuff (who doesn’t?), and are unable to watch your program for a few weeks, don’t worry–when the load lightens up again, the missed episodes will be there for you to watch.

You can also watch a program while it’s being recorded. Now, why would you want to do that?! Okay, you can perhaps understand that you might wish to save this program and watch it again later. But most people who record while watching do so for one reason: They want to skip commercials on the fly. Just start recording a program you want to watch, then go away for 15 minutes or so. When you come back, you’ll have enough recorded material so that when a commercial starts, you can fast forward over it to the show again. By the time you get to the next commercial, enough material has been recorded to allow you to skip this one as well. This is a common feature on 200 dollar Personal Video Recorder (PVR) devices. PVR functionality comes built-in to a Media Center PC with a tuner card.

DVD and Blu-ray Movies

My system includes a Blu-ray disc player, so I can watch my Blu-ray discs on my HTPC. At the time of this writing, Blu-ray players (not recorders) can be had for between 100 and 150 dollars, and they’re coming down in price fast. It won’t be long before, like internal DVD players, you can pick one up for about 20 bucks.

But Blu-ray players can play DVD’s and CD’s as well. This shouldn’t be too surprising, since DVD players can also play CD’s. Thus, for about 100 bucks, I have an HTPC-based replacement for my stand-alone Blu-ray/DVD player. Such a device would normally cost 350 dollars or more in today’s market. (It’s becoming easier and easier to justify the 1000 dollar cost of my HTPC!)

For complete instructions on how to create a playable archive of your purchased movie content, see my previous article, entitled, “Creating a Disk-Based Movie Archive”.

NetFlix Streaming Media

One of my favorite services (and a primary motivation for me to build an HTPC in the first place) is NetFlix streaming video. I’ve had a NetFlix subscription for a couple of years now. Last year when NetFlix came out with free streaming video for current subscribers, I thought Christmas had come early for me.

If you’ve got your Media Center PC connected directly to your television, then you have several options. The most obvious option is to open a browser window from your HTPC desktop, and navigate to netflix.com. In this case, you’re accessing NetFlix streaming video just as you always have (if, that is, you’ve used NetFlix streaming video in the past), except that now you’re watching it on your television, instead of your computer monitor.

If you’re using a WMC extender, or if you simply want to configure WMC as your only desktop (by making it non-minimizable), then you have fewer options. Since you can’t access your browser application from the extender console, you’ll have to find a way to access NetFlix streaming video through WMC itself. There are two approaches you can take.

One of these is a free software project hosted on Google Code, called VMCNetFlix. VMCNetFlix is basically a Windows Media Center application that makes the NetFlix Web API available through the Windows Media Center interface. To use VMCNetFlix, you must be using Windows Vista Media Center (thus, the ‘VMC’ portion of the name), which comes packaged with Windows Vista Home Premium, Business or Ultimate editions. Assuming you are, simply go to the VMCNetFlix project download page, and download the package appropriate for your hardware architecture (32- or 64-bit).

Install the package by double-clicking on it, and then bring up Windows Media Center. Navigate up or down to the “Online Media” menu, and select the “Program Library” option. If you’ve seen this screen before, then you should see a new item in the list with the familiar NetFlix motif. Select the NetFlix program, and the VMCNetFlix application will help you configure your Media Center to access your NetFlix account.

I like this option because it’s easy to use, fully functional, and best of all–free. In sharp contrast, the other option for accessing NetFlix streaming content through WMC is just plain stupid. I’m sorry, but I just don’t understand how normally intelligent people can conceive of what they deem to be viable business models that fly in the face of reality. If you’re using an XBox 360 as a Media Center extender, then you can also access NetFlix streaming content through your XBox Live! account, if you have one. This would be fine, except that you have to have a Gold account, which means you’ll be charged a monthly fee to use a service that you already pay a monthly fee to use. Now, of course, if you’re an avid gamer, and you already pay for an XBox Live! Gold account, then this requirement probably won’t bother you (much).

The sad part about the XBox Live! method is that it’s the only officially sanctioned way of accessing NetFlix streaming content from the Media Center console. To be sure, there’s nothing illegal about using VMCNetFlix. It’s just that it’s a bit of a hack, which means that anytime NetFlix decides to change their web API, VMCNetFlix will have to be updated to accommodate the modifications.

Additional Features

You can also play games and run other PC-based software. You’re not limited to using your HTPC as a media center. Unless you’ve configured WMC to be non-minimizable, you can simply click the usual minimize button and you’re looking at the Windows Vista desktop on your TV. This means that any software you have installed is available from your TV. There are a few Windows games that can be played through the “Online Media/Program Library” menu.

There is on-line content available through Windows Media Center. Most of this is subscription-based, but you’ll have to decide whether it’s worth the price. And finally, you can do most of the usual things with Media Center that you can do with media on your PC, including playing music. If you have a really nice stereo, this could be a great way to use your PC-based music collection.

Since PC’s are naturally extensible, having a PC as a component in your home theater makes your home theater extensible. Whenever a new PC-based media experience becomes available, you’ll be ready to take full advantage of it.

Creating a Disk-Based Movie Archive

Okay, I’ll admit it–I’m sorta lazy. I’d like to be able to put all of my DVD’s and Blu-ray movies into a jukebox, and then select the one I’d like to watch from the comfort of my couch. Well, that would be nice, but I don’t know of any way to add 300 DVD and Blu-ray players to my computer–nor would I want to. But, hey! My hard drive is more than large enough to store the content of my DVD and Blu-ray collection, if only there were a way to get them off the discs and onto the hard drive.

The Dirty Word – Decrypting

As I’ve pointed out in past articles (look out, here comes the obligatory disclaimer), it is illegal under the Digital Millennium Copyright Act of 1998 to decrypt copyrighted material that has been encrypted for copy-protection purposes.

The Spirit of the Law

That said, there are companies (Fusion Research, for instance) that sell media server hardware and software that have found legal loopholes in this legislation. Here’s what they do. They copy the entire disc to your hard drive, unmodified and fully encrypted. Then, they play or serve the image to your CSS-licensed display device, whether that be simply DVD player software, such as PowerDVD, or a remote extender device. Since the content has never been decrypted, except by CSS-licensed players, no one has broken the law–so they say. Regardless, some companies have been sued over this technique in recent years. However, as far as I know, they’ve won these lawsuits, so we can safely say that their assumptions about the spirit of the DMCA are correct.

But all of this is strictly irrelevant to us. We’re not playing with expensive video archiving and serving hardware. We simply want to play movies from Windows Media Player, without having to put the disc into a drive. Well, let’s see if we can do that, while maintaining the spirit of the Copyright Act, if not the DMCA. First, consider why copyright holders don’t want you to decrypt their content. I can think of several reasons:

  • They want to force you to watch their FBI and Interpol warning notices. Laugh if you will, but this is a really valid reason. They want you to be reminded each time you watch their material that they own a copyright on that material. Each time you’re reminded of this fact, it strengthens your inner resolve not to involve yourself in piracy.
  • They want to force you to watch their corporate logos. Again, this is not really a laughing matter to media developers. It’s marketing in the deepest sense of the word. These logos help them to establish themselves in consumer minds as important players in the media industry. Media developers spend billions of dollars on these campaigns, so think again if you think they don’t work.
  • They want to ensure that you’re only watching material targeted for the geographic region for which it was intended. If you live in the United States, for example, you can probably only watch a region 1 DVD in your DVD player. Content is released in different regions at different times because of the time required to prepare a release for a particular language or set of marketing requirements. To begin profit-taking as quickly as possible on a media venture, copyright holders will release content to a region as soon as it’s been fully mastered–usually beginning in the United States. But they don’t want Asian viewers (for instance) watching movies until they’re actually released in Asia, even if they’ve already been released in the US. The marketing campaigns for each region are different for demographic reasons. If US content were watched in Asia, then fewer sales of the Asian version of that content would occur when the Asian marketing campaign was finally ready, thus reducing the effectiveness of the Asian marketing campaigns.
  • They want it to be more trouble than it’s worth to skip the embedded trailers for other films by the same vendors. While you at least have the option of skipping the trailers, many people have trouble finding the right combination of buttons on the remote to skip directly to the disc menu, so they often just wade through the trailers when the disc is inserted.

Commercial content pirates use sophisticated machinery to make bit-for-bit copies of copyrighted materials. These duplication devices are unaffected by the self-contained copy-protection schemes that modern DVD and Blu-ray content use. Thus, it’s safe to say that content providers are not attempting to stop commercial pirates by using encryption technology–they simply can’t, so they fight these battles using police forces and court systems, instead. In other words, content encryption exists solely to keep the honest people honest. If being forced to watch the FBI and Interpol warnings is not sufficient impetus, then at the very least it will be somewhat more difficult for us to make copies to hand out to our friends and family.

It’s a sad but true fact that most of the encryption efforts put into copyrighted media exist for marketing reasons, and not at all to control commercial piracy. Note also that I wasn’t misusing the word “force” in each of these points. Have you ever tried to skip over the FBI warning with your DVD remote? You can’t do it. You usually get a little universal symbol in the upper right corner of the display that indicates the fast-forward function is not allowed at this time. Additionally, DVD’s are region encoded, and US players are designed to simply not play a DVD encoded for a different region. The rules for software players are a bit different, but not by much. The DVD or Blu-ray drive in your computer can have its region code reset at most 5 times before it locks onto the last setting.

I mention all of this for one reason: Because the techniques I’m about to give you are designed to defeat all of the reasons why content providers want your DVD’s to remain encrypted and intact. In past articles, I’ve used concepts like the US Copyright Act’s Fair Use clause to justify making backup copies of your own media. While we’re not talking about copying rented movies, or about making free copies of your movies for your friends and family, we are talking about defeating the copyright holders’ real purposes for copy protection. The only thing worth storing on your hard drive is the movie itself, so we’ll be stripping out the incidental garbage that you don’t really want to watch anyway, much less have it consume valuable hard drive space.

Getting Down to Business

There are two stages to the process of formatting video and audio content for streaming. The first is decryption, and the second is transformation. The decryption process is (quite ironically) the simpler of the two, mostly because the transformation process involves many more choices on your part.

Ripping DVD and Blu-ray Discs

Now, there are several free software programs available on the Internet which can mostly do what you need. Really, whether or not these programs can do the job depends on when the movie was released. With the advent of the DMCA legislation in 1998, new development on most of the free DVD decryption software was stopped after threatening letters were sent by copyright holders to the software developers. Between that, and the fact that the CSS algorithm is updated occasionally in an attempt to defeat such decryption software, it’s pretty safe to say that many movies released after about 2003 are difficult to decrypt with the remnants of the free software you might find floating around out there.

In the interest of completeness, I’ll name a few of these programs:

  • DeCSS
  • DVD Decrypter
  • DVD Shrink
  • Rip It 4 Me!
  • DVDFab (HD)
  • DVDPro
  • DVD Ripper
  • Magic DVD Ripper
  • Bingo DVD Ripper
  • Xilisoft DVD Ripper
  • ImToo DVD Ripper
  • DVD43

Note that I haven’t provided links to these. They change from time to time, and I won’t risk being accused of “facilitating” the propagation of these illegal programs and thereby losing my ability to use my WordPress account. Simply enter the names into your favorite search engine, and they’ll more than likely be the top links on your results pages.

Keep in mind that many of these titles are evaluation versions of for-profit programs. I’ve listed the ones I know are truly free at the top. The biggest concern I personally have with the for-sale programs is that I’ve found you’re often buying more than you’ve bargained for. A large body of demographic market analysis has shown that people who download and purchase programs like this are likely candidates for commercial sales of other types of software and products. Thus, I’ve found that some of these programs come with ad banners, and even spyware built into them! You pay for them, and they use you.

AnyDVD (HD)

I currently maintain that the only program worth purchasing (in my humble opinion) is AnyDVD HD, developed and provided by a company based in the Caribbean islands called SlySoft. If you can imagine a company devoted to what basically amounts to developing and selling illegal software, but which has nothing but the most altruistic motives, then SlySoft is your company. You won’t find any spyware or advertising banners, and you won’t have to wonder what else the software is doing to your system. The people at SlySoft basically disagree with the DMCA legislation, and have found a legal way to fight that battle–using what amounts to a form of civil disobedience–by moving to an area outside of US jurisdiction.

The standard DVD version of this program is about 40 US dollars. The HD version sells for about 100 dollars, and comes with the additional ability to decrypt Blu-ray and HD DVD discs. New releases are made available by SlySoft on a near monthly basis to keep abreast of the latest CSS encryption algorithm changes, and these updates are free to registered customers. A full uncrippled version of AnyDVD (HD) can be used for free for a three-week trial period. Thus, the following text assumes you are using AnyDVD (HD).

One interesting aspect of AnyDVD, which is not to be found in any other type of DVD and Blu-ray decryption software, is that it runs as a service on your computer, sitting between the DVD/Blu-ray/HD DVD drive and the operating system itself (Microsoft Windows, in this case). The value of this approach is that any software designed to play or manipulate DVD content sees a modified view of the disc. All discs appear to be unencrypted. Thus, AnyDVD (HD) has the effect of enhancing all DVD manipulation software with decryption functionality.
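
If a picture helps, think of this architecture as a transparent filter layered over the raw device stream. The little Java class below is purely my own analogy–the class and its trivial byte transform are made up for illustration, and have nothing to do with how AnyDVD or CSS actually work:

   import java.io.FilterInputStream;
   import java.io.IOException;
   import java.io.InputStream;

   // Conceptual shim between a raw drive stream and its consumers:
   // everything that reads through it sees only "decrypted" data.
   class DecryptingInputStream extends FilterInputStream {
       DecryptingInputStream(InputStream rawDevice) {
           super(rawDevice);
       }

       @Override
       public int read() throws IOException {
           int b = in.read();
           return (b < 0) ? b : decrypt(b);
       }

       @Override
       public int read(byte[] buf, int off, int len) throws IOException {
           int n = in.read(buf, off, len);
           for (int i = 0; i < n; i++) {
               buf[off + i] = (byte) decrypt(buf[off + i] & 0xFF);
           }
           return n;
       }

       private int decrypt(int b) {
           return b ^ 0x5A; // stand-in transform; real CSS is nothing like this
       }
   }

Any program reading through the shim gets plaintext without ever knowing the shim is there, which is why every DVD tool on the system suddenly appears to gain decryption support.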

And there are plenty of programs on the market for manipulating DVD content. Most of them assume you’re manipulating unencrypted content, because, well, it’s illegal to decrypt DVD content these days. If you’ve been duped into buying a program for manipulating DVD content, only to find that it doesn’t work with any of the DVDs in your movie drawer, then AnyDVD has a few surprises in store for you. Your software will now find all of your movies to be nicely unencrypted!

That cool feature aside, AnyDVD can also be used to copy your DVDs and Blu-ray discs to your hard drive. The reason you want to do this is that the transformation process is time-consuming, and should be done in batches. Thus you’ll want to work with 8 or 10 disc images at once. To do this, you’ll need copies of the unencrypted content of several of your discs on your hard drive at once. Then you can go away for a few days while the transformation software’s batch processor does its job.

The latest version of AnyDVD can create both file-based and ISO-image-based copies of your discs. After installing AnyDVD, simply right-click on the little red fox icon in your system tray, and choose “Rip Video DVD to Hard Disk…”. This will create a folder named after the disc title in your “My Documents” directory (configurable).

Don’t use the “Rip to Image…” feature. It’s great for creating an ISO disc image file, which can then be used to burn an (unencrypted backup copy of a) DVD using readily available DVD burning software such as Roxio or Nero, but the ISO image can’t easily be manipulated by most DVD transformation software (although this would be a cool feature for most such packages).

In the following section, I’ll show you how to use transformation software to copy just the video and audio data files you actually need. This saves you a (small) bit of time in the data copy phase, but significantly increases the transformation setup time for a batch of movies.

Converting Content to a Streamable Format

The transformation process is the process of converting the MPEG-2 DVD video and Dolby or DTS Digital multi-channel audio data files in the DVD data directory into a single streaming media file that you can easily play on your Windows Media Center PC. This requires software, and while there is no doubt a number of excellent free software packages available, the best transformation software, hands down, is found in for-purchase products.

Since there’s nothing illegal about such programs, you’ll find plenty of them on the Internet. The one I use, and have found to be the most effective for my purposes, is called TMPGEnc 4.0 XPress by Pegasys, Inc. However, TMPGEnc is by no means the only option, and you may find something else that is more to your liking. Because I’m used to TMPGEnc, and because it’s a fairly popular program, I’ll illustrate the video transformation process with it.

TMPGEnc consists of two separate programs: the encoder program and the batch processor. The encoder program is what you use to configure a transformation. Transformations are then registered with the batch encoder, which performs the transformation process in the background. The encoder program may also be used to directly execute a transformation, but you may not use the encoder program to configure a transformation while it’s busy executing another one, so the batch encoder is the tool of choice for execution.

Start the encoder program. Were it not for the myriad options available in the various stages of configuring a transformation, you could almost guess the process because TMPGEnc presents it as a step-wise set of tabs on the main window. The four tabs across the top are labeled, “Start”, “Source”, “Format”, and “Encode”.

The Start page allows you to choose the type of project to begin. Essentially, these choices amount to either starting a new project, or opening an existing, previously saved project. Click on “Start a new project” and you’ll be moved to the “Source” page. Here you can decide whether you wish to add audio and video files directly, or use the source wizard to help you select a set of files. Click on “Source wizard”, and a dialog opens with three options, “File”, “DVD Video, DVD-VR or DVD-RAM”, or “Microsoft Windows XP Media Center Edition video recorder file”. Choose the center option, “DVD Video”, and click “Next…”. Select the directory containing DVD data files you’ve previously imported using AnyDVD.

After a few seconds, another dialog opens which contains a list of titles found in that directory. These titles correspond directly to various elements of the DVD disc contents, including FBI warnings, trailers, special features, and the movie itself. Locate the longest title by time–usually between one and three hours in length–and check the box next to that title. Select the drop-down within that title box and ensure that the correct audio stream is selected (e.g., “Dolby Digital, 48000Hz, 5.1/6ch, English”). The second drop-down box allows you to select subtitles, but if you enable subtitles, then you should understand that these will be embedded in your final video output–you won’t be able to turn them off as you can with a normal DVD movie.

Once you’ve selected and configured the track you want, then click the “Next…” button. At this point, you are presented with the option of importing chapter entry points into the key-frame list. Key frames are complete picture frames in the frame list that make up the movie. MPEG encoding is a form of delta encoding, which means that most of the frames in the video stream are made up simply of sets of changes from the previous frame. Every few frames, a key frame is embedded in the stream, but for the most part, each frame is simply a composite of all previous frames, plus delta data in the current frame. By adding chapter entry points as key frames, you have the ability to fast-forward, if you will, to these key frame points while watching the movie.
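
To see why those imported chapter points matter, consider a toy model of delta decoding. Everything below is invented for illustration–real MPEG structures are far more involved–but the principle holds: to display frame N, a player must start from some key frame at or before N and replay every delta in between, so the more key frames there are, the cheaper a seek is.

   import java.util.List;

   class DeltaStream {
       // stream.get(i) holds a full frame when isKey[i] is true; otherwise
       // it holds {pixelIndex, newValue} pairs patching frame i-1.
       // We assume the stream always begins with a key frame.
       final List<int[]> stream;
       final boolean[] isKey;

       DeltaStream(List<int[]> stream, boolean[] isKey) {
           this.stream = stream;
           this.isKey = isKey;
       }

       // Decode frame 'target' by rewinding to the nearest key frame at
       // or before it, then replaying deltas forward.
       int[] decode(int target) {
           int start = target;
           while (!isKey[start]) start--;          // nearest key frame
           int[] frame = stream.get(start).clone();
           for (int i = start + 1; i <= target; i++) {
               int[] delta = stream.get(i);
               for (int j = 0; j < delta.length; j += 2) {
                   frame[delta[j]] = delta[j + 1]; // apply one pixel change
               }
           }
           return frame;
       }
   }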

Make sure the “Copy selected titles to the hard drive” option is unchecked, as the data is already on the hard drive. A few paragraphs back, I mentioned that you could use TMPGEnc to copy just the data you needed, rather than the entire movie. This is where you would do that. If you wish, rather than copying an entire DVD using AnyDVD, you can simply insert the DVD into your drive and select the drive from within TMPGEnc. It will take a bit longer to generate a title list–30 seconds perhaps. When you reach this screen, make sure the “Copy…” option is selected, and TMPGEnc will copy the desired track (unencrypted via AnyDVD) directly from the DVD. For now, uncheck this option and click “OK”.

You’re now presented with another tabbed dialog. The three tabs are, “Clip properties”, “Cut-edit” and “Filters”. Up to now, you might have guessed what to do, but from this point on, the process becomes more complex. Begin on the “Clip properties” page. For the most part, the only thing you need to do here is name the clip. I name the clip after the movie I’m encoding.

Make a mental note of the aspect ratio displayed a few lines down. It will probably say something like “Pixel 40:33 (NTSC 16:9)”, or “Pixel 10:11 (NTSC 4:3)”. Skip the “Cut-edit” page, and move right to the “Filters” page by clicking on the “Filters” tab at the top. This screen shows a list of filters to be applied to the video stream down the left side. The default list contains about 8 different filters, most of which are disabled. The “Deinterlace” filter is always enabled, and there’s no way to disable it. The “Picture resize” filter is also always enabled.

If it’s not already highlighted, click on the “Deinterlace” filter, and then look at the bottom third of the window. By default, the “Deinterlace when necessary” option is selected in the top (Deinterlace mode) drop-down box. Change this option to “Deinterlace always”.

Enable the “Picture crop” filter by checking the check box on the left side of the “Picture crop” filter icon. Ensure it’s selected so that its options are displayed in the filter options pane in the bottom third of the dialog. Likely as not, the video window in the top two-thirds of the dialog is completely black. This is because most video data streams from movies fade in from black, and by default, TMPGEnc’s filter window starts you off viewing the very first frame. Grab the slider under the blue information pane and drag it an inch or so to the right.

If you’ve selected a 16:9 wide-screen movie to experiment with, then you are probably looking at a frame in the movie with black bars on the top and bottom. We need to get rid of these bars. We could encode the film with the black bars, but it’s just a waste of disk space to do so, and entirely unnecessary. Increase the value in the “Top” field in the filter options pane by clicking the little “up-arrow” on the right of the “Top” value edit box. As you do so, you’ll see the video frame increase in breadth, and the black bar decrease at the top. Increase the “Top” value until all of the black is gone. Do the same for the “Bottom” value.

If you’re encoding a standard “Anamorphic” wide-screen DVD, you should find that after cropping off the black bars from the top and bottom, the “Size after cropping” field will display about 720 x 360. Now, 720 is twice 360, so you might think your visible frame should be about twice as wide as it is tall. A quick check with a ruler will show you that it’s more nearly 2.5 times as wide as tall. Well, that doesn’t make sense, does it? And in fact, a glance at the back of the DVD case will probably indicate that the film is formatted to preserve the original theater presentation aspect. Well, if you’ve done any thinking about this before, you are probably aware that original theater presentation format is more like 2.35:1 or even more. Click on the “Clip properties” tab for a second, and note again the “Aspect ratio” field under “Clip Settings”. Anamorphic wide screen means that the pixel width to height ratio is not 1:1 either, but in this case, 40:33. To find the true width to height ratio, you also have to take the pixel ratio into account.
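
Working the example through with the pixel ratio included:

   true aspect = (720 / 360) * (40 / 33)
               = 2 * 1.2121...
               = 2.42 (roughly the 2.35:1-or-wider theatrical ratio)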

Back on the “Filters” screen, you may now press “OK” to accept your “Picture crop” and “Deinterlace” filter settings. The Clip dialog is dismissed, and you’re returned to the main window, where you can now select the “Format” button to move to the encoding options. You’re now presented with a dialog containing a tree-view on the left side. This tree-view shows a hierarchy of encoding formats. These are the output formats that you will be selecting from.

For Windows Media Center with an Extender, you need to have the most Windows compatibility possible. I encode my movies using Windows Media Video and Windows Media Audio formats. Under “Output templates for specific format” you’ll find the third entry (in 4.0) to be “Windows Media Video file output”. Clicking this option will display information about the option on the right side of the dialog. Now click on “Select” at the bottom.

You now see a window similar to the Filter window we looked at earlier, except this is the codec configuration dialog, allowing you to configure the output format options. Within the “Video” tab, select “Windows Media Video 9 Advanced Profile”, set the “Size” to “Pixel 40:33 (NTSC wide)” (actually, set this to the value matching the clip you’re encoding). Choose “30 fps (progressive)” under “Framerate”, and set “Video encode type” to “2 pass VBR (average bitrate)”. These settings work well for me.

Now, set the “Average bitrate” to at least “1500 kb/s”. Larger values will increase quality only a little, but will increase the file size quite a bit. Ensure that the “Maximum key frame interval” is set to at least “8000 ms”. Smaller values here will embed more full (key) frames, giving you slightly better quality at the expense of more hard disk space. I set my “Video quality” field to about 95 percent. This slider actually moves between still and motion video quality. Technically, you should increase this value for movies with more dynamic motion, and decrease it for movies with less dynamic motion.

Moving on to the “Audio” tab, select “Windows Media Audio 10 Professional” for the “Audio codec”, and then select “2 pass VBR (average bitrate)” for “Audio encoding type”. Under “Audio format”, select either the 2 or 5.1 channel, “256 kbps, 48 kHz, 24 bit VBR” option. Songs downloaded from iTunes come in at 128 kb/s, and iTunes Plus songs are DRM-free and come in at 256 kb/s. Many people would tell you that both of these quality levels are too low. These people generally purchase music CD’s and rip them to 320 kb/s or higher. Audio CD’s use an uncompressed format called PCM, but audiophiles will tell you that even PCM quality is bad compared to analog (high-quality reel-to-reel, or the lesser, but more readily available vinyl album format). Ideally, PCM (uncompressed) digital is the best digital option we have available, but even DVD audio is compressed to some degree. PCM audio is available on Blu-ray as a new format called HD Audio (more on this later).

Maybe my hearing is gone at 44 years old, but I find 256 kb/s to be a reasonable compromise between quality and disk space. There’s an article on quality differences between various sampling rates of AAC and MP3 at Planet of Sound, if you’re interested.

You may, of course, play with all of these video and audio settings to find a quality-to-size ratio that suits your tastes. I like high quality, so these fairly high quality settings are more to my liking. They will generate a 1 to 1.5 GB video file from the original 4-5 GB of DVD data. I’m happy with this size, as it allows me to store about 700 movies on my 1 TB drive.

Finally, the “Other” tab has fields for adding certain meta-data values to the output file, including “Title”, “Artist”, “Copyright” and “Comment”. I set the title here using the exact text I wish to have displayed under the movie icon in the Media Center video display panel, because Windows Media Center uses this field as the video name.

When you’re finished, move on to the “Encode” window. Ensure the “Output file name” field contains the correct path and file name for your output file. Now, at the bottom of this window, you’ll find three iconic buttons: an arrow pointing to a film strip, a stylized clock, and a magnifying glass over a film strip. These buttons represent “Encode”, “Send to batch encoder” and “Preview output”, respectively. Choose the center button–the one with the clock. This will add the job to the TMPGEnc batch encoder. The batch encoder window will open and your job will be added to the list.

While your job has been added to the batch encoder, it’s still present in the TMPGEnc program as well. You may click on the right button–the preview button–to see a preview of the output. Depending on the speed and power of your CPU, the amount of memory you have, and the encoding options you chose, you may find this preview output to be a bit choppy. The reason for this is that the movie is being filtered and encoded on the fly from the original DVD video and audio input.

At this point, you may click the “Start” button in TMPGEnc, and begin the process again with another movie. There are various places in the process where you can save your settings. You may then apply these saved settings to the next film, saving yourself the effort of having to remember how each setting was selected. I recommend you use these options.

Getting it Done Faster

If you have a multi-core CPU, you can tell the batch encoder to use multiple cores to encode multiple streams simultaneously. Select the “Options” tab in the batch encoder window, and choose “Preferences” from the menu. At the top of the “Preferences” dialog, you’ll find an option for setting the “Task count”. The range of this value is between 1 and the number of CPU’s or CPU cores you have available. I recommend using at least 1 less than the maximum, because the transcoding process is very CPU intensive. If you aren’t using your HTPC for anything else, then go ahead and set it to the maximum, but don’t say I didn’t warn you. You won’t be able to do anything else while the batch encoder has control.

Once you have streaming video available to play, just configure Windows Media Center to display video content from the directory in which you stored your output files.

High Definition Content

Encoding HD content involves a similar process in TMPGEnc to that of encoding DVD content, but there are some differences that I’d like to point out.

First of all, the Blu-ray disc format is fairly new, and hasn’t been “hacked” as much as the DVD format. Additionally, it’s quite an outlay of cash–between 5,000 and 10,000 dollars–for a company to purchase the Blu-ray specifications, and certain licenses must be adhered to, making it difficult to generate programs that manipulate commercial Blu-ray content.

One important hindrance is that two separate specifications were originally developed for Blu-ray content; the BDMV format for commercial content, and the HDMV format for home and amateur video enthusiasts. Clearly, the intent here was that readily available software for manipulating Blu-ray content would operate and generate discs using the HDMV format, which means they would not coincidentally work with the slightly different BDMV format. This difference in formats would have the effect of reducing the number of legitimate programs on the market that would just happen to work with commercial Blu-ray discs–as long as they were unencrypted.

As it turns out, an oversight in the Blu-ray specification may have saved us on this point. The spec says that HDMV format support is optional in Blu-ray players, and a significant share of the players on the market today have taken advantage of this fact, and left out support for HDMV. Thus, Blu-ray software developers can claim that they’re developing BDMV creation software because HDMV support is not pervasive in players.

Regardless, it will be a while before there are many programs on the market for manipulating commercial Blu-ray disc content. TMPGEnc doesn’t currently provide a Blu-ray import wizard, so the process of adding an HD content clip to the “Sources” window is a bit more manual.

Additionally, TMPGEnc doesn’t discriminate much when it comes to multi-stream input files. My first attempts at using a Blu-ray .m2ts file as an input file had me watching HD content in Spanish on my XBox 360 Media Center extender. I had no option for selecting the English sound track, and the Spanish track was somewhat randomly selected by TMPGEnc from the list of audio streams available in the .m2ts stream file.

To overcome this problem, I had to download a freeware program called “tsremux”, which accepts a .m2ts input file, allows you to select the individual streams in the file, and generates an output file containing only the streams you want. How do you know which streams you want? Well, that’s a bit more difficult. I downloaded yet another freeware program called “BDEdit”, which decodes and displays the internal format of the Blu-ray .m2ts files found in a Blu-ray “STREAMS” directory. By looking carefully at the contents of each stream (hint: look at the largest one first), I was able to determine which of the numbered streams contained the content I wanted. Then, moving into tsremux, I opened the correct .m2ts file, and selected the same numbered streams. The tsremux program also has options to convert a Blu-ray HD audio stream to a DTS Digital stream. While it would be nice to maintain the HD audio format, TMPGEnc doesn’t yet understand it, so you have to downgrade it a bit, or you’ll end up with a silent film.

Once you’ve “remuxed” your .m2ts file, you can simply use the “Add file” option in the “Source” tab of TMPGEnc to add the remuxed version of the .m2ts file. It becomes one of your clips, to which you can apply filters and codecs just like DVD content.

Other HD-specific issues you should be aware of are that HD content generally uses a 1:1 pixel aspect ratio, and this ratio should be carried through the entire transformation configuration process in TMPGEnc.

HD content is 1920 x 1080, but the content is encoded on the disc at 1920 x 1200, so you’ll want to crop off the black bars again. Ensure that you generate 1920 x 1080 output files, so you get the highest resolution possible on your Media Center display, or XBox 360 extender display. For HD content, you’ll find that a direct connection between the video card and the projector or TV offers the best picture quality, rivaling that of Blu-ray itself.

You’ll have to do a few Google searches to locate the tsremux and BDEdit programs. These are more or less contraband, and are thus a bit difficult to come by. Let’s just say I’ll leave the location of these utilities as an exercise for the reader.

Summary

The process I’ve shown you includes my own personal preferences. I recommend you play with your transformation package of choice. Get to know what the features mean, and try out different options to see what their effects are. Give it some time–it’s a lot to take in at once.

In the spirit of doing what’s right, let me remind you again that we rip, store and play only content that we’ve purchased for our own personal use. Anything else is illegal according to the spirit of the law, and frankly, unethical in both the judicial and capitalistic senses of the word! So have fun, but do what’s right!

DMCA and Fair Use

I’d like to take a short detour from technical articles to cover some legalities regarding encrypted DVD and Blu-ray content. Now, this probably sounds like I’m going to write a lengthy disclaimer about copying encrypted copyrighted materials. I’m not. Instead, I’m going to delve into the law a bit and examine some of the rights that we’ve had in the past, but which have essentially been revoked or nullified by laws enacted due primarily to commercial lobbying efforts during the last decade.

The Digital Millennium Copyright Act (DMCA), signed into law by Bill Clinton in 1998, is one of the most significant infringements of American public rights ever recorded in history. And the true irony of it is that it has little or no effect on the problem it was originally touted to solve–that of commercial media piracy. Because lobbyists and legislators are not stupid (I really believe this), one can only assume from this that the actual intent of the law is something different from the published intent.

The effects on consumer rights are both broad and deep. Where once we could simply set our VCRs to record broadcast content, so we could watch it later when we had time (and without commercial interruption, I might add), we could in the future be stopped cold from doing any such thing–even with broadcast content.

In 1976, various broadcast and media content copyright holders brought a lawsuit against Sony over the market introduction of the Betamax video recorder. The outcome was a landmark precedent that nearly guaranteed the rights of the American consumer to record broadcast content for later viewing–a process also known as “time-shifting”. Sony won the case for time-shifting based on section 107 of Title 17 of the United States Code–more commonly known as the “Fair Use” clause of the US Copyright Act.

In early 2009, all analog broadcast will be turned off, in favor of digital broadcast, which is already happening today. To this statement, you might respond with, “Oh, that! Well, I have my 40 dollar set-top box already, so I don’t care.” But pure digital broadcast content paves the way for broadcast content encryption–all of it. Your set-top box will simply quit working in a few years. The very fact that satellite subscription services already encrypt all of their content should be a big red flag. You’ll have to subscribe to broadcast content. And, guess what–due to DMCA legislation, it’s already become illegal to descramble that content for time-shifting or other fair use purposes, because the act of copy-protection circumvention was made a new crime by that very legislation. Thus, fair use doesn’t even come into play.

Did you know you have to purchase a special TV to watch Blu-ray movies in full definition? Of course you did. Your old analog TV simply didn’t have the resolution. Everyone knows that. However, because of the careful timing of the market introduction of Blu-ray content, Blu-ray players, the HDMI and HDCP transmission standards, and HDMI-based digital video monitors (1080p televisions), most people are unaware of the fact that they can’t play their Blu-ray movies at full definition on any device that doesn’t have an HDMI input–even if it does have a component video input. If you want an end-to-end digital experience, then you’ve got to have a TV with a digital input.

After all, they purchased Blu-ray movies to see high-definition content. Why in the world would they want to then convert that picture into anything less than Blu-ray quality? Who would pay 30 dollars for a Blu-ray disc, and then watch it in DVD quality, when they could have spent half that amount for the DVD version of the same movie? No one would, but in the process of setting themselves up for a Blu-ray experience, they’ve also set themselves up with an end-to-end encrypted channel between the Blu-ray content and the very screen on which they view the content. The problem, of course, comes into play for folks that don’t have the 5,000 to 10,000 bucks required to enjoy a true high definition experience at home.

If you have a high-end computer system, for example–perhaps you’re a gamer, and you’ve already laid out the money for a nice video system on your PC–you need special hardware and software-based licenses to play your Blu-ray discs on your PC in end-to-end digital definition. If you have a WUXGA (1920 x 1200) monitor, you must use either the HDMI port or the DVI-D (digital) port on your monitor. But if you do, then your monitor has to decrypt the HDCP digital signal.

This becomes much more of an issue with for-purchase on-line high-definition content–Amazon Unbox, or iTunes HD video, for example. If you do have the hardware, but you’ve lost the content licenses–perhaps you upgraded your operating system and unknowingly lost the licenses that Microsoft Windows silently stores for you–you’ll find your expensive monitor stubbornly refusing to display the HD content on your hard drive–yep, that content that you purchased. The monitor will simply refuse to decrypt any HDCP-encrypted content for which it can’t acquire a license over the HDMI cable. We’re being silently herded into a small corral, from which we’ll find it impossible to do the things we’ve always been able to do before with our legally purchased copyrighted material. Fair use has been subtly but effectively bypassed.

Have you looked around lately at the up-converting DVD players available these days? In fact, you can find all sorts of devices that will “up-convert” video from 480p (DVD quality) to 1080p–but only over HDMI. I spent 75 dollars last year on an “up-converting” DVD player. I plugged it into my component video channel, and messed with it for several hours before I found an obscure reference, at the bottom of page 57 of the manual, to the fact that the up-conversion feature only works over the HDMI output, not the component output. I took it back to Best Buy and got my money back.

Why is this happening? Look, I’m not a conspiracy theorist. I believe I’ve mentioned this before. But when the facts are so glaringly obvious to anyone who puts just a little effort into looking around, it becomes difficult to deny the possibility that the MPAA has an end-goal in all of these subtle changes in the electronics and media industries. Ironically, when I questioned the sales person at Best Buy about the HDMI requirement of up-converting DVD players, he laughed and said, “Of course it only works over HDMI! Didn’t you know that?” Well, now how was I supposed to know something like that when the industry has gone to such great efforts to obscure the details from the average consumer, by carefully using market timing tactics against technology “upgrades”?!

As a society, we’re essentially putting up with this garbage because it doesn’t have anything to do with the issue du jour. By that, I mean human rights. If it has to do with gay or lesbian activities, or with pro-life vs. pro-abortion, we’re all over those topics. But if it has to do with the rights of some commercial interests vs. those of the consumer, then we feel like we’re inherently protected. Because, after all, we’re consumers of commercial products, right? Why would the industry want to hurt us? Why indeed.

About the only protection we've had in the past to uphold our rights as US consumers against copyright holders is the fair use doctrine of the Copyright Act. Fair use has caused a fair amount of heartburn for copyright holders, as it doesn't necessarily guarantee them all present and future rights. Well, they've finally found a way around fair use, and we're nearly locked in now.

The true power of fair use to uphold consumer rights is that it's very loosely defined. Rather than laying down rigid rules and guidelines that could be mechanically enforced, fair use provides a four-pronged test for any given situation. The test must be applied by the judicial branch, which builds new case law for each new circumstance.

Rick Cotton, a New York Times commentator, wrote this about fair use as it relates to the DMCA. The entire article can be found on the New York Times blog site:

Because fairness cannot be reduced to a set of bright line rules, whether a use is fair is determined on a case by case basis and a large body of law has developed over decades to address this issue. The Copyright Act sets out a four factor test (although other factors can be considered). The factors include the purpose and character of the use, the nature of the original work, the amount taken from the existing work and the importance of what is taken and the effect of the use on the potential market for or value of the copyrighted work. Thus, as a legal matter, a case-by-case analysis remains the standard.

Despite the loose definition of fair use, one can see that making a backup copy of purchased media is easily covered by these tests. Fair use tends to allow consumers to do reasonable things with media they've purchased, as long as those things don't, for instance, decrease the potential profit (in terms of future sales) of the copyrighted materials. Of course making a backup copy of purchased media isn't going to hurt the copyright holder! Making a backup copy is not the same as making copies for your friends and family, and it's not going to stop anyone who would otherwise purchase a copy from doing so in the future. In fact–quite ironically, I might add–the ability and guaranteed right to easily make a backup copy might just provoke a purchase that otherwise would not have happened.

Fair use has, in the past, protected consumers against litigation over issues like making backup copies of purchased media–until the DMCA, that is. With the advent of the DMCA, it has literally become a crime to decrypt copy-protected media. Fair use doesn't even enter the picture: before you can make a backup copy, you have to decrypt the original content, and doing so is now simply a federal crime.

Congressman Rick Boucher of Virginia wrote an article in CNET News in 2002 about his attempts to reform the DMCA to accomplish its originally intended goal–stopping commercial piracy of copyrighted media:

The American public has traditionally enjoyed the ability to make convenient and incidental copies of copyrighted works without obtaining the prior consent of copyright owners. These traditional “fair use” rights are at the foundation of the receipt and use of information by the American people. Unfortunately, those rights are now under attack.

In my next article, I'll finally talk about converting video from DVD and Blu-ray to streamable formats, but I'll remind you that it is illegal today to decrypt copyrighted materials that are encrypted. Whatever the purpose, it's simply illegal to do it; fair use doesn't come into play at all. The only consolation is that, regardless of the power with which they are endowed, copyright holders are not likely to prosecute you for making personal backup copies, or for converting your movies to different formats so you can view them the way you want to. The most significant reason is that such individual prosecution would be expensive and would have very little effect on the copyright holder's bottom line. In other words, they're more likely to go after the big dogs.

To wrap things up, I leave you with a reference to an article by Fred von Lohmann, a senior staff attorney with the Electronic Frontier Foundation. Mr. von Lohmann has written a very complete and very readable treatise on the subject of fair use and DRM entitled, simply, "Fair Use and Digital Rights Management". It's recommended reading for anyone interested in digging just a little deeper than average into the consumer-rights ramifications of the DMCA and fair use.

More Fun With PC Hardware

In my last entry, I wrote about how I'd built a new Home Theater PC (HTPC). Well, it's been about a month since then, and much has happened with this project that you may find interesting–or at the very least informative.

I was trying to encode the high-definition video and audio streams from National Treasure 2 into a WMV/WMA stream file. My next article will describe this process in detail, but I'll warn you in advance–encoding Blu-ray content to high-definition streams takes a while. It's taken me an average of about 26 hours per title so far. I've only got a few titles, so I haven't been too worried about it. But it does bother me when something terminates the process at 89 percent complete–several times in a row!

First we had a power outage. Okay. Restart and go away for another day. Then (again at 89 percent), Windows blue-screened and rebooted. Vista reported it as a USB problem. Hmmm… Okay, start it up again. Another blue screen. This time it was a memory problem. Okay, now this was starting to get on my nerves. I was beginning to believe that Disney had somehow managed to add code to their content streams that would crash my machine at 89 percent! Okay, not really, but if I were the superstitious type…

But this time the system wouldn't even boot. Thirty-one days after the invoice date on my new HTPC, the Gigabyte motherboard went kaput. Guess how many days from the invoice date until NewEgg won't provide a Return Merchandise Authorization (RMA) number for a defective motherboard? You got it–thirty days. NewEgg's RMA service is completely automated: if you need an RMA number, you simply go online, select the past invoice with the defective hardware, and click on it for more instructions. But after thirty days, the motherboard line item on the invoice becomes greyed out, so you can't select it.

Luckily, Gigabyte is a reputable company, and was more than willing to talk to me about my options. They really wanted to walk me through some diagnostics before shipping motherboards back and forth. This is a good thing for both the company and the customer. Often, what appears to be a defective motherboard is really a bad processor, or a defective memory stick.

Not in this case, however. By the time I spoke with the technician on the phone, I had already performed all of the diagnostics he requested. Mostly, this involved disconnecting every device from the motherboard except the CPU (which is required for this test) and the power supply connections, then hooking up a speaker and listening for diagnostic tones at power-up. In my case, there were no diagnostic tones under any circumstances–a bad sign all around.

As a side note here, most motherboard manufacturers that sell directly to the end-user have already “wised up” about diagnostics. Most of them have added circuitry to their systems that will run diagnostics without requiring the CPU to be installed. This really cuts down on “No Trouble Found” (NTF) returns. I suggested through email that Gigabyte do the same with their hardware, and was informed that my suggestion would be passed on to the appropriate product groups. Frankly, I’m pretty sure they’ve already considered the idea, and are even possibly incorporating it into their newer designs.

The last thing the technician wanted me to try was swapping out the CPU. I told him I wasn't a professional system builder, and that was all I needed to say. He understood and said, "Okay then, we'll play the board swap game." (Basically, I'm not in the habit of keeping spare $80 CPUs lying around just in case I need to test a motherboard.)

Making Lemonade

After the call ended, I had a thought: I'd already wished more than once that I had originally purchased a quad-core CPU and more RAM–4GB instead of the 2GB I started with. Each core in the CPU can be used by the TMPGEnc 4.0 Xpress batch encoder to independently transcode a video stream, so with 4 cores instead of 2, I can transcode 4 movies at a time (the sketch below illustrates the idea). This was the perfect opportunity for an upgrade. So back I went to my browser and ordered an AMD Phenom X4 9750 processor–the most advanced low-wattage processor this motherboard would handle–and a 2 x 2GB memory kit.
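TMPGEnc handles this scheduling internally, of course, but the general pattern–one CPU-bound encoding job per core–is easy to illustrate. Here's a minimal Java sketch of the idea; the transcode() method is a hypothetical stand-in for the real encoding work:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BatchTranscoder {

    // Hypothetical stand-in for a real encoder invocation; in practice
    // this would call into an encoding library or launch an external tool.
    static void transcode(String title) {
        System.out.println("Transcoding " + title
                + " on " + Thread.currentThread().getName());
        // ... CPU-bound encoding work happens here ...
    }

    public static void main(String[] args) {
        String[] titles = { "movie1.m2ts", "movie2.m2ts",
                            "movie3.m2ts", "movie4.m2ts" };

        // One worker thread per core: a quad-core CPU runs four
        // encodes concurrently, while a dual-core runs only two.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);

        for (final String title : titles) {
            pool.submit(new Runnable() {
                public void run() {
                    transcode(title);
                }
            });
        }

        pool.shutdown(); // finish queued jobs, then let the JVM exit
    }
}

The key point is that each title is an independent job, so doubling the core count roughly doubles the throughput of the whole batch.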

I couldn’t get Crucial memory from the company I used (Directron–NewEgg didn’t carry that particular CPU), but this turned out to be a good thing anyway. (This will make more sense in a few paragraphs.) By the way, I’m almost as impressed with my Directron shopping experience as I have been with my NewEgg experience.

It's a minimum three-week turnaround on an RMA with Gigabyte, and I didn't want to wait that long. So I also went back to NewEgg and bought a duplicate Gigabyte motherboard. Why? Well, I like this board–I just had bad luck with one of them. Stuff like this happens with computer hardware, so you have to be ready for it. That doesn't mean you have to pay for it; you just have to be prepared to use the RMA process as part of a do-it-yourself computer project.

Another reason I bought a second motherboard was to get some value out of the loss I'd already sustained by upgrading to a better CPU: I'd use the replacement motherboard and the original dual-core CPU to upgrade the family computer. After installing the new board, I could see that the original board was indeed defective, so I felt good about having sent the old one in for replacement.

My new CPU and memory sticks hadn't arrived yet, but that didn't mean I couldn't get on with the task at hand–converting my DVDs and Blu-ray discs to stream files that I could watch on my XBox!

When It Rains…

Okay, so now I'm back in business. I restarted the encoding process for National Treasure 2 and came back the next morning to find…would you believe it!? Windows had rebooted at about 89 percent! Honestly! What is going on here?!

Rather than go through this 24-hour process yet again with my fingers crossed, I decided to be a bit more proactive in my troubleshooting. I booted up the Vista installation CD and selected the "Repair" option, which brought up a menu containing several diagnostic and repair options. One of them was a memory checker. I ran it, and Vista immediately found problems with my Crucial 1GB memory sticks: certain values, when written to certain locations in memory, were different when read back again. That's never a good sign for memory sticks. They have only one job–to remember perfectly every value written to them.
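That write-then-verify cycle is essentially what every memory tester does, just with many more patterns and addressing tricks. Here's a minimal, purely illustrative sketch in Java–illustrative because a JVM program only sees heap memory the operating system hands it, whereas real tools like MemTest86+ run on bare metal against physical RAM:

// A naive demonstration of the write-then-verify principle behind
// memory testers. This exercises JVM heap memory only; it is NOT a
// substitute for a bare-metal tester like MemTest86+.
public class NaiveMemoryCheck {

    public static void main(String[] args) {
        int size = 64 * 1024 * 1024;        // 64 MB test buffer
        long[] buffer = new long[size / 8]; // 8 bytes per long

        // Alternating bit patterns flip every bit between passes,
        // which helps catch stuck or coupled bits.
        long[] patterns = { 0x0000000000000000L, 0xFFFFFFFFFFFFFFFFL,
                            0xAAAAAAAAAAAAAAAAL, 0x5555555555555555L };

        int errors = 0;
        for (long pattern : patterns) {
            // Write the pattern to every location...
            for (int i = 0; i < buffer.length; i++) {
                buffer[i] = pattern;
            }
            // ...then read each location back and compare.
            for (int i = 0; i < buffer.length; i++) {
                if (buffer[i] != pattern) {
                    errors++;
                    System.out.printf("Mismatch at index %d: wrote %x, read %x%n",
                                      i, pattern, buffer[i]);
                }
            }
        }
        System.out.println(errors == 0 ? "No errors found."
                                       : errors + " errors found.");
    }
}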

But, to be sure, I downloaded the ISO image for MemTest86+, a freeware memory-testing program, and burned a bootable CD. I booted the MemTest86+ CD and let the program run overnight. By morning, it had found over 230 errors in my RAM. Okay, so I had bad memory too. Weird, but not unheard of. So I headed back to my browser and went to NewEgg again, taking some comfort in the fact that their memory return policy was much better than their CPU or motherboard policies.

While it's true that NewEgg will RMA defective memory sticks for up to one year, in my case the product had been discontinued, so the memory kit line item was also greyed out on my invoice. So I went to Crucial's web site. Ah, no wonder NewEgg didn't sell the product anymore–Crucial had discontinued it. Am I jinxed, or what!? So I called NewEgg, and they said they would gladly take the memory back and replace it with a similar product. This is why I like doing business with this company. They appear to be truly interested in retaining their customers.

But I thought I’d do some research on these memory sticks, while I was at it. After a few Google searches, I found myself staring at the customer review page for the product on (none other than) NewEgg’s website! If I had read the reviews for this memory kit, I probably would have gone with a different brand in the first place.

The trouble is, according to previous customers, Crucial and a few other companies rate their memory products based on over-clocked settings rather than on standard motherboard settings. 240-pin DDR2 memory sticks are supposed to operate correctly at 1.8 volts, but manufacturers sometimes test their products at 1.9, 2.0, or 2.1 volts, so they can increase the apparent marketing value of their products. But what about the poor sap who doesn't know anything about over-clocking–or worse yet, doesn't own a motherboard that can even configure these parameters?

Gigabyte is what you might call a "progressive" motherboard company. Their motherboards are designed with over-clockers in mind; the BIOS can be configured to manage all sorts of low-level voltage and clock settings, and the manual goes into great detail on how and why you might use them–all accompanied by a major disclaimer about damage from "improper" use.

I bumped the memory voltage from 1.8 to 1.9 volts and reran the memory tests. After an hour, I began to get errors again. So I went back into the BIOS and bumped the memory voltage up to 2.1 volts–the highest setting the BIOS allowed. Another hour of testing showed that I was still getting errors. At that point, I had to assume the memory was just plain defective.

Upon further thought, it occurred to me that these bad memory sticks may very well have been responsible for damaging the original motherboard. So I reset the BIOS settings to normal, shut down the system, and waited for my new CPU and memory to arrive. While I waited, I popped out the old memory sticks and sent them back to NewEgg under RMA.

During this whole process, I've learned a few lessons about building PCs. For one thing, I've learned the value of burn-in tests for new systems. Had I spent a couple of days up front running memory diagnostics like MemTest86+, I'd have found the memory problems early–perhaps in time to have saved the original motherboard (if the Crucial memory problems did in fact destroy it, as I suspect).

Really Back in Business!

The next day, my new parts arrived in the mail. I popped in the new 9750 CPU and the 2 x 2GB memory sticks, booted up the MemTest86+ CD (I had learned my lesson), and tested the memory for several hours. No defects. Incidentally, this reminded me of the days when you had to do the same thing with new hard drives. The difference with hard drives was that you could mark the bad sectors as "bad" and end up with just a little less space on your new drive. (We've somehow gotten past this with IDE and SATA drives; I believe I read somewhere that they come pre-tested, with any potentially bad sectors already marked for you!)

Next I completely reinstalled Windows Vista, which wasn't too hard since I'd stored all my previously encoded content on a secondary 1TB hard drive. The OS drive was only 250 GB and contained only a few programs that I'd installed for the purpose of encoding my content. After reinstalling these programs, I fired up TMPGEnc 4.0 Xpress and began the long, arduous process of encoding National Treasure 2 again. I started this last night, so it's not yet finished. Nevertheless, I have every confidence that it will complete this time (but I'm still going to cross my fingers when it gets close to 89 percent…)

As soon as the replacement motherboard arrives from Gigabyte and the replacement memory sticks come back from NewEgg, I'll get to work on the family computer. The kids won't know what happened when they boot up Windows XP Home and find themselves at the login screen before they have time to run to the kitchen for a snack!