RESTful Authentication

My last post on RESTful transactions sure seemed to attract a lot of attention. There are a number of REST discussion topics that tend to get a lot of hand-waving by the REST community, but no real concrete answers seem to be forthcoming. I believe the most fundamental reason for this is that the existing answers are unpalatable – both to the web services world at large, and to REST purists. Once in a while, when REST purists do offer a solution to a tricky REST-based issue, the web services world responds violently – mostly because the answer to a question like “How do I handle session management in a RESTful manner?” tends to be “just don’t do that.”

I recently read an excellent treatise on the subject of melding RESTful web services concepts with enterprise web service needs. Benjamin Carlyle’s Sound Advice blog entry, entitled The REST Statelessness Constraint hits the mark dead center. Rather than try to persuade enterprise web service designers not to do non-RESTful things, Benjamin instead tries to convey the purposes behind REST constraints (in this case, specifically statelessness), allowing web service designers to make rational tradeoffs in REST purity for the sake of enterprise goals, functionality, and performance. Nice job Ben!

The fact is that the REST architectural style was designed with one primary goal in mind: to create web architectures that would scale well to the Internet. The Internet is large, representing literally billions of clients. To make a web service scale to a billion-client network, you have to make hard choices. For instance, HTTP is connectionless. Connectionless protocols scale very well to large numbers of clients. Can you imagine a web server that had to manage 500,000 simultaneous long-term connections?

Server-side session data is a difficult concept to shoehorn into a RESTful architecture, and it’s the subject of this post. Lots of web services – I’d venture to say 99 percent of them – manage authentication using SSL/TLS and the HTTP “basic auth” authentication scheme. They use SSL/TLS to keep from exposing a user’s name and password over the wire, essentially in clear text. They use basic auth because it’s trivial. Even banking institutions use this mechanism because, for the most part, it’s secure. Those who try to go beyond SSL/TLS/basic auth often do so because they have special needs, such as identity federation of disparate services.

To use SSL/TLS effectively, however, these services try hard to use long-term TCP connections. HTTP 1.0 had no built-in mechanism for allowing long-term connections, but Netscape hacked in an add-on mechanism in the form of the “Connection: keep-alive” header, and most web browsers support it, even today. HTTP 1.1 specifies that connections remain open by default. If an HTTP 1.1 client sends the “Connection: close” header in a request, then the server will close the connection after sending the response; otherwise, the connection remains open.

This is a nice enhancement, because it allows underlying transport-level security mechanisms like SSL/TLS to optimize transport-level session management. Each new SSL/TLS connection has to be authenticated, and this process costs a few round-trips between client and server. By allowing multiple requests to occur over the same authenticated session, the cost of transport-level session management is amortized over several requests.
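As a concrete illustration, a client that keeps its connection open pays the TLS handshake cost only once. Here’s a minimal sketch using the Python requests library; the host and paths are just examples:

import requests

session = requests.Session()
for path in ("/accounts/checking/11", "/accounts/savings/55"):
    # After the first request, the Session reuses the same TCP/TLS connection
    # (HTTP keep-alive), so the handshake cost is amortized across requests.
    response = session.get("https://example.com" + path)
    print(path, response.status_code)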

In fact, by using SSL/TLS mutual authentication as the primary authentication mechanism, no application state need be maintained by the server at all for authentication purposes. For any given request, the server need only ask the connection layer who the client is. If the service requires SSL/TLS mutual auth, and the client has made a request, then the server knows that the client is authenticated. Authorization (resource access control) must still be handled by the service, but authorization data is not session data, it’s service data.
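Here’s a minimal sketch of that idea using Python’s standard ssl module; the certificate file names are made up, and a real service would of course sit behind a proper HTTP server:

import socket, ssl

context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.load_cert_chain(certfile="server.crt", keyfile="server.key")   # hypothetical files
context.load_verify_locations(cafile="trusted-clients.pem")            # hypothetical file
context.verify_mode = ssl.CERT_REQUIRED  # demand and verify a client certificate

with socket.create_server(("0.0.0.0", 8443)) as listener:
    with context.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()
        # The connection layer has already authenticated the client;
        # the service just asks the connection who the client is.
        print(conn.getpeercert().get("subject"))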

However, SSL/TLS mutual auth has an inherent deployment problem: key management. No matter how you slice it, authentication requires that the server know something about the client in order to authenticate that client. For SSL/TLS mutual auth, that something is a public key certificate. Somehow, each client must create a public key certificate and install it on the server. Thus, mutual auth is often reserved for the enterprise, where key management is done by IT departments for the entire company. Even then, IT departments cringe at the thought of key management issues.

User name and password schemes are simpler, because often web services will provide users a way of creating their account and setting their user name and password in the process. Credential management done. Key management can be handled in the same way, but it’s not as simple. Some web services allow users to upload their public key certificate, which is the SSL/TLS mutual-auth equivalent of setting a password. But a user has to create a public/private key pair, and then generate a public key certificate from this key pair. Java keytool makes this process as painless as possible, but it’s still far from simple. No – user name and password is by far the simpler solution.

As I mentioned above, the predominant solution today is a combination of CA-based transport-layer certificate validation for server authentication, and HTTP basic auth for client authentication. The web service obtains a certificate for its public/private key pair that’s been signed by a well-known Certificate Authority (CA). This is done by generating a certificate signing request using either openssl or the Java keytool utility (or by using less mainstream tools provided by the CA). Because most popular web browsers today ship well-known CA certificates in their truststores, and because clients implicitly trust services that provide certificates signed by these well-known CAs, people tend to feel warm and fuzzy because no warning messages pop up on the screen when they connect to one of these services. Should they fear? Given the service verification process used by CAs like Entrust and VeriSign, they probably should, but that problem is very difficult to solve, so most people just live with this stop-gap solution.

On the server side, the web service needs to know the identity of the client in order to know what service resources that client should have access to. If a client requests a protected resource, the server must be able to validate that client’s right to the resource. If the client hasn’t authenticated yet, the server challenges the client for credentials using a response header and a “401 Unauthorized” response code. Using the basic auth scheme, the client base64-encodes his user name and password and returns this string in a request header. Now, base64 encoding is not encryption, so the client is essentially passing his user name and password in what amounts to clear text. This is why SSL/TLS is used. By the time the server issues the challenge, the SSL/TLS encrypted channel is already established, so the user’s credentials are protected from even non-casual snoopers.
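From the client’s side, the whole scheme amounts to a few lines. A sketch using only the Python standard library (the URL and credentials are made up):

import base64
import urllib.request

# Base64 is an encoding, not encryption; TLS is what protects this on the wire.
credentials = base64.b64encode(b"alice:secret").decode("ascii")
request = urllib.request.Request(
    "https://example.com/accounts/checking/11",
    headers={"Authorization": "Basic " + credentials},
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.read())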

When the proper credentials arrive in the next attempt to request the protected resource, the server decodes the user name and password, verifies them against its user database, and either returns the requested resource, or fails the request with “401 Unauthorized” again, if the user doesn’t have the requisite rights to the requested resource.
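To make the decode-and-verify step concrete, here is a minimal server-side sketch in Python (WSGI-style, with a made-up in-memory user table standing in for the service’s user database):

import base64

USERS = {"alice": "secret"}  # stand-in for the service's user database

def protected_resource(environ, start_response):
    header = environ.get("HTTP_AUTHORIZATION", "")
    if header.startswith("Basic "):
        decoded = base64.b64decode(header[len("Basic "):]).decode("utf-8")
        username, _, password = decoded.partition(":")
        if USERS.get(username) == password:
            start_response("200 OK", [("Content-Type", "text/plain")])
            return [b"the protected resource"]
    # Missing or bad credentials: challenge the client to authenticate.
    start_response("401 Unauthorized",
                   [("WWW-Authenticate", 'Basic realm="example"')])
    return [b""]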

If this were the extent of the matter, there would be nothing unRESTful about this protocol. Each subsequent request contains the user’s name and password in the Authorization header, so the server has the option of using this information on each request to ensure that only authorized users can access protected resources. No session state is managed by the server here. Session or application state is managed by the client, using a well-known protocol for passing client credentials on each request – basic auth.

But things don’t usually stop there. Web services want to provide a good session experience for the user – perhaps a shopping cart containing selected items. Servers typically implement shopping carts by keeping a session database, and associating collections of selected items with users in this database. How long should such session data be kept around? What if the user tires of shopping before she checks out, goes for coffee, and gets hit by a car? Most web services deal with such scenarios by timing out shopping carts after a fixed period – anywhere from an hour to a month. What if the session includes resource locks? For example, items in a shopping cart are sometimes made unavailable to others for selection – they’re locked. Companies like to offer good service to customers, but keeping items locked in your shopping cart for a month while you’re recovering in the hospital just isn’t good business.

REST principles dictate that keeping any sort of server-side session data is not viable for Internet-scalable web services. One approach is to encode all session data in a cookie that’s passed back and forth between client and server. While this approach allows the server to be completely stateless with respect to the client, it has its flaws. Even though the cookie holds application state data, it’s still owned and interpreted by the server, not the client. Most clients don’t even try to interpret this data; they just hand it back to the server on each successive request. Since it is application state, the client should manage it, not the server.
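For what it’s worth, here is a sketch of how such a client-held cookie might at least be made tamper-evident, so the server stores no session data but can still trust what comes back; the HMAC scheme, secret key, and cart format are assumptions for illustration:

import hmac, hashlib, json

SECRET_KEY = b"server-side-secret"  # hypothetical key known only to the server

def encode_cart_cookie(cart):
    payload = json.dumps(cart, sort_keys=True).encode("utf-8")
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return payload.hex() + "." + signature

def decode_cart_cookie(cookie):
    payload_hex, _, signature = cookie.partition(".")
    payload = bytes.fromhex(payload_hex)
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        raise ValueError("cookie was tampered with")
    return json.loads(payload)

cookie = encode_cart_cookie({"items": ["sku-123", "sku-456"]})
print(decode_cart_cookie(cookie))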

There are no good answers to these questions yet. What it comes down to is that service design is a series of trade-offs. If you really need your web service to scale to billions of users, then you’d better find ways to make your architecture compliant with REST principles. If you’re only worried about servicing a few thousand users at a time, then perhaps you can relax the constraints a bit. The point is that you should understand the constraints, and then make informed design decisions.

RESTful Transactions

I was reading recently in RESTful Web Services (Leonard Richardson & Sam Ruby, O’Reilly, 2007) about how to implement transactional behavior in a RESTful web service. Most web services today do this with an overloaded POST operation, but the authors assert that this isn’t necessary.

Their example (in Chapter Eight) uses the classic bank account transaction scenario, where a customer wants to transfer 50 dollars from checking to savings. I’ll recap here for your benefit. Both accounts start with 200 dollars. So after a successful transaction, the checking account should contain 150 dollars and the savings account should contain 250 dollars. Let’s consider what happens when two clients operate on the same resources:

Client A -> Read account: 200 dollars
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars

Client B -> Read account: 150 dollars
Client B -> Withdraw 50 dollars: 150 - 50 = 100 dollars
Client B -> Write account: 100 dollars

This is all well and good until you consider that the steps in these operations might not be atomic. Transactions protect against the following situation, wherein the separate steps of these two Clients’ operations are interleaved:

Client A -> Read account: 200 dollars
Client B -> Read account: 200 dollars
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client B -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars
Client B -> Write account: 150 dollars

After both operations, the account should contain 100 dollars, but because no account locking was in effect during the two updates, the second withdrawal is lost. Thus 100 dollars was physically removed from the account, but the account balance reflects only a 50 dollar withdrawal. Transaction semantics would cause the following series of steps to occur:

Client A -> Begin transaction
Client A -> Read account: 200 dollars
Client B -> Begin Transaction (block)
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars
Client A -> Commit transaction
Client B -> (unblock) Read account: 150 dollars
Client B -> Withdraw 50 dollars: 150 - 50 = 100 dollars
Client B -> Write account: 100 dollars
Client B -> Commit transaction

Web Transactions

The authors’ approach to RESTful web service transactions involves using POST against a “transaction factory” URL. In this case /transactions/account-transfer represents the transaction factory. The checking account is represented by /accounts/checking/11 and the savings account by /accounts/savings/55.

Now, if you recall from my October 2008 post, PUT or POST: The REST of the Story, POST is designed to be used to create new resources whose URL is not known in advance, whereas PUT is designed to update or create a resource at a specific URL. Thus, POSTing against a transaction factory should create a new transaction and return its URL in the Location response header.
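In client code, that factory step might look roughly like this (a sketch, assuming the Python requests library and the URLs used in this example):

import requests

# POST to the transaction factory; the server names the new transaction
# resource in the Location response header.
response = requests.post("https://example.com/transaction/account-transfer")
transaction_url = "https://example.com" + response.headers["Location"]
print(response.status_code, transaction_url)  # e.g. 201 .../account-transfer/11a5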

A user might make the following series of web requests:

GET /accounts/checking/11 HTTP/1.1
Host: example.com
...
200 Ok

balance=200
---
GET /accounts/savings/55 HTTP/1.1
Host: example.com
...
200 Ok

balance=200

The fact that the client reads the account balances before beginning is implied by the text, rather than stated explicitly. At some later time (hopefully not much later) the transaction is started:

POST /transaction/account-transfer HTTP/1.1
Host: example.com
...
201 Created
Location: /transaction/account-transfer/11a5
---
PUT /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com

balance=150
...
200 Ok
---
PUT /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com

balance=250
...
200 Ok
---
PUT /transaction/account-transfer/11a5 HTTP/1.1
Host: example.com

committed=true
...
200 Ok

At first glance, this appears to be a nice design, until you begin to consider the way such a system might be implemented on the back end. The authors elaborate on one approach. They state that documents PUT to resources within the transaction might be serialized during building of the transaction. When the transaction is committed the entire set of serialized operations could then be executed by the server within a server-side database transaction. The result of committing the transaction is then returned to the client as the result of the client’s commit on the web transaction.
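To make that concrete, here is one way the commit-time replay the authors describe might look on the server, sketched in Python with sqlite3; the queued-PUT representation and table layout are assumptions:

import sqlite3

def commit_web_transaction(db_path, queued_puts):
    # queued_puts: the serialized PUT documents, reduced to (account_id, new_balance)
    connection = sqlite3.connect(db_path)
    try:
        with connection:  # one database transaction around the whole replay
            for account_id, new_balance in queued_puts:
                connection.execute(
                    "UPDATE accounts SET balance = ? WHERE id = ?",
                    (new_balance, account_id),
                )
    finally:
        connection.close()

# Replaying the example: checking 11 -> 150, savings 55 -> 250.
commit_web_transaction("bank.db", [("checking/11", 150), ("savings/55", 250)])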

However, this can’t work properly, as the server would have to have the client’s view of the original account balances in order to ensure that no changes had slipped in after the client read the accounts, but before the transaction was committed (or even begun!). As it stands, a third party could modify the accounts before the new balances are written, and there’s no way for the server to ensure that those modifications are not overwritten by the outdated state replayed from the transaction log. It is, after all, the entire purpose of a transaction to protect a database against this very scenario.

Fixing the Problem

One way to make this work is to include account balance read (GET) operations within the transaction, like this:

POST /transaction/account-transfer HTTP/1.1
Host: example.com
...
201 Created
Location: /transaction/account-transfer/11a5
---
GET /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com
...
200 Ok

balance=200
---
PUT /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com

balance=150
...
200 Ok
---
GET /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com
...
200 Ok

balance=200
---
PUT /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com

balance=250
...
200 Ok
---
PUT /transaction/account-transfer/11a5 HTTP/1.1
Host: example.com

committed=true
...
200 Ok

The GET operations would, of course, return real data in real time. But the fact that the accounts were read within the transaction would give the server a reference point for later comparison during the execution of the back-end database transaction. If either account balance is modified before the back-end transaction begins, then the server would have to abort the transaction and the client would have to begin a new one.

This mechanism is similar in operation to lock-free data structure semantics. Lock-free data structures are found in low-level systems programming on symmetric multi-processing (SMP) hardware. A lock-free data structure allows multiple threads to make updates without the aid of concurrency locks such as mutexes and spinlocks. Essentially, the mechanism guarantees that an attempt to read, update, and write a data value will either succeed or fail in a transactional manner. The implementation of such a system usually revolves around a machine-level compare-and-swap (test-and-set) operation. The would-be modifier reads the data element, updates the read copy, and then performs a conditional write, wherein the condition is that the stored value is still the same as the originally read value. If the value is different, the operation is aborted and retried. Even under high contention, the update will likely eventually occur.
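Applied to a single account row, the same read-then-conditional-write idea might be sketched like this in Python with sqlite3; the table and column names are assumptions:

import sqlite3

def withdraw(connection, account_id, amount):
    while True:
        (balance,) = connection.execute(
            "SELECT balance FROM accounts WHERE id = ?", (account_id,)
        ).fetchone()
        new_balance = balance - amount
        # Conditional write: it only succeeds if nobody changed the balance
        # since we read it; otherwise rowcount is 0 and we retry.
        cursor = connection.execute(
            "UPDATE accounts SET balance = ? WHERE id = ? AND balance = ?",
            (new_balance, account_id, balance),
        )
        connection.commit()
        if cursor.rowcount == 1:
            return new_balance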

How this system applies to our web service transaction is simple: if either account is modified outside of the web transaction before the back-end database transaction begins (at the time the committed=true document is PUT), then the server must abort the transaction (by returning “500 Internal Server Error” or something similar). The client must then retry the entire transaction. This pattern continues until the client is lucky enough to make all of the modifications within the transaction before anyone else touches any of the affected resources. This may sound nasty, but as we’ll see in a moment, the alternatives have less desirable effects.
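From the client’s point of view, the retry pattern might look roughly like this (a sketch assuming the Python requests library, the URLs from the example, and the simple balance=NNN representation):

import requests

BASE = "https://example.com"

def transfer_50():
    while True:
        txn = requests.post(BASE + "/transaction/account-transfer")
        txn_url = BASE + txn.headers["Location"]
        checking_url = txn_url + "/accounts/checking/11"
        savings_url = txn_url + "/accounts/savings/55"
        # Read both balances within the transaction (bodies like "balance=200").
        checking = int(requests.get(checking_url).text.split("=")[1])
        savings = int(requests.get(savings_url).text.split("=")[1])
        # Write the new balances, then attempt the commit.
        requests.put(checking_url, data={"balance": checking - 50})
        requests.put(savings_url, data={"balance": savings + 50})
        commit = requests.put(txn_url, data={"committed": "true"})
        if commit.ok:
            return  # commit succeeded
        # Another writer touched an account first: start the whole thing over.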

Inline Transaction Processing

Another approach is to have the server actually begin a database transaction at the point where the transaction resource is created with the initial POST operation above. Again, the client must read the resources within the transaction. Now the server can guarantee both atomicity and data integrity.

As with the previous approach, this approach works whether the database uses global- or resource-level locking. All web transaction operations happen in real time within a database transaction, so reads return real data and writes happen during the write requests, but of course the writes aren’t visible to other readers until the transaction is committed.

A common problem with this approach is that the database transaction is now exposed as a “wire request”, which means that a transaction can be left outstanding by a client that dies in the middle of the operation. Such transactions have to be aborted when the server notices the client is gone. Since HTTP is a stateless, connectionless protocol, it’s difficult for a server to tell when a client has died. At the very least, database transactions begun by web clients should be timed out. Unfortunately, until such a transaction times out, no one else can write to the locked resources, which can be a real problem if the database uses global locking. Additional writers are blocked until the transaction is either committed or aborted. Locking a highly contended resource over a series of network requests can significantly impact scalability, as the time frame for a given lock has just gone through the ceiling.
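A server taking this approach would need something like the following reaper to time out and roll back transactions abandoned by dead clients; the transaction registry and abort hook here are assumptions, sketched in Python:

import time

TIMEOUT_SECONDS = 30
open_transactions = {}   # txn_id -> (started_at, db_transaction)

def reap_abandoned_transactions(now=None):
    now = now if now is not None else time.monotonic()
    for txn_id, (started_at, db_txn) in list(open_transactions.items()):
        if now - started_at > TIMEOUT_SECONDS:
            db_txn.rollback()            # release locks held by the dead client
            del open_transactions[txn_id]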

It’s clear that creating proper RESTful transaction semantics is a tricky problem.