I was reading recently in RESTful Web Services (Leonard Richardson & Sam Ruby, O’Reilly, 2007) about how to implement transactional behavior in a RESTful web service. Most web services today do this with an overloaded POST operation, but the authors assert that this isn’t necessary.
Their example (in Chapter Eight) uses the classic bank account transaction scenario, where a customer wants to transfer 50 dollars from checking to savings. I’ll recap here for your benefit. Both accounts start with 200 dollars. So after a successful transaction, the checking account should contain 150 dollars and the savings account should contain 250 dollars. Let’s consider what happens when two clients operate on the same resources:
Client A -> Read account: 200 dollars
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars
Client B -> Read account: 150 dollars
Client B -> Withdraw 50 dollars: 150 - 50 = 100 dollars
Client B -> Write account: 100 dollars
This is all well and good until you consider that the steps in these operations might not be atomic. Transactions protect against the following situation, wherein the separate steps of these two clients’ operations are interleaved:
Client A -> Read account: 200 dollars
Client B -> Read account: 200 dollars
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client B -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars
Client B -> Write account: 150 dollars
After both operations, the account should contain 100 dollars, but because no account locking was in effect during the two updates, the second withdrawal is lost. Thus 100 dollars was physically removed from the account, but the account balance reflects only a 50 dollar withdrawal. Transaction semantics would cause the following series of steps to occur:
Client A -> Begin transaction
Client A -> Read account: 200 dollars
Client B -> Begin transaction (block)
Client A -> Withdraw 50 dollars: 200 - 50 = 150 dollars
Client A -> Write account: 150 dollars
Client A -> Commit transaction
Client B -> (unblock) Read account: 150 dollars
Client B -> Withdraw 50 dollars: 150 - 50 = 100 dollars
Client B -> Write account: 100 dollars
Client B -> Commit transaction
The authors’ approach to RESTful web service transactions involves using POST against a “transaction factory” URL. In this case /transaction/account-transfer represents the transaction factory. The checking account is represented by /accounts/checking/11 and the savings account by /accounts/savings/55.
Now, if you recall from my October 2008 post, PUT or POST: The REST of the Story, POST is designed to be used to create new resources whose URL is not known in advance, whereas PUT is designed to update or create a resource at a specific URL. Thus, POSTing against a transaction factory should create a new transaction and return its URL in the Location response header.
A user might make the following series of web requests:
GET /accounts/checking/11 HTTP/1.1
Host: example.com
...
200 Ok
balance=200

---

GET /accounts/savings/55 HTTP/1.1
Host: example.com
...
200 Ok
balance=200
The fact that the client reads the account balances before beginning is implied by the text, rather than stated explicitly. At some later time (hopefully not much later) the transaction is started:
POST /transaction/account-transfer HTTP/1.1
Host: example.com
...
201 Created
Location: /transaction/account-transfer/11a5

---

PUT /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com
balance=150
...
200 Ok

---

PUT /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com
balance=250
...
200 Ok

---

PUT /transaction/account-transfer/11a5 HTTP/1.1
Host: example.com
committed=true
...
200 Ok
At first glance this appears to be a nice design, until you begin to consider how such a system might be implemented on the back end. The authors elaborate on one approach: documents PUT to resources within the transaction might be serialized while the transaction is built. When the transaction is committed, the entire set of serialized operations could then be executed by the server within a server-side database transaction. The result of that database transaction is then returned to the client as the result of the client’s commit on the web transaction.
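The serialize-and-replay idea can be sketched in a few lines of Python. Everything here is hypothetical (the names `FakeDB` and `WebTransaction` are mine, not the book’s); it only illustrates the shape of the scheme, not a real implementation:

```python
from contextlib import contextmanager

class FakeDB:
    """Minimal stand-in for a transactional store (illustration only)."""
    def __init__(self, data):
        self.data = data

    @contextmanager
    def transaction(self):
        # A real database would acquire locks / guarantee atomicity here.
        yield

class WebTransaction:
    def __init__(self):
        self.log = []  # ordered (resource, new_balance) pairs

    def put(self, resource, new_balance):
        # A PUT inside the web transaction only appends to the log;
        # nothing touches the database yet.
        self.log.append((resource, new_balance))

    def commit(self, db):
        # committed=true: replay the whole log inside one DB transaction.
        with db.transaction():
            for resource, new_balance in self.log:
                db.data[resource] = new_balance

db = FakeDB({"/accounts/checking/11": 200, "/accounts/savings/55": 200})
txn = WebTransaction()
txn.put("/accounts/checking/11", 150)
txn.put("/accounts/savings/55", 250)
txn.commit(db)
```

Note that the log records absolute balances computed from reads the client made before the transaction even existed, which is exactly where the trouble starts.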
However, this can’t work properly: the server would need the client’s view of the original account balances in order to ensure that no changes slipped in after the client read the accounts but before the transaction was committed (or even begun!). As it stands, a third party could modify the accounts before the new balances are written, and there’s no way for the server to ensure that those modifications are not overwritten by the outdated state replayed from the transaction log. It is, after all, the entire purpose of a transaction to protect a database against this very scenario.
Fixing the Problem
One way to make this work is to include account balance read (GET) operations within the transaction, like this:
POST /transaction/account-transfer HTTP/1.1
Host: example.com
...
201 Created
Location: /transaction/account-transfer/11a5

---

GET /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com
...
200 Ok
balance=200

---

PUT /transaction/account-transfer/11a5/accounts/checking/11 HTTP/1.1
Host: example.com
balance=150
...
200 Ok

---

GET /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com
...
200 Ok
balance=200

---

PUT /transaction/account-transfer/11a5/accounts/savings/55 HTTP/1.1
Host: example.com
balance=250
...
200 Ok

---

PUT /transaction/account-transfer/11a5 HTTP/1.1
Host: example.com
committed=true
...
200 Ok
The GET operations would, of course, return real data in real time. But the fact that the accounts were read within the transaction gives the server a reference point for later comparison during the execution of the back-end database transaction. If either account balance is modified before the back-end transaction begins, the server must abort the transaction and the client must begin a new one.
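A minimal sketch of this validate-at-commit scheme, assuming the server records every balance the client read within the transaction (the class and its names are my own invention):

```python
class ValidatedTransaction:
    def __init__(self):
        self.reads = {}   # resource -> balance the client saw
        self.writes = []  # ordered (resource, new_balance) pairs

    def get(self, db, resource):
        balance = db[resource]
        self.reads[resource] = balance  # reference point for commit time
        return balance

    def put(self, resource, new_balance):
        self.writes.append((resource, new_balance))

    def commit(self, db):
        # A real server would run this inside one database transaction.
        for resource, seen in self.reads.items():
            if db[resource] != seen:
                return False  # someone else modified it: abort
        for resource, new_balance in self.writes:
            db[resource] = new_balance
        return True

# Two clients read the same balance; only the first commit can succeed.
db = {"/accounts/checking/11": 200}
a, b = ValidatedTransaction(), ValidatedTransaction()
a.get(db, "/accounts/checking/11")
b.get(db, "/accounts/checking/11")
a.put("/accounts/checking/11", 150)
b.put("/accounts/checking/11", 150)
ok_a = a.commit(db)
ok_b = b.commit(db)  # fails: the balance is now 150, not the 200 B saw
```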
This mechanism is similar in operation to lock-free data structure semantics. Lock-free data structures are found in low-level systems programming on symmetric multi-processing (SMP) hardware. A lock-free data structure allows multiple threads to make updates without the aid of concurrency locks such as mutexes and spinlocks. Essentially, the mechanism guarantees that an attempt to read, update, and write a data value will either succeed or fail in a transactional manner. The implementation of such a system usually revolves around a machine-level compare-and-swap operation. The would-be modifier reads the data element, updates the read copy, and then performs a conditional write, wherein the condition is that the current value is the same as the originally read value. If the value is different, the operation is aborted and retried. Even under circumstances of high contention the update will likely eventually occur.
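The read-modify-conditional-write loop looks like this in a Python sketch. A real lock-free implementation uses a hardware compare-and-swap instruction; the lock inside `AtomicCell` here only simulates that atomicity and is not part of the pattern itself:

```python
import threading

class AtomicCell:
    """Models a machine word with an atomic compare-and-swap."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def load(self):
        return self._value

    def compare_and_swap(self, expected, new):
        # Atomically: write `new` only if the current value is `expected`.
        with self._lock:
            if self._value == expected:
                self._value = new
                return True
            return False

def withdraw(cell, amount):
    # Read, update the copy, then conditionally write; retry on conflict.
    while True:
        old = cell.load()
        if cell.compare_and_swap(old, old - amount):
            return old - amount

balance = AtomicCell(200)
withdraw(balance, 50)
withdraw(balance, 50)
```

Because each conditional write checks against the originally read value, an interleaved withdrawal forces the loser to retry rather than silently losing an update.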
How this system applies to our web service transaction is simple: if either account is modified outside of the web transaction before the back-end database transaction begins (at the time the committed=true document is PUT), then the server must abort the transaction (by returning “409 Conflict” or “500 Internal Server Error”, say). The client must then retry the entire transaction. This pattern continues until the client is lucky enough to make all of the modifications within the transaction before anyone else touches any of the affected resources. This may sound nasty, but as we’ll see in a moment, the alternatives have less desirable effects.
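The client side of that retry pattern might look like this. The `read`, `write`, and `commit` callables are hypothetical stand-ins for the GET, PUT, and committed=true requests shown earlier; a fake server that rejects the first commit shows the loop converging:

```python
def transfer(amount, read, write, commit, max_attempts=10):
    """Retry the whole web transaction until the server's commit succeeds."""
    for _ in range(max_attempts):
        checking = read("checking")            # GET inside a fresh transaction
        savings = read("savings")
        write("checking", checking - amount)   # PUT the new balances
        write("savings", savings + amount)
        if commit():                           # PUT committed=true
            return True
        # Server reported a conflict; begin again from scratch.
    return False

balances = {"checking": 200, "savings": 200}
pending, attempts = {}, []

def read(account):
    return balances[account]

def write(account, balance):
    pending[account] = balance

def commit():
    attempts.append(1)
    if len(attempts) == 1:
        pending.clear()
        return False  # simulate a conflicting third-party write
    balances.update(pending)
    return True

done = transfer(50, read, write, commit)
```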
Inline Transaction Processing
Another approach is to actually have the server begin a database transaction at the point where the transaction resource is created with the initial POST operation above. Again, the client must read the resources within the transaction. Now the server can guarantee atomicity — and data integrity.
As with the previous approach, this approach works whether the database uses global- or resource-level locking. All web transaction operations happen in real time within a database transaction, so reads return real data and writes happen during the write requests, but of course the writes aren’t visible to other readers until the transaction is committed.
A common problem with this approach is that the database transaction is now exposed as a “wire request”, which means a transaction can be left outstanding by a client that dies in the middle of the operation. Such transactions have to be aborted when the server notices the client is gone; but since HTTP is a stateless, connectionless protocol, it’s difficult for a server to tell when a client has died. At the very least, database transactions begun by web clients should be timed out. Unfortunately, while an abandoned transaction waits to time out, no one else can write to the resources it has locked, which can be a real problem if the database uses global locking. Additional writers are blocked until the transaction is either committed or aborted. Locking a highly contended resource across a series of network requests can significantly impact scalability, as the time frame for a given lock has just gone through the ceiling.
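One way a server might reap such abandoned transactions is sketched below. The class, its names, and the 30-second timeout are all assumptions of mine, not from the book:

```python
import time

class TransactionTable:
    TIMEOUT = 30.0  # seconds of inactivity before a transaction is reaped

    def __init__(self):
        self.open = {}  # transaction id -> time of last request

    def touch(self, txn_id):
        # Called on every request made within the transaction.
        self.open[txn_id] = time.monotonic()

    def reap(self, abort):
        # Run periodically: abort idle transactions so their database
        # locks are released and other writers can proceed.
        now = time.monotonic()
        for txn_id, last in list(self.open.items()):
            if now - last > self.TIMEOUT:
                abort(txn_id)  # roll back the backing DB transaction
                del self.open[txn_id]

table = TransactionTable()
table.touch("11a5")
table.open["11a5"] -= 60  # simulate a client that vanished a minute ago
aborted = []
table.reap(aborted.append)
```

Even with a reaper like this, the locks stay held for the full timeout window, which is the scalability problem described above.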
It’s clear that creating proper RESTful transaction semantics is a tricky problem.