I suppose everyone does this and I just haven’t heard about it. I don’t get out much, so it seems cool to me.
When we redesigned our company website (www.laridian.com) a couple of years back, I needed a way to automatically update best-seller lists, new releases, and other dynamic data on the site without relying on an employee to do it every week/month/quarter. Initially, I considered writing a script that would be launched by the OS on a schedule, but I try to stay away from creating yet another little thing I’ll have to remember if we ever move the site or are forced to recreate it on another server.
So it occurred to me that I could keep track of the last time I had generated a particular list or other piece of dynamic content on the site, and the first user who requested it after some time period (say, once a month for “best sellers” and once a week for “new releases”) would cause the site to notice the content was stale and regenerate it. That’s a cool idea on its own, but it isn’t the subject of this article.
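In rough terms, the freshness check is just a timestamp comparison. Here’s a minimal sketch, assuming a hypothetical CachedContent table; the names are mine for illustration, not the actual site schema:

    -- Hypothetical table tracking when each piece of content was last built.
    CREATE TABLE CachedContent (
        Name      VARCHAR(50) NOT NULL PRIMARY KEY,
        LastBuilt DATETIME    NOT NULL
    );

    -- Does the best-sellers list need rebuilding? A row comes back only
    -- if the content is more than a month old.
    SELECT Name
    FROM CachedContent
    WHERE Name = 'Best Sellers'
      AND LastBuilt < DATEADD(month, -1, GETDATE());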
One of the problems I wanted to avoid was having two or three users who happened to show up at about the same time all trigger the process. I was concerned that the rebuild might be time-intensive, and while I don’t mind delaying one customer while the data is created, I didn’t want to delay everyone who visits the site during those few seconds. So I came up with the idea of using SQL Server to implement a generic “lock” or “semaphore” capability I could use anywhere on the site.
The idea is to have a simple table with a Name field and a SetTime field. The Name field is given a UNIQUE constraint, so duplicate records with the same Name are not allowed. The first customer session that discovers it needs to rebuild the best-sellers list tries to INSERT a record with Name = ‘Best Sellers’ and SetTime = GETDATE(). If the INSERT succeeds, the process “owns the lock” and can do what it needs to do. If another session comes along shortly thereafter and discovers that it, too, needs to update the best-sellers list, it will try the same INSERT and fail because a record with that Name already exists. This second process does not own the lock and cannot update the best-sellers list; instead, it serves the old list.
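Here’s a minimal sketch of the table and the acquire step in T-SQL; the table and column names are illustrative, not necessarily what’s on our server:

    -- The lock table: one row per currently-held lock.
    CREATE TABLE Locks (
        Name    VARCHAR(50) NOT NULL UNIQUE,
        SetTime DATETIME    NOT NULL
    );

    -- Attempt to acquire the 'Best Sellers' lock. If a row with this
    -- Name already exists, the UNIQUE constraint makes the INSERT fail,
    -- and the session knows it does not own the lock.
    INSERT INTO Locks (Name, SetTime)
    VALUES ('Best Sellers', GETDATE());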
Once the first session has updated the list, it simply DELETEs the record, thus releasing its lock on the best-sellers list.
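Releasing the lock is equally simple, using the same hypothetical table:

    -- Release the lock by removing the record.
    DELETE FROM Locks WHERE Name = 'Best Sellers';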
Since INSERT is an atomic operation, there’s no possibility that two sessions will both believe they wrote the record: SQL Server enforces the UNIQUE constraint, so exactly one INSERT can succeed.
Since the web is a flaky place, it’s necessary to allow for the possibility that a lock obtained long ago was never released. So every request for a lock checks the SetTime field: if the existing record is “too old,” it is deleted before the INSERT is attempted.
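The expiry check slots in right before the acquire attempt. A sketch, again against the hypothetical Locks table, with a five-minute timeout picked arbitrarily for illustration:

    -- Clear any abandoned lock older than five minutes.
    DELETE FROM Locks
    WHERE Name = 'Best Sellers'
      AND SetTime < DATEADD(minute, -5, GETDATE());

    -- Then attempt the acquire as before. Even if two sessions both run
    -- the DELETE, the UNIQUE constraint guarantees only one INSERT succeeds.
    INSERT INTO Locks (Name, SetTime)
    VALUES ('Best Sellers', GETDATE());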
This allows a certain amount of interprocess cooperation and communication between my Classic ASP pages with very little effort.
One of the side-effects is that the locks span not only all the processes running on the server, but can be made to span processes running on user devices. A recent use case for this capability was keeping a user from synchronizing his notes, highlights, or bookmarks from two (or more) devices with the Laridian “cloud” at the same time, which can result in unexpected loss of data on one or both devices.
The solution to this potential problem was for the synchronization process to request a lock that contains both the name of the table being synchronized and the customer ID. That way, many customers can synchronize, say, Bible bookmarks at the same time, but any one user can only synchronize one device at a time. This is a little more complicated than it seems, since PocketBible for Windows and PocketBible for iOS each have their own synchronization script on the server, while our newer clients (PocketBible for Android, Windows RT, and Windows Phone) use our new TCP-based synchronization server. The scripts for the older clients are written in Classic ASP and are invoked through HTTP POST operations from the client, while the new TCP server is written in C# and runs as a Windows Service. All have access to the same SQL Server database, and all implement the same locking strategy, which is working well.
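In terms of the lock table, that just means composing the Name value from both pieces. A sketch (the naming convention here is my illustration, not necessarily what our scripts use):

    -- One lock per (table, customer) pair: customer 12345 syncing
    -- bookmarks holds 'Bookmarks:12345', which doesn't conflict with
    -- other customers or with his own notes or highlights.
    DECLARE @CustomerID INT = 12345;

    INSERT INTO Locks (Name, SetTime)
    VALUES ('Bookmarks:' + CAST(@CustomerID AS VARCHAR(20)), GETDATE());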
In addition, during debugging the TCP server runs on my local machine and connects to SQL Server via VPN, so I can exercise and test the locking mechanism before it goes live.
The combination of a very simple implementation built on well-known, well-tested technology (SQL Server) and the ability to implement locking across platforms makes this an interesting and (I would argue) elegant solution to a large number of problems.