PG Casts

The Skip Locked feature in Postgres 9.5

Episode #7 hosted by Jack Christensen

Our topic this episode is the new 'skip locked' feature in PostgreSQL 9.5.

Postgres Version 9.5

Transcript

The most obvious use for skip locked is multiple workers consuming jobs from a single source. So our example will be a simple job queue.

We will create a simple jobs table with id and payload columns.

create table jobs(
  id serial primary key,
  payload json not null
);

Now let's add a few jobs.

insert into jobs(payload) values
('{"type": "send_welcome", "to": "john@example.com"}'),
('{"type": "send_password_reset", "to": "sally@example.com"}'),
('{"type": "send_welcome_email", "to": "jane@example.com"}'),
('{"type": "send_welcome_email", "to": "sam@example.com"}'),
('{"type": "send_password_reset", "to": "bill@example.com"}');

Let's take a look at the table.

table jobs;

A worker would typically do the following. First, start a transaction.

begin;

PostgreSQL's row locking functionality only works inside a transaction.

Next, select a row with a for update clause.

select * from jobs limit 1 for update;
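As an aside, without an order by clause the row returned by limit 1 is unspecified. A real queue would typically sort by id so jobs are claimed in insertion order, for example:

select * from jobs order by id limit 1 for update;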

Now the worker application would execute the job, which in this case is to send an email. Once completed, the job is deleted.

delete from jobs where id=1;

And finally, commit the transaction.

commit;

This works fine for a single worker. But what happens if we have multiple workers?

-- Run in two connections at the same time
begin;
select * from jobs limit 1 for update;

The second worker is blocked trying to read the locked row.

Only when the first worker completes can the second begin.

-- In the first connection
delete from jobs where id=2;
commit;

With this style of row locking, our workers are serialized, and there isn't any point in having more than one worker.
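One workaround available before 9.5 is the nowait option, which raises an error instead of blocking when the row is already locked. The application then has to catch the error and retry, and both workers still contend for the same front-of-queue row, so it only partially helps:

-- Raises a "could not obtain lock on row" error instead of blocking
select * from jobs limit 1 for update nowait;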

-- In the second connection
delete from jobs where id=3;
commit;

skip locked is a new feature in Postgres 9.5 that solves this issue. It deliberately trades away the consistent view of the data that Postgres normally provides: rows that other transactions have locked are simply skipped rather than waited on.

Let's try this again.

-- Run in two connections at the same time
begin;
select * from jobs limit 1 for update skip locked;

Both connections were able to select and lock different rows. Now we can have multiple workers consuming from the same job queue at the same time.
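A common refinement of this pattern, not shown in the episode, is to claim and delete a job in a single statement. The delete only takes effect when the transaction commits, so the row stays locked while the worker runs, and a crashed worker's job is returned to the queue on rollback:

delete from jobs
where id = (
  select id
  from jobs
  order by id
  limit 1
  for update skip locked
)
returning *;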

Thanks for watching.