author    Tom Lane <tgl@sss.pgh.pa.us>    2011-08-09 15:30:45 -0400
committer Tom Lane <tgl@sss.pgh.pa.us>    2011-08-09 15:30:45 -0400
commit    4e15a4db5e65e43271f8d20750d6500ab12632d0 (patch)
tree      453d328ca3280a79869da3d489a86a0703c87f89 /src/backend/port/unix_latch.c
parent    cff60f2dfa470d5736a19d36eb910844e31db764 (diff)
Documentation improvement and minor code cleanups for the latch facility.
Improve the documentation around weak-memory-ordering risks, and do a pass of general editorialization on the comments in the latch code. Make the Windows latch code more like the Unix latch code where feasible; in particular, provide the same Assert checks in both implementations.

Fix poorly-placed WaitLatch call in syncrep.c.

This patch resolves, for the moment, concerns around weak-memory-ordering bugs in latch-related code: we have documented the restrictions and checked that existing calls meet them. In 9.2 I hope that we will install suitable memory barrier instructions in SetLatch/ResetLatch, so that their callers don't need to be quite so careful.
Diffstat (limited to 'src/backend/port/unix_latch.c')
-rw-r--r-- src/backend/port/unix_latch.c | 166
1 file changed, 79 insertions(+), 87 deletions(-)
diff --git a/src/backend/port/unix_latch.c b/src/backend/port/unix_latch.c
index 9940a42e33c..950a3a40117 100644
--- a/src/backend/port/unix_latch.c
+++ b/src/backend/port/unix_latch.c
@@ -3,60 +3,6 @@
* unix_latch.c
* Routines for inter-process latches
*
- * A latch is a boolean variable, with operations that let you to sleep
- * until it is set. A latch can be set from another process, or a signal
- * handler within the same process.
- *
- * The latch interface is a reliable replacement for the common pattern of
- * using pg_usleep() or select() to wait until a signal arrives, where the
- * signal handler sets a global variable. Because on some platforms, an
- * incoming signal doesn't interrupt sleep, and even on platforms where it
- * does there is a race condition if the signal arrives just before
- * entering the sleep, the common pattern must periodically wake up and
- * poll the global variable. pselect() system call was invented to solve
- * the problem, but it is not portable enough. Latches are designed to
- * overcome these limitations, allowing you to sleep without polling and
- * ensuring a quick response to signals from other processes.
- *
- * There are two kinds of latches: local and shared. A local latch is
- * initialized by InitLatch, and can only be set from the same process.
- * A local latch can be used to wait for a signal to arrive, by calling
- * SetLatch in the signal handler. A shared latch resides in shared memory,
- * and must be initialized at postmaster startup by InitSharedLatch. Before
- * a shared latch can be waited on, it must be associated with a process
- * with OwnLatch. Only the process owning the latch can wait on it, but any
- * process can set it.
- *
- * There are three basic operations on a latch:
- *
- * SetLatch - Sets the latch
- * ResetLatch - Clears the latch, allowing it to be set again
- * WaitLatch - Waits for the latch to become set
- *
- * The correct pattern to wait for an event is:
- *
- * for (;;)
- * {
- * ResetLatch();
- * if (work to do)
- * Do Stuff();
- *
- * WaitLatch();
- * }
- *
- * It's important to reset the latch *before* checking if there's work to
- * do. Otherwise, if someone sets the latch between the check and the
- * ResetLatch call, you will miss it and Wait will block.
- *
- * To wake up the waiter, you must first set a global flag or something
- * else that the main loop tests in the "if (work to do)" part, and call
- * SetLatch *after* that. SetLatch is designed to return quickly if the
- * latch is already set.
- *
- *
- * Implementation
- * --------------
- *
* The Unix implementation uses the so-called self-pipe trick to overcome
* the race condition involved with select() and setting a global flag
* in the signal handler. When a latch is set and the current process
@@ -65,8 +11,8 @@
* interrupt select() on all platforms, and even on platforms where it
* does, a signal that arrives just before the select() call does not
* prevent the select() from entering sleep. An incoming byte on a pipe
- * however reliably interrupts the sleep, and makes select() to return
- * immediately if the signal arrives just before select() begins.
+ * however reliably interrupts the sleep, and causes select() to return
+ * immediately even if the signal arrives before select() begins.
*
* When SetLatch is called from the same process that owns the latch,
* SetLatch writes the byte directly to the pipe. If it's owned by another
@@ -100,7 +46,7 @@
/* Are we currently in WaitLatch? The signal handler would like to know. */
static volatile sig_atomic_t waiting = false;
-/* Read and write end of the self-pipe */
+/* Read and write ends of the self-pipe */
static int selfpipe_readfd = -1;
static int selfpipe_writefd = -1;
@@ -116,7 +62,7 @@ static void sendSelfPipeByte(void);
void
InitLatch(volatile Latch *latch)
{
- /* Initialize the self pipe if this is our first latch in the process */
+ /* Initialize the self-pipe if this is our first latch in the process */
if (selfpipe_readfd == -1)
initSelfPipe();
@@ -127,13 +73,14 @@ InitLatch(volatile Latch *latch)
/*
* Initialize a shared latch that can be set from other processes. The latch
- * is initially owned by no-one, use OwnLatch to associate it with the
+ * is initially owned by no-one; use OwnLatch to associate it with the
* current process.
*
* InitSharedLatch needs to be called in postmaster before forking child
* processes, usually right after allocating the shared memory block
- * containing the latch with ShmemInitStruct. The Unix implementation
- * doesn't actually require that, but the Windows one does.
+ * containing the latch with ShmemInitStruct. (The Unix implementation
+ * doesn't actually require that, but the Windows one does.) Because of
+ * this restriction, we have no concurrency issues to worry about here.
*/
void
InitSharedLatch(volatile Latch *latch)
@@ -145,23 +92,30 @@ InitSharedLatch(volatile Latch *latch)
/*
* Associate a shared latch with the current process, allowing it to
- * wait on it.
+ * wait on the latch.
+ *
+ * Although there is a sanity check for latch-already-owned, we don't do
+ * any sort of locking here, meaning that we could fail to detect the error
+ * if two processes try to own the same latch at about the same time. If
+ * there is any risk of that, caller must provide an interlock to prevent it.
*
- * Make sure that latch_sigusr1_handler() is called from the SIGUSR1 signal
- * handler, as shared latches use SIGUSR1 to for inter-process communication.
+ * In any process that calls OwnLatch(), make sure that
+ * latch_sigusr1_handler() is called from the SIGUSR1 signal handler,
+ * as shared latches use SIGUSR1 for inter-process communication.
*/
void
OwnLatch(volatile Latch *latch)
{
Assert(latch->is_shared);
- /* Initialize the self pipe if this is our first latch in the process */
+ /* Initialize the self-pipe if this is our first latch in this process */
if (selfpipe_readfd == -1)
initSelfPipe();
/* sanity check */
if (latch->owner_pid != 0)
elog(ERROR, "latch already owned");
+
latch->owner_pid = MyProcPid;
}
@@ -173,25 +127,26 @@ DisownLatch(volatile Latch *latch)
{
Assert(latch->is_shared);
Assert(latch->owner_pid == MyProcPid);
+
latch->owner_pid = 0;
}
/*
- * Wait for a given latch to be set, postmaster death, or until timeout is
- * exceeded. 'wakeEvents' is a bitmask that specifies which of those events
+ * Wait for a given latch to be set, or for postmaster death, or until timeout
+ * is exceeded. 'wakeEvents' is a bitmask that specifies which of those events
* to wait for. If the latch is already set (and WL_LATCH_SET is given), the
* function returns immediately.
*
- * The 'timeout' is given in microseconds. It must be >= 0 if WL_TIMEOUT
- * event is given, otherwise it is ignored. On some platforms, signals cause
- * the timeout to be restarted, so beware that the function can sleep for
- * several times longer than the specified timeout.
+ * The 'timeout' is given in microseconds. It must be >= 0 if WL_TIMEOUT flag
+ * is given. On some platforms, signals cause the timeout to be restarted,
+ * so beware that the function can sleep for several times longer than the
+ * specified timeout.
*
* The latch must be owned by the current process, ie. it must be a
* backend-local latch initialized with InitLatch, or a shared latch
* associated with the current process by calling OwnLatch.
*
- * Returns bit field indicating which condition(s) caused the wake-up. Note
+ * Returns bit mask indicating which condition(s) caused the wake-up. Note
* that if multiple wake-up conditions are true, there is no guarantee that
* we return all of them in one call, but we will return at least one. Also,
* according to the select(2) man page on Linux, select(2) may spuriously
@@ -200,7 +155,7 @@ DisownLatch(volatile Latch *latch)
* readable, or postmaster has died, even when none of the wake conditions
* have been satisfied. That should be rare in practice, but the caller
* should not use the return value for anything critical, re-checking the
- * situation with PostmasterIsAlive() or read() on a socket if necessary.
+ * situation with PostmasterIsAlive() or read() on a socket as necessary.
*/
int
WaitLatch(volatile Latch *latch, int wakeEvents, long timeout)
@@ -247,12 +202,18 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
int hifd;
/*
- * Clear the pipe, and check if the latch is set already. If someone
+ * Clear the pipe, then check if the latch is set already. If someone
* sets the latch between this and the select() below, the setter will
* write a byte to the pipe (or signal us and the signal handler will
* do that), and the select() will return immediately.
+ *
+ * Note: we assume that the kernel calls involved in drainSelfPipe()
+ * and SetLatch() will provide adequate synchronization on machines
+ * with weak memory ordering, so that we cannot miss seeing is_set
+ * if the signal byte is already in the pipe when we drain it.
*/
drainSelfPipe();
+
if ((wakeEvents & WL_LATCH_SET) && latch->is_set)
{
result |= WL_LATCH_SET;
@@ -263,7 +224,10 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
break;
}
+ /* Must wait ... set up the event masks for select() */
FD_ZERO(&input_mask);
+ FD_ZERO(&output_mask);
+
FD_SET(selfpipe_readfd, &input_mask);
hifd = selfpipe_readfd;
@@ -281,7 +245,6 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
hifd = sock;
}
- FD_ZERO(&output_mask);
if (wakeEvents & WL_SOCKET_WRITEABLE)
{
FD_SET(sock, &output_mask);
@@ -320,21 +283,30 @@ WaitLatchOrSocket(volatile Latch *latch, int wakeEvents, pgsocket sock,
{
result |= WL_POSTMASTER_DEATH;
}
- } while(result == 0);
+ } while (result == 0);
waiting = false;
return result;
}
/*
- * Sets a latch and wakes up anyone waiting on it. Returns quickly if the
- * latch is already set.
+ * Sets a latch and wakes up anyone waiting on it.
+ *
+ * This is cheap if the latch is already set, otherwise not so much.
*/
void
SetLatch(volatile Latch *latch)
{
pid_t owner_pid;
+ /*
+ * XXX there really ought to be a memory barrier operation right here,
+ * to ensure that any flag variables we might have changed get flushed
+ * to main memory before we check/set is_set. Without that, we have to
+ * require that callers provide their own synchronization for machines
+ * with weak memory ordering (see latch.h).
+ */
+
/* Quick exit if already set */
if (latch->is_set)
return;
@@ -346,13 +318,21 @@ SetLatch(volatile Latch *latch)
* we're in a signal handler. We use the self-pipe to wake up the select()
* in that case. If it's another process, send a signal.
*
- * Fetch owner_pid only once, in case the owner simultaneously disowns the
- * latch and clears owner_pid. XXX: This assumes that pid_t is atomic,
- * which isn't guaranteed to be true! In practice, the effective range of
- * pid_t fits in a 32 bit integer, and so should be atomic. In the worst
- * case, we might end up signaling wrong process if the right one disowns
- * the latch just as we fetch owner_pid. Even then, you're very unlucky if
- * a process with that bogus pid exists.
+ * Fetch owner_pid only once, in case the latch is concurrently getting
+ * owned or disowned. XXX: This assumes that pid_t is atomic, which isn't
+ * guaranteed to be true! In practice, the effective range of pid_t fits
+ * in a 32 bit integer, and so should be atomic. In the worst case, we
+ * might end up signaling the wrong process. Even then, you're very
+ * unlucky if a process with that bogus pid exists and belongs to
+ * Postgres; and PG database processes should handle excess SIGUSR1
+ * interrupts without a problem anyhow.
+ *
+ * Another sort of race condition that's possible here is for a new process
+ * to own the latch immediately after we look, so we don't signal it.
+ * This is okay so long as all callers of ResetLatch/WaitLatch follow the
+ * standard coding convention of waiting at the bottom of their loops,
+ * not the top, so that they'll correctly process latch-setting events that
+ * happen before they enter the loop.
*/
owner_pid = latch->owner_pid;
if (owner_pid == 0)
@@ -374,11 +354,23 @@ ResetLatch(volatile Latch *latch)
Assert(latch->owner_pid == MyProcPid);
latch->is_set = false;
+
+ /*
+ * XXX there really ought to be a memory barrier operation right here, to
+ * ensure that the write to is_set gets flushed to main memory before we
+ * examine any flag variables. Otherwise a concurrent SetLatch might
+ * falsely conclude that it needn't signal us, even though we have missed
+ * seeing some flag updates that SetLatch was supposed to inform us of.
+ * For the moment, callers must supply their own synchronization of flag
+ * variables (see latch.h).
+ */
}
/*
- * SetLatch uses SIGUSR1 to wake up the process waiting on the latch. Wake
- * up WaitLatch.
+ * SetLatch uses SIGUSR1 to wake up the process waiting on the latch.
+ *
+ * Wake up WaitLatch, if we're waiting. (We might not be, since SIGUSR1 is
+ * overloaded for multiple purposes.)
*/
void
latch_sigusr1_handler(void)