path: root/src/backend/executor/nodeGatherMerge.c
author    Robert Haas <rhaas@postgresql.org>  2018-02-02 09:00:59 -0500
committer Robert Haas <rhaas@postgresql.org>  2018-02-02 09:00:59 -0500
commit    9222c0d9ed9794d54fc3f5101498829eaec9e799 (patch)
tree      53a706d621a1edc1a9c792f690604a23e978dff0 /src/backend/executor/nodeGatherMerge.c
parent    a2a22057617dc84b500f85938947c125183f1289 (diff)
Add new function WaitForParallelWorkersToAttach.
Once this function has been called, we know that all workers have started and attached to their error queues -- so if any of them subsequently exit uncleanly, we'll be sure to throw an ERROR promptly. Otherwise, users of the ParallelContext machinery must be careful not to wait forever for a worker that has failed to start. Parallel query manages to work without needing this for reasons explained in new comments added by this patch, but it's a useful primitive for other parallel operations, such as the pending patch to make creating a btree index run in parallel.

Amit Kapila, revised by me. Additional review by Peter Geoghegan.

Discussion: http://postgr.es/m/CAA4eK1+e2MzyouF5bg=OtyhDSX+=Ao=3htN=T-r_6s3gCtKFiw@mail.gmail.com
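For illustration, here is a minimal sketch of how a user of the ParallelContext machinery might call the new function between launching workers and doing work that assumes they are running. This is not code from the patch: the library name "mylib", the worker entry point "myworker_main", and the wrapper function are hypothetical, and the three-argument CreateParallelContext() form shown here has gained arguments in some releases.

#include "postgres.h"

#include "access/parallel.h"
#include "access/xact.h"

/*
 * Hypothetical caller of the ParallelContext machinery; a sketch only,
 * assuming the three-argument CreateParallelContext() signature.
 */
void
run_parallel_operation(void)
{
	ParallelContext *pcxt;

	EnterParallelMode();

	/* Request two workers that will run myworker_main() from mylib. */
	pcxt = CreateParallelContext("mylib", "myworker_main", 2);
	InitializeParallelDSM(pcxt);
	LaunchParallelWorkers(pcxt);

	/*
	 * Wait until every launched worker has attached to its error queue.
	 * From this point on, a worker that exits uncleanly causes an ERROR to
	 * be thrown promptly, so the leader cannot block forever waiting for a
	 * worker that never started.
	 */
	WaitForParallelWorkersToAttach(pcxt);

	/* ... leader and workers cooperate on the operation here ... */

	WaitForParallelWorkersToFinish(pcxt);
	DestroyParallelContext(pcxt);
	ExitParallelMode();
}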
Diffstat (limited to 'src/backend/executor/nodeGatherMerge.c')
-rw-r--r--  src/backend/executor/nodeGatherMerge.c  |  9
1 file changed, 8 insertions(+), 1 deletion(-)
diff --git a/src/backend/executor/nodeGatherMerge.c b/src/backend/executor/nodeGatherMerge.c
index a3e34c69800..6858c91e8c2 100644
--- a/src/backend/executor/nodeGatherMerge.c
+++ b/src/backend/executor/nodeGatherMerge.c
@@ -710,7 +710,14 @@ gm_readnext_tuple(GatherMergeState *gm_state, int nreader, bool nowait,
/* Check for async events, particularly messages from workers. */
CHECK_FOR_INTERRUPTS();
- /* Attempt to read a tuple. */
+ /*
+ * Attempt to read a tuple.
+ *
+ * Note that TupleQueueReaderNext will just return NULL for a worker which
+ * fails to initialize. We'll treat that worker as having produced no
+ * tuples; WaitForParallelWorkersToFinish will error out when we get
+ * there.
+ */
reader = gm_state->reader[nreader - 1];
tup = TupleQueueReaderNext(reader, nowait, done);
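To make the new comment concrete, the leader-side behavior it describes looks roughly like the following. This is a simplified, hypothetical sketch rather than the actual gather_merge_readnext()/gm_readnext_tuple() logic; drain_worker_queues() and its loop structure are illustrative only.

#include "postgres.h"

#include "access/htup.h"
#include "executor/tqueue.h"
#include "nodes/execnodes.h"

/*
 * Hypothetical sketch of the pattern the comment above relies on; not the
 * actual nodeGatherMerge.c code.
 */
static void
drain_worker_queues(GatherMergeState *gm_state)
{
	int			i;

	for (i = 0; i < gm_state->nreaders; i++)
	{
		TupleQueueReader *reader = gm_state->reader[i];
		bool		done = false;

		while (!done)
		{
			/*
			 * A worker that failed to initialize never writes to its queue,
			 * so TupleQueueReaderNext() just reports it as done; the leader
			 * sees zero tuples from it here ...
			 */
			HeapTuple	tup = TupleQueueReaderNext(reader, false, &done);

			if (tup != NULL)
			{
				/* merge tup into the sorted output (elided) */
			}
		}
	}

	/*
	 * ... and the failure only surfaces later, when
	 * WaitForParallelWorkersToFinish() notices that the worker exited
	 * without reporting success and raises an ERROR.
	 */
}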