| author | Tom Lane <tgl@sss.pgh.pa.us> | 2017-08-30 17:21:08 -0400 |
|---|---|---|
| committer | Tom Lane <tgl@sss.pgh.pa.us> | 2017-08-30 17:21:08 -0400 |
| commit | 04e9678614ec64ad9043174ac99a25b1dc45233a | |
| tree | 84374124ceecab08afdf7bd19e12930506076e36 /src/backend/executor/nodeGather.c | |
| parent | 41b0dd987d44089dc48e9c70024277e253b396b7 | |
Code review for nodeGatherMerge.c.
Comment the fields of GatherMergeState, and organize them a bit more
sensibly. Comment GMReaderTupleBuffer more usefully too. Improve
assorted other comments that were obsolete or just not very good English.
Get rid of the use of a GMReaderTupleBuffer for the leader process;
that was confusing, since only the "done" field was used, and that in
a way that was redundant with need_to_scan_locally.
In gather_merge_init, avoid calling load_tuple_array for
already-known-exhausted workers. I'm not sure if there's a live bug there,
but the case is unlikely to be well tested due to timing considerations.
Remove some useless code, such as duplicating the tts_isempty test done by
TupIsNull.
Remove useless initialization of ps.qual, replacing that with an assertion
that we have no qual to check. (If we did, the code would fail to check
it.)
Avoid applying heap_copytuple to a null tuple. While that fails to crash,
it's confusing, and IMO it makes the code less legible, not more so.
Propagate a couple of these changes into nodeGather.c, as well.
Back-patch to v10, partly because of the possibility that the
gather_merge_init change is fixing a live bug, but mostly to keep
the branches in sync to ease future bug fixes.
Diffstat (limited to 'src/backend/executor/nodeGather.c')
-rw-r--r-- | src/backend/executor/nodeGather.c | 21 |
1 file changed, 12 insertions, 9 deletions
```diff
diff --git a/src/backend/executor/nodeGather.c b/src/backend/executor/nodeGather.c
index f9cf1b2f875..d93fbacdf9e 100644
--- a/src/backend/executor/nodeGather.c
+++ b/src/backend/executor/nodeGather.c
@@ -71,6 +71,8 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
 	gatherstate->ps.plan = (Plan *) node;
 	gatherstate->ps.state = estate;
 	gatherstate->ps.ExecProcNode = ExecGather;
+
+	gatherstate->initialized = false;
 	gatherstate->need_to_scan_locally = !node->single_copy;
 	gatherstate->tuples_needed = -1;
 
@@ -82,10 +84,10 @@ ExecInitGather(Gather *node, EState *estate, int eflags)
 	ExecAssignExprContext(estate, &gatherstate->ps);
 
 	/*
-	 * initialize child expressions
+	 * Gather doesn't support checking a qual (it's always more efficient to
+	 * do it in the child node).
 	 */
-	gatherstate->ps.qual =
-		ExecInitQual(node->plan.qual, (PlanState *) gatherstate);
+	Assert(!node->plan.qual);
 
 	/*
 	 * tuple table initialization
@@ -169,15 +171,16 @@ ExecGather(PlanState *pstate)
 		 */
 		pcxt = node->pei->pcxt;
 		LaunchParallelWorkers(pcxt);
+		/* We save # workers launched for the benefit of EXPLAIN */
 		node->nworkers_launched = pcxt->nworkers_launched;
+		node->nreaders = 0;
+		node->nextreader = 0;
 
 		/* Set up tuple queue readers to read the results. */
 		if (pcxt->nworkers_launched > 0)
 		{
-			node->nreaders = 0;
-			node->nextreader = 0;
-			node->reader =
-				palloc(pcxt->nworkers_launched * sizeof(TupleQueueReader *));
+			node->reader = palloc(pcxt->nworkers_launched *
+								  sizeof(TupleQueueReader *));
 
 			for (i = 0; i < pcxt->nworkers_launched; ++i)
 			{
@@ -316,8 +319,8 @@ gather_readnext(GatherState *gatherstate)
 		tup = TupleQueueReaderNext(reader, true, &readerdone);
 
 		/*
-		 * If this reader is done, remove it. If all readers are done, clean
-		 * up remaining worker state.
+		 * If this reader is done, remove it, and collapse the array. If all
+		 * readers are done, clean up remaining worker state.
 		 */
 		if (readerdone)
 		{
```