path: root/src/backend/executor/nodeGather.c
...
* pgindent run for 9.6 (Robert Haas, 2016-06-09)
* Tweak EXPLAIN for parallel query to show workers launched. (Robert Haas, 2016-04-15)
  The previous display was sort of confusing, because it didn't distinguish between the number of workers that we planned to launch and the number that actually got launched. This has already confused several people, so display both numbers and label them clearly.

  Julien Rouhaud, reviewed by me.
* Minor optimizations based on ParallelContext having nworkers_launched. (Robert Haas, 2016-03-04)
  Originally, we didn't have nworkers_launched, so code that used parallel contexts had to be prepared for the possibility that not all of the workers requested actually got launched. But now we can count on knowing the number of workers that were successfully launched, which can shave off a few cycles and simplify some code slightly.

  Amit Kapila, reviewed by Haribabu Kommi, per a suggestion from Peter Geoghegan.
* Fix spelling mistakes. (Robert Haas, 2016-01-14)
  Same patch submitted independently by David Rowley and Peter Geoghegan.
* Update copyright for 2016 (Bruce Momjian, 2016-01-02)
  Backpatch certain files through 9.1
* Read from the same worker repeatedly until it returns no tuple. (Robert Haas, 2015-12-23)
  The original coding read tuples from workers in round-robin fashion, but performance testing shows that it works much better to read enough to empty one queue before moving on to the next. I believe the reason for this is that, with the old approach, we could easily wake up a worker repeatedly to write only one new tuple into the shm_mq each time. With this approach, by the time the process gets scheduled, it has a decent chance of being able to fill the entire buffer in one go.

  Patch by me. Dilip Kumar helped with performance testing.
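  As a rough standalone illustration of the reading strategy this commit describes, the toy program below drains one worker's queue completely before advancing to the next. The queue contents and types are invented for the example; this is not the actual nodeGather.c code.

    #include <stdio.h>

    #define NWORKERS  3
    #define QUEUE_LEN 4

    /* Toy stand-in for a per-worker tuple queue; NULL-terminated. */
    static const char *queues[NWORKERS][QUEUE_LEN + 1] = {
        {"a1", "a2", "a3", NULL},
        {"b1", "b2", NULL},
        {"c1", "c2", "c3", "c4", NULL},
    };

    int
    main(void)
    {
        int pos[NWORKERS] = {0};

        /*
         * New behaviour: keep reading from the current worker until it
         * returns no tuple, then move on.  (The old code advanced to the
         * next worker after every single tuple.)
         */
        for (int w = 0; w < NWORKERS; w++)
        {
            const char *tup;

            while ((tup = queues[w][pos[w]]) != NULL)
            {
                pos[w]++;
                printf("consumed %s from worker %d\n", tup, w);
            }
        }
        return 0;
    }

  Draining a queue in bursts gives each worker a chance to refill its buffer while the leader is busy elsewhere, which is the effect the commit message attributes the speedup to.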
* Fix obsolete comment. (Robert Haas, 2015-11-30)
  It's amazing how fast things become obsolete these days.

  Amit Langote
* Avoid server crash when worker registration fails at execution time. (Robert Haas, 2015-11-20)
  The previous coding attempts to destroy the DSM in this case, but child nodes might have stored data there and still be holding onto pointers in this case. So don't do that. Also, free the reader array instead of leaking it.

  Extracted from two different patch versions both by Amit Kapila.
* Avoid aggregating worker instrumentation multiple times. (Robert Haas, 2015-11-18)
  Amit Kapila, per design ideas from me.
* Add missing "static" qualifier. (Tom Lane, 2015-11-10)
  Per buildfarm member pademelon.
* Fix rebasing mistake in nodeGather.c (Robert Haas, 2015-11-09)
  The patches committed as 6e71dd7ce9766582da453f493bc371d64977282f and 3a1f8611f2582df0a16bcd35caed2e1526387643 were developed in parallel but dependent on each other in a way that I failed to notice. This patch to fix the problem was prepared by Amit Kapila.
* Modify tqueue infrastructure to support transient record types. (Robert Haas, 2015-11-06)
  Commit 4a4e6893aa080b9094dadbe0e65f8a75fee41ac6, which introduced this mechanism, failed to account for the fact that the RECORD pseudo-type uses transient typmods that are only meaningful within a single backend. Transferring such tuples without modification between two cooperating backends does not work. This commit installs a system for passing the tuple descriptors over the same shm_mq being used to send the tuples themselves. The two sides might not assign the same transient typmod to any given tuple descriptor, so we must also substitute the appropriate receiver-side typmod for the one used by the sender. That adds some CPU overhead, but still seems better than being unable to pass records between cooperating parallel processes.

  Along the way, move the logic for handling multiple tuple queues from tqueue.c to nodeGather.c; tqueue.c now provides a TupleQueueReader, which reads from a single queue, rather than a TupleQueueFunnel, which potentially reads from multiple queues. This change was suggested previously as a way to make sure that nodeGather.c rather than tqueue.c had policy control over the order in which to read from queues, but it wasn't clear to me until now how good an idea it was. typmod mapping needs to be performed separately for each queue, and it is much simpler if the tqueue.c code handles that and leaves multiplexing multiple queues to higher layers of the stack.
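  A minimal standalone sketch of the receiver-side typmod substitution idea described above, using an invented mapping table and counter rather than the real tqueue.c structures:

    #include <stdio.h>

    #define MAX_TYPMODS 16

    /*
     * Toy model: when a tuple descriptor arrives over the queue, remember
     * the sender's transient typmod and map it to a locally assigned one;
     * tuples that follow are rewritten to carry the local typmod.  All
     * names and values here are invented for illustration.
     */
    typedef struct
    {
        int sender_typmod;
        int local_typmod;
    } TypmodMapEntry;

    static TypmodMapEntry map[MAX_TYPMODS];
    static int  nmap = 0;
    static int  next_local_typmod = 100;    /* pretend local assignment counter */

    static int
    register_descriptor(int sender_typmod)
    {
        for (int i = 0; i < nmap; i++)
            if (map[i].sender_typmod == sender_typmod)
                return map[i].local_typmod;

        /* New descriptor: assign a fresh local typmod (no overflow check in this toy). */
        map[nmap].sender_typmod = sender_typmod;
        map[nmap].local_typmod = next_local_typmod++;
        return map[nmap++].local_typmod;
    }

    int
    main(void)
    {
        /* The sender used typmods 7 and 9 for two different record layouts. */
        int a = register_descriptor(7);
        int b = register_descriptor(9);
        int c = register_descriptor(7);     /* same descriptor seen again */

        printf("sender 7 -> local %d\n", a);
        printf("sender 9 -> local %d\n", b);
        printf("sender 7 -> local %d (reused)\n", c);
        return 0;
    }

  Because each worker assigns its transient typmods independently, this kind of mapping has to be kept per queue, which is the reason the commit gives for moving queue multiplexing up into nodeGather.c.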
* Update parallel executor support to reuse the same DSM. (Robert Haas, 2015-10-30)
  Commit b0b0d84b3d663a148022e900ebfc164284a95f55 purported to make it possible to relaunch workers using the same parallel context, but it had an unpleasant race condition: we might reinitialize after the workers have sent their last control message but before they have detached the DSM, leading to crashes. Repair by introducing a new ParallelContext operation, ReinitializeParallelDSM.

  Adjust execParallel.c to use this new support, so that we can rescan a Gather node by relaunching workers but without needing to recreate the DSM.

  Amit Kapila, with some adjustments by me. Extracted from latest parallel sequential scan patch.
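  A toy model of the "reinitialize rather than recreate" approach, with plain heap allocation standing in for the DSM segment; this is not the ParallelContext API, just an illustration of the lifecycle:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct
    {
        char   *shared_area;        /* stands in for the DSM segment */
        size_t  size;
        int     nworkers_launched;
    } ToyParallelContext;

    static ToyParallelContext *
    create_context(size_t size)
    {
        ToyParallelContext *cxt = malloc(sizeof(ToyParallelContext));

        cxt->shared_area = calloc(1, size);
        cxt->size = size;
        cxt->nworkers_launched = 0;
        return cxt;
    }

    static void
    reinitialize_context(ToyParallelContext *cxt)
    {
        /* Reset shared state in place; the allocation itself survives. */
        memset(cxt->shared_area, 0, cxt->size);
        cxt->nworkers_launched = 0;
    }

    static void
    launch_workers(ToyParallelContext *cxt, int n)
    {
        cxt->nworkers_launched = n;     /* pretend they all started */
    }

    int
    main(void)
    {
        ToyParallelContext *cxt = create_context(1024);

        launch_workers(cxt, 2);
        printf("first scan: %d workers\n", cxt->nworkers_launched);

        /* Rescan: reuse the same shared area rather than recreating it. */
        reinitialize_context(cxt);
        launch_workers(cxt, 2);
        printf("rescan: %d workers, same area at %p\n",
               cxt->nworkers_launched, (void *) cxt->shared_area);

        free(cxt->shared_area);
        free(cxt);
        return 0;
    }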
* Make Gather node projection-capable. (Robert Haas, 2015-10-28)
  The original Gather code failed to mark a Gather node as not able to do projection, but it couldn't, even though it did initialize its projection info via ExecAssignProjectionInfo. There doesn't seem to be any good reason for this node not to have projection capability, so clean things up so that it does. Without this, plans using Gather nodes might need to carry extra Result nodes to do projection.
* Fix typos in comments. (Robert Haas, 2015-10-22)
  CharSyam
* Add header comments to execParallel.c and nodeGather.c. (Robert Haas, 2015-10-22)
  Patch by me, per a note from Simon Riggs. Reviewed by Amit Kapila and Amit Langote.
* Rewrite interaction of parallel mode with parallel executor support. (Robert Haas, 2015-10-16)
  In the previous coding, before returning from ExecutorRun, we'd shut down all parallel workers. This was dead wrong if ExecutorRun was called with a non-zero tuple count; it had the effect of truncating the query output. To fix, give ExecutePlan control over whether to enter parallel mode, and have it refuse to do so if the tuple count is non-zero. Rewrite the Gather logic so that it can cope with being called outside parallel mode.

  Commit 7aea8e4f2daa4b39ca9d1309a0c4aadb0f7ed81b is largely to blame for this problem, though this patch modifies some subsequently-committed code which relied on the guarantees it purported to make.
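  The decision described above can be shown with a small standalone sketch; the function name and parameters are invented and are not the real ExecutePlan interface:

    #include <stdbool.h>
    #include <stdio.h>

    /*
     * Only use parallel execution when the caller asks for the complete
     * result (a tuple count of 0 meaning "run to completion"), since
     * stopping early would mean shutting down workers before the query
     * output has been fully read.
     */
    static bool
    should_use_parallel(bool plan_is_parallel_safe, long requested_tuple_count)
    {
        return plan_is_parallel_safe && requested_tuple_count == 0;
    }

    int
    main(void)
    {
        printf("full result:   %s\n",
               should_use_parallel(true, 0) ? "parallel" : "serial");
        printf("first 10 rows: %s\n",
               should_use_parallel(true, 10) ? "parallel" : "serial");
        return 0;
    }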
* Add missing "static" specifier. (Tom Lane, 2015-10-03)
  Per buildfarm (pademelon, at least, doesn't like this).
* Add a Gather executor node. (Robert Haas, 2015-09-30)
  A Gather executor node runs any number of copies of a plan in an equal number of workers and merges all of the results into a single tuple stream. It can also run the plan itself, if the workers are unavailable or haven't started up yet. It is intended to work with the Partial Seq Scan node which will be added in future commits.

  It could also be used to implement parallel query of a different sort by itself, without help from Partial Seq Scan, if the single_copy mode is used. In that mode, a worker executes the plan, and the parallel leader does not, merely collecting the worker's results. So, a Gather node could be inserted into a plan to split the execution of that plan across two processes. Nested Gather nodes aren't currently supported, but we might want to add support for that in the future.

  There's nothing in the planner to actually generate Gather nodes yet, so it's not quite time to break out the champagne. But we're getting close.

  Amit Kapila. Some design suggestions were provided by me, and I also reviewed the patch. Single-copy mode, documentation, and other minor changes also by me.
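  As a standalone illustration of the behaviour described above (invented data and names, not the executor code), the toy program below merges per-worker result streams into one output and shows how single_copy changes whether the leader also runs the plan:

    #include <stdbool.h>
    #include <stdio.h>

    #define NWORKERS 2

    /* Toy stand-in for each worker's result stream; NULL-terminated. */
    static const char *worker_output[NWORKERS][3] = {
        {"w0-t1", "w0-t2", NULL},
        {"w1-t1", NULL},
    };

    static void
    gather(int nworkers, bool single_copy)
    {
        printf("nworkers = %d, single_copy = %s\n",
               nworkers, single_copy ? "true" : "false");

        /* Merge each worker's stream into one output stream. */
        for (int w = 0; w < nworkers && w < NWORKERS; w++)
            for (int i = 0; worker_output[w][i] != NULL; i++)
                printf("  emit %s\n", worker_output[w][i]);

        /*
         * In single_copy mode exactly one copy of the plan runs, in the
         * worker, and the leader only collects; otherwise the leader may
         * also run the plan and contribute its own tuples.
         */
        if (!single_copy)
            printf("  emit leader-t1 (leader ran the plan too)\n");
    }

    int
    main(void)
    {
        gather(NWORKERS, false);    /* normal mode: workers plus leader */
        gather(1, true);            /* single_copy: one worker, leader collects only */
        return 0;
    }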