author    Tomas Vondra <tomas.vondra@postgresql.org>  2021-06-11 20:19:48 +0200
committer Tomas Vondra <tomas.vondra@postgresql.org>  2021-06-11 20:23:33 +0200
commit  b676ac443b6a83558d4701b2dd9491c0b37e17c4 (patch)
tree    2c2b6679178de4a7151f5781dcff723c6dcc85cc /src/backend/replication/logical/logicalfuncs.c
parent  96540f80f8334a3f0f4a13f0d42e4565d8fa9eb7 (diff)
Optimize creation of slots for FDW bulk inserts
Commit b663a41363 introduced bulk inserts for FDWs, but the handling of tuple slots turned out to be problematic for two reasons. Firstly, the slots were re-created for each individual batch. Secondly, all slots referenced the same tuple descriptor: with reasonably small batches this is not an issue, but with large batches it triggers O(N^2) behavior in the resource owner code.

These two issues work against each other: to reduce the number of times a slot has to be created/dropped, larger batches are needed, but the larger the batch, the more expensive the resource owner bookkeeping gets. For practical batch sizes (100 - 1000) this is not a big problem, as the benefits (latency savings) greatly exceed the resource owner costs. But for extremely large batches it may be much worse, possibly even slower than non-batching mode.

Fixed by initializing the tuple slots only once (and reusing them across batches) and by using a new tuple descriptor copy for each slot.

Discussion: https://postgr.es/m/ebbbcc7d-4286-8c28-0272-61b4753af761%40enterprisedb.com
Diffstat (limited to 'src/backend/replication/logical/logicalfuncs.c')
0 files changed, 0 insertions, 0 deletions