author     Tom Lane <tgl@sss.pgh.pa.us>  2025-03-06 11:54:27 -0500
committer  Tom Lane <tgl@sss.pgh.pa.us>  2025-03-06 11:54:31 -0500
commit     0f21db36d663fcf0789290902c84cc460ef0df8b (patch)
tree       44ae79040304471ac90251aab6183e7a961cb94e /src
parent     e33969abc1934cc7fd92d539e51a2b8ae46d6a33 (diff)
download   postgresql-0f21db36d663fcf0789290902c84cc460ef0df8b.tar.gz
           postgresql-0f21db36d663fcf0789290902c84cc460ef0df8b.zip
Fix some performance issues in GIN query startup.
If a GIN index search had a lot of search keys (for example, "jsonbcol ?| array[]" with tens of thousands of array elements), both ginFillScanKey() and startScanKey() took O(N^2) time. Worse, those loops were uncancelable for lack of CHECK_FOR_INTERRUPTS.

The problem in ginFillScanKey() is the brute-force search key de-duplication done in ginFillScanEntry(). The most expedient solution seems to be to just stop trying to de-duplicate once there are "too many" search keys. We could imagine working harder, say by using a sort-and-unique algorithm instead of brute force compare-all-the-keys. But it seems unlikely to be worth the trouble. There is no correctness issue here, since the code already allowed duplicate keys if any extra_data is present.

The problem in startScanKey() is the loop that attempts to identify the first non-required search key. In the submitted test case, that vainly tests all the key positions, and each iteration takes O(N) time. One part of that is that it's reinitializing the entryRes[] array from scratch each time, which is entirely unnecessary given that the triConsistentFn isn't supposed to scribble on its input. We can easily adjust the array contents incrementally instead. The other part of it is that the triConsistentFn may itself take O(N) time (and does in this test case). This is all extremely brute force: in simple cases with AND or OR semantics, we could know without any looping whatever that all or none of the keys are required. But GIN opclasses don't have any API for exposing that knowledge, so at least in the short run there is little to be done about that. Put in a CHECK_FOR_INTERRUPTS so that at least the loop is cancelable.

These two changes together resolve the primary complaint that the test query doesn't respond promptly to cancel interrupts. Also, while they don't completely eliminate the O(N^2) behavior, they do provide quite a nice speedup for mid-sized examples.

Bug: #18831
Reported-by: Niek <niek.brasa@hitachienergy.com>
Author: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://postgr.es/m/18831-e845ac44ebc5dd36@postgresql.org
Backpatch-through: 13
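To illustrate the startScanKey() part of the fix, here is a minimal standalone C sketch (not PostgreSQL source). It drops the patch's entryIndexes frequency-sort indirection, uses simplified tri-state names, and stands in an OR-semantics probe for the opclass triConsistentFn; all of those are assumptions made for illustration. The point is that the array is initialized once and then changed one slot per probe, which is only valid because the consistent function must not modify its input:

/*
 * Standalone sketch (not PostgreSQL source) of the incremental-update
 * idea: set up the tri-state array once, then flip a single slot per
 * iteration instead of rewriting the whole array each time.
 */
#include <stdio.h>

typedef enum { TRI_FALSE, TRI_TRUE, TRI_MAYBE } TriValue;

#define NKEYS 8

/* Stand-in for an opclass triConsistentFn: OR semantics over the keys. */
static TriValue
or_consistent(const TriValue *entryRes, int nkeys)
{
    for (int i = 0; i < nkeys; i++)
        if (entryRes[i] != TRI_FALSE)
            return TRI_MAYBE;
    return TRI_FALSE;
}

int
main(void)
{
    TriValue    entryRes[NKEYS];
    int         i;

    /* Initialize once: the first probe passes key 0 as FALSE, rest MAYBE */
    for (i = 1; i < NKEYS; i++)
        entryRes[i] = TRI_MAYBE;

    for (i = 0; i < NKEYS - 1; i++)
    {
        /*
         * Incremental update: keys 0..i are now FALSE, the rest are still
         * MAYBE.  O(1) per iteration, since or_consistent() never writes
         * to the array.
         */
        entryRes[i] = TRI_FALSE;

        if (or_consistent(entryRes, NKEYS) == TRI_FALSE)
            break;
    }

    /* i is now the last required key */
    printf("last required key index: %d\n", i);
    return 0;
}

Under OR semantics the probe never returns FALSE until every slot is FALSE, so the loop runs to the end and all keys come out required; that mirrors the worst case described in the message, where the loop "vainly tests all the key positions."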
Diffstat (limited to 'src')
-rw-r--r--  src/backend/access/gin/ginget.c   | 10
-rw-r--r--  src/backend/access/gin/ginscan.c  |  7
2 files changed, 12 insertions, 5 deletions
diff --git a/src/backend/access/gin/ginget.c b/src/backend/access/gin/ginget.c
index f3b19d280c3..4a56f19390d 100644
--- a/src/backend/access/gin/ginget.c
+++ b/src/backend/access/gin/ginget.c
@@ -558,16 +558,18 @@ startScanKey(GinState *ginstate, GinScanOpaque so, GinScanKey key)
         qsort_arg(entryIndexes, key->nentries, sizeof(int),
                   entryIndexByFrequencyCmp, key);
 
+        for (i = 1; i < key->nentries; i++)
+            key->entryRes[entryIndexes[i]] = GIN_MAYBE;
         for (i = 0; i < key->nentries - 1; i++)
         {
             /* Pass all entries <= i as FALSE, and the rest as MAYBE */
-            for (j = 0; j <= i; j++)
-                key->entryRes[entryIndexes[j]] = GIN_FALSE;
-            for (j = i + 1; j < key->nentries; j++)
-                key->entryRes[entryIndexes[j]] = GIN_MAYBE;
+            key->entryRes[entryIndexes[i]] = GIN_FALSE;
 
             if (key->triConsistentFn(key) == GIN_FALSE)
                 break;
+
+            /* Make this loop interruptible in case there are many keys */
+            CHECK_FOR_INTERRUPTS();
         }
         /* i is now the last required entry. */
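CHECK_FOR_INTERRUPTS() is PostgreSQL's standard cancellation point (declared in miscadmin.h): it cheaply tests a pending-interrupt flag and, at this known-safe moment, services query cancel or backend termination requests. As a rough standalone analogue only, assuming plain POSIX signals rather than the backend's machinery, the same cooperative pattern looks like this:

/*
 * Rough standalone analogue (not PostgreSQL code) of cooperative
 * cancellation: a signal handler records the request, and the
 * long-running loop polls the flag once per iteration.
 */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

static volatile sig_atomic_t interrupt_pending = 0;

static void
handle_sigint(int signo)
{
    (void) signo;
    interrupt_pending = 1;      /* just note it; act at a safe point */
}

int
main(void)
{
    signal(SIGINT, handle_sigint);

    for (long i = 0; i < 100000000L; i++)
    {
        /* ... expensive per-key work would go here ... */

        /* Cancellation point: cheap flag check, honored promptly */
        if (interrupt_pending)
        {
            fprintf(stderr, "canceled at iteration %ld\n", i);
            exit(1);
        }
    }
    puts("completed");
    return 0;
}

Without such a check, the loop responds to Ctrl-C only after it finishes, which is exactly the uncancelable behavior the bug report complained about.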
diff --git a/src/backend/access/gin/ginscan.c b/src/backend/access/gin/ginscan.c
index 63ded6301e2..84aa14594f8 100644
--- a/src/backend/access/gin/ginscan.c
+++ b/src/backend/access/gin/ginscan.c
@@ -68,8 +68,13 @@ ginFillScanEntry(GinScanOpaque so, OffsetNumber attnum,
  *
  * Entries with non-null extra_data are never considered identical, since
  * we can't know exactly what the opclass might be doing with that.
+ *
+ * Also, give up de-duplication once we have 100 entries.  That avoids
+ * spending O(N^2) time on probably-fruitless de-duplication of large
+ * search-key sets.  The threshold of 100 is arbitrary but matches
+ * predtest.c's threshold for what's a large array.
  */
-    if (extra_data == NULL)
+    if (extra_data == NULL && so->totalentries < 100)
     {
         for (i = 0; i < so->totalentries; i++)
         {
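The shape of the ginFillScanEntry() change: de-duplication stays a brute-force linear scan over the entries collected so far, but is skipped entirely once the set passes the cutoff, which bounds the quadratic term. A minimal standalone sketch of that pattern follows (not PostgreSQL source; the string keys, array sizes, and helper names are invented for illustration, with only the 100-entry cutoff carried over from the patch):

/*
 * Standalone sketch (not PostgreSQL source) of threshold-capped
 * de-duplication: probe for an existing equivalent entry only while
 * the set is small; past the cutoff, duplicates are simply tolerated.
 */
#include <stdio.h>
#include <string.h>

#define DEDUP_CUTOFF 100        /* matches the patch's arbitrary threshold */
#define MAX_ENTRIES  100000

static const char *entries[MAX_ENTRIES];
static int  totalentries = 0;

/* Add a search key, de-duplicating only while that stays cheap. */
static int
add_entry(const char *key)
{
    if (totalentries < DEDUP_CUTOFF)
    {
        /* brute-force scan, O(totalentries) per insertion */
        for (int i = 0; i < totalentries; i++)
            if (strcmp(entries[i], key) == 0)
                return i;       /* reuse the existing entry */
    }
    entries[totalentries] = key;
    return totalentries++;
}

int
main(void)
{
    add_entry("alpha");
    add_entry("beta");
    add_entry("alpha");         /* de-duplicated while under the cutoff */
    printf("%d entries\n", totalentries);   /* prints: 2 entries */
    return 0;
}

Past the cutoff, duplicates are simply appended, which mirrors the commit message's observation that there is no correctness issue: the code already allowed duplicate keys whenever extra_data is present.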