author     David Rowley <drowley@postgresql.org>    2024-03-11 18:19:56 +1300
committer  David Rowley <drowley@postgresql.org>    2024-03-11 18:19:56 +1300
commit     e629846472255c7a636c453a522755ef05489c90
tree       8a9d71edf18dc45310c70c156232fc700a288c53 /src/backend/executor/nodeMemoize.c
parent     21e3a8bc3544a1cfcff85933bc9c0664af32a8b8
Fix incorrect accessing of pfree'd memory in Memoize
For pass-by-reference types, the code added in 0b053e78b, which aimed to
resolve a memory leak, was overly aggressive in resetting the per-tuple
memory context. This could result in pfree'd memory being accessed,
causing lookups to fail to find previously cached results in the hash
table.
What was happening was that prepare_probe_slot() switched to the
per-tuple memory context and called ExecEvalExpr(), which may have
required a memory allocation. Both MemoizeHash_hash() and
MemoizeHash_equal() were aggressively resetting the per-tuple context, so
after the hash value had been determined, the context would have been
reset before MemoizeHash_equal() was called. This could have resulted in
MemoizeHash_equal() looking at pfree'd memory.
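
To make that sequence concrete, here is a minimal, self-contained C sketch.
It is not PostgreSQL code: the Arena type and the arena_* helpers are invented
stand-ins for the per-tuple memory context. The probe key is built in the
arena, the hash step resets the arena too early, and the equality step then
reads memory that has already been freed.

    /*
     * Illustrative sketch only, not PostgreSQL code.  A toy "per-tuple"
     * arena stands in for the executor's per-tuple memory context.
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct Arena
    {
        char   *mem;            /* single toy allocation */
    } Arena;

    static char *
    arena_strdup(Arena *a, const char *s)
    {
        a->mem = malloc(strlen(s) + 1);     /* "palloc" in the per-tuple context */
        strcpy(a->mem, s);
        return a->mem;
    }

    static void
    arena_reset(Arena *a)
    {
        free(a->mem);           /* "ResetExprContext": this probe's key is released */
        a->mem = NULL;
    }

    int
    main(void)
    {
        Arena       per_tuple = {NULL};
        const char *cached_key = "42";      /* key already stored in the cache */

        /* prepare_probe_slot(): evaluate the parameter into the per-tuple arena */
        char       *probe_key = arena_strdup(&per_tuple, "42");

        /* MemoizeHash_hash(): hash the key, then (the bug) reset the arena */
        unsigned    hash = (unsigned char) probe_key[0];    /* stand-in for real hashing */

        arena_reset(&per_tuple);            /* too early: probe_key now dangles */

        /* MemoizeHash_equal(): reads freed memory and may miss the cached entry */
        printf("hash=%u equal=%d\n", hash, strcmp(probe_key, cached_key) == 0);
        return 0;
    }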
This is less likely to have caused issues on a production build, as some
other allocation would have had to reuse the pfree'd memory to overwrite
it; otherwise, the original contents would have been intact. However, it
clearly caused issues on MEMORY_CONTEXT_CHECKING builds.
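
To illustrate that last point with the same toy sketch as above (arena_reset()
and Arena are the invented names from that sketch, not PostgreSQL's real
memory-debugging code), an assert-enabled build behaves roughly like this
variant, which clobbers the bytes before releasing them so a stale probe key
can no longer compare equal by accident:

    static void
    arena_reset_checking(Arena *a)
    {
        /* hypothetical stand-in for MEMORY_CONTEXT_CHECKING: wipe, then free */
        if (a->mem)
            memset(a->mem, 0x7F, strlen(a->mem));
        free(a->mem);
        a->mem = NULL;
    }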
Author: Tender Wang, Andrei Lepikhov
Reported-by: Tender Wang (using SQLancer)
Reviewed-by: Andrei Lepikhov, Richard Guo, David Rowley
Discussion: https://postgr.es/m/CAHewXNnT6N6UJkya0z-jLFzVxcwGfeRQSfhiwA+NyLg-x8iGew@mail.gmail.com
Backpatch-through: 14, where Memoize was added
Diffstat (limited to 'src/backend/executor/nodeMemoize.c')
-rw-r--r--   src/backend/executor/nodeMemoize.c   15
1 file changed, 11 insertions(+), 4 deletions(-)
diff --git a/src/backend/executor/nodeMemoize.c b/src/backend/executor/nodeMemoize.c
index 18870f10e11..45157ffb9c7 100644
--- a/src/backend/executor/nodeMemoize.c
+++ b/src/backend/executor/nodeMemoize.c
@@ -13,7 +13,7 @@
  * Memoize nodes are intended to sit above parameterized nodes in the plan
  * tree in order to cache results from them. The intention here is that a
  * repeat scan with a parameter value that has already been seen by the node
- * can fetch tuples from the cache rather than having to re-scan the outer
+ * can fetch tuples from the cache rather than having to re-scan the inner
  * node all over again. The query planner may choose to make use of one of
  * these when it thinks rescans for previously seen values are likely enough
  * to warrant adding the additional node.
@@ -207,7 +207,6 @@ MemoizeHash_hash(struct memoize_hash *tb, const MemoizeKey *key)
 		}
 	}
 
-	ResetExprContext(econtext);
 	MemoryContextSwitchTo(oldcontext);
 	return murmurhash32(hashkey);
 }
@@ -265,7 +264,6 @@ MemoizeHash_equal(struct memoize_hash *tb, const MemoizeKey *key1,
 		}
 	}
 
-	ResetExprContext(econtext);
 	MemoryContextSwitchTo(oldcontext);
 	return match;
 }
@@ -273,7 +271,7 @@ MemoizeHash_equal(struct memoize_hash *tb, const MemoizeKey *key1,
 	{
 		econtext->ecxt_innertuple = tslot;
 		econtext->ecxt_outertuple = pslot;
-		return ExecQualAndReset(mstate->cache_eq_expr, econtext);
+		return ExecQual(mstate->cache_eq_expr, econtext);
 	}
 }
@@ -699,9 +697,18 @@ static TupleTableSlot *
 ExecMemoize(PlanState *pstate)
 {
 	MemoizeState *node = castNode(MemoizeState, pstate);
+	ExprContext *econtext = node->ss.ps.ps_ExprContext;
 	PlanState  *outerNode;
 	TupleTableSlot *slot;
 
+	CHECK_FOR_INTERRUPTS();
+
+	/*
+	 * Reset per-tuple memory context to free any expression evaluation
+	 * storage allocated in the previous tuple cycle.
+	 */
+	ResetExprContext(econtext);
+
 	switch (node->mstatus)
 	{
 		case MEMO_CACHE_LOOKUP:
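
Read against the toy sketch above, the fix removes the resets from the hashing
and equality callbacks and instead performs one reset at the top of
ExecMemoize(), before the probe key for the next tuple cycle is evaluated. A
rough sketch of the corrected ordering, again using the invented arena helpers
rather than real executor code:

    /*
     * Corrected ordering (sketch, reusing the toy Arena helpers from above):
     * the reset happens once per tuple cycle, before the probe key is
     * evaluated, so both the hash and the equality step see valid memory.
     */
    static int
    probe_cache(Arena *per_tuple, const char *param, const char *cached_key)
    {
        arena_reset(per_tuple);             /* ExecMemoize(): start of the cycle */

        char       *probe_key = arena_strdup(per_tuple, param);    /* prepare_probe_slot() */
        unsigned    hash = (unsigned char) probe_key[0];           /* MemoizeHash_hash() */

        (void) hash;                        /* bucket lookup omitted in this toy */
        return strcmp(probe_key, cached_key) == 0;     /* MemoizeHash_equal(): key still valid */
    }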