path: root/src/backend/utils/mmgr
* Ignore not-yet-defined Portals in pg_cursors view. (Tom Lane, 2024-10-06)

  pg_cursor() supposed that any Portal it finds in the hash table must have sourceText set up, but there's an edge case where that is not so. A newly-created Portal has sourceText = NULL, and that doesn't change until PortalDefineQuery is called. In SPI_cursor_open_internal, we perform GetCachedPlan between CreatePortal and PortalDefineQuery, and it's possible for user-defined code to execute during that planning and cause a fetch from the pg_cursors view, resulting in a null-pointer-dereference crash. (It looks like the same could happen in exec_bind_message, but I've not tried to provoke a failure there.)

  I considered trying to fix this by setting sourceText sooner, but there may be instances of this same calling pattern in extensions, and we couldn't be sure they'd get the memo promptly. It seems better to redefine pg_cursor as not showing Portals that have not yet had PortalDefineQuery called on them, which we can do by just skipping them if sourceText is still NULL. (Before a1c692358, pg_cursor would instead return a row with NULL in the statement column. We could revert to that behavior but it doesn't really seem like a better definition, especially since our documentation doesn't suggest that the column could be NULL.)

  Per report from PetSerAl. Back-patch to all supported branches.

  Discussion: https://postgr.es/m/CAKygsHTBXLXjwV43kpZa+Cs+XTiaeeJiZdL4cPBm9f4MTdw7wg@mail.gmail.com
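  The fix amounts to a guard of roughly this shape in pg_cursor()'s loop over the portal hash table (a minimal sketch, not the committed diff):

      /* Skip Portals that have not yet had PortalDefineQuery() called */
      if (portal->sourceText == NULL)
          continue;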
* Fix incorrect sentinel byte logic in GenerationRealloc() (David Rowley, 2024-07-06)

  This only affects MEMORY_CONTEXT_CHECKING builds.

  This fixes an off-by-one issue in GenerationRealloc(), in the fast-path code which tries to reuse the existing allocation if the existing chunk is >= the new requested size. The code there thought it was always OK to use the existing chunk, but when oldsize == size there isn't enough space to store the sentinel byte. If both sizes matched exactly, set_sentinel() would overwrite the first byte beyond the chunk, and subsequent GenerationRealloc() calls could then fail the Assert(chunk->requested_size < oldsize) check, which is trying to ensure the chunk is large enough to store the sentinel.

  The same issue does not exist in aset.c, as the sentinel checking code there only adds a sentinel byte if there's enough space in the chunk.

  Reported-by: Alexander Lakhin <exclusion@gmail.com>
  Discussion: https://postgr.es/m/49275921-7b39-41af-5eb8-97b50ce3312e@gmail.com
  Backpatch-through: 16, where the problem was introduced by 0e480385e
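  The corrected fast-path test looks roughly like this (a sketch of the condition's shape, not a quote of the committed diff):

      #ifdef MEMORY_CONTEXT_CHECKING
          /* reuse the chunk only if a byte is left over for the sentinel */
          if (oldsize > size)
      #else
          if (oldsize >= size)
      #endif
          {
              /* fast path: repalloc in place */
          }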
* Preserve CurrentMemoryContext across Start/CommitTransactionCommand. (Tom Lane, 2024-07-01)

  Up to now, committing a transaction has caused CurrentMemoryContext to get set to TopMemoryContext. Most callers did not pay any particular heed to this, which is problematic because TopMemoryContext is a long-lived context that never gets reset. If the caller assumes it can leak memory because it's running in a limited-lifespan context, that behavior translates into a session-lifespan memory leak.

  The first-reported instance of this involved ProcessIncomingNotify, which is called from the main processing loop that normally runs in MessageContext. That outer-loop code assumes that whatever it allocates will be cleaned up when we're done processing the current client message --- but if we service a notify interrupt, then whatever gets allocated before the next switch to MessageContext will be permanently leaked in TopMemoryContext. sinval catchup interrupts have a similar problem, and I strongly suspect that some places in logical replication do too.

  To fix this in a generic way, let's redefine the behavior as "CommitTransactionCommand restores the memory context that was current at entry to StartTransactionCommand". This clearly fixes the issue for the notify and sinval cases, and it seems to match the mental model that's in use in the logical replication code, to the extent that anybody thought about it there at all.

  For consistency, likewise make subtransaction exit restore the context that was current at subtransaction start (rather than always selecting the CurTransactionContext of the parent transaction level). This case has less risk of resulting in a permanent leak than the outer-level behavior has, but it would not meet the principle of least surprise for some CommitTransactionCommand calls to restore the previous context while others don't.

  While we're here, also change xact.c so that we reset TopTransactionContext at transaction exit and then re-use it in later transactions, rather than dropping and recreating it in each cycle. This probably doesn't save a lot given the context recycling mechanism in aset.c, but it should save a little bit. Per suggestion from David Rowley. (Parenthetically, the text in src/backend/utils/mmgr/README implies that this is how I'd planned to implement it as far back as commit 1aebc3618 --- but the code actually added in that commit just drops and recreates it each time.)

  This commit also cleans up a few places outside xact.c that were needlessly making TopMemoryContext current, and thus risking more leaks of the same kind. I don't think any of them represent significant leak risks today, but let's deal with them while the issue is top-of-mind.

  Per bug #18512 from wizardbrony. Commit to HEAD only, as this change seems to have some risk of breaking things for some callers. We'll apply a narrower fix for the known-broken cases in the back branches.

  Discussion: https://postgr.es/m/3478884.1718656625@sss.pgh.pa.us
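  The new contract can be illustrated like this (a sketch of the caller-visible behavior, not code from the commit):

      MemoryContext before = CurrentMemoryContext;   /* e.g. MessageContext */

      StartTransactionCommand();
      /* ... allocations here go into transaction-lifespan contexts ... */
      CommitTransactionCommand();

      /* previously CurrentMemoryContext would now be TopMemoryContext */
      Assert(CurrentMemoryContext == before);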
* Fix typos and duplicate words (Daniel Gustafsson, 2024-04-18)

  This fixes various typos, duplicated words, and tiny bits of whitespace mainly in code comments but also in docs.

  Author: Daniel Gustafsson <daniel@yesql.se>
  Author: Heikki Linnakangas <hlinnaka@iki.fi>
  Author: Alexander Lakhin <exclusion@gmail.com>
  Author: David Rowley <dgrowleyml@gmail.com>
  Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
  Discussion: https://postgr.es/m/3F577953-A29E-4722-98AD-2DA9EFF2CBB8@yesql.se
* Update mmgr's README to mention BumpContext (David Rowley, 2024-04-17)

  Oversight in 29f6a959c. In passing, since we now have 4 memory context types to choose from, provide a brief overview of the specialities of each memory context type.

  Reported-by: Amul Sul
  Author: Amul Sul, David Rowley
  Discussion: https://postgr.es/m/CAAJ_b94U2s9nHh--DEK=sPEZUQ+x7vQJ7529fF8UAH97QJ9NXg@mail.gmail.com
* Push dedicated BumpBlocks to the tail of the blocks list (David Rowley, 2024-04-17)

  BumpContext uses the head block of its 'blocks' field as the current block to allocate new chunks from. When we receive an allocation request larger than allocChunkLimit, we place these chunks on a new dedicated block and, until now, we pushed the block onto the *head* of the 'blocks' list. This behavior caused the previous bump block to no longer be available for new normal-sized (non-large) allocations and would result in blocks only being partially filled if a large allocation request arrived before the block became full.

  Here we adjust the code to push these dedicated blocks onto the *tail* of the blocks list so that the head block remains intact and available to be used by normal allocation request sizes until it becomes full; see the sketch below.

  In passing, make the elog(ERROR) calls for the unsupported callbacks consistent. Likewise for the header comments for those functions.

  Discussion: https://postgr.es/m/CAApHDvp9___r-ayJj0nZ6GD3MeCGwGZ0_6ZptWpwj+zqHtmwCw@mail.gmail.com
  Discussion: https://postgr.es/m/CAApHDvqerXpzUnuDQfUEi3DZA+9=Ud9WSt3ruxN5b6PcOosx2g@mail.gmail.com
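  The essence of the change, sketched (field names per bump.c; a single line is the whole behavioral fix):

      /* dedicated block for an oversized chunk: keep the head block current */
      dlist_push_tail(&set->blocks, &block->node);   /* was dlist_push_head() */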
* Remove unused BumpBlockIsValid macro (David Rowley, 2024-04-10)

  The bump allocator was recently added in 29f6a959c. Our other allocators have a similar macro to this, but seemingly the version of the macro for those allocators is only used in places where the chunk header is decoded. Since the bump allocator has no chunk header, none of those functions exist for bump, and therefore the macro is unused. Remove it.

  Reported-by: Peter Eisentraut
  Discussion: https://postgr.es/m/5f724fb2-96e1-4f36-b65b-47b337ad432e@eisentraut.org
* Fix incorrect KeeperBlock macro in bump.c (David Rowley, 2024-04-08)

  The macro was missing a MAXALIGN around the sizeof(BumpContext), which would cause problems detecting the keeper block on 32-bit systems that have a MAXALIGN value of 8.

  Thank you to Andres Freund, Tomas Vondra and Tom Lane for investigating and testing.

  Reported-by: Melanie Plageman, Tomas Vondra
  Discussion: https://postgr.es/m/CAAKRu_Y6dZjiJEZghgNZp0Gjar1JVq-CH7XGDqExDVHnPgDjuw@mail.gmail.com
  Discussion: https://postgr.es/m/a4a10b89-6ba8-4abd-b449-019aafff04fc@enterprisedb.com
* Introduce a bump memory allocator (David Rowley, 2024-04-08)

  This introduces a bump MemoryContext type. The bump context is best suited for short-lived memory contexts which require only allocations of memory and never a pfree or repalloc, which are unsupported.

  Memory palloc'd into a bump context has no chunk header. This makes bump a useful context type when lots of small allocations need to be done without any need to pfree those allocations. Allocation sizes are rounded up to the next MAXALIGN boundary, so with this and no chunk header, allocations are very compact indeed.

  Allocations are also very fast, as bump does not check any freelists to try to make use of previously free'd chunks. It just checks if there is enough room on the current block; if so, it bumps the freeptr beyond this chunk and returns the value that the freeptr was previously pointing to. Simple and fast. A new block is malloc'd when there's not enough space in the current block.

  Code using the bump allocator must take care never to call any functions which could try to call realloc() (or variants), pfree(), GetMemoryChunkContext() or GetMemoryChunkSpace() on a bump allocated chunk. Due to the lack of chunk headers, these operations are unsupported. To increase the chances of catching such issues, when compiled with MEMORY_CONTEXT_CHECKING, bump allocated chunks are given a header and any attempt to perform an unsupported operation will result in an ERROR. Without MEMORY_CONTEXT_CHECKING, code attempting an unsupported operation could result in a segfault.

  A follow-on commit will implement the first user of bump.

  Author: David Rowley
  Reviewed-by: Nathan Bossart
  Reviewed-by: Matthias van de Meent
  Reviewed-by: Tomas Vondra
  Reviewed-by: John Naylor
  Discussion: https://postgr.es/m/CAApHDvqGSpCU95TmM=Bp=6xjL_nLys4zdZOpfNyWBk97Xrdj2w@mail.gmail.com
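  A usage sketch (assuming BumpContextCreate() takes the same parent/name/size parameters as GenerationContextCreate(); the context name is illustrative):

      MemoryContext bump = BumpContextCreate(CurrentMemoryContext,
                                             "short-lived bump context",
                                             ALLOCSET_DEFAULT_SIZES);
      MemoryContext old = MemoryContextSwitchTo(bump);

      for (int i = 0; i < 10000; i++)
      {
          char *c = palloc(24);   /* no chunk header; rounded to MAXALIGN */

          c[0] = '\0';            /* use c, but never pfree() or repalloc() it */
      }

      MemoryContextSwitchTo(old);
      MemoryContextReset(bump);   /* the only way memory is given back */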
* Enlarge bit-space for MemoryContextMethodID (David Rowley, 2024-04-07)

  Reserve 4 bits for MemoryContextMethodID rather than 3. 3 bits did technically allow a maximum of 8 memory context types; however, we've opted to reserve some bit patterns, which left us with only 4 slots, all of which were used. Here we add another bit, which frees up 8 slots for future memory context types.

  In passing, adjust the enum names in MemoryContextMethodID to make it more clear which ones can be used and which ones are reserved.

  Author: Matthias van de Meent, David Rowley
  Discussion: https://postgr.es/m/CAApHDvqGSpCU95TmM=Bp=6xjL_nLys4zdZOpfNyWBk97Xrdj2w@mail.gmail.com
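  The resulting chunk-header arithmetic, sketched (constant names as I read them from memutils_memorychunk.h; treat them as illustrative):

      #define MEMORY_CONTEXT_METHODID_BITS 4
      #define MEMORY_CONTEXT_METHODID_MASK \
          ((((uint64) 1) << MEMORY_CONTEXT_METHODID_BITS) - 1)

      /* 4 bits => 16 patterns; some reserved, 8 now free for new types */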
* Allow specifying initial and maximum segment sizes for DSA. (Masahiko Sawada, 2024-03-27)

  Previously, the DSA segment size always started with 1MB and grew up to DSA_MAX_SEGMENT_SIZE. It was inconvenient in certain scenarios, such as when the caller desired a soft constraint on the total DSA segment size, limiting it to less than 1MB.

  This commit introduces the capability to specify the initial and maximum DSA segment sizes when creating a DSA area, providing more flexibility and control over memory usage.

  Reviewed-by: John Naylor, Tomas Vondra
  Discussion: https://postgr.es/m/CAD21AoAYGGC1ePjVX0H%2Bpp9rH%3D9vuPK19nNOiu12NprdV5TVJA%40mail.gmail.com
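  A creation sketch, assuming the extended interface this commit adds (dsa_create_ext(); name and argument order per my reading, sizes and tranche ID illustrative):

      dsa_area   *area = dsa_create_ext(my_tranche_id,
                                        256 * 1024,          /* initial segment size */
                                        16 * 1024 * 1024);   /* maximum segment size */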
* Make the order of the header file includes consistent (Peter Eisentraut, 2024-03-13)

  Similar to commit 7e735035f20.

  Author: Richard Guo <guofenglinux@gmail.com>
  Reviewed-by: Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>
  Discussion: https://www.postgresql.org/message-id/flat/CAMbWs4-WhpCFMbXCjtJ%2BFzmjfPrp7Hw1pk4p%2BZpU95Kh3ofZ1A%40mail.gmail.com
* Avoid recursion in MemoryContext functions (Alexander Korotkov, 2024-03-08)

  You might run out of stack space with recursion, which is not nice in functions that might be used e.g. at cleanup after transaction abort. MemoryContext contains pointers to its parent and siblings, so we can traverse a tree of contexts iteratively, without using the stack. Refactor the functions to do that.

  MemoryContextStats() still recurses, but it now has a limit to how deep it recurses. Once the limit is reached, it prints just a summary of the rest of the hierarchy, similar to how it summarizes contexts with lots of children. That seems good anyway, because a context dump with hundreds of nested contexts isn't very readable.

  Report by Egor Chindyaskin and Alexander Lakhin.

  Discussion: https://postgr.es/m/1672760457.940462079%40f306.i.mail.ru
  Author: Heikki Linnakangas
  Reviewed-by: Robert Haas, Andres Freund, Alexander Korotkov, Tom Lane
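  The iterative walk relies on the parent/firstchild/nextchild links in MemoryContextData; a minimal sketch (visit() is a hypothetical callback):

      MemoryContext curr = top;

      while (curr != NULL)
      {
          visit(curr);
          if (curr->firstchild != NULL)
              curr = curr->firstchild;            /* descend */
          else
          {
              while (curr != top && curr->nextchild == NULL)
                  curr = curr->parent;            /* climb back up */
              curr = (curr == top) ? NULL : curr->nextchild;  /* next sibling */
          }
      }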
* Add template for adaptive radix tree (John Naylor, 2024-03-07)

  This implements a radix tree data structure based on the design in "The Adaptive Radix Tree: ARTful Indexing for Main-Memory Databases" by Viktor Leis, Alfons Kemper, and Thomas Neumann, 2013. The main technique that makes it adaptive is using several different node types, each with a different capacity of elements, and a different algorithm for accessing them. The nodes start small and grow/shrink as needed.

  The main advantage over hash tables is efficient sorted iteration and better memory locality when successive keys are lexicographically close together. The implementation currently assumes 64-bit integer keys, and traversing the tree is in general slower than a linear probing hash table, so this is not a general-purpose associative array.

  The paper describes two other techniques not implemented here, namely "path compression" and "lazy expansion". These can further reduce memory usage and speed up traversal, but the former would add significant complexity and the latter requires storing the full key with the value. We do trivially compress the path when leading bytes of the key are zeros, however.

  For value storage, we use "combined pointer/value slots", as recommended in the paper. Values of size equal to or smaller than the platform's pointer type are stored in the array of child pointers in the last level node, while larger values are each stored in a separate allocation. This is for now fixed at compile time, but it would be fairly trivial to allow determining at runtime how variable-length values are stored.

  One innovation in our implementation compared to the ART paper is decoupling the notion of node "size class" from "kind". The size classes within a given node kind have the same underlying type, but a variable capacity for children, so we can introduce additional node sizes with little additional code.

  To enable different use cases to specialize for different value types and for shared/local memory, we use macro-templatized code generation in the same manner as simplehash.h and sort_template.h. Future commits will use this infrastructure for storing TIDs.

  Patch by Masahiko Sawada and John Naylor, but a substantial amount of credit is due to Andres Freund, whose proof-of-concept was a valuable source of coding idioms and awareness of performance pitfalls, and who reviewed earlier versions.

  Discussion: https://postgr.es/m/CAD21AoAfOZvmfR0j8VmZorZjL7RhTiQdVttNuC4W-Shdc2a-AA%40mail.gmail.com
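  Instantiation follows the simplehash.h pattern; a sketch (macro names per src/include/lib/radixtree.h; the value type is illustrative):

      #define RT_PREFIX     rt
      #define RT_SCOPE      static
      #define RT_DECLARE
      #define RT_DEFINE
      #define RT_VALUE_TYPE MyValue
      #include "lib/radixtree.h"

      /* generates rt_create(), rt_set(), rt_find(), iteration support, etc. */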
* Remove unused #include's from backend .c files (Peter Eisentraut, 2024-03-04)

  as determined by include-what-you-use (IWYU)

  While IWYU also suggests to *add* a bunch of #include's (which is its main purpose), this patch does not do that. In some cases, a more specific #include replaces another less specific one.

  Some manual adjustments of the automatic result:

  - IWYU currently doesn't know about includes that provide global variable declarations (like -Wmissing-variable-declarations), so those includes are being kept manually.

  - All includes for port(ability) headers are being kept for now, to play it safe.

  - No changes of catalog/pg_foo.h to catalog/pg_foo_d.h, to keep the patch from exploding in size.

  Note that this patch touches just *.c files, so nothing declared in header files changes in hidden ways.

  As a small example, in src/backend/access/transam/rmgr.c, some IWYU pragma annotations are added to handle a special case there.

  Discussion: https://www.postgresql.org/message-id/flat/af837490-6b2f-46df-ba05-37ea6a6653fc%40eisentraut.org
* Optimize GenerationAlloc() and SlabAlloc() (David Rowley, 2024-03-04)

  In a similar effort to 413c18401, separate out the hot and cold paths in GenerationAlloc() and SlabAlloc() to avoid having to set up the stack frame for the hot path.

  This additionally adjusts how we use the GenerationContext's freeblock. The freeblock, when set, is now always empty, and we only switch to using it when the current allocation request finds the current block does not have enough space and the freeblock is large enough to accommodate the allocation; see the sketch below.

  This commit also adjusts GenerationFree() so that if we pfree the final allocation in the current generation block, we now mark that block as empty and keep it as the current block. Previously we free'd that block and set the current block to NULL. Doing that meant we needed a special case in GenerationAlloc to check if GenerationContext.block was NULL. So this both reduces free/malloc calls and reduces the work done in GenerationAlloc().

  In passing, improve some comments in aset.c.

  Discussion: https://postgr.es/m/CAApHDvpHVSJqqb4B4OZLixr=CotKq-eKkbwZqvZVo_biYvUvQA@mail.gmail.com
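  The freeblock handoff, sketched (helper and field names as in generation.c to the best of my reading; simplified):

      /* current block is full, but the empty freeblock can fit the request */
      if (set->freeblock != NULL &&
          GenerationBlockFreeBytes(set->freeblock) >= chunk_size)
      {
          block = set->freeblock;
          set->freeblock = NULL;   /* no longer empty once we allocate here */
      }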
* Fix integer underflow in shared memory debugging (Daniel Gustafsson, 2024-02-29)

  dsa_dump would print a large negative number instead of zero for segment bin 0. Fix by explicitly checking for underflow and adding a special case for bin 0.

  Backpatch to all supported versions.

  Author: Ian Ilyasov <ianilyasov@outlook.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com>
  Discussion: https://postgr.es/m/GV1P251MB1004E0D09D117D3CECF9256ECD502@GV1P251MB1004.EURP251.PROD.OUTLOOK.COM
  Backpatch-through: v12
* Refactor AllocSetAlloc(), separating hot and cold paths (David Rowley, 2024-02-28)

  We deem allocating from a free list, or from a block which already contains enough space, to be the common code paths, and we want to optimize for those. Having to allocate a new block, either a normal block or a dedicated one for a large allocation, we deem to be less common, so we class that as "cold". Both cold paths require a malloc, so they are going to be slower as a result of that regardless.

  The main motivation here is to remove the calls to malloc() in the hot path; because of this, the compiler is now free to not bother setting up the stack frame in AllocSetAlloc(), thus making the hot path much cheaper. A sketch of the resulting shape follows.

  Author: Andres Freund
  Reviewed-by: David Rowley
  Discussion: https://postgr.es/m/20210719195950.gavgs6ujzmjfaiig@alap3.anarazel.de
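  A structural sketch (the helper name is illustrative; pg_noinline keeps the malloc-requiring path out of the hot function entirely):

      static pg_noinline void *AllocSetAllocFromNewBlock(MemoryContext context,
                                                         Size size, int flags);

      void *
      AllocSetAlloc(MemoryContext context, Size size, int flags)
      {
          /* hot path: satisfy the request from a freelist entry or from
           * free space on the current block; no stack frame needed ... */

          /* cold path: needs a malloc(); deliberately kept out of line */
          return AllocSetAllocFromNewBlock(context, size, flags);
      }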
* Adjust memory allocation functions to allow sibling calls (David Rowley, 2024-02-27)

  Many modern compilers are able to optimize function calls to functions where the parameters of the called function match a leading subset of the calling function's parameters. If there are no instructions in the calling function after the function is called, then the compiler is free to avoid any stack frame setup and implement the function call as a "jmp" rather than a "call". This is called sibling call optimization.

  Here we adjust the memory allocation functions in mcxt.c to allow this optimization; a sketch follows below. This requires moving some responsibility into the memory context implementations themselves. It's now the responsibility of the MemoryContext to check for malloc failures. This is good both because it allows the sibling call optimization and because most small and medium allocations won't call malloc and just allocate memory to an existing block. That can't fail, so checking for NULLs in that case isn't required.

  Also, traditionally it's been the responsibility of palloc and the other allocation functions in mcxt.c to check for invalid allocation size requests. Here we also move the responsibility of checking that into the MemoryContext. This isn't to allow the sibling call optimization, but more because most of our allocators handle large allocations separately and we can just add the size check when doing large allocations. We no longer check this for non-large allocations at all.

  To make checking the allocation request sizes and ERROR handling easier, add some helper functions to mcxt.c for the allocators to use.

  Author: Andres Freund
  Reviewed-by: David Rowley
  Discussion: https://postgr.es/m/20210719195950.gavgs6ujzmjfaiig@alap3.anarazel.de
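  Sketched on palloc() (simplified; the real function also asserts context validity): with no instructions after the method call, the compiler may emit a "jmp" instead of a "call":

      void *
      palloc(Size size)
      {
          MemoryContext context = CurrentMemoryContext;

          context->isReset = false;

          /* tail position: eligible for sibling call optimization */
          return context->methods->alloc(context, size, 0);
      }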
* Add EXPLAIN (MEMORY) to report planner memory consumption (Alvaro Herrera, 2024-01-29)

  This adds a new "Memory:" line under the "Planning:" group (which currently only has "Buffers:") when the MEMORY option is specified.

  In order to make the reporting reasonably accurate, we create a separate memory context for planner activities, to be used only when this option is given. The total amount of memory allocated by that context is reported as "allocated"; we subtract memory in the context's freelists from that and report that result as "used". We use MemoryContextStatsInternal() to obtain the quantities.

  The code structure to show buffer usage during planning was not in amazing shape, so I (Álvaro) modified the patch a bit to clean that up in passing.

  Author: Ashutosh Bapat
  Reviewed-by: David Rowley, Andrey Lepikhov, Jian He, Andy Fan
  Discussion: https://www.postgresql.org/message-id/CAExHW5sZA=5LJ_ZPpRO-w09ck8z9p7eaYAqq3Ks9GDfhrxeWBw@mail.gmail.com
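  The reported numbers boil down to this arithmetic (a sketch using the MemoryContextCounters fields from utils/memutils.h; how the counters get filled is elided):

      MemoryContextCounters counters;

      memset(&counters, 0, sizeof(counters));
      /* a MemoryContextStatsInternal()-style walk fills in the totals */

      /* reported as: Memory: used=... allocated=... */
      Size    allocated = counters.totalspace;
      Size    used = counters.totalspace - counters.freespace;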
* Update comment, generation mem contexts have a "keeper" block (Heikki Linnakangas, 2024-01-26)

  The keeper block was introduced in commit 1b0d9aa4f7, but it forgot to update this comment.
* Update copyright for 2024 (Bruce Momjian, 2024-01-03)

  Reported-by: Michael Paquier
  Discussion: https://postgr.es/m/ZZKTDPxBBMt3C0J9@paquier.xyz
  Backpatch-through: 12
* Simplify newNode() by removing special cases (Heikki Linnakangas, 2023-12-19)

  - Remove MemoryContextAllocZeroAligned(). It was supposed to be a faster version of MemoryContextAllocZero(), but modern compilers turn the MemSetLoop() into a call to memset() anyway, making it more or less identical to MemoryContextAllocZero(). That was the only user of MemSetTest and MemSetLoop, so remove those too, as well as palloc0fast().

  - Convert newNode() to a static inline function; see the sketch below. When this was originally written, it was written as a macro because testing showed that gcc didn't inline the size check as we intended. Modern compiler versions do, and now that it just calls palloc0() there is no size-check to inline anyway.

  One nice effect is that palloc0() takes one less argument than MemoryContextAllocZeroAligned(), which saves a few instructions in the callers of newNode().

  Reviewed-by: Peter Eisentraut, Tom Lane, John Naylor, Thomas Munro
  Discussion: https://www.postgresql.org/message-id/b51f1fa7-7e6a-4ecc-936d-90a8a1659e7c@iki.fi
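  The simplified newNode(), sketched (close to the new static inline in nodes.h):

      static inline Node *
      newNode(size_t size, NodeTag tag)
      {
          Node       *result;

          Assert(size >= sizeof(Node));   /* need the tag, at least */
          result = (Node *) palloc0(size);
          result->type = tag;

          return result;
      }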
* Fix dsa.c with different resource owners. (Heikki Linnakangas, 2023-11-15)

  The comments in dsa.c suggested that areas were owned by resource owners, but it was not in fact tracked explicitly. The DSM attachments held by the dsa were owned by resource owners, but not the area itself. That led to confusion if you used one resource owner to attach or create the area, but then switched to a different resource owner before allocating or even just accessing the allocations in the area with dsa_get_address(). The additional DSM segments associated with the area would get owned by a different resource owner than the initial segment.

  To fix, add an explicit 'resowner' field to dsa_area. It replaces the 'mapping_pinned' flag; resowner == NULL now indicates that the mapping is pinned.

  This is arguably a bug fix, but I'm not backpatching because it doesn't seem to be a live bug in the back branches. In 'master', it is a bug because commit b8bff07daa made ResourceOwners more strict so that you are no longer allowed to remember new resources in a ResourceOwner after you have started to release it. Merely accessing a dsa pointer might need to attach a new DSM segment, and before this commit it was temporarily remembered in the current owner for a very brief period even if the DSA was pinned. And that could happen in AtEOXact_PgStat(), which is called after the owner is already released.

  Reported-by: Alexander Lakhin
  Reviewed-by: Alexander Lakhin, Thomas Munro, Andres Freund
  Discussion: https://www.postgresql.org/message-id/11b70743-c5f3-3910-8e5b-dd6c115ff829%40gmail.com
* Shrink memory contexts struct sizes (David Rowley, 2023-07-17)

  Here we reduce the block size fields in AllocSetContext, GenerationContext and SlabContext from Size down to uint32. Ever since c6e0fe1f2, blocks for non-dedicated palloc chunks can no longer be larger than 1GB, so there's no need to store the various block size fields as 64-bit values. 32 bits are enough to store 2^30.

  Here we also further reduce the memory context struct sizes by getting rid of the 'keeper' field which stores a pointer to the context's keeper block. All the context types which have this field always allocate the keeper block in the same allocation as the memory context itself, so the keeper block always comes right at the end of the context struct. Add some macros to calculate that address rather than storing it in the context.

  Overall, in AllocSetContext and GenerationContext, this saves 20 bytes on 64-bit builds, which for ALLOCSET_SMALL_SIZES can sometimes mean the difference between having to allocate a 2nd block and storing all the required allocations on the keeper block alone. Such contexts are used in relcache to store cache entries for indexes, of which there can be a large number in a single backend.

  Author: Melih Mutlu
  Reviewed-by: David Rowley
  Discussion: https://postgr.es/m/CAGPVpCSOW3uJ1QJmsMR9_oE3X7fG_z4q0AoU4R_w+2RzvroPFg@mail.gmail.com
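  The keeper-block macros, sketched for aset.c (the other context types follow the same pattern; exact spelling per my reading of the commit):

      /* the keeper block sits immediately after the context struct itself */
      #define KeeperBlock(set) \
          ((AllocBlock) ((char *) (set) + MAXALIGN(sizeof(AllocSetContext))))
      #define IsKeeperBlock(set, block) ((block) == KeeperBlock(set))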
* Re-bin segment when memory pages are freed. (Thomas Munro, 2023-07-04)

  It's OK to be lazy about re-binning memory segments when allocating, because that can only leave segments in a bin that's too high. We'll search higher bins if necessary while allocating next time, and also eventually re-bin, so no memory can become unreachable that way.

  However, when freeing memory, the largest contiguous range of free pages might go up, so we should re-bin eagerly to make sure we don't leave the segment in a bin that is too low for get_best_segment() to find.

  The re-binning code is moved into a function of its own, so it can be called whenever free pages are returned to the segment's free page map.

  Back-patch to all supported releases.

  Author: Dongming Liu <ldming101@gmail.com>
  Reviewed-by: Robert Haas <robertmhaas@gmail.com> (earlier version)
  Reviewed-by: Thomas Munro <thomas.munro@gmail.com>
  Discussion: https://postgr.es/m/CAL1p7e8LzB2LSeAXo2pXCW4%2BRya9s0sJ3G_ReKOU%3DAjSUWjHWQ%40mail.gmail.com
* Pre-beta mechanical code beautification. (Tom Lane, 2023-05-19)

  Run pgindent, pgperltidy, and reformat-dat-files.

  This set of diffs is a bit larger than typical. We've updated to pg_bsd_indent 2.1.2, which properly indents variable declarations that have multi-line initialization expressions (the continuation lines are now indented one tab stop). We've also updated to perltidy version 20230309 and changed some of its settings, which reduces its desire to add whitespace to lines to make assignments etc. line up. Going forward, that should make for fewer random-seeming changes to existing code.

  Discussion: https://postgr.es/m/20230428092545.qfb3y5wcu4cm75ur@alvherre.pgsql
* Fix various typos and incorrect/outdated name references (David Rowley, 2023-04-19)

  Author: Alexander Lakhin
  Discussion: https://postgr.es/m/699beab4-a6ca-92c9-f152-f559caf6dc25@gmail.com
* Fix various typos (David Rowley, 2023-04-18)

  This fixes many spelling mistakes in comments, as well as a few references to invalid parameter names, function names and option names, both in comments and in some string constants.

  Also, fix an #undef that was undefining the incorrect definition.

  Author: Alexander Lakhin
  Reviewed-by: Justin Pryzby
  Discussion: https://postgr.es/m/d5f68d19-c0fc-91a9-118d-7c6a5a3f5fad@gmail.com
* Adjust Valgrind macro usage to protect chunk headers (David Rowley, 2023-04-15)

  Prior to this commit we only ever protected MemoryChunk's requested_size field with Valgrind NOACCESS. This means that if the hdrmask field is ever accessed accidentally then we're not going to get any warnings from Valgrind about it. Valgrind would have warned us about the problem fixed in 92957ed98 had we already been doing this.

  Per suggestion from Tom Lane

  Reviewed-by: Richard Guo
  Discussion: https://postgr.es/m/1650235.1672694719@sss.pgh.pa.us
  Discussion: https://postgr.es/m/CAApHDvr=FZNGbj252Z6M9BSFKoq6BMxgkQ2yEAGUYoo7RquqZg@mail.gmail.com
* Fix minor signed/unsigned mixup (Peter Eisentraut, 2023-04-05)

  The chunk header is unsigned, and the output format takes unsigned, so casting it to signed in between is incorrect.
* Fix erroneous Valgrind markings in AllocSetRealloc. (Tom Lane, 2023-02-21)

  If asked to decrease the size of a large (>8K) palloc chunk, AllocSetRealloc could improperly change the Valgrind state of memory beyond the new end of the chunk: it would mark data UNDEFINED as far as the old end of the chunk after having done the realloc(3) call, thus tromping on the state of memory that no longer belongs to it. One would normally expect that memory to now be marked NOACCESS, so that this mislabeling might prevent detection of later errors. If realloc() had chosen to move the chunk someplace else (unlikely, but well within its rights) we could also mismark perfectly-valid DEFINED data as UNDEFINED, causing false-positive valgrind reports later. Also, any malloc bookkeeping data placed within this area might now be wrongly marked, causing additional problems.

  Fix by replacing relevant uses of "oldsize" with "Min(size, oldsize)". It's sufficient to mark as far as "size" when that's smaller, because whatever remains in the new chunk size will be marked NOACCESS below, and we expect realloc() to have taken care of marking the memory beyond the new official end of the chunk.

  While we're here, also rename the function's "oldsize" variable to "oldchksize" to more clearly explain what it actually holds, namely the distance to the end of the chunk (that is, requested size plus trailing padding). This is more consistent with the use of "size" and "chksize" to hold the new requested size and chunk size. Add a new variable "oldsize" in the one stanza where we're actually talking about the old requested size.

  Oversight in commit c477f3e44. Back-patch to all supported branches, as that was, just in case anybody wants to do valgrind testing on back branches.

  Karina Litskevich
  Discussion: https://postgr.es/m/CACiT8iaAET-fmzjjZLjaJC4zwSJmrFyL7LAdHwaYyjjQOQ4hcg@mail.gmail.com
* Add MSVC support for pg_leftmost_one_pos32() and friends (John Naylor, 2023-02-20)

  To allow testing for general support for fast bitscan intrinsics, add symbols HAVE_BITSCAN_REVERSE and HAVE_BITSCAN_FORWARD.

  Also do related cleanup in AllocSetFreeIndex(): Previously, we tested for HAVE__BUILTIN_CLZ and copied the relevant internals of pg_leftmost_one_pos32(), with a special fallback that does less work than the general fallback for that function. Now that we have a more general test, we just call pg_leftmost_one_pos32() directly for platforms with intrinsic support. On gcc at least, there is no difference in the binary for non-assert builds.

  Discussion: https://www.postgresql.org/message-id/CAFBsxsEPc%2BFnX_0vmmQ5DHv60sk4rL_RZJ%2BMD6ei%3D76L0kFMvA%40mail.gmail.com
* Avoid type cheats for invalid dsa_handles and dshash_table_handles. (Tom Lane, 2023-01-25)

  Invent separate macros for "invalid" values of these types, so that we needn't embed knowledge of their representations into calling code. These are all zeroes anyway ATM, so this is not fixing any live bug, but it makes the code cleaner and more future-proof.

  I (tgl) also chose to move DSM_HANDLE_INVALID into dsm_impl.h, since it seems like it should live beside the typedef for dsm_handle.

  Hou Zhijie, Nathan Bossart, Kyotaro Horiguchi, Tom Lane
  Discussion: https://postgr.es/m/OS0PR01MB5716860B1454C34E5B179B6694C99@OS0PR01MB5716.jpnprd01.prod.outlook.com
* Fix typos in code and comments (Michael Paquier, 2023-01-11)

  Author: Justin Pryzby
  Discussion: https://postgr.es/m/20230110045722.GD9837@telsasoft.com
* Fix some compiler warnings in aset.c and generation.c (David Rowley, 2023-01-05)

  This fixes a couple of unused variable warnings that could be seen when compiling with MEMORY_CONTEXT_CHECKING but not USE_ASSERT_CHECKING. Defining MEMORY_CONTEXT_CHECKING without asserts is a little unusual; however, we shouldn't be producing any warnings from such a build.

  Author: Richard Guo
  Discussion: https://postgr.es/m/CAMbWs4_D-vgLEh7eO47p=73u1jWO78NWf6Qfv1FndY1kG-Q-jA@mail.gmail.com
* Update copyright for 2023 (Bruce Momjian, 2023-01-02)

  Backpatch-through: 11
* Add palloc_aligned() to allow aligned memory allocations (David Rowley, 2022-12-22)

  This introduces palloc_aligned() and MemoryContextAllocAligned(), which allow callers to obtain memory which is allocated to the given size and also aligned to the specified alignment boundary. The alignment boundaries may be any power-of-2 value. Currently, the alignment is capped at 2^26; however, we don't expect values anything like that large. The primary expected use case is to align allocations to perhaps CPU cache line size or to maybe I/O page size. Certain use cases can benefit from having aligned memory by either having better performance or more predictable performance.

  The alignment is achieved by requesting 'alignto' additional bytes from the underlying allocator function and then aligning the address that is returned to the requested alignment. This obviously does waste some memory, so alignments should be kept as small as what is required. It's also important to note that these alignment bytes eat into the maximum allocation size. So something like:

      palloc_aligned(MaxAllocSize, 64, 0);

  will not work, as we cannot request MaxAllocSize + 64 bytes.

  Additionally, because we're just requesting the requested size plus the alignment requirements from the given MemoryContext, this is not going to work if that context is the Slab allocator, since slab can only provide chunks of the size that's specified when the slab context is created. Slab will generate an error to indicate that the requested size is not supported.

  The alignment that is requested in palloc_aligned() is stored along with the allocated memory. This allows the alignment to remain intact through repalloc() calls.

  Author: Andres Freund, David Rowley
  Reviewed-by: Maxim Orlov, Andres Freund, John Naylor
  Discussion: https://postgr.es/m/CAApHDvpxLPUMV1mhxs6g7GNwCP6Cs6hfnYQL5ffJQTuFAuxt8A%40mail.gmail.com
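  A usage sketch (signature palloc_aligned(size, alignto, flags); the 4kB I/O-page alignment is illustrative):

      char   *buf = palloc_aligned(BLCKSZ, 4096, MCXT_ALLOC_ZERO);

      Assert((uintptr_t) buf % 4096 == 0);

      /* the stored alignment survives repalloc() */
      buf = repalloc(buf, 2 * BLCKSZ);
      Assert((uintptr_t) buf % 4096 == 0);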
* Fix newly introduced bug in slab.c (David Rowley, 2022-12-22)

  d21ded75f changed the way slab.c works but introduced a bug that meant we could end up with the slab's curBlocklistIndex pointing to the wrong list. The condition which was checking for this was failing to account for two things:

  1. The curBlocklistIndex could be 0 as we've currently got no non-full blocks to put chunks on. In this case, the dlist_is_empty() check cannot be performed as there can be any number of completely full blocks at that index.

  2. The curBlocklistIndex may be greater than the index we just moved the block onto. Since we need to ensure we fill up fuller blocks first, we must reset curBlocklistIndex when changing any blocklist element that's less than the curBlocklistIndex too.

  Reported-by: Takamichi Osumi
  Discussion: https://postgr.es/m/TYCPR01MB8373329C6329768D7E093D68EDEB9@TYCPR01MB8373.jpnprd01.prod.outlook.com
* Add copyright notices to meson files (Andrew Dunstan, 2022-12-20)

  Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
* Improve the performance of the slab memory allocator (David Rowley, 2022-12-20)

  Slab has traditionally been fairly slow when compared with the AllocSet or Generation memory allocators. Part of this slowness came from having to write out an entire block when we allocate a new block in order to populate the free list indexes within the block's memory. Additional slowness came from having to move a block onto another dlist each time we palloc or pfree a chunk from it. Here we optimize both of those cases and do a little bit extra to improve the performance of the slab allocator.

  Instead of writing out the free list indexes when allocating a new block, we introduce the concept of "unused" chunks. When a block is first allocated, all chunks are unused. These chunks only make it onto the free list when they are pfree'd. When allocating new chunks on an existing block, we have the choice of consuming a chunk from the free list or an unused chunk. When both exist, we opt to use one from the free list, as these have been used already and their memory is more likely to be cached by the CPU.

  Here we also reduce the number of block lists from one for every possible value of free chunks on a block down to a small fixed number of block lists. We keep the 0th block list for completely full blocks; each other list stores blocks for some range of free chunks, with fuller blocks appearing on lower block list array elements. This reduces how often we must move a block to another list when we allocate or free chunks, but still allows us to prefer to put new chunks on fuller blocks and perhaps allow blocks with fewer chunks to be free'd later once all their remaining chunks have been pfree'd.

  Additionally, we now store a list of "emptyblocks", which are blocks that no longer contain any allocated chunks. We keep up to 10 of these around to avoid having to thrash malloc/free when allocation patterns continually cause blocks to become free of any allocated chunks, only to allocate more chunks again. Only once we already have 10 of these do we free a block. This does raise the high water mark for the total memory that a slab context can consume. It does not seem entirely unreasonable that we might one day want to make this a property of SlabContext rather than a compile-time constant. Let's wait and see if there is any evidence to support that this is required before doing it.

  Author: Andres Freund, David Rowley
  Tested-by: Tomas Vondra, John Naylor
  Discussion: https://postgr.es/m/20210717194333.mr5io3zup3kxahfm@alap3.anarazel.de
* Static assertions cleanup (Peter Eisentraut, 2022-12-15)

  Because we added StaticAssertStmt() first before StaticAssertDecl(), some uses as well as the instructions in c.h are now a bit backwards from the "native" way static assertions are meant to be used in C. This updates the guidance and moves some static assertions to better places.

  Specifically, since the addition of StaticAssertDecl(), we can put static assertions at the file level. This moves a number of static assertions out of function bodies, where they might have been stuck out of necessity, to perhaps better places at the file level or in header files.

  Also, when the static assertion appears in a position where a declaration is allowed, then using StaticAssertDecl() is more native than StaticAssertStmt().

  Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
  Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
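  The "native" placement, sketched:

      /* file level: a declaration is allowed here, so use StaticAssertDecl() */
      StaticAssertDecl(sizeof(uint32) == 4, "uint32 must be 4 bytes");

      static void
      some_function(void)     /* illustrative */
      {
          /* statement position: StaticAssertStmt() */
          StaticAssertStmt(sizeof(uint64) == 8, "uint64 must be 8 bytes");
      }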
* Add repalloc0 and repalloc0_array (Peter Eisentraut, 2022-11-12)

  These zero out the space added by repalloc. This is a common pattern that is quite hairy to code by hand.

  Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
  Discussion: https://www.postgresql.org/message-id/b66dfc89-9365-cb57-4e1f-b7d31813eeec@enterprisedb.com
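  Before and after, sketched:

      /* the hand-coded pattern */
      ptr = repalloc(ptr, newsize);
      memset((char *) ptr + oldsize, 0, newsize - oldsize);

      /* the new helper: oldsize tells it where the zeroing must start */
      ptr = repalloc0(ptr, oldsize, newsize);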
* Remove AssertArg and AssertState (Peter Eisentraut, 2022-10-28)

  These don't offer anything over plain Assert, and their usage had already been declared obsolescent.

  Author: Nathan Bossart <nathandbossart@gmail.com>
  Reviewed-by: Michael Paquier <michael@paquier.xyz>
  Discussion: https://www.postgresql.org/message-id/20221009210148.GA900071@nathanxps13
* Rename SetSingleFuncCall() to InitMaterializedSRF() (Michael Paquier, 2022-10-18)

  Per discussion, the existing name of the routine that initializes a set-returning function in materialize mode is unpopular, so rename it. Equally, the flags of this function are renamed:

  - SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
  - SRF_SINGLE_BLESS -> MAT_SRF_BLESS

  The previous function and flags introduced in 9e98583 are kept around for compatibility purposes, so that any extension code already compiled with v15 continues to work as-is. The declarations introduced here for compatibility will be removed from HEAD in a follow-up commit.

  The new names have been suggested by Andres Freund and Melanie Plageman.

  Discussion: https://postgr.es/m/20221013194820.ciktb2sbbpw7cljm@awork3.anarazel.de
  Backpatch-through: 15
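  The rename in caller terms, sketched:

      /* old spelling, kept in v15 for already-compiled extensions */
      SetSingleFuncCall(fcinfo, SRF_SINGLE_USE_EXPECTED);

      /* new spelling */
      InitMaterializedSRF(fcinfo, MAT_SRF_USE_EXPECTED_DESC);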
* Make some minor improvements in memory-context infrastructure. (Tom Lane, 2022-10-14)

  We lack a version of repalloc() that supports MCXT_ALLOC_NO_OOM semantics, so invent repalloc_extended() with the usual set of flags. repalloc_huge() becomes a legacy wrapper for that.

  Also, fix dynahash.c so that it can support HASH_ENTER_NULL requests when using the default palloc-based allocator. The only reason it didn't do that already was the lack of the MCXT_ALLOC_NO_OOM option when that code was written, ages ago.

  While here, simplify a few overcomplicated tests in mcxt.c.

  Discussion: https://postgr.es/m/2982579.1662416866@sss.pgh.pa.us
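  A no-OOM usage sketch:

      char   *newbuf = repalloc_extended(buf, newsize, MCXT_ALLOC_NO_OOM);

      if (newbuf == NULL)
      {
          /* allocation failed; buf is still valid, so degrade gracefully */
      }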
* Harden memory context allocators against bogus chunk pointers. (Tom Lane, 2022-10-10)

  Before commit c6e0fe1f2, functions such as AllocSetFree could pretty safely presume that they were given a valid chunk pointer for their own type of context, because the indirect call through a memory context object and method struct would be very unlikely to work otherwise. But now, if pfree() is mistakenly invoked on a pointer to garbage, we have three chances in eight of ending up at one of these functions. That means we need to take extra measures to verify that we are looking at what we're supposed to be looking at, especially in debug builds.

  Hence, add code to verify that the chunk's back-link to a block header leads to a memory context object that satisfies the right sort of IsA() check. This is still a bit weaker than what we did before, but for the moment assume that an IsA() check is sufficient.

  As a compromise between speed and safety, implement these checks as Asserts when dealing with small chunks but plain test-and-elogs when dealing with large (external) chunks. The latter case should not be too performance-critical, but the former case probably is. In slab.c, all chunks are small; but nonetheless use a plain test in SlabRealloc, because that is certainly not performance-critical, indeed we should be suspicious that it's being called in error.

  In aset.c, additionally add some assertions that the "value" field of the chunk header is within the small range allowed for freelist indexes. Without that, we might find ourselves trying to wipe most of memory when CLOBBER_FREED_MEMORY is enabled, or scribbling on a "freelist header" that's far away from the context object.

  Eventually, field experience might show us that it's smarter for these tests to be active always, but for now we'll try to get away with just having them as assertions.

  While at it, also be more uniform about asserting that context objects passed as parameters are of the type we expect. Some places missed that altogether, and slab.c was for no very good reason doing it differently from the other allocators.

  Discussion: https://postgr.es/m/3578387.1665244345@sss.pgh.pa.us
* Improve our ability to detect bogus pointers passed to pfree et al. (Tom Lane, 2022-10-06)

  Commit c6e0fe1f2 was a shade too trusting that any pointer passed to pfree, repalloc, etc will point at a valid chunk. Notably, passing a pointer that was actually obtained from malloc tended to result in obscure assertion failures, if not worse. (On FreeBSD I've seen such mistakes take down the entire cluster, seemingly as a result of clobbering shared memory.)

  To improve matters, extend the mcxt_methods[] array so that it has entries for every possible MemoryContextMethodID bit-pattern, with the currently unassigned ID codes pointing to error-reporting functions. Then, fiddle with the ID assignments so that patterns likely to be associated with bad pointers aren't valid ID codes. In particular, we should avoid assigning bit patterns 000 (zeroed memory) and 111 (wipe_mem'd memory). It turns out that on glibc (Linux), malloc uses chunk headers that have flag bits in the same place we keep MemoryContextMethodID, and that the bit patterns 000, 001, 010 are the only ones we'll see as long as the backend isn't threaded. So we can have very robust detection of pfree'ing a malloc-assigned block on that platform, at least so long as we can refrain from using up those ID codes. On other platforms, we don't have such a good guarantee, but keeping 000 reserved will be enough to catch many such cases.

  While here, make GetMemoryChunkMethodID() local to mcxt.c, as there seems no need for it to be exposed even in memutils_internal.h.

  Patch by me, with suggestions from Andres Freund and David Rowley.

  Discussion: https://postgr.es/m/2910981.1665080361@sss.pgh.pa.us
* Remove MemoryContextContains(). (Tom Lane, 2022-10-06)

  MemoryContextContains is no longer reliable in the wake of c6e0fe1f2, because there's no longer very much redundancy in chunk headers. (It wasn't *completely* reliable even before that, as there was a chance of a false positive if you passed it something that didn't point to an mcxt chunk at all. But it was generally good enough.) Hence, remove it. There is no remaining core code that requires it.

  Extensions that have been using it might be able to substitute a test like "GetMemoryChunkContext(ptr) == context", recognizing that this explicitly requires that the pointer point to some chunk.

  Tom Lane and David Rowley
  Discussion: https://postgr.es/m/1913788.1664898906@sss.pgh.pa.us
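  The suggested substitute, sketched (valid only when ptr is known to point at some palloc'd chunk):

      if (GetMemoryChunkContext(ptr) == context)
      {
          /* ptr was allocated in this context */
      }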
* meson: Add initial version of meson based build system (Andres Freund, 2022-09-21)

  Autoconf is showing its age, fewer and fewer contributors know how to wrangle it. Recursive make has a lot of hard to resolve dependency issues and slow incremental rebuilds. Our home-grown MSVC build system is hard to maintain for developers not using Windows and runs tests serially. While these and other issues could individually be addressed with incremental improvements, together they seem best addressed by moving to a more modern build system.

  After evaluating different build system choices, we chose to use meson, to a good degree based on the adoption by other open source projects.

  We decided that it's more realistic to commit a relatively early version of the new build system and mature it in tree.

  This commit adds an initial version of a meson based build system. It supports building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD, Solaris and Windows (however only gcc is supported on aix, solaris).

  For Windows/MSVC postgres can now be built with ninja (faster, particularly for incremental builds) and msbuild (supporting the visual studio GUI, but building slower).

  Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM bitcode generation, documentation adjustments) are done in subsequent commits requiring further review. Other aspects (e.g. not installing test-only extensions) are not yet addressed.

  When building on Windows with msbuild, builds are slower when using a visual studio version older than 2019, because those versions do not support MultiToolTask, required by meson for intra-target parallelism.

  The plan is to remove the MSVC specific build system in src/tools/msvc soon after reaching feature parity. However, we're not planning to remove the autoconf/make build system in the near future. Likely we're going to keep at least the parts required for PGXS to keep working around until all supported versions build with meson.

  Some initial help for postgres developers is at https://wiki.postgresql.org/wiki/Meson

  With contributions from Thomas Munro, John Naylor, Stone Tickle and others.

  Author: Andres Freund <andres@anarazel.de>
  Author: Nazir Bilal Yavuz <byavuz81@gmail.com>
  Author: Peter Eisentraut <peter@eisentraut.org>
  Reviewed-By: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
  Discussion: https://postgr.es/m/20211012083721.hvixq4pnh2pixr3j@alap3.anarazel.de