Monitoring Database Activity
A database administrator frequently wonders, "What is the system
doing right now?"
This chapter discusses how to find that out.
Several tools are available for monitoring database activity and
analyzing performance. Most of this chapter is devoted to describing
PostgreSQL's statistics collector,
but one should not neglect regular Unix monitoring programs such as
ps, top, iostat, and vmstat.
Also, once one has identified a
poorly-performing query, further investigation might be needed using
PostgreSQL's EXPLAIN command.
EXPLAIN and other methods for understanding the behavior of an individual
query are discussed elsewhere in the documentation.
Standard Unix Tools
On most Unix platforms, PostgreSQL modifies its
command title as reported by ps, so that individual server
processes can readily be identified. A sample display is
$ ps auxww | grep ^postgres
postgres 15551 0.0 0.1 57536 7132 pts/0 S 18:02 0:00 postgres -i
postgres 15554 0.0 0.0 57536 1184 ? Ss 18:02 0:00 postgres: writer process
postgres 15555 0.0 0.0 57536 916 ? Ss 18:02 0:00 postgres: checkpointer process
postgres 15556 0.0 0.0 57536 916 ? Ss 18:02 0:00 postgres: wal writer process
postgres 15557 0.0 0.0 58504 2244 ? Ss 18:02 0:00 postgres: autovacuum launcher process
postgres 15558 0.0 0.0 17512 1068 ? Ss 18:02 0:00 postgres: stats collector process
postgres 15582 0.0 0.0 58772 3080 ? Ss 18:04 0:00 postgres: joe runbug 127.0.0.1 idle
postgres 15606 0.0 0.0 58772 3052 ? Ss 18:07 0:00 postgres: tgl regression [local] SELECT waiting
postgres 15610 0.0 0.0 58772 3056 ? Ss 18:07 0:00 postgres: tgl regression [local] idle in transaction
(The appropriate invocation of ps varies across different
platforms, as do the details of what is shown. This example is from a
recent Linux system.) The first process listed here is the
master server process. The command arguments
shown for it are the same ones used when it was launched. The next five
processes are background worker processes automatically launched by the
master process. (The stats collector process will not be present
if you have set the system not to start the statistics collector; likewise
the autovacuum launcher process can be disabled.)
Each of the remaining
processes is a server process handling one client connection. Each such
process sets its command line display in the form
postgres: user database host activity
The user, database, and (client) host items remain the same for
the life of the client connection, but the activity indicator changes.
The activity can be idle (i.e., waiting for a client command),
idle in transaction (waiting for client inside a BEGIN block),
or a command type name such as SELECT. Also,
waiting is appended if the server process is presently waiting
on a lock held by another session. In the above example we can infer
that process 15606 is waiting for process 15610 to complete its transaction
and thereby release some lock. (Process 15610 must be the blocker, because
there is no other active session. In more complicated cases it would be
necessary to look into the
pg_locks
system view to determine who is blocking whom.)
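In more complicated cases, a query along the following lines can help pair
waiting processes with the sessions that hold conflicting locks. This is a
simplified sketch (not taken from the official documentation) that compares
only the most common lock tag fields of pg_locks:

SELECT waiting.pid  AS waiting_pid,
       other.pid    AS blocking_pid,
       waiting.locktype,
       waiting.relation::regclass AS relation
FROM pg_locks waiting
JOIN pg_locks other
  ON  other.locktype      =                    waiting.locktype
  AND other.database      IS NOT DISTINCT FROM waiting.database
  AND other.relation      IS NOT DISTINCT FROM waiting.relation
  AND other.transactionid IS NOT DISTINCT FROM waiting.transactionid
  AND other.pid <> waiting.pid
WHERE NOT waiting.granted
  AND other.granted;

For locks that are not relation locks (for example transaction-ID locks),
the database and relation columns are null; the IS NOT DISTINCT FROM
comparisons keep such rows matched correctly.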
If cluster_name has been configured, the
cluster name will also be shown in ps output:
$ psql -c 'SHOW cluster_name'
cluster_name
--------------
server1
(1 row)
$ ps aux|grep server1
postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: server1: writer process
...
If you have turned off update_process_title then the
activity indicator is not updated; the process title is set only once
when a new process is launched. On some platforms this saves a measurable
amount of per-command overhead; on others it's insignificant.
Solaris requires special handling. You must
use /usr/ucb/ps, rather than
/bin/ps. You also must use two w
flags, not just one. In addition, your original invocation of the
postgres command must have a shorter
ps status display than that provided by each
server process. If you fail to do all three things, the ps
output for each server process will be the original postgres
command line.
The Statistics Collector
PostgreSQL's statistics collector
is a subsystem that supports collection and reporting of information about
server activity. Presently, the collector can count accesses to tables
and indexes in both disk-block and individual-row terms. It also tracks
the total number of rows in each table, and information about vacuum and
analyze actions for each table. It can also count calls to user-defined
functions and the total time spent in each one.
PostgreSQL also supports reporting of the exact
command currently being executed by other server processes. This
facility is independent of the collector process.
Statistics Collection Configuration
Since collection of statistics adds some overhead to query execution,
the system can be configured to collect or not collect information.
This is controlled by configuration parameters that are normally set in
postgresql.conf. (See the documentation on setting configuration
parameters for details.)
The track_activities parameter enables monitoring
of the current command being executed by any server process.
The track_counts parameter controls whether
statistics are collected about table and index accesses.
The track_functions parameter enables tracking of
usage of user-defined functions.
The track_io_timing parameter enables monitoring
of block read and write times.
Normally these parameters are set in postgresql.conf so
that they apply to all server processes, but it is possible to turn
them on or off in individual sessions using the SET command. (To prevent
ordinary users from hiding their activity from the administrator,
only superusers are allowed to change these parameters with
SET.)
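For example, a superuser who wants I/O timing data for a single
troubleshooting session, without editing postgresql.conf, might do
something like the following minimal sketch:

SET track_io_timing = on;   -- affects only the current session; superuser only
SHOW track_io_timing;
-- run the statements to be investigated, then examine the statistics views
RESET track_io_timing;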
The statistics collector transmits the collected information to other
PostgreSQL processes through temporary files.
These files are stored in the directory named by the
stats_temp_directory parameter,
pg_stat_tmp by default.
For better performance, stats_temp_directory can be
pointed at a RAM-based file system, decreasing physical I/O requirements.
When the server shuts down cleanly, a permanent copy of the statistics
data is stored in the pg_stat subdirectory, so that
statistics can be retained across server restarts. When recovery is
performed at server start (e.g. after immediate shutdown, server crash,
or point-in-time recovery), all statistics counters are reset.
Viewing Collected Statistics
Several predefined views, listed below, are available to show the results
of statistics collection. Alternatively, one can
build custom views using the underlying statistics functions, as discussed
in the Statistics Functions section later in this chapter.
When using the statistics to monitor current activity, it is important
to realize that the information does not update instantaneously.
Each individual server process transmits new statistical counts to
the collector just before going idle; so a query or transaction still in
progress does not affect the displayed totals. Also, the collector itself
emits a new report at most once per PGSTAT_STAT_INTERVAL
milliseconds (500 ms unless altered while building the server). So the
displayed information lags behind actual activity. However, current-query
information collected by track_activities is
always up-to-date.
Another important point is that when a server process is asked to display
any of these statistics, it first fetches the most recent report emitted by
the collector process and then continues to use this snapshot for all
statistical views and functions until the end of its current transaction.
So the statistics will show static information as long as you continue the
current transaction. Similarly, information about the current queries of
all sessions is collected when any such information is first requested
within a transaction, and the same information will be displayed throughout
the transaction.
This is a feature, not a bug, because it allows you to perform several
queries on the statistics and correlate the results without worrying that
the numbers are changing underneath you. But if you want to see new
results with each query, be sure to do the queries outside any transaction
block. Alternatively, you can invoke
pg_stat_clear_snapshot(), which will discard the
current transaction's statistics snapshot (if any). The next use of
statistical information will cause a new snapshot to be fetched.
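The following hypothetical session illustrates the snapshot behavior; the
second SELECT returns the same numbers as the first until
pg_stat_clear_snapshot() is called:

BEGIN;
SELECT numbackends, xact_commit FROM pg_stat_database
 WHERE datname = current_database();   -- takes the statistics snapshot
SELECT numbackends, xact_commit FROM pg_stat_database
 WHERE datname = current_database();   -- same values, even if activity continued
SELECT pg_stat_clear_snapshot();       -- discard the snapshot
SELECT numbackends, xact_commit FROM pg_stat_database
 WHERE datname = current_database();   -- a fresh snapshot is fetched
COMMIT;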
A transaction can also see its own statistics (as yet untransmitted to the
collector) in the views pg_stat_xact_all_tables,
pg_stat_xact_sys_tables,
pg_stat_xact_user_tables, and
pg_stat_xact_user_functions. These numbers do not act as
stated above; instead they update continuously throughout the transaction.
Standard Statistics Views

pg_stat_activity: One row per server process, showing information related to
the current activity of that process, such as state and current query.
pg_stat_archiver: One row only, showing statistics about the
WAL archiver process's activity.
pg_stat_bgwriter: One row only, showing statistics about the
background writer process's activity.
pg_stat_database: One row per database, showing database-wide statistics.
pg_stat_all_tables: One row for each table in the current database, showing
statistics about accesses to that specific table.
pg_stat_sys_tables: Same as pg_stat_all_tables, except that only system
tables are shown.
pg_stat_user_tables: Same as pg_stat_all_tables, except that only user
tables are shown.
pg_stat_xact_all_tables: Similar to pg_stat_all_tables, but counts actions
taken so far within the current transaction (which are not yet included in
pg_stat_all_tables and related views). The columns for numbers of live and
dead rows and vacuum and analyze actions are not present in this view.
pg_stat_xact_sys_tables: Same as pg_stat_xact_all_tables, except that only
system tables are shown.
pg_stat_xact_user_tables: Same as pg_stat_xact_all_tables, except that only
user tables are shown.
pg_stat_all_indexes: One row for each index in the current database, showing
statistics about accesses to that specific index.
pg_stat_sys_indexes: Same as pg_stat_all_indexes, except that only indexes
on system tables are shown.
pg_stat_user_indexes: Same as pg_stat_all_indexes, except that only indexes
on user tables are shown.
pg_statio_all_tables: One row for each table in the current database, showing
statistics about I/O on that specific table.
pg_statio_sys_tables: Same as pg_statio_all_tables, except that only system
tables are shown.
pg_statio_user_tables: Same as pg_statio_all_tables, except that only user
tables are shown.
pg_statio_all_indexes: One row for each index in the current database,
showing statistics about I/O on that specific index.
pg_statio_sys_indexes: Same as pg_statio_all_indexes, except that only
indexes on system tables are shown.
pg_statio_user_indexes: Same as pg_statio_all_indexes, except that only
indexes on user tables are shown.
pg_statio_all_sequences: One row for each sequence in the current database,
showing statistics about I/O on that specific sequence.
pg_statio_sys_sequences: Same as pg_statio_all_sequences, except that only
system sequences are shown. (Presently, no system sequences are defined, so
this view is always empty.)
pg_statio_user_sequences: Same as pg_statio_all_sequences, except that only
user sequences are shown.
pg_stat_user_functions: One row for each tracked function, showing statistics
about executions of that function.
pg_stat_xact_user_functions: Similar to pg_stat_user_functions, but counts
only calls during the current transaction (which are not yet included in
pg_stat_user_functions).
pg_stat_replication: One row per WAL sender process, showing statistics about
replication to that sender's connected standby server.
pg_stat_database_conflicts: One row per database, showing database-wide
statistics about query cancels due to conflict with recovery on standby
servers.

Each of these views is described in more detail below.
The per-index statistics are particularly useful to determine which
indexes are being used and how effective they are.
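For example, a query along these lines (a sketch; the ordering and limit are
arbitrary) lists the indexes of the current database that are scanned least
often, which are candidates for closer inspection:

SELECT schemaname, relname, indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
ORDER BY idx_scan ASC, schemaname, relname
LIMIT 20;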
The pg_statio_ views are primarily useful to
determine the effectiveness of the buffer cache. When the number
of actual disk reads is much smaller than the number of buffer
hits, then the cache is satisfying most read requests without
invoking a kernel call. However, these statistics do not give the
entire story: due to the way in which PostgreSQL
handles disk I/O, data that is not in the
PostgreSQL buffer cache might still reside in the
kernel's I/O cache, and might therefore still be fetched without
requiring a physical read. Users interested in obtaining more
detailed information on PostgreSQL I/O behavior are
advised to use the PostgreSQL statistics collector
in combination with operating system utilities that allow insight
into the kernel's handling of I/O.
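A rough per-table cache hit ratio can be computed from
pg_statio_user_tables; the sketch below (subject to the kernel-cache caveat
just mentioned) orders tables by total block accesses:

SELECT schemaname, relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_hit + heap_blks_read DESC
LIMIT 20;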
pg_stat_activity View

datid (oid): OID of the database this backend is connected to
datname (name): Name of the database this backend is connected to
pid (integer): Process ID of this backend
usesysid (oid): OID of the user logged into this backend
usename (name): Name of the user logged into this backend
application_name (text): Name of the application that is connected
to this backend
client_addr (inet): IP address of the client connected to this backend.
If this field is null, it indicates either that the client is
connected via a Unix socket on the server machine or that this is an
internal process such as autovacuum.
client_hostname (text): Host name of the connected client, as reported by a
reverse DNS lookup of client_addr. This field will
only be non-null for IP connections, and only when log_hostname is enabled.
client_port (integer): TCP port number that the client is using for
communication with this backend, or -1 if a Unix socket is used
backend_start (timestamp with time zone): Time when this process was started,
i.e., when the client connected to the server
xact_start (timestamp with time zone): Time when this process' current
transaction was started, or null if no transaction is active. If the current
query is the first of its transaction, this column is equal to the
query_start column.
query_start (timestamp with time zone): Time when the currently active query
was started, or if state is not active, when the last query was started
state_change (timestamp with time zone): Time when the state was last changed
waiting (boolean): True if this backend is currently waiting on a lock
state (text): Current overall state of this backend. Possible values are:
active: The backend is executing a query.
idle: The backend is waiting for a new client command.
idle in transaction: The backend is in a transaction,
but is not currently executing a query.
idle in transaction (aborted): This state is similar to
idle in transaction, except one of the statements in
the transaction caused an error.
fastpath function call: The backend is executing a
fast-path function.
disabled: This state is reported if track_activities is disabled
in this backend.
backend_xid (xid): Top-level transaction identifier of this backend, if any.
backend_xmin (xid): The current backend's xmin horizon.
query (text): Text of this backend's most recent query. If
state is active this field shows the
currently executing query. In all other states, it shows the last query
that was executed.
The pg_stat_activity view will have one row
per server process, showing information related to
the current activity of that process.
The waiting and state columns are
independent. If a backend is in the active state,
it may or may not be waiting. If the state is
active and waiting is true, it means
that a query is being executed, but is being blocked by a lock
somewhere in the system.
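For instance, a quick way to see which sessions are currently blocked on a
lock, and for how long, is a query like this sketch:

SELECT pid, usename, now() - query_start AS blocked_for, query
FROM pg_stat_activity
WHERE waiting;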
pg_stat_archiver View

archived_count (bigint): Number of WAL files that have been successfully archived
last_archived_wal (text): Name of the last WAL file successfully archived
last_archived_time (timestamp with time zone): Time of the last successful archive operation
failed_count (bigint): Number of failed attempts for archiving WAL files
last_failed_wal (text): Name of the WAL file of the last failed archival operation
last_failed_time (timestamp with time zone): Time of the last failed archival operation
stats_reset (timestamp with time zone): Time at which these statistics were last reset
The pg_stat_archiver view will always have a
single row, containing data about the archiver process of the cluster.
pg_stat_bgwriter View

checkpoints_timed (bigint): Number of scheduled checkpoints that have been performed
checkpoints_req (bigint): Number of requested checkpoints that have been performed
checkpoint_write_time (double precision): Total amount of time that has been
spent in the portion of checkpoint processing where files are written to
disk, in milliseconds
checkpoint_sync_time (double precision): Total amount of time that has been
spent in the portion of checkpoint processing where files are synchronized
to disk, in milliseconds
buffers_checkpoint (bigint): Number of buffers written during checkpoints
buffers_clean (bigint): Number of buffers written by the background writer
maxwritten_clean (bigint): Number of times the background writer stopped a
cleaning scan because it had written too many buffers
buffers_backend (bigint): Number of buffers written directly by a backend
buffers_backend_fsync (bigint): Number of times a backend had to execute its
own fsync call (normally the background writer handles those even when the
backend does its own write)
buffers_alloc (bigint): Number of buffers allocated
stats_reset (timestamp with time zone): Time at which these statistics were last reset
The pg_stat_bgwriter view will always have a
single row, containing global data for the cluster.
pg_stat_database View

datid (oid): OID of a database
datname (name): Name of this database
numbackends (integer): Number of backends currently connected to this
database. This is the only column in this view that returns a value
reflecting current state; all other columns return the accumulated values
since the last reset.
xact_commit (bigint): Number of transactions in this database that have been committed
xact_rollback (bigint): Number of transactions in this database that have been rolled back
blks_read (bigint): Number of disk blocks read in this database
blks_hit (bigint): Number of times disk blocks were found already in the
buffer cache, so that a read was not necessary (this only includes hits in
the PostgreSQL buffer cache, not the operating system's file system cache)
tup_returned (bigint): Number of rows returned by queries in this database
tup_fetched (bigint): Number of rows fetched by queries in this database
tup_inserted (bigint): Number of rows inserted by queries in this database
tup_updated (bigint): Number of rows updated by queries in this database
tup_deleted (bigint): Number of rows deleted by queries in this database
conflicts (bigint): Number of queries canceled due to conflicts with recovery
in this database. (Conflicts occur only on standby servers; see
pg_stat_database_conflicts for details.)
temp_files (bigint): Number of temporary files created by queries in this
database. All temporary files are counted, regardless of why the temporary
file was created (e.g., sorting or hashing), and regardless of the
log_temp_files setting.
temp_bytes (bigint): Total amount of data written to temporary files by
queries in this database. All temporary files are counted, regardless of why
the temporary file was created, and regardless of the log_temp_files setting.
deadlocks (bigint): Number of deadlocks detected in this database
blk_read_time (double precision): Time spent reading data file blocks by
backends in this database, in milliseconds
blk_write_time (double precision): Time spent writing data file blocks by
backends in this database, in milliseconds
stats_reset (timestamp with time zone): Time at which these statistics were last reset
The pg_stat_database view will contain one row
for each database in the cluster, showing database-wide statistics.
pg_stat_all_tables View

relid (oid): OID of a table
schemaname (name): Name of the schema that this table is in
relname (name): Name of this table
seq_scan (bigint): Number of sequential scans initiated on this table
seq_tup_read (bigint): Number of live rows fetched by sequential scans
idx_scan (bigint): Number of index scans initiated on this table
idx_tup_fetch (bigint): Number of live rows fetched by index scans
n_tup_ins (bigint): Number of rows inserted
n_tup_upd (bigint): Number of rows updated
n_tup_del (bigint): Number of rows deleted
n_tup_hot_upd (bigint): Number of rows HOT updated (i.e., with no separate
index update required)
n_live_tup (bigint): Estimated number of live rows
n_dead_tup (bigint): Estimated number of dead rows
n_mod_since_analyze (bigint): Estimated number of rows modified since this
table was last analyzed
last_vacuum (timestamp with time zone): Last time at which this table was
manually vacuumed (not counting VACUUM FULL)
last_autovacuum (timestamp with time zone): Last time at which this table was
vacuumed by the autovacuum daemon
last_analyze (timestamp with time zone): Last time at which this table was
manually analyzed
last_autoanalyze (timestamp with time zone): Last time at which this table
was analyzed by the autovacuum daemon
vacuum_count (bigint): Number of times this table has been manually vacuumed
(not counting VACUUM FULL)
autovacuum_count (bigint): Number of times this table has been vacuumed by
the autovacuum daemon
analyze_count (bigint): Number of times this table has been manually analyzed
autoanalyze_count (bigint): Number of times this table has been analyzed by
the autovacuum daemon
The pg_stat_all_tables view will contain
one row for each table in the current database (including TOAST
tables), showing statistics about accesses to that specific table. The
pg_stat_user_tables and
pg_stat_sys_tables views
contain the same information,
but filtered to only show user and system tables respectively.
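As an example of using this view, the following sketch (ordering and limit
chosen arbitrarily) lists the user tables with the largest estimated number
of dead rows, which can help spot tables that autovacuum is not keeping up
with:

SELECT schemaname, relname, n_live_tup, n_dead_tup,
       last_autovacuum, last_autoanalyze
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 20;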
pg_stat_all_indexes View

relid (oid): OID of the table for this index
indexrelid (oid): OID of this index
schemaname (name): Name of the schema this index is in
relname (name): Name of the table for this index
indexrelname (name): Name of this index
idx_scan (bigint): Number of index scans initiated on this index
idx_tup_read (bigint): Number of index entries returned by scans on this index
idx_tup_fetch (bigint): Number of live table rows fetched by simple index
scans using this index
The pg_stat_all_indexes view will contain
one row for each index in the current database,
showing statistics about accesses to that specific index. The
pg_stat_user_indexes and
pg_stat_sys_indexes views
contain the same information,
but filtered to only show user and system indexes respectively.
Indexes can be used via either simple index scans or bitmap
index scans. In a bitmap scan
the output of several indexes can be combined via AND or OR rules,
so it is difficult to associate individual heap row fetches
with specific indexes when a bitmap scan is used. Therefore, a bitmap
scan increments the
pg_stat_all_indexes.idx_tup_read
count(s) for the index(es) it uses, and it increments the
pg_stat_all_tables.idx_tup_fetch
count for the table, but it does not affect
pg_stat_all_indexes.idx_tup_fetch.
The idx_tup_read and idx_tup_fetch counts
can be different even without any use of bitmap scans,
because idx_tup_read counts
index entries retrieved from the index while idx_tup_fetch
counts live rows fetched from the table. The latter will be less if any
dead or not-yet-committed rows are fetched using the index, or if any
heap fetches are avoided by means of an index-only scan.
pg_statio_all_tables View

relid (oid): OID of a table
schemaname (name): Name of the schema that this table is in
relname (name): Name of this table
heap_blks_read (bigint): Number of disk blocks read from this table
heap_blks_hit (bigint): Number of buffer hits in this table
idx_blks_read (bigint): Number of disk blocks read from all indexes on this table
idx_blks_hit (bigint): Number of buffer hits in all indexes on this table
toast_blks_read (bigint): Number of disk blocks read from this table's TOAST table (if any)
toast_blks_hit (bigint): Number of buffer hits in this table's TOAST table (if any)
tidx_blks_read (bigint): Number of disk blocks read from this table's TOAST table indexes (if any)
tidx_blks_hit (bigint): Number of buffer hits in this table's TOAST table indexes (if any)
The pg_statio_all_tables view will contain
one row for each table in the current database (including TOAST
tables), showing statistics about I/O on that specific table. The
pg_statio_user_tables and
pg_statio_sys_tables views
contain the same information,
but filtered to only show user and system tables respectively.
pg_statio_all_indexes View

relid (oid): OID of the table for this index
indexrelid (oid): OID of this index
schemaname (name): Name of the schema this index is in
relname (name): Name of the table for this index
indexrelname (name): Name of this index
idx_blks_read (bigint): Number of disk blocks read from this index
idx_blks_hit (bigint): Number of buffer hits in this index
The pg_statio_all_indexes view will contain
one row for each index in the current database,
showing statistics about I/O on that specific index. The
pg_statio_user_indexes and
pg_statio_sys_indexes views
contain the same information,
but filtered to only show user and system indexes respectively.
pg_statio_all_sequences View

relid (oid): OID of a sequence
schemaname (name): Name of the schema this sequence is in
relname (name): Name of this sequence
blks_read (bigint): Number of disk blocks read from this sequence
blks_hit (bigint): Number of buffer hits in this sequence
The pg_statio_all_sequences view will contain
one row for each sequence in the current database,
showing statistics about I/O on that specific sequence.
pg_stat_user_functions View

funcid (oid): OID of a function
schemaname (name): Name of the schema this function is in
funcname (name): Name of this function
calls (bigint): Number of times this function has been called
total_time (double precision): Total time spent in this function and all
other functions called by it, in milliseconds
self_time (double precision): Total time spent in this function itself, not
including other functions called by it, in milliseconds
The pg_stat_user_functions view will contain
one row for each tracked function, showing statistics about executions of
that function. The track_functions parameter
controls exactly which functions are tracked.
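With track_functions enabled, a query such as the following sketch lists the
tracked functions that spend the most time in their own code:

SELECT schemaname, funcname, calls, total_time, self_time
FROM pg_stat_user_functions
ORDER BY self_time DESC
LIMIT 10;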
pg_stat_replication View

pid (integer): Process ID of a WAL sender process
usesysid (oid): OID of the user logged into this WAL sender process
usename (name): Name of the user logged into this WAL sender process
application_name (text): Name of the application that is connected to this WAL sender
client_addr (inet): IP address of the client connected to this WAL sender.
If this field is null, it indicates that the client is
connected via a Unix socket on the server machine.
client_hostname (text): Host name of the connected client, as reported by a
reverse DNS lookup of client_addr. This field will
only be non-null for IP connections, and only when log_hostname is enabled.
client_port (integer): TCP port number that the client is using for
communication with this WAL sender, or -1 if a Unix socket is used
backend_start (timestamp with time zone): Time when this process was started,
i.e., when the client connected to this WAL sender
backend_xmin (xid): This standby's xmin horizon reported by hot_standby_feedback.
state (text): Current WAL sender state
sent_location (pg_lsn): Last transaction log position sent on this connection
write_location (pg_lsn): Last transaction log position written to disk by
this standby server
flush_location (pg_lsn): Last transaction log position flushed to disk by
this standby server
replay_location (pg_lsn): Last transaction log position replayed into the
database on this standby server
sync_priority (integer): Priority of this standby server for being chosen as
the synchronous standby
sync_state (text): Synchronous state of this standby server
The pg_stat_replication view will contain one row
per WAL sender process, showing statistics about replication to that
sender's connected standby server. Only directly connected standbys are
listed; no information is available about downstream standby servers.
pg_stat_database_conflicts View

datid (oid): OID of a database
datname (name): Name of this database
confl_tablespace (bigint): Number of queries in this database that have been
canceled due to dropped tablespaces
confl_lock (bigint): Number of queries in this database that have been
canceled due to lock timeouts
confl_snapshot (bigint): Number of queries in this database that have been
canceled due to old snapshots
confl_bufferpin (bigint): Number of queries in this database that have been
canceled due to pinned buffers
confl_deadlock (bigint): Number of queries in this database that have been
canceled due to deadlocks
The pg_stat_database_conflicts view will contain
one row per database, showing database-wide statistics about
query cancels occurring due to conflicts with recovery on standby servers.
This view will only contain information on standby servers, since
conflicts do not occur on master servers.
Statistics Functions
Other ways of looking at the statistics can be set up by writing
queries that use the same underlying statistics access functions used by
the standard views shown above. For details such as the functions' names,
consult the definitions of the standard views. (For example, in
psql you could issue \d+ pg_stat_activity.)
The access functions for per-database statistics take a database OID as an
argument to identify which database to report on.
The per-table and per-index functions take a table or index OID.
The functions for per-function statistics take a function OID.
Note that only tables, indexes, and functions in the current database
can be seen with these functions.
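As an illustration, the per-database access functions can be combined with
pg_database to reproduce parts of pg_stat_database. This is a sketch only;
just two of the many per-database functions are shown:

SELECT datname,
       pg_stat_get_db_numbackends(oid) AS backends,
       pg_stat_get_db_xact_commit(oid) AS commits
FROM pg_database;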
Additional functions related to statistics collection are listed below.
Additional Statistics Functions

pg_backend_pid() returns integer:
Process ID of the server process handling the current session
pg_stat_get_activity(integer) returns setof record:
Returns a record of information about the backend with the specified PID, or
one record for each active backend in the system if NULL is
specified. The fields returned are a subset of those in the
pg_stat_activity view.
pg_stat_clear_snapshot() returns void:
Discard the current statistics snapshot
pg_stat_reset() returns void:
Reset all statistics counters for the current database to zero
(requires superuser privileges)
pg_stat_reset_shared(text) returns void:
Reset some cluster-wide statistics counters to zero, depending on the
argument (requires superuser privileges).
Calling pg_stat_reset_shared('bgwriter') will zero all the
counters shown in the pg_stat_bgwriter view.
Calling pg_stat_reset_shared('archiver') will zero all the
counters shown in the pg_stat_archiver view.
pg_stat_reset_single_table_counters(oid) returns void:
Reset statistics for a single table or index in the current database to
zero (requires superuser privileges)
pg_stat_reset_single_function_counters(oid) returns void:
Reset statistics for a single function in the current database to
zero (requires superuser privileges)
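For example, resetting counters might look like the following sketch, where
my_table is a hypothetical table name and all three calls require superuser
privileges:

SELECT pg_stat_reset();                    -- reset the current database's counters
SELECT pg_stat_reset_shared('bgwriter');   -- reset cluster-wide bgwriter counters
SELECT pg_stat_reset_single_table_counters('my_table'::regclass);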
pg_stat_get_activity, the underlying function of
the pg_stat_activity view, returns a set of records
containing all the available information about each backend process.
Sometimes it may be more convenient to obtain just a subset of this
information. In such cases, an older set of per-backend statistics
access functions can be used; these are shown below.
These access functions use a backend ID number, which ranges from one
to the number of currently active backends.
The function pg_stat_get_backend_idset provides a
convenient way to generate one row for each active backend for
invoking these functions. For example, to show the PIDs and
current queries of all backends:
SELECT pg_stat_get_backend_pid(s.backendid) AS pid,
pg_stat_get_backend_activity(s.backendid) AS query
FROM (SELECT pg_stat_get_backend_idset() AS backendid) AS s;
Per-Backend Statistics Functions

pg_stat_get_backend_idset() returns setof integer: Set of currently active
backend ID numbers (from 1 to the number of active backends)
pg_stat_get_backend_activity(integer) returns text: Text of this backend's
most recent query
pg_stat_get_backend_activity_start(integer) returns timestamp with time zone:
Time when the most recent query was started
pg_stat_get_backend_client_addr(integer) returns inet: IP address of the
client connected to this backend
pg_stat_get_backend_client_port(integer) returns integer: TCP port number
that the client is using for communication
pg_stat_get_backend_dbid(integer) returns oid: OID of the database this
backend is connected to
pg_stat_get_backend_pid(integer) returns integer: Process ID of this backend
pg_stat_get_backend_start(integer) returns timestamp with time zone: Time
when this process was started
pg_stat_get_backend_userid(integer) returns oid: OID of the user logged into
this backend
pg_stat_get_backend_waiting(integer) returns boolean: True if this backend is
currently waiting on a lock
pg_stat_get_backend_xact_start(integer) returns timestamp with time zone:
Time when the current transaction was started
Viewing Locks
Another useful tool for monitoring database activity is the
pg_locks system table. It allows the
database administrator to view information about the outstanding
locks in the lock manager. For example, this capability can be used
to:
View all the locks currently outstanding, all the locks on
relations in a particular database, all the locks on a
particular relation, or all the locks held by a particular
PostgreSQL session.
Determine the relation in the current database with the most
ungranted locks (which might be a source of contention among
database clients).
Determine the effect of lock contention on overall database
performance, as well as the extent to which contention varies
with overall database traffic.
Details of the pg_locks view appear in
its reference documentation.
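As one example of the uses listed above, a sketch of a query that counts
ungranted lock requests per relation is shown below (relations belonging to
other databases appear only by numeric OID):

SELECT relation::regclass AS relation, count(*) AS ungranted_requests
FROM pg_locks
WHERE NOT granted AND relation IS NOT NULL
GROUP BY relation
ORDER BY ungranted_requests DESC;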
For more information on locking and managing concurrency with
PostgreSQL, refer to the chapter on concurrency control.
Dynamic Tracing
PostgreSQL provides facilities to support
dynamic tracing of the database server. This allows an external
utility to be called at specific points in the code and thereby trace
execution.
A number of probes or trace points are already inserted into the source
code. These probes are intended to be used by database developers and
administrators. By default the probes are not compiled into
PostgreSQL; the user needs to explicitly tell
the configure script to make the probes available.
Currently, the
DTrace
utility is supported, which, at the time of this writing, is available
on Solaris, OS X, FreeBSD, NetBSD, and Oracle Linux. The
SystemTap project
for Linux provides a DTrace equivalent and can also be used. Supporting other dynamic
tracing utilities is theoretically possible by changing the definitions for
the macros in src/include/utils/probes.h.
Compiling for Dynamic Tracing
By default, probes are not available, so you will need to
explicitly tell the configure script to make the probes available
in PostgreSQL. To include DTrace support,
specify the --enable-dtrace option to configure.
Built-in Probes
A number of standard probes are provided in the source code,
as shown in the table below; a second table lists the types used in the
probes. More probes can certainly be
added to enhance PostgreSQL's observability.
Built-in DTrace Probes
transaction-start(LocalTransactionId): Probe that fires at the start of a new transaction.
arg0 is the transaction ID.transaction-commit(LocalTransactionId)Probe that fires when a transaction completes successfully.
arg0 is the transaction ID.transaction-abort(LocalTransactionId)Probe that fires when a transaction completes unsuccessfully.
arg0 is the transaction ID.query-start(const char *)Probe that fires when the processing of a query is started.
arg0 is the query string.query-done(const char *)Probe that fires when the processing of a query is complete.
arg0 is the query string.query-parse-start(const char *)Probe that fires when the parsing of a query is started.
arg0 is the query string.query-parse-done(const char *)Probe that fires when the parsing of a query is complete.
arg0 is the query string.query-rewrite-start(const char *)Probe that fires when the rewriting of a query is started.
arg0 is the query string.query-rewrite-done(const char *)Probe that fires when the rewriting of a query is complete.
arg0 is the query string.query-plan-start()Probe that fires when the planning of a query is started.query-plan-done()Probe that fires when the planning of a query is complete.query-execute-start()Probe that fires when the execution of a query is started.query-execute-done()Probe that fires when the execution of a query is complete.statement-status(const char *)Probe that fires anytime the server process updates its
pg_stat_activity>.status>.
arg0 is the new status string.checkpoint-start(int)Probe that fires when a checkpoint is started.
arg0 holds the bitwise flags used to distinguish different checkpoint
types, such as shutdown, immediate or force.checkpoint-done(int, int, int, int, int)Probe that fires when a checkpoint is complete.
(The probes listed next fire in sequence during checkpoint processing.)
arg0 is the number of buffers written. arg1 is the total number of
buffers. arg2, arg3 and arg4 contain the number of xlog file(s) added,
removed and recycled respectively.clog-checkpoint-start(bool)Probe that fires when the CLOG portion of a checkpoint is started.
arg0 is true for normal checkpoint, false for shutdown
checkpoint.clog-checkpoint-done(bool)Probe that fires when the CLOG portion of a checkpoint is
complete. arg0 has the same meaning as for clog-checkpoint-start.subtrans-checkpoint-start(bool)Probe that fires when the SUBTRANS portion of a checkpoint is
started.
arg0 is true for normal checkpoint, false for shutdown
checkpoint.subtrans-checkpoint-done(bool)Probe that fires when the SUBTRANS portion of a checkpoint is
complete. arg0 has the same meaning as for
subtrans-checkpoint-start.multixact-checkpoint-start(bool)Probe that fires when the MultiXact portion of a checkpoint is
started.
arg0 is true for normal checkpoint, false for shutdown
checkpoint.multixact-checkpoint-done(bool)Probe that fires when the MultiXact portion of a checkpoint is
complete. arg0 has the same meaning as for
multixact-checkpoint-start.buffer-checkpoint-start(int)Probe that fires when the buffer-writing portion of a checkpoint
is started.
arg0 holds the bitwise flags used to distinguish different checkpoint
types, such as shutdown, immediate or force.buffer-sync-start(int, int)Probe that fires when we begin to write dirty buffers during
checkpoint (after identifying which buffers must be written).
arg0 is the total number of buffers.
arg1 is the number that are currently dirty and need to be written.buffer-sync-written(int)Probe that fires after each buffer is written during checkpoint.
arg0 is the ID number of the buffer.buffer-sync-done(int, int, int)Probe that fires when all dirty buffers have been written.
arg0 is the total number of buffers.
arg1 is the number of buffers actually written by the checkpoint process.
arg2 is the number that were expected to be written (arg1 of
buffer-sync-start); any difference reflects other processes flushing
buffers during the checkpoint.buffer-checkpoint-sync-start()Probe that fires after dirty buffers have been written to the
kernel, and before starting to issue fsync requests.buffer-checkpoint-done()Probe that fires when syncing of buffers to disk is
complete.twophase-checkpoint-start()Probe that fires when the two-phase portion of a checkpoint is
started.twophase-checkpoint-done()Probe that fires when the two-phase portion of a checkpoint is
complete.buffer-read-start(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool)Probe that fires when a buffer read is started.
arg0 and arg1 contain the fork and block numbers of the page (but
arg1 will be -1 if this is a relation extension request).
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or InvalidBackendId (-1) for a shared buffer.
arg6 is true for a relation extension request, false for normal
read.buffer-read-done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, bool, bool)Probe that fires when a buffer read is complete.
arg0 and arg1 contain the fork and block numbers of the page (if this
is a relation extension request, arg1 now contains the block number
of the newly added block).
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or InvalidBackendId (-1) for a shared buffer.
arg6 is true for a relation extension request, false for normal
read.
arg7 is true if the buffer was found in the pool, false if not.buffer-flush-start(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires before issuing any write request for a shared
buffer.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.buffer-flush-done(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires when a write request is complete. (Note
that this just reflects the time to pass the data to the kernel;
it's typically not actually been written to disk yet.)
The arguments are the same as for buffer-flush-start.buffer-write-dirty-start(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires when a server process begins to write a dirty
buffer. (If this happens often, it implies that
is too
small or the bgwriter control parameters need adjustment.)
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.buffer-write-dirty-done(ForkNumber, BlockNumber, Oid, Oid, Oid)Probe that fires when a dirty-buffer write is complete.
The arguments are the same as for buffer-write-dirty-start.wal-buffer-write-dirty-start()Probe that fires when a server process begins to write a
dirty WAL buffer because no more WAL buffer space is available.
(If this happens often, it implies that
is too small.)wal-buffer-write-dirty-done()Probe that fires when a dirty WAL buffer write is complete.xlog-insert(unsigned char, unsigned char)Probe that fires when a WAL record is inserted.
arg0 is the resource manager (rmid) for the record.
arg1 contains the info flags.xlog-switch()Probe that fires when a WAL segment switch is requested.smgr-md-read-start(ForkNumber, BlockNumber, Oid, Oid, Oid, int)Probe that fires when beginning to read a block from a relation.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or InvalidBackendId (-1) for a shared buffer.smgr-md-read-done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)Probe that fires when a block read is complete.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or InvalidBackendId (-1) for a shared buffer.
arg6 is the number of bytes actually read, while arg7 is the number
requested (if these are different it indicates trouble).smgr-md-write-start(ForkNumber, BlockNumber, Oid, Oid, Oid, int)Probe that fires when beginning to write a block to a relation.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or InvalidBackendId (-1) for a shared buffer.smgr-md-write-done(ForkNumber, BlockNumber, Oid, Oid, Oid, int, int, int)Probe that fires when a block write is complete.
arg0 and arg1 contain the fork and block numbers of the page.
arg2, arg3, and arg4 contain the tablespace, database, and relation OIDs
identifying the relation.
arg5 is the ID of the backend which created the temporary relation for a
local buffer, or InvalidBackendId (-1) for a shared buffer.
arg6 is the number of bytes actually written, while arg7 is the number
requested (if these are different it indicates trouble).sort-start(int, bool, int, int, bool)Probe that fires when a sort operation is started.
arg0 indicates heap, index or datum sort.
arg1 is true for unique-value enforcement.
arg2 is the number of key columns.
arg3 is the number of kilobytes of work memory allowed.
arg4 is true if random access to the sort result is required.sort-done(bool, long)Probe that fires when a sort is complete.
arg0 is true for external sort, false for internal sort.
arg1 is the number of disk blocks used for an external sort,
or kilobytes of memory used for an internal sort.lwlock-acquire(char *, int, LWLockMode)Probe that fires when an LWLock has been acquired.
arg0 is the LWLock's tranche.
arg1 is the LWLock's offset within its tranche.
arg2 is the requested lock mode, either exclusive or shared.lwlock-release(char *, int)Probe that fires when an LWLock has been released (but note
that any released waiters have not yet been awakened).
arg0 is the LWLock's tranche.
arg1 is the LWLock's offset within its tranche.lwlock-wait-start(char *, int, LWLockMode)Probe that fires when an LWLock was not immediately available and
a server process has begun to wait for the lock to become available.
arg0 is the LWLock's tranche.
arg1 is the LWLock's offset within its tranche.
arg2 is the requested lock mode, either exclusive or shared.lwlock-wait-done(char *, int, LWLockMode)Probe that fires when a server process has been released from its
wait for an LWLock (it does not actually have the lock yet).
arg0 is the LWLock's tranche.
arg1 is the LWLock's offset within its tranche.
arg2 is the requested lock mode, either exclusive or shared.lwlock-condacquire(char *, int, LWLockMode)Probe that fires when an LWLock was successfully acquired when the
caller specified no waiting.
arg0 is the LWLock's tranche.
arg1 is the LWLock's offset within its tranche.
arg2 is the requested lock mode, either exclusive or shared.lwlock-condacquire-fail(char *, int, LWLockMode)Probe that fires when an LWLock was not successfully acquired when
the caller specified no waiting.
arg0 is the LWLock's tranche.
arg1 is the LWLock's offset within its tranche.
arg2 is the requested lock mode, either exclusive or shared.lock-wait-start(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)Probe that fires when a request for a heavyweight lock (lmgr lock)
has begun to wait because the lock is not available.
arg0 through arg3 are the tag fields identifying the object being
locked. arg4 indicates the type of object being locked.
arg5 indicates the lock type being requested.lock-wait-done(unsigned int, unsigned int, unsigned int, unsigned int, unsigned int, LOCKMODE)Probe that fires when a request for a heavyweight lock (lmgr lock)
has finished waiting (i.e., has acquired the lock).
The arguments are the same as for lock-wait-start.deadlock-found()Probe that fires when a deadlock is found by the deadlock
detector.
Defined Types Used in Probe Parameters

LocalTransactionId: unsigned int
LWLockMode: int
LOCKMODE: int
BlockNumber: unsigned int
Oid: unsigned int
ForkNumber: int
bool: char
Using Probes
The example below shows a DTrace script for analyzing transaction
counts in the system, as an alternative to snapshotting
pg_stat_database before and after a performance test:
#!/usr/sbin/dtrace -qs
postgresql$1:::transaction-start
{
@start["Start"] = count();
self->ts = timestamp;
}
postgresql$1:::transaction-abort
{
@abort["Abort"] = count();
}
postgresql$1:::transaction-commit
/self->ts/
{
@commit["Commit"] = count();
@time["Total time (ns)"] = sum(timestamp - self->ts);
self->ts=0;
}
When executed, the example D script gives output such as:
# ./txn_count.d `pgrep -n postgres` or ./txn_count.d <PID>
^C
Start 71
Commit 70
Total time (ns) 2312105013
SystemTap uses a different notation for trace scripts than DTrace does,
even though the underlying trace points are compatible. One point worth
noting is that at this writing, SystemTap scripts must reference probe
names using double underscores in place of hyphens. This is expected to
be fixed in future SystemTap releases.
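For example, a rough SystemTap equivalent of the transaction-start portion of
the D script above might look like the following sketch (the path to the
postgres executable is a placeholder and must be adjusted for your
installation; note the double underscores in the probe name):

# count transaction starts and print each one as it happens
probe process("/usr/local/pgsql/bin/postgres").mark("transaction__start")
{
    printf("transaction start, pid %d, local xid %d\n", pid(), $arg1)
}

Such a script would typically be run with something like
stap -v txn_start.stp.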
You should remember that DTrace scripts need to be carefully written and
debugged, otherwise the trace information collected might
be meaningless. In most cases where problems are found it is the
instrumentation that is at fault, not the underlying system. When
discussing information found using dynamic tracing, be sure to enclose
the script used to allow that too to be checked and discussed.
More example scripts can be found in the PgFoundry
dtrace project.
Defining New Probes
New probes can be defined within the code wherever the developer
desires, though this will require a recompilation. Below are the steps
for inserting new probes:
Decide on probe names and data to be made available through the probes
Add the probe definitions to src/backend/utils/probes.d
Include pg_trace.h if it is not already present in the
module(s) containing the probe points, and insert
TRACE_POSTGRESQL probe macros at the desired locations
in the source code
Recompile and verify that the new probes are available
Example:
Here is an example of how you would add a probe to trace all new
transactions by transaction ID.
Decide that the probe will be named transaction-start and
requires a parameter of type LocalTransactionId
Add the probe definition to src/backend/utils/probes.d:
probe transaction__start(LocalTransactionId);
Note the use of the double underscore in the probe name. In a DTrace
script using the probe, the double underscore needs to be replaced with a
hyphen, so transaction-start is the name to document for
users.
At compile time, transaction__start is converted to a macro
called TRACE_POSTGRESQL_TRANSACTION_START (notice the
underscores are single here), which is available by including
pg_trace.h. Add the macro call to the appropriate location
in the source code. In this case, it looks like the following:
TRACE_POSTGRESQL_TRANSACTION_START(vxid.localTransactionId);
After recompiling and running the new binary, check that your newly added
probe is available by executing the following DTrace command. You
should see similar output:
# dtrace -ln transaction-start
ID PROVIDER MODULE FUNCTION NAME
18705 postgresql49878 postgres StartTransactionCommand transaction-start
18755 postgresql49877 postgres StartTransactionCommand transaction-start
18805 postgresql49876 postgres StartTransactionCommand transaction-start
18855 postgresql49875 postgres StartTransactionCommand transaction-start
18986 postgresql49873 postgres StartTransactionCommand transaction-start
There are a few things to be careful about when adding trace macros
to the C code:
You should take care that the data types specified for a probe's
parameters match the data types of the variables used in the macro.
Otherwise, you will get compilation errors.
On most platforms, if PostgreSQL is
built with