With sufficiently bad luck, it was possible for a parallel worker to
attempt to attach to a DSA area after all other backends had detached
from it, which is not legal. If the worker had waited a little longer
to get started, the DSM itself would have been destroyed, which is why
this wasn't noticed before.
Thomas Munro, per a report from Andreas Seltenreich
Discussion: http://postgr.es/m/87h92g83t3.fsf@credativ.de
copyObject() is declared to return void *, which allows easily assigning
the result independent of the input, but it loses all type checking.
If the compiler supports typeof or something similar, cast the result to
the input type. This improves type safety. In some cases, where the
result is assigned to a generic type such as Node * or Expr *, new casts
are now necessary, but in the normal case casts are now unnecessary, and
a remaining cast indicates that something unusual is happening.
Reviewed-by: Mark Dilger <hornschnorter@gmail.com>
Reduce noise from TAP tests by changing 'diag' to 'note', so output only
goes to the test's log file, not stdout, unless in verbose mode. This
also removes the junk on screen when running the TAP tests in parallel.
Author: Craig Ringer <craig@2ndquadrant.com>
dsm_create and dsm_attach assumed that a current resource owner was
always in place. Exploration with the API shows that this is
inconvenient: sometimes one must create a dummy resowner, create/attach
the DSM, only to pin the mapping later, which is wasteful. Change
create/attach so that if there is no current resource owner, the DSM is
effectively pinned right from the start.
Discussion: https://postgr.es/m/20170324232710.32acsfsvjqfgc6ud@alvherre.pgsql
Reviewed by Thomas Munro.
Formerly, a Var referencing an already-dropped column was allowed and would
always produce a NULL value. However, that behavior was implemented in
slot_getattr which the new expression code doesn't use; thus there is now a
risk of returning theoretically-deleted data. We had regression test cases
that purported to exercise this, but they failed to expose any problem,
apparently because plpgsql filters the dropped column and produces an
output tuple that has a NULL there already.
Ideally the DROP or ALTER attempt in these test cases would get rejected
due to dependency checks; but until that happens, let's modify the behavior
so that we fail the query during executor start. This was already true for
the related case of a column having changed type underneath us, and there's
no obvious reason why we need to be laxer for dropped columns.
In passing, adjust the error messages in CheckVarSlotCompatibility to
include the composite type name. In the cases shown in the regression
tests this is always just "record", but it should be more useful in
actual stale-plan cases, where the slot tupdesc would be a table's
tupdesc directly.
Discussion: https://postgr.es/m/16803.1490723570@sss.pgh.pa.us
There are no functional changes here; this simply encapsulates knowledge
of the ItemPointerData struct so that a future patch can change things
without more breakage.
All direct users of ip_blkid and ip_posid are changed to use existing
macros ItemPointerGetBlockNumber and ItemPointerGetOffsetNumber
respectively. For callers where that's inappropriate (because they
Assert that the itempointer is valid-looking), add
ItemPointerGetBlockNumberNoCheck and ItemPointerGetOffsetNumberNoCheck,
which lack the assertion but are otherwise identical.
Author: Pavan Deolasee
Discussion: https://postgr.es/m/CABOikdNnFon4cJiL=h1mZH3bgUeU+sWHuU4Yr8AB=j3A2p1GiA@mail.gmail.com
We were assigning values near 255 through "char *" pointers. On machines
where char is signed, that's not entirely kosher, and it's reasonable
for compilers to warn about it.
A better solution would be to change the pointer type to "unsigned char *",
but that would be vastly more invasive. For the moment, let's just apply
this simple backpatchable solution.
Aleksander Alekseev
Discussion: https://postgr.es/m/20170220141239.GD12278@e733.localdomain
Discussion: https://postgr.es/m/2839.1490714708@sss.pgh.pa.us
Automatically drop all logical replication slots associated with a
database when the database is dropped. Previously we threw an ERROR
if a slot existed. Now we throw an ERROR only if a slot is active in
the database being dropped.
Craig Ringer
The previous change wanted to avoid modifying $_ in grep, but the code
just made the change in a local variable and then lost it. Rewrite the
code using a separate map and grep, which is clearer anyway.
Author: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Valgrind complains that some uninitialized bytes are being passed around
by the extended statistics code since commit 7b504eb282, as reported
by Andres Freund. Silence it.
Tomas Vondra submitted a patch which he verified to fix the complaints
on his machine; however I messed with it a bit before pushing, so any
remaining problems are likely my (Álvaro's) fault.
Author: Tomas Vondra
Discussion: https://postgr.es/m/20170325211031.4xxoptigqxm2emn2@alap3.anarazel.de
Most seriously, fix use of incorrect block ID, per a report from
Jeff Janes that it causes a crash and a diagnosis from Amit Kapila.
Improve consistency between the hash and btree versions of this
code by adding back a PANIC that btree has, and by registering
data in the xlog record in the same way, per complaints from
Jeff Janes and Amit Kapila.
Tidy up some minor cosmetic points, per complaints from Amit
Kapila.
Patch by Ashutosh Sharma, reviewed by Amit Kapila, and tested by
Jeff Janes.
Discussion: http://postgr.es/m/CAMkU=1w-9Qe=Ff1o6bSaXpNO9wqpo7_9GL8_CVhw4BoVVHasqg@mail.gmail.com
If a file was created or deleted just before a power loss, it's possible
that the file system will miss that. To prevent this, call fsync() where
creating or unlinking a file is critical.
Author: Michael Paquier
Reviewed-by: Ashutosh Bapat, Takayuki Tsunakawa, me
The code for the reworked n-distinct estimation on commit 7b504eb282 was
written differently in a previous version of the patch, prior to commit;
on rewriting it, we missed updating an initializer. This caused the
code to (mistakenly) apply a fudge factor even in the case where a
single value is applied, leading to incorrect results.
This means that the 'relvarcount' variable name is now wrong. Add a
comment to try to make the situation clearer, and remove an incorrect
comment I had added.
Problem noticed, and code patch, by Tomas Vondra. Additional commentary
by Álvaro.
The previous coding used three nested loops, which obviously were a pain
for a large number of child tables. The patch replaces the inner loop
with a hash map.
Author: Aleksander Alekseev
Reviewed-by: me
https://commitfest.postgresql.org/13/1058/
As suggested by Tom Lane, avoid printing specific estimated cost values,
because they vary across architectures; instead, verify plan shapes (in
this case, HashAggregate vs. GroupAggregate), as we do in other planner
tests.
We can now remove expected/stats_ext_1.out.
Author: Tomas Vondra
Instead of

    plan = plpy.prepare(...)
    res = plpy.execute(plan, ...)

you can now write

    plan = plpy.prepare(...)
    res = plan.execute(...)

or even

    res = plpy.prepare(...).execute(...)

and similarly for the cursor() method.
This is more in the object-oriented style, and makes the hybrid nature of
the existing execute() function less confusing.
Reviewed-by: Andrew Dunstan <andrew.dunstan@2ndquadrant.com>
The patch introduces a hash map from reloid to PgStat_TableStatus, which
improves performance in the case of a large number of tables/partitions.
Author: Aleksander Alekseev
Reviewed-by: Andres Freund, Anastasia Lubennikova, Tels, me
https://commitfest.postgresql.org/13/1058/
The previous default 'pg_log' might have indicated by its "pg_" prefix
that it is an internal system directory. The new default is more in
line with the typical naming of directories with user-facing log files.
Together with the renaming of pg_clog and pg_xlog, this should clear up
that difference.
Author: Andreas Karlsson <andreas@proxel.se>
Fix all perlcritic warnings of severity level 5, except in
src/backend/utils/Gen_dummy_probes.pl, which is automatically generated.
Reviewed-by: Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>
Reviewed-by: Daniel Gustafsson <daniel@yesql.se>
There was a thinko whereby we tested the wrong tuple after fetching it
from cache; avoid that by using generate_relation_name instead, which is
simpler. Also, the statistics name was not qualified, so add that. (It
could be argued that qualification should be conditional on the schema
not being on search path. We can add that later, but at least this form
is correct.)
Author: David Rowley, Álvaro Herrera
Discussion: https://postgr.es/m/CAKJS1f8RjLeVZJ2+93pdQGuZJeBF-ifsHaFMR-q-6-Z0qxA8cA@mail.gmail.com
This extends the Aggregate node with two new features: HashAggregate
can now run multiple hashtables concurrently, and a new strategy
MixedAggregate populates hashtables while doing sorted grouping.
The planner will now attempt to save as many sorts as possible when
planning grouping sets queries, while not exceeding work_mem for the
estimated combined sizes of all hashtables used. No SQL-level changes
are required. There should be no user-visible impact other than the
new EXPLAIN output and possible changes to result ordering when ORDER
BY was not used (which affected a few regression tests). The
enable_hashagg option is respected.
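For example, a grouping sets query along these lines (table and column
names are hypothetical) can now produce a MixedAggregate node in
EXPLAIN when only some of the grouping sets are hashed:

    EXPLAIN (COSTS OFF)
    SELECT brand, size, count(*)
    FROM sales
    GROUP BY GROUPING SETS ((brand), (size), ());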
Author: Andrew Gierth
Reviewers: Mark Dilger, Andres Freund
Discussion: https://postgr.es/m/87vatszyhj.fsf@news-spur.riddles.org.uk
Previously, auxiliary processes and background workers not connected
to a database (such as the logical replication launcher) weren't
shown. Include them, so that we can see the associated wait state
information. Add a new column to identify the process type, so that
people can filter them out easily using SQL if they wish.
Before this patch was written, there was discussion about whether we
should expose this information in a separate view, so as to avoid
contaminating pg_stat_activity with things people might not want to
see. But putting everything in pg_stat_activity was a more popular
choice, so that's what the patch does.
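As a sketch (the new column is named backend_type in the released
feature), the non-client processes can be inspected with:

    SELECT pid, backend_type, wait_event_type, wait_event
    FROM pg_stat_activity
    WHERE backend_type <> 'client backend';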
Kuntal Ghosh, reviewed by Amit Langote and Michael Paquier. Some
revisions and bug fixes by me.
Discussion: http://postgr.es/m/CA+TgmoYES5nhkEGw9nZXU8_FhA8XEm8NTm3-SO+3ML1B81Hkww@mail.gmail.com
In commit b8d7f053c, we needed to fix ExecEvalWholeRowVar to not change
the state of the slot it's copying. The initial quick hack at that
required two rounds of tuple construction, which is not very nice.
To fix, add another primitive to tuptoaster.c that does precisely what
we need. (I initially tried to do this by refactoring one of the
existing functions into two pieces; but it looked like that might hurt
performance for the existing case, and the amount of code that could
be shared is not very large, so I gave up on that.)
Discussion: https://postgr.es/m/26088.1490315792@sss.pgh.pa.us
Change one more place where ExecInitCheck/ExecPrepareCheck's insistence
on getting implicit-AND-format quals wasn't really helpful, because the
caller had to do make_ands_implicit() for no reason that it cared about.
Using ExecPrepareExpr directly simplifies the code and saves cycles.
The only remaining use of these functions is to process
resultRelInfo->ri_PartitionCheck quals. However, implicit-AND format
does seem to be what we want for that, so leave it alone.
Fix an incorrect assert condition (noted by Coverity), and spell the new
name of the function correctly. Typos introduced in commit 7c4f52409.
Michael Paquier
The compiler is entitled to store a char[] local variable with no
particular alignment requirement. Our RADIUS code cavalierly took such
a local variable and cast its address to a struct type that does have
alignment requirements. On an alignment-picky machine this would lead
to bus errors. To fix, declare the local variable honestly, and then
cast its address to char * for use in the I/O calls.
Given the lack of field complaints, there must be very few if any
people affected; but nonetheless this is a clear portability issue,
so back-patch to all supported branches.
Noted while looking at a Coverity complaint in the same code.
Failure to free serveraddrs pointed out by Coverity, failure to close
socket noted by code-reading. These bugs seem to be quite old, but
given the low probability of taking these error-exit paths and the
minimal consequences of the leaks (since the process would presumably
exit shortly anyway), it doesn't seem worth back-patching.
Michael Paquier and Tom Lane
Both Andres and I were happy with "*op->resvalue = *op->resvalue;",
but Coverity isn't; and it has a point, because some compilers might
not be smart enough to elide that. So remove it. In passing, also
avoid doing unnecessary assignments to *op->resnull when it's already
known to have the right value.
The previous code still contained expression evaluation time support
for CaseExprs without a defresult. But transformCaseExpr() creates a
default expression if necessary.
Author: Andres Freund
Discussion: https://postgr.es/m/4834.1490480275@sss.pgh.pa.us
This replaces the old, recursive tree-walk based evaluation, with
non-recursive, opcode dispatch based, expression evaluation.
Projection is now implemented as part of expression evaluation.
This both leads to significant performance improvements, and makes
future just-in-time compilation of expressions easier.
The speed gains primarily come from:
- non-recursive implementation reduces stack usage / overhead
- simple sub-expressions are implemented with a single jump, without
function calls
- sharing some state between different sub-expressions
- reduced amount of indirect/hard to predict memory accesses by laying
out operation metadata sequentially; including the avoidance of
nearly all of the previously used linked lists
- more code has been moved to expression initialization, avoiding
constant re-checks at evaluation time
Future just-in-time compilation (JIT) has become easier, as
demonstrated by released patches intended to be merged in a later
release, for primarily two reasons: Firstly, due to a stricter split
between expression initialization and evaluation, less code has to be
handled by the JIT. Secondly, due to the non-recursive nature of the
generated "instructions", less performance-critical code-paths can
easily be shared between interpreted and compiled evaluation.
The new framework allows for significant future optimizations. E.g.:
- basic infrastructure to later reduce the per-executor-startup
overhead of expression evaluation, by caching state in prepared
statements. That'd be helpful in OLTPish scenarios where
initialization overhead is measurable.
- optimizing the generated "code". A number of proposals for potential
work have already been made.
- optimizing the interpreter. Similarly a number of proposals have
been made here too.
The move of logic into the expression initialization step leads to some
backward-incompatible changes:
- Function permission checks are now done during expression
initialization, whereas previously they were done during
execution. In edge cases this can lead to errors being raised that
previously wouldn't have been, e.g. a NULL array being coerced to a
different array type previously didn't perform checks.
- The set of domain constraints to be checked is now evaluated once
during expression initialization; previously it was re-built
every time a domain check was evaluated. For normal queries this
doesn't change much, but e.g. for plpgsql functions, which cache
ExprStates, the old set could stick around longer. The behavior
around this might still change.
Author: Andres Freund, with significant changes by Tom Lane,
changes by Heikki Linnakangas
Reviewed-By: Tom Lane, Heikki Linnakangas
Discussion: https://postgr.es/m/20161206034955.bh33paeralxbtluv@alap3.anarazel.de
As explained at the head of parallel_schedule, we place an arbitrary limit
of 20 test cases per parallel group. Commit c7a9fa399 overlooked this.
Least messy solution seems to be to move the "comments" test to the next
group, since it doesn't really belong in a group of datatype tests anyway.
Commit e3920ac82 created "regress_subscription_user2" in subscription.sql,
but forgot to drop it, causing the regression tests to fail if run twice
without re-initdb'ing.
If the upstream walsender is using a physical replication slot, store the
catalog_xmin in the slot's catalog_xmin field. If the upstream doesn't use a
slot and has only a PGPROC entry, the behaviour doesn't change, as we store
the combined xmin and catalog_xmin in the PGPROC entry.
Author: Craig Ringer
These tests require the test database to be in UTF8 encoding. Until
there is a better solution, take them out of the default test set and
treat them like the existing collate.linux.utf8 test, meaning they have
to be selected manually.
Because tuple packing is different (because of the MAXALIGN difference),
the expected costs of a seqscan are different.
The commonly used trick of eliding costs in EXPLAIN output (COSTS OFF)
would make the tests completely pointless. Instead, add an alternative
expected file.
In SQL, the ability to use parallel query was previously contingent on
fcache->readonly_func, which is only set for non-volatile functions;
but the volatility of a function has no bearing on whether queries
inside it can use parallelism. Remove that condition.
SPI_execute and SPI_execute_with_args always run the plan just once,
though not necessarily to completion. Given the changes in commit
691b8d5928, it's sensible to pass
CURSOR_OPT_PARALLEL_OK here, so do that. This improves access to
parallelism for any caller that uses these functions to execute
queries. Such callers include plperl, plpython, pltcl, and plpgsql,
though it's not the case that they all use these functions
exclusively.
In plpgsql, allow parallel query for plain SELECT queries (as
opposed to PERFORM, which already worked) and for plain expressions
(which probably won't go through the executor at all, because they
will likely be simple expressions, but if they do then this helps).
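A minimal sketch of a case that can now use parallelism, assuming a
hypothetical table big_table:

    -- the plain SELECT ... INTO inside the body is now eligible for a
    -- parallel plan; previously only PERFORM was
    CREATE FUNCTION count_rows() RETURNS bigint LANGUAGE plpgsql AS $$
    DECLARE
        n bigint;
    BEGIN
        SELECT count(*) INTO n FROM big_table;
        RETURN n;
    END;
    $$;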
Rafia Sabih and Robert Haas, reviewed by Dilip Kumar and Amit Kapila
Discussion: http://postgr.es/m/CAOGQiiMfJ+4SQwgG=6CVHWoisiU0+7jtXSuiyXBM3y=A=eJzmg@mail.gmail.com
Previously manual VACUUM did not report the number of skipped frozen
pages even when the VERBOSE option was specified. But this information
is helpful for monitoring the VACUUM activity, and autovacuum reports
that number in the log file when the condition of
log_autovacuum_min_duration is met.
This commit changes VACUUM VERBOSE so that it reports the number
of frozen pages that it skips.
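For example (table name hypothetical; the exact INFO wording may
differ):

    -- VERBOSE output now includes the count of skipped frozen pages,
    -- matching what autovacuum already logs
    VACUUM (VERBOSE) pgbench_accounts;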
Author: Masahiko Sawada
Reviewed-by: Yugo Nagata and Jim Nasby
Discussion: http://postgr.es/m/CAD21AoDZQKCxo0L39Mrq08cONNkXQKXuh=2DP1Q8ebmt35SoaA@mail.gmail.com
Add support for explicitly declared statistic objects (CREATE
STATISTICS), allowing collection of statistics on more complex
combinations than individual table columns. Companion commands DROP
STATISTICS and ALTER STATISTICS ... OWNER TO / SET SCHEMA / RENAME are
added too. All this DDL has been designed so that more statistic types
can be added later on, such as multivariate most-common-values and
multivariate histograms between columns of a single table, leaving room
for permitting columns on multiple tables, too, as well as expressions.
This commit only adds support for collection of n-distinct coefficients
on user-specified sets of columns in a single table. This is useful to
estimate number of distinct groups in GROUP BY and DISTINCT clauses;
estimation errors there can cause over-allocation of memory in hashed
aggregates, for instance, so it's a worthwhile problem to solve. A new
special pseudo-type pg_ndistinct is used.
(num-distinct estimation was deemed sufficiently useful by itself that
this is worthwhile even if no further statistic types are added
immediately; so much so that another version of essentially the same
functionality was submitted by Kyotaro Horiguchi:
https://postgr.es/m/20150828.173334.114731693.horiguchi.kyotaro@lab.ntt.co.jp
though this commit does not use that code.)
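A usage sketch with hypothetical names (the DDL grammar was still being
refined at this point, so the exact spelling may differ):

    CREATE TABLE t1 (a int, b int);
    -- collect an n-distinct coefficient over the column pair
    CREATE STATISTICS s1 WITH (ndistinct) ON (a, b) FROM t1;
    ANALYZE t1;
    -- GROUP BY a, b can now use the multivariate estimate
    EXPLAIN SELECT a, b FROM t1 GROUP BY a, b;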
Author: Tomas Vondra. Some code rework by Álvaro.
Reviewed-by: Dean Rasheed, David Rowley, Kyotaro Horiguchi, Jeff Janes,
Ideriha Takeshi
Discussion: https://postgr.es/m/543AFA15.4080608@fuzzy.cz
Discussion: https://postgr.es/m/20170320190220.ixlaueanxegqd5gr@alvherre.pgsql
Commit 7aea8e4f2d allowed a parallel
plan to be generated for a RETURN QUERY or RETURN QUERY EXECUTE
statement in a PL/pgsql block, but that's a bad idea because plpgsql
asks the executor for 50 rows at a time. That means that we'll always
be running serially a plan that was intended for parallel execution,
which is not a good idea. Fix by not requesting a parallel plan from
the outset.
Per discussion, back-patch to 9.6. There is a slight risk that, due
to optimizer error, somebody could have a case where the parallel plan
executed serially is actually faster than the supposedly-best serial
plan, but the consensus seems to be that that's not sufficient
justification for leaving 9.6 unpatched.
Discussion: http://postgr.es/m/CA+TgmoZ_ZuH+auEeeWnmtorPsgc_SmP+XWbDsJ+cWvWBSjNwDQ@mail.gmail.com
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
If your connection to the database server is lost while a COMMIT is
in progress, it may be difficult to figure out whether the COMMIT was
successful or not. This function will tell you, provided that you
don't wait too long to ask. It may be useful in other situations,
too.
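Usage sketch (the xid, here 1234, would be captured with txid_current()
before the connection was lost):

    -- returns 'committed', 'aborted', 'in progress', or NULL if the
    -- transaction is too old for its status to be retained
    SELECT txid_status(1234);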
Craig Ringer, reviewed by Simon Riggs and by me
Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
Always return tupleslot and tupledesc from libpqrcv_exec. This avoids
requiring callers to handle that separately.
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Reported-by: Michael Banck <michael.banck@credativ.de>
If a user has a SCRAM verifier in pg_authid.rolpassword, there's no reason
we cannot attempt to perform SCRAM authentication instead of MD5. The worst
that can happen is that the client doesn't support SCRAM, and the
authentication will fail. But previously, it would fail for sure, because
we would not even try. SCRAM is strictly more secure than MD5, so there's
no harm in trying it. This allows for a more graceful transition from MD5
passwords to SCRAM, as user passwords can be changed to SCRAM verifiers
incrementally, without changing pg_hba.conf.
Refactor the code in auth.c to support that better. Notably, we now have to
look up the user's pg_authid entry before sending the password challenge,
also when performing MD5 authentication. Also simplify the concept of a
"doomed" authentication. Previously, if a user had a password, but it had
expired, we still performed SCRAM authentication (but always returned error
at the end) using the salt and iteration count from the expired password.
Now we construct a fake salt, like we do when the user doesn't have a
password or doesn't exist at all. That simplifies get_role_password(), and
we no longer need to distinguish the "user has expired password" and
"user does not exist" cases in auth.c.
On second thoughts, also rename uaSASL to uaSCRAM. It refers to the
mechanism specified in pg_hba.conf, and while we use SASL for SCRAM
authentication at the protocol level, the mechanism should be called SCRAM,
not SASL. As a comparison, we have uaLDAP, even though it looks like the
plain 'password' authentication at the protocol level.
Discussion: https://www.postgresql.org/message-id/6425.1489506016@sss.pgh.pa.us
Reviewed-by: Michael Paquier
An assert-enabled build crashes, but a build without asserts works
incorrectly: it may fail to reset the forcing of full page writes, and
may prevent starting an exclusive backup with the same name as the
cancelled one.
The patch replaces the pair of booleans
nonexclusive_backup_running/exclusive_backup_running with a single enum
to correctly describe the backup state.
Backpatch to 9.6, where the bug was introduced.
Reported-by: David Steele
Authors: Michael Paquier, David Steele
Reviewed-by: Anastasia Lubennikova
https://commitfest.postgresql.org/13/1068/
Commit 249cf070e3 assigned to one of
the labels in the middle of the enum the value that should have been
assigned to the first member. Rushabh's patch didn't have that
defect as submitted, but I managed to mess it up while editing.
Repair.
Add a column collprovider to pg_collation that determines which library
provides the collation data. The existing choices are default and libc,
and this adds an icu choice, which uses the ICU4C library.
The pg_locale_t type is changed to a union that contains the
provider-specific locale handles. Users of locale information are
changed to look into that struct for the appropriate handle to use.
Also add a collversion column that records the version of the collation
when it is created, and check at run time whether it is still the same.
This detects potentially incompatible library upgrades that can corrupt
indexes and other structures. This is currently only supported by
ICU-provided collations.
initdb initializes the default collation set as before from the `locale
-a` output but also adds all available ICU locales with a "-x-icu"
appended.
Currently, ICU-provided collations can only be explicitly named
collations. The global database locales are still always libc-provided.
ICU support is enabled by configure --with-icu.
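For example, in a build configured with --with-icu (table and collation
names are hypothetical):

    -- use an initdb-provided ICU collation
    SELECT name FROM employees ORDER BY name COLLATE "de-x-icu";
    -- or create a named ICU collation explicitly
    CREATE COLLATION german (provider = icu, locale = 'de-DE');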
Reviewed-by: Thomas Munro <thomas.munro@enterprisedb.com>
Reviewed-by: Andreas Karlsson <andreas@proxel.se>
This provides infrastructure for looking up arbitrary, user-supplied
XIDs without a risk of scary-looking failures from within the clog
module. Normally, the oldest XID that can be safely looked up in CLOG
is the same as the oldest XID that can be reused without causing
wraparound, and the latter is already tracked. However, while
truncation is in progress, the values are different, so we must
keep track of them separately.
Craig Ringer, reviewed by Simon Riggs and by me.
Discussion: http://postgr.es/m/CAMsr+YHQiWNEi0daCTboS40T+V5s_+dst3PYv_8v2wNVH+Xx4g@mail.gmail.com
Previously, it was unsafe to execute a plan in parallel if
ExecutorRun() might be called with a non-zero row count. However,
it's quite easy to fix things up so that we can support that case,
provided that it is known that we will never call ExecutorRun() a
second time for the same QueryDesc. Add infrastructure to signal
this, and cross-checks to make sure that a caller who claims this is
true doesn't later renege.
While that pattern never happens with queries received directly from a
client -- there's no way to know whether multiple Execute messages
will be sent unless the first one requests all the rows -- it's pretty
common for queries originating from procedural languages, which often
limit the result to a single tuple or to a user-specified number of
tuples.
This commit doesn't actually enable parallelism in any additional
cases, because currently none of the places that would be able to
benefit from this infrastructure pass CURSOR_OPT_PARALLEL_OK in the
first place, but it makes it much more palatable to pass
CURSOR_OPT_PARALLEL_OK in places where we currently don't, because it
eliminates some cases where we'd end up having to run the parallel
plan serially.
Patch by me, based on some ideas from Rafia Sabih and corrected by
Rafia Sabih based on feedback from Dilip Kumar and myself.
Discussion: http://postgr.es/m/CA+TgmobXEhvHbJtWDuPZM9bVSLiTj-kShxQJ2uM5GPDze9fRYA@mail.gmail.com
While cleaning a posting tree, GIN vacuum can lock the whole tree for a
long time by holding LockBufferForCleanup() on the root. The patch
changes this in two ways: first, the cleanup lock is taken only if there
is an empty page (which should be deleted) and, second, it tries to lock
only a subtree, not the whole posting tree.
Author: Andrey Borodin with minor editorialization by me
Reviewed-by: Jeff Davis, me
https://commitfest.postgresql.org/13/896/
Adds write_lag, flush_lag and replay_lag columns to pg_stat_replication.
Implements a lag tracker module that reports the lag times based upon
measurements of the time taken for recent WAL to be written, flushed and
replayed and for the sender to hear about it. These times
represent the commit lag that was (or would have been) introduced by each
synchronous commit level, if the remote server was configured as a
synchronous standby. For an asynchronous standby, the replay_lag column
approximates the delay before recent transactions became visible to queries.
If the standby server has entirely caught up with the sending server and
there is no more WAL activity, the most recently measured lag times will
continue to be displayed for a short time and then show NULL.
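A monitoring query over the new columns might look like:

    SELECT application_name, write_lag, flush_lag, replay_lag
    FROM pg_stat_replication;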
Physical replication lag tracking is automatic. Logical replication tracking
is possible but is the responsibility of the logical decoding plugin.
Tracking is a private module operating within each walsender individually,
with values reported to shared memory. The module is not used outside of
the walsender.
Design and code are good enough now to commit - kudos to the author.
In many ways a difficult topic, with important and subtle behaviour, so
this should be expected to generate discussion and multiple open items:
Test now!
Author: Thomas Munro, following designs by Fujii Masao and Simon Riggs
Review: Simon Riggs, Ian Barwick and Craig Ringer
Add functionality for a new subscription to copy the initial data in the
tables and then sync with the ongoing apply process.
For the copying, add a new internal COPY option to have the COPY source
data provided by a callback function. The initial data copy works on
the subscriber by receiving COPY data from the publisher and then
providing it locally into a COPY that writes to the destination table.
A WAL receiver can now execute full SQL commands. This is used here to
obtain information about tables and publications.
Several new options were added to CREATE and ALTER SUBSCRIPTION to
control whether and when initial table syncing happens.
Change pg_dump option --no-create-subscription-slots to
--no-subscription-connect and use the new CREATE SUBSCRIPTION
... NOCONNECT option for that.
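A sketch with hypothetical connection and publication names (the option
spellings were still in flux at this point); by default the new
subscription now copies existing table data before streaming changes:

    CREATE SUBSCRIPTION mysub
        CONNECTION 'host=publisher dbname=src'
        PUBLICATION mypub;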
Author: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Tested-by: Erik Rijkers <er@xs4all.nl>
Internally, we have supported the option to either wait for all of the
WAL associated with a backup to be archived, or to return immediately.
This option is useful to users of pg_stop_backup() as well, when they
are reading the stop backup record position and checking that the WAL
they need has been archived independently.
This patch adds an additional, optional, argument to pg_stop_backup()
which allows the user to indicate if they wish to wait for the WAL to be
archived or not. The default matches current behavior, which is to
wait.
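A sketch of a non-exclusive backup that skips the wait (the second
argument of pg_stop_backup() is the new flag):

    SELECT pg_start_backup('nightly', false, false);
    -- ... copy the data directory ...
    -- pass false to return immediately instead of waiting for archiving
    SELECT * FROM pg_stop_backup(false, false);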
Author: David Steele, with some minor changes, doc updates by me.
Reviewed by: Takayuki Tsunakawa, Fujii Masao
Discussion: https://postgr.es/m/758e3fd1-45b4-5e28-75cd-e9e7f93a4c02@pgmasters.net
This changes all the RADIUS related parameters (radiusserver,
radiussecret, radiusport, radiusidentifier) to be plural and to accept a
comma separated list of servers, which will be tried in order.
Reviewed by Adam Brightwell
Previously AELs were registered against the top-level xid, which could
cause locks to be held much longer than necessary in some cases during
Hot Standby replay. We now record locks directly against their appropriate
xids. This requires few code changes because the original code allowed for
this situation but didn't fully implement it.
Discussion: https://postgr.es/m/CAKJS1f9vJ841HY=wonnLVbfkTWGYWdPN72VMxnArcGCjF3SywA@mail.gmail.com
Author: Simon Riggs and David Rowley
Logical replication no longer uses the "replication" keyword. It just
matches database entries in the normal way. The "replication" keyword
now only applies to physical replication.
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
Previously, the new owner had to be a superuser. The new rules are more
refined similar to other objects.
Reviewed-by: Petr Jelinek <petr.jelinek@2ndquadrant.com>
A hot standby replica keeps a list of Access Exclusive locks for a top
level transaction. These locks are released when the top level transaction
ends. Searching of this list is O(N^2), and each transaction had to pay the
price of searching this list for locks, even if it didn't take any AE
locks itself.
This patch optimizes this case by having the master server track which
transactions took AE locks, and passes that along to the standby server in
the commit/abort record. This allows the standby to only try to release
locks for transactions which actually took any, avoiding the majority of
the performance issue.
Refactor MyXactAccessedTempRel into MyXactFlags to allow minimal additional
cruft with this.
Analysis and initial patch by David Rowley
Author: David Rowley and Simon Riggs
Uses a page-based mechanism to ensure we're using the correct timeline.
Tests are included to exercise the functionality using a cold disk-level copy
of the master that's started up as a replica with slots intact, but the
intended use of the functionality is with later features.
Craig Ringer, reviewed by Simon Riggs and Andres Freund
Allows testing of logical decoding using SQL interface and/or pg_recvlogical
Most logical decoding tests are in contrib/test_decoding. This module
is for work that doesn't fit well there, like where server restarts
are required.
Craig Ringer
Partitioned tables do not contain any data; only their unpartitioned
descendants need to be scanned. However, the partitioned tables still
need to be locked, even though they're not scanned. To make that
work, Append and MergeAppend relations now need to carry a list of
(unscanned) partitioned relations that must be locked, and InitPlan
must lock all partitioned result relations.
Aside from the obvious advantage of avoiding some work at execution
time, this has two other advantages. First, it may improve the
planner's decision-making in some cases since the empty relation
might throw things off. Second, it paves the way to getting rid of
the storage for partitioned tables altogether.
Amit Langote, reviewed by me.
Discussion: http://postgr.es/m/6837c359-45c4-8044-34d1-736756335a15@lab.ntt.co.jp
of SP-GiST.
The bug has existed since the initial commit of the box opclass for
SP-GiST, so backpatch to 9.6.
Author: Nikita Glukhov with minor editorialization of tests by me
Reviewed-by: Kyotaro Horiguchi, Anastasia Lubennikova
https://commitfest.postgresql.org/13/981/
The current DirectFunctionCall functions use NULL as the flinfo in
initializing the FunctionCallInfoData for the call. That means the
called function has no fn_mcxt or fn_extra to work with, and attempting
to do so will result in an access violation. The new functions instead use
the provided flinfo, which will usually be the caller's own flinfo. The
caller needs to ensure that it doesn't use the fn_extra in a way that is
incompatible with the way the called function will use it. The called
function should not rely on anything else in the provided context, as it
will be relevant to the caller, not the callee.
Original code from Tom Lane.
Discussion: https://postgr.es/m/db2b70a4-78d7-294a-a315-8e7f506c5978@2ndQuadrant.com
Commit b6fb534f added two new node fields but neglected to add copy and
comparison support for them. Mea culpa, I should have checked for that.
Per buildfarm animals with -DCOPY_PARSE_PLAN_TREES.
There is still some inconsistency with the error messages surrounding
foreign servers. Some use the word "foreign" and some don't. My
inclination is to remove all such uses of "foreign" on the basis that
the CREATE/ALTER/DROP SERVER commands don't use the word. However, that
is left for another day. In this patch I have kept to the existing usage
in the affected commands, which omits "foreign".
Anastasia Lubennikova, reviewed by Arthur Zakirov and Ashutosh Bapat.
Discussion: http://postgr.es/m/7c2ab9b8-388a-1ce0-23a3-7acf2a0ed3c6@postgrespro.ru
User mappings are essentially anonymous, so messages referring to "user
mapping foo on server bar" are wrong, and inconsistent with other error
messages referring to user mappings. To be consistent with existing use,
use "user mapping for foo on server bar" instead.
I dropped the noise word "user" from the original suggestion to be
consistent with other uses.
Discussion: http://postgr.es/m/56c6f8ab-b2d6-f1fa-deb0-1d18cf67f7b9@2ndQuadrant.com
Clear LH_PAGE_HAS_DEAD_TUPLES during replay, similar to what gets done
for btree. Update hashdesc.c for xl_hash_vacuum_one_page.
Oversights in commit 6977b8b7f4 spotted
by Amit Kapila. Patch by Ashutosh Sharma.
Bump WAL version. The original patch to make hash indexes write-ahead
logged probably should have done this, and the single page vacuuming
patch probably should have done it again, but better late than never.
Discussion: http://postgr.es/m/CAA4eK1Kd=mJ9xreovcsh0qMiAj-QqCphHVQ_Lfau1DR9oVjASQ@mail.gmail.com
Commit 818fd4a67 missed cleaning up the symlinks it added for various .c
files imported from src/port and src/common. Neatnik-ishly make the
file lists in the "clean" target look exactly like the earlier lists of
what to symlink in.
TidScan plan nodes were not systematically tested before. These additions
raise the LOC coverage number for the basic regression tests from 52% to
92% in nodeTidscan.c, and from 60% to 93% in tidpath.c.
Andres Freund, tweaked a bit by me
Discussion: https://postgr.es/m/20170320062511.hp5qeurtxrwsvfxr@alap3.anarazel.de
this makes buffer access strategy have no effect.
The change was a part of commit 48354581a4 during the 9.6
release cycle, so backpatch to 9.6.
Reported-by: Jim Nasby
Author: Alexander Korotkov
Reviewed-by: Jim Nasby, Andres Freund
https://commitfest.postgresql.org/13/1029/
Not every platform supports non-default collations, as pointed out by
the buildfarm, so skip collation-related regression tests in pg_dump
when they aren't supported.
The non-concurrent code path for REFRESH MATERIALIZED VIEW failed to
report its updates to the stats collector. This is bad since it means
auto-analyze doesn't know there's any work to be done. Adjust it to
report the refresh as a table truncate followed by insertion of an
appropriate number of rows.
Since a matview could contain more than INT_MAX rows, change the
signature of pgstat_count_heap_insert() to accept an int64 rowcount.
(The accumulator it's adding into is already int64, but existing
callers could not insert more than a small number of rows at once,
so the argument had been declared just "int n".)
This is surely a bug fix, but changing pgstat_count_heap_insert()'s API
seems too risky for the back branches. Given the lack of previous
complaints, I'm not sure it's a big enough problem to justify a kluge
solution that would avoid that. So, no back-patch, at least for now.
Jim Mlodgenski, adjusted a bit by me
Discussion: https://postgr.es/m/CAB_5SRchSz7-WmdO5szdiknG8Oj_GGqJytrk1KRd11yhcMs1KQ@mail.gmail.com
This is really testing getopt more than pg_dump, and what getopt returns
exactly appears to differ based on platform, so remove this test.
Per buildfarm.
These improvements bring the lines-of-code coverage of pg_dump.c up to
87.7% (at least using LCOV 1.12, 1.11 seems to differ slightly). Nearly
every function is covered, three of the four which aren't are only
called when talking to older PG instances.
There is more which can, and should, be done here to improve the
coverage but it's past time to see what the buildfarm thinks of this.
What has been added:
- Coverage for many more command-line options
- Use command_fails_like instead of command_exit_is
- Operator classes, operator families
- Text search configuration, templates, parsers, dictionaries
- FDWs, servers, foreign tables
- Materialized views
- Improved Publications / Subscriptions test (though this needs work,
see PG10 open items and tests marked with XXX in 002_pg_dump.pl)
- Unlogged tables
- Partitioned tables
- Additional ACL testing for various object types
There is room for improvement, specifically:
- Various type-based options (alignment, storage, etc)
- Composite type collation
- Extra Procedural language functions (inline, validator)
- Different function options (SRF, Transform, config, security definer,
cost, leakproof)
- OpClass options (default, storage, order by, recheck)
- OpFamily options (order by, recheck)
- Aggregate functions (combinefunc, serialfunc, deserialfunc, etc)
- Text Search parser 'headline'
- Text Search template 'init'
- FDW options (handler, validator, options)
- Server options (type, version, options)
- User mapping options
- Default ACLs for sequences, types
- Security labels
- View circular dependencies (last function that needs coverage)
- Toast table autovacuum options
- Replica identity options
- Independent indexes (plus marking them as clustered on)
- Deferrable / initially deferred constraints
- Independent domain constraints
There's bits of extension pg_dump'ing also not covered, but those will
need to go into test_pg_dump (such as having a filter for config
tables).
Last, but not least, this approximately halves the number of tests run
with 'ok()' by removing the ok()-based checking of if all runs are
covered by each test. Instead, 002_pg_dump.pl will just exit out in
such a case (with a message in the log file). In general, when adding
tests, cover all runs unless there is a very good reason not to (such as
adding a 'catch-all' case). With these changes, the resulting output
and number of "tests" run is actually reduced.
Previous commits, notably 53be0b1add and
6f3bd98ebf, made it possible to see from
pg_stat_activity when a backend was stuck waiting for another backend,
but it's also fairly common for a backend to be stuck waiting for an
I/O. Add wait events for those operations, too.
Rushabh Lathia, with further hacking by me. Reviewed and tested by
Michael Paquier, Amit Kapila, Rajkumar Raghuwanshi, and Rahila Syed.
Discussion: http://postgr.es/m/CAGPqQf0LsYHXREPAZqYGVkDqHSyjf=KsD=k0GTVPAuzyThh-VQ@mail.gmail.com
Windows apparently will not detect socket write-ready events unless a
preceding send attempt returned WSAEWOULDBLOCK. In many usage patterns
that's satisfied by the caller of WaitEventSetWait(), but not always.
Apply the same solution that we already had in pgwin32_select(), namely to
perform a dummy WSASend() call with len=0. This will return WSAEWOULDBLOCK
if there's no buffer space (even though it could legitimately do nothing
and report success, which makes me a bit nervous about this solution;
but since it's been working fine in libpq, let's roll with it).
In passing, improve the comments about this in pgwin32_select(), and remove
duplicated code there.
Back-patch to 9.6 where WaitEventSetWait() was introduced. We might need
to back-patch something similar into predecessor code. But given the lack
of complaints so far, it's not clear that the case ever gets exercised
in the back branches, so I'm not going to expend effort on it right now.
This should resolve recurring failures on buildfarm member bowerbird,
which has been failing since 1e8a85009 went in.
Diagnosis and patch by Petr Jelinek, cosmetic adjustments by me.
Discussion: https://postgr.es/m/5b6a6d6d-fb45-0afb-2e95-5600063c3dbd@2ndquadrant.com
Concurrent auto-analyze could be holding a snapshot, affecting the
removal of deleted row versions. Remove the deletion to avoid this
happening. Per buildfarm.
In passing, make the test independent of assumptions of physical row
order, just out of sheer paranoia.
Tests all combinations of users with MD5, plaintext and SCRAM verifiers
stored in pg_authid, with plain 'password', 'md5' and 'scram'
authentication methods.
Michael Paquier
If the process token contains SECURITY_SERVICE_RID, but it has been
disabled by the SE_GROUP_USE_FOR_DENY_ONLY attribute, win32_is_service()
would incorrectly report that we're running as a service. That situation
arises, e.g. if postmaster is launched with a restricted security token,
with the "Log in as Service" privilege explicitly removed.
Replace the broken code with CheckTokenMembership(), which does
this correctly. Also replace similar code in win32_is_admin(), even
though it got this right, for simplicity and consistency.
Per bug #13755, reported by Breen Hagan. Back-patch to all supported
versions. Patch by Takayuki Tsunakawa, reviewed by Michael Paquier.
Discussion: https://www.postgresql.org/message-id/20151104062315.2745.67143%40wrigleys.postgresql.org
While \help CREATE would complete usefully, \help DROP or \help ALTER
did not complete anything.
Expand the list of things after CREATE and DROP to cover ALTER as well,
and use that for the ALTER completion. Also make minor tweaks to that
list.
Also add support for completing \help on multiword commands like CREATE
TEXT SEARCH ...
Author: Andreas Karlsson <andreas@proxel.se>
presence of page pins, which leads to serious estimation errors in the
planner. This particularly affects small heavily-accessed tables,
especially where locking (e.g. from FK constraints) forces frequent
vacuums for mxid cleanup.
Fix by keeping separate track of pages whose live tuples were actually
counted vs. pages that were only scanned for freezing purposes. Thus,
reltuples can only be set to 0 if all pages of the relation were
actually counted.
Backpatch to all supported versions.
Per bug #14057 from Nicolas Baccelli, analyzed by me.
Discussion: https://postgr.es/m/20160331103739.8956.94469@wrigleys.postgresql.org
These functions are intended to be used by monitoring tools, and,
unlike pg_ls_dir(), access to them can be granted to non-superusers,
so that those monitoring tools can observe the principle of least
privilege.
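The functions added here are pg_ls_logdir() and pg_ls_waldir(); a
sketch of granting one to a hypothetical monitoring role:

    GRANT EXECUTE ON FUNCTION pg_ls_waldir() TO monitor;
    SELECT name, size, modification FROM pg_ls_waldir();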
Dave Page, revised by me, and also reviewed a bit by Thomas Munro.
Discussion: http://postgr.es/m/CA+OCxow-X=D2fWdKy+HP+vQ1LtrgbsYQ=CshzZBqyFT5jOYrFw@mail.gmail.com
The original coding was trying to use a TypeName as a string Value,
which doesn't work; an oversight in my commit a61fd533. Repair.
Also, make sure we cover the broken case in the relevant test script.
Backpatch to 9.5.
Discussion: https://postgr.es/m/20170315151829.bhxsvrp75xdxhm3n@alvherre.pgsql
The buildfarm has reminded me that not all systems consider char to be
signed and we need to be explicit. Adjust the various bits of mac8.c
for what we intend, mostly using casts to unsigned char as suggested by
Tom, and adjust the tests for valid input accordingly. Explicitly make
the hexlookup table signed as it's useful to use -1 there to indicate an
invalid value.
Andres' compiler points out, quite correctly, that there's no need for
some of the overly paranoid checks which were put into mac8.c. Remove
those, as they're useless, add some comments, and make a few other minor
improvements: reduce the size of hexlookup by making it a char array
instead of an int array, and pass in the ptr location directly instead
of making hex2_to_uchar re-calculate the location based on the offset
every time.
It appears dcae5facca forgot to add it to
pg_isolation_regress_installcheck, while it was added to
pg_regress_installcheck. It seems to so far have escaped notice,
because buildfarm animals requiring it didn't actually use
pg_isolation_regress_installcheck anywhere - that changed with
60f826c5e6, triggering failures on narwhal and frogmouth.
I've decided to not, for now at least, backpatch this, because the
relevant invocations look quite different in the back branches. Seems
quite possible that we'll want to backport 60f826c5e6 as a whole if
it proves stable.
Discussion: https://postgr.es/m/20170315174003.3dyl4teashdwgblh@alap3.anarazel.de
The original coding in commit 1e8a85009 didn't use PQconnectPoll per
spec, and while the rewrite in e434ad39a is closer, it still doesn't
guarantee to wait until the socket is read-ready or write-ready (as
appropriate) before calling PQconnectPoll. It's not clear whether
that omission is causing the continuing failures on buildfarm member
bowerbird; but given the lack of other explanations meeting the
available facts, let's tighten that up and see what happens.
An independent issue in the same loop was that it had a race condition
whereby it could clear the process's latch without having serviced an
interrupt request, causing failure to respond to a cancel while waiting
for connection (the very problem 1e8a85009 was meant to fix).
Discussion: https://postgr.es/m/7295.1489596949@sss.pgh.pa.us
This adds in support for EUI-64 MAC addresses by adding a new data type
called 'macaddr8' (using our usual convention of indicating the number
of bytes stored).
This was largely a copy-and-paste from the macaddr data type, with
appropriate adjustments for having 8 bytes instead of 6 and adding
support for converting a provided EUI-48 (6 byte format) to the EUI-64
format. Conversion from EUI-48 to EUI-64 inserts FFFE as the 4th and
5th bytes but does not perform the IPv6 modified EUI-64 action of
flipping the 7th bit; instead, we add a function to perform that specific
action for users who wish to
calculate their IPv6 address based on their network prefix and 48-bit
MAC address.
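For example (the helper added for the 7th-bit flip is
macaddr8_set7bit()):

    -- EUI-48 input is widened by inserting ff:fe in the middle
    SELECT '08:00:2b:01:02:03'::macaddr8;
    -- returns 08:00:2b:ff:fe:01:02:03
    SELECT macaddr8_set7bit('08:00:2b:01:02:03'::macaddr8);
    -- returns 0a:00:2b:ff:fe:01:02:03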
Author: Haribabu Kommi, with a good bit of rework of macaddr8_in by me.
Reviewed by: Vitaly Burovoy, Kuntal Ghosh
Discussion: https://postgr.es/m/CAJrrPGcUi8ZH+KkK+=TctNQ+EfkeCEHtMU_yo1mvX8hsk_ghNQ@mail.gmail.com
In DDL commands referring to an existing function, allow omitting the
argument list if the function name is unique in its schema, per SQL
standard.
This uses the same logic that the regproc type uses for finding
functions by name only.
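For example:

    CREATE FUNCTION incr(int) RETURNS int LANGUAGE sql AS 'SELECT $1 + 1';
    -- the argument list may be omitted because "incr" is unique in its schema
    DROP FUNCTION incr;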
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
The partitionwise join proposal adds a concept of child join relations,
which will have the same relationship with join relations as "other
member" relations do with base relations. These relations will need
some but not all of the handling that we currently have for join
relations, and some but not all of the handling that we currently have
for appendrels, since they are a mix of the two. Refactor a little
bit so that the necessary bits of logic are exposed as separate
functions.
Ashutosh Bapat, reviewed and tested by Rajkumar Raghuwanshi and
by me.
Discussion: http://postgr.es/m/CAFjFpRfqotRR6cM3sooBHMHEVdkFfAZ6PyYg4GRZsoMuW08HjQ@mail.gmail.com