postgres_fdw would send query conditions involving system columns to the
remote server, even though it makes no effort to ensure that system
columns other than CTID match what the remote side thinks. tableoid,
in particular, probably won't match and might have some use in queries.
Hence, prevent sending conditions that include non-CTID system columns.
Also, create_foreignscan_plan neglected to check local restriction
conditions while determining whether to set fsSystemCol for a foreign
scan plan node. This again would bollix the results for queries that
test a foreign table's tableoid.
Back-patch the first fix to 9.3 where postgres_fdw was introduced.
Back-patch the second to 9.2. The code is probably broken in 9.1 as
well, but the patch doesn't apply cleanly there; given the weak state
of support for FDWs in 9.1, it doesn't seem worth fixing.
Etsuro Fujita, reviewed by Ashutosh Bapat, and somewhat modified by me
PSQLexec's error reporting turns out to be too verbose for this case, so
revert to using PQexec instead with minimal error reporting. Prior to
calling PQexec, we call a function that mimics just the echo_hidden
piece of PSQLexec.
Make it work more like FDW plans do: instead of assuming that there are
expressions in a CustomScan plan node that the core code doesn't know
about, insist that all subexpressions that need planner attention be in
a "custom_exprs" list in the Plan representation. (Of course, the
custom plugin can break the list apart again at executor initialization.)
This lets us revert the parts of the patch that exposed setrefs.c and
subselect.c processing to the outside world.
Also revert the GetSpecialCustomVar stuff in ruleutils.c; that concept
may work in future, but it's far from fully baked right now.
Instead of register_custom_path_provider and a CreateCustomScanPath
callback, let's just provide a standard function hook in set_rel_pathlist.
This is more flexible than what was previously committed, is more like the
usual conventions for planner hooks, and requires less support code in the
core. We had discussed this design (including centralizing the
set_cheapest() calls) back in March or so, so I'm not sure why it wasn't
done like this already.
This error counted the first line of a cell as "extra". The effect was
to cause far too frequent invocation of the pager. In most cases this
can be worked around (for example, by using the "less" pager with the -F
flag), so don't backpatch.
These commands were calling the database directly rather than calling
PSQLexec like other slash commands that needed database data.
The code is also changed not to pass the connection as a parameter to
the helper functions. It's available in a global variable, and that's
what PSQLexec uses.
There seems no prospect that any of this will ever be useful, and indeed
it's questionable whether some of it would work if it ever got called;
it's certainly not been exercised in a very long time, if ever. So let's
get rid of it, and make the comments about mark/restore in execAmi.c less
wishy-washy.
The mark/restore support for Result nodes is also currently dead code,
but that's due to planner limitations not because it's impossible that
it could be useful. So I left it in.
Get rid of the pernicious entanglement between planner and executor headers
introduced by commit 0b03e5951b.
Also, rearrange the CustomFoo struct/typedef definitions so that all the
typedef names are seen as used by the compiler. Without this pgindent
will mess things up a bit, which is not so important perhaps, but it also
removes a bizarre discrepancy between the declaration arrangement used for
CustomExecMethods and that used for CustomScanMethods and
CustomPathMethods.
Clean up the commentary around ExecSupportsMarkRestore to reflect the
rather large change in its API.
Const-ify register_custom_path_provider's argument. This necessitates
casting away const in the function, but that seems better than forcing
callers of the function to do so (or else not const-ify their method
pointer structs, which was sort of the whole point).
De-export fix_expr_common. I don't like the exporting of fix_scan_expr
or replace_nestloop_params either, but this one surely has got little
excuse.
execCurrent.c's search_plan_tree() must recognize a CustomScan on the
target relation. This would only be helpful for custom providers that
support CurrentOfExpr quals, which is probably a bit far-fetched, but
it's not impossible I think. But even without assuming that, we need
to recognize a scanned-relation match so that we will properly throw
error if the desired relation is being scanned with both a CustomScan
and a regular scan (ie, self-join).
Also recognize ForeignScanState for similar reasons. Supporting WHERE
CURRENT OF on a foreign table is probably even more far-fetched than
it is for custom scans, but I think in principle you could do it with
postgres_fdw (or another FDW that supports the ctid column). This
would be a back-patchable bug fix if existing FDWs handled CurrentOfExpr,
but I doubt any do so I won't bother back-patching.
Now that we have a policy of hiding varlena catalog fields behind
"#ifdef CATALOG_VARLEN", there is no need for their type names to be
acceptable to the C compiler. And experimentation shows that it does
not matter to pgindent either. (If it did, we'd have problems anyway,
since these typedefs are unreferenced so far as the C compiler is
concerned, and find_typedef fails to identify such typedefs.)
Hence, remove the phony typedefs that genbki.h provided to make
some varlena field definitions compilable.
In passing, rearrange #define's into what seemed a more logical order.
It's a false positive - the variable is only used when 'onleft' is true,
and it is initialized in that case. But the compiler doesn't necessarily
see that.
Each WAL record now carries information about the modified relation and
block(s) in a standardized format. That makes it easier to write tools that
need that information, like pg_rewind, prefetching the blocks to speed up
recovery, etc.
There's a whole new API for building WAL records, replacing the XLogRecData
chains used previously. The new API consists of XLogRegister* functions,
which are called for each buffer and chunk of data that is added to the
record. The new API also gives more control over when a full-page image is
written, by passing flags to the XLogRegisterBuffer function.
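For illustration, a minimal insertion site under the new API might look
like this (the record struct, rmgr ID, and info code are hypothetical;
the XLogBeginInsert/XLogRegister*/XLogInsert calls are the ones this
commit introduces):

    XLogRecPtr
    log_sample_update(Buffer buf, xl_sample_update *xlrec)
    {
        XLogBeginInsert();

        /* the "main data" chunk, copied into the record */
        XLogRegisterData((char *) xlrec, sizeof(xl_sample_update));

        /*
         * Register the modified buffer as block 0.  REGBUF_STANDARD says
         * the page has a standard layout, letting a full-page image omit
         * the free-space hole.
         */
        XLogRegisterBuffer(0, buf, REGBUF_STANDARD);

        return XLogInsert(RM_SAMPLE_ID, XLOG_SAMPLE_UPDATE);
    }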
This also simplifies the XLogReadBufferForRedo() calls. The function can dig
the relation and block number from the WAL record, so they no longer need to
be passed as arguments.
For the convenience of redo routines, XLogReader now dissects each WAL record
after reading it, copying the main data part and the per-block data into
MAXALIGNed buffers. The data chunks are not aligned within the WAL record,
but the redo routines can assume that the pointers returned by XLogRecGet*
functions are. Redo routines are now passed the XLogReaderState, which
contains the record in the already-dissected format, instead of the plain
XLogRecord.
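A correspondingly minimal redo routine, again with a hypothetical record
type, could be sketched as follows; note that no relation or block number
is passed in, since XLogReadBufferForRedo() digs both out of the dissected
record by block ID:

    static void
    sample_redo_update(XLogReaderState *record)
    {
        /* XLogRecGetData() returns a MAXALIGNed copy of the main data */
        xl_sample_update *xlrec = (xl_sample_update *) XLogRecGetData(record);
        Buffer      buf;

        if (XLogReadBufferForRedo(record, 0, &buf) == BLK_NEEDS_REDO)
        {
            Page        page = BufferGetPage(buf);

            /* ... reapply the change described by xlrec to the page ... */
            PageSetLSN(page, record->EndRecPtr);
            MarkBufferDirty(buf);
        }
        if (BufferIsValid(buf))
            UnlockReleaseBuffer(buf);
    }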
The new record format also makes the fixed size XLogRecord header smaller,
by removing the xl_len field. The length of the "main data" portion is now
stored at the end of the WAL record, and there's a separate header after
XLogRecord for it. The alignment padding at the end of XLogRecord is also
removed. This compensates for the fact that the new format would otherwise
be more bulky than the old format.
Reviewed by Andres Freund, Amit Kapila, Michael Paquier, Alvaro Herrera,
Fujii Masao.
Custom rules must come after pgxs inclusion, not before, because any
rule added before pgxs will break the default 'all' target.
Author: Cédric Villemain <cedric@2ndquadrant.fr>
That includes VACUUM on GIN, GiST and SP-GiST indexes, and B-tree indexes
large enough to cause page deletions. Plus some other special
cases.
After this patch, the regression tests generate all different WAL record
types. Not all branches within the redo functions are covered, but it's a
step forward.
In pg_receivexlog, in order to check whether the current WAL file is
open or not, its file descriptor has to be checked against -1
as an invalid value. But, oops, 7900e94 added an incorrect test
checking the descriptor against 1. This commit fixes that bug.
Back-patch to 9.4 where the bug was added.
Spotted by Magnus Hagander
When pg_receivexlog --slot is connected to the server and the server is
shut down, walsender keeps waiting for the last WAL record to be
replicated and flushed by pg_receivexlog. But previously pg_receivexlog
issued a sync command only when the WAL file was switched, so there were
cases where the last WAL record was never flushed and walsender had to
keep waiting forever. This caused the server shutdown to get stuck.
pg_recvlogical handles this problem by calling fsync() when it receives
a request for an immediate reply from the server. That is, at shutdown,
walsender sends the request, pg_recvlogical receives it, flushes the last
WAL record, and sends the flush location back to the server. Since
walsender can see that the last WAL record has been successfully flushed,
it can exit cleanly.
This commit introduces the same logic as pg_recvlogical's into
pg_receivexlog.
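In receivelog.c terms the added logic looks roughly like the sketch below
(variable and helper names follow pg_recvlogical's version of the code and
are illustrative):

    /* a keepalive ('k') message was read; replyRequested came from it */
    if (replyRequested)
    {
        /* flush what we have written, so the reported location is durable */
        if (walfile != -1 && fsync(walfile) != 0)
        {
            fprintf(stderr, "%s: could not fsync WAL file: %s\n",
                    progname, strerror(errno));
            return false;
        }
        lastFlushPosition = blockpos;

        /* report the new flush position back to the walsender */
        if (!sendFeedback(conn, blockpos, now, false))
            return false;
    }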
Back-patch to 9.4 where pg_receivexlog was changed so that it can use
the replication slot.
Original patch by Michael Paquier, rewritten by me.
Bug report by Furuya Osamu.
The regression test cases added in commits b2cbced9e et al depended in part
on the Russian timezone offset changes of Oct 2014. While this is of no
particular concern for a default Postgres build, it was possible for a
build using --with-system-tzdata to fail the tests if the system tzdata
database wasn't au courant. Bjorn Munch and Christoph Berg both complained
about this while packaging 9.4rc1, so we probably shouldn't insist on the
system tzdata being up-to-date. Instead, make an equivalent test using a
zone change that occurred in Venezuela in 2007. With this patch, the
regression tests should pass using any tzdata set from 2012 or later.
(I can't muster much sympathy for somebody using --with-system-tzdata
on a machine whose system tzdata is more than three years out-of-date.)
These comments don't seem to have been touched in a long time. Make them
describe the current implementation rather than what was here last century,
and be a bit more explicit about the unreferenced-typedefs issue.
pg_dump/parallel.c was using realloc() directly with no error check.
While the odds of an actual failure here seem pretty low, Coverity
complains about it, so fix by using pg_realloc() instead.
While looking for other instances, I noticed a couple of places in
psql that hadn't gotten the memo about the availability of pg_realloc.
These aren't bugs, since they did have error checks, but verbosely
inconsistent code is not a good thing.
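The pattern adopted here, sketched with an illustrative call site:
pg_realloc() behaves like realloc() but prints an error and exits on
failure, so callers need no explicit check.

    #include "common/fe_memutils.h"

    static char **slots;
    static int    nslots;

    static void
    grow_slots(void)
    {
        nslots = (nslots > 0) ? nslots * 2 : 8;
        /* no error check needed: pg_realloc() exits on failure */
        slots = pg_realloc(slots, nslots * sizeof(char *));
    }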
Back-patch as far as 9.3. 9.2 did not have pg_dump/parallel.c, nor
did it have pg_realloc available in all frontend code.
For <, <=, > and >= strategies, mark the first scan key
as already matched if scanning in an appropriate direction.
If the index tuple contains no nulls we can skip the first
re-check for each tuple.
Author: Rajeev Rastogi
Reviewer: Haribabu Kommi
Rework of the code and comments by Simon Riggs
Allows pg_dump to use a snapshot previously defined by a concurrent
session that has either used pg_export_snapshot() or obtained a
snapshot when creating a logical slot. When this option is used with
parallel pg_dump, the snapshot defined by this option is used and no
new snapshot is taken.
Simon Riggs and Michael Paquier
Previously pg_receivexlog flushed WAL data only when the WAL file was switched.
Then 3dad73e added the -F option to pg_receivexlog so that users could control
how frequently sync commands were issued to WAL files. It also allowed users
to make pg_receivexlog flush WAL data immediately after writing by
specifying 0 for the -F option. However, feedback messages were not sent back
immediately even after the flush location was updated. So even if WAL data
was flushed in real time, the server could not see that for a while.
This commit removes the -F option from pg_receivexlog and adds --synchronous.
If --synchronous is specified, pg_receivexlog, like the standby's WAL receiver,
flushes WAL data as soon as there is WAL data that has not been flushed yet.
Then it sends back a feedback message identifying the latest flush location
to the server. This option is useful, for example, to make pg_receivexlog
behave as a synchronous standby by using a replication slot.
Original patch by Furuya Osamu, heavily rewritten by me.
Reviewed by Heikki Linnakangas, Alvaro Herrera and Sawada Masahiko.
DST law changes in the Turks & Caicos Islands (America/Grand_Turk) and
in Fiji. New zone Pacific/Bougainville for portions of Papua New Guinea.
Historical changes for Korea and Vietnam.
There was some confusion on how to record the case that the operation
unlinks the last non-leaf page in the branch being deleted.
_bt_unlink_halfdead_page set the "topdead" field in the WAL record to
the leaf page, but the redo routine assumed that it would be an invalid
block number in that case. This commit fixes _bt_unlink_halfdead_page to
do what the redo routine expected.
This code is new in 9.4, so backpatch there.
Buildfarm members with CLOBBER_CACHE_ALWAYS advised us that commit
85b506bbfc was mistaken in setting the relpersistence value of the
index directly in the relcache entry, within reindex_index. The reason
for the failure is that an invalidation message that comes after mucking
with the relcache entry directly, but before writing it to the catalogs,
would cause the entry to be rebuilt in place from catalogs with the
old contents, losing the update.
Fix by passing the correct persistence value to
RelationSetNewRelfilenode instead; this routine also writes the updated
tuple to pg_class, avoiding the problem. Suggested by Tom Lane.
When checking a table that has an inheritance tree marked,
if no child tables remain, we skip ANALYZE. This patch emits
a message to show that the action has been skipped.
Author: Etsuro Fujita
Reviewer: Furuya Osamu
This removes ATChangeIndexesPersistence() introduced by f41872d0c1
which was too ugly to live for long. Instead, the correct persistence
marking is passed all the way down to reindex_index, so that the
transient relation built to contain the index relfilenode can
get marked correctly right from the start.
Author: Fabrízio de Royes Mello
Review and editorialization by Michael Paquier
and Álvaro Herrera
630cd14426 added initdb --sync-only, for use by pg_upgrade, by just
exposing the existing fsync code. That's wrong, because initdb so far
had absolutely no reason to deal with tablespaces.
Fix --sync-only by additionally syncing each of the tablespaces
explicitly.
Backpatch to 9.3 where --sync-only was introduced.
Abhijit Menon-Sen and Andres Freund
Unlogged relations are only reset when performing an unclean
restart. That means they have to be synced to disk during clean
shutdowns. During normal processing that's achieved by registering a
buffer's file to be fsynced at the next checkpoint when flushed. But
ResetUnloggedRelations() doesn't go through the buffer manager, so
nothing will force the reset relations to disk before the next shutdown
checkpoint.
So just make ResetUnloggedRelations() fsync the newly created main
forks to disk.
Discussion: 20140912112246.GA4984@alap3.anarazel.de
Backpatch to 9.1 where unlogged tables were introduced.
Abhijit Menon-Sen and Andres Freund
Unlogged relations are reset at the end of crash recovery as they're
only synced to disk during a proper shutdown. Unfortunately that and
later steps can fail, e.g. due to running out of space. This reset
was, up to now, performed after marking the database as having finished
crash recovery successfully. As out-of-space errors trigger a crash
restart, that could lead to the situation that not all unlogged
relations are reset.
Once that happened, usage of unlogged relations could yield errors like
"could not open file "...": No such file or directory". Luckily
clusters that show the problem can be fixed by performing an immediate
shutdown, and starting the database again.
To fix, just call ResetUnloggedRelations(UNLOGGED_RELATION_INIT)
earlier, before marking the database as having successfully recovered.
Discussion: 20140912112246.GA4984@alap3.anarazel.de
Backpatch to 9.1 where unlogged tables were introduced.
Abhijit Menon-Sen and Andres Freund
The initial patch for RLS mistakenly included headers associated with
the executor and planner bits in rewrite/rowsecurity.h. Per policy and
general good sense, executor headers should not be included in planner
headers or vice versa.
The include of execnodes.h was a mistaken holdover from previous
versions, while the include of relation.h was used for Relation's
definition, which should have been coming from utils/relcache.h. This
patch cleans these issues up, adds comments to the RowSecurityPolicy
struct and the RowSecurityConfigType enum, and changes Relation->rsdesc
to Relation->rd_rsdesc to follow Relation field naming convention.
Additionally, utils/rel.h was including rewrite/rowsecurity.h, which
wasn't a great idea since that was pulling in things not really needed
in utils/rel.h (which gets included in quite a few places). Instead,
use 'struct RowSecurityDesc' for the rd_rsdesc field and add comments
explaining why.
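The trick in question, sketched with all other fields elided: declaring
the field through an incomplete struct type means the many files that
include rel.h never need rowsecurity.h.

    /* in utils/rel.h */
    typedef struct RelationData
    {
        /* ... many other fields elided from this sketch ... */

        /* use "struct" here to avoid needing to include rowsecurity.h */
        struct RowSecurityDesc *rd_rsdesc;  /* row-security policies, or NULL */
    } RelationData;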
Lastly, add an include into access/nbtree/nbtsort.c for
utils/sortsupport.h, which was evidently missed due to the above mess.
Pointed out by Tom in 16970.1415838651@sss.pgh.pa.us; note that the
concerns regarding a similar situation in the custom-path commit still
need to be addressed.
Per complaint from Tom.
While at it, throw in some extra tests for nulls as well, and make sure
that the set of data we insert on the second round is not identical to
the first one. Both measures are intended to improve coverage of the
test.
Also uncomment the ON COMMIT DROP clause on the CREATE TEMP TABLE
commands. This doesn't have any effect for someone examining the
regression database after the tests are done, but it reduces clutter for
those that execute the script directly.
This function has a loop which can lead to uninterruptible process
"stalls" (actually infinite loops) when some bugs are triggered. Avoid
that unpleasant situation by adding a check for interrupts in a place
that shouldn't degrade performance in the normal case.
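The shape of the fix, sketched with a hypothetical loop body;
CHECK_FOR_INTERRUPTS() is nearly free when no interrupt is pending, which
is why it doesn't degrade the normal case:

    #include "miscadmin.h"      /* CHECK_FOR_INTERRUPTS() */

    for (;;)
    {
        /* let query cancel or fast shutdown break an infinite loop */
        CHECK_FOR_INTERRUPTS();

        if (try_fetch_tuple())  /* hypothetical loop work */
            break;
    }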
Backpatch to 9.3. Older branches have an identical loop here, but the
aforementioned bugs are only a problem starting in 9.3 so there doesn't
seem to be any point in backpatching any further.
In some workloads BufferGetBlockNumber() shows up in profiles due to
the sheer number of calls to it (and because it causes cache
misses). The compiler can't move it out of the loop because it's a
full extern function call...
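One remedy, sketched here with an illustrative loop and helper, is simply
to hoist the call, since a pinned buffer's block number cannot change:

    BlockNumber blkno = BufferGetBlockNumber(buf);  /* once, not per item */
    OffsetNumber off;

    for (off = FirstOffsetNumber; off <= maxoff; off++)
        process_item(blkno, off);   /* hypothetical per-item work */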
pg_atomic_init_u64 (indirectly) uses compare/exchange to guarantee
atomic writes on platforms where compare/exchange is available, but
64bit writes aren't atomic (yes, those exist). That leads to a
harmless read of the initial value of the variable.
Fix breakage induced by commits d8d3d2a4f3
and 463f2625a5fb183b6a8925ccde98bb3889f921d9: pg_dumpall has crashed when
attempting to dump from pre-8.1 servers since then, due to faulty
construction of the query used for dumping roles from older servers.
The query was erroneous as of the earlier commit, but it wasn't exposed
unless you tried to use --binary-upgrade, which you presumably wouldn't
with a pre-8.1 server. However commit 463f2625a made it fail always.
In HEAD, also fix additional breakage induced in the same query by
commit 491c029dbc, which evidently wasn't
tested against pre-8.1 servers either.
The bug is only latent in 9.1 because 463f2625a hadn't landed yet, but
it seems best to back-patch all branches containing the faulty query.
Gilles Darold
There are basically three situations in which logical decoding needs
to perform cache invalidation. During/After replaying a transaction
with catalog changes, when skipping an uninteresting transaction that
performed catalog changes and when erroring out while replaying a
transaction. Unfortunately these three cases were all done slightly
differently - partially because 8de3e410fa, which greatly simplifies
matters, got committed in the midst of the development of logical
decoding.
The actually problematic case was when logical decoding skipped
transaction commits (and thus processed invalidations). When used via
the SQL interface cache invalidation could access the catalog - bad,
because we didn't set up enough state to allow that correctly. It'd
not be hard to set up sufficient state, but the simpler solution is to
always perform cache invalidation outside a valid transaction.
Also make the different cache invalidation cases look as similar as
possible, to ease code review.
This fixes the assertion failure reported by Antonin Houska in
53EE02D9.7040702@gmail.com. The presented testcase has been expanded
into a regression test.
Backpatch to 9.4, where logical decoding was introduced.
When building the initial historic catalog snapshot there were
scenarios where snapbuild.c would use incorrect xmin/xmax values when
starting from an xl_running_xacts record. The values used were always a
bit suspect, but happened to be correct in the easy-to-test
cases. Notably the values used when the initial snapshot was
computed while no other transactions were running were correct.
This is likely to be the cause of the occasional buildfarm failures on
animals markhor and tick; but it's quite possible to reproduce
problems without CLOBBER_CACHE_ALWAYS.
Backpatch to 9.4, where logical decoding was introduced.
There was a window in RestoreBackupBlock where a page would be zeroed out,
but not yet locked. If a backend pinned and locked the page in that window,
it saw the zeroed page instead of the old page or new page contents, which
could lead to missing rows in a result set, or errors.
To fix, replace RBM_ZERO with RBM_ZERO_AND_LOCK, which atomically pins,
zeroes, and locks the page, if it's not in the buffer cache already.
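Outside of recovery the new mode is reachable through ReadBufferExtended()
as well; a hedged sketch of initializing a fresh page under the atomic
pin-zero-lock behavior (relation, fork, and block arguments illustrative):

    Buffer      buf;
    Page        page;

    /* pins, zeroes, and locks in one step if not already cached */
    buf = ReadBufferExtended(rel, MAIN_FORKNUM, blkno,
                             RBM_ZERO_AND_LOCK, NULL);
    page = BufferGetPage(buf);

    /* safe to initialize: no one can have seen the zeroed page unlocked */
    PageInit(page, BufferGetPageSize(buf), 0);

    MarkBufferDirty(buf);
    UnlockReleaseBuffer(buf);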
In stable branches, the old RBM_ZERO constant is renamed to RBM_DO_NOT_USE,
to avoid breaking any 3rd party extensions that might use RBM_ZERO. More
importantly, this avoids renumbering the other enum values, which would
cause even bigger confusion in extensions that use ReadBufferExtended, but
haven't been recompiled.
Backpatch to all supported versions; this has been racy since hot standby
was introduced.
The hope is that we can use this to produce better diagnostics in
some cases.
Peter Geoghegan, reviewed by Michael Paquier, with some further
changes by me.
This only happens if a client issues a Parse message with an empty query
string, which is a bit odd; but since it is explicitly called out as legal
by our FE/BE protocol spec, we'd probably better continue to allow it.
Fix by adding tests everywhere that the raw_parse_tree field is passed to
functions that don't or shouldn't accept NULL. Also make it clear in the
relevant comments that NULL is an expected case.
This reverts commits a73c9dbab0 and
2e9650cbcf, which fixed specific crash
symptoms by hacking things at what now seems to be the wrong end, ie the
callee functions. Making the callees allow NULL is superficially more
robust, but it's not always true that there is a defensible thing for the
callee to do in such cases. The caller has more context and is better
able to decide what the empty-query case ought to do.
Per followup discussion of bug #11335. Back-patch to 9.2. The code
before that is sufficiently different that it would require development
of a separate patch, which doesn't seem worthwhile for what is believed
to be an essentially cosmetic change.
Heikki noticed in 544E23C0.8090605@vmware.com that slot.c and
snapbuild.c were missing the FIN_CRC32 call when computing/checking
checksums of on disk files. That doesn't lower the error detection
capabilities of the checksum, but is inconsistent with other usages.
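For reference, the complete checksum protocol these files should follow
(struct name illustrative); FIN_CRC32() does the final bit inversion,
which is the step that was missing:

    pg_crc32    checksum;

    INIT_CRC32(checksum);
    COMP_CRC32(checksum, (char *) &ondisk, sizeof(ondisk));
    FIN_CRC32(checksum);        /* the call slot.c and snapbuild.c lacked */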
In a followup mail Heikki also noticed that, contrary to a comment,
the 'version' and 'length' struct fields of a replication slot's on disk
data were not covered by the checksum. That's not likely to lead to
actually missed corruption as those fields are cross checked with the
expected version and the actual file length. But it's wrong
nonetheless.
As fixing these issues makes existing on disk files unreadable, bump
the expected versions of on disk files for both slots and logical
decoding historic catalog snapshots. This means that loading old
files will fail with
ERROR: "replication slot file ... has unsupported version 1"
and
ERROR: "snapbuild state file ... has unsupported version 1 instead of
2" respectively. Given the low likelihood of anybody already using
these new features in a production setup that seems acceptable.
Fixing these issues made me notice that there's no regression test
covering the loading of a historic snapshot from disk - so add one.
Backpatch to 9.4 where these features were introduced.
On Windows, DROP TABLESPACE has a race condition when run concurrently
with other processes having opened files in the tablespace. This led to
a rare failure on buildfarm member frogmouth. Back-patch to 9.4, where
the reconnection was introduced.
When the recursive search in dependency.c visits a column and then later
visits the whole table containing the column, it needs to propagate the
drop-context flags for the table to the existing target-object entry for
the column. Otherwise we might refuse the DROP (if not CASCADE) on the
incorrect grounds that there was no automatic drop pathway to the column.
Remarkably, this has not been reported before, though it's possible at
least when an extension creates both a datatype and a table using that
datatype.
Rather than just marking the column as allowed to be dropped, it might
seem good to skip the DROP COLUMN step altogether, since the later DROP
of the table will surely get the job done. The problem with that is that
the datatype would then be dropped before the table (since the whole
situation occurred because we visited the datatype, and then recursed to
the dependent column, before visiting the table). That seems pretty risky,
and the case is rare enough that it doesn't seem worth expending a lot of
effort or risk to make the drops happen in a safe order. So we just play
dumb and delete the column separately according to the existing drop
ordering rules.
Per report from Petr Jelinek, though this is different from his proposed
patch.
Back-patch to 9.1, where extensions were introduced. There's currently
no evidence that such cases can arise before 9.1, and in any case we would
also need to back-patch cb5c2ba2d8 to 9.0
if we wanted to back-patch this.
Previously the maximum size of the GIN pending list was controlled only by
work_mem. But the reasonable value of work_mem and the reasonable size
of the list are basically not the same, so it was not appropriate to
control both of them with only one GUC, i.e., work_mem. This commit
introduces a new GUC, pending_list_cleanup_size, separate from work_mem,
to allow users to control only the size of the list.
Also this commit adds pending_list_cleanup_size as a new storage parameter
to allow users to specify the size of the list per index. This is useful,
for example, when users want to increase the size of the list only for
a GIN index that is updated heavily, and decrease it otherwise.
Reviewed by Etsuro Fujita.
I missed an additional colon in the previous patch. Oops. To make that
mistake less likely in the future, add comments as placeholders for unused
inputs and outputs in inline assembly.
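The convention, shown on the familiar x86 test-and-set from s_lock.h
(reproduced here only as an illustration): every operand group gets at
least a comment, so a miscounted colon stands out on review.

    static __inline__ int
    tas(volatile slock_t *lock)
    {
        register slock_t _res = 1;

        __asm__ __volatile__(
            "   lock            \n"
            "   xchgb   %0,%1   \n"
            : "+q"(_res), "+m"(*lock)
            : /* no inputs */
            : "memory", "cc");
        return (int) _res;
    }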
The code that generates the BRIN_XLOG_UPDATE record removes the buffer
reference when the page that's the target for the updated tuple is freshly
initialized. This is a pretty usual optimization, but was breaking the
case where the revmap buffer, which is referenced in the same WAL
record, is getting a backup block: the replay code was using backup
block index 1, which is not valid when the update target buffer gets
pruned; the revmap buffer gets assigned 0 instead. Make sure to use the
correct backup block index for revmap when replaying.
Bug reported by Fujii Masao.
At one time it wasn't terribly important what column names were associated
with the fields of a composite Datum, but since the introduction of
operations like row_to_json(), it's important that looking up the rowtype
ID embedded in the Datum returns the column names that users would expect.
That did not work terribly well before this patch: you could get the column
names of the underlying table, or column aliases from any level of the
query, depending on minor details of the plan tree. You could even get
totally empty field names, which is disastrous for cases like row_to_json().
To fix this for whole-row Vars, look to the RTE referenced by the Var, and
make sure its column aliases are applied to the rowtype associated with
the result Datums. This is a tad scary because we might have to return
a transient RECORD type even though the Var is declared as having some
named rowtype. In principle it should be all right because the record
type will still be physically compatible with the named rowtype; but
I had to weaken one Assert in ExecEvalConvertRowtype, and there might be
third-party code containing similar assumptions.
Similarly, RowExprs have to be willing to override the column names coming
from a named composite result type and produce a RECORD when the column
aliases visible at the site of the RowExpr differ from the underlying
table's column names.
In passing, revert the decision made in commit 398f70ec07 to add
an alias-list argument to ExecTypeFromExprList: better to provide that
functionality in a separate function. This also reverts most of the code
changes in d685814835, which we don't need because we're no longer
depending on the tupdesc found in the child plan node's result slot to be
blessed.
Back-patch to 9.4, but not earlier, since this solution changes the results
in some cases that users might not have realized were buggy. We'll apply a
more restricted form of this patch in older branches.
Besides a couple of typo fixes per David Rowley, Thom Brown, and Amit
Langote, and mentions of BRIN in the general CREATE INDEX page (again per
David), this includes silencing MSVC compiler warnings (thanks Microsoft)
and an additional variable initialization per the Coverity scanner.
Reported by David Rowley: variadic macros are a problem. Get rid of
them using a trick suggested by Tom Lane: add extra parentheses where
needed. In the future we might decide we don't need the calls at all
and remove them, but it seems appropriate to keep them while this code
is still new.
Also from David Rowley: brininsert() was trying to use a variable before
initializing it. Fix by moving the brin_form_tuple call (which
initializes the variable) to within the locked section.
Reported by Peter Eisentraut: can't use "new" as a struct member name,
because C++ compilers will choke on it, as reported by cpluspluscheck.
Work around accidental test failures because the working directory path
is too long by creating a temporary directory in the (hopefully shorter)
system location, symlinking that to the working directory, and creating
the tablespaces using the shorter path.
This allows extension modules to define their own methods for
scanning a relation, and get the core code to use them. It's
unclear as yet how much use this capability will find, but we
won't find out if we never commit it.
KaiGai Kohei, reviewed at various times and in various levels
of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro
Herrera, and myself.
Now that the backup blocks are appended to the WAL record in xloginsert.c,
XLogInsert doesn't see them anymore and cannot remove them from the version
reconstructed for xlog_outdesc. This makes running with wal_debug=on more
expensive, as we now make (unnecessary) temporary copies of the backup
blocks, but it doesn't seem worth convoluting the code to keep that
optimization.
Reported by Alvaro Herrera.
Test misc depends on brin, but it was earlier in the serial schedule
file. I didn't notice this because I only run the parallel schedule,
but the buildfarm exposed my folly ...
BRIN is a new index access method intended to accelerate scans of very
large tables, without the maintenance overhead of btrees or other
traditional indexes. They work by maintaining "summary" data about
block ranges. Bitmap index scans work by reading each summary tuple and
comparing it with the query quals; all pages in the range are returned
in a lossy TID bitmap if the quals are consistent with the values in the
summary tuple, otherwise not. Normal index scans are not supported
because these indexes do not store TIDs.
As new tuples are added into the index, the summary information is
updated (if the block range in which the tuple is added is already
summarized) or not; in the latter case, a subsequent pass of VACUUM or
the brin_summarize_new_values() function will create the summary
information.
For data types with natural 1-D sort orders, the summary info consists
of the maximum and the minimum values of each indexed column within each
page range. This type of operator class we call "Minmax", and we
supply a bunch of them for most data types with B-tree opclasses.
Since the BRIN code is generalized, other approaches are possible for
things such as arrays, geometric types, ranges, etc; even for things
such as enum types we could do something different than minmax with
better results. In this commit I only include minmax.
Catalog version bumped due to new builtin catalog entries.
There's more that could be done here, but this is a good step forwards.
Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera,
with contribution by Heikki Linnakangas.
Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas.
Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo.
PS:
The research leading to these results has received funding from the
European Union's Seventh Framework Programme (FP7/2007-2013) under
grant agreement n° 318633.
The code that generated a record to clear the F_TUPLES_DELETED flag hasn't
existed since we got rid of old-style VACUUM FULL. I kept the code that sets
the flag, although it's not used for anything anymore, because it might
still be interesting information for debugging purposes that some tuples
have been deleted from a page.
Likewise, the code to turn the root page from non-leaf to leaf page was
removed when we got rid of old-style VACUUM FULL. Remove the code to replay
that action, too.
dict_thesaurus stored phrase IDs in uint16 fields, so it would get confused
and even crash if there were more than 64K entries in the configuration
file. It turns out to be basically free to widen the phrase IDs to uint32,
so let's just do so.
This was complained of some time ago by David Boutin (in bug #7793);
he later submitted an informal patch but it was never acted on.
We now have another complaint (bug #11901 from Luc Ouellette) so it's
time to make something happen.
This is basically Boutin's patch, but for future-proofing I also added a
defense against too many words per phrase. Note that we don't need any
explicit defense against overflow of the uint32 counters, since before that
happens we'd hit array allocation sizes that repalloc rejects.
Back-patch to all supported branches because of the crash risk.
The default JSONB GIN opclass (jsonb_ops) converts numeric data values
to strings for storage in the index. It must ensure that numeric values
that would compare equal (such as 12 and 12.00) produce identical strings,
else index searches would have behavior different from regular JSONB
comparisons. Unfortunately the function charged with doing this was
completely wrong: it could reduce distinct numeric values to the same
string, or reduce equivalent numeric values to different strings. The
former type of error would only lead to search inefficiency, but the
latter type of error would cause index entries that should be found by
a search to not be found.
Repairing this bug therefore means that it will be necessary for 9.4 beta
testers to reindex GIN jsonb_ops indexes, if they care about getting
correct results from index searches involving numeric data values within
the comparison JSONB object.
Per report from Thomas Fanghaenel.
Previously a .ready file was created for the timeline history file at the
end of an archive recovery even when WAL archiving was not enabled.
This creation is unnecessary and causes the .ready file to remain forever.
This commit changes archive recovery so that it creates the .ready file for
the timeline history file only when WAL archiving is enabled.
Backpatch to all supported versions.
xlog.c is huge; this makes it a little bit smaller, which is nice. Functions
related to putting together the WAL record are in xloginsert.c, and the
lower level stuff for managing WAL buffers and such are in xlog.c.
Also move the definition of XLogRecord to a separate header file. This
causes churn in the #includes of all the files that write WAL records, and
redo routines, but it avoids pulling in xlog.h into most places.
Reviewed by Michael Paquier, Alvaro Herrera, Andres Freund and Amit Kapila.
Long ago we briefly had an "autocommit" GUC that turned server-side
autocommit on and off. That behavior was removed in 7.4 after concluding
that it broke far too much client-side logic, and making clients cope with
both behaviors was impractical. But the GUC variable was left behind, so
as not to break any client code that might be trying to read its value.
Enough time has now passed that we should remove the GUC completely.
Whatever vestigial backwards-compatibility benefit it had is outweighed by
the risk of confusion for newbies who assume it ought to do something,
as per a recent complaint from Wolfgang Wilhelm.
In passing, adjust what seemed to me a rather confusing documentation
reference to libpq's autocommit behavior. libpq as such knows nothing
about autocommit, so psql is probably what was meant.
Obviously, every translation unit should not be declaring this
separately. It needs to be PGDLLIMPORT as well, to avoid breaking
third-party code that uses any of the functions that the commit
mentioned above changed to macros.
This is a followup to commit 43ac12c6e6,
which added regression tests checking that I/O functions of built-in
types are not marked volatile. Complaining in CREATE TYPE should push
developers of add-on types to fix any misdeclared functions in their
types. It's just a warning not an error, to avoid creating upgrade
problems for what might be just cosmetic mis-markings.
Aside from adding the warning code, fix a number of types that were
sloppily created in the regression tests.
The previous coding assumed that we could just let buffers for the
database's old tablespace age out of the buffer arena naturally.
The folly of that is exposed by bug #11867 from Marc Munro: the user could
later move the database back to its original tablespace, after which any
still-surviving buffers would match lookups again and appear to contain
valid data. But they'd be missing any changes applied while the database
was in the new tablespace.
This has been broken since ALTER SET TABLESPACE was introduced, so
back-patch to all supported branches.
The old algorithm was found to not be the usual CRC-32 algorithm, used by
Ethernet et al. We were using a non-reflected lookup table with code meant
for a reflected lookup table. That's a strange combination that AFAICS does
not correspond to any bit-wise CRC calculation, which makes it difficult to
reason about its properties. Although it has worked well in practice, it
seems safer to use a well-known algorithm.
Since we're changing the algorithm anyway, we might as well choose a
different polynomial. The Castagnoli polynomial has better error-correcting
properties than the traditional CRC-32 polynomial, even if we had
implemented it correctly. Another reason for picking that is that some new
CPUs have hardware support for calculating CRC-32C, but not CRC-32, let
alone our strange variant of it. This patch doesn't add any support for such
hardware, but a future patch could now do that.
The old algorithm is kept around for tsquery and pg_trgm, which use the
values in indexes that need to remain compatible so that pg_upgrade works.
While we're at it, share the old lookup table for CRC-32 calculation
between hstore, ltree and core. They all use the same table, so might as
well.
The reasons behind commit 0d147e43ad still
stand, so this reverts the non-cosmetic portion of commit
a7983e989d. Back-patch to 9.4, where the
latter commit first appeared.
Cygwin builds require this of dependencies pertaining to pattern rules.
On Cygwin, stat("foo") in the absence of a file with that exact name can
locate foo.exe. While GNU make uses stat() for dependencies of ordinary
rules, it uses readdir() to assess dependencies of pattern rules.
Therefore, a pattern rule dependency should match any underlying file
name exactly. Back-patch to 9.4, where the dependency was introduced.
A background worker can use pq_redirect_to_shm_mq() to direct protocol
messages that would normally be sent to the frontend to a shm_mq so that
another process may read them.
The receiving process may use pq_parse_errornotice() to parse an
ErrorResponse or NoticeResponse from the background worker and, if
it wishes, ThrowErrorData() to propagate the error (with or without
further modification).
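A hedged sketch of the receiving side (queue setup and message transport
elided; assume msg already holds the body of an 'E' or 'N' message read
from the worker's shm_mq):

    StringInfoData  msg;        /* payload read from the shm_mq */
    ErrorData       edata;

    /* decode the wire-format ErrorResponse/NoticeResponse */
    pq_parse_errornotice(&msg, &edata);

    /* optionally modify, then re-throw in the current process */
    edata.elevel = ERROR;       /* illustrative modification */
    ThrowErrorData(&edata);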
Patch by me. Review by Andres Freund.
This reassociates a dynamic shared memory handle previously passed to
dsm_pin_mapping with the current resource owner, so that it will be
cleaned up at the end of the current query.
Patch by me. Review of the function name by Andres Freund, Amit
Kapila, Jim Nasby, Petr Jelinek, and Álvaro Herrera.
As noted by Noah Misch, my initial cut at fixing bug #11638 didn't cover
all cases where ANALYZE might be invoked in an unsafe context. We need to
test the result of IsInTransactionChain not IsTransactionBlock; which is
notationally a pain because IsInTransactionChain requires an isTopLevel
flag, which would have to be passed down through several levels of callers.
I chose to pass in_outer_xact (ie, the result of IsInTransactionChain)
rather than isTopLevel per se, as that seemed marginally more apropos
for the intermediate functions to know about.
Nobody seemed concerned about this naming when it originally went in,
but there's a pending patch that implements the opposite of
dsm_keep_mapping, and the term "unkeep" was judged unpalatable.
"unpin" has existing precedent in the PostgreSQL code base, and the
English language, so use this terminology instead.
Per discussion, back-patch to 9.4.
VACUUM and ANALYZE update the target table's pg_class row in-place, that is
nontransactionally. This is OK, more or less, for the statistical columns,
which are mostly nontransactional anyhow. It's not so OK for the DDL hint
flags (relhasindex etc), which might get changed in response to
transactional changes that could still be rolled back. This isn't a
problem for VACUUM, since it can't be run inside a transaction block nor
in parallel with DDL on the table. However, we allow ANALYZE inside a
transaction block, so if the transaction had earlier removed the last
index, rule, or trigger from the table, and then we roll back the
transaction after ANALYZE, the table would be left in a corrupted state
with the hint flags not set though they should be.
To fix, suppress the hint-flag updates if we are InTransactionBlock().
This is safe enough because it's always OK to postpone hint maintenance
some more; the worst-case consequence is a few extra searches of pg_index
et al. There was discussion of instead using a transactional update,
but that would change the behavior in ways that are not all desirable:
in most scenarios we're better off keeping ANALYZE's statistical values
even if the ANALYZE itself rolls back. In any case we probably don't want
to change this behavior in back branches.
Per bug #11638 from Casey Shobe. This has been broken for a good long
time, so back-patch to all supported branches.
Tom Lane and Michael Paquier, initial diagnosis by Andres Freund
Instead of initializing a new TransInvalidationInfo for every
transaction or subtransaction, we can just do it for those
transactions or subtransactions that actually need to queue
invalidation messages. That also avoids needing to free those
entries at the end of a transaction or subtransaction that does
not generate any invalidation messages, which is by far the
common case.
Patch by me. Review by Simon Riggs and Andres Freund.
If you call PQreset() repeatedly, and the connection cannot be
re-established, the error messages from the failed connection attempts
kept accumulating in the error string.
Fixes bug #11455 reported by Caleb Epstein. Backpatch to all supported
versions.
Since we got rid of non-MVCC catalog scans, the fourth reason given for
using a non-transactional update in index_update_stats() is obsolete.
The other three are still good, so we're not going to change the code,
but fix the comment.
Newer toolchains append the extension implicitly if missing, but
buildfarm member narwhal (gcc 3.4.2, ld 2.15.91 20040904) does not.
This affects most core libraries having an exports.txt file, namely
libpq and the ECPG support libraries. On Windows Server 2003, Windows
API functions that load and unload DLLs internally will mistakenly
unload a libpq whose DLL header reports "LIBPQ" instead of "LIBPQ.dll".
When control subsequently returns to libpq, the backend crashes.
Back-patch to 9.4, like commit 846e91e022.
Before that commit, we used a different linking technique that yielded
"libpq.dll" in the DLL header.
Commit 53566fc094 worked around this by
eliminating a call to a function that loads and unloads DLLs internally.
That commit is no longer necessary for correctness, but its improving
consistency with the MSVC build remains valid.
1. The comparison for matching terms used only the CRC to decide if there's
a match. Two different terms with the same CRC gave a match.
2. It assumed that if the second operand has more terms than the first, it's
never a match. That assumption is bogus, because there can be duplicate
terms in either operand.
Rewrite the implementation in a way that doesn't have those bugs.
Backpatch to all supported versions.
Commit ad5d46a449 thought that we could
get around the known portability issues of strftime's %Z specifier by
using %z instead. However, that idea seems to have been innocent of
any actual research, as it certainly missed the facts that
(1) %z is not portable to pre-C99 systems, and
(2) %z doesn't actually act differently from %Z on Windows anyway.
Per failures on buildfarm member hamerkop.
While at it, centralize the code defining what strftime format we
want to use in pg_dump; three copies of that string seems a bit much.
The malloc request was 1 byte too small for the worst-case output.
This seems relatively unlikely to cause any problems in practice,
as the worst case only occurs if the input string contains no
characters other than single-quote or newline, and even then
malloc alignment padding would probably save the day. But it's
definitely a bug.
David Rowley
Since we taught btree to handle ScalarArrayOpExpr quals natively (commit
9e8da0f757), the planner has always included
ScalarArrayOpExpr quals in index conditions if possible. However, if the
qual is for a non-first index column, this could result in an inferior plan
because we can no longer take advantage of index ordering (cf. commit
807a40c551). It can be better to omit the
ScalarArrayOpExpr qual from the index condition and let it be done as a
filter, so that the output doesn't need to get sorted. Indeed, this is
true for the query introduced as a test case by the latter commit.
To fix, restructure get_index_paths and build_index_paths so that we
consider paths both with and without ScalarArrayOpExpr quals in non-first
index columns. Redesign the API of build_index_paths so that it reports
what it found, saving useless second or third calls.
Report and patch by Andrew Gierth (though rather heavily modified by me).
Back-patch to 9.2 where this code was introduced, since the issue can
result in significant performance regressions compared to plans produced
by 9.1 and earlier.
Perl 5.12 ships with a somewhat broken version of Test::Simple, so skip
the tests if that is found.
The relevant fix is
0.98 Wed, 23 Feb 2011 14:38:02 +1100
Bug Fixes
* subtest() should not fail if $? is non-zero. (Aaron Crane)
The prove program included in Perl 5.8 does not support the --ext
option, so don't use that and use wildcards on the command line instead.
Note that the tests will still all be skipped, because, for instance,
the version of Test::More is too old, but at least the regular
mechanisms for handling that will apply, instead of failing to call
prove altogether.
Windows has one locale whose name contains a non-ASCII character:
"Norwegian (Bokmål)" (that's an 'a' with a ring on top). That causes
trouble; when passing it to setlocale(), it's not clear what encoding the
argument should be in. Another problem is that the locale name is stored in
the pg_database catalog table, and the encoding used there depends on what
server encoding happens to be in use when the database is created. For
example, if you issue the CREATE DATABASE when connected to a UTF-8
database, the locale name is stored in pg_database in UTF-8. As long as all
locale names are pure ASCII, that's not a problem.
To work around that, map the troublesome locale name to a pure-ASCII alias
of the same locale, "norwegian-bokmal".
Now, this doesn't change the existing values that are already in
pg_database and in postgresql.conf. Old clusters will need to be fixed
manually. Instructions for that need to be put in the release notes.
This fixes bug #11431 reported by Alon Siman-Tov. Backpatch to 9.2;
backpatching further would require more work than seems worth it.
Apparently, this is a very common mistake for users to make; it is
better to have it fail reasonably rather than potentially throw a large
number of errors. Since we have a magic string at the start of the
file, we can detect the case easily and there's no other possible useful
behavior anyway.
Author: Craig Ringer
We have a project policy that I/O functions must not be volatile, as per
commit aab353a60b, but we weren't doing
anything to enforce that. In most usage the marking of the function
doesn't matter as long as its behavior is sane --- but I/O casts can
expose the marking as user-visible behavior, as per today's complaint
from Joe Van Dyk about contrib/ltree.
This test as such will only protect us against future errors in built-in
data types. To catch the same error in contrib or third-party types,
perhaps we should make CREATE TYPE complain? But that's a separate
issue from enforcing the policy for built-in types.
Don't crash if an ispell dictionary definition contains flags but not
any compound affixes. (This isn't a security issue since only superusers
can install affix files, but still it's a bad thing.)
Also, be more careful about detecting whether an affix-file FLAG command
is old-format (ispell) or new-format (myspell/hunspell). And change the
error message about mixed old-format and new-format commands into something
intelligible.
Per bug #11770 from Emre Hasegeli. Back-patch to all supported branches.
Testing reveals that the memory allocation we do at transaction start
has small but measurable overhead on simple transactions. To cut
down on that overhead, defer some of that work to the point when
AFTER triggers are first used, thus avoiding it altogether if they
never are.
Patch by me. Review by Andres Freund.
This commit simply removes the second argument of PSQLexec that was
set to the same value everywhere. Comments and code blocks related
to this parameter are removed.
Noticed by Heikki Linnakangas, reviewed by Michael Paquier
Previously, this was not exposed outside of miscinit.c. It is needed
for the pending pg_background patch, and will also be needed for
parallelism. Without it, there's no way for a background worker to
re-create the exact authentication environment that was present in the
process that started it, which could lead to security exposures.
Previously, archive recovery always created a .ready file for
the last WAL file of the old timeline at the end of recovery even when
it was restored from the archive and had a .done file. That is, there were
cases where the WAL file had both .ready and .done files.
This caused the already-archived WAL file to be archived again.
This commit prevents archive recovery from creating a .ready file
for the last WAL file if it has a .done file, in order to prevent it from
being archived again.
This bug was added when the cascading replication feature was introduced,
i.e., in commit 5286105800.
So, back-patch to 9.2, where cascading replication was added.
Reviewed by Michael Paquier
In a couple of code paths, pg_class_aclcheck is called in succession
with multiple different modes set. This patch combines those modes to
have a single call of this function and reduce a bit the process
overhead of permission checking.
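For instance (call site illustrative), because pg_class_aclcheck() applies
any-of semantics to a multi-bit mode mask, two successive checks that pass
if either privilege is held collapse into one call:

    /* before: two successive ACL lookups */
    if (pg_class_aclcheck(relid, GetUserId(), ACL_USAGE) != ACLCHECK_OK &&
        pg_class_aclcheck(relid, GetUserId(), ACL_UPDATE) != ACLCHECK_OK)
        elog(ERROR, "permission denied");

    /* after: one combined check, same behavior */
    if (pg_class_aclcheck(relid, GetUserId(),
                          ACL_USAGE | ACL_UPDATE) != ACLCHECK_OK)
        elog(ERROR, "permission denied");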
Author: Michael Paquier <michael@otacoo.com>
Reviewed-by: Fabrízio de Royes Mello <fabriziomello@gmail.com>
The EOF-detection logic in pqReadData was a bit confused about who should
set up the error message in case the kernel gives us read-ready-but-no-data
rather than ECONNRESET or some other explicit error condition. Since the
whole point of this situation is that the lower-level functions don't know
there's anything wrong, pqReadData itself must set up the message. But
keep the assumption that if an errno was reported, a message was set up at
lower levels.
Per bug #11712 from Marko Tiikkaja. It's been like this for a very long
time, so back-patch to all supported branches.
When commit 846e91e022 switched the linker
driver from dlltool/dllwrap to gcc, it became possible for linking to
choose shared libgcc. Backends having loaded a module dynamically
linked to libgcc can exit abnormally, which the postmaster treats like a
crash. Resume use of static libgcc exclusively, like 9.3 and earlier.
Back-patch to 9.4.
This improves consistency with the MSVC build. On buildfarm member
narwhal, since commit 846e91e022,
shfolder.dll:SHGetFolderPath() crashes when dblink calls it by way of
pqGetHomeDirectory(). Back-patch to 9.4, where that commit first
appeared. How it caused this regression remains a mystery. This is a
partial revert of commit 889f038129, which
adopted shfolder.dll for Windows NT 4.0 compatibility. PostgreSQL 8.2
dropped support for that operating system.
Building the documentation with XSLT does not check the DTD, like a
DSSSL build would. One can often get away with having invalid XML, but
the stylesheets might then create incorrect output, as they are not
designed to handle that. Therefore, check the validity of the XML
against the DTD, using xmllint, during the build.
Add xmllint detection to configure, and add some documentation.
xmllint comes with libxml2, which is already in use, but it might be in
a separate package, such as libxml2-utils on Debian.
Reviewed-by: Fabien COELHO <coelho@cri.ensmp.fr>
The duplication originated in cdd46c765, where restartpoints were
introduced.
In LogCheckpointStart's case the duplication actually led to the
compiler's format string checking not being effective because the
format string wasn't constant.
Arguably these messages shouldn't be elog(), but ereport() style
messages. That'd even allow translating the messages... But as
there are more mistakes of that kind in surrounding code, it seems
better to change that separately.
Commit 7dbb606938 added a new CHECKPOINT_FLUSH_ALL flag. As that
commit needed to be backpatched I didn't change the numeric values of
the existing flags as that could lead to nasty problems if any
external code issued checkpoints. That's not a concern on master, so
renumber them there.
Also add a comment about CHECKPOINT_FLUSH_ALL above
CreateCheckPoint().
CREATE DATABASE and ALTER DATABASE .. SET TABLESPACE copy the source
database directory on the filesystem level. To ensure the on disk
state is consistent they block out users of the affected database and
force a checkpoint to flush out all data to disk. Unfortunately, up to
now, that checkpoint didn't flush out dirty buffers from unlogged
relations.
That bug means there could be leftover dirty buffers in either the
template database, or the database in its old location, leading to
problems when accessing relations in an inconsistent state; and to
possible problems during shutdown in the SET TABLESPACE case because
buffers belonging to files that don't exist anymore are flushed.
This was reported in bug #10675 by Maxim Boguk.
Fix by Pavan Deolasee, modified somewhat by me. Reviewed by MauMau and
Fujii Masao.
Backpatch to 9.1 where unlogged tables were introduced.
json_agg and json_object_agg and their associated transition functions
should have been marked as stable rather than immutable, as they call IO
functions indirectly. Changing this probably isn't going to make much
difference, as you can't use an aggregate function in an index
expression, but we should be correct nevertheless.
json_object, on the other hand, should be marked immutable rather than
stable, as it does not call IO functions.
As discussed on -hackers, this change is being made without bumping the
catalog version, as we don't want to do that at this stage of the cycle,
and the changes are very unlikely to affect anyone.
If an inline-able SQL function taking a composite argument is used in a
LATERAL subselect, and the composite argument is a lateral reference,
the planner could fail with "variable not found in subplan target list",
as seen in bug #11703 from Karl Bartel. (The outer function call used in
the bug report and in the committed regression test is not really necessary
to provoke the bug --- you can get it if you manually expand the outer
function into "LATERAL (SELECT inner_function(outer_relation))", too.)
The cause of this is that we generate the reltargetlist for the referenced
relation before doing eval_const_expressions() on the lateral sub-select's
expressions (cf find_lateral_references()), so what's scheduled to be
emitted by the referenced relation is a whole-row Var, not the simplified
single-column Var produced by optimizing the function's FieldSelect on the
whole-row Var. Then setrefs.c fails to match up that lateral reference to
what's available from the outer scan.
Preserving the FieldSelect optimization in such cases would require either
major planner restructuring (to recursively do expression simplification
on sub-selects much earlier) or some amazingly ugly kluge to change the
reltargetlist of a possibly-already-planned relation. It seems better
just to skip the optimization when the Var is from an upper query level;
the case is not so common that it's likely anyone will notice a few
wasted cycles.
AFAICT this problem only occurs for uplevel LATERAL references, so
back-patch to 9.3 where LATERAL was added.
Revert the output of the individual backslash commands that change print
settings back to the 9.3 way (not showing the command name in
parentheses). Implement \pset without arguments separately, showing all
settings with values in a table form.