Commit Graph

26948 Commits

Tom Lane
afb9249d06 Add support for doing late row locking in FDWs.
Previously, FDWs could only do "early row locking", that is, lock a row as
soon as it's fetched, even though local restriction/join conditions might
discard the row later.  This patch adds callbacks that allow FDWs to do
late locking in the same way that it's done for regular tables.

To make use of this feature, an FDW must support the "ctid" column as a
unique row identifier.  Currently, since ctid has to be of type TID,
the feature is of limited use, though in principle it could be used by
postgres_fdw.  We may eventually allow FDWs to specify another data type
for ctid, which would make it possible for more FDWs to use this feature.

This commit does not modify postgres_fdw to use late locking.  We've
tested some prototype code for that, but it's not in committable shape,
and besides it's quite unclear whether it actually makes sense to do late
locking against a remote server.  The extra round trips required are likely
to outweigh any benefit from improved concurrency.

Etsuro Fujita, reviewed by Ashutosh Bapat, and hacked up a lot by me
2015-05-12 14:10:17 -04:00
Stephen Frost
aa4a0b9571 pgbench: Don't fail during startup
In pgbench, report, but ignore, any errors returned when attempting to
vacuum/truncate the default tables during startup.  If the tables are
needed, we'll error out soon enough anyway.

Per discussion with Tatsuo, David Rowley, Jim Nasby, Robert, Andres,
Fujii, Fabrízio de Royes Mello, Tomas Vondra, Michael Paquier, Peter,
based on a suggestion from Jeff Janes, patch from Robert, additional
message wording from Tom.
2015-05-12 13:13:12 -04:00
Andrew Dunstan
97e0aa6979 pg_basebackup -F t now succeeds with a long symlink target 2015-05-12 13:09:34 -04:00
Bruce Momjian
ea12b3ca8c doc build: use unique Makefile variable to control temp install 2015-05-12 12:30:50 -04:00
Alvaro Herrera
007c932e5a "Fix" test_ddl_deparse regress test schedule
MSVC is not smart enough to figure it out, so dumb down the Makefile and
remove the schedule file.

Also add a .gitignore file.

Author: Michael Paquier
2015-05-12 12:12:39 -03:00
Bruce Momjian
e8c19263e4 doc: prevent SGML 'make check' from building temp install
Report by Alvaro Herrera
2015-05-12 11:01:25 -04:00
Andrew Dunstan
72d422a522 Map basebackup tablespaces using a tablespace_map file
Windows can't reliably restore symbolic links from a tar format, so
instead during backup start we create a tablespace_map file, which is
used by the restoring postgres to create the correct links in pg_tblspc.
The backup protocol also now has an option to request this file to be
included in the backup stream, and this is used by pg_basebackup when
operating in tar mode.

This is done on all platforms, not just Windows.

This means that pg_basebackup will not work in tar mode against 9.4
and older servers, as this protocol option isn't implemented there.

Amit Kapila, reviewed by Dilip Kumar, with a little editing from me.
2015-05-12 09:29:10 -04:00
Peter Eisentraut
d02f16470f Replace some appendStringInfo* calls with more appropriate variants
Author: David Rowley <dgrowleyml@gmail.com>
2015-05-11 20:38:55 -04:00
Alvaro Herrera
b488c580ae Allow on-the-fly capture of DDL event details
This feature lets user code inspect and take action on DDL events.
Whenever a ddl_command_end event trigger is installed, DDL actions
executed are saved to a list which can be inspected during execution of
a function attached to ddl_command_end.

The set-returning function pg_event_trigger_ddl_commands can be used to
list actions so captured; it returns data about the type of command
executed, as well as the affected object.  This is sufficient for many
uses of this feature.  For the cases where it is not, we also provide a
"command" column of a new pseudo-type pg_ddl_command, which is a
pointer to a C structure that can be accessed by C code.  The struct
contains all the info necessary to completely inspect and even
reconstruct the executed command.

There is no actual deparse code here; that's expected to come later.
What we have is enough infrastructure that the deparsing can be done in
an external extension.  The intention is that we will add some deparsing
code in a later release, as an in-core extension.

A new test module is included.  It's probably insufficient as is, but it
should be sufficient as a starting point for a more complete and
future-proof approach.

Authors: Álvaro Herrera, with some help from Andres Freund, Ian Barwick,
Abhijit Menon-Sen.

Reviews by Andres Freund, Robert Haas, Amit Kapila, Michael Paquier,
Craig Ringer, David Steele.
Additional input from Chris Browne, Dimitri Fontaine, Stephen Frost,
Petr Jelínek, Tom Lane, Jim Nasby, Steven Singer, Pavel Stěhule.

Based on original work by Dimitri Fontaine, though I didn't use his
code.

Discussion:
  https://www.postgresql.org/message-id/m2txrsdzxa.fsf@2ndQuadrant.fr
  https://www.postgresql.org/message-id/20131108153322.GU5809@eldon.alvh.no-ip.org
  https://www.postgresql.org/message-id/20150215044814.GL3391@alvh.no-ip.org
2015-05-11 19:14:31 -03:00
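
As an illustration, a minimal sketch of using the new facility (the trigger and function names here are made up):

    CREATE FUNCTION log_ddl() RETURNS event_trigger
    LANGUAGE plpgsql AS $$
    DECLARE r record;
    BEGIN
      FOR r IN SELECT * FROM pg_event_trigger_ddl_commands() LOOP
        RAISE NOTICE 'captured % on %', r.command_tag, r.object_identity;
      END LOOP;
    END $$;

    CREATE EVENT TRIGGER track_ddl ON ddl_command_end EXECUTE PROCEDURE log_ddl();
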
Stephen Frost
fa2642438f Allow LOCK TABLE .. ROW EXCLUSIVE MODE with INSERT
INSERT acquires RowExclusiveLock during normal operation and therefore
it makes sense to allow LOCK TABLE .. ROW EXCLUSIVE MODE to be executed
by users who have INSERT rights on a table (even if they don't have
UPDATE or DELETE).

Not back-patching this as it's a behavior change which, strictly
speaking, loosens security restrictions.

Per discussion with Tom and Robert (circa 2013).
2015-05-11 15:44:12 -04:00
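
A minimal sketch of the loosened behavior (table and role names are hypothetical):

    CREATE TABLE audit_log (msg text);
    GRANT INSERT ON audit_log TO writer;          -- writer has INSERT only, no UPDATE/DELETE

    -- as role writer:
    BEGIN;
    LOCK TABLE audit_log IN ROW EXCLUSIVE MODE;   -- now permitted with INSERT privilege
    INSERT INTO audit_log VALUES ('entry');
    COMMIT;
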
Bruce Momjian
9d15292cfc pg_upgrade: use single or double-quotes in command-line strings
This is platform-dependent.
2015-05-11 12:57:48 -04:00
Tom Lane
20781765f7 Fix incorrect checking of deferred exclusion constraint after a HOT update.
If a row that potentially violates a deferred exclusion constraint is
HOT-updated later in the same transaction, the exclusion constraint would
be reported as violated when the check finally occurs, even if the row(s)
the new row originally conflicted with have since been removed.  This
happened because the wrong TID was passed to check_exclusion_constraint(),
causing the live HOT-updated row to be seen as a conflicting row rather
than recognized as the row-under-test.

Per bug #13148 from Evan Martin.  It's been broken since exclusion
constraints were invented, so back-patch to all supported branches.
2015-05-11 12:25:43 -04:00
Robert Haas
b4d4ce1d50 Increase threshold for multixact member emergency autovac to 50%.
Analysis by Noah Misch shows that the 25% threshold set by commit
53bb309d2d is lower than any other
similar autovac threshold.  While we don't know exactly what value
will be optimal for all users, it is better to err a little on the
high side than on the low side.  A higher value increases the risk
that users might exhaust the available space and start seeing errors
before autovacuum can clean things up sufficiently, but a user who
hits that problem can compensate for it by reducing
autovacuum_multixact_freeze_max_age to a value dependent on their
average multixact size.  On the flip side, if the emergency cap
imposed by that patch kicks in too early, the user will experience
excessive wraparound scanning and will be unable to mitigate that
problem by configuration.  The new value will hopefully reduce the
risk of such bad experiences while still providing enough headroom
to avoid multixact member exhaustion for most users.

Along the way, adjust the documentation to reflect the effects of
commit 04e6d3b877, which taught
autovacuum to run for multixact wraparound even when autovacuum
is configured off.
2015-05-11 12:15:50 -04:00
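
For users who do hit member-space pressure, the compensation described above can be applied with a sketch like this (the value shown is only illustrative; the default is 400 million):

    ALTER SYSTEM SET autovacuum_multixact_freeze_max_age = 100000000;
    -- this parameter can only take effect at server start, so restart afterwards
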
Bruce Momjian
2200713aa8 initdb: only recommend pg_ctl to start the server
Previously we mentioned the 'postgres' binary method as well.
2015-05-11 12:14:57 -04:00
Bruce Momjian
c71e273402 pg_dump: suppress "Tablespace:" comment for default tablespaces
Report by Hans Ginzel
2015-05-11 11:45:43 -04:00
Robert Haas
04e6d3b877 Even when autovacuum=off, force it for members as we do in other cases.
Thomas Munro, with some adjustments by me.
2015-05-11 10:51:14 -04:00
Robert Haas
f6a6c46d7f Advance the stop point for multixact offset creation only at checkpoint.
Commit b69bf30b9b advanced the stop point
at vacuum time, but this has subsequently been shown to be unsafe as a
result of analysis by myself and Thomas Munro and testing by Thomas
Munro.  The crux of the problem is that the SLRU deletion logic may
get confused about what to remove if, at exactly the right time during
the checkpoint process, the head of the SLRU crosses what used to be
the tail.

This patch, by me, fixes the problem by advancing the stop point only
following a checkpoint.  This has the additional advantage of making
the removal logic work during recovery more like the way it works during
normal running, which is probably good.

At least one of the calls to DetermineSafeOldestOffset which this patch
removes was already dead, because MultiXactAdvanceOldest is called only
during recovery and DetermineSafeOldestOffset was set up to do nothing
during recovery.  That, however, is inconsistent with the principle that
recovery and normal running should work similarly, and was confusing to
boot.

Along the way, fix some comments that previous patches in this area
neglected to update.  It's not clear to me whether there's any
concrete basis for the decision to use only half of the multixact ID
space, but it's neither necessary nor sufficient to prevent multixact
member wraparound, so the comments should not say otherwise.
2015-05-10 22:21:20 -04:00
Robert Haas
312747c224 Fix DetermineSafeOldestOffset for the case where there are no mxacts.
Commit b69bf30b9b failed to take into
account the possibility that there might be no multixacts in existence
at all.

Report by Thomas Munro; patch by me.
2015-05-10 21:34:26 -04:00
Tom Lane
1a8a4e5cde Code review for foreign/custom join pushdown patch.
Commit e7cb7ee145 included some design
decisions that seem pretty questionable to me, and there was quite a lot
of stuff not to like about the documentation and comments.  Clean up
as follows:

* Consider foreign joins only between foreign tables on the same server,
rather than between any two foreign tables with the same underlying FDW
handler function.  In most if not all cases, the FDW would simply have had
to apply the same-server restriction itself (far more expensively, both for
lack of caching and because it would be repeated for each combination of
input sub-joins), or else risk nasty bugs.  Anyone who's really intent on
doing something outside this restriction can always use the
set_join_pathlist_hook.

* Rename fdw_ps_tlist/custom_ps_tlist to fdw_scan_tlist/custom_scan_tlist
to better reflect what they're for, and allow these custom scan tlists
to be used even for base relations.

* Change make_foreignscan() API to include passing the fdw_scan_tlist
value, since the FDW is required to set that.  Backwards compatibility
doesn't seem like an adequate reason to expect FDWs to set it in some
ad-hoc extra step, and anyway existing FDWs can just pass NIL.

* Change the API of path-generating subroutines of add_paths_to_joinrel,
and in particular that of GetForeignJoinPaths and set_join_pathlist_hook,
so that various less-used parameters are passed in a struct rather than
as separate parameter-list entries.  The objective here is to reduce the
probability that future additions to those parameter lists will result in
source-level API breaks for users of these hooks.  It's possible that this
is even a small win for the core code, since most CPU architectures can't
pass more than half a dozen parameters efficiently anyway.  I kept root,
joinrel, outerrel, innerrel, and jointype as separate parameters to reduce
code churn in joinpath.c --- in particular, putting jointype into the
struct would have been problematic because of the subroutines' habit of
changing their local copies of that variable.

* Avoid ad-hocery in ExecAssignScanProjectionInfo.  It was probably all
right for it to know about IndexOnlyScan, but if the list is to grow
we should refactor the knowledge out to the callers.

* Restore nodeForeignscan.c's previous use of the relcache to avoid
extra GetFdwRoutine lookups for base-relation scans.

* Lots of cleanup of documentation and missed comments.  Re-order some
code additions into more logical places.
2015-05-10 14:36:36 -04:00
Tom Lane
c594c75078 Add missing "static" marker.
Per buildfarm member pademelon.
2015-05-09 23:39:36 -04:00
Andrew Dunstan
cb9fa802b3 Add new OID alias type regnamespace
Catalog version bumped

Kyotaro HORIGUCHI
2015-05-09 13:36:52 -04:00
Andrew Dunstan
0c90f6769d Add new OID alias type regrole
The new type has the scope of the whole database cluster, so it doesn't
behave the same as the existing OID alias types, which have database
scope, concerning object dependency.  To avoid confusion, constants of
the new type are prohibited from appearing where dependencies are made
involving it.

Also, add a note to the docs about possible MVCC violation and
optimization issues, which apply generally to all reg* types.

Kyotaro Horiguchi
2015-05-09 13:06:49 -04:00
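
A short sketch of the new OID alias types added in this and the preceding commit:

    SELECT 'pg_catalog'::regnamespace::oid;            -- schema name resolved to its OID
    SELECT 'postgres'::regrole;                        -- role name, cluster-wide scope
    SELECT relname FROM pg_class
     WHERE relnamespace = 'pg_catalog'::regnamespace;  -- convenient in catalog queries
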
Stephen Frost
0cf56f14dd Improve ParseConfigFp comment wrt head/tail
The head_p and tail_p pointers passed to ParseConfigFp() are actually
input/output parameters, not strictly output parameters.  This updates
the function comment to reflect that.

Per discussion with Tom.
2015-05-09 11:13:37 -04:00
Stephen Frost
9a0884176f Change default for include_realm to 1
The default behavior for GSS and SSPI authentication methods has long
been to strip the realm off of the principal, however, this is not a
secure approach in multi-realm environments and the use-case for the
parameter at all has been superseded by the regex-based mapping support
available in pg_ident.conf.

Change the default for include_realm to be '1', meaning that we do
NOT remove the realm from the principal by default.  Any installations
which depend on the existing behavior will need to update their
configurations (ideally by leaving include_realm set to 1 and adding a
mapping in pg_ident.conf, but alternatively by explicitly setting
include_realm=0 prior to upgrading).  Note that the mapping capability
exists in all currently supported versions of PostgreSQL and so this
change can be done today.  Barring that, existing users can update their
configurations today to explicitly set include_realm=0 to ensure that
the prior behavior is maintained when they upgrade.

This needs to be noted in the release notes.

Per discussion with Magnus and Peter.
2015-05-08 19:39:42 -04:00
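
A hedged sketch of the recommended configuration (realm, map name, and connection details are hypothetical):

    # pg_hba.conf: keep include_realm=1 (the new default) and use a map
    host    all    all    0.0.0.0/0    gss    include_realm=1 map=krb

    # pg_ident.conf: strip the realm via a regex mapping instead
    krb    /^(.*)@EXAMPLE\.COM$    \1
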
Stephen Frost
f91feba877 Modify pg_stat_get_activity to build a tuplestore
This updates pg_stat_get_activity() to build a tuplestore for its
results instead of using the old-style multiple-call method.  This
simplifies the function, though that wasn't the primary motivation for
the change, which is that we may turn it into a helper function which
can filter the results (or not) much more easily.
2015-05-08 19:25:30 -04:00
Stephen Frost
4b342fb591 Bump catversion for pg_file_settings
Pointed out by Andres (thanks!)

Apologies for not including it in the initial patch.
2015-05-08 19:14:32 -04:00
Stephen Frost
a97e0c3354 Add pg_file_settings view and function
The function and view added here provide a way to look at all settings
in postgresql.conf, any #include'd files, and postgresql.auto.conf
(which is what backs the ALTER SYSTEM command).

The information returned includes the configuration file name, line
number in that file, sequence number indicating when the parameter is
loaded (useful to see if it is later masked by another definition of the
same parameter), parameter name, and what it is set to at that point.
This information is updated on reload of the server.

This is unfiltered, privileged, information and therefore access is
restricted to superusers through the GRANT system.

Author: Sawada Masahiko, various improvements by me.
Reviewers: David Steele
2015-05-08 19:09:26 -04:00
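
For example, a superuser can inspect where a parameter comes from with something like:

    SELECT sourcefile, sourceline, seqno, name, setting
      FROM pg_file_settings
     WHERE name = 'shared_buffers'
     ORDER BY seqno;
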
Andres Freund
bab64ef9e8 Fix two problems in infer_arbiter_indexes().
The first is a pretty simple bug where a relcache entry is used after
the relation is closed. In this particular situation it does not appear
to have bad consequences unless compiled with RELCACHE_FORCE_RELEASE.

The second is that infer_arbiter_indexes() skipped indexes that aren't
yet valid according to indcheckxmin. That's not required here, because
uniqueness checks don't care about visibility according to an older
snapshot.  While that's not really a bug, it makes things undesirably
non-deterministic.  There is some hope that this explains a test failure
on buildfarm member jaguarundi.

Discussion: 9096.1431102730@sss.pgh.pa.us
2015-05-08 22:28:23 +02:00
Heikki Linnakangas
de7688442f At promotion, archive last segment from old timeline with .partial suffix.
Previously, we would archive the possible-incomplete WAL segment with its
normal filename, but that causes trouble if the server owning that timeline
is still running, and tries to archive the same segment later. It's not nice
for the standby to trip up the master's archival like that. And it's pretty
confusing, anyway, to have an incomplete segment in the archive that's
indistinguishable from a normal, complete segment.

To avoid such confusion, add a .partial suffix to the file. Or to be more
precise, make a copy of the old segment under the .partial suffix, and
archive that instead of the original file. pg_receivexlog also uses the
.partial suffix for the same purpose, to tell apart incompletely streamed
files from complete ones.

There is no automatic mechanism to use the .partial files at recovery, so
they will go unused, unless the administrator manually copies them to
the pg_xlog directory (and removes the .partial suffix). Recovery won't
normally need the WAL - when recovering to the new timeline, it will find
the same WAL on the first segment on the new timeline instead - but it
nevertheless feels better to archive the file with the .partial suffix, for
debugging purposes if nothing else.
2015-05-08 21:59:01 +03:00
Heikki Linnakangas
179cdd0981 Add macros to check if a filename is a WAL segment or other such file.
We had many instances of the strlen + strspn combination to check for that.
This makes the code a bit easier to read.
2015-05-08 21:58:57 +03:00
Peter Eisentraut
16c73e773b Fix whitespace 2015-05-08 14:45:53 -04:00
Andres Freund
e8898e9169 Minor ON CONFLICT related comments and doc fixes.
Geoff Winkless, Stephen Frost, Peter Geoghegan and me.
2015-05-08 19:24:14 +02:00
Robert Haas
53bb309d2d Teach autovacuum about multixact member wraparound.
The logic introduced in commit b69bf30b9b
and repaired in commits 669c7d20e6 and
7be47c56af helps to ensure that we don't
overwrite old multixact member information while it is still needed,
but a user who creates many large multixacts can still exhaust the
member space (and thus start getting errors) while autovacuum stands
idly by.

To fix this, progressively ramp down the effective value (but not the
actual contents) of autovacuum_multixact_freeze_max_age as member space
utilization increases.  This makes autovacuum more aggressive and also
reduces the threshold for a manual VACUUM to perform a full-table scan.

This patch leaves unsolved the problem of ensuring that emergency
autovacuums are triggered even when autovacuum=off.  We'll need to fix
that via a separate patch.

Thomas Munro and Robert Haas
2015-05-08 12:53:00 -04:00
Stephen Frost
195fbd4012 Remove reference to src/tools/backend/index.html
src/tools/backend was removed back in 63f1ccd, but
backend/storage/lmgr/README didn't get the memo.

Author: Amit Langote
2015-05-08 07:14:18 -04:00
Andres Freund
168d5805e4 Add support for INSERT ... ON CONFLICT DO NOTHING/UPDATE.
The newly added ON CONFLICT clause allows specifying an alternative to
raising a unique or exclusion constraint violation error when inserting.
ON CONFLICT refers to constraints that can either be specified using an
inference clause (by specifying the columns of a unique constraint) or
by naming a unique or exclusion constraint.  DO NOTHING avoids the
constraint violation, without touching the pre-existing row.  DO UPDATE
SET ... [WHERE ...] updates the pre-existing tuple, and has access to
both the tuple proposed for insertion and the existing tuple; the
optional WHERE clause can be used to prevent an update from being
executed.  The UPDATE SET and WHERE clauses have access to the tuple
proposed for insertion using the "magic" EXCLUDED alias, and to the
pre-existing tuple using the table name or its alias.

This feature is often referred to as upsert.

This is implemented using a new infrastructure called "speculative
insertion". It is an optimistic variant of regular insertion that first
does a pre-check for existing tuples and then attempts an insert.  If a
violating tuple was inserted concurrently, the speculatively inserted
tuple is deleted and a new attempt is made.  If the pre-check finds a
matching tuple the alternative DO NOTHING or DO UPDATE action is taken.
If the insertion succeeds without detecting a conflict, the tuple is
deemed inserted.

To handle the possible ambiguity between the excluded alias and a table
named excluded, and for convenience with long relation names, INSERT
INTO now can alias its target table.

Bumps catversion as stored rules change.

Author: Peter Geoghegan, with significant contributions from Heikki
    Linnakangas and Andres Freund. Testing infrastructure by Jeff Janes.
Reviewed-By: Heikki Linnakangas, Andres Freund, Robert Haas, Simon Riggs,
    Dean Rasheed, Stephen Frost and many others.
2015-05-08 05:43:10 +02:00
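
A minimal upsert sketch exercising the new clause (table and column names are made up):

    CREATE TABLE counters (key text PRIMARY KEY, val int);

    INSERT INTO counters AS c (key, val)
    VALUES ('hits', 1)
    ON CONFLICT (key)
    DO UPDATE SET val = c.val + EXCLUDED.val;   -- EXCLUDED is the proposed row

    INSERT INTO counters VALUES ('hits', 1)
    ON CONFLICT DO NOTHING;                     -- silently skip duplicates
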
Andres Freund
2c8f4836db Represent columns requiring insert and update privileges independently.
Previously, relation range table entries used a single Bitmapset field
representing which columns required either UPDATE or INSERT privileges,
despite the fact that INSERT and UPDATE privileges are separately
cataloged, and may be independently held.  As statements so far required
either insert or update privileges but never both, that was
sufficient. The required permission could be inferred from the top level
statement run.

The upcoming INSERT ... ON CONFLICT UPDATE feature needs to
independently check for both privileges in one statement though, so that
is not sufficient anymore.

Bumps catversion as stored rules change.

Author: Peter Geoghegan
Reviewed-By: Andres Freund
2015-05-08 00:20:46 +02:00
Alvaro Herrera
db5f98ab4f Improve BRIN infra, minmax opclass and regression test
The minmax opclass was using the wrong support functions when
cross-datatypes queries were run.  Instead of trying to fix the
pg_amproc definitions (which apparently is not possible), use the
already correct pg_amop entries instead.  This requires jumping through
more hoops (read: extra syscache lookups) to obtain the underlying
functions to execute, but it is necessary for correctness.

Author: Emre Hasegeli, tweaked by Álvaro
Review: Andreas Karlsson

Also change BrinOpcInfo to record each stored type's typecache entry
instead of just the OID.  Turns out that the full type cache is
necessary in brin_deform_tuple: the original code used the indexed
type's byval and typlen properties to extract the stored tuple, which is
correct in Minmax; but in other implementations that want to store
something different, that's wrong.  The realization that this is a bug
comes from Emre also, but I did not use his patch.

I also adopted Emre's regression test code (with smallish changes),
which is more complete.
2015-05-07 13:02:22 -03:00
Robert Haas
7be47c56af Fix incorrect math in DetermineSafeOldestOffset.
The old formula didn't have enough parentheses, so it would do the wrong
thing, and it used / rather than % to find a remainder.  The effect of
these oversights is that the stop point chosen by the logic introduced in
commit b69bf30b9b might be rather
meaningless.

Thomas Munro, reviewed by Kevin Grittner, with a whitespace tweak by me.
2015-05-07 11:19:31 -04:00
Magnus Hagander
1a241d22ae Properly send SCM status updates when shutting down service on Windows
The Service Control Manager should be notified regularly during a shutdown
that takes a long time. Previously we would increase the counter, but forgot
to actually send the notification to the system. The loop counter was also
incorrectly initialized in the event that the startup of the system took long
enough for it to increase, which could cause the shutdown process not to wait
as long as expected.

Krystian Bigaj, reviewed by Michael Paquier
2015-05-07 15:04:13 +02:00
Magnus Hagander
d678bde655 Fix indentation that could mask a future bug
Michael Paquier, spotted using Coverity
2015-05-07 11:41:26 +02:00
Magnus Hagander
aa7cf3eef4 Fix minor resource leak in pg_dump
Michael Paquier, spotted using Coverity
2015-05-07 11:41:13 +02:00
Robert Haas
1998261034 Avoid using a C++ keyword as a structure member name.
Per request from Peter Eisentraut.
2015-05-05 22:41:03 -04:00
Alvaro Herrera
3b6db1f445 Add geometry/range functions to support BRIN inclusion
This commit adds the following functions:
    box(point) -> box
    bound_box(box, box) -> box
    inet_same_family(inet, inet) -> bool
    inet_merge(inet, inet) -> cidr
    range_merge(anyrange, anyrange) -> anyrange

The first of these is also used to implement a new assignment cast from
point to box.

These functions are the first part of a base to implement an "inclusion"
operator class for BRIN, for multidimensional data types.

Author: Emre Hasegeli
Reviewed by: Andreas Karlsson
2015-05-05 15:22:24 -03:00
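
Quick examples of the new functions (results shown in comments are illustrative only):

    SELECT box(point '(1,2)');                                    -- point as a zero-area box
    SELECT bound_box(box '((0,0),(1,1))', box '((2,2),(3,3))');   -- bounding box of both
    SELECT inet_same_family('192.168.1.5', '::1');                -- false: IPv4 vs IPv6
    SELECT inet_merge('192.168.1.5/24', '192.168.2.5/24');        -- smallest common cidr
    SELECT range_merge('[1,3)'::int4range, '[5,8)'::int4range);   -- [1,8)
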
Robert Haas
456ff08638 Fix some problems with patch to fsync the data directory.
pg_win32_is_junction() was a typo for pgwin32_is_junction().  open()
was used not only in a two-argument form, which breaks on Windows,
but also where BasicOpenFile() should have been used.

Per reports from Andrew Dunstan and David Rowley.
2015-05-05 09:29:49 -04:00
Peter Eisentraut
ad8d6d064c Fix typos
Author: Erik Rijkers <er@xs4all.nl>
2015-05-04 20:40:19 -04:00
Robert Haas
40f42d2a34 Use outerPlanState macro instead of referring to lefttree.
This makes the executor code more consistent.  It also removes
an apparently superfluous NULL test in nodeGroup.c.

Qingqing Zhou, reviewed by Tom Lane, and further revised by me.
2015-05-04 16:17:36 -04:00
Tom Lane
2503982be4 Improve procost estimates for some text search functions.
The text search functions that involve parsing raw text into lexemes are
remarkably CPU-intensive, so estimating them at the same cost as most other
built-in functions seems like a mistake; moreover, doing so turns out to
discourage the optimizer from using functional indexes on these functions.
After some debate, we've agreed to raise procost from 1 to 100 for
to_tsvector(), plainto_tsvector(), to_tsquery(), ts_headline(),
ts_match_tt(), and ts_match_tq(), which are all the text search functions
that parse raw text.

Also increase procost for the 2-argument form of ts_rewrite()
(tsquery_rewrite_query); while this function doesn't do text parsing,
it does execute a user-supplied SQL query, so its previous procost of 1 is
clearly a drastic underestimate.  It seems reasonable to assign it the same
cost we assign to PL functions by default, so 100 is the number here too.

I did not bother bumping catversion for this change, since it does not
break catalog compatibility with the server executable nor result in
any regression test changes.

Per complaint from Andrew Gierth and subsequent discussion.
2015-05-04 15:38:57 -04:00
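
The practical effect can be seen with a functional-index sketch like this (table and column names are hypothetical):

    -- a higher procost makes the planner more willing to use the expression index
    CREATE INDEX docs_tsv_idx ON docs USING gin (to_tsvector('english', body));

    SELECT proname, procost FROM pg_proc
     WHERE proname IN ('to_tsvector', 'ts_headline', 'ts_rewrite');
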
Robert Haas
2ce439f337 Recursively fsync() the data directory after a crash.
Otherwise, if there's another crash, some writes from after the first
crash might make it to disk while writes from before the crash fail
to make it to disk.  This could lead to data corruption.

Back-patch to all supported versions.

Abhijit Menon-Sen, reviewed by Andres Freund and slightly revised
by me.
2015-05-04 14:13:53 -04:00
Heikki Linnakangas
ec3d976bce Fix the same-rel optimization when creating WAL records.
prev_regbuf was never set, and therefore the same-rel flag was never set on
WAL records.

Report and fix by Zhanq Zq
2015-05-04 21:03:36 +03:00
Andrew Dunstan
3c000fd9a6 Fix two small bugs in json's populate_record_worker
The first bug is not releasing a tupdesc when doing an early return out
of the function. The second bug is a logic error in choosing when to do
an early return if given an empty jsonb object.

Bug reports from Pavel Stehule and Tom Lane respectively.

Backpatch to 9.4 where these were introduced.
2015-05-04 12:38:58 -04:00
Tom Lane
c90b85e4d9 Second try at fixing warnings caused by commit 9b43d73b3f.
Commit ef3f9e642d suppressed one cause of warnings here, but
recent clang on OS X is still unhappy because we're passing a "long"
to abs().  The fact that tm_gmtoff is declared as long is no doubt a
hangover from days when int might be only 16 bits; but Postgres has
never been able to run on such machines, so we can just cast it to int
with no worries.  For consistency, also cast to int in the other
uses of tm_gmtoff in this stanza.

Note: this code is still broken on machines that don't follow C99
integer-division-truncates-towards-zero rules.  Given the lack of
complaints about it, I don't feel a large desire to complicate things
enough to cope with the pre-C99 rules.
2015-05-03 23:44:52 -04:00
Tom Lane
a4820434c1 Fix overlooked relcache invalidation in ALTER TABLE ... ALTER CONSTRAINT.
When altering the deferredness state of a foreign key constraint, we
correctly updated the catalogs and then invalidated the relcache state for
the target relation ... but that's not the only relation with relevant
triggers.  Must invalidate the other table as well, or the state change
fails to take effect promptly for operations triggered on the other table.
Per bug #13224 from Christian Ullrich.

In passing, reorganize regression test case for this feature so that it
isn't randomly injected into the middle of an unrelated test sequence.

Oversight in commit f177cbfe67.  Back-patch
to 9.4 where the faulty code was added.
2015-05-03 11:30:24 -04:00
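
The affected operation, sketched with hypothetical names; after the fix, triggers on both the referencing and the referenced table see the new deferredness promptly:

    ALTER TABLE orders
      ALTER CONSTRAINT orders_customer_fk DEFERRABLE INITIALLY DEFERRED;
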
Andrew Dunstan
b6b2149e48 Fix python_includespec on Windows at configure time
By converting to using forward slashes at configure time we avoid
having to repeat the logic anywhere that this is needed, such as
in transforms modules for plpython.
2015-05-03 08:17:04 -04:00
Noah Misch
1a629c1b16 Combine initdb tests that successfully create a data directory.
This eliminates many seconds of test duration and the cause to invoke
"rm -rf", which is typically unavailable on Windows.

Michael Paquier and Noah Misch
2015-05-02 16:47:28 -04:00
Noah Misch
84c08a7649 Fix one more TAP test to use standard command-line argument ordering.
Commit c67a86f7da caught most of these,
but this negative test escaped notice.  The test did pass, for the wrong
reason, under affected configurations.

Michael Paquier
2015-05-02 16:46:52 -04:00
Noah Misch
b339a5cf90 Rename coerce_type() local variable.
coerce_type() has local variables named targetTypeId, baseTypeId, and
targetType.  targetType has been the Type structure for baseTypeId, so
rename it to baseType.
2015-05-02 16:46:23 -04:00
Peter Eisentraut
67df9782e9 Windows also needs an override of the shared libpython detection 2015-05-02 13:23:16 -04:00
Peter Eisentraut
0fd764647a Make hstore_plperl's build even more like plperl's
Combine the two places that set CPPFLAGS into one.  Also, some settings
should be restricted to Windows only.  More precisely, -Wno-comment is
a GCC-only option, but Windows in a makefile implies GCC at the moment.

Also, since -Wno-comment is more properly a preprocessor option, move it
to CPPFLAGS to simplify things a bit.
2015-05-01 22:16:58 -04:00
Peter Eisentraut
d664a10f96 Move interpreter shared library detection to configure
For building PL/Perl, PL/Python, and PL/Tcl, we need a shared library of
libperl, libpython, and libtcl, respectively.  Previously, this was
checked in the makefiles, skipping the PL build with a warning if no
shared library was available.  Now this is checked in configure, with an
error if no shared library is available.

The previous situation arose because in the olden days, the configure
options --with-perl, --with-python, and --with-tcl controlled whether
frontend interfaces for those languages would be built.  The procedural
languages were added later, and shared libraries were often not
available in the beginning.  So it was decided to skip the builds of the
procedural languages in those cases.  The frontend interfaces have since
been removed from the tree, and shared libraries are now available most
of the time, so that setup makes much less sense now.

Also, the new setup allows contrib modules and pgxs users to rely on the
respective PLs being available based on configure flags.
2015-05-01 21:38:21 -04:00
Bruce Momjian
b2f95c34f4 Mark views created from tables as replication identity 'nothing'
pg_dump turns tables into views using a method that was not setting
pg_class.relreplident properly.

Patch by Marko Tiikkaja

Backpatch through 9.4
2015-05-01 13:03:23 -04:00
Robert Haas
e044a44949 Deparse named arguments to use the new => operator instead of :=
Tom Lane pointed out that this wasn't done, and asked whether that was
intentional.  Subsequent discussion was in favor of making the change,
so here we go.
2015-05-01 09:37:10 -04:00
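
A sketch of the visible change: named notation in a stored rule now deparses with => (view name is made up):

    CREATE VIEW one_week AS
      SELECT make_interval(weeks => 1) AS span;

    SELECT pg_get_viewdef('one_week');   -- shows "weeks => 1" rather than "weeks := 1"
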
Robert Haas
e7cb7ee145 Allow FDWs and custom scan providers to replace joins with scans.
Foreign data wrappers can use this capability for so-called "join
pushdown"; that is, instead of executing two separate foreign scans
and then joining the results locally, they can generate a path which
performs the join on the remote server and then is scanned locally.
This commit does not extend postgres_fdw to take advantage of this
capability; it just provides the infrastructure.

Custom scan providers can use this in a similar way.  Previously,
it was only possible for a custom scan provider to scan a single
relation.  Now, it can scan an entire join tree, provided of course
that it knows how to produce the same results that the join would
have produced if executed normally.

KaiGai Kohei, reviewed by Shigeru Hanada, Ashutosh Bapat, and me.
2015-05-01 08:50:35 -04:00
Andres Freund
2b22795b32 Copy editing of the replication origins patch.
Michael Paquier and myself.
2015-05-01 12:22:13 +02:00
Andres Freund
1db12da85b Fix unaligned memory access in xlog parsing due to replication origin patch.
ParseCommitRecord() accessed xl_xact_origin directly. But the chunks in
the commit record's data only have 4 byte alignment, whereas
xl_xact_origin's members require 8 byte alignment on some
platforms. Update comments to make note of that and copy the record to
stack local storage before reading.

With help from Stefan Kaltenbrunner in pinning down the buildfarm and
verifying the fix.
2015-05-01 11:36:14 +02:00
Heikki Linnakangas
484a848a73 Fix pg_rewind regression failure after "fast promotion"
pg_rewind looks at the control file to determine the server's timeline. If
the standby performs a "fast promotion", the timeline ID in the control
file is not updated until the next checkpoint. The startup process requests
a checkpoint immediately after promotion, so this is unlikely to be an
issue in the real world, but the regression suite ran pg_rewind so quickly
after promotion that the checkpoint had not yet completed.

Reported by Stephen Frost
2015-04-30 21:59:58 -07:00
Alvaro Herrera
9d396af463 Fix up some loose ends for CURRENT_USER as RoleSpec
In commit 31eae6028e, some documents were not updated to show the new
capability; fix that.  Also, the error message you get when CURRENT_USER
and SESSION_USER are used in a context that doesn't accept them could be
clearer about it being a problem only in those contexts; so add the
word "here".

Author: Kyotaro HORIGUCHI

His patch submission also included changes to GRANT/REVOKE, but those
seemed more controversial, so I left them out.  We can reconsider these
changes later.
2015-04-30 16:57:05 -03:00
Robert Haas
924bcf4f16 Create an infrastructure for parallel computation in PostgreSQL.
This does four basic things.  First, it provides convenience routines
to coordinate the startup and shutdown of parallel workers.  Second,
it synchronizes various pieces of state (e.g. GUCs, combo CID
mappings, transaction snapshot) from the parallel group leader to the
worker processes.  Third, it prohibits various operations that would
result in unsafe changes to that state while parallelism is active.
Finally, it propagates events that would result in an ErrorResponse,
NoticeResponse, or NotifyResponse message being sent to the client
from the parallel workers back to the master, from which they can then
be sent on to the client.

Robert Haas, Amit Kapila, Noah Misch, Rushabh Lathia, Jeevan Chalke.
Suggestions and review from Andres Freund, Heikki Linnakangas, Noah
Misch, Simon Riggs, Euler Taveira, and Jim Nasby.
2015-04-30 15:02:14 -04:00
Alvaro Herrera
669c7d20e6 Fix pg_upgrade's multixact handling (again)
We need to create the pg_multixact/offsets file deleted by pg_upgrade
much earlier than we originally were: it was in TrimMultiXact(), which
runs after we exit recovery, but it actually needs to run earlier than
the first call to SetMultiXactIdLimit (before recovery), because that
routine already wants to read the first offset segment.

Per pg_upgrade trouble report from Jeff Janes.

While at it, silence a compiler warning about a pointless assert that an
unsigned variable was being tested non-negative.  This was a signed
constant in Thomas Munro's patch which I changed to unsigned before
commit.  Pointed out by Andres Freund.
2015-04-30 13:55:06 -03:00
Peter Eisentraut
dbf2ec1a1c Fix parallel make risk with new check temp-install setup
The "check" target no longer needs to depend on "all", because it now
runs "install" directly, which in turn depends on "all".  Doing both
will cause problems with parallel make, because two builds will run next
to each other.

Also remove the redirection of the temp-install output into a log file.
This was appropriate when this was done from within pg_regress, but now
it's just a regular make run, and especially with the above changes this
will now take the place of running the "all" target before the test
suites.

problem report by Jeff Janes, patch in part by Michael Paquier
2015-04-29 20:34:22 -04:00
Andres Freund
e0f26fc765 Correct replication origin's use of UINT16_MAX to PG_UINT16_MAX.
We can't rely on UINT16_MAX being present, which is why we introduced
PG_UINT16_MAX...

Buildfarm animal bowerbird via Andrew Gierth.
2015-04-30 00:19:36 +02:00
Robert Haas
fe72c4c55b Update .gitignore for new rmgr, changed paths. 2015-04-29 15:53:00 -04:00
Robert Haas
9b6a0ce5f0 Remove enum-related special cases for catalog scans.
When this code was written, catalog scans were normally performed using
SnapshotNow, making special handling necessary here.  Now, however, all
catalog scans use MVCC snapshots, so we can change these cases to look
more like what we do for catalog scans elsewhere in the code.

Per discussion with Tom Lane and a reminder from Bruce Momjian.
2015-04-29 15:48:44 -04:00
Robert Haas
ef3f9e642d Attempt to fix some compiler warnings. 2015-04-29 14:02:27 -04:00
Andrew Dunstan
eb010637dd Enable transforms tests for python 2 on MSVC builds
Currently regression tests for python 3 are disabled on MSVC, and these
tests fail with python 3, too, so we have some work to do to enable
both. Meanwhile, all the buildfarm hosts seem to be building with python
2 anyway, so this at least gets us some coverage.

Original patch from Michael Paquier, significantly modified by me.
2015-04-29 13:49:24 -04:00
Andres Freund
5aa2350426 Introduce replication progress tracking infrastructure.
When implementing a replication solution on top of logical decoding, two
related problems exist:
* How to safely keep track of replication progress
* How to change replication behavior, based on the origin of a row;
  e.g. to avoid loops in bi-directional replication setups

The solution to these problems, as implemented here, consists of
three parts:

1) 'replication origins', which identify nodes in a replication setup.
2) 'replication progress tracking', which remembers, for each
   replication origin, how far replay has progressed in an efficient and
   crash-safe manner.
3) The ability to filter out changes performed at the behest of a
   replication origin during logical decoding; this allows complex
   replication topologies. E.g. by filtering all replayed changes out.

Most of this could also be implemented in "userspace", e.g. by inserting
additional rows containing origin information, but that ends up being much
less efficient and more complicated.  We don't want to require various
replication solutions to reimplement logic for this independently. The
infrastructure is intended to be generic enough to be reusable.

This infrastructure also replaces the 'nodeid' infrastructure of commit
timestamps. It is intended to provide all the former capabilities,
except that there are only 2^16 different origins; but now they integrate
with logical decoding. Additionally more functionality is accessible via
SQL.  Since the commit timestamp infrastructure has also been introduced
in 9.5 (commit 73c986add) changing the API is not a problem.

For now the number of origins for which the replication progress can be
tracked simultaneously is determined by the max_replication_slots
GUC. That GUC is not a perfect match to configure this, but there
doesn't seem to be sufficient reason to introduce a separate new one.

Bumps both catversion and wal page magic.

Author: Andres Freund, with contributions from Petr Jelinek and Craig Ringer
Reviewed-By: Heikki Linnakangas, Petr Jelinek, Robert Haas, Steve Singer
Discussion: 20150216002155.GI15326@awork2.anarazel.de,
    20140923182422.GA15776@alap3.anarazel.de,
    20131114172632.GE7522@alap2.anarazel.de
2015-04-29 19:30:53 +02:00
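
The SQL-visible side of this infrastructure can be exercised roughly like this (the origin name is made up):

    SELECT pg_replication_origin_create('node_a');         -- register an origin
    SELECT pg_replication_origin_session_setup('node_a');  -- attribute this session's changes to it
    SELECT * FROM pg_replication_origin_status;            -- inspect tracked replay progress
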
Robert Haas
c6e96a2f98 psql: Improve tab completion for ALTER FOREIGN TABLE.
Etsuro Fujita
2015-04-29 12:49:10 -04:00
Bruce Momjian
9b43d73b3f to_char(): have format 'OF' only show the leading negative sign
Previously both hours and minutes displayed as negative.

Report by David Pozsar
2015-04-28 21:02:57 -04:00
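
A rough sketch of the behavior change (the offset is chosen only to show a negative half-hour zone):

    SET TIME ZONE INTERVAL '-04:30' HOUR TO MINUTE;
    SELECT to_char(current_timestamp, 'HH24:MI OF');
    -- the offset now renders with a single leading sign, e.g. -04:30 rather than -04:-30
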
Bruce Momjian
f19d8f14c7 pg_basebackup: canonicalize old and new tablespace paths
This avoids problems with double-slash-specified paths.

Patch by Ian Barwick
2015-04-28 20:12:10 -04:00
Bruce Momjian
33cb8ff6aa Warn about tablespace creation in PGDATA
Also add warning to pg_upgrade

Report by Josh Berkus
2015-04-28 17:35:12 -04:00
Tom Lane
290713e31a Fix another test for RELKIND_RELATION that should allow foreign tables now.
I thought I'd gone through all of these before, but a fresh review found
this one too.  (Perhaps it would be better to just delete this test and
let the failure occur later, but for the moment I'll preserve the logic.)

The case that this was rejecting is like
	CREATE FOREIGN TABLE ft (f1 int ...) ...;
	CREATE TABLE c1 (UNIQUE(f1)) INHERITS(ft);
2015-04-28 12:34:35 -07:00
Tom Lane
ad9f08f706 Fix ATSimpleRecursion() to allow recursion from a foreign table.
This is necessary in view of the changes to allow foreign tables to be
full members of inheritance hierarchies, but I (tgl) unaccountably missed
it in commit cb1ca4d800.

Noted by Amit Langote, patch by Etsuro Fujita
2015-04-28 12:25:00 -07:00
Alvaro Herrera
d3821e70c9 Code review for multixact bugfix
Reword messages, rename a confusingly named function.

Per Robert Haas.
2015-04-28 14:52:29 -03:00
Andrew Dunstan
cbf9f0ec31 Fix MSVC builds for contrib transforms modules.
With this patch the MSVC build and installation will work correctly with
the transforms. However the python transform tests for hstore and ltree
are still disabled pending some further adjustments.

Michael Paquier with some tweaks from me.
2015-04-28 11:47:08 -04:00
Alvaro Herrera
b69bf30b9b Protect against multixact members wraparound
Multixact member files are subject to early wraparound overflow and
removal: if the average multixact size is above a certain threshold (see
note below) the protections against offset overflow are not enough:
during multixact truncation at checkpoint time, some
pg_multixact/members files would be removed because the server considers
them to be old and not needed anymore.  This leads to loss of files that
are critical to interpret existing tuples' Xmax values.

To protect against this, since we don't have enough info in pg_control
and we can't modify it in old branches, we maintain shared memory state
about the oldest value that we need to keep; we use this during new
multixact creation to abort if an old still-needed file would get
overwritten.  This value is kept up to date by checkpoints, which makes
it not completely accurate but should be good enough.  We start emitting
warnings sometime earlier, so that the eventual multixact-shutdown
doesn't take DBAs completely by surprise (more precisely: once 20
member SLRU segments remain before shutdown.)

On troublesome average multixact size: The threshold size depends on the
multixact freeze parameters. The oldest age is related to the greater of
multixact_freeze_table_age and multixact_freeze_min_age: anything
older than that should be removed promptly by autovacuum.  If autovacuum
is keeping up with multixact freezing, the troublesome multixact average
size is
	(2^32-1) / Max(freeze table age, freeze min age)
or around 28 members per multixact.  Having an average multixact size
larger than that will eventually cause new multixact data to overwrite
the data area for older multixacts.  (If autovacuum is not able to keep
up, or there are errors in vacuuming, the actual maximum is
multixact_freeze_max_age instead, at which point multixact generation
is stopped completely.  The default value for this limit is 400 million,
which means that the multixact size that would cause trouble is about 10
members).

Initial bug report by Timothy Garnett, bug #12990
Backpatch to 9.3, where the problem was introduced.

Authors: Álvaro Herrera, Thomas Munro
Reviews: Thomas Munro, Amit Kapila, Robert Haas, Kevin Grittner
2015-04-28 11:32:53 -03:00
Andres Freund
dfbaed4597 Use a fd opened for read/write when syncing slots during startup.
Some operating systems, including the reporter's Windows, return EBADFD
or similar when fsync() is invoked on an O_RDONLY file descriptor.
Unfortunately RestoreSlotFromDisk() does exactly that; which causes
failures after restarts in at least some scenarios.

If you hit the bug the error message will be something like
ERROR: could not fsync file "pg_replslot/$name/state": Bad file descriptor

Simply use O_RDWR instead of O_RDONLY when opening the relevant file
descriptor to fix the bug.  Unfortunately I have no way of verifying the
fix, but we've seen similar problems in the past.

This bug goes back to 9.4 where slots were introduced. Backpatch
accordingly.

Reported-By: Patrice Drolet
Bug: #13143:
Discussion: 20150424101006.2556.60897@wrigleys.postgresql.org
2015-04-28 00:17:43 +02:00
Stephen Frost
dcbf5948e1 Improve qual pushdown for RLS and SB views
The original security barrier view implementation, on which RLS is
built, prevented all non-leakproof functions from being pushed down to
below the view, even when the function was not receiving any data from
the view.  This optimization improves on that situation: instead of
checking strictly for non-leakproof functions, it checks for Vars being
passed to non-leakproof functions, and allows functions which do not
accept arguments, or whose arguments are not from the current query level
(e.g. constants can be particularly useful), to be pushed down.

As discussed, this does mean that a function which is pushed down might
gain some idea that there are rows meeting a certain criteria based on
the number of times the function is called, but this isn't a
particularly new issue and the documentation in rules.sgml already
addressed similar covert-channel risks.  That documentation is updated
to reflect that non-leakproof functions may be pushed down now, if
they meet the above-described criteria.

Author: Dean Rasheed, with a bit of rework to make things clearer,
along with comment and documentation updates from me.
2015-04-27 12:29:42 -04:00
Andrew Dunstan
06ca28d5ab Fix vcbuild failures and chkpass dependency caused by 854adb8
Switching the Windows build scripts to use forward slashes instead of
backslashes has caused a couple of issues in VC builds:
- The file tree list was not correctly generated, build script
  generating vcproj file missing tree dependencies when listing items in
  Filter.
- VC builds do not accept file paths with forward slashes; perhaps it
  would be possible to use a Condition, but it seems safer to simply
  enforce backslashes in the file paths in the vcproj files.
- chkpass had an unneeded dependency on libpgport and libpgcommon to
  make the build succeed, but it is actually not necessary, as crypt.c is
  already listed for this project and should be replaced with a fake name
  as it is a unique file.

Michael Paquier
2015-04-27 10:56:04 -04:00
Andres Freund
2e3ca04e2e Also correct therefor to therefore.
Since both forms are arguably legal I wasn't sure about changing
this. But then Tom argued for 'therefore'...

Author: Dmitriy Olshevskiy
Discussion: 34789.1430067832@sss.pgh.pa.us
2015-04-26 19:05:39 +02:00
Andres Freund
6aab1f45ac Fix various typos and grammar errors in comments.
Author: Dmitriy Olshevskiy
Discussion: 553D00A6.4090205@bk.ru
2015-04-26 18:42:31 +02:00
Andres Freund
9fe1d9ac68 Fix possible division by zero in pg_xlogdump.
When displaying stats it was possible that a floating point division by
zero occurred when no FPIs were issued for a type of record.

Author: Abhijit Menon-Sen
Discussion: 20150417091811.GA14008@toroid.org
2015-04-26 18:02:32 +02:00
Peter Eisentraut
cac7658205 Add transforms feature
This provides a mechanism for specifying conversions between SQL data
types and procedural languages.  As examples, there are transforms
for hstore and ltree for PL/Perl and PL/Python.

reviews by Pavel Stěhule and Andres Freund
2015-04-26 10:33:14 -04:00
Tom Lane
0bd11d9711 Add comments warning against generalizing default_with_oids.
pg_dump has historically assumed that default_with_oids affects only plain
tables and not other relkinds.  Conceivably we could make it apply to some
newly invented relkind if we did so from the get-go, but changing the
behavior for existing object types will break existing dump scripts.
Add code comments warning about this interaction.

Also, make sure that default_with_oids doesn't cause parse_utilcmd.c to
think that CREATE FOREIGN TABLE will create an OID column.  I think this is
only a latent bug right now, since we don't allow UNIQUE/PKEY constraints
in CREATE FOREIGN TABLE, but it's better to be consistent and future-proof.
2015-04-25 21:38:06 -04:00
Andrew Dunstan
04f1542d39 Try to unbreak some MSVC builds following forward slash change.
Michael Paquier.
2015-04-25 21:28:02 -04:00
Bruce Momjian
764ce22af3 Revert: Honor OID status of CREATE LIKE'd tables
Reverts d992f8a896

Report by Tom Lane
2015-04-25 21:10:48 -04:00
Peter Eisentraut
ee8d392765 Don't overwrite EXTRA_INSTALL
The temp-install target sets EXTRA_INSTALL to install the current
directory.  But when doing so, it should append instead of overwrite,
otherwise settings of EXTRA_INSTALL from a makefile won't take effect.
This would cause the earthdistance test to fail when called directly,
because it would miss installing the cube module.
2015-04-25 21:00:39 -04:00
Tom Lane
3cf8686014 Prevent improper reordering of antijoins vs. outer joins.
An outer join appearing within the RHS of an antijoin can't commute with
the antijoin, but somehow I missed teaching make_outerjoininfo() about
that.  In Teodor Sigaev's recent trouble report, this manifests as a
"could not find RelOptInfo for given relids" error within eqjoinsel();
but I think silently wrong query results are possible too, if the planner
misorders the joins and doesn't happen to trigger any internal consistency
checks.  It's broken as far back as we had antijoins, so back-patch to all
supported branches.
2015-04-25 16:44:27 -04:00
Peter Eisentraut
854adb8371 Replace backslashes by forward slashes in MSVC build code
This makes it possible to run some stages of these build scripts on
non-Windows systems.  That way, we can more easily test whether file
moves or makefile changes might break the MSVC build.

Peter Eisentraut and Michael Paquier
2015-04-25 08:58:01 -04:00
Stephen Frost
410cbfd6dd Fix file comment for test_rls_hooks.c
The file-level comment wasn't updated when it was copied from the shared
memory queue test module.  Fixed.

Noted by Dean Rasheed.
2015-04-24 20:44:53 -04:00
Stephen Frost
e89bd02f58 Perform RLS WITH CHECK before constraints, etc
The RLS capability is built on top of the WITH CHECK OPTION
system which was added for auto-updatable views, however, unlike
WCOs on views (which are mandated by the SQL spec to not fire until
after all other constraints and checks are done), it makes much more
sense for RLS checks to happen earlier than constraint and uniqueness
checks.

This patch reworks the structure which holds the WCOs a bit to be
explicitly either VIEW or RLS checks and the RLS-related checks are
done prior to the constraint and uniqueness checks.  This also allows
better error reporting as we are now reporting when a violation is due
to a WITH CHECK OPTION and when it's due to an RLS policy violation,
which was independently noted by Craig Ringer as being confusing.

The documentation is also updated to include a paragraph about when RLS
WITH CHECK handling is performed, as there have been a number of
questions regarding that and the documentation was previously silent on
the matter.

Author: Dean Rasheed, with some kibitzing and comment changes by me.
2015-04-24 20:34:26 -04:00
Noah Misch
c8aa893862 Remove obsolete -I options from ECPG library compilation.
The MSVC build system already omitted these.
2015-04-24 19:29:09 -04:00
Noah Misch
bcd7e8897c Remove superfluous -DFRONTEND.
The majority practice is to add -DFRONTEND in directories building files
that are, at other times, built for the backend.  Some directories
lacking that property added a noise -DFRONTEND in one build system.
Remove the excess flags, for consistency.
2015-04-24 19:29:05 -04:00
Noah Misch
151e74719b Build every ECPG library with -DFRONTEND.
Each of the libraries incorporates src/port files, which often check
FRONTEND.  Build systems disagreed on whether to build libpgtypes this
way.  Only libecpg incorporates files that rely on it today.  Back-patch
to 9.0 (all supported versions) to forestall surprises.
2015-04-24 19:29:02 -04:00
Tom Lane
732b33f8ae Fix up .gitignore and cleanup actions in some src/test/ subdirectories.
examples/, locale/, and thread/ lacked .gitignore files and were also
not connected up to top-level "make clean" etc.  This had escaped notice
because none of those directories are built in normal scenarios.  Still,
they have working Makefiles, so if someone does a "make" in one of these
directories it would be good if (a) git doesn't bleat about the product
files and (b) cleaning up removes them.

This is a longstanding oversight, but since this behavior is probably
only of interest to developers, there seems no need for back-patching.

Michael Paquier and Tom Lane
2015-04-24 17:13:06 -04:00
Tom Lane
70d44dd9de Fix obsolete comment in set_rel_size().
The cross-reference to set_append_rel_pathlist() was obsoleted by
commit e2fa76d80b, which split what
had been set_rel_pathlist() and child routines into two sets of
functions.  But I (tgl) evidently missed updating this comment.

Back-patch to 9.2 to avoid unnecessary divergence among branches.

Amit Langote
2015-04-24 15:18:07 -04:00
Heikki Linnakangas
61a553a091 Add comments explaining how unique and exclusion constraints are enforced. 2015-04-24 21:13:28 +03:00
Peter Eisentraut
9ba978c8cc Fix misspellings
Amit Langote and Thom Brown
2015-04-24 12:00:49 -04:00
Stephen Frost
cb087ec03b Copy the relation name for error reporting in WCOs
In get_row_security_policies(), we need to make a copy of the relation
name when building the WithCheckOptions structure, since
RelationGetRelationName just returns a pointer into the local Relation
structure.  The relation name in the WCO structure is only used for
error reporting.

Pointed out by Robert and Christian Ullrich, who noted that the
buildfarm members with -DCLOBBER_CACHE_ALWAYS were failing.
2015-04-24 09:38:10 -04:00
Heikki Linnakangas
62420ae7d6 Move functions related to index maintenance to separate source file.
There is enough code here to deserve a file of its own, rather than being
buried in the middle of execUtils.c.
2015-04-24 09:33:23 +03:00
Heikki Linnakangas
2c47fe16a7 Fix deadlock at startup, if max_prepared_transactions is too small.
When the startup process recovers transactions by scanning pg_twophase
directory, it should clear MyLockedGxact after it's done processing each
transaction. Like we do during normal operation, at PREPARE TRANSACTION.
Otherwise, if the startup process exits due to an error, it will try to
clear the locking_backend field of the last recovered transaction. That's
usually harmless, but if the error happens in MarkAsPreparing, while
holding TwoPhaseStateLock, the shmem-exit hook will try to acquire
TwoPhaseStateLock again, and deadlock with itself.

This fixes bug #13128 reported by Grant McAlister. The bug was introduced
by commit bb38fb0d, so backpatch to all supported versions like that
commit.
2015-04-23 21:39:35 +03:00
Peter Eisentraut
2aa0fb032e Fix shell error on Solaris
Apparently, the Bourne shell on Solaris doesn't like "for" loops with an
empty list, so have "make" skip the loop in that case.
2015-04-23 13:09:18 -04:00
Peter Eisentraut
dcae5facca Improve speed of make check-world
Before, make check-world would create a new temporary installation for
each test suite, which is slow and wasteful.  Instead, we now create one
test installation that is used by all test suites that are part of a
make run.

The management of the temporary installation is removed from pg_regress
and handled in the makefiles.  This allows for better control, and
unifies the code with that of test suites not run through pg_regress.

review and msvc support by Michael Paquier <michael.paquier@gmail.com>

more review by Fabien Coelho <coelho@cri.ensmp.fr>
2015-04-23 08:59:52 -04:00
Alvaro Herrera
50a16e30eb Use the right type OID after creating a shell type
Commit a2e35b53c3 neglected to update the type OID to use further
down in DefineType when TypeShellMake was changed to return
ObjectAddress instead of OID (it got it right in DefineRange, however.)
This resulted in an internal error message being issued when looking up
I/O functions.

Author: Michael Paquier

Also add Assert() checks in a couple of other places to ensure that the type
OID being used is as expected.
2015-04-22 16:23:02 -03:00
Stephen Frost
450fa1b5ba Fix installcheck for test_rls_hooks
As pointed out by the buildfarm, test_rls_hooks wasn't functioning
properly with a clean installcheck.  test_rls_hooks needs to explicitly
load the library with the hooks in it, to allow installcheck to work;
using the --temp-config doesn't help since that isn't used when running
installcheck and it isn't exactly fair to the buildfarm to modify the
installed config prior to calling installcheck.

Also, have test_rls_hooks clean up after itself.
2015-04-22 12:43:57 -04:00
Stephen Frost
0bf22e0c8b RLS fixes, new hooks, and new test module
In prepend_row_security_policies(), defaultDeny was always true, so if
there were any hook policies, the RLS policies on the table would just
get discarded.  Fixed to start off with defaultDeny as false and then
properly set later if we detect that only the default deny policy exists
for the internal policies.

The infinite recursion detection in fireRIRrules() didn't properly
manage the activeRIRs list in the case of WCOs, so it would incorrectly
report infinite recursion if the same relation with RLS appeared more
than once in the rtable, for example "UPDATE t ... FROM t ...".

Further, the RLS expansion code in fireRIRrules() was handling RLS in
the main loop through the rtable, which led to RTEs being visited twice
if they contained sublink subqueries, which
prepend_row_security_policies() attempted to handle by exiting early if
the RTE already had securityQuals.  That doesn't work, however, since
if the query involved a security barrier view on top of a table with
RLS, the RTE would already have securityQuals (from the view) by the
time fireRIRrules() was invoked, and so the table's RLS policies would
be ignored.  This is fixed in fireRIRrules() by handling RLS in a
separate loop at the end, after dealing with any other sublink
subqueries, thus ensuring that each RTE is only visited once for RLS
expansion.

The inheritance planner code didn't correctly handle non-target
relations with RLS, which would get turned into subqueries during
planning. Thus an update of the form "UPDATE t1 ... FROM t2 ..." where
t1 has inheritance and t2 has RLS quals would fail.  Fix by making sure
to copy in and update the securityQuals when they exist for non-target
relations.

process_policies() was adding WCOs to non-target relations, which is
unnecessary, and could lead to a lot of wasted time in the rewriter and
the planner. Fix by only adding WCO policies when working on the result
relation.  Also in process_policies, we should be copying the USING
policies to the WITH CHECK policies on a per-policy basis, fix by moving
the copying up into the per-policy loop.

Lastly, as noted by Dean, we were simply adding policies returned by the
hook provided to the list of quals being AND'd, meaning that they would
actually restrict records returned and there was no option to have
internal policies and hook-based policies work together permissively (as
all internal policies currently work).  Instead, explicitly add support
for both permissive and restrictive policies by having a hook for each
and combining the results appropriately.  To ensure this is all done
correctly, add a new test module (test_rls_hooks) to test the various
combinations of internal, permissive, and restrictive hook policies.

Largely from Dean Rasheed (thanks!):

CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com

Author: Dean Rasheed, though I added the new hooks and test module.
2015-04-22 12:01:06 -04:00
Stephen Frost
4ccc5bd28e Pull in tableoid for inheritance with rowMarks
As noted by Etsuro Fujita [1] and Dean Rasheed[2],
cb1ca4d800 changed ExecBuildAuxRowMark()
to always look for the tableoid in the target list, but didn't also
change preprocess_targetlist() to always include the tableoid.  This
resulted in errors with soon-to-be-added RLS with inheritance tests,
and errors when using inheritance with foreign tables.

Authors: Etsuro Fujita and Dean Rasheed (independently)

Minor word-smithing on the comments by me.

[1] 552CF0B6.8010006@lab.ntt.co.jp
[2] CAEZATCVmFUfUOwwhnBTcgi6AquyjQ0-1fyKd0T3xBWJvn+xsFA@mail.gmail.com
2015-04-22 11:29:35 -04:00
Heikki Linnakangas
54a16df010 Make the pg_rewind regression tests more robust on slow systems.
There were a couple of hard-coded sleeps in the tests: to wait for standby
to catch up with master, and to wait for promotion with "pg_ctl promote"
to complete. Instead of a fixed, hard-coded sleep, poll the server with a
query once a second. This isn't ideal either, and I wish we had a better
solution for real-world applications too, but this should fix the
immediate problem.

Patch by Michael Paquier, with some editing by me.
2015-04-22 14:33:57 +03:00
Andres Freund
cef939c347 Rename pg_replication_slot's new active_in to active_pid.
In d811c037ce active_in was added but discussion since showed that
active_pid is preferred as a name.

Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com
2015-04-22 09:43:40 +02:00
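
For illustration only (not part of the commit), the renamed column makes joins
against session information straightforward:

    SELECT s.slot_name, s.active_pid, a.usename, a.state
      FROM pg_replication_slots s
      JOIN pg_stat_activity a ON a.pid = s.active_pid
     WHERE s.active;
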
Heikki Linnakangas
4d930eee89 Don't leave 'tmp_check' directory behind in pg_rewind regression tests. 2015-04-22 10:14:44 +03:00
Peter Eisentraut
b0a738f428 Move pg_xlogdump from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-21 19:03:49 -04:00
Heikki Linnakangas
060a1224af Add missing installcheck target to pg_rewind's Makefile
Michael Paquier
2015-04-21 14:09:25 +03:00
Andres Freund
d811c037ce Add 'active_in' column to pg_replication_slots.
Right now it is visible whether a replication slot is active in any
session, but not in which.  Adding the active_in column, containing the
pid of the backend having acquired the slot, makes it much easier to
associate pg_replication_slots entries with the corresponding
pg_stat_replication/pg_stat_activity row.

This should have been done from the start, but I (Andres) dropped the
ball there somehow.

Author: Craig Ringer, revised by me
Discussion: CAMsr+YFKgZca5_7_ouaMWxA5PneJC9LNViPzpDHusaPhU9pA7g@mail.gmail.com
2015-04-21 11:51:06 +02:00
Peter Eisentraut
528c2e44ab Move pg_test_timing from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-20 21:30:12 -04:00
Bruce Momjian
d992f8a896 Honor OID status of CREATE LIKE'd tables
Previously, tables created by CREATE LIKE never had OIDs.

Report by Tom Lane
2015-04-20 16:11:25 -04:00
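
A hedged sketch of the corrected behavior (table names are invented; the exact
interaction with default_with_oids is not spelled out in the commit):

    CREATE TABLE parent (id int) WITH OIDS;
    CREATE TABLE child (LIKE parent);
    -- child is now expected to follow parent's OID setting,
    -- rather than always being created without OIDs.
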
Peter Eisentraut
00882d9e5c Move pg_test_fsync from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-19 22:20:49 -04:00
Bruce Momjian
f92fc4c95d pg_upgrade: binary_upgrade_create_empty_extension() is strict
Was broken by commit 30982be4e5.

Patch by Jeff Janes
2015-04-17 20:08:42 -04:00
Stephen Frost
ab6d1cd26e Fix typo in relcache's equalPolicy()
The USING policies were not being checked for differences as the same
policy was being passed in to both sides of the equal().  This could
result in backends not realizing that a policy had been changed, if
none of the other attributes had been changed.

Fix by passing the policy1 and policy2 USING quals to equal() for
comparison.

No need to back-patch as this is not yet released.  Noticed while
testing changes to RLS proposed by Dean Rasheed.
2015-04-17 16:37:11 -04:00
Alvaro Herrera
4cb7d671fd Add new target modulescheck in vcregress.pl
This allows an MSVC build to run regression tests related to modules in
src/test/modules.

Author: Michael Paquier
Reviewed by: Andrew Dunstan
2015-04-16 23:39:52 -03:00
Alvaro Herrera
22d005323f MSVC: install src/test/modules together with contrib
These modules have to be installed so that the testing module can access
them.  (We don't have that yet, but will soon have it.)

Author: Michael Paquier
Reviewed by: Andrew Dunstan
2015-04-16 16:40:14 -03:00
Heikki Linnakangas
e2999abcd1 Fix assertion failure in logical decoding.
Logical decoding set SnapshotData's regd_count field to prevent the
snapshot manager from prematurely freeing snapshots that are generated
by the decoding system. That was always an abuse of the field, as it was
never supposed to be used outside the snapshot manager. Commit 94028691
made the snapshot manager's tracking of snapshots smarter, and that scheme
fell apart. The snapshot manager got confused and hit the assertion when
a snapshot that was marked with regd_count==1 was not found in the heap
where the snapshot manager tracks the registered snapshots.

To fix, don't abuse the regd_count field like that. Logical decoding still
abuses the active_count field for similar purposes, but that's currently
harmless.

The assertion failure was first reported by Michael Paquier
2015-04-16 21:50:07 +03:00
Alvaro Herrera
90898af30b MSVC: Include modules of src/test/modules in build
commit_ts, being only a module used for test purposes, is ignored in the
process for now.

Author: Michael Paquier
Reviewed by: Andrew Dunstan
2015-04-16 15:17:26 -03:00
Heikki Linnakangas
b5e384e374 Add missing newlines to error messages. 2015-04-16 09:18:00 +03:00
Heikki Linnakangas
b5e560c246 Error out in pg_rewind if lstat() fails.
A "file not found" is expected if the source server is running, so don't
complain about that. But any other error is definitely not expected.
2015-04-15 23:13:32 +03:00
Heikki Linnakangas
41457fcf97 Minor cleanup of pg_rewind.
Update comments and function names to use the terms "source" and "target"
consistently. Some places were calling them remote and local instead, which
was confusing.

Fix an incorrect comment in extractPageInfo on the database creation record;
it was wrong about what happens for databases created in the target that
don't exist in the source.
2015-04-15 22:52:00 +03:00
Heikki Linnakangas
0d8a22a9ac Shut down test servers after pg_rewind regression tests.
Now that the test servers are initialized twice in each .pl script,
the single END block is not enough to stop them. Add a new clean_rewind_test
function that is called at the end of each test.

Michael Paquier
2015-04-15 19:54:38 +03:00
Heikki Linnakangas
3d80a1e0e3 Fix logic to skip checkpoint if no records have been inserted.
After the WAL format changes, the calculation of the size of a checkpoint
record became incorrect. Instead of trying to fix the math, check that the
previous record, i.e. the xl_prev value that we'd write for the next
record, matches the last checkpoint's redo pointer. That way it's not
dependent on the size of the checkpoint record at all.

The old logic was actually slightly wrong all along: if the previous
checkpoint record crossed a page boundary, the page headers threw off the
record size calculation, and the checkpoint was not skipped. The new
checkpoint would not cross a page boundary, so this only resulted in at
most one extra checkpoint after the system became idle. The new logic fixes
that. (It's not worth fixing in back branches).

However, it makes some sense to try to keep the latest checkpoint contained
fully in a page, or at least in a single WAL segment, just on general
robustness grounds. If something goes awfully wrong, it's more likely that
you can recover the latest WAL segment, than the last two WAL segments. So
I added an extra check that the checkpoint is not skipped if the previous
checkpoint crossed a WAL segment.

Reported by Jeff Janes.
2015-04-15 17:21:04 +03:00
Peter Eisentraut
9fa8b0ee90 Move pg_upgrade from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-14 19:26:38 -04:00
Peter Eisentraut
30982be4e5 Integrate pg_upgrade_support module into backend
Previously, these functions were created in a schema "binary_upgrade",
which was deleted after pg_upgrade was finished.  Because we don't want
to keep that schema around permanently, move them to pg_catalog but
rename them with a binary_upgrade_... prefix.

The provided functions are only small wrappers around global variables
that were added specifically for pg_upgrade use, so keeping the module
separate does not create any modularity.

The functions still check that they are only called in binary upgrade
mode, so it is not possible to call these during normal operation.

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-14 19:26:37 -04:00
Heikki Linnakangas
936546dcbc Optimize pg_comp_crc32c_sse42 routine slightly, and also use it on x86.
Eliminate the separate 'len' variable from the loops, and also use the
4-byte instruction. This shaves off a few more cycles. Even though this
routine that uses the special SSE 4.2 instructions is much faster than a
generic routine, it's still a hot spot, so let's make it as fast as
possible.

Change the configure test to not test _mm_crc32_u64. That variant is only
available in the 64-bit x86-64 architecture, not in 32-bit x86. Modify
pg_comp_crc32c_sse42 so that it only uses _mm_crc32_u64 on x86-64. With
these changes, the SSE accelerated CRC-32C implementation can also be used
on 32-bit x86 systems.

This also fixes the 32-bit MSVC build.
2015-04-14 23:58:16 +03:00
Heikki Linnakangas
b73e7a0716 Oops, fix misspelled #endif
I hope this fixes the Windows buildfarm failures.
2015-04-14 22:00:52 +03:00
Alvaro Herrera
0a52fafce4 Fix typo in comment
SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT

I introduced this ancient typo in subtrans.c and later propagated it to
multixact.c.  I fixed the latter in f741300c, but only back to 9.3;
backpatch to all supported branches for consistency.
2015-04-14 12:12:18 -03:00
Heikki Linnakangas
3dc2d62d04 Use Intel SSE 4.2 CRC instructions where available.
Modern x86 and x86-64 processors with SSE 4.2 support have special
instructions, crc32b and crc32q, for calculating CRC-32C. They greatly
speed up CRC calculation.

Whether the instructions can be used or not depends on the compiler and the
target architecture. If generation of SSE 4.2 instructions is allowed for
the target (-msse4.2 flag on gcc and clang), use them. If they are not
allowed by default, but the compiler supports the -msse4.2 flag to enable
them, compile just the CRC-32C function with -msse4.2 flag, and check at
runtime whether the processor we're running on supports it. If it doesn't,
fall back to the slicing-by-8 algorithm. (With the common defaults on
current operating systems, the runtime-check variant is what you get in
practice.)

Abhijit Menon-Sen, heavily modified by me, reviewed by Andres Freund.
2015-04-14 17:05:03 +03:00
Heikki Linnakangas
4f700bcd20 Reorganize our CRC source files again.
Now that we use CRC-32C in WAL and the control file, the "traditional" and
"legacy" CRC-32 variants are not used in any frontend programs anymore.
Move the code for those back from src/common to src/backend/utils/hash.

Also move the slicing-by-8 implementation (back) to src/port. This is in
preparation for next patch that will add another implementation that uses
Intel SSE 4.2 instructions to calculate CRC-32C, where available.
2015-04-14 17:03:42 +03:00
Peter Eisentraut
d577bb868d pgbench: Attempt fix build on Windows 2015-04-13 15:32:57 -04:00
Alvaro Herrera
b5213e14a4 Remove duplicated word in README 2015-04-13 14:28:21 -03:00
Peter Eisentraut
81134af3ec Move pgbench from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-13 13:07:16 -04:00
Heikki Linnakangas
b22a36a62c Fix pg_rewind regression tests in VPATH builds
Should call just "pg_rewind", instead of "./pg_rewind". The tests are run
so that PATH contains the temporary installation's bin dir.

Per report from Alvaro Herrera
2015-04-13 18:30:38 +03:00
Heikki Linnakangas
53ba10770a Refactor and fix TAP tests of pg_rewind
* Don't pass arguments to prove, since that's not supported on perl 5.8
which is the minimum version supported by the TAP tests. Refactor the
test files themselves to run the tests twice, in both local and remote mode.

* Use eq rather than == for string comparison. This thinko caused the remote
versions of the tests to never run.

* Add "use strict" and "use warnings", and fix warnings that that produced.

* Increase the delay after standby promotion, to make the tests more robust.

* In remote mode, the connection string to the promoted standby was
incorrect, leading to connection errors.

Patch by Michael Paquier, to address Peter Eisentraut's report.
2015-04-13 18:09:20 +03:00
Heikki Linnakangas
b2a5545bd6 Don't archive bogus recycled or preallocated files after timeline switch.
After a timeline switch, we would leave behind recycled WAL segments that
are in the future, but on the old timeline. After promotion, and after they
become old enough to be recycled again, we would notice that they don't have
a .ready or .done file, create a .ready file for them, and archive them.
That's bogus, because the files contain garbage, recycled from an older
timeline (or preallocated as zeros). We shouldn't archive such files.

This could happen when we're following a timeline switch during replay, or
when we switch to new timeline at end-of-recovery.

To fix, whenever we switch to a new timeline, scan the data directory for
WAL segments on the old timeline, but with a higher segment number, and
remove them. Those don't belong to our timeline history, and are most
likely bogus recycled or preallocated files. They could also be valid files
that we streamed from the primary ahead of time, but in any case, they're
not needed to recover to the new timeline.
2015-04-13 16:53:49 +03:00
Fujii Masao
1f94bec7a9 Silence gettext warning about '\r' escape sequence in translatable string.
gettext was unhappy about the commit b216ad7 because it revealed
the problem that internationalized messages may contain '\r' escape
sequence in pg_rewind. This commit moves '\r' to a separate printf() call.

Michael Paquier, bug reported by Peter Eisentraut
2015-04-13 13:30:59 +09:00
Peter Eisentraut
442663f133 emacs: Set indent-tabs-mode in perl-mode
This matches existing practice, but makes the setup complete and
consistent with the C code setup.
2015-04-12 23:53:23 -04:00
Heikki Linnakangas
74a68e37d0 Free leaked result set in pg_rewind
It was not significant in practice, it was just one instance of a small
result set, but let's pacify Coverity.

Michael Paquier
2015-04-12 22:42:01 +03:00
Magnus Hagander
9029f4b374 Add system view pg_stat_ssl
This view shows information about all connections, such as whether the
connection is using SSL, which cipher is used, and which client
certificate (if any) is used.

Reviews by Alex Shulgin, Heikki Linnakangas, Andres Freund & Michael Paquier
2015-04-12 19:07:46 +02:00
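
A possible query against the new view (the column list is assumed from the
description above, not quoted from the commit):

    SELECT a.usename, s.ssl, s.version, s.cipher
      FROM pg_stat_ssl s
      JOIN pg_stat_activity a USING (pid);
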
Heikki Linnakangas
a10589a512 Remove duplicated words in comments.
David Rowley
2015-04-12 10:46:17 +03:00
Peter Eisentraut
83aca89f7c Move pg_archivecleanup from contrib/ to src/bin/
Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-11 23:29:18 -04:00
Alvaro Herrera
27846f02c1 Optimize locking a tuple already locked by another subxact
Locking and updating the same tuple repeatedly led to some strange
multixacts being created which had several subtransactions of the same
parent transaction holding locks of the same strength.  However,
once a subxact of the current transaction holds a lock of a given
strength, it's not necessary to acquire the same lock again.  This made
some coding patterns much slower than required.

The fix is twofold.  First we change HeapTupleSatisfiesUpdate to return
HeapTupleBeingUpdated for the case where the current transaction is
already a single-xid locker for the given tuple; it used to return
HeapTupleMayBeUpdated for that case.  The new logic is simpler, and the
change to pgrowlocks is a testament to that: previously we needed to
check for the single-xid locker separately in a very ugly way.  That
test is simpler now.

As fallout from the HTSU change, some of its callers need to be amended
so that tuple-locked-by-own-transaction is taken into account in the
BeingUpdated case rather than the MayBeUpdated case.  For many of them
there is no difference; but heap_delete() and heap_update() now check
explicitly and do not grab the tuple lock in that case.

The HTSU change also means that routine MultiXactHasRunningRemoteMembers
introduced in commit 11ac4c73cb is no longer necessary and can be
removed; the case that used to require it is now handled naturally as
result of the changes to heap_delete and heap_update.

The second part of the fix to the performance issue is to adjust
heap_lock_tuple to avoid the slowness:

1. Previously we checked for the case that our own transaction already
held a strong enough lock and returned MayBeUpdated, but only in the
multixact case.  Now we do it for the plain Xid case as well, which
saves having to LockTuple.

2. If the current transaction is the only locker of the tuple (but with
a lock not as strong as what we need; otherwise it would have been
caught in the check mentioned above), we can skip sleeping on the
multixact, and instead go straight to create an updated multixact with
the additional lock strength.

3. Most importantly, make sure that both the single-xid-locker case and
the multixact-locker case optimization are applied always.  We do this
by checking both in a single place, rather than them appearing in two
separate portions of the routine -- something that is made possible by
the HeapTupleSatisfiesUpdate API change.  Previously we would only check
for the single-xid case when HTSU returned MayBeUpdated, and only
checked for the multixact case when HTSU returned BeingUpdated.  This
was at odds with what HTSU actually returned in one case: if our own
transaction was a locker in a multixact, it returned MayBeUpdated, so the
optimization never applied.  This is what led to the large multixacts in
the first place.

Per bug report #8470 by Oskari Saarenmaa.
2015-04-10 13:47:15 -03:00
Peter Eisentraut
8a0d34e4e4 libpq: Don't overwrite existing OpenSSL thread callbacks
If someone else already set the callbacks, don't overwrite them with
ours.  When unsetting the callbacks, only unset them if they point to
ours.

Author: Jan Urbański <wulczer@wulczer.org>
2015-04-09 20:45:34 -04:00
Magnus Hagander
a6f3c1f1e2 Show owner of types in psql \dT+ 2015-04-09 21:39:35 +02:00
Heikki Linnakangas
5d79b67bdd Make SSL regression test suite more portable by avoiding cp.
Use perl 'glob' and File::Copy instead of "cp". This takes us one step
closer to running the suite on Windows.

Michael Paquier
2015-04-09 22:07:18 +03:00
Heikki Linnakangas
0fb256dc82 Gitignore temp files generated by SSL regression suite
Michael Paquier
2015-04-09 22:02:21 +03:00
Magnus Hagander
c9970ab937 Fix typo
Michael Paquier
2015-04-09 14:15:39 +02:00
Andres Freund
06d36fa40c Fix typo in eb68379c3.
I'd accidentally neglected to rename PG_FORCE_NULL to BKI_FORCE_NULL in one
place.

Author: Jeevan Chalke
Discussion: CAM2+6=VPoow5PqgqiTjPX4QNeokb7op8aD_8Zg3QnHZMvvU0GQ@mail.gmail.com
2015-04-09 13:29:22 +02:00
Fujii Masao
17d436d2e8 Remove obsolete FORCE option from REINDEX.
The FORCE option has been marked "obsolete" since the very old version 7.4,
but was kept for backwards compatibility. Per discussion on pgsql-hackers,
we concluded that it's no longer worth supporting the option.
2015-04-09 11:31:42 +09:00
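
For example (assuming an index named idx_foo), the FORCE keyword is presumably
now rejected outright rather than accepted and ignored:

    REINDEX INDEX idx_foo;        -- unchanged
    REINDEX INDEX idx_foo FORCE;  -- previously a no-op, now an error
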
Alvaro Herrera
73206812cd Change SQLSTATE for event triggers "wrong context" message
When certain event-trigger-only functions are called outside the context
of an event trigger, they were reporting the "feature not supported" SQLSTATE,
which is somewhat misleading.  Create a new custom error code for such
uses instead.

Not backpatched since it may be seen as an undesirable behavioral
change.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAB7nPqQ-5NAkHQHh_NOm7FPep37NCiLKwPoJ2Yxb8TDoGgbYYA@mail.gmail.com
2015-04-08 15:26:50 -03:00
Alvaro Herrera
5df64f298d Fix autovacuum launcher shutdown sequence
It was previously possible to have the launcher re-execute its main loop
before shutting down if some other signal was received or an error
occurred after getting SIGTERM, as reported by Qingqing Zhou.

While investigating, Tom Lane further noticed that if autovacuum had
been disabled in the config file, it would misbehave by trying to start
a new worker instead of bailing out immediately -- it would consider
itself as invoked in emergency mode.

Fix both problems by checking the shutdown flag in a few more places.
These problems have existed since autovacuum was introduced, so
backpatch all the way back.
2015-04-08 13:19:49 -03:00
Bruce Momjian
e4f1e0d842 libpq: add newlines to SSPI error messages
Report by Tom Lane
2015-04-08 10:28:47 -04:00
Bruce Momjian
90a8b1f82b libpq: issue clear error message for nested service files
Previously an odd error message was generated.  Nested service files are
not supported.

Report by David Johnston
2015-04-08 10:26:58 -04:00
Fujii Masao
026fafde91 Fix typo in comment. 2015-04-08 20:55:43 +09:00
Fujii Masao
29407f9774 Add file_ops.c to GETTEXT_FILES in nls.mk.
Since file_ops.c contains translatable strings, it should have been listed
in GETTEXT_FILES.
2015-04-08 13:46:58 +09:00
Robert Haas
aea652abd3 Make trace_sort control abbreviation debug output for the text opclass.
This is consistent with what the new numeric support for abbreviated keys
now does, and seems much more convenient than having a separate compiler
define to control this debug output.

Peter Geoghegan
2015-04-07 22:45:17 -04:00
Fujii Masao
b216ad7bf1 Mark the second argument of pg_log as the translatable string in nls.mk. 2015-04-08 11:06:25 +09:00
Tom Lane
393de3a098 Fix assorted inconsistent function declarations.
While gcc doesn't complain if you declare a function "static" and then
define it not-static, other compilers do; and in any case the code is
highly misleading this way.  Add the missing "static" keywords to a
couple of recent patches.  Per buildfarm member pademelon.
2015-04-07 16:56:21 -04:00
Heikki Linnakangas
ee075fcb13 Fix reporting of missing or invalid command line arguments in pg_rewind.
pg_fatal never returns, so a multi-line message cannot be printed by
calling it twice.

Michael Paquier and Fujii Masao
2015-04-07 23:28:28 +03:00
Alvaro Herrera
4e17e32f53 Remove variable shadowing
Commit a2e35b53 should have removed the variable declaration in the
inner block, but didn't.  As a result, the returned address might end up
not being what was intended.
2015-04-07 17:14:00 -03:00
Heikki Linnakangas
8a06c36aff Fix process startup in pg_rewind.
Don't allow pg_rewind to run as root on Unix platforms, as any new or
replaced files in the data directory would become owned by root. On Windows,
it can run under a user that has Administrator rights, but a restricted
token needs to be used. This is the same as we do, e.g., in pg_resetxlog.

Also, add missing set_pglocale_pgservice() call, to fix localization.

Michael Paquier and Fujii Masao
2015-04-07 23:05:25 +03:00
Alvaro Herrera
e9a077cad3 pg_event_trigger_dropped_objects: add is_temp column
It now also reports temporary objects dropped that are local to the
backend.  Previously we weren't reporting any temp objects because it
was deemed unnecessary; but as it turns out, it is necessary if we want
to keep close track of DDL command execution inside one session.  Temp
objects are reported as living in schema pg_temp, which works because
such a schema-qualification always refers to the temp objects of the
current session.
2015-04-06 11:40:55 -03:00
Alvaro Herrera
70dc2db7f1 Fix object identities for pg_conversion objects
This was already fixed in 0d906798f, but I failed to update the
array-formatted case.  This is not backpatched, since this only affects
the code path introduced by commit a676201490.
2015-04-06 11:15:13 -03:00
Simon Riggs
35ecc24407 Add new test files for lock level patch 2015-04-05 12:03:58 -04:00
Andrew Dunstan
cf376a4adc Enable float8-byval as the default for 64 bit MSVC builds
This is a long-standing inconsistency that was probably just missed when
we got 64-bit MSVC builds. This brings the platform into line with all
other systems.
2015-04-05 11:49:49 -04:00
Simon Riggs
0ef0396ae1 Reduce lock levels of some trigger DDL and add FKs
Reduce lock levels to ShareRowExclusive for the following SQL
 CREATE TRIGGER (but not DROP or ALTER)
 ALTER TABLE ENABLE TRIGGER
 ALTER TABLE DISABLE TRIGGER
 ALTER TABLE … ADD CONSTRAINT FOREIGN KEY

Original work by Simon Riggs, extracted and refreshed by Andreas Karlsson
New test cases added by Andreas Karlsson
Reviewed by Noah Misch, Andres Freund, Michael Paquier and Simon Riggs
2015-04-05 11:37:08 -04:00
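
A rough illustration, with hypothetical table and trigger names, of what the
reduced lock level permits:

    -- Session 1: now takes only ShareRowExclusiveLock on orders
    BEGIN;
    ALTER TABLE orders DISABLE TRIGGER orders_audit;

    -- Session 2: plain reads are not blocked while session 1's
    -- transaction remains open
    SELECT count(*) FROM orders;
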
Tom Lane
ca6805338f Fix incorrect matching of subexpressions in outer-join plan nodes.
Previously we would re-use input subexpressions in all expression trees
attached to a Join plan node.  However, if it's an outer join and the
subexpression appears in the nullable-side input, this is potentially
incorrect for apparently-matching subexpressions that came from above
the outer join (ie, targetlist and qpqual expressions), because the
executor will treat the subexpression value as NULL when maybe it should
not be.

The case is fairly hard to hit because (a) you need a non-strict
subexpression (else NULL is correct), and (b) we don't usually compute
expressions in the outputs of non-toplevel plan nodes.  But we might do
so if the expressions are sort keys for a mergejoin, for example.

Probably in the long run we should make a more explicit distinction between
Vars appearing above and below an outer join, but that will be a major
planner redesign and not at all back-patchable.  For the moment, just hack
set_join_references so that it will not match any non-Var expressions
coming from nullable inputs to expressions that came from above the join.
(This is somewhat overkill, in that a strict expression could still be
matched, but it doesn't seem worth the effort to check that.)

Per report from Qingqing Zhou.  The added regression test case is based
on his example.

This has been broken for a very long time, so back-patch to all active
branches.
2015-04-04 19:55:15 -04:00
Tom Lane
c67a86f7da Fix TAP tests to use only standard command-line argument ordering.
Some of the TAP tests were supposing that PG programs would accept switches
after non-switch arguments on their command lines.  While GNU getopt_long()
does allow that, our own implementation does not, and it's nowhere
suggested in our documentation that such cases should work.  Adjust the
tests to use only the documented syntax.

Back-patch to 9.4, since without this the TAP tests fail when run with
src/port's getopt_long() implementation.

Michael Paquier
2015-04-04 13:34:23 -04:00
Robert Haas
368b7c601e Fix numeric abbreviation for --disable-float8-byval.
When committing abd94bcac4, I tried to make
it decide what kind of abbreviation to use based only on SIZEOF_DATUM,
without regard to USE_FLOAT8_BYVAL.  That attempt was a few bricks short
of a load, so try to fix it, and add a comment explaining what we're
about.

Patch by me; review (but not a full endorsement) by Andrew Gierth.
2015-04-03 22:34:37 -04:00
Tom Lane
b7e1652d5d Remove unnecessary variables in _hash_splitbucket().
Commit ed9cc2b5df made it unnecessary to pass
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to
have the internal nblkno variable either.  My compiler didn't complain
about that, but some did.  I also rearranged the use of oblkno a bit to
make that case more parallel.

Report and initial patch by Petr Jelinek, rearranged a bit by me.
Back-patch to all branches, like the previous patch.
2015-04-03 16:49:44 -04:00
Alvaro Herrera
9550e8348b Transform ALTER TABLE/SET TYPE/USING expr during parse analysis
This lets later stages have access to the transformed expression; in
particular it allows DDL-deparsing code during event triggers to pass
the transformed expression to ruleutils.c, so that the complete command
can be deparsed.

This shuffles the timing of the transform calls a bit: previously,
nothing was transformed during parse analysis, and only the
RELKIND_RELATION case was being handled during execution.  After this
patch, all expressions are transformed during parse analysis (including
those for relkinds other than RELATION), and the error for other
relation kinds is thrown only during execution.  So we do more work than
before to reject some bogus cases.  That seems acceptable.
2015-04-03 17:33:05 -03:00
Alvaro Herrera
4ff695b17d Add log_min_autovacuum_duration per-table option
This is useful to control autovacuum log volume, for situations where
monitoring only a set of tables is necessary.

Author: Michael Paquier
Reviewed by: A team led by Naoya Anzai (also including Akira Kurosawa,
Taiki Kondo, Huong Dangminh), Fujii Masao.
2015-04-03 11:55:50 -03:00
Alvaro Herrera
a75fb9b335 Have autovacuum workers listen to SIGHUP, too
They have historically ignored it, but it's been said to be useful at
times to change their settings mid-flight.

Author: Michael Paquier
2015-04-03 11:52:55 -03:00
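
A sketch of the effect (the parameter choice is arbitrary): a reload now also
reaches autovacuum workers that are already running, instead of being ignored
by them.

    ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '10ms';
    SELECT pg_reload_conf();
    -- running autovacuum workers re-read the changed setting
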
Fujii Masao
6e4bf4ecd3 Fix error handling of XLogReaderAllocate in case of OOM
Similarly to previous fix 9b8d478, commit 2c03216 has switched
XLogReaderAllocate() to use a set of palloc calls instead of malloc,
causing any callers of this function to fail with an error instead of
receiving a NULL pointer in case of out-of-memory error. Fix this by
using palloc_extended with MCXT_ALLOC_NO_OOM that will safely return
NULL in case of an OOM.

Michael Paquier, slightly modified by me.
2015-04-03 21:55:37 +09:00
Robert Haas
f85155e18c Change the way we decide whether to give up on abbreviated text keys.
Be more aggressive about aborting early on if it looks like it's not
helping, but be less aggressive about aborting later on, since it's
more expensive at that point, and also since we're currently aborting
in some cases where abbreviation can still deliver a substantial win.

Peter Geoghegan. Extensive testing by Tomas Vondra.
2015-04-03 08:32:05 -04:00
Fujii Masao
9b8d4782ba Rework handling of OOM when allocating record buffer in XLOG reader.
Commit 2c03216 changed allocate_recordbuf() so that it uses a palloc to
allocate the read buffer and fails immediately when an out-of-memory error
shows up, even though its callers still expect that NULL is returned in that
case. This bug is fixed by making allocate_recordbuf() use palloc_extended
with the MCXT_ALLOC_NO_OOM flag and return NULL in the OOM case.

Michael Paquier
2015-04-03 18:29:38 +09:00
Fujii Masao
8c8a886268 Add palloc_extended for frontend and backend.
This commit also adds pg_malloc_extended for frontend. These interfaces
can be used to control memory allocation at a lower level, using an interface
similar to MemoryContextAllocExtended. For example, the callers can specify
MCXT_ALLOC_NO_OOM if they want to suppress the "out of memory" error while
allocating the memory and handle a NULL return value.

Michael Paquier, reviewed by me.
2015-04-03 17:36:12 +09:00
Tom Lane
bc49d9324a Fix rare startup failure induced by MVCC-catalog-scans patch.
While a new backend nominally participates in sinval signaling starting
from the SharedInvalBackendInit call near the top of InitPostgres, it
cannot recognize sinval messages for unshared catalogs of its database
until it has set up MyDatabaseId.  This is not problematic for the catcache
or relcache, which by definition won't have loaded any data from or about
such catalogs before that point.  However, commit 568d4138c6
introduced a mechanism for re-using MVCC snapshots for catalog scans, and
made invalidation of those depend on recognizing relevant sinval messages.
So it's possible to establish a catalog snapshot to read pg_authid and
pg_database, then before we set MyDatabaseId, receive sinval messages that
should result in invalidating that snapshot --- but do not, because we
don't realize they are for our database.  This mechanism explains the
intermittent buildfarm failures we've seen since commit 31eae6028e.
That commit was not itself at fault, but it introduced a new regression
test that does reconnections concurrently with the "vacuum full pg_am"
command in vacuum.sql.  This allowed the pre-existing error to be exposed,
given just the right timing, because we'd fail to update our information
about how to access pg_am.  In principle any VACUUM FULL on a system
catalog could have created a similar hazard for concurrent incoming
connections.  Perhaps there are more subtle failure cases as well.

To fix, force invalidation of the catalog snapshot as soon as we've
set MyDatabaseId.

Back-patch to 9.4 where the error was introduced.
2015-04-03 00:07:29 -04:00
Robert Haas
05cce2f903 Repair stupid mistake in preprocessor directive. 2015-04-02 15:57:17 -04:00
Robert Haas
b3a5e76e12 After a crash, don't restart workers with BGW_NEVER_RESTART.
Amit Khandekar
2015-04-02 14:38:06 -04:00
Robert Haas
abd94bcac4 Use abbreviated keys for faster sorting of numeric datums.
Andrew Gierth, reviewed by Peter Geoghegan, with further tweaks by me.
2015-04-02 14:04:26 -04:00
Alvaro Herrera
00ee6c7672 autovacuum: Fix polarity of "wraparound" variable
Commit 0d83138974 inadvertently reversed the meaning of the
wraparound variable.  This causes vacuums which are not required for
wraparound to wait for locks to be acquired, and what is worse, it
allows wraparound vacuums to skip locked pages.

Bug reported by Jeff Janes in
http://www.postgresql.org/message-id/CAMkU=1xmTEiaY=5oMHsSQo5vd9V1Ze4kNLL0qN2eH0P_GXOaYw@mail.gmail.com
Analysis and patch by Kyotaro HORIGUCHI
2015-04-02 13:34:50 -03:00
Robert Haas
c02ef232c1 Add missing calls to DatumGetUInt32.
These were inadvertently omitted from the commit that introduced
abbreviated keys, commit 4ea51cdfe8.

Peter Geoghegan
2015-04-02 11:57:35 -04:00
Andres Freund
62e2a8dc2c Define integer limits independently from the system definitions.
In 83ff1618 we defined integer limits iff they're not provided by the
system. That turns out not to be the greatest idea because there are
different ways some datatypes can be represented. E.g. on OSX PG's 64bit
datatype will be a 'long int', but OSX unconditionally uses 'long
long'. That disparity then can lead to warnings, e.g. around printf
formats.

One way to fix that would be to back int64 using stdint.h's
int64_t. While a good idea it's not that easy to implement. We would
e.g. need to include stdint.h in our external headers, which we don't
today. Also computing the correct int64 printf formats in that case is
nontrivial.

Instead simply prefix the integer limits with PG_ and define them
unconditionally. I've adjusted all the references to them in code, but
not the ones in comments; the latter seems unnecessary to me.

Discussion: 20150331141423.GK4878@alap3.anarazel.de
2015-04-02 17:43:35 +02:00
Alvaro Herrera
e146ca6820 psql: fix \connect with URIs and conninfo strings
This is the second try at this, after fcef161729 failed miserably and
had to be reverted: as it turns out, libpq cannot depend on libpgcommon
after all. Instead of shuffling code in the master branch, make that one
just like 9.4 and accept the duplication.  (This was all my own mistake,
not the patch submitter's).

psql was already accepting conninfo strings as the first parameter in
\connect, but the way it worked wasn't sane; some of the other
parameters would get the previous connection's values, causing it to
connect to a completely unexpected server or, more likely, not finding
any server at all because of completely wrong combinations of
parameters.

Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather
than mix with the other arguments.  Also, change tab-completion to not
try to complete conninfo/URI-looking "dbnames" and document that
conninfos are accepted as first argument.

There was a weak consensus to backpatch this, because while the behavior
of using the dbname as a conninfo is nowhere documented for \connect, it
is reasonable to expect that it works because it does work in many other
contexts.  Therefore this is backpatched all the way back to 9.0.

Author: David Fetter, Andrew Dunstan.  Some editorialization by me
(probably earning a Gierth's "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
Robert Haas, Andrew Dunstan.
2015-04-02 12:30:57 -03:00
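
Illustrative psql usage after this fix (host and database names are invented):

    \connect postgresql://alice@db.example.com:5432/sales
    \connect "host=db.example.com dbname=sales user=alice"
    -- the conninfo/URI is taken as a complete specification rather than
    -- being mixed with leftover parameters from the previous connection
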
Robert Haas
f272098e91 Fix another bug in DSM_CREATE_NULL_IF_MAXSEGMENTS handling.
Amit Kapila
2015-04-02 10:39:24 -04:00
Robert Haas
4cd639baf4 Revert "psql: fix \connect with URIs and conninfo strings"
This reverts commit fcef161729, about
which both the buildfarm and my local machine are very unhappy.
2015-04-02 10:10:22 -04:00
Simon Riggs
7dae3cf68c Correct comment to use RS_EPHEMERAL 2015-04-02 07:45:19 -04:00
Alvaro Herrera
fcef161729 psql: fix \connect with URIs and conninfo strings
psql was already accepting conninfo strings as the first parameter in
\connect, but the way it worked wasn't sane; some of the other
parameters would get the previous connection's values, causing it to
connect to a completely unexpected server or, more likely, not finding
any server at all because of completely wrong combinations of
parameters.

Fix by explicitly checking for a conninfo-looking parameter in the
dbname position; if one is found, use its complete specification rather
than mix with the other arguments.  Also, change tab-completion to not
try to complete conninfo/URI-looking "dbnames" and document that
conninfos are accepted as first argument.

There was a weak consensus to backpatch this, because while the behavior
of using the dbname as a conninfo is nowhere documented for \connect, it
is reasonable to expect that it works because it does work in many other
contexts.  Therefore this is backpatched all the way back to 9.0.

To implement this, routines previously private to libpq have been
duplicated so that psql can decide what looks like a conninfo/URI
string.  In back branches, just duplicate the same code all the way back
to 9.2, where URIs were introduced; 9.0 and 9.1 have a simpler version.
In master, the routines are moved to src/common and renamed.

Author: David Fetter, Andrew Dunstan.  Some editorialization by me
(probably earning a Gierth's "Sloppy" badge in the process.)
Reviewers: Andrew Gierth, Erik Rijkers, Pavel Stěhule, Stephen Frost,
Robert Haas, Andrew Dunstan.
2015-04-01 20:00:07 -03:00
Tom Lane
89840d7d3f Provide real selectivity estimators for inet/cidr operators.
This patch fills in the formerly-stub networksel() and networkjoinsel()
estimation functions.  Those are used for << <<= >> >>= and && operators
on inet/cidr types.  The estimation is not perfect, certainly, because
we rely on the existing statistics collected for the inet btree operators.
But it's a long way better than nothing, and it's not clear that asking
ANALYZE to collect separate stats for these operators would be a win.

Emre Hasegeli, with reviews from Dilip Kumar and Heikki Linnakangas,
and some further hacking by me
2015-04-01 17:11:21 -04:00
Heikki Linnakangas
f770870d9e Move inet/cidr GiST opclass functions to correct place in header file.
They were accidentally placed under the GIN heading.

Andreas Karlsson
2015-04-01 19:20:45 +03:00
Fujii Masao
7a245bfe76 Make pg_ctl use SIGINT as a default shutdown signal.
The commit 0badb06 changed the default shutdown mode from smart to fast,
but forgot to change the default shutdown signal from SIGTERM to SIGINT.
2015-04-01 02:10:24 +09:00
Bruce Momjian
ed7b3b3811 initdb: remove unnecessary VACUUM FULL
Report by Peter Eisentraut
2015-03-31 11:51:39 -04:00
Bruce Momjian
0badb069bc pg_ctl: change default shutdown mode from 'smart' to 'fast'
Retain the order of the options in the documentation.
2015-03-31 11:46:27 -04:00
Bruce Momjian
9d9991c84e psql: add asciidoc output format
Patch by Szymon Guz, adjustments by me

Testing by Michael Paquier, Pavel Stehule
2015-03-31 11:33:25 -04:00
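
Usage sketch of the new output format:

    \pset format asciidoc
    SELECT 1 AS a, 'two' AS b;
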
Heikki Linnakangas
1d0db8de04 Remove spurious semicolons.
Petr Jelinek
2015-03-31 15:12:27 +03:00
Andrew Dunstan
fa1e5afa8a Run pg_upgrade and pg_resetxlog with restricted token on Windows
As with initdb these programs need to run with a restricted token, and
if they don't pg_upgrade will fail when run as a user with Adminstrator
privileges.

Backpatch to all live branches. On the development branch the code is
reorganized so that the restricted token code is now in a single
location. On the stable branches a less invasive change is made by
simply copying the relevant code to pg_upgrade.c and pg_resetxlog.c.

Patches and bug report from Muhammad Asif Naeem, reviewed by Michael
Paquier, slightly edited by me.
2015-03-30 17:07:52 -04:00
Tom Lane
ed9cc2b5df Fix bogus concurrent use of _hash_getnewbuf() in bucket split code.
_hash_splitbucket() obtained the base page of the new bucket by calling
_hash_getnewbuf(), but it held no exclusive lock that would prevent some
other process from calling _hash_getnewbuf() at the same time.  This is
contrary to _hash_getnewbuf()'s API spec and could in fact cause failures.
In practice, we must only call that function while holding write lock on
the hash index's metapage.

An additional problem was that we'd already modified the metapage's bucket
mapping data, meaning that failure to extend the index would leave us with
a corrupt index.

Fix both issues by moving the _hash_getnewbuf() call to just before we
modify the metapage in _hash_expandtable().

Unfortunately there's still a large problem here, which is that we could
also incur ENOSPC while trying to get an overflow page for the new bucket.
That would leave the index corrupt in a more subtle way, namely that some
index tuples that should be in the new bucket might still be in the old
one.  Fixing that seems substantially more difficult; even preallocating as
many pages as we could possibly need wouldn't entirely guarantee that the
bucket split would complete successfully.  So for today let's just deal
with the base case.

Per report from Antonin Houska.  Back-patch to all active branches.
2015-03-30 16:40:05 -04:00
Alvaro Herrera
97690ea6e8 Change array_offset to return subscripts, not offsets
... and rename it and its sibling array_offsets to array_position and
array_positions, to account for the changed behavior.

Having the functions return subscripts better matches existing practice,
and is better suited to using the result value as a subscript into the
array directly.  For one-based arrays, the new definition is identical
to what was originally committed.

(We use the term "subscript" in the documentation, which is what we use
whenever we talk about arrays; but the functions themselves are named
using the word "position" to match the standard-defined POSITION()
functions.)

Author: Pavel Stěhule
Behavioral problem noted by Dean Rasheed.
2015-03-30 16:13:21 -03:00
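
For example, with one-based arrays:

    SELECT array_position(ARRAY['sun','mon','tue'], 'mon');   -- returns 2
    SELECT array_positions(ARRAY[1, 4, 3, 1, 4], 4);          -- returns {2,5}
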
Alvaro Herrera
0853630159 Fix lost persistence setting during REINDEX INDEX
ReindexIndex() trusts a parser-built RangeVar with the persistence to
use for the new copy of the index; but the parser naturally does not
know what's the persistence of the original index.  To find out the
correct persistence, grab it from relcache.

This bug was introduced by commit 85b506bbfc, and therefore no
backpatch is necessary.

Bug reported by Thom Brown, analysis and patch by Michael Paquier; test
case provided by Fabrízio de Royes Mello.
2015-03-30 16:01:44 -03:00
Tom Lane
542320c2bd Be more careful about printing constants in ruleutils.c.
The previous coding in get_const_expr() tried to avoid quoting integer,
float, and numeric literals if at all possible.  While that looks nice,
it means that dumped expressions might re-parse to something that's
semantically equivalent but not the exact same parsetree; for example
a FLOAT8 constant would re-parse as a NUMERIC constant with a cast to
FLOAT8.  Though the result would be the same after constant-folding,
this is problematic in certain contexts.  In particular, Jeff Davis
pointed out that this could cause unexpected failures in ALTER INHERIT
operations because of child tables having not-exactly-equivalent CHECK
expressions.  Therefore, favor correctness over legibility and dump
such constants in quotes except in the limited cases where they'll
be interpreted as the same type even without any casting.

This results in assorted small changes in the regression test outputs,
and will affect display of user-defined views and rules similarly.
The odds of that causing problems in the field seem non-negligible;
given the lack of previous complaints, it seems best not to change
this in the back branches.
2015-03-30 14:59:49 -04:00
Tom Lane
701dcc983e Fix rare core dump in BackendIdGetTransactionIds().
BackendIdGetTransactionIds() neglected the possibility that the PROC
pointer in a ProcState array entry is null.  In current usage, this could
only crash if the other backend had exited since pgstat_read_current_status
saw it as active, which is a pretty narrow window.  But it's reachable in
the field, per bug #12918 from Vladimir Borodin.

Back-patch to 9.4 where the faulty code was introduced.
2015-03-30 13:05:27 -04:00
Heikki Linnakangas
0633a60f4d Add index-only scan support to range type GiST opclass.
Andreas Karlsson
2015-03-30 13:22:38 +03:00
Tom Lane
1c41e2a998 Clean up all the cruft after a pg_rewind test run.
regress_log temp directory was properly .gitignore'd, which may explain
why it got left out of the "make clean" action.
2015-03-29 20:54:37 -04:00
Tom Lane
c67f366fa9 Fix multiple bugs and infelicities in pg_rewind.
Bugs all spotted by Coverity, including wrong realloc() size request
and memory leaks.  Cosmetic improvements by me.

The usage of the global variable "filemap" here is still pretty awful,
but at least I got rid of the gratuitous aliasing in several routines
(which was helping to annoy Coverity, as well as being a bug risk).
2015-03-29 20:02:14 -04:00
Tom Lane
e4cbfd673d Add vacuum_delay_point call in compute_index_stats's per-sample-row loop.
Slow functions in index expressions might cause this loop to take long
enough to make it worth being cancellable.  Probably it would be enough
to call CHECK_FOR_INTERRUPTS here, but for consistency with other
per-sample-row loops in this file, let's use vacuum_delay_point.

Report and patch by Jeff Janes.  Back-patch to all supported branches.
2015-03-29 15:04:09 -04:00
Tom Lane
1601830ec2 Make ginbuild's funcCtx be independent of its tmpCtx.
Previously the funcCtx was a child of the tmpCtx, but that was broken
by commit eaa5808e8e, which made
MemoryContextReset() delete, not reset, child contexts.  The behavior of
having a tmpCtx reset also clear the other context seems rather dubious
anyway, so let's just disentangle them.  Per report from Erik Rijkers.

In passing, fix badly-inaccurate comments about these contexts.
2015-03-29 14:02:58 -04:00
Tom Lane
9a8e23311c Remove a couple other vestigial yylex() declarations.
These were workarounds for a long-gone flex bug; all supported versions
of flex emit an extern declaration as expected.
2015-03-29 13:12:28 -04:00
Andrew Dunstan
7655f4ccea Add a pager_min_lines setting to psql
If set, the pager will not be used unless this many lines are to be
displayed, even if that is more than the screen depth. Default is zero,
meaning it's disabled.

There is probably more work to be done in giving the user control over
when the pager is used, particularly when wide output forces use of the
pager regardless of how many lines there are, but this is a start.
2015-03-28 11:07:41 -04:00
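
Usage sketch (the threshold value is arbitrary):

    \pset pager_min_lines 50
    -- the pager is invoked only when at least 50 lines would be displayed,
    -- even if the output already exceeds the screen height
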
Heikki Linnakangas
3a20b0e7b6 Add index-only scan support to inet GiST opclass.
Andreas Karlsson
2015-03-28 15:11:53 +02:00
Peter Eisentraut
16bbb96a2b Fix whitespace 2015-03-27 19:50:55 -04:00
Heikki Linnakangas
55b59eda13 Fix GiST index-only scans for opclasses with different storage type.
We cannot use the index's tuple descriptor directly to describe the index
tuples returned in an index-only scan. That's because the index might use
a different datatype for the values stored on disk than the type originally
indexed. As long as they were both pass-by-ref, it worked, but will not work
for pass-by-value types of different sizes. I noticed this as a crash when I
started hacking a patch to add fetch methods to btree_gist.
2015-03-26 23:07:52 +02:00
Tom Lane
785941cdc3 Tweak __attribute__-wrapping macros for better pgindent results.
This improves on commit bbfd7edae5 by
making two simple changes:

* pg_attribute_noreturn now takes parentheses, ie pg_attribute_noreturn().
Likewise pg_attribute_unused(), pg_attribute_packed().  This reduces
pgindent's tendency to misformat declarations involving them.

* attributes are now always attached to function declarations, not
definitions.  Previously some places were taking creative shortcuts,
which were not merely candidates for bad misformatting by pgindent
but often were outright wrong anyway.  (It does little good to put a
noreturn annotation where callers can't see it.)  In any case, if
we would like to believe that these macros can be used with non-gcc
compilers, we should avoid gratuitous variance in usage patterns.

I also went through and manually improved the formatting of a lot of
declarations, and got rid of excessively repetitive (and now obsolete
anyway) comments informing the reader what pg_attribute_printf is for.
2015-03-26 14:03:25 -04:00
Heikki Linnakangas
d04c8ed904 Add support for index-only scans in GiST.
This adds a new GiST opclass method, 'fetch', which is used to reconstruct
the original Datum from the value stored in the index. Also, the 'canreturn'
index AM interface function gains a new 'attno' argument. That makes it
possible to use index-only scans on a multi-column index where some of the
opclasses support index-only scans but some do not.

This patch adds support in the box and point opclasses. Other opclasses
can be added later as follow-on patches (btree_gist would be particularly
interesting).

Anastasia Lubennikova, with additional fixes and modifications by me.
2015-03-26 19:12:00 +02:00
Heikki Linnakangas
8fa393a6d7 Minor cleanup of GiST code, for readability.
Remove the gistcentryinit function, inlining the relevant part of it into
the only caller.
2015-03-26 19:11:54 +02:00
Tom Lane
bed756a820 Suppress some unused-variable complaints in new LOCK_DEBUG code.
Jeff Janes
2015-03-26 12:00:30 -04:00
Tatsuo Ishii
656ea810e5 Make SyncRepWakeQueue a static function
It is only used in src/backend/replication/syncrep.c.

Back-patch to all supported branches except 9.1, which already declares
the function as static.
2015-03-26 10:34:08 +09:00
Tom Lane
a4847fc3ef Add an ASSERT statement in plpgsql.
This is meant to make it easier to insert simple debugging cross-checks
in plpgsql functions.

Pavel Stehule, reviewed by Jim Nasby
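
A minimal sketch of the intended usage (the condition plus an optional
message text):

 DO $$
 DECLARE
   n int := 42;
 BEGIN
   ASSERT n > 0, 'n must be positive';
 END;
 $$;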
2015-03-25 19:05:32 -04:00
Andres Freund
83ff1618bc Centralize definition of integer limits.
Several submitted and even committed patches have run into the problem
that C89, our baseline, does not provide minimum/maximum values for
various integer datatypes. C99's stdint.h does, but we can't rely on
it.

Several parts of the code defined limits locally, so instead centralize
the definitions to c.h.

This patch also changes the more obvious usages of literal limit values;
there's more places that could be changed, but it's less clear whether
it's beneficial to change those.

Author: Andrew Gierth
Discussion: 87619tc5wc.fsf@news-spur.riddles.org.uk
2015-03-25 22:39:42 +01:00
Alvaro Herrera
bdc3d7fa23 Return ObjectAddress in many ALTER TABLE sub-routines
Since commit a2e35b53c3, most CREATE and ALTER commands return the
ObjectAddress of the affected object.  This is useful for event triggers
to try to figure out exactly what happened.  This patch extends this
idea a bit further to cover ALTER TABLE as well: an auxiliary
ObjectAddress is returned for each of several subcommands of ALTER
TABLE.  This makes it possible to decode with precision what happened
during execution of any ALTER TABLE command; for instance, which
constraint was added by ALTER TABLE ADD CONSTRAINT, or which parent got
dropped from the parents list by ALTER TABLE NO INHERIT.

As with the previous patch, there is no immediate user-visible change
here.

This is all really just continuing what c504513f83 started.

Reviewed by Stephen Frost.
2015-03-25 17:17:56 -03:00
Tom Lane
06bf0dd6e3 Upgrade src/port/rint.c to be POSIX-compliant.
The POSIX spec says that rint() rounds halfway cases to nearest even.
Our substitute implementation failed to do that, rather rounding halfway
cases away from zero; and it also got some other cases (such as minus
zero) wrong.  This led to observable cross-platform differences, as
reported in bug #12885 from Rich Schaaf; in particular, casting from
float to int didn't honor round-to-nearest-even on builds using rint.c.

Implement something that attempts to cover all cases per spec, and add
some simple regression tests so that we'll notice if any platforms still
get this wrong.

Although this is a bug fix, no back-patch, as a behavioral change in
the back branches was agreed not to be a good idea.

Pedro Gimeno Fortea, reviewed by Michael Paquier and myself
2015-03-25 15:54:18 -04:00
Kevin Grittner
2ed5b87f96 Reduce pinning and buffer content locking for btree scans.
Even though the main benefit of the Lehman and Yao algorithm for
btrees is that no locks need be held between page reads in an
index search, we were holding a buffer pin on each leaf page after
it was read until we were ready to read the next one.  The reason
was so that we could treat this as a weak lock to create an
"interlock" with vacuum's deletion of heap line pointers, even
though our README file pointed out that this was not necessary for
a scan using an MVCC snapshot.

The main goal of this patch is to reduce the blocking of vacuum
processes by in-progress btree index scans (including a cursor
which is idle), but the code rearrangement also allows for one
less buffer content lock to be taken when a forward scan steps from
one page to the next, which results in a small but consistent
performance improvement in many workloads.

This patch leaves behavior unchanged for some cases, which can be
addressed separately so that each case can be evaluated on its own
merits.  These unchanged cases are when a scan uses a non-MVCC
snapshot, an index-only scan, and a scan of a btree index for which
modifications are not WAL-logged.  If later patches allow all of
these cases to drop the buffer pin after reading a leaf page, then
the btree vacuum process can be simplified; it will no longer need
the "super-exclusive" lock to delete tuples from a page.

Reviewed by Heikki Linnakangas and Kyotaro Horiguchi
2015-03-25 14:24:43 -05:00
Alvaro Herrera
8217fb1441 Add OID output argument to DefineTSConfiguration
... which is set to the OID of a copied text search config, whenever the
COPY clause is used.

This is in the spirit of commit a2e35b53c3.
2015-03-25 15:57:08 -03:00
Alvaro Herrera
b3196e65f5 Fix bug for array-formatted identities of user mappings
I failed to realize that server names reported in the object args array
would get quoted, which is wrong; remove that, making sure that it's
only quoted in the string-formatted identity.

This bug was introduced by my commit cf34e373, which was backpatched,
but since object name/args arrays are new in commit a676201490, there
is no need to backpatch this any further.
2015-03-25 14:28:34 -03:00
Alvaro Herrera
dc8e05295a Fix gram.y comment to match reality
There are other comments in there that don't precisely match what's
implemented, but this one confused me enough to be worth fixing.
2015-03-25 14:16:47 -03:00
Bruce Momjian
376a0c4547 psql: show proper row count in \x mode for zero-column output
Also, fix pager enable selection for such cases, and other cleanups for
zero-column output.

Report by Thom Brown
2015-03-24 21:04:10 -04:00
Bruce Momjian
1d8198bb44 Add support for ALTER TABLE IF EXISTS ... RENAME CONSTRAINT
Also add regression test.  Previously this was documented to work, but
didn't.
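
Illustrative form of the now-working statement (names hypothetical):

 ALTER TABLE IF EXISTS no_such_table
   RENAME CONSTRAINT some_old_name TO some_new_name;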
2015-03-24 19:52:47 -04:00
Tom Lane
feeb526cfe Fix ExecOpenScanRelation to take a lock on a ROW_MARK_COPY relation.
ExecOpenScanRelation assumed that any relation listed in the ExecRowMark
list has been locked by InitPlan; but this is not true if the rel's
markType is ROW_MARK_COPY, which is possible if it's a foreign table.

In most (possibly all) cases, failure to acquire a lock here isn't really
problematic because the parser, planner, or plancache would have taken the
appropriate lock already.  In principle though it might leave us vulnerable
to working with a relation that we hold no lock on, and in any case if the
executor isn't depending on previously-taken locks otherwise then it should
not do so for ROW_MARK_COPY relations.

Noted by Etsuro Fujita.  Back-patch to all active versions, since the
inconsistency has been there a long time.  (It's almost certainly
irrelevant in 9.0, since that predates foreign tables, but the code's
still wrong on its own terms.)
2015-03-24 15:53:06 -04:00
Tom Lane
e5f455f59f Apply table and domain CHECK constraints in name order.
Previously, CHECK constraints of the same scope were checked in whatever
order they happened to be read from pg_constraint.  (Usually, but not
reliably, this would be creation order for domain constraints and reverse
creation order for table constraints, because of differing implementation
details.)  Nondeterministic results of this sort are problematic at least
for testing purposes, and in discussion it was agreed to be a violation of
the principle of least astonishment.  Therefore, borrow the principle
already established for triggers, and apply such checks in name order
(using strcmp() sort rules).  This lets users control the check order
if they have a mind to.

Domain CHECK constraints still follow the rule of checking lower nested
domains' constraints first; the name sort only applies to multiple
constraints attached to the same domain.

In passing, I failed to resist the temptation to wordsmith a bit in
create_domain.sgml.

Apply to HEAD only, since this could result in a behavioral change in
existing applications, and the potential regression test failures have
not actually been observed in our buildfarm.
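
An illustrative sketch of using names to force a particular check order
(constraint names hypothetical):

 CREATE TABLE t (
   x int,
   CONSTRAINT a_checked_first  CHECK (x IS NOT NULL),
   CONSTRAINT b_checked_second CHECK (x > 0)
 );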
2015-03-23 16:59:35 -04:00
Alvaro Herrera
871293fb7f vacuumdb: Check result status of PQsendQuery
Noticed by Coverity
2015-03-23 15:57:11 -03:00
Heikki Linnakangas
4babae1a86 Try to fix MSVC build of pg_rewind.
It worked in my Windows VM with VS2013, but buildfarm animal mastodon,
running MSVC 2005, was not happy. Amit Kapila also reported a similar error
earlier in his environment. Let's see if this helps.
2015-03-23 20:26:49 +02:00
Heikki Linnakangas
61081e75c6 Add pg_rewind, for re-synchronizing a master server after failback.
Earlier versions of this tool were available (and still are) on github.

Thanks to Michael Paquier, Alvaro Herrera, Peter Eisentraut, Amit Kapila,
and Satoshi Nagayasu for review.
2015-03-23 19:47:52 +02:00
Andres Freund
87cec51d3a Don't delay replication for less than recovery_min_apply_delay's resolution.
Recovery delays are implemented by waiting on a latch, and latches take
milliseconds as a parameter. The required amount of waiting was computed
using microsecond resolution though and the wait loop's abort condition
was checking the delay in microseconds as well.  This could lead to
short spurts of busy looping when the overall wait time was below a
millisecond, but above 0 microseconds.

Instead just formulate the wait loop's abort condition in millisecond
granularity as well. Given that that's recovery_min_apply_delay
resolution, it seems harmless to not wait for less than a millisecond.

Backpatch to 9.4 where recovery_min_apply_delay was introduced.

Discussion: 20150323141819.GH26995@alap3.anarazel.de
2015-03-23 16:51:11 +01:00
Andres Freund
a1105c3dd4 Fix copy & paste error in 4f1b890b13.
Due to the bug delayed standbys would not delay when applying prepared
transactions.

Discussion: CAB7nPqT6BO1cCn+sAyDByBxA4EKZNAiPi2mFJ=ANeZmnmewRyg@mail.gmail.com

Michael Paquier via Coverity.
2015-03-23 15:53:40 +01:00
Robert Haas
372b97097e Remove ill-advised pre-check for DSM segment exhaustion.
dsm_control->nitems never decreases, so this is testing whether the
server has *ever* run out of DSM segments, not whether it is
*currently* out of DSM segments.

Reported off-list by Amit Kapila.
2015-03-23 09:58:56 -04:00
Bruce Momjian
33a2c5ecd6 to_char: revert cc0d90b73b
Revert "to_char(float4/8):  zero pad to specified length".  There are
too many platform-specific problems, and the proper rounding is missing.
Also revert companion patch 9d61b9953c.
2015-03-22 22:56:56 -04:00
Andres Freund
59b0a98af0 Fix minor copy & pasto in the int128 accumulator patch.
It's unlikely that using PG_GETARG_INT16 instead of PG_GETARG_INT32 in
this place can cause actual problems, but this still should be fixed.
2015-03-22 19:53:38 +01:00
Tom Lane
cb1ca4d800 Allow foreign tables to participate in inheritance.
Foreign tables can now be inheritance children, or parents.  Much of the
system was already ready for this, but we had to fix a few things of
course, mostly in the area of planner and executor handling of row locks.

As side effects of this, allow foreign tables to have NOT VALID CHECK
constraints (and hence to accept ALTER ... VALIDATE CONSTRAINT), and to
accept ALTER SET STORAGE and ALTER SET WITH/WITHOUT OIDS.  Continuing to
disallow these things would've required bizarre and inconsistent special
cases in inheritance behavior.  Since foreign tables don't enforce CHECK
constraints anyway, a NOT VALID one is a complete no-op, but that doesn't
mean we shouldn't allow it.  And it's possible that some FDWs might have
use for SET STORAGE or SET WITH OIDS, though doubtless they will be no-ops
for most.

An additional change in support of this is that when a ModifyTable node
has multiple target tables, they will all now be explicitly identified
in EXPLAIN output, for example:

 Update on pt1  (cost=0.00..321.05 rows=3541 width=46)
   Update on pt1
   Foreign Update on ft1
   Foreign Update on ft2
   Update on child3
   ->  Seq Scan on pt1  (cost=0.00..0.00 rows=1 width=46)
   ->  Foreign Scan on ft1  (cost=100.00..148.03 rows=1170 width=46)
   ->  Foreign Scan on ft2  (cost=100.00..148.03 rows=1170 width=46)
   ->  Seq Scan on child3  (cost=0.00..25.00 rows=1200 width=46)

This was done mainly to provide an unambiguous place to attach "Remote SQL"
fields, but it is useful for inherited updates even when no foreign tables
are involved.

Shigeru Hanada and Etsuro Fujita, reviewed by Ashutosh Bapat and Kyotaro
Horiguchi, some additional hacking by me
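
As an illustrative sketch (server name and columns hypothetical), a foreign
table can now be declared directly as an inheritance child:

 CREATE TABLE pt1 (id int, payload text);
 CREATE FOREIGN TABLE ft1 (id int, payload text)
   INHERITS (pt1) SERVER some_server;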
2015-03-22 13:53:21 -04:00
Bruce Momjian
8ac356cde3 rm src/test/performance
Last changed in 1997.

Report by Andres Freund
2015-03-21 22:21:20 -04:00
Bruce Momjian
1c7087af42 Add TOAST table to pg_shseclabel for long label use
Report by Andres Freund
2015-03-21 22:14:49 -04:00
Bruce Momjian
34afbba84e Use mmap MAP_NOSYNC option to limit shared memory writes
mmap() is rarely used for shared memory, but when it is, this option is
useful, particularly on the BSDs.

Patch by Sean Chittenden
2015-03-21 22:06:19 -04:00
Bruce Momjian
9d61b9953c to_char(float4/8): don't print "junk" digits
Commit cc0d90b73b also avoids printing
junk digits, which are digits that are beyond the precision of the
underlying type.
2015-03-21 21:50:03 -04:00
Bruce Momjian
cc0d90b73b to_char(float4/8): zero pad to specified length
Previously, zero padding was limited to the internal length, rather than
the specified length.  This allows it to match to_char(int/numeric), which
always pads to the specified length.

Regression tests added.

BACKWARD INCOMPATIBILITY
2015-03-21 21:43:36 -04:00
Heikki Linnakangas
1933a5bbc8 Make pg_xlogdump MSVC build work more like others.
Instead of copying xlogreader.c and *desc.c files into the source directory,
build them where they are. That's what we do for other binaries that need to
compile and link in files from elsewhere in the source tree.

The commit history suggests that it was done this way because of issues with
older versions of MSVC. I think this should work, but we'll see if the
buildfarm complains.
2015-03-21 11:56:48 +02:00
Bruce Momjian
30a5ce8f5d pg_recvlogical: update --help description
Patch by Euler Taveira
2015-03-20 22:15:48 -04:00
Bruce Momjian
0c8fa710b6 C comment: clarify SQL command mention
Patch by Amit Langote
2015-03-20 18:30:30 -04:00
Bruce Momjian
159134b695 vacuumdb --help text: clarify analyze-only
Patch by Mats Erik Andersson
2015-03-20 17:17:30 -04:00
Bruce Momjian
13a10c0ccd C comment: update lock level mention in comment
Patch by Etsuro Fujita
2015-03-20 08:31:13 -04:00
Andres Freund
959277a4f5 Use 128-bit math to accelerate some aggregation functions.
On platforms where we support 128bit integers, use them to implement
faster transition functions for sum(int8), avg(int8),
var_*(int2/int4),stdev_*(int2/int4). Where not supported continue to use
numeric as a transition type.

In some synthetic benchmarks this has been shown to provide significant
speedups.

Bumps catversion.

Discussion: 544BB5F1.50709@proxel.se
Author: Andreas Karlsson
Reviewed-By: Peter Geoghegan, Petr Jelinek, Andres Freund,
    Oskari Saarenmaa, David Rowley
2015-03-20 10:29:32 +01:00
Andres Freund
8122e1437e Add, optional, support for 128bit integers.
We will, for the foreseeable future, not expose 128 bit datatypes to
SQL. But being able to use 128bit math will allow us, in a later patch,
to use 128bit accumulators for some aggregates; leading to noticeable
speedups over using numeric.

So far we only detect a gcc/clang extension that supports 128bit math,
but no 128bit literals, and no *printf support. We might want to expand
this in the future to further compilers, if there are any that
provide similar support.

Discussion: 544BB5F1.50709@proxel.se
Author: Andreas Karlsson, with significant editorializing by me
Reviewed-By: Peter Geoghegan, Oskari Saarenmaa
2015-03-20 10:26:17 +01:00
Peter Eisentraut
28beb69f8b Fix whitespace 2015-03-19 22:18:46 -04:00
Bruce Momjian
05d1910c1c regression tests: remove polygon diagrams
The diagrams were inaccurate.

Report by Emre Hasegeli
2015-03-19 22:10:52 -04:00
Bruce Momjian
788e799ed4 psql: allow DROP INDEX CONCURRENTLY in AUTOCOMMIT off mode
Previously this threw an error.

Patch by Feike Steenbergen
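
Illustrative psql session (index name hypothetical):

 \set AUTOCOMMIT off
 DROP INDEX CONCURRENTLY some_idx;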
2015-03-19 21:17:10 -04:00
Stephen Frost
bf03889996 GetUserId() changes to has_privs_of_role()
The pg_stat and pg_signal-related functions have been using GetUserId()
instead of has_privs_of_role() for checking if the current user should
be able to see details in pg_stat_activity or signal other processes,
requiring a user to do 'SET ROLE' for inherited roles for a permissions
check, unlike other permissions checks.

This patch changes that behavior to, instead, act like most other
permission checks and use has_privs_of_role(), removing the 'SET ROLE'
need.  Documentation and error messages updated accordingly.

Per discussion with Alvaro, Peter, Adam (though not using Adam's patch),
and Robert.

Reviewed by Jeevan Chalke.
2015-03-19 15:02:33 -04:00
Robert Haas
12968cf408 Add flags argument to dsm_create.
Right now, there's only one flag, DSM_CREATE_NULL_IF_MAXSEGMENTS,
which suppresses the error that would normally be thrown when the
maximum number of segments already exists, instead returning NULL.
It might be useful to add more flags in the future, such as one to
ignore allocation errors, but I haven't done that here.
2015-03-19 13:03:03 -04:00
Robert Haas
bf740ce9e5 Fix status reporting for terminated bgworkers that were never started.
Previously, GetBackgroundWorkerPid() would return BGWH_NOT_YET_STARTED
if the slot used for the worker registration had not been reused by
unrelated activity, and BGWH_STOPPED if it had.  Either way, a process
that had requested notification when the state of one of its
background workers changed did not receive such notifications.  Fix
things so that GetBackgroundWorkerPid() always returns BGWH_STOPPED in
this situation, so that we do not erroneously give waiters the
impression that the worker will eventually be started; and send
notifications just as we would if the process terminated after having
been started, so that it's possible to wait for the postmaster to
process a worker termination request without polling.

Discovered by Amit Kapila during testing of parallel sequential scan.
Analysis and fix by me.  Back-patch to 9.4; there may not be anyone
relying on this interface yet, but if anyone is, the new behavior is a
clear improvement.
2015-03-19 11:04:09 -04:00
Alvaro Herrera
13dbc7a824 array_offset() and array_offsets()
These functions return the offset position or positions of a value in an
array.

Author: Pavel Stěhule
Reviewed by: Jim Nasby
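
A hedged illustration (the argument order and 1-based result positions are
assumptions here):

 SELECT array_offset(ARRAY['a','b','c','b'], 'b');   -- expected: 2
 SELECT array_offsets(ARRAY['a','b','c','b'], 'b');  -- expected: {2,4}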
2015-03-18 16:01:34 -03:00
Alvaro Herrera
f9dead5624 Install shared libraries to bin/ in Windows under MSVC
Since commit cb4a3b04 we were already doing this for the Cygwin/mingw
toolchains, but MSVC had not been updated to do it.  At Install.pm time,
the Makefile (or GNUmakefile) is inspected, and if a line matching
SO_MAJOR_VERSION is found (indicating a shared library is being built),
then files with the .dll extension are set to be installed in bin/
rather than lib/, while files with .lib extension are installed in lib/.
This makes the MSVC toolchain up to date with cygwin/mingw.

This removes ad-hoc hacks that were copying files into bin/ or lib/
manually (libpq.dll in particular was already being copied into bin).
So while this is a rather ugly kludge, it's still cleaner than what was
there before.

Author: Michael Paquier
Reviewed by: Asif Naeem
2015-03-18 15:16:29 -03:00
Alvaro Herrera
b8d226b4f9 Setup cursor position for schema-qualified elements
This makes any errors thrown while looking up such schemas report the
position of the error.

Author: Ryan Kelly
Reviewed by: Jeevan Chalke, Tom Lane
2015-03-18 14:48:02 -03:00
Alvaro Herrera
0d83138974 Rationalize vacuuming options and parameters
We were involving the parser too much in setting up initial vacuuming
parameters.  This patch moves that responsibility elsewhere to simplify
code, and also to make future additions easier.  To do this, create a
new struct VacuumParams which is filled just prior to vacuum execution,
instead of at parse time; for user-invoked vacuuming this is set up in a
new function ExecVacuum, while autovacuum sets it up by itself.

While at it, add a new member VACOPT_SKIPTOAST to enum VacuumOption,
only set by autovacuum, which is used to disable vacuuming of the toast
table instead of the old do_toast parameter; this relieves the argument
list of vacuum() and some callees a bit.  This partially makes up for
having added more arguments in an effort to avoid having autovacuum from
constructing a VacuumStmt parse node.

Author: Michael Paquier. Some tweaks by Álvaro
Reviewed by: Robert Haas, Stephen Frost, Álvaro Herrera
2015-03-18 11:52:33 -03:00
Alvaro Herrera
a190738457 Fix out-of-array-bounds compiler warning
Since the array length check is using a post-increment operator, the
compiler complains that there's a potential write to one element beyond
the end of the array.  This is not possible currently: the only path to
this function is through pg_get_object_address(), which already verifies
that the input array is no more than two elements in length.  Still, a
bug is a bug.

No idea why my compiler doesn't complain about this ...

Pointed out by Dean Rasheed and Peter Eisentraut
2015-03-16 22:35:45 -03:00
Alvaro Herrera
a61fd5334e Support opfamily members in get_object_address
In the spirit of 890192e99a and 4464303405: have get_object_address
understand individual pg_amop and pg_amproc objects.  There is no way to
refer to such objects directly in the grammar -- rather, they are almost
always considered an integral part of the opfamily that contains them.
(The only case that deals with them individually is ALTER OPERATOR
FAMILY ADD/DROP, which carries the opfamily address separately and thus
does not need it to be part of each added/dropped element's address.)
In event triggers it becomes possible to become involved with individual
amop/amproc elements, and this commit enables pg_get_object_address to
do so as well.

To make the overall coding simpler, this commit also slightly changes
the get_object_address representation for opclasses and opfamilies:
instead of having the AM name in the objargs array, I moved it as the
first element of the objnames array.  This enables the new code to use
objargs for the type names used by pg_amop and pg_amproc.

Reviewed by: Stephen Frost
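
A hedged sketch of the new representation, via pg_get_object_address (the
exact object-type string is an assumption here):

 SELECT * FROM pg_get_object_address(
   'operator family',
   '{btree,integer_ops}',   -- the AM name now leads the objnames array
   '{}');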
2015-03-16 12:06:34 -03:00
Tom Lane
7b8b8a4331 Improve representation of PlanRowMark.
This patch fixes two inadequacies of the PlanRowMark representation.

First, that the original LockingClauseStrength isn't stored (and cannot be
inferred for foreign tables, which always get ROW_MARK_COPY).  Since some
PlanRowMarks are created out of whole cloth and don't actually have an
ancestral RowMarkClause, this requires adding a dummy LCS_NONE value to
enum LockingClauseStrength, which is fairly annoying but the alternatives
seem worse.  This fix allows getting rid of the use of get_parse_rowmark()
in FDWs (as per the discussion around commits 462bd95705 and
8ec8760fc8), and it simplifies some things elsewhere.

Second, that the representation assumed that all child tables in an
inheritance hierarchy would use the same RowMarkType.  That's true today
but will soon not be true.  We add an "allMarkTypes" field that identifies
the union of mark types used in all a parent table's children, and use
that where appropriate (currently, only in preprocess_targetlist()).

In passing fix a couple of minor infelicities left over from the SKIP
LOCKED patch, notably that _outPlanRowMark still thought waitPolicy
is a bool.

Catversion bump is required because the numeric values of enum
LockingClauseStrength can appear in on-disk rules.

Extracted from a much larger patch to support foreign table inheritance;
it seemed worth breaking this out, since it's a separable concern.

Shigeru Hanada and Etsuro Fujita, somewhat modified by me
2015-03-15 18:41:47 -04:00
Tom Lane
9fac5fd741 Move LockClauseStrength, LockWaitPolicy into new file nodes/lockoptions.h.
Commit df630b0dd5 moved enum LockWaitPolicy
into its very own header file utils/lockwaitpolicy.h, which does not seem
like a great idea from here.  First, it's still a node-related declaration,
and second, a file named like that can never sensibly be used for anything
else.  I do not think we want to encourage a one-typedef-per-header-file
approach.  The upcoming foreign table inheritance patch was doubling down
on this bad idea by moving enum LockClauseStrength into its *own*
can-never-be-used-for-anything-else file.  Instead, let's put them both in
a file named nodes/lockoptions.h.  (They do seem to need a separate header
file because we need them in both parsenodes.h and plannodes.h, and we
don't want either of those including the other.  Past practice might
suggest adding them to nodes/nodes.h, but they don't seem sufficiently
globally useful to justify that.)

Committed separately since there's no functional change here, just some
header-file refactoring.
2015-03-15 15:19:04 -04:00
Andres Freund
4f1b890b13 Merge the various forms of transaction commit & abort records.
Since 465883b0a two versions of commit records have existed. A compact
version that was used when no cache invalidations, smgr unlinks and
similar were needed, and a full version that could deal with all
that. Additionally the full version was embedded into twophase commit
records.

That resulted in a measurable reduction in the size of the logged WAL in
some workloads. But more recently additions like logical decoding, which
e.g. needs information about the database something was executed on,
made it applicable in fewer situations. The static split generally made
it hard to expand the commit record, because concerns over the size made
it hard to add anything to the compact version.

Additionally it's not particularly pretty to have twophase.c insert
RM_XACT records.

Rejigger things so that the commit and abort records only have one form
each, including the twophase equivalents. The presence of the various
optional (in the sense of not being in every record) pieces is indicated
by bits in the 'xinfo' flag.  That flag previously was not included in
compact commit records. To prevent an increase in size due to its
presence, it's only included if necessary; signalled by a bit in the
xl_info bits available for xact.c, similar to heapam.c's
XLOG_HEAP_OPMASK/XLOG_HEAP_INIT_PAGE.

Twophase commit/aborts are now the same as their normal
counterparts. The original transaction's xid is included in an optional
data field.

This means that commit records generally are smaller, except in the case
of a transaction with subtransactions, but no other special cases; the
increase there is four bytes, which seems acceptable given that the more
common case of not having subtransactions shrank.  The savings are
especially measurable for twophase commits, which previously always used
the full version, but in practice only infrequently required it.

The motivation for this work is not the space savings and
deduplication though; it's that it makes it easier to extend commit
records with additional information. That's just a few lines of code
now; without impacting the common case where that information is not
needed.

Discussion: 20150220152150.GD4149@awork2.anarazel.de,
    235610.92468.qm%40web29004.mail.ird.yahoo.com

Reviewed-By: Heikki Linnakangas, Simon Riggs
2015-03-15 17:37:07 +01:00
Andres Freund
a0f5954af1 Increase max_wal_size's default from 128MB to 1GB.
The introduction of min_wal_size & max_wal_size in 88e9823026 makes it
feasible to increase the default upper bound in checkpoint
size. Previously, raising the default would lead to an increased disk
footprint, even if more segments weren't beneficial.  The low default
checkpoint size is one of the most common performance problems users have,
so increasing the default makes sense.  Setups where the increase in
maximum disk usage is a problem will very likely have to run with a
modified configuration anyway.

Discussion: 54F4EFB8.40202@agliodbs.com,
    CA+TgmoZEAgX5oMGJOHVj8L7XOkAe05Gnf45rP40m-K3FhZRVKg@mail.gmail.com

Author: Josh Berkus, after a discussion involving lots of people.
2015-03-15 17:37:07 +01:00
Andres Freund
241f088f36 Adjust valgrind suppressions wrt 025c02420. 2015-03-15 17:37:07 +01:00
Andres Freund
51c11a7025 Remove pause_at_recovery_target recovery.conf setting.
The new recovery_target_action (introduced in aedccb1f6/b8e33a85d4)
replaces its functionality.  Keeping both seems likely to cause more
confusion than it would spare, given the incompatibility.

Discussion: 5484FC53.2060903@2ndquadrant.com
Author: Petr Jelinek
2015-03-15 17:37:07 +01:00
Fujii Masao
cd6c45cbee Suppress maybe-uninitialized compiler warnings.
Previously some compilers thought that the variables added in 57aa5b2
might be uninitialized.

Spotted by Andres Freund
2015-03-15 10:40:43 +09:00
Tom Lane
5ff683962e Remove obsolete comment.
Obsoleted by commit 21dcda2713, but I missed
seeing the cross-reference in the comments for exec_eval_integer().

Also improve the cross-reference in the comments for exec_eval_cleanup().
2015-03-14 17:07:11 -04:00
Tom Lane
91f4a5a976 Build src/port/dirmod.c only on Windows.
Since commit ba7c5975ad, port/dirmod.c
has contained only Windows-specific functions.  Most platforms don't
seem to mind uselessly building an empty file, but OS X for one issues
warnings.  Hence, treat dirmod.c as a Windows-specific file selected
by configure rather than one that's always built.  We can revert this
change if dirmod.c ever gains any non-Windows functionality again.

Back-patch to 9.4 where the mentioned commit appeared.
2015-03-14 14:08:45 -04:00
Tom Lane
df9ebf1eea Remove workaround for ancient incompatibility between readline and libedit.
GNU readline defines the return value of write_history() as "zero if OK,
else an errno code".  libedit's version of that function used to have a
different definition (to wit, "-1 if error, else the number of lines
written to the file").  We tried to work around that by checking whether
errno had become nonzero, but this method has never been kosher according
to the published API of either library.  It's reportedly completely broken
in recent Ubuntu releases: psql bleats about "No such file or directory"
when saving ~/.psql_history, even though the write worked fine.

However, libedit has been following the readline definition since somewhere
around 2006, so it seems all right to finally break compatibility with
ancient libedit releases and trust that the return value is what readline
specifies.  (I'm not sure when the various Linux distributions incorporated
this fix, but I did find that OS X has been shipping fixed versions since
10.5/Leopard.)

If anyone is still using such an ancient libedit, they will find that psql
complains it can't write ~/.psql_history at exit, even when the file was
written correctly.  This is no worse than the behavior we're fixing for
current releases.

Back-patch to all supported branches.
2015-03-14 13:43:00 -04:00
Tatsuo Ishii
364c006c1f Fix integer overflow in debug message of walreceiver
The message tries to report the replication apply delay, which fails if
the first WAL record has not been applied yet.  The fix is to show "N/A",
indicating that the delay data is not yet available, instead of an
overflowed negative number.  Problem reported by me and patch by
Fabrízio de Royes Mello.

Back patched to 9.4, 9.3 and 9.2 stable branches (9.1 and 9.0 do not
have the debug message).
2015-03-14 08:16:50 +09:00
Tom Lane
443fd0540e Ensure tableoid reads correctly in EvalPlanQual-manufactured tuples.
The ROW_MARK_COPY path in EvalPlanQualFetchRowMarks() was just setting
tableoid to InvalidOid, I think on the assumption that the referenced
RTE must be a subquery or other case without a meaningful OID.  However,
foreign tables also use this code path, and they do have meaningful
table OIDs; so failure to set the tuple field can lead to user-visible
misbehavior.  Fix that by fetching the appropriate OID from the range
table.

There's still an issue about whether CTID can ever have a meaningful
value in this case; at least with postgres_fdw foreign tables, it does.
But that is a different problem that seems to require a significantly
different patch --- it's debatable whether postgres_fdw really wants to
use this code path at all.

Simplified version of a patch by Etsuro Fujita, who also noted the
problem to begin with.  The issue can be demonstrated in all versions
having FDWs, so back-patch to 9.1.
2015-03-12 13:39:09 -04:00
Heikki Linnakangas
26d2c5dc8d Fix memory leaks in GIN index vacuum.
Per bug #12850 by Walter Nordmann. Backpatch to 9.4 where the leak was
introduced.
2015-03-12 15:34:32 +01:00
Tom Lane
f4abd0241d Support flattening of empty-FROM subqueries and one-row VALUES tables.
We can't handle this in the general case due to limitations of the
planner's data representations; but we can allow it in many useful cases,
by being careful to flatten only when we are pulling a single-row subquery
up into a FROM (or, equivalently, inner JOIN) node that will still have at
least one remaining relation child.  Per discussion of an example from
Kyotaro Horiguchi.
2015-03-11 23:18:03 -04:00
Tom Lane
b746d0c32d Fix old bug in get_loop_count().
While poking at David Kubečka's issue I noticed an ancient logic error
in get_loop_count(): it used 1.0 as a "no data yet" indicator, but since
that is actually a valid rowcount estimate, this doesn't work.  If we
have one input relation with 1.0 as rowcount and then another one with
a larger rowcount, we should use 1.0 as the result, but we picked the
larger rowcount instead.  (I think when I coded this, I recognized the
conflict, but mistakenly thought that the logic would pick the desired
count anyway.)

Fixing this changed the plan for one existing regression test case.
Since the point of that test is to exercise creation of a particular
shape of nestloop plan, I tweaked the query a little bit so it still
results in the same plan choice.

This is definitely a bug, but I'm hesitant to back-patch since it might
change plan choices unexpectedly, and anyway failure to implement a
heuristic precisely as intended is a pretty low-grade bug.
2015-03-11 22:53:32 -04:00
Tom Lane
b55722692b Improve planner's cost estimation in the presence of semijoins.
If we have a semijoin, say
	SELECT * FROM x WHERE x1 IN (SELECT y1 FROM y)
and we're estimating the cost of a parameterized indexscan on x, the number
of repetitions of the indexscan should not be taken as the size of y; it'll
really only be the number of distinct values of y1, because the only valid
plan with y on the outside of a nestloop would require y to be unique-ified
before joining it to x.  Most of the time this doesn't make that much
difference, but sometimes it can lead to drastically underestimating the
cost of the indexscan and hence choosing a bad plan, as pointed out by
David Kubečka.

Fixing this is a bit difficult because parameterized indexscans are costed
out quite early in the planning process, before we have the information
that would be needed to call estimate_num_groups() and thereby estimate the
number of distinct values of the join column(s).  However we can move the
code that extracts a semijoin RHS's unique-ification columns, so that it's
done in initsplan.c rather than on-the-fly in create_unique_path().  That
shouldn't make any difference speed-wise and it's really a bit cleaner too.

The other bit of information we need is the size of the semijoin RHS,
which is easy if it's a single relation (we make those estimates before
considering indexscan costs) but problematic if it's a join relation.
The solution adopted here is just to use the product of the sizes of the
join component rels.  That will generally be an overestimate, but since
estimate_num_groups() only uses this input as a clamp, an overestimate
shouldn't hurt us too badly.  In any case we don't allow this new logic
to produce a value larger than we would have chosen before, so that at
worst an overestimate leaves us no wiser than we were before.
2015-03-11 21:21:00 -04:00
Peter Eisentraut
ff2faeec5c PL/Python: Fix regression tests for Python 3 2015-03-11 18:30:56 -04:00
Alvaro Herrera
4464303405 Support default ACLs in get_object_address
In the spirit of 890192e99a, this time add support for the things
living in the pg_default_acl catalog.  These are not really "objects",
but they show up as such in event triggers.

There is no "DROP DEFAULT PRIVILEGES" or similar command, so it doesn't
look like the new representation given would be useful anywhere else, so
I didn't try to use it outside objectaddress.c.  (That might be a bug in
itself, but that would be material for another commit.)

Reviewed by Stephen Frost.
2015-03-11 19:23:47 -03:00
Alvaro Herrera
d4d7777548 Fix libpq test expected output file
Evidently, this test is not run very frequently ...
2015-03-11 17:04:27 -03:00
Alvaro Herrera
890192e99a Support user mappings in get_object_address
Since commit 72dd233d3e we were trying to obtain object addressing
information in sql_drop event triggers, but that caused failures when
the drops involved user mappings.  This addition enables that to work
again.  Naturally, pg_get_object_address can work with these objects
now, too.

I toyed with the idea of removing DropUserMappingStmt as a node and
using DropStmt instead in the DropUserMappingStmt grammar production,
but that didn't go very well: for one thing the messages thrown by the
specific code are specialized (you get "server not found" if you specify
the wrong server, instead of a generic "user mapping for ... not found"
which you'd get if we were to merge this with RemoveObjects --- unless
we added even more special cases).  For another thing, it would require
to pass RoleSpec nodes through the objname/objargs representation used
by RemoveObjects, which works in isolation, but gets messy when
pg_get_object_address is involved.  So I dropped this part for now.

Reviewed by Stephen Frost.
2015-03-11 17:04:27 -03:00
Peter Eisentraut
1ce7a57ca6 PL/Python: Avoid lossiness in float conversion
PL/Python uses str() to convert Python values back to PostgreSQL, but
str() is lossy for float values, so use repr() instead in that case.

Author: Marko Kreen <markokr@gmail.com>
2015-03-11 15:46:06 -04:00
Robert Haas
bc93ac12c2 Require non-NULL pstate for all addRangeTableEntryFor* functions.
Per discussion, it's better to have a consistent coding rule here.

Michael Paquier, per a note from Greg Stark referencing an old post
from Tom Lane.
2015-03-11 15:26:43 -04:00
Tom Lane
c6b3c939b7 Make operator precedence follow the SQL standard more closely.
While the SQL standard is pretty vague on the overall topic of operator
precedence (because it never presents a unified BNF for all expressions),
it does seem reasonable to conclude from the spec for <boolean value
expression> that OR has the lowest precedence, then AND, then NOT, then IS
tests, then the six standard comparison operators, then everything else
(since any non-boolean operator in a WHERE clause would need to be an
argument of one of these).

We were only sort of on board with that: most notably, while "<" ">" and
"=" had properly low precedence, "<=" ">=" and "<>" were treated as generic
operators and so had significantly higher precedence.  And "IS" tests were
even higher precedence than those, which is very clearly wrong per spec.

Another problem was that "foo NOT SOMETHING bar" constructs, such as
"x NOT LIKE y", were treated inconsistently because of a bison
implementation artifact: they had the documented precedence with respect
to operators to their right, but behaved like NOT (i.e., very low priority)
with respect to operators to their left.

Fixing the precedence issues is just a small matter of rearranging the
precedence declarations in gram.y, except for the NOT problem, which
requires adding an additional lookahead case in base_yylex() so that we
can attach a different token precedence to NOT LIKE and allied two-word
operators.

The bulk of this patch is not the bug fix per se, but adding logic to
parse_expr.c to allow giving warnings if an expression has changed meaning
because of these precedence changes.  These warnings are off by default
and are enabled by the new GUC operator_precedence_warning.  It's believed
that very few applications will be affected by these changes, but it was
agreed that a warning mechanism is essential to help debug any that are.
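
A hedged illustration of the new GUC (the sample expression is an assumed
case whose parse changes under the corrected precedence):

 SET operator_precedence_warning = on;
 SELECT 1 <= 2 IS TRUE;   -- now parsed as (1 <= 2) IS TRUE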
2015-03-11 13:22:52 -04:00
Tom Lane
21dcda2713 Allocate ParamListInfo once per plpgsql function, not once per expression.
setup_param_list() was allocating a fresh ParamListInfo for each query or
expression evaluation requested by a plpgsql function.  There was probably
once good reason to do it like that, but for a long time we've had a
convention that there's a one-to-one mapping between the function's
PLpgSQL_datum array and the ParamListInfo slots, which means that a single
ParamListInfo can serve all the function's evaluation requests: the data
that would need to be passed is the same anyway.

In this patch, we retain the pattern of zeroing out the ParamListInfo
contents during each setup_param_list() call, because some of the slots may
be stale and we don't know exactly which ones.  So this patch only saves a
palloc/pfree per evaluation cycle and nothing more; still, that seems to be
good for a couple percent overall speedup on simple-arithmetic type
statements.  In future, though, we might be able to improve matters still
more by managing the param array contents more carefully.

Also, unify the former use of estate->cur_expr with that of
paramLI->parserSetupArg; they both were used to point to the active
expression, so we can combine the variables into just one.
2015-03-11 12:40:43 -04:00
Robert Haas
e529cd4ffa Suggest to the user the column they may have meant to reference.
Error messages informing the user that no such column exists can
sometimes provoke a perplexed response.  This often happens due to
a subtle typo in the column name or, perhaps less likely, in the
alias name.  To speed discovery of what the real issue is in such
cases, we'll now search the range table for approximate matches.
If there are one or two such matches that are good enough to think
that they might be what the user intended to type, and better than
all other approximate matches, we'll issue a hint suggesting that
the user might have intended to reference those columns.

Peter Geoghegan and Robert Haas
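
Illustrative session (the hint wording is approximate):

 SELECT relnmae FROM pg_class;
 ERROR:  column "relnmae" does not exist
 HINT:  Perhaps you meant to reference the column "pg_class.relname".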
2015-03-11 10:44:04 -04:00