Commit Graph

24864 Commits

Author SHA1 Message Date
Tom Lane 8b49a6044d Cache catalog lookup data across groups in ordered-set aggregates.
The initial commit of ordered-set aggregates just did all the setup work
afresh each time the aggregate function is started up.  But in a GROUP BY
query, the catalog lookups need not be repeated for each group, since the
column datatypes and sort information won't change.  When there are many
small groups, this makes for a useful, though not huge, performance
improvement.  Per suggestion from Andrew Gierth.

Profiling of these cases suggests that it might be profitable to avoid
duplicate lookups within tuplesort startup as well; but changing the
tuplesort APIs would have much broader impact, so I left that for
another day.
2014-01-05 12:28:39 -05:00
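For illustration, a hypothetical grouped query of the kind this change speeds up (table and column names invented; percentile_disc is one of the ordered-set aggregates introduced in commit 8d65da1f01 below):

    -- With many small groups, the catalog lookups for the sort column's
    -- datatype and sort information are now done once, not once per group.
    SELECT region,
           percentile_disc(0.5) WITHIN GROUP (ORDER BY latency_ms) AS median_latency
    FROM measurements
    GROUP BY region;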
Tom Lane 92459e7a7f Fix translatability markings in psql, and add defenses against future bugs.
Several previous commits have added columns to various \d queries without
updating their translate_columns[] arrays, leading to potentially incorrect
translations in NLS-enabled builds.  Offenders include commit 893686762
(added prosecdef to \df+), c9ac00e6e (added description to \dc+) and
3b17efdfd (added description to \dC+).  Fix those cases back to 9.3 or
9.2 as appropriate.

Since this is evidently more easily missed than one would like, in HEAD
also add an Assert that the supplied array is long enough.  This requires
an API change for printQuery(), so it seems inappropriate for back
branches, but presumably all future changes will be tested in HEAD anyway.

In HEAD and 9.3, also clean up a whole lot of sloppiness in the emitted
SQL for \dy (event triggers): lack of translatability due to failing to
pass words-to-be-translated through gettext_noop(), inadequate schema
qualification, and sloppy formatting resulting in unnecessarily ugly
-E output.

Peter Eisentraut and Tom Lane, per bug #8702 from Sergey Burladyan
2014-01-04 16:05:16 -05:00
Tom Lane 5858cf8ab2 Fix header comment for bitncmp().
The result is an int less than, equal to, or greater than zero, in the
style of memcmp (and, in fact, exactly the output of memcmp in some cases).
This comment previously said -1, 1, or 0, which was an overspecification,
as noted by Emre Hasegeli.  All of the existing callers appear to be fine
with the actual behavior, so just fix the comment.

In passing, improve infelicitous formatting of some call sites.
2014-01-04 14:01:51 -05:00
Alvaro Herrera 1a3e82a7f9 Restore some comments lost during 15732b34e8
Michael Paquier
2014-01-03 13:22:03 -03:00
Tom Lane a3b4aeecfe Ooops, should use double not single quotes in StaticAssertStmt().
That's what I get for testing this on an older compiler.
2014-01-02 21:54:20 -05:00
Tom Lane a7ef273e1c Fix calculation of maximum statistics-message size.
The PGSTAT_NUM_TABENTRIES macro should have been updated when new fields
were added to struct PgStat_MsgTabstat in commit 644828908, but it wasn't.
Fix that.

Also, add a static assertion that we didn't overrun the intended size limit
on stats messages.  This will not necessarily catch every mistake in
computing the maximum array size for stats messages, but it will catch ones
that have practical consequences.  (The assertion in fact doesn't complain
about the aforementioned error in PGSTAT_NUM_TABENTRIES, because that was
not big enough to cause the array length to increase.)

No back-patch, as there's no actual bug in existing releases; this is just
in the nature of future-proofing.

Mark Dilger and Tom Lane
2014-01-02 21:45:51 -05:00
Alvaro Herrera 638cf09e76 Handle 5-char filenames in SlruScanDirectory
Original users of slru.c were all producing 4-digit filenames, so that
was all that that code was prepared to handle.  Changes to multixact.c
in the course of commit 0ac5ad5134 made pg_multixact/members create
5-digit filenames once a certain threshold was reached, which
SlruScanDirectory wasn't prepared to deal with; in particular,
5-digit-name files were not removed during truncation.  Change that
routine to make it aware of those files, and have it process them just
like any others.

Right now, some pg_multixact/members directories will contain a mixture
of 4-char and 5-char filenames.  A future commit is expected to fix things
so that each slru.c user declares the correct maximum width for the
files it produces, to avoid such unsightly mixtures.

Noticed while investigating bug #8673 reported by Serge Negodyuck.
2014-01-02 18:17:29 -03:00
Alvaro Herrera a50d976254 Wrap multixact/members correctly during extension
In the 9.2 code for extending multixact/members, the logic was very
simple because the number of entries in a members page was a proper
divisor of 2^32, and thus at 2^32 wraparound the logic for page switch
was identical to that at any other page boundary.  In commit 0ac5ad5134 I
failed to realize this and introduced code that was not able to go over
the 2^32 boundary.  Fix that by ensuring that when we reach the last
page of the last segment we correctly zero the initial page of the
initial segment, using correct uint32-wraparound-safe arithmetic.

Noticed while investigating bug #8673 reported by Serge Negodyuck, as
diagnosed by Andres Freund.
2014-01-02 18:17:07 -03:00
Alvaro Herrera 722acf51a0 Handle wraparound during truncation in multixact/members
In pg_multixact/members, relying on modulo-2^32 arithmetic for
wraparound handling doesn't work all that well.  Because we don't
explicitly track wraparound of the allocation counter for members, it
is possible that the "live" area exceeds 2^31 entries; trying to remove
SLRU segments that are "old" according to the original logic might lead
to removal of segments still in use.  To fix, have the truncation
routine use a tailored SlruScanDirectory callback that keeps track of
the live area in actual use; that way, when the live range exceeds 2^31
entries, the oldest segments still live will not be removed prematurely.

This new SlruScanDir callback needs to take care not to remove segments
that are "in the future": if new SLRU segments appear while the
truncation is ongoing, make sure we don't remove them.  This requires
examination of shared memory state to recheck for false positives, but
testing suggests that this doesn't cause a problem.  The original coding
didn't suffer from this pitfall because segments created when truncation
is running are never considered to be removable.

Per Andres Freund's investigation of bug #8673 reported by Serge
Negodyuck.
2014-01-02 18:16:54 -03:00
Robert Haas 3cff1879f8 Aggressively freeze tables when CLUSTER or VACUUM FULL rewrites them.
We haven't wanted to do this in the past on the grounds that in rare
cases the original xmin value will be needed for forensic purposes, but
commit 37484ad2aa removes that objection,
so now we can.

Per extensive discussion, among many people, on pgsql-hackers.
2014-01-02 15:15:51 -05:00
Robert Haas 4b351841fa Rename walLogHints to wal_log_hints for easier grepping.
Michael Paquier
2014-01-01 20:17:00 -05:00
Michael Meskes 7c957ec83e Do not use an empty hostname.
When trying to connect to a given database, libecpg should not try using an
empty hostname if no hostname was given.
2014-01-01 12:39:31 +01:00
Tom Lane c01bc51f8d Fix broken support for event triggers as extension members.
CREATE EVENT TRIGGER forgot to mark the event trigger as a member of its
extension, and pg_dump didn't pay any attention anyway when deciding
whether to dump the event trigger.  Per report from Moshe Jacobson.

Given the obvious lack of testing here, it's rather astonishing that
ALTER EXTENSION ADD/DROP EVENT TRIGGER work, but they seem to.
2013-12-30 14:00:02 -05:00
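For illustration, a sketch of the commands involved (extension, trigger, and function names invented):

    -- An event trigger created by an extension's script is now recorded as a
    -- member of that extension, and pg_dump honors that membership.
    CREATE EVENT TRIGGER log_ddl ON ddl_command_start
        EXECUTE PROCEDURE log_ddl_command();

    -- Explicit membership management, which also works:
    ALTER EXTENSION my_audit ADD EVENT TRIGGER log_ddl;
    ALTER EXTENSION my_audit DROP EVENT TRIGGER log_ddl;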
Tom Lane f7fbf4b0be Remove dead code now that orindxpath.c is history.
We don't need make_restrictinfo_from_bitmapqual() anymore at all.
generate_bitmap_or_paths() doesn't need to be exported, and we can
drop its rather klugy restriction_only flag.
2013-12-30 12:50:31 -05:00
Tom Lane f343a880d5 Extract restriction OR clauses whether or not they are indexable.
It's possible to extract a restriction OR clause from a join clause that
has the form of an OR-of-ANDs, if each sub-AND includes a clause that
mentions only one specific relation.  While PG has been aware of that idea
for many years, the code previously only did it if it could extract an
indexable OR clause.  On reflection, though, that seems a silly limitation:
adding a restriction clause can be a win by reducing the number of rows
that have to be filtered at the join step, even if we have to test the
clause as a plain filter clause during the scan.  This should be especially
useful for foreign tables, where the change can cut the number of rows that
have to be retrieved from the foreign server; but testing shows it can win
even on local tables.  Per a suggestion from Robert Haas.

As a heuristic, I made the code accept an extracted restriction clause
if its estimated selectivity is less than 0.9, which will probably result
in accepting extracted clauses just about always.  We might need to tweak
that later based on experience.

Since the code no longer has even a weak connection to Path creation,
remove orindxpath.c and create a new file optimizer/util/orclauses.c.

There's some additional janitorial cleanup of now-dead code that needs
to happen, but it seems like that's a fit subject for a separate commit.
2013-12-30 12:24:37 -05:00
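A hypothetical join of the OR-of-ANDs shape this change targets (tables and columns invented):

    -- Each OR arm contains a clause mentioning only table a, so the planner
    -- can now derive the restriction (a.region = 'EU' OR a.region = 'US') and
    -- apply it while scanning a, even though no index makes the OR indexable.
    SELECT *
    FROM a
    JOIN b ON a.id = b.a_id
    WHERE (a.region = 'EU' AND b.priority = 1)
       OR (a.region = 'US' AND b.priority = 2);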
Kevin Grittner 47f50262e7 Don't attempt to limit target database for pg_restore.
There was an apparent attempt to limit the target database for
pg_restore to version 7.1.0 or later.  Due to a leading zero this
was interpreted as an octal number, which allowed targets with
version numbers down to 2.87.36.  The lowest actual release above
that was 6.0.0, so that was effectively the limit.

Since the success of the restore attempt will depend primarily on
what statements were generated by the dump run, we don't want
pg_restore trying to guess whether a given target should be allowed
based on version number.  Allow a connection to any version.  Since
it is very unlikely that anyone would be using a recent version of
pg_restore to restore to a pre-6.0 database, this has little to no
practical impact, but it makes the code less confusing to read.

Issue reported and initial patch suggestion from Joel Jacobson
based on an article by Andrey Karpov reporting on issues found by
PVS-Studio static code analyzer.  Final patch based on analysis by
Tom Lane.  Back-patch to all supported branches.
2013-12-29 15:17:52 -06:00
Tom Lane ed011d9754 Undo autoconf 2.69's attempt to #define _DARWIN_USE_64_BIT_INODE.
Defining this symbol causes OS X 10.5 to use a buggy version of readdir(),
which can sometimes fail with EINVAL if the previously-fetched directory
entry has been deleted or renamed.  In later OS X versions that bug has
been repaired, but we still don't need the #define because it's on by
default.  So this is just an all-around bad idea, and we can do without it.
2013-12-29 12:57:56 -05:00
Peter Eisentraut 71812a98cb Update grammar
From: Etsuro Fujita <fujita.etsuro@lab.ntt.co.jp>
2013-12-28 20:54:23 -05:00
Peter Eisentraut b986270bd4 Fix whitespace 2013-12-27 19:51:49 -05:00
Andrew Dunstan 29dcf7ded5 Properly detect invalid JSON numbers when generating JSON.
Instead of looking for characters that aren't valid in JSON numbers, we
simply pass the output string through the JSON number parser, and if it
fails the string is quoted. This means among other things that money and
domains over money will be quoted correctly and generate valid JSON.

Fixes bug #8676 reported by Anderson Cristian da Silva.

Backpatched to 9.2 where JSON generation was introduced.
2013-12-27 17:04:00 -05:00
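A small sketch of the behavior described (the exact money formatting depends on lc_monetary; output shown is only indicative):

    -- money values are not valid JSON numbers, so they are now emitted as
    -- quoted strings instead of bare tokens that would break JSON parsers.
    SELECT row_to_json(t)
    FROM (SELECT 12.34::money AS price, 12.34::numeric AS amount) AS t;
    -- e.g. {"price":"$12.34","amount":12.34}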
Kevin Grittner a133bf7031 Fix misplaced right paren bugs in pgstatfuncs.c.
The bug would only show up if the C sockaddr structure contained
zero in the first byte for a valid address; otherwise it would
fail to fail, which is probably why it went unnoticed for so long.

Patch submitted by Joel Jacobson after seeing an article by Andrey
Karpov in which he reports finding this through static code
analysis using PVS-Studio.  While I was at it I moved a definition
of a local variable referenced in the buggy code to a more local
context.

Backpatch to all supported branches.
2013-12-27 15:26:24 -06:00
Peter Eisentraut a09e3fd776 Fix whitespace 2013-12-26 23:51:56 -05:00
Tom Lane 1def747db6 Fix inadequately-tested code path in tuplesort_skiptuples().
Per report from Jeff Davis.
2013-12-24 17:13:02 -05:00
Tom Lane 4eeda92d86 Fix ANALYZE failure on a column that's a domain over a range.
Most other range operations seem to work all right on domains,
but this one not so much, at least not since commit 918eee0c.
Per bug #8684 from Brett Neumeier.
2013-12-23 22:18:48 -05:00
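A minimal sketch of the now-working case (domain and table names invented):

    -- ANALYZE on a column whose type is a domain over a range type used to fail.
    CREATE DOMAIN session_window AS tstzrange;
    CREATE TABLE sessions (active_during session_window);
    INSERT INTO sessions
        VALUES (tstzrange(now() - interval '1 hour', now()));
    ANALYZE sessions;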
Robert Haas d43760b624 Revise documentation for new freezing method.
Commit 37484ad2aa invalidated a good
chunk of documentation, so patch it up to reflect the new state of
play.  Along the way, patch remaining documentation references to
FrozenXID to say instead FrozenTransactionId, so that they match the
way we actually spell it in the code.
2013-12-23 20:36:31 -05:00
Tom Lane cf63c641ca Fix portability issue in ordered-set patch.
Overly compact coding in makeOrderedSetArgs() led to a platform dependency:
if the compiler chose to execute the subexpressions in the wrong order,
list_length() might get applied to an already-modified List, giving a
value we didn't want.  Per buildfarm.
2013-12-23 20:24:07 -05:00
Tom Lane 8d65da1f01 Support ordered-set (WITHIN GROUP) aggregates.
This patch introduces generic support for ordered-set and hypothetical-set
aggregate functions, as well as implementations of the instances defined in
SQL:2008 (percentile_cont(), percentile_disc(), rank(), dense_rank(),
percent_rank(), cume_dist()).  We also added mode() though it is not in the
spec, as well as versions of percentile_cont() and percentile_disc() that
can compute multiple percentile values in one pass over the data.

Unlike the original submission, this patch puts full control of the sorting
process in the hands of the aggregate's support functions.  To allow the
support functions to find out how they're supposed to sort, a new API
function AggGetAggref() is added to nodeAgg.c.  This allows retrieval of
the aggregate call's Aggref node, which may have other uses beyond the
immediate need.  There is also support for ordered-set aggregates to
install cleanup callback functions, so that they can be sure that
infrastructure such as tuplesort objects gets cleaned up.

In passing, make some fixes in the recently-added support for variadic
aggregates, and make some editorial adjustments in the recent FILTER
additions for aggregates.  Also, simplify use of IsBinaryCoercible() by
allowing it to succeed whenever the target type is ANY or ANYELEMENT.
It was inconsistent that it dealt with other polymorphic target types
but not these.

Atri Sharma and Andrew Gierth; reviewed by Pavel Stehule and Vik Fearing,
and rather heavily editorialized upon by Tom Lane
2013-12-23 16:11:35 -05:00
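Examples of the new syntax (table and column names invented):

    -- Ordered-set aggregates; percentile_cont also accepts an array of
    -- fractions and computes all of them in a single pass over the data.
    SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY income) AS median,
           percentile_cont(ARRAY[0.25, 0.75]) WITHIN GROUP (ORDER BY income) AS quartiles,
           mode() WITHIN GROUP (ORDER BY household_size) AS typical_size
    FROM households;

    -- Hypothetical-set aggregate: where would an income of 42000 rank?
    SELECT rank(42000) WITHIN GROUP (ORDER BY income) FROM households;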
Robert Haas 37484ad2aa Change the way we mark tuples as frozen.
Instead of changing the tuple xmin to FrozenTransactionId, the combination
of HEAP_XMIN_COMMITTED and HEAP_XMIN_INVALID, which were previously never
set together, is now defined as HEAP_XMIN_FROZEN.  A variety of previous
proposals to freeze tuples opportunistically before vacuum_freeze_min_age
is reached have foundered on the objection that replacing xmin by
FrozenTransactionId might hinder debugging efforts when things in this
area go awry; this patch is intended to solve that problem by keeping
the XID around (but largely ignoring the value to which it is set).

Third-party code that checks for HEAP_XMIN_INVALID on tuples where
HEAP_XMIN_COMMITTED might be set will be broken by this change.  To fix,
use the new accessor macros in htup_details.h rather than consulting the
bits directly.  HeapTupleHeaderGetXmin has been modified to return
FrozenTransactionId when the infomask bits indicate that the tuple is
frozen; use HeapTupleHeaderGetRawXmin when you already know that the
tuple isn't marked committed or frozen, or want the raw value anyway.
We currently do this in routines that display the xmin for user consumption,
in tqual.c where it's known to be safe and important for the avoidance of
extra cycles, and in the function-caching code for various procedural
languages, which shouldn't invalidate the cache just because the tuple
gets frozen.

Robert Haas and Andres Freund
2013-12-22 15:49:09 -05:00
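A sketch of the user-visible effect (table name invented; the displayed value 2 is FrozenTransactionId, which the commit says user-facing xmin display now reports for frozen tuples):

    CREATE TABLE freeze_demo AS SELECT 1 AS id;
    VACUUM FREEZE freeze_demo;
    -- The raw xmin stays on disk for forensics, but the xmin shown to users
    -- reads as FrozenTransactionId (2) once the tuple is marked frozen.
    SELECT xmin, id FROM freeze_demo;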
Fujii Masao 961bf59fb7 Rename wal_log_hintbits to wal_log_hints, per discussion on pgsql-hackers.
Sawada Masahiko
2013-12-21 03:33:16 +09:00
Alvaro Herrera 6130208e75 Avoid useless palloc during transaction commit
We can allocate the initial relations-to-drop array when first needed,
instead of at function entry; this avoids allocating it when the
function is not going to do anything, which is most of the time.

Backpatch to 9.3, where this behavior was introduced by commit
279628a0a7.

There's more that could be done here, such as possible reworking of the
code to avoid having to palloc anything, but that doesn't sound as
backpatchable as this relatively minor change.

Per complaint from Noah Misch in
20131031145234.GA621493@tornado.leadboat.com
2013-12-20 12:37:30 -03:00
Robert Haas c32afe53c2 pg_prewarm, a contrib module for prewarming relation data.
Patch by me.  Review by Álvaro Herrera, Amit Kapila, Jeff Janes,
Gurjeet Singh, and others.
2013-12-20 08:14:13 -05:00
Alvaro Herrera 6eda3e9c27 isolationtester: Ensure stderr is unbuffered, too 2013-12-19 22:09:30 -03:00
Bruce Momjian 527fdd9df1 Move pg_upgrade_support global variables to their own include file
Previously their declarations were spread around to avoid accidental
access.
2013-12-19 16:10:07 -05:00
Alvaro Herrera 73bcb76b77 Make stdout unbuffered
This ensures that all stdout output is flushed immediately, to match
stderr.  This eliminates the need for fflush(stdout) calls sprinkled all
over the place.

Per Daniel Wood in message 519A79C6.90308@salesforce.com
2013-12-19 17:26:27 -03:00
Alvaro Herrera 13aa624431 Optimize updating a row that's locked by same xid
Updating or locking a row that was already locked by the same
transaction under the same Xid caused a MultiXact to be created; but
this is unnecessary, because there's no benefit in being able to
distinguish two locks held by the same transaction.  In particular, if a
transaction executed SELECT FOR UPDATE followed by an UPDATE that didn't
modify columns of the key, we would dutifully represent the resulting
combination as a multixact -- even though a single key-update is
sufficient.

Optimize the case so that only the strongest of both locks/updates is
represented in Xmax.  This can save some Xmax's from becoming
MultiXacts, which can be a significant optimization.

This missed optimization opportunity was spotted by Andres Freund while
investigating a bug reported by Oliver Seemann in message
CANCipfpfzoYnOz5jj=UZ70_R=CwDHv36dqWSpwsi27vpm1z5sA@mail.gmail.com
and also directly as a performance regression reported by Dong Ye in
message
d54b8387.000012d8.00000010@YED-DEVD1.vmware.com
Reportedly, this patch fixes the performance regression.

Since the missing optimization was reported as a significant performance
regression from 9.2, backpatch to 9.3.

Andres Freund, tweaked by Álvaro Herrera
2013-12-19 16:53:49 -03:00
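The transaction pattern being optimized, sketched with an invented table:

    -- The row lock and the later non-key update come from the same Xid, so
    -- Xmax no longer has to become a MultiXact; only the strongest
    -- lock/update is kept.
    BEGIN;
    SELECT * FROM accounts WHERE id = 42 FOR UPDATE;
    UPDATE accounts SET balance = balance - 100 WHERE id = 42;  -- key columns unchanged
    COMMIT;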
Fujii Masao 084e385a2f Add tab completion for ALTER SYSTEM SET in psql. 2013-12-20 02:33:27 +09:00
Peter Eisentraut 94b899b829 Upgrade to Autoconf 2.69 2013-12-18 20:53:23 -05:00
Robert Haas 001a573a20 Allow on-detach callbacks for dynamic shared memory segments.
Just as backends must clean up their shared memory state (releasing
lwlocks, buffer pins, etc.) before exiting, they must also perform
any similar cleanups related to dynamic shared memory segments they
have mapped before unmapping those segments.  So add a mechanism to
ensure that.

Existing on_shmem_exit hooks include both "user level" cleanup such
as transaction abort and removal of leftover temporary relations and
also "low level" cleanup that forcibly released leftover shared
memory resources.  On-detach callbacks should run after the first
group but before the second group, so create a new before_shmem_exit
function for registering the early callbacks and keep on_shmem_exit
for the regular callbacks.  (An earlier draft of this patch added an
additional argument to on_shmem_exit, but that had a much larger
footprint and probably a substantially higher risk of breaking third
party code for no real gain.)

Patch by me, reviewed by KaiGai Kohei and Andres Freund.
2013-12-18 13:09:09 -05:00
Bruce Momjian 613c6d26bd Fix incorrect error message reported for non-existent users
Previously, lookups of non-existent user names could return "Success";
they will now return "User does not exist" by resetting errno.  This also
centralizes the user name lookup code in libpgport.

Report and analysis by Nicolas Marchildon;  patch by me
2013-12-18 12:16:21 -05:00
Alvaro Herrera 11ac4c73cb Don't ignore tuple locks propagated by our updates
If a tuple was locked by transaction A, and transaction B updated it,
the new version of the tuple created by B would be locked by A, yet
visible only to B; due to an oversight in HeapTupleSatisfiesUpdate, the
lock held by A wouldn't get checked if transaction B later deleted (or
key-updated) the new version of the tuple.  This might cause referential
integrity checks to give false positives (that is, allow deletes that
should have been rejected).

This is an easy oversight to have made, because prior to improved tuple
locks in commit 0ac5ad5134 it wasn't possible to have tuples created by
our own transaction that were also locked by remote transactions, and so
locks weren't even considered in that code path.

It is recommended that foreign keys be rechecked manually in bulk after
installing this update, in case some referenced rows are missing with
some referencing row remaining.

Per bug reported by Daniel Wood in
CAPweHKe5QQ1747X2c0tA=5zf4YnS2xcvGf13Opd-1Mq24rF1cQ@mail.gmail.com
2013-12-18 13:45:51 -03:00
Tatsuo Ishii 65d6e4cb5c Add ALTER SYSTEM command to edit the server configuration file.
Patch contributed by Amit Kapila. Reviewed by Hari Babu, Masao Fujii,
Boszormenyi Zoltan, Andres Freund, Greg Smith and others.
2013-12-18 23:42:44 +09:00
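A usage sketch (the parameter shown is just an arbitrary example):

    -- ALTER SYSTEM writes the value to a configuration file that is read on
    -- top of postgresql.conf; a reload applies non-restart parameters.
    ALTER SYSTEM SET log_min_duration_statement = '250ms';
    SELECT pg_reload_conf();
    SHOW log_min_duration_statement;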
Bruce Momjian dba5a9dda9 Comment: COPY comment improvement
Etsuro Fujita
2013-12-17 12:51:16 -05:00
Alvaro Herrera 3b97e6823b Rework tuple freezing protocol
Tuple freezing was broken in connection to MultiXactIds; commit
8e53ae025d tried to fix it, but didn't go far enough.  As noted by
Noah Misch, freezing a tuple whose Xmax is a multi containing an aborted
update might cause locks in the multi to go ignored by later
transactions.  This is because the code depended on a multixact above
the multixact cutoff point not having any lock-only member older than the cutoff
point for Xids, which is easily defeated in READ COMMITTED transactions.

The fix for this involves creating a new MultiXactId when necessary.
But this cannot be done during WAL replay, and moreover multixact
examination requires using CLOG access routines which are not supposed
to be used during WAL replay either; so tuple freezing cannot be done
with the old freeze WAL record.  Therefore, separate the freezing
computation from its execution, and change the WAL record to carry all
necessary information.  At WAL replay time, it's easy to re-execute
freezing because we don't need to re-compute the new infomask/Xmax
values but just take them from the WAL record.

While at it, restructure the coding to ensure all page changes occur in
a single critical section without much room for failures.  The previous
coding wasn't using a critical section, without any explanation as to
why this was acceptable.

In replication scenarios using the 9.3 branch, standby servers must be
upgraded before their master, so that they are prepared to deal with the
new WAL record once the master is upgraded; failure to do so will cause
WAL replay to die with a PANIC message.  Later upgrade of the standby
will allow the process to continue where it left off, so there's no
disruption of the data in the standby in any case.  Standbys know how to
deal with the old WAL record, so it's okay to keep the master running
the old code for a while.

In master, the old freeze WAL record is gone, for cleanliness' sake;
there's no compatibility concern there.

Backpatch to 9.3, where the original bug was introduced and where the
previous fix was backpatched.

Álvaro Herrera and Andres Freund
2013-12-16 11:29:50 -03:00
Heikki Linnakangas 30b96549ab Mark variables 'static' where possible. Move GinFuzzySearchLimit to ginget.c
Per "clang -Wmissing-variable-declarations" output, posted by Andres Freund.
I didn't silence all those warnings, though, only the most obvious cases.
2013-12-16 11:41:17 +02:00
Tatsuo Ishii 1f0626ee40 Add "SHIFT_JIS" as an accepted encoding name for locale checking.
When locale is "ja_JP.SJIS", nl_langinfo(CODESET) returns "SHIFT_JIS"
on some platforms, at least on RedHat Linux. So the encoding/locale
match table (encoding_match_list) needs the entry. Otherwise client
encoding is set to SQL_ASCII.

Back patch to all supported branches.
2013-12-15 11:09:05 +09:00
Tom Lane 1b4f7f93b4 Allow empty target list in SELECT.
This fixes a problem noted as a followup to bug #8648: if a query has a
semantically-empty target list, e.g. SELECT * FROM zero_column_table,
ruleutils.c will dump it as a syntactically-empty target list, which was
not allowed.  There doesn't seem to be any reliable way to fix this by
hacking ruleutils (note in particular that the originally zero-column table
might since have had columns added to it); and even if we had such a fix,
it would do nothing for existing dump files that might contain bad syntax.
The best bet seems to be to relax the syntactic restriction.

Also, add parse-analysis errors for SELECT DISTINCT with no columns (after
*-expansion) and RETURNING with no columns.  These cases previously
produced unexpected behavior because the parsed Query looked like it had
no DISTINCT or RETURNING clause, respectively.  If anyone ever offers
a plausible use-case for this, we could work a bit harder on making the
situation distinguishable.

Arguably this is a bug fix that should be back-patched, but I'm worried
that there may be client apps or PLs that expect "SELECT ;" to throw a
syntax error.  The issue doesn't seem important enough to risk changing
behavior in minor releases.
2013-12-14 20:23:26 -05:00
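The forms that are now accepted, per the description above:

    -- A zero-column table dumps as a SELECT with an empty target list,
    -- which is now legal syntax.
    CREATE TABLE zero_column_table ();
    SELECT * FROM zero_column_table;
    SELECT FROM zero_column_table;   -- accepted now
    SELECT;                          -- degenerate case, likewise accepted
    -- SELECT DISTINCT and RETURNING with no columns still raise errors,
    -- now reported at parse analysis.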
Tom Lane c03ad5602f Fix inherited UPDATE/DELETE with UNION ALL subqueries.
Fix an oversight in commit b3aaf9081a1a95c245fd605dcf02c91b3a5c3a29: we do
indeed need to process the planner's append_rel_list when copying RTE
subqueries, because if any of them were flattenable UNION ALL subqueries,
the append_rel_list shows which subquery RTEs were pulled up out of which
other ones.  Without this, UNION ALL subqueries aren't correctly inserted
into the update plans for inheritance child tables after the first one,
typically resulting in no update happening for those child table(s).
Per report from Victor Yegorov.

Experimentation with this case also exposed a fault in commit
a7b965382cf0cb30aeacb112572718045e6d4be7: if an inherited UPDATE/DELETE
was proven totally dummy by constraint exclusion, we might arrive at
add_rtes_to_flat_rtable with root->simple_rel_array being NULL.  This
should be interpreted as not having any RelOptInfos.  I chose to code
the guard as a check against simple_rel_array_size, so as to also
provide some protection against indexing off the end of the array.

Back-patch to 9.2 where the faulty code was added.
2013-12-14 17:33:53 -05:00
Alvaro Herrera 60eea3780c Fix typo 2013-12-13 17:27:16 -03:00
Alvaro Herrera d881dd6233 Rework MultiXactId cache code
The original performs too poorly; in some scenarios it shows up far too
prominently in profiles.  Try to make it a bit smarter to avoid excessive
costs.  In particular, give the cache a maximum size, and keep entries
sorted in LRU order; once the max size is reached, evict the oldest
entry to keep the cache from growing too large.

Per complaint from Andres Freund in connection with new tuple freezing
code.
2013-12-13 17:16:25 -03:00
Tom Lane 2efc6dc256 Add HOLD/RESUME_INTERRUPTS in HandleCatchupInterrupt/HandleNotifyInterrupt.
This prevents a possible longjmp out of the signal handler if a timeout
or SIGINT occurs while something within the handler has transiently set
ImmediateInterruptOK.  For safety we must hold off the timeout or cancel
error until we're back in mainline, or at least till we reach the end of
the signal handler when ImmediateInterruptOK was true at entry.  This
syncs these functions with the logic now present in handle_sig_alarm.

AFAICT there is no live bug here in 9.0 and up, because I don't think we
currently can wait for any heavyweight lock inside these functions, and
there is no other code (except read-from-client) that will turn on
ImmediateInterruptOK.  However, that was not true pre-9.0: in older
branches ProcessIncomingNotify might block trying to lock pg_listener, and
then a SIGINT could lead to undesirable control flow.  It might be all
right anyway given the relatively narrow code ranges in which NOTIFY
interrupts are enabled, but for safety's sake I'm back-patching this.
2013-12-13 14:05:51 -05:00
Heikki Linnakangas dde6282500 Fix more instances of "the the" in comments.
Plus one instance of "to to" in the docs.
2013-12-13 20:02:01 +02:00
Tom Lane e8312b4f03 Don't let timeout interrupts happen unless ImmediateInterruptOK is set.
Serious oversight in commit 16e1b7a1b7f7ffd8a18713e83c8cd72c9ce48e07:
we should not allow an interrupt to take control away from mainline code
except when ImmediateInterruptOK is set.  Just to be safe, let's adopt
the same save-clear-restore dance that's been used for many years in
HandleCatchupInterrupt and HandleNotifyInterrupt, so that nothing bad
happens if a timeout handler invokes code that tests or even manipulates
ImmediateInterruptOK.

Per report of "stuck spinlock" failures from Christophe Pettus, though
many other symptoms are possible.  Diagnosis by Andres Freund.
2013-12-13 11:50:15 -05:00
Heikki Linnakangas 50e547096c Add GUC to enable WAL-logging of hint bits, even with checksums disabled.
WAL records of hint bit updates are useful to tools that want to examine
which pages have been modified. In particular, this is required to make
the pg_rewind tool safe (without checksums).

This can also be used to test how much extra WAL-logging would occur if
you enabled checksums, without actually enabling them (which you can't
currently do without re-initdb'ing).

Sawada Masahiko, docs by Samrat Revagade. Reviewed by Dilip Kumar, with
further changes by me.
2013-12-13 16:26:14 +02:00
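Checking the new setting (a sketch; wal_log_hints is set in postgresql.conf and takes effect at server start):

    -- wal_log_hints = on        (in postgresql.conf)
    SELECT name, setting, context
    FROM pg_settings
    WHERE name = 'wal_log_hints';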
Heikki Linnakangas a49633d8dc Fix WAL-logging of setting the visibility map bit.
The operation that removes the remaining dead tuples from the page must
be WAL-logged before the setting of the VM bit. Otherwise, if you replay
the WAL to between those two records, you end up with the VM bit set, but
the dead tuples are still there.

Backpatch to 9.3, where this bug was introduced.
2013-12-13 14:15:04 +02:00
Tom Lane ccca6f56f5 Fix ancient docs/comments thinko: XID comparison is mod 2^32, not 2^31.
Pointed out by Gianni Ciolli.
2013-12-12 12:39:48 -05:00
Tom Lane f26099057a Improve EXPLAIN to print the grouping columns in Agg and Group nodes.
Per request from Kevin Grittner.
2013-12-12 11:24:38 -05:00
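A quick illustration (table invented; the exact plan text may vary):

    EXPLAIN SELECT region, count(*)
    FROM measurements
    GROUP BY region;
    -- The HashAggregate/GroupAggregate node now lists its grouping columns,
    -- e.g. a line such as:  Group Key: region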
Simon Riggs 8693559cac New autovacuum_work_mem parameter
If autovacuum_work_mem is set, autovacuum workers now use
this parameter in preference to maintenance_work_mem.

Peter Geoghegan
2013-12-12 11:42:39 +00:00
Simon Riggs 36da3cfb45 Allow time delayed standbys and recovery
Set min_recovery_apply_delay to force a delay in recovery apply for commit and
restore point WAL records. Other records are replayed immediately. Delay is
measured between WAL record time and local standby time.

Robert Haas, Fabrízio de Royes Mello and Simon Riggs
Detailed review by Mitsumasa Kondo
2013-12-12 10:53:20 +00:00
Heikki Linnakangas 108e3992cd Display old and new values in pg_resetxlog -n output.
For extra clarity.

Rajeev Rastogi, reviewed by Amit Kapila
2013-12-12 11:57:18 +02:00
Tom Lane 22310b808d Remove bogus executable permissions on xlog.c.
Apparently fat-fingered in 1a3d104475.
Noted by Peter Geoghegan.
2013-12-11 22:12:25 -05:00
Tom Lane 6bff0e7d92 Add a regression test case for plpython function returning setof RECORD.
We had coverage for functions returning setof a named composite type,
but not for anonymous records, which is a somewhat different code path.
In view of recent crash report from Sergey Konoplev, this seems worth
testing, though I doubt there's any deterministic bug here today.
2013-12-11 17:22:55 -05:00
Simon Riggs cf589c9c1f Regression tests for SCHEMA commands
Hari Babu Kommi reviewed by David Rowley
2013-12-11 20:45:15 +00:00
Simon Riggs b921a26fb8 Regression tests for ALTER TABLESPACE RENAME,OWNER
Hari Babu Kommi reviewed by David Rowley
2013-12-11 20:42:58 +00:00
Tom Lane b5e0a2a384 Tweak placement of explicit ANALYZE commands in the regression tests.
Make the COPY test, which loads most of the large static tables used in
the tests, also explicitly ANALYZE those tables.  This allows us to get
rid of various ad-hoc, and rather redundant, ANALYZE commands that had
gotten stuck into various test scripts over time to ensure we got
consistent plan choices.  (We could have done a database-wide ANALYZE,
but that would cause stats to get attached to the small static tables
too, which results in plan changes compared to the historical behavior.
I'm not sure that's a good idea, so not going that far for now.)

Back-patch to 9.0, since 9.0 and 9.1 are currently sometimes failing
regression tests for lack of an "ANALYZE tenk1" in the subselect test.
There's no need for this in 8.4 since we didn't print any plans back
then.
2013-12-11 15:09:15 -05:00
Robert Haas 60dd40bbda Under wal_level=logical, when saving old tuples, always save OID.
There's no real point in not doing this.  It doesn't cost anything
in performance or space.  So let's go wild.

Andres Freund, with substantial editing as to style by me.
2013-12-11 13:19:31 -05:00
Kevin Grittner 09df854b8a Add table name to VACUUM statement in matview.c.
The test only needs the one table to be vacuumed.  Vacuuming the
database may affect other tests.

Per gripe from Tom Lane.  Back-patch to 9.3, where the test was
added.
2013-12-11 08:53:03 -06:00
Peter Eisentraut e5dc4cc24d PL/Perl: Add event trigger support
From: Dimitri Fontaine <dimitri@2ndQuadrant.fr>
2013-12-11 08:11:59 -05:00
Robert Haas 6bea96dd49 Add a new option, -g, to createuser, to add membership in a role.
Christopher Browne, reviewed by Sameer Thakur, Amit Kapila, and
Peter Eisentraut.
2013-12-11 07:50:36 -05:00
Robert Haas 66abc2608c Add a new reloption, user_catalog_table.
When this reloption is set and wal_level=logical is configured,
we'll record the CIDs stamped by inserts, updates, and deletes to
the table just as we would for an actual catalog table.  This will
allow logical decoding to use historical MVCC snapshots to access
such tables just as they access ordinary catalog tables.

Replication solutions built around the logical decoding machinery
will likely need to set this reloption for their configuration
tables; it might also be needed by extensions which perform table
access in their output functions.

Andres Freund, reviewed by myself and others.
2013-12-10 19:17:34 -05:00
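Setting the new reloption, sketched on an invented table:

    -- With wal_level = logical, changes to this table also record CIDs, so
    -- logical decoding can read it under a historical MVCC snapshot.
    CREATE TABLE replication_state (
        node_name text PRIMARY KEY,
        last_applied_position text
    ) WITH (user_catalog_table = true);

    -- The reloption can also be changed later:
    ALTER TABLE replication_state SET (user_catalog_table = false);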
Robert Haas e55704d8b2 Add new wal_level, logical, sufficient for logical decoding.
When wal_level=logical, we'll log columns from the old tuple as
configured by the REPLICA IDENTITY facility added in commit
07cacba983.  This makes it possible for
a properly-configured logical replication solution to correctly
follow table updates even if they change the chosen key columns,
or, with REPLICA IDENTITY FULL, even if the table has no key at
all.  Note that updates which do not modify the replica identity
column won't log anything extra, making the choice of a good key
(i.e. one that will rarely be changed) important to performance
when wal_level=logical is configured.

Each insert, update, or delete to a catalog table will also log
the CMIN and/or CMAX values stamped by the current transaction.
This is necessary because logical decoding will require access to
historical snapshots of the catalog in order to decode some data
types, and the CMIN/CMAX values that we may need in order to judge
row visibility may have been overwritten by the time we need them.

Andres Freund, reviewed in various versions by myself, Heikki
Linnakangas, KONDO Mitsumasa, and many others.
2013-12-10 19:01:40 -05:00
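How the logged old-tuple columns are chosen, sketched on an invented table (assumes wal_level = logical in postgresql.conf):

    CREATE TABLE inventory (
        sku text PRIMARY KEY,
        qty integer
    );

    -- Log all old columns, so decoding can follow updates even when the key
    -- columns change or the table has no usable key:
    ALTER TABLE inventory REPLICA IDENTITY FULL;

    -- Revert to the default (the primary key, if any):
    ALTER TABLE inventory REPLICA IDENTITY DEFAULT;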
Tom Lane 9ec6199d18 Fix possible crash with nested SubLinks.
An expression such as WHERE (... x IN (SELECT ...) ...) IN (SELECT ...)
could produce an invalid plan that results in a crash at execution time,
if the planner attempts to flatten the outer IN into a semi-join.
This happens because convert_testexpr() was not expecting any nested
SubLinks and would wrongly replace any PARAM_SUBLINK Params belonging
to the inner SubLink.  (I think the comment denying that this case could
happen was wrong when written; it's certainly been wrong for quite a long
time, since very early versions of the semijoin flattening logic.)

Per report from Teodor Sigaev.  Back-patch to all supported branches.
2013-12-10 16:10:17 -05:00
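The shape of expression that could crash, sketched with invented tables (the outer IN compares the boolean result of the inner IN against a boolean column):

    -- The planner may flatten the outer IN into a semi-join; it no longer
    -- clobbers Params that belong to the nested inner SubLink.
    SELECT *
    FROM orders o
    WHERE (o.customer_id IN (SELECT id FROM vip_customers))
          IN (SELECT is_priority FROM shipping_rules);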
Noah Misch 53685d7981 Rename TABLE() to ROWS FROM().
SQL-standard TABLE() is a subset of UNNEST(); they deal with arrays and
other collection types.  This feature, however, deals with set-returning
functions.  Use a different syntax for this feature to keep open the
possibility of implementing the standard TABLE().
2013-12-10 09:34:37 -05:00
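The renamed syntax in use (built-in set-returning functions; alias and column names invented):

    -- ROWS FROM() runs several set-returning functions in parallel,
    -- padding the shorter result with NULLs.
    SELECT *
    FROM ROWS FROM (generate_series(1, 3),
                    unnest(ARRAY['a', 'b'])) AS t(n, letter);
    -- rows: (1,'a'), (2,'b'), (3,NULL)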
Robert Haas d9250da032 Fixups for dsm.c's file descriptor handling.
Per complaint from Tom Lane.
2013-12-09 11:15:19 -05:00
Peter Eisentraut 3164721462 SSL: Support ECDH key exchange
This sets up ECDH key exchange, when compiling against OpenSSL that
supports EC.  Then the ECDHE-RSA and ECDHE-ECDSA cipher suites can be
used for SSL connections.  The latter one means that EC keys are now
usable.

The reason for EC key exchange is that it's faster than DHE and it
allows going to higher security levels where RSA will be horribly slow.

There is also a new GUC option, ssl_ecdh_curve, that specifies the curve
name used for ECDH.  It defaults to "prime256v1", which is the most
common curve in use in HTTPS.

From: Marko Kreen <markokr@gmail.com>
Reviewed-by: Adrian Klaver <adrian.klaver@gmail.com>
2013-12-07 15:11:44 -05:00
Peter Eisentraut ef3267523d SSL: Add configuration option to prefer server cipher order
By default, OpenSSL (and SSL/TLS in general) lets the client cipher
order take priority.  This is OK for browsers where the ciphers were
tuned, but few PostgreSQL client libraries make the cipher order
configurable.  So it makes sense to have the cipher order in
postgresql.conf take priority over client defaults.

This patch adds the setting "ssl_prefer_server_ciphers" that can be
turned on so that server cipher order is preferred.  Per discussion,
this now defaults to on.

From: Marko Kreen <markokr@gmail.com>
Reviewed-by: Adrian Klaver <adrian.klaver@gmail.com>
2013-12-07 08:13:50 -05:00
Alvaro Herrera 312bde3d40 Fix improper abort during update chain locking
In 247c76a989, I added some code to do fine-grained checking of
MultiXact status of locking/updating transactions when traversing an
update chain.  There was a thinko in that patch which would have the
traversing abort, that is return HeapTupleUpdated, when the other
transaction is a committed lock-only.  In this case we should ignore it
and return success instead.  Of course, in the case where there is a
committed update, HeapTupleUpdated is the correct return value.

A user-visible symptom of this bug is that in REPEATABLE READ and
SERIALIZABLE transaction isolation modes spurious serializability errors
can occur:
  ERROR:  could not serialize access due to concurrent update

In order for this to happen, there needs to be a tuple that's key-share-
locked and also updated, and the update must abort; a subsequent
transaction trying to acquire a new lock on that tuple would abort with
the above error.  The reason is that the initial FOR KEY SHARE is seen
as committed by the new locking transaction, which triggers this bug.
(If the UPDATE commits, then the serialization error is correctly
reported.)

When running a query in READ COMMITTED mode, what happens is that the
locking is aborted by the HeapTupleUpdated return value, then
EvalPlanQual fetches the newest version of the tuple, which is then the
only version that gets locked.  (The second time the tuple is checked
there is no misbehavior on the committed lock-only locker, because it's not
checked by the code that traverses update chains; so no bug.) Only the
newest version of the tuple is locked, not older ones, but this is
harmless.

The isolation test added by this commit illustrates the desired
behavior, including the proper serialization errors that get thrown.

Backpatch to 9.3.
2013-12-05 17:47:51 -03:00
Tom Lane 74242c23c1 Clear retry flags properly in replacement OpenSSL sock_write function.
Current OpenSSL code includes a BIO_clear_retry_flags() step in the
sock_write() function.  Either we failed to copy the code correctly, or
they added this since we copied it.  In any case, lack of the clear step
appears to be the cause of the server lockup after connection loss reported
in bug #8647 from Valentine Gogichashvili.  Assume that this is correct
coding for all OpenSSL versions, and hence back-patch to all supported
branches.

Diagnosis and patch by Alexander Kukushkin.
2013-12-05 12:48:28 -05:00
Alvaro Herrera 07aeb1fec5 Avoid resetting Xmax when it's a multi with an aborted update
HeapTupleSatisfiesUpdate can very easily "forget" tuple locks while
checking the contents of a multixact and finding it contains an aborted
update, by setting the HEAP_XMAX_INVALID bit.  This would lead to
concurrent transactions not noticing any previous locks held by
transactions that might still be running, and thus being able to acquire
subsequent locks they wouldn't normally be able to acquire.

This bug was introduced in commit 1ce150b7bb; backpatch this fix to 9.3,
like that commit.

This change reverts the change to the delete-abort-savept isolation test
in 1ce150b7bb, because that behavior change was caused by this bug.

Noticed by Andres Freund while investigating a different issue reported
by Noah Misch.
2013-12-05 12:21:55 -03:00
Bruce Momjian 86ef4796f5 build: pass EXTRA_REGRESS_OPTS to secondary regression tests
Christoph Berg
2013-12-04 10:14:45 -05:00
Heikki Linnakangas 9e857436ef Don't include unused space in LOG_NEWPAGE records.
This is the same trick we use when taking a full page image of a buffer
passed to XLogInsert.
2013-12-04 00:10:47 +02:00
Heikki Linnakangas 22122c83f1 Fix full-page writes of internal GIN pages.
Insertion to a non-leaf GIN page didn't make a full-page image of the page,
which is wrong. The code used to do it correctly, but was changed (commit
853d1c3103) because the redo-routine didn't
track incomplete splits correctly when the page was restored from a full
page image. Of course, that was not the right way to fix it; the redo routine
should've been fixed instead. The redo-routine was surreptitiously fixed
in 2010 (commit 4016bdef8a), so all we need
to do now is revert the code that creates the record to its original form.

This doesn't change the format of the WAL record.

Backpatch to all supported versions.
2013-12-03 23:16:01 +02:00
Bruce Momjian 4a8adfd4d0 C comment: again update comment for pg_fe_sendauth for error cases 2013-12-03 11:42:18 -05:00
Bruce Momjian 6a6b7bbb81 Update C comment for pg_fe_getauthname
This function no longer takes an argument.
2013-12-03 11:33:46 -05:00
Bruce Momjian 9e0a97f1c8 libpq: change PQconndefaults() to ignore invalid service files
Previously, a missing or invalid service file caused it to return NULL.  Also fix
pg_upgrade to report "out of memory" for a null return from
PQconndefaults().

Patch by Steve Singer, rewritten by me
2013-12-03 11:12:25 -05:00
Peter Eisentraut fef88b3fda Report exit code from external recovery commands properly
When an external recovery command such as restore_command or
archive_cleanup_command fails, report the exit code properly,
distinguishing signals and normal exits, using the existing
wait_result_to_str() facility, instead of just reporting the return
value from system().

Reviewed-by: Peter Geoghegan <pg@heroku.com>
2013-12-02 22:31:05 -05:00
Tom Lane 7ab321404c Fix crash in assign_collations_walker for EXISTS with empty SELECT list.
We (I think I, actually) forgot about this corner case while coding
collation resolution.  Per bug #8648 from Arjen Nienhuis.
2013-12-02 20:28:45 -05:00
Tom Lane 7a1e34d371 Increase git_changelog's timestamp_slop from 10 min to 1 day.
Many committers seem to now be using a work flow in which back-patched
commits are timestamped minutes or even hours apart in different branches
(most likely because they commit in one branch before starting work on
the next one).  git_changelog was failing to merge its reports in such
cases, so increase the max time it's willing to merge commits across.
I considered getting rid of the limit altogether, but that produces
some odd results in terms of how the merged commit gets sorted relative
to unrelated commits.
2013-12-02 11:33:49 -05:00
Robert Haas c6d4b1dd3e Flag mmap implementation of dynamic shared memory as resize-capable.
Error noted by Heikki Linnakangas
2013-12-02 11:18:54 -05:00
Robert Haas a8656a3ab0 Make NUM_TOCHAR_prepare and NUM_TOCHAR_finish macros declare "len".
Remove the variable from the enclosing scopes so that nothing can be
relying on it.  The net result of this refactoring is that we get rid
of a few unnecessary strlen() calls.

Original patch from Greg Jaskiewicz, substantially expanded by me.
2013-12-02 10:51:06 -05:00
Robert Haas 9d140f7be2 Avoid out-of-bounds read in errfinish if error_stack_depth < 0.
If errordata_stack_depth < 0, we won't find that out and correct the
problem until CHECK_STACK_DEPTH() is invoked.  In the meantime,
elevel will be set based on an invalid read.  This is probably
harmless in practice, but it seems cleaner this way.

Xi Wang
2013-12-02 10:42:01 -05:00
Peter Eisentraut 3e3520cf7a Translation updates 2013-12-02 00:17:07 -05:00
Tom Lane 335470251d Update time zone data files to tzdata release 2013h.
DST law changes in Argentina, Brazil, Jordan, Libya, Liechtenstein,
Morocco, Palestine.  New timezone abbreviations WIB, WIT, WITA for
Indonesia.
2013-12-01 14:11:44 -05:00
Kevin Grittner 4bd371f6f8 Fix pg_dumpall to work for databases flagged as read-only.
pg_dumpall's charter is to be able to recreate a database cluster's
contents in a virgin installation, but it was failing to honor that
contract if the cluster had any ALTER DATABASE SET
default_transaction_read_only settings.  By including a SET command
for each connection opened by pg_dumpall output,
errors are avoided and the source cluster is successfully
recreated.

There was discussion of whether to also set this for the connection
applying pg_dump output, but it was felt that it was both less
appropriate in that context, and far easier to work around.

Backpatch to all supported branches.
2013-11-30 11:24:56 -06:00
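The scenario and the emitted workaround, sketched (database name invented; the override is the SET command described above):

    -- A source cluster may mark a database read-only by default:
    ALTER DATABASE reporting SET default_transaction_read_only = on;

    -- pg_dumpall output now issues a per-connection override so that
    -- restoring into such a database does not fail:
    SET default_transaction_read_only = off;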
Peter Eisentraut 34fa72ec9c Remove use of obsolescent Autoconf macros
Remove the use of the following macros, which are obsolescent according
to the Autoconf documentation:

- AC_C_CONST
- AC_C_STRINGIZE
- AC_C_VOLATILE
- AC_FUNC_MEMCMP
2013-11-30 09:17:08 -05:00
Alvaro Herrera 2393c7d102 Fix a couple of bugs in MultiXactId freezing
Both heap_freeze_tuple() and heap_tuple_needs_freeze() neglected to look
into a multixact to check the members against cutoff_xid.  This means
that a very old Xid could survive hidden within a multi, possibly
outliving its CLOG storage.  In the distant future, this would cause
clog lookup failures:
ERROR:  could not access status of transaction 3883960912
DETAIL:  Could not open file "pg_clog/0E78": No such file or directory.

This mostly was problematic when the updating transaction aborted, since
in that case the row wouldn't get pruned away earlier in vacuum and the
multixact could possibly survive for a long time.  In many cases, data
that is inaccessible for this reason can be brought back
heuristically.

As a second bug, heap_freeze_tuple() didn't properly handle multixacts
that need to be frozen according to cutoff_multi, but whose updater xid
is still alive.  Instead of preserving the update Xid, it just set Xmax
invalid, which leads to both old and new tuple versions becoming
visible.  This is pretty rare in practice, but a real threat
nonetheless.  Existing corrupted rows, unfortunately, cannot be repaired
in an automated fashion.

Existing physical replicas might have already incorrectly frozen tuples
because of different behavior than in master, which might only become
apparent in the future once pg_multixact/ is truncated; it is
recommended that all clones be rebuilt after upgrading.

Following code analysis caused by bug report by J Smith in message
CADFUPgc5bmtv-yg9znxV-vcfkb+JPRqs7m2OesQXaM_4Z1JpdQ@mail.gmail.com
and privately by F-Secure.

Backpatch to 9.3, where freezing of MultiXactIds was introduced.

Analysis and patch by Andres Freund, with some tweaks by Álvaro.
2013-11-29 21:47:25 -03:00
Alvaro Herrera 1ce150b7bb Don't TransactionIdDidAbort in HeapTupleGetUpdateXid
It is dangerous to do so, because some code expects to be able to see what's
the true Xmax even if it is aborted (particularly while traversing HOT
chains).  So don't do it, and instead rely on the callers to verify for
abortedness, if necessary.

Several race conditions and bugs fixed in the process.  One isolation test
changes the expected output due to these.

This also reverts commit c235a6a589, which is no longer necessary.

Backpatch to 9.3, where this function was introduced.

Andres Freund
2013-11-29 21:47:21 -03:00
Alvaro Herrera 1df0122daa Truncate pg_multixact/'s contents during crash recovery
Commit 9dc842f08 of 8.2 era prevented MultiXact truncation during crash
recovery, because there was no guarantee that enough state had been
setup, and because it wasn't deemed to be a good idea to remove data
during crash recovery anyway.  Since then, due to Hot-Standby, streaming
replication and PITR, the amount of time a cluster can spend doing crash
recovery has increased significantly, to the point that a cluster may
even never come out of it.  This has made failing to truncate the contents
of pg_multixact/ no longer defensible.

To fix, take care to setup enough state for multixact truncation before
crash recovery starts (easy since checkpoints contain the required
information), and move the current end-of-recovery actions to a new
TrimMultiXact() function, analogous to TrimCLOG().

At some later point, this should probably be done similarly to the way
clog.c is doing it, which is to just WAL log truncations, but we can't
do that for the back branches.

Back-patch to 9.0.  8.4 also has the problem, but since there's no hot
standby there, it's much less pressing.  In 9.2 and earlier, this patch
is simpler than in newer branches, because multixact access during
recovery isn't required.  Add appropriate checks to make sure that's not
happening.

Andres Freund
2013-11-29 21:47:15 -03:00
Alvaro Herrera f54106f77e Fix full-table-vacuum request mechanism for MultiXactIds
While autovacuum dutifully launched anti-multixact-wraparound vacuums
when the multixact "age" was reached, the vacuum code was not aware that
it needed to make them full-table vacuums.  As the resulting
partial-table vacuums aren't capable of actually increasing relminmxid,
autovacuum continued to launch anti-wraparound vacuums that didn't have
the intended effect, until age of relfrozenxid caused the vacuum to
finally be a full table one via vacuum_freeze_table_age.

To fix, introduce logic for multixacts similar to that for plain
TransactionIds, using the same GUCs.

Backpatch to 9.3, where permanent MultiXactIds were introduced.

Andres Freund, some cleanup by Álvaro
2013-11-29 21:47:13 -03:00
Alvaro Herrera 76a31c689c Replace hardcoded 200000000 with autovacuum_freeze_max_age
Parts of the code used autovacuum_freeze_max_age to determine whether
anti-multixact-wraparound vacuums are necessary, while others used a
hardcoded 200000000 value.  This leads to problems when
autovacuum_freeze_max_age is set to a non-default value.  Use
autovacuum_freeze_max_age everywhere.

Backpatch to 9.3, where vacuuming of multixacts was introduced.

Andres Freund
2013-11-29 21:47:09 -03:00
Tom Lane 79193c75f8 Fix assorted issues in pg_ctl's pgwin32_CommandLine().
Ensure that the invocation command for postgres or pg_ctl runservice
double-quotes the executable's pathname; failure to do this leads to
trouble when the path contains spaces.

Also, ensure that the path ends in ".exe" in both cases and uses
backslashes rather than slashes as directory separators.  The latter issue
is reported to confuse some third-party tools such as Symantec Backup Exec.

Also, rewrite the function to avoid buffer overrun issues by using a
PQExpBuffer instead of a fixed-size static buffer.  Combinations of
very long executable pathnames and very long data directory pathnames
could have caused trouble before, for example.

Back-patch to all active branches, since this code has been like this
for a long while.

Naoya Anzai and Tom Lane, reviewed by Rajeev Rastogi
2013-11-29 18:34:07 -05:00