Commit Graph

12500 Commits

Author SHA1 Message Date
Heikki Linnakangas d6a7271958 Correctly detect SSI conflicts of prepared transactions after crash.
A prepared transaction can get new conflicts in and out after preparing, so
we cannot rely on the in- and out-flags stored in the statefile at prepare-
time. As a quick fix, make the conservative assumption that after a restart,
all prepared transactions are considered to have both in- and out-conflicts.
That can lead to unnecessary rollbacks after a crash, but that shouldn't be
a big problem in practice; you don't want prepared transactions to hang
around for a long time anyway.
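
For illustration only (table name and transaction identifier are invented,
and max_prepared_transactions must be nonzero), the affected scenario looks
roughly like this:

    BEGIN ISOLATION LEVEL SERIALIZABLE;
    UPDATE accounts SET balance = balance - 10 WHERE id = 1;
    PREPARE TRANSACTION 'tx1';
    -- If the server crashes and restarts at this point, 'tx1' is now
    -- conservatively assumed to have both in- and out-conflicts until it
    -- is resolved with COMMIT PREPARED 'tx1' or ROLLBACK PREPARED 'tx1'.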

Dan Ports
2012-02-29 15:42:36 +02:00
Tom Lane 5c02a00d44 Move CRC tables to libpgport, and provide them in a separate include file.
This makes it much more convenient to build tools for Postgres that are
separately compiled and require a matching CRC implementation.

To prevent multiple copies of the CRC polynomial tables being introduced
into the postgres binaries, they are now included in the static library
libpgport that is mainly meant for replacement system functions.  That
seems like a bit of a kludge, but there's no better place.

This cleans up building of the tools pg_controldata and pg_resetxlog,
which previously had to build their own copies of pg_crc.o.

In the future, external programs that need access to the CRC tables can
include the tables directly from the new header file pg_crc_tables.h.

Daniel Farina, reviewed by Abhijit Menon-Sen and Tom Lane
2012-02-28 19:53:39 -05:00
Tom Lane 0140a11b9b Fix thinko in new match_join_clauses_to_index() logic.
We don't need to constrain the other side of an indexable join clause to
not be below an outer join; an example here is

SELECT FROM t1 LEFT JOIN t2 ON t1.a = t2.b LEFT JOIN t3 ON t2.c = t3.d;

We can consider an inner indexscan on t3.d using c = d as indexqual, even
though t2.c is potentially nulled by a previous outer join.  The comparable
logic in orindxpath.c has always worked that way, but I was being overly
cautious here.
2012-02-28 18:10:40 -05:00
Peter Eisentraut 973e9fb294 Add const qualifiers where they are accidentally cast away
This only produces warnings under -Wcast-qual, but it's more correct
and consistent in any case.
2012-02-28 12:42:08 +02:00
Alvaro Herrera cb3a7c2b95 ALTER TABLE: skip FK validation when it's safe to do so
We already skip rewriting the table in these cases, but we still force a
whole table scan to validate the data.  This can be skipped, and thus
we can make the whole ALTER TABLE operation just do some catalog touches
instead of scanning the table, when these two conditions hold:

(a) Old and new pg_constraint.conpfeqop match exactly.  This is actually
stronger than needed; we could loosen things by way of operator
families, but it'd require a lot more effort.

(b) The functions, if any, implementing a cast from the foreign type to
the primary opcintype are the same.  For this purpose, we can consider a
binary coercion equivalent to an exact type match.  When the opcintype
is polymorphic, require that the old and new foreign types match
exactly.  (Since ri_triggers.c does use the executor, the stronger check
for polymorphic types is no mere future-proofing.  However, no core type
exercises its necessity.)
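
For illustration (schema invented), one case where both conditions hold and
the validation scan can now be skipped is widening a varchar, since the
equality operator and cast behavior are unchanged:

    CREATE TABLE parent (id varchar(20) PRIMARY KEY);
    CREATE TABLE child  (pid varchar(20) REFERENCES parent(id));
    -- With this patch, the foreign key is not re-validated here:
    ALTER TABLE parent ALTER COLUMN id TYPE varchar(40);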

Author: Noah Misch

Committer's note: catalog version bumped due to change of the Constraint
node.  I can't actually find any way to have such a node in a stored
rule, but given that we have "out" support for them, better be safe.
2012-02-27 19:10:24 -03:00
Peter Eisentraut 9bf8603c7a Call check_keywords.pl in maintainer-check
For that purpose, have check_keywords.pl print errors to stderr and
return a useful exit status.
2012-02-27 13:53:12 +02:00
Tom Lane 1b630751d0 Fix some more bugs in GIN's WAL replay logic.
In commit 4016bdef8a I fixed a bunch of
ginxlog.c bugs having to do with not handling XLogReadBuffer failures
correctly.  However, in ginRedoUpdateMetapage and ginRedoDeleteListPages,
I unaccountably thought that failure to read the metapage would be
impossible and just put in an elog(PANIC) call.  This is of course wrong:
failure is exactly what will happen if the index got dropped (or rebuilt)
between creation of the WAL record and the crash we're trying to recover
from.  I believe this explains Nicholas Wilson's recent report of these
errors getting reached.

Also, fix memory leak in forgetIncompleteSplit.  This wasn't of much
concern when the code was written, but in a long-running standby server
page split records could be expected to accumulate indefinitely.

Back-patch to 8.4 --- before that, GIN didn't have a metapage.
2012-02-26 15:12:17 -05:00
Peter Eisentraut b5c077c368 Remove useless cast 2012-02-26 15:31:16 +02:00
Peter Eisentraut 66f0cf7da8 Remove useless const qualifier
Claiming that the typevar argument to DefineCompositeType() is const
was a plain lie.  A similar case in DefineVirtualRelation() was
already changed in passing in commit 1575fbcb.  Also clean up the now
unnecessary casts that used to cast away the const.
2012-02-26 15:22:27 +02:00
Tom Lane 4dd78bf37a Merge dissect() into cdissect() to remove a pile of near-duplicate code.
The "uncomplicated" case isn't materially less complicated than the full
case, certainly not enough so to justify duplicating nearly 500 lines
of code.  The only extra work being done in the full path is zaptreesubs,
which is very cheap compared to everything else being done here, and
besides that I'm less than convinced that it's not needed in some cases
even without backrefs.
2012-02-24 18:40:31 -05:00
Tom Lane 587359479a Avoid repeated creation/freeing of per-subre DFAs during regex search.
In nested sub-regex trees, lower-level nodes created DFAs and then
destroyed them again before exiting, which is a bit dumb considering that
the recursive search is likely to call those nodes again later.  Instead
cache each created DFA until the end of pg_regexec().  This is basically a
space for time tradeoff, in that it might increase the maximum memory
usage.  However, in most regex patterns there are not all that many subre
nodes, so not that many DFAs --- and in any case, the peak usage occurs
when reaching the bottom recursion level, and except for alternation cases
that's going to be the same anyway.
2012-02-24 18:40:30 -05:00
Tom Lane 3cbfe485e4 Remove useless "retry memory" logic within regex engine.
Apparently some primordial version of Spencer's engine needed cdissect()
and child functions to be able to continue matching from a previous
position when re-called.  That is dead code, though, since trivial
inspection shows that cdissect can never be entered without having
previously done zapmem which resets the relevant retry counter.  I have
also verified experimentally that no case in the Tcl regression tests
reaches cdissect with a nonzero retry value.  Accordingly, remove that
logic.  This doesn't really save any noticeable number of cycles in itself,
but it is one step towards making dissect() and cdissect() equivalent,
which will allow removing hundreds of lines of near-duplicated code.

Since struct subre's "retry" field is no longer particularly related to
any kind of retry, rename it to "id".  As of this commit it's only used
for identifying a subre node in debug printouts, so you might think we
should get rid of the field entirely; but I have a plan for another use.
2012-02-24 18:40:28 -05:00
Peter Eisentraut 9cfd800aab Add some enumeration commas, for consistency 2012-02-24 11:04:45 +02:00
Tom Lane 173e29aa5d Fix the general case of quantified regex back-references.
Cases where a back-reference is part of a larger subexpression that
is quantified have never worked in Spencer's regex engine, because
he used a compile-time transformation that neglected the need to
check the back-reference match in iterations before the last one.
(That was okay for capturing parens, and we still do it if the
regex has *only* capturing parens ... but it's not okay for backrefs.)
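
As a hedged illustration of the newly handled shape (the expected results
reflect my reading of the fix, not test output from the commit):

    -- The back-reference \2 sits inside a larger group quantified with +:
    SELECT 'aabb' ~ '^(([ab])\2)+$';   -- expected: true  ('aa' then 'bb')
    SELECT 'aabc' ~ '^(([ab])\2)+$';   -- expected: false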

To make this work properly, we have to add an "iteration" node type
to the regex engine's vocabulary of sub-regex nodes.  Since this is a
moderately large change with a fair risk of introducing new bugs of its
own, apply to HEAD only, even though it's a fix for a longstanding bug.
2012-02-24 01:41:03 -05:00
Andrew Dunstan 0c9e5d5e0d Correctly handle NULLs in JSON output.
Error reported by David Wheeler.
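
Illustration only (the expected output shown is an assumption): a SQL NULL
inside a composite should now come out as a JSON null.

    SELECT row_to_json(ROW(1, NULL::text));
    -- {"f1":1,"f2":null}
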
2012-02-23 23:44:16 -05:00
Tom Lane 077711c2e3 Remove arbitrary limitation on length of common name in SSL certificates.
Both libpq and the backend would truncate a common name extracted from a
certificate at 32 bytes.  Replace that fixed-size buffer with a dynamically
allocated string so that there is no hard limit.  While at it, remove the
code for extracting peer_dn, which we weren't using for anything; and
don't bother to store peer_cn longer than we need it in libpq.

This limit was not so terribly unreasonable when the code was written,
because we weren't using the result for anything critical, just logging it.
But now that there are options for checking the common name against the
server host name (in libpq) or using it as the user's name (in the server),
this could result in undesirable failures.  In the worst case it even seems
possible to spoof a server name or user name, if the correct name is
exactly 32 bytes and the attacker can persuade a trusted CA to issue a
certificate in which that string is a prefix of the certificate's common
name.  (To exploit this for a server name, he'd also have to send the
connection astray via phony DNS data or some such.)  The case that this is
a realistic security threat is a bit thin, but nonetheless we'll treat it
as one.

Back-patch to 8.4.  Older releases contain the faulty code, but it's not
a security problem because the common name wasn't used for anything
interesting.

Reported and patched by Heikki Linnakangas

Security: CVE-2012-0867
2012-02-23 15:48:04 -05:00
Tom Lane 891e6e7bfd Require execute permission on the trigger function for CREATE TRIGGER.
This check was overlooked when we added function execute permissions to the
system years ago.  For an ordinary trigger function it's not a big deal,
since trigger functions execute with the permissions of the table owner,
so they couldn't do anything the user issuing the CREATE TRIGGER couldn't
have done anyway.  However, if a trigger function is SECURITY DEFINER,
that is not the case.  The lack of checking would allow another user to
install it on his own table and then invoke it with, essentially, forged
input data; which the trigger function is unlikely to realize, so it might
do something undesirable, for instance insert false entries in an audit log
table.
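
As a sketch of the new requirement (role, table, and function names are
invented), the role issuing CREATE TRIGGER must now hold EXECUTE on the
trigger function:

    GRANT EXECUTE ON FUNCTION audit_log() TO app_owner;
    CREATE TRIGGER audit_ins AFTER INSERT ON app_table
        FOR EACH ROW EXECUTE PROCEDURE audit_log();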

Reported by Dinesh Kumar, patch by Robert Haas

Security: CVE-2012-0866
2012-02-23 15:38:56 -05:00
Peter Eisentraut c9d7004440 Remove inappropriate quotes
And adjust wording for consistency.
2012-02-23 12:52:17 +02:00
Peter Eisentraut 8251670cb3 Fix build without OpenSSL
This is a fixup for commit a445cb92ef.
2012-02-23 10:20:25 +02:00
Robert Haas 2254367435 Make EXPLAIN (BUFFERS) track blocks dirtied, as well as those written.
Also expose the new counters through pg_stat_statements.
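
Usage sketch (table name, plan text, and numbers are illustrative only):

    EXPLAIN (ANALYZE, BUFFERS) UPDATE accounts SET flag = true;
    --   Buffers: shared hit=42 dirtied=7 written=3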

Patch by me.  Review by Fujii Masao and Greg Smith.
2012-02-22 20:33:05 -05:00
Robert Haas f74f9a277c Fix typo in comment.
Sandro Santilli
2012-02-22 19:46:12 -05:00
Peter Eisentraut a445cb92ef Add parameters for controlling locations of server-side SSL files
This allows changing the location of the files that were previously
hard-coded to server.crt, server.key, root.crt, root.crl.

server.crt and server.key continue to be the default settings and are
thus required to be present by default if SSL is enabled.  But the
settings for the server-side CA and CRL are now empty by default, and
if they are set, the files are required to be present.  This replaces
the previous behavior of ignoring the functionality if the files were
not found.
2012-02-22 23:40:46 +02:00
Alvaro Herrera a417f85e1d REASSIGN OWNED: Support foreign data wrappers and servers
This was overlooked when implementing those kinds of objects, in commit
cae565e503.
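
Illustration (role names invented): foreign-data wrappers and foreign
servers owned by the old role are now transferred along with everything
else.

    REASSIGN OWNED BY old_role TO new_role;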

Per report from Pawel Casperek.
2012-02-22 17:33:12 -03:00
Tom Lane 593a9631a7 Don't clear btpo_cycleid during _bt_vacuum_one_page.
When "vacuuming" a single btree page by removing LP_DEAD tuples, we are not
actually within a vacuum operation, but rather in an ordinary insertion
process that could well be running concurrently with a vacuum.  So clearing
the cycleid is incorrect, and could cause the concurrent vacuum to miss
removing tuples that it needs to remove.  This is a longstanding bug
introduced by commit e6284649b9 of
2006-07-25.  I believe it explains Maxim Boguk's recent report of index
corruption, and probably some other previously unexplained reports.

In 9.0 and up this is a one-line fix; before that we need to introduce a
flag to tell _bt_delitems what to do.
2012-02-21 15:03:36 -05:00
Tom Lane 9789c99d01 Cosmetic cleanup for commit a760893dbd.
Mostly, fixing overlooked comments.
2012-02-21 14:14:16 -05:00
Magnus Hagander c2a2f7516b Avoid double close of file handle in syslogger on win32
This causes an exception when running under a debugger or in particular
when running on a debug version of Windows.

Patch from MauMau
2012-02-21 17:12:25 +01:00
Andrew Dunstan 6b044cb810 Fix typo, noticed by Will Crawford. 2012-02-21 11:03:51 -05:00
Andrew Dunstan 83fcaffea2 Fix a couple of cases of JSON output.
First, as noted by Itagaki Takahiro, a datum of type JSON doesn't
need to be escaped. Second, ensure that numeric output not in
the form of a legal JSON number is quoted and escaped.
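
Illustration of both points (the exact output text is my assumption): the
json datum passes through unescaped, and the NaN numeric is quoted.

    SELECT row_to_json(ROW('NaN'::numeric, '{"a":1}'::json));
    -- {"f1":"NaN","f2":{"a":1}}
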
2012-02-20 15:01:03 -05:00
Tom Lane 5223f96d92 Fix regex back-references that are directly quantified with *.
The syntax "\n*", that is a backref with a * quantifier directly applied
to it, has never worked correctly in Spencer's library.  This has been an
open bug in the Tcl bug tracker since 2005:
https://sourceforge.net/tracker/index.php?func=detail&aid=1115587&group_id=10894&atid=110894

The core of the problem is in parseqatom(), which first changes "\n*" to
"\n+|" and then applies repeat() to the NFA representing the backref atom.
repeat() thinks that any arc leading into its "rp" argument is part of the
sub-NFA to be repeated.  Unfortunately, since parseqatom() already created
the arc that was intended to represent the empty bypass around "\n+", this
arc gets moved too, so that it now leads into the state loop created by
repeat().  Thus, what was supposed to be an "empty" bypass gets turned into
something that represents zero or more repetitions of the NFA representing
the backref atom.  In the original example, in place of
	^([bc])\1*$
we now have something that acts like
	^([bc])(\1+|[bc]*)$
At runtime, the branch involving the actual backref fails, as it's supposed
to, but then the other branch succeeds anyway.
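
In SQL terms (expected results reflect the intended post-fix behavior):

    SELECT 'bb' ~ '^([bc])\1*$';   -- expected: true
    SELECT 'bc' ~ '^([bc])\1*$';   -- expected: false
                                   -- (the pre-fix code let this match)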

We could no doubt fix this by some rearrangement of the operations in
parseqatom(), but that code is plenty ugly already, and what's more the
whole business of converting "x*" to "x+|" probably needs to go away to fix
another problem I'll mention in a moment.  Instead, this patch suppresses
the *-conversion when the target is a simple backref atom, leaving the case
of m == 0 to be handled at runtime.  This makes the patch in regcomp.c a
one-liner, at the cost of having to tweak cbrdissect() a little.  In the
event I went a bit further than that and rewrote cbrdissect() to check all
the string-length-related conditions before it starts comparing characters.
It seems a bit stupid to possibly iterate through many copies of an
n-character backreference, only to fail at the end because the target
string's length isn't a multiple of n --- we could have found that out
before starting.  The existing coding could only be a win if integer
division is hugely expensive compared to character comparison, but I don't
know of any modern machine where that might be true.

This does not fix all the problems with quantified back-references.  In
particular, the code is still broken for back-references that appear within
a larger expression that is quantified (so that direct insertion of the
quantification limits into the BACKREF node doesn't apply).  I think fixing
that will take some major surgery on the NFA code, specifically introducing
an explicit iteration node type instead of trying to transform iteration
into concatenation of modified regexps.

Back-patch to all supported branches.  In HEAD, also add a regression test
case for this.  (It may seem a bit silly to create a regression test file
for just one test case; but I'm expecting that we will soon import a whole
bunch of regex regression tests from Tcl, so might as well create the
infrastructure now.)
2012-02-20 00:52:33 -05:00
Tom Lane e00f68e49c Add caching of ctype.h/wctype.h results in regc_locale.c.
While this doesn't save a huge amount of runtime, it still seems worth
doing, especially since I realized that the data copying I did in my first
draft was quite unnecessary.  In this version, once we have the results
cached, getting them back for re-use is really very cheap.

Also, remove the hard-wired limitation to not consider wctype.h results for
character codes above 255.  It turns out that we can't push the limit as
far up as I'd originally hoped, because the regex colormap code is not
efficient enough to cope very well with character classes containing many
thousand letters, which a Unicode locale is entirely capable of producing.
Still, we can push it up to U+7FF (which I chose as the limit of 2-byte
UTF8 characters), which will at least make Eastern Europeans happy pending
a better solution.  Thus, this commit resolves the specific complaint in
bug #6457, but not the more general issue that letters of non-western
alphabets are mostly not recognized as matching [[:alpha:]].
2012-02-19 21:01:13 -05:00
Tom Lane 27af91438b Create the beginnings of internals documentation for the regex code.
Create src/backend/regex/README to hold an implementation overview of
the regex package, and fill it in with some preliminary notes about
the code's DFA/NFA processing and colormap management.  Much more to
do there of course.

Also, improve some code comments around the colormap and cvec code.
No functional changes except to add one missing assert.
2012-02-19 18:58:23 -05:00
Andrew Dunstan 2f582f76b1 Improve pretty printing of viewdefs.
Some line feeds are added to target lists and FROM lists to make
them more readable. By default they wrap at 80 columns if possible,
but the wrap column is also selectable; if 0, it wraps after every
item.
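
A usage sketch, assuming the wrap column is exposed as an extra integer
argument to pg_get_viewdef() (view name invented):

    SELECT pg_get_viewdef('some_view'::regclass, 80);  -- wrap at 80 columns
    SELECT pg_get_viewdef('some_view'::regclass, 0);   -- wrap after each item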

Andrew Dunstan, reviewed by Hitoshi Harada.
2012-02-19 11:43:46 -05:00
Tom Lane 08fd6ff37f Sync regex code with Tcl 8.5.11.
Sync our regex code with upstream changes since last time we did this,
which was Tcl 8.5.0 (see commit df1e965e12).

There are no functional changes here; the main point is just to lay down
a commit-log marker that somebody has looked at this recently, and to do
what we can to keep the two codebases comparable.
2012-02-17 19:44:26 -05:00
Tom Lane 4767bc8ff2 Improve statistics estimation to make some use of DISTINCT in sub-queries.
Formerly, we just punted when trying to estimate stats for variables coming
out of sub-queries using DISTINCT, on the grounds that whatever stats we
might have for underlying table columns would be inapplicable.  But if the
sub-query has only one DISTINCT column, we can consider its output variable
as being unique, which is useful information all by itself.  The scope of
this improvement is pretty narrow, but it costs nearly nothing, so we might
as well do it.  Per discussion with Andres Freund.
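
An illustrative query shape (tables invented) where the estimate improves,
since ss.cust can now be treated as unique when sizing the join:

    SELECT *
    FROM orders o
    JOIN (SELECT DISTINCT customer_id AS cust FROM archive) ss
      ON o.customer_id = ss.cust;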

This patch differs from the draft I submitted yesterday in updating various
comments about vardata.isunique (to reflect its extended meaning) and in
tweaking the interaction with security_barrier views.  There does not seem
to be a reason why we can't use this sort of knowledge even when the
sub-query is such a view.
2012-02-16 17:34:00 -05:00
Tom Lane 4bfe68dfab Run a portal's cleanup hook immediately when pushing it to FAILED state.
This extends the changes of commit 6252c4f9e2
so that we run the cleanup hook earlier for failure cases as well as
success cases.  As before, the point is to avoid an assertion failure from
an Assert I added in commit a874fe7b4c, which
was meant to check that no user-written code can be called during portal
cleanup.  This fixes a case reported by Pavan Deolasee in which the Assert
could be triggered during backend exit (see the new regression test case),
and also prevents the possibility that the cleanup hook is run after
portions of the portal's state have already been recycled.  That doesn't
really matter in current usage, but it foreseeably could matter in the
future.

Back-patch to 9.1 where the Assert in question was added.
2012-02-15 16:19:01 -05:00
Robert Haas edec8c8e00 Fix VPATH builds, broken by my recent commit to speed up tuplesorting.
The relevant commit is 337b6f5ecf.
2012-02-15 15:53:53 -05:00
Robert Haas 337b6f5ecf Speed up in-memory tuplesorting.
Per recent work by Peter Geoghegan, it's significantly faster to
tuplesort on a single sortkey if ApplySortComparator is inlined into
quicksort rather than reached via a function pointer.  It's also faster
in general to have a version of quicksort which is specialized for
sorting SortTuple objects rather than objects of arbitrary size and
type.  This requires a couple of additional copies of the quicksort
logic, which in this patch are generated using a Perl script.  There
might be some benefit in adding further specializations here too,
but thus far it's not clear that those gains are worth their weight
in code footprint.
2012-02-15 12:13:32 -05:00
Robert Haas 73a4b994a6 Make CREATE/ALTER FUNCTION support NOT LEAKPROOF.
Because it isn't good to be able to turn things on, and not off again.
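
Sketch of the added syntax (function name and body are invented):

    CREATE OR REPLACE FUNCTION lossy_eq(integer, integer)
        RETURNS boolean LANGUAGE sql LEAKPROOF
        AS 'SELECT $1 = $2';
    -- ...and the flag can now be turned off again:
    ALTER FUNCTION lossy_eq(integer, integer) NOT LEAKPROOF;
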
2012-02-15 10:45:08 -05:00
Tom Lane 398f70ec07 Preserve column names in the execution-time tupledesc for a RowExpr.
The hstore and json datatypes both have record-conversion functions that
pay attention to column names in the composite values they're handed.
We used to not worry about inserting correct field names into tuple
descriptors generated at runtime, but given these examples it seems
useful to do so.  Observe the nicer-looking results in the regression
tests whose results changed.

catversion bump because there is a subtle change in requirements for stored
rule parsetrees: RowExprs from ROW() constructs now have to include field
names.

Andrew Dunstan and Tom Lane
2012-02-14 17:34:56 -05:00
Robert Haas cd30728fb2 Allow LEAKPROOF functions for better performance of security views.
We don't normally allow quals to be pushed down into a view created
with the security_barrier option, but functions without side effects
are an exception: they're OK.  This allows much better performance in
common cases, such as when using an equality operator (that might
even be indexable).
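
A sketch of the intended usage (object names invented): when the operator's
underlying function is marked LEAKPROOF, a qual like the one below may be
pushed inside the security_barrier view and use an index on accounts.id.

    CREATE VIEW my_rows WITH (security_barrier) AS
        SELECT * FROM accounts WHERE owner = current_user;
    SELECT * FROM my_rows WHERE id = 42;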

There is an outstanding issue here with the CREATE FUNCTION / ALTER
FUNCTION syntax: there's no way to use ALTER FUNCTION to unset the
leakproof flag.  But I'm committing this as-is so that it doesn't
have to be rebased again; we can fix up the grammar in a future
commit.

KaiGai Kohei, with some wordsmithing by me.
2012-02-13 22:21:14 -05:00
Heikki Linnakangas 21b1634275 Fix heap_multi_insert to set t_self field in the caller's tuples.
If tuples were toasted, heap_multi_insert didn't update the ctid on the
original tuples. This caused a failure if there was an after trigger
(including a foreign key) on the table, and a tuple got toasted.

Per off-list report and test case from Ted Phelps
2012-02-13 10:20:50 +02:00
Robert Haas d429ebe347 Add a comment to AdjustIntervalForTypmod to reduce chance of future bugs.
It's not entirely evident how the logic here relates to the
interval_transform function, so let's clue people in that they need to
check that if the rules change.
2012-02-09 12:24:36 -05:00
Robert Haas 6656588575 Improve interval_transform function to detect a few more cases.
Noah Misch, per a review comment from me.
2012-02-09 12:24:22 -05:00
Heikki Linnakangas 82e73ba0d1 Add new keywords SNAPSHOT and TYPES to the keyword list in gram.y
These were added to kwlist.h as unreserved keywords in separate patches,
but authors forgot to add them to the corresponding list in gram.y.
Because of that, even though they were supposed to be unreserved keywords,
they could not be used as identifiers. src/tools/check_keywords.pl is your
friend.
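
With the fix they work as ordinary identifiers again, e.g.:

    CREATE TABLE snapshot (id integer);
    CREATE TABLE types (id integer);
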
2012-02-09 11:37:54 +02:00
Tom Lane 331bf6712c Throw error sooner for unlogged GiST indexes.
Throwing an error only after we've built the main index fork is pretty
unfriendly when the table already contains data.  Per gripe from Jay
Levitt.
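
Illustration (table and data invented); the point of "sooner" is that the
failure no longer comes after a potentially long index build:

    CREATE UNLOGGED TABLE pts (p point);
    INSERT INTO pts SELECT point(g, g) FROM generate_series(1, 100000) g;
    CREATE INDEX ON pts USING gist (p);  -- now fails up front
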
2012-02-08 16:19:27 -05:00
Tom Lane cb7c84fae8 Check misplaced window functions before checking aggregate/group by sanity.
If somebody puts a window function in WHERE, we should complain about that
in so many words.  The previous coding tended to complain about the window
function's arguments instead, which is likely to be misleading to users who
are unclear on the semantics of window functions; as seen for example in
bug #6440 from Matyas Novak.
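
Illustration (table invented); the complaint is now about the misplaced
window function itself rather than its arguments:

    SELECT * FROM accounts
    WHERE row_number() OVER (ORDER BY id) <= 10;   -- error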

Just another example of how "add new code at the end" is frequently a bad
heuristic.
2012-02-08 13:15:02 -05:00
Robert Haas c13897983a Add transform functions for various temporal typmod coercions.
This enables ALTER TABLE to skip table and index rebuilds in some cases.
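
For example (table and column invented, and assuming the column previously
had a lower precision such as timestamp(3)), raising the precision should
no longer force a rewrite:

    ALTER TABLE events ALTER COLUMN created_at TYPE timestamp(6);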

Noah Misch, with trivial changes by me.
2012-02-08 09:33:37 -05:00
Heikki Linnakangas 1a01560cbb Rename LWLockWaitUntilFree to LWLockAcquireOrWait.
LWLockAcquireOrWait makes it more clear that the lock is acquired if it's
free.
2012-02-08 09:17:13 +02:00
Robert Haas af7dd696b0 Fix typos pointed out by Noah Misch. 2012-02-07 21:40:49 -05:00
Robert Haas f7d7dade8a Add a transform function for varbit typmod coercions.
This enables ALTER TABLE to skip table and index rebuilds when the
new type is unconstrained varbit, or when the allowable number of bits
is not decreasing.
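
A sketch (table invented) of the cases that now avoid a rebuild:

    CREATE TABLE flags (f varbit(16));
    ALTER TABLE flags ALTER COLUMN f TYPE varbit(32);  -- widened: no rebuild
    ALTER TABLE flags ALTER COLUMN f TYPE varbit;      -- unconstrained: no rebuild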

Noah Misch, with review and a fix for an OID collision by me.
2012-02-07 12:42:50 -05:00