Commit Graph

5445 Commits

Author SHA1 Message Date
Fujii Masao 5d2b45e3f7 Add GUC to control the time to wait before retrieving WAL after failed attempt.
Previously, when the standby server failed to retrieve WAL files from every
source (i.e., streaming replication, the local pg_xlog directory, or the WAL
archive), it always waited for a hard-coded five seconds before the next
attempt. This is problematic in warm standby, for example, because
restore_command can then fail every five seconds and flood the log files
with error messages, even when the next WAL file is expected to be
unavailable for a long time.

This commit adds a new parameter, wal_retrieve_retry_interval, to control
that wait time.
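
As an illustrative sketch only (RestoreWalFromAnySource is a hypothetical
placeholder; the real retry loop in xlog.c is more involved), the change
amounts to roughly this:

    /* wal_retrieve_retry_interval is in milliseconds (default 5000 = 5s). */
    extern int  wal_retrieve_retry_interval;

    for (;;)
    {
        if (RestoreWalFromAnySource())      /* hypothetical helper */
            break;
        /* previously: pg_usleep(5000000L);   hard-coded five seconds */
        pg_usleep(wal_retrieve_retry_interval * 1000L);
    }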

Alexey Vasiliev and Michael Paquier, reviewed by Andres Freund and me.
2015-02-23 20:55:17 +09:00
Jeff Davis 74811c4050 Rename variable in AllocSetContextCreate to be consistent.
Everywhere else in the file, "context" is of type MemoryContext and
"set" is of type AllocSet. AllocSetContextCreate uses a variable of
type AllocSet, so rename it from "context" to "set".
2015-02-21 23:17:52 -08:00
Jeff Davis b419865a81 In array_agg(), don't create a new context for every group.
Previously, each new array created a new memory context that started
out at 8kB. This is incredibly wasteful when there are lots of small
groups of just a few elements each.

Change initArrayResult() and friends to accept a "subcontext" argument
to indicate whether the caller wants the ArrayBuildState allocated in
a new subcontext or not. If not, it can no longer be released
separately from the rest of the memory context.
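
A minimal sketch of the new calling convention (the exact signatures are
assumptions based on this description):

    ArrayBuildState *astate;

    /* subcontext = false: build in the caller's context, with no private
     * 8kB context per group; the state then cannot be freed separately. */
    astate = initArrayResult(INT4OID, CurrentMemoryContext, false);
    astate = accumArrayResult(astate, Int32GetDatum(42), false,
                              INT4OID, CurrentMemoryContext);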

Fixes bug report by Frank van Vugt on 2013-10-19.

Tomas Vondra. Reviewed by Ali Akbar, Tom Lane, and me.
2015-02-21 17:24:48 -08:00
Andres Freund eb68379c38 Allow forcing nullness of columns during bootstrap.
Bootstrap determines whether a column is null based on simple builtin
rules. Those work surprisingly well, but nonetheless a few existing
columns aren't set correctly. Additionally there is at least one patch
sent to hackers where forcing the nullness of a column would be helpful.

The bootstrap format has gained FORCE [NOT] NULL for this, which will be
emitted by genbki.pl when BKI_FORCE_(NOT_)?NULL is specified for a
column in a catalog header.
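
For illustration, a hypothetical catalog header using the new annotations
(pg_example and its columns are made up, and the exact declaration syntax
is an assumption) might read:

    CATALOG(pg_example,9999)
    {
        NameData    exname;                      /* builtin rules suffice */
        int32       exflags BKI_FORCE_NOT_NULL;  /* force NOT NULL */
        text        exconfig BKI_FORCE_NULL;     /* force nullable */
    } FormData_pg_example;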

This patch doesn't change the marking of any existing columns.

Discussion: 20150215170014.GE15326@awork2.anarazel.de
2015-02-21 22:31:54 +01:00
Tom Lane 2e211211a7 Use FLEXIBLE_ARRAY_MEMBER in a number of other places.
I think we're about done with this...
2015-02-21 16:12:14 -05:00
Tom Lane e1a11d9311 Use FLEXIBLE_ARRAY_MEMBER for HeapTupleHeaderData.t_bits[].
This requires changing quite a few places that were depending on
sizeof(HeapTupleHeaderData), but it seems for the best.

Michael Paquier, some adjustments by me
2015-02-21 15:13:06 -05:00
Tom Lane 33b2a2c97f Fix statically allocated struct with FLEXIBLE_ARRAY_MEMBER member.
clang complains about this, not unreasonably, so define another struct
that's explicitly for a WordEntryPos with exactly one element.

While at it, get rid of pretty dubious use of a static variable for
more than one purpose --- if it were being treated as const maybe
I'd be okay with this, but it isn't.
2015-02-20 17:50:18 -05:00
Tom Lane 33a3b03d63 Use FLEXIBLE_ARRAY_MEMBER in some more places.
Fix a batch of structs that are only visible within individual .c files.

Michael Paquier
2015-02-20 17:32:01 -05:00
Tom Lane c110eff132 Use FLEXIBLE_ARRAY_MEMBER in struct RecordIOData.
I (tgl) fixed this last night in rowtypes.c, but I missed that the
code had been copied into a couple of other places.

Michael Paquier
2015-02-20 17:03:12 -05:00
Tom Lane 5740be6d6e Some more FLEXIBLE_ARRAY_MEMBER hacking. 2015-02-20 02:28:03 -05:00
Tom Lane 9aa53bbd15 Remove unused variable.
Per buildfarm.
2015-02-20 00:47:28 -05:00
Tom Lane 09d8d110a6 Use FLEXIBLE_ARRAY_MEMBER in a bunch more places.
Replace some bogus "x[1]" declarations with "x[FLEXIBLE_ARRAY_MEMBER]".
Aside from being more self-documenting, this should help prevent bogus
warnings from static code analyzers and perhaps compiler misoptimizations.

This patch is just a down payment on eliminating the whole problem, but
it gets rid of a lot of easy-to-fix cases.

Note that the main problem with doing this is that one must no longer rely
on computing sizeof(the containing struct), since the result would be
compiler-dependent.  Instead use offsetof(struct, lastfield).  Autoconf
also warns against spelling that offsetof(struct, lastfield[0]).
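
A small illustration of the pattern, using a hypothetical struct:

    typedef struct Blob
    {
        int32   len;
        char    data[FLEXIBLE_ARRAY_MEMBER];    /* was: char data[1]; */
    } Blob;

    /* Allocate room for n payload bytes.  sizeof(Blob) is now
     * compiler-dependent, so size the request with offsetof instead: */
    Blob   *b = (Blob *) palloc(offsetof(Blob, data) + n);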

Michael Paquier, review and additional fixes by me.
2015-02-20 00:11:42 -05:00
Tom Lane 2fb7a75f37 Add pg_stat_get_snapshot_timestamp() to show statistics snapshot timestamp.
Per discussion, this could be useful for purposes such as programmatically
detecting a nonresponding stats collector.  We already have the timestamp
anyway, it's just a matter of providing a SQL-accessible function to fetch
it.

Matt Kelly, reviewed by Jim Nasby
2015-02-19 21:36:50 -05:00
Tom Lane 56a79a869b Split array_push into separate array_append and array_prepend functions.
There wasn't any good reason for a single C function to implement both
these SQL functions: it saved very little code overall, and it required
significant pushups to re-determine at runtime which case applied.  Redoing
it as two functions ends up with just slightly more lines of code, but it's
simpler to understand, and faster too because we need not repeat syscache
lookups on every call.

An important side benefit is that this eliminates the only case in which
different aliases of the same C function had both anyarray and anyelement
arguments at the same position, which would almost always be a mistake.
The opr_sanity regression test will now notice such mistakes since there's
no longer a valid case where it happens.
2015-02-18 20:53:33 -05:00
Tom Lane e983c4d1aa Rationalize the APIs of array element/slice access functions.
The four functions array_ref, array_set, array_get_slice, array_set_slice
have traditionally declared their array inputs and results as being of type
"ArrayType *".  This is a lie, and has been since Berkeley days, because
they actually also support "fixed-length array" types such as "name" and
"point"; not to mention that the inputs could be toasted.  These values
should be declared Datum instead to avoid confusion.  The current coding
already risks possible misoptimization by compilers, and it'll get worse
when "expanded" array representations become a valid alternative.

However, there's a fair amount of code using array_ref and array_set with
arrays that *are* known to be ArrayType structures, and there might be more
such places in third-party code.  Rather than cluttering those call sites
with PointerGetDatum/DatumGetArrayTypeP cruft, what I did was to rename the
existing functions to array_get_element/array_set_element, fix their
signatures, then reincarnate array_ref/array_set as backwards compatibility
wrappers.
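
A sketch of what such a wrapper looks like (the parameter list follows the
historical array_ref API; treat the details as approximate):

    /* Backwards-compatible wrapper over the Datum-based function. */
    Datum
    array_ref(ArrayType *array, int nSubscripts, int *indx,
              int arraytyplen, int elmlen, bool elmbyval, char elmalign,
              bool *isNull)
    {
        return array_get_element(PointerGetDatum(array), nSubscripts, indx,
                                 arraytyplen, elmlen, elmbyval, elmalign,
                                 isNull);
    }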

array_get_slice/array_set_slice have no such constituency in the core code,
and probably not in third-party code either, so I just changed their APIs.
2015-02-16 12:23:58 -05:00
Heikki Linnakangas c619c2351f Move pg_crc.c to src/common, and remove pg_crc_tables.h
To get CRC functionality in a client program, you now need to link with
libpgcommon instead of libpgport. The CRC code has nothing to do with
portability, so libpgcommon is a better home. (libpgcommon didn't exist
when pg_crc.c was originally moved to src/port.)

Remove the possibility to get CRC functionality by just #including
pg_crc_tables.h. I'm not aware of any extensions that actually did that and
couldn't simply link with libpgcommon.

This also moves the pg_crc.h header file from src/include/utils to
src/include/common, which will require changes to any external programs
that currently do #include "utils/pg_crc.h". That seems acceptable, as
include/common is clearly the right home for it now, and the change needed
to any such programs is trivial.
2015-02-09 11:17:56 +02:00
Fujii Masao 40bede5477 Move pg_lzcompress.c to src/common.
The PGLZ metadata represented by PGLZ_Header is removed, to make the
compression and decompression code independent of the backend-only
varlena facility. PGLZ_Header was used to store metadata about the data
being compressed, such as the raw length of the uncompressed record and
some varlena-related data; that tied PGLZ to backend-only code paths for
managing varlena structures and made it unmovable to src/common. The PGLZ
APIs are reworked at the same time to do only compression and
decompression of buffers, without the metadata layer, simplifying them
for more general use.

On-disk format is preserved as well, so there is no incompatibility with
previous major versions of PostgreSQL for TOAST entries.

Exposing the compression and decompression APIs of pglz makes them usable
by extensions and contrib modules. In particular, this commit is required
for the upcoming WAL compression feature, so that the WAL reader facility
can decompress WAL data using pglz_decompress.

Michael Paquier, reviewed by me.
2015-02-09 15:15:24 +09:00
Noah Misch 237795a7b4 Check DCH_MAX_ITEM_SIZ limits with <=, not <.
We reserve space for the full amount, not one less.  The affected checks
deal with localized month and day names.  Today's DCH_MAX_ITEM_SIZ value
would suffice for a 60-byte day name, while the longest known is the
49-byte mn_CN.utf-8 word for "Saturday."  Thus, the upshot of this
change is merely to avoid misdirecting future readers of the code; users
are not expected to see errors either way.
2015-02-06 23:39:52 -05:00
Andres Freund 2505ce0be0 Remove remnants of ImmediateInterruptOK handling.
Now that nothing sets ImmediateInterruptOK to true anymore, we can
remove all the supporting code.

Reviewed-By: Heikki Linnakangas
2015-02-03 23:25:47 +01:00
Andres Freund 6753333f55 Move deadlock and other interrupt handling in proc.c out of signal handlers.
Deadlock checking was performed inside signal handlers up to
now. While it's a remarkable feat to have made this work reliably,
it's quite complex to understand why that is the case. Partially it
worked due to the assumption that semaphores are signal safe - which
is not actually documented to be the case for sysv semaphores.

The reason we had to rely on performing this work inside signal
handlers is that semaphores aren't guaranteed to be interruptible by
signals on all platforms. But now that latches provide a somewhat
similar API, which actually has the guarantee of being interruptible,
we can avoid doing so.

Signalling between ProcSleep, ProcWakeup, ProcWaitForSignal and
ProcSendSignal is now done using latches. This increases the
likelihood of spurious wakeups. As spurious wakeups were already
possible and aren't likely to be frequent enough to be an actual
problem, this seems acceptable.

This change would allow for further simplification of the deadlock
checking, now that it doesn't have to run in a signal handler. But
even if I were motivated to do so right now, it would still be better
to do that separately. Such a cleanup shouldn't have to be reviewed at
the same time as the more fundamental changes in this commit.

There is one possible usability regression due to this commit. Namely
it is more likely than before that log_lock_waits messages are output
more than once.

Reviewed-By: Heikki Linnakangas
2015-02-03 23:24:38 +01:00
Andres Freund 6647248e37 Don't allow immediate interrupts during authentication anymore.
We used to handle authentication_timeout by setting
ImmediateInterruptOK to true during large parts of the authentication
phase of a new connection.  While that happens to work acceptably in
practice, it's not particularly nice and has ugly corner cases.

Previous commits converted the FE/BE communication to use latches and
implemented support for interrupt handling during both
send/recv. Building on top of that work we can get rid of
ImmediateInterruptOK during authentication, by immediately treating
timeouts during authentication as a reason to die. As die interrupts
are handled immediately during client communication that provides a
sensibly quick reaction time to authentication timeout.

Additionally, add a few CHECK_FOR_INTERRUPTS() calls to some of the more
complex authentication methods. More could be added, but this should
already provide reasonable coverage.

While this overall increases the maximum time until a timeout is reacted
to, it greatly reduces complexity and increases reliability. That seems
like an overall win. If the increase proves to be noticeable we can deal
with those cases by moving to nonblocking network code and adding
interrupt checking there.

Reviewed-By: Heikki Linnakangas
2015-02-03 22:54:48 +01:00
Tom Lane cec916f35b Remove unused "m" field in LSEG.
This field has been unreferenced since 1998, and does not appear in lseg
values stored on disk (since sizeof(lseg) is only 32 bytes according to
pg_type).  There was apparently some idea of maintaining it just in values
appearing in memory, but the bookkeeping required to make that work would
surely far outweigh the cost of recalculating the line's slope when needed.
Remove it to (a) simplify matters and (b) suppress some uninitialized-field
whining from Coverity.
2015-02-03 16:53:32 -05:00
Tom Lane 778d498c7d Fix breakage in GEODEBUG debug code.
LINE doesn't have an "m" field (anymore anyway).  Also fix unportable
assumption that %x can print the result of pointer subtraction.

In passing, improve single_decode() in minor ways:
* Remove unnecessary leading-whitespace skip (strtod does that already).
* Make GEODEBUG message more intelligible.
* Remove entirely-useless test to see if strtod returned a silly pointer.
* Don't bother computing trailing-whitespace skip unless caller wants
  an ending pointer.

This has been broken since 261c7d4b65.
Although it's only debug code, might as well fix the 9.4 branch too.
2015-02-03 15:20:45 -05:00
Robert Haas 5d2f957f3f Add new function BackgroundWorkerInitializeConnectionByOid.
Sometimes it's useful for a background worker to be able to initialize
its database connection by OID rather than by name, so provide a way
to do that.
2015-02-02 16:23:59 -05:00
Heikki Linnakangas 2b3a8b20c2 Be more careful to not lose sync in the FE/BE protocol.
If any error occurred while we were in the middle of reading a protocol
message from the client, we could lose sync, and incorrectly try to
interpret a part of another message as a new protocol message. That will
usually lead to an "invalid frontend message" error that terminates the
connection. However, this is a security issue because an attacker might
be able to deliberately cause an error, inject a Query message in what's
supposed to be just user data, and have the server execute it.

We were quite careful to not have CHECK_FOR_INTERRUPTS() calls or other
operations that could ereport(ERROR) in the middle of processing a message,
but a query cancel interrupt or statement timeout could nevertheless cause
it to happen. Also, the V2 fastpath and COPY handling were not so careful.
It's very difficult to recover in the V2 COPY protocol, so we will just
terminate the connection on error. In practice, that's what happened
previously anyway, as we lost protocol sync.

To fix, add a new variable in pqcomm.c, PqCommReadingMsg, that is set
whenever we're in the middle of reading a message. When it's set, we cannot
safely ERROR out and continue running, because we might've read only part
of a message. PqCommReadingMsg acts somewhat similarly to critical sections
in that if an error occurs while it's set, the error handler will force the
connection to be terminated, as if the error was FATAL. It's not
implemented by promoting ERROR to FATAL in elog.c, like ERROR is promoted
to PANIC in critical sections, because we want to be able to use
PG_TRY/CATCH to recover and regain protocol sync. pq_getmessage() takes
advantage of that to prevent an OOM error from terminating the connection.

To prevent unnecessary connection terminations, add a holdoff mechanism
similar to HOLD/RESUME_INTERRUPTS() that can be used to hold off query cancel
interrupts, but still allow die interrupts. The rules on which interrupts
are processed when are now a bit more complicated, so refactor
ProcessInterrupts() and the calls to it in signal handlers so that the
signal handlers always call it if ImmediateInterruptOK is set, and
ProcessInterrupts() can decide to not do anything if the other conditions
are not met.

Reported by Emil Lenngren. Patch reviewed by Noah Misch and Andres Freund.
Backpatch to all supported versions.

Security: CVE-2015-0244
2015-02-02 17:09:53 +02:00
Bruce Momjian 9241c84cbc to_char(): prevent writing beyond the allocated buffer
Previously very long localized month and weekday strings could
overflow the allocated buffers, causing a server crash.

Reported and patch reviewed by Noah Misch.  Backpatch to all
supported versions.

Security: CVE-2015-0241
2015-02-02 10:00:45 -05:00
Bruce Momjian 0150ab567b to_char(): prevent accesses beyond the allocated buffer
Previously very long field masks for floats could access memory
beyond the existing buffer allocated to hold the result.

Reported by Andres Freund and Peter Geoghegan.  Backpatch to all
supported versions.

Security: CVE-2015-0241
2015-02-02 10:00:44 -05:00
Tom Lane 451d280815 Fix jsonb Unicode escape processing, and in consequence disallow \u0000.
We've been trying to support \u0000 in JSON values since commit
78ed8e03c6, and have introduced increasingly worse hacks to try to
make it work, such as commit 0ad1a81632.  However, it fundamentally
can't work in the way envisioned, because the stored representation looks
the same as for \\u0000 which is not the same thing at all.  It's also
entirely bogus to output \u0000 when de-escaped output is called for.

The right way to do this would be to store an actual 0x00 byte, and then
throw error only if asked to produce de-escaped textual output.  However,
getting to that point seems likely to take considerable work and may well
never be practical in the 9.4.x series.

To preserve our options for better behavior while getting rid of the nasty
side-effects of 0ad1a81632, revert that commit in toto and instead
throw error if \u0000 is used in a context where it needs to be de-escaped.
(These are the same contexts where non-ASCII Unicode escapes throw error
if the database encoding isn't UTF8, so this behavior is by no means
without precedent.)

In passing, make both the \u0000 case and the non-ASCII Unicode case report
ERRCODE_UNTRANSLATABLE_CHARACTER / "unsupported Unicode escape sequence"
rather than claiming there's something wrong with the input syntax.

Back-patch to 9.4, where we have to do something because 0ad1a81632
broke things for many cases having nothing to do with \u0000.  9.3 also has
bogus behavior, but only for that specific escape value, so given the lack
of field complaints it seems better to leave 9.3 alone.
2015-01-30 14:44:56 -05:00
Robert Haas bd4e2fd97d Provide a way to suppress the "out of memory" error when allocating.
Using the new interface MemoryContextAllocExtended, callers can
specify MCXT_ALLOC_NO_OOM if they are prepared to handle a NULL
return value.
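
A minimal usage sketch (request_size is a placeholder):

    char   *buf;

    /* MCXT_ALLOC_NO_OOM: return NULL on failure instead of ereport(ERROR). */
    buf = MemoryContextAllocExtended(CurrentMemoryContext, request_size,
                                     MCXT_ALLOC_NO_OOM);
    if (buf == NULL)
    {
        /* recover gracefully: shrink the request, spill to disk, etc. */
    }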

Michael Paquier, reviewed and somewhat revised by me.
2015-01-30 12:56:48 -05:00
Tom Lane 3d660d33aa Fix assorted oversights in range selectivity estimation.
calc_rangesel() failed outright when comparing range variables to empty
constant ranges with < or >=, as a result of missing cases in a switch.
It also produced a bogus estimate for > comparison to an empty range.

On top of that, the >= and > cases were mislabeled throughout.  For
nonempty constant ranges, they managed to produce the right answers
anyway as a result of counterbalancing typos.

Also, default_range_selectivity() omitted cases for elem <@ range,
range &< range, and range &> range, so that rather dubious defaults
were applied for these operators.

In passing, rearrange the code in rangesel() so that the elem <@ range
case is handled in a less opaque fashion.

Report and patch by Emre Hasegeli, some additional work by me
2015-01-30 12:30:59 -05:00
Robert Haas 3d6d1b5855 Move out-of-memory error checks from aset.c to mcxt.c
This potentially allows us to add mcxt.c interfaces that do something
other than throw an error when memory cannot be allocated.  We'll
handle adding those interfaces in a separate commit.

Michael Paquier, with minor changes by me
2015-01-29 10:23:38 -05:00
Stephen Frost 804b6b6db4 Fix column-privilege leak in error-message paths
While building error messages to return to the user,
BuildIndexValueDescription, ExecBuildSlotValueDescription and
ri_ReportViolation would happily include the entire key or entire row in
the result returned to the user, even if the user didn't have access to
view all of the columns being included.

Instead, include only those columns which the user is providing or which
the user has select rights on.  If the user does not have any rights
to view the table or any of the columns involved then no detail is
provided and a NULL value is returned from BuildIndexValueDescription
and ExecBuildSlotValueDescription.  Note that, for key cases, the user
must have access to all of the columns for the key to be shown; a
partial key will not be returned.

Further, in master only, do not return any data for cases where row
security is enabled on the relation and row security should be applied
for the user.  This required a bit of refactoring and moving of things
around related to RLS; note the addition of utils/misc/rls.c.

Back-patch all the way, as column-level privileges are now in all
supported versions.

This has been assigned CVE-2014-8161, but since the issue and the patch
have already been publicized on pgsql-hackers, there's no point in trying
to hide this commit.
2015-01-28 12:31:30 -05:00
Tom Lane 1a2b2034d4 Fix NUMERIC field access macros to treat NaNs consistently.
Commit 145343534c arranged to store numeric
NaN values as short-header numerics, but the field access macros did not
get the memo: they thought only "SHORT" numerics have short headers.

Most of the time this makes no difference because we don't access the
weight or dscale of a NaN; but numeric_send does that.  As pointed out
by Andrew Gierth, this led to fetching uninitialized bytes.

AFAICS this could not have any worse consequences than that; in particular,
an unaligned stored numeric would have been detoasted by PG_GETARG_NUMERIC,
so that there's no risk of a fetch off the end of memory.  Still, the code
is wrong on its own terms, and it's not hard to foresee future changes that
might expose us to real risks.  So back-patch to all affected branches.
2015-01-27 12:06:31 -05:00
Robert Haas 168a809d4b Re-enable abbreviated keys on Windows.
Commit 1be4eb1b2d disabled this, but I
think the real problem here was fixed by commit
b181a91981 and commit
d060e07fa9.  So let's try re-enabling
it now and see what happens.
2015-01-26 14:28:14 -05:00
Tom Lane bf007a27ac Clean up assorted issues in ALTER SYSTEM coding.
Fix unsafe use of a non-volatile variable in PG_TRY/PG_CATCH in
AlterSystemSetConfigFile().  While at it, clean up a bundle of other
infelicities and outright bugs, including corner-case-incorrect linked list
manipulation, a poorly designed and worse documented parse-and-validate
function (which even included some randomly chosen hard-wired substitutes
for the specified elevel in one code path ... wtf?), direct use of open()
instead of fd.c's facilities, inadequate checking of write()'s return
value, and generally poorly written commentary.
2015-01-25 20:19:04 -05:00
Tom Lane fd496129d1 Clean up some mess in row-security patches.
Fix unsafe coding around PG_TRY in RelationBuildRowSecurity: can't change
a variable inside PG_TRY and then use it in PG_CATCH without marking it
"volatile".  In this case though it seems saner to avoid that by doing
a single assignment before entering the TRY block.
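
The rule being applied, in sketch form (do_something_with is a
hypothetical placeholder):

    /* Assign before PG_TRY and don't reassign inside it; then the
     * variable may safely be read in PG_CATCH without being volatile. */
    Relation    rel = heap_open(relid, AccessShareLock);

    PG_TRY();
    {
        do_something_with(rel);
    }
    PG_CATCH();
    {
        heap_close(rel, AccessShareLock);   /* rel is still valid here */
        PG_RE_THROW();
    }
    PG_END_TRY();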

I started out just intending to fix that, but the more I looked at the
row-security code the more distressed I got.  This patch also fixes
incorrect construction of the RowSecurityPolicy cache entries (there was
not sufficient care taken to copy pass-by-ref data into the cache memory
context) and a whole bunch of sloppiness around the definition and use of
pg_policy.polcmd.  You can't use nulls in that column because initdb will
mark it NOT NULL --- and I see no particular reason why a null entry would
be a good idea anyway, so changing initdb's behavior is not the right
answer.  The internal value of '\0' wouldn't be suitable in a "char" column
either, so after a bit of thought I settled on using '*' to represent ALL.
Chasing those changes down also revealed that somebody wasn't paying
attention to what the underlying values of ACL_UPDATE_CHR etc really were,
and there was a great deal of lackadaisicalness in the catalogs.sgml
documentation for pg_policy and pg_policies too.

This doesn't pretend to be a complete code review for the row-security
stuff, it just fixes the things that were in my face while dealing with
the bugs in RelationBuildRowSecurity.
2015-01-24 16:16:22 -05:00
Tom Lane 586dd5d6a5 Replace a bunch more uses of strncpy() with safer coding.
strncpy() has a well-deserved reputation for being unsafe, so make an
effort to get rid of nearly all occurrences in HEAD.

A large fraction of the remaining uses were passing length less than or
equal to the known strlen() of the source, in which case no null-padding
can occur and the behavior is equivalent to memcpy(), though doubtless
slower and certainly harder to reason about.  So just use memcpy() in
these cases.

In other cases, use either StrNCpy() or strlcpy() as appropriate (depending
on whether padding to the full length of the destination buffer seems
useful).
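
Sketches of the two substitutions (buffer and length names are
illustrative):

    char    namebuf[NAMEDATALEN];

    /* Copy length <= strlen(src): strncpy's null-padding never happens,
     * so plain memcpy is equivalent and clearer. */
    memcpy(namebuf, src, len);

    /* Truncating copy that must stay NUL-terminated: strlcpy guarantees
     * the trailing '\0', unlike strncpy. */
    strlcpy(namebuf, src, sizeof(namebuf));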

I left a few strncpy() calls alone in the src/timezone/ code, to keep it
in sync with upstream (the IANA tzcode distribution).  There are also a
few such calls in ecpg that could possibly do with more analysis.

AFAICT, none of these changes are more than cosmetic, except for the four
occurrences in fe-secure-openssl.c, which are in fact buggy: an overlength
source leads to a non-null-terminated destination buffer and ensuing
misbehavior.  These don't seem like security issues, first because no stack
clobber is possible and second because if your values of sslcert etc are
coming from untrusted sources then you've got problems way worse than this.
Still, it's undesirable to have unpredictable behavior for overlength
inputs, so back-patch those four changes to all active branches.
2015-01-24 13:05:42 -05:00
Robert Haas d1747571b6 Fix typos, update README.
Peter Geoghegan
2015-01-23 15:06:53 -05:00
Robert Haas 5cefbf5a6c Don't use abbreviated keys for the final merge pass.
When we write tuples out to disk and read them back in, the abbreviated
keys become non-abbreviated, because the readtup routines don't know
anything about abbreviation.  But without this fix, the rest of the
code still thinks the abbreviation-aware comparator should be used,
so chaos ensues.

Report by Andrew Gierth; patch by Peter Geoghegan.
2015-01-23 11:58:31 -05:00
Robert Haas d060e07fa9 Repair brain fade in commit b181a91981.
The split between which things need to happen in the C-locale case and
which needed to happen in the locale-aware case was a few bricks short
of a load.  Try to fix that.
2015-01-22 12:51:20 -05:00
Robert Haas b181a91981 More fixes for abbreviated keys infrastructure.
First, when LC_COLLATE = C, bttext_abbrev_convert should use memcpy()
rather than strxfrm() to construct the abbreviated key, because the
authoritative comparator uses memcpy().  If we do anything else here,
we might get inconsistent answers, and the buildfarm says this risk
is not theoretical.  It should be faster this way, too.

Second, while looking at bttext_abbrev_convert, convert a needless use
of goto, which was emulating a loop, into an actual loop.

Both of the above problems date to the original commit of abbreviated
keys, commit 4ea51cdfe8.

Third, fix a bogus assignment to tss->locale before tss is set up.
That's a new goof in commit b529b65d1b.
2015-01-22 11:58:58 -05:00
Robert Haas b529b65d1b Heavily refactor btsortsupport_worker.
Prior to commit 4ea51cdfe8, this function
only had one job, which was to decide whether we could avoid trampolining
through the fmgr layer when performing sort comparisons.  As of that
commit, it has a second job, which is to decide whether we can use
abbreviated keys.  Unfortunately, those two tasks are somewhat intertwined
in the existing coding, which is likely why neither Peter Geoghegan nor
I noticed prior to commit that this calls pg_newlocale_from_collation() in
cases where it didn't previously.  The buildfarm noticed, though.

To fix, rewrite the logic so that the decision as to which comparator to
use is more cleanly separated from the decision about abbreviation.
2015-01-22 10:54:16 -05:00
Robert Haas 1be4eb1b2d Disable abbreviated keys on Windows.
Most of the Windows buildfarm members (bowerbird, hamerkop, currawong,
jacana, brolga) are unhappy with yesterday's abbreviated keys patch,
although there are some (narwhal, frogmouth) that seem OK with it.
Since there's no obvious pattern to explain why some are working and
others are failing, just disable this across-the-board on Windows for
now.  This is a bit unfortunate since the optimization will be a big
win in some cases, but we can't leave the buildfarm broken.
2015-01-20 20:32:21 -05:00
Robert Haas 4ea51cdfe8 Use abbreviated keys for faster sorting of text datums.
This commit extends the SortSupport infrastructure to allow operator
classes the option to provide abbreviated representations of Datums;
in the case of text, we abbreviate by taking the first few characters
of the strxfrm() blob.  If the abbreviated comparison is insufficient
to resolve the comparison, we fall back on the normal comparator.
This can be much faster than the old way of doing sorting if the
first few bytes of the string are usually sufficient to resolve the
comparison.

There is the potential for a performance regression if all of the
strings to be sorted are identical for the first 8+ characters and
differ only in later positions; therefore, the SortSupport machinery
now provides an infrastructure to abort the use of abbreviation if
it appears that abbreviation is producing comparatively few distinct
keys.  HyperLogLog, a streaming cardinality estimator, is included in
this commit and used to make that determination for text.

Peter Geoghegan, reviewed by me.
2015-01-19 15:28:27 -05:00
Heikki Linnakangas 9402869160 Advance backend's advertised xmin more aggressively.
Currently, a backend will reset its PGXACT->xmin value when it doesn't
have any registered snapshots left. That covered the common case that a
transaction in read committed mode runs several queries, one after each
other, as there would be no snapshots active between those queries.
However, if you hold cursors open across those queries, we didn't get a
chance to reset xmin.

To make that better, keep all the registered snapshots in a pairing heap,
ordered by xmin so that it's always quick to find the snapshot with the
smallest xmin. That allows us to advance PGXACT->xmin whenever the oldest
snapshot is deregistered, even if there are others still active.

Per discussion originally started by Jeff Davis back in 2009 and more
recently by Robert Haas.
2015-01-17 01:15:23 +02:00
Tom Lane a5cd70dcbc Improve performance of EXPLAIN with large range tables.
As of 9.3, ruleutils.c goes to some lengths to ensure that table and column
aliases used in its output are unique.  Of course this takes more time than
was required before, which in itself isn't fatal.  However, EXPLAIN was set
up so that recalculation of the unique aliases was repeated for each
subexpression printed in a plan.  That results in O(N^2) time and memory
consumption for large plan trees, which did not happen in older branches.

Fortunately, the expensive work is the same across a whole plan tree,
so there is no need to repeat it; we can do most of the initialization
just once per query and re-use it for each subexpression.  This buys
back most (not all) of the performance loss since 9.2.

We need an extra ExplainState field to hold the precalculated deparse
context.  That's no problem in HEAD, but in the back branches, expanding
sizeof(ExplainState) seems risky because third-party extensions might
have local variables of that struct type.  So, in 9.4 and 9.3, introduce
an auxiliary struct to keep sizeof(ExplainState) the same.  We should
refactor the APIs to avoid such local variables in future, but that's
material for a separate HEAD-only commit.

Per gripe from Alexey Bashtanov.  Back-patch to 9.3 where the issue
was introduced.
2015-01-15 13:18:12 -05:00
Andres Freund 59f71a0d0b Add a default local latch for use in signal handlers.
To do so, move InitializeLatchSupport() into the new common process
initialization functions, and add a new global variable MyLatch.

MyLatch is usable as soon as InitPostmasterChild() has been called
(i.e. very early during startup). Initially it points to a process
local latch that exists in all processes. InitProcess/InitAuxiliaryProcess
then replaces that local latch with PGPROC->procLatch. During shutdown
the reverse happens.

This is primarily advantageous for two reasons: For one it simplifies
dealing with the shared process latch, especially in signal handlers,
because instead of having to check for MyProc, MyLatch can be used
unconditionally. For another, a later patch that makes FE/BE
communication use latches can now rely on the existence of a latch,
even before having gone through InitProcess.
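
A sketch of the simplification this buys in a signal handler:

    static void
    sigusr1_handler(SIGNAL_ARGS)
    {
        int     save_errno = errno;

        /* MyLatch always exists after InitPostmasterChild(), so no
         * "if (MyProc != NULL)" guard is needed here. */
        SetLatch(MyLatch);

        errno = save_errno;
    }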

Discussion: 20140927191243.GD5423@alap3.anarazel.de
2015-01-14 18:45:22 +01:00
Andres Freund 31c453165b Commonalize process startup code.
Move common code, that was duplicated in every postmaster child/every
standalone process, into two functions in miscinit.c.  Not only does
that already result in a fair amount of net code reduction but it also
makes it much easier to remove more duplication in the future. The
prime motivation wasn't code deduplication though, but easier addition
of new common code.
2015-01-14 00:33:14 +01:00
Tom Lane 1f9bf05e53 Use correct text domain for errcontext() appearing within ereport().
The mechanism added in commit dbdf9679d7
for associating the correct translation domain with errcontext strings
potentially fails in cases where errcontext() is used within an ereport()
macro.  Such usage was not originally envisioned for errcontext(), but we
do have a few places that do it.  In this situation, the intended comma
expression becomes just a couple of arguments to errfinish(), which the
compiler might choose to evaluate right-to-left.

Fortunately, in such cases the textdomain for the errcontext string must
be the same as for the surrounding ereport.  So we can fix this by letting
errstart initialize context_domain along with domain; then it will have
the correct value no matter which order the calls occur in.  (Note that
error stack callback functions are not invoked until errfinish, so normal
usage of errcontext won't affect what happens for errcontext calls within
the ereport macro.)

In passing, make sure that errcontext calls within the main backend set
context_domain to something non-NULL.  This isn't a live bug because
NULL would select the current textdomain() setting which should be the
right thing anyway --- but it seems better to handle this completely
consistently with the regular domain field.

Per report from Dmitry Voronin.  Backpatch to 9.3; before that, there
wasn't any attempt to ensure that errcontext strings were translated
in an appropriate domain.
2015-01-12 12:40:29 -05:00
Peter Eisentraut 79af9a1d26 Fix namespace handling in xpath function
Previously, the xml value resulting from an xpath query would not have
namespace declarations if the namespace declarations were attached to
an ancestor element in the input xml value.  That means the output value
was not correct XML.  Fix that by running the result value through
xmlCopyNode(), which produces the correct namespace declarations.

Author: Ali Akbar <the.apaan@gmail.com>
2015-01-06 23:06:13 -05:00
Andres Freund 3fabed0705 Correctly handle relcache invalidation corner case during logical decoding.
When using a historic snapshot for logical decoding it can validly
happen that a relation that's in the relcache isn't visible to that
historic snapshot.  E.g. if a newly created relation is referenced in
the query that uses the SQL interface for logical decoding and a
sinval reset occurs.

The earlier commit that fixed the error handling for that corner case
already improves the situation, as an ERROR is better than hitting an
assertion... But it's obviously not good enough.  So additionally
allow that case without an error if a historic snapshot is set up.
That won't allow an invalid entry to stay in the cache, because a) it's
already marked invalid and will thus be rebuilt during the next access,
and b) the syscaches will be reset at the end of decoding.

There might be prettier solutions to handle this case, but all that we
could think of so far end up being much more complex than this quite
simple fix.

This fixes the assertion failures reported by the buildfarm (markhor,
tick, leech) after the introduction of new regression tests in
89fd41b390. The failures there weren't actually caused directly by
CLOBBER_CACHE_ALWAYS, but the extraordinarily long runtimes due to it
led to sinval resets triggering the behaviour.

Discussion: 22459.1418656530@sss.pgh.pa.us

Backpatch to 9.4 where logical decoding was introduced.
2015-01-07 00:19:37 +01:00
Andres Freund 31912d01d8 Improve relcache invalidation handling of currently invisible relations.
The corner case where a relcache invalidation tried to rebuild the
entry for a referenced relation but couldn't find it in the catalog
wasn't correct.

The code tried to RelationCacheDelete/RelationDestroyRelation the
entry. That didn't work when assertions are enabled because the latter
contains an assertion ensuring the refcount is zero. It's also more
generally a bad idea, because by virtue of being referenced somebody
might actually look at the entry, which is possible if the error is
trapped and handled via a subtransaction abort.

Instead just error out, without deleting the entry. As the entry is
marked invalid, the worst that can happen is that the invalid (and at
some point unused) entry lingers in the relcache.

Discussion: 22459.1418656530@sss.pgh.pa.us

There should be no way to hit this case before 9.4, where logical decoding
introduced a bug that can hit this. But since the code for handling
the corner case is there, it should do something halfway sane, so
backpatch all the way back.  The logical decoding bug will be
handled in a separate commit.
2015-01-07 00:18:00 +01:00
Bruce Momjian 4baaf863ec Update copyright for 2015
Backpatch certain files through 9.0
2015-01-06 11:43:47 -05:00
Fujii Masao 9f1d7313aa Fix typo in comment.
Report by Amit Kapila
2015-01-05 16:35:26 +09:00
Alvaro Herrera a676201490 Add pg_identify_object_as_address
This function returns object type and objname/objargs arrays, which can
be passed to pg_get_object_address.  This is especially useful because
the textual representation can be copied to a remote server in order to
obtain the corresponding OID-based address.  In essence, this function
is the inverse of recently added pg_get_object_address().

Catalog version bumped due to the addition of the new function.

Also add docs to pg_get_object_address.
2014-12-30 15:41:50 -03:00
Tom Lane 966115c305 Temporarily revert "Move pg_lzcompress.c to src/common."
This reverts commit 60838df922.
That change needs a bit more thought to be workable.  In view of
the potentially machine-dependent stuff that went in today,
we need all of the buildfarm to be testing those other changes.
2014-12-25 13:22:55 -05:00
Andres Freund 570bd2b3fd Add capability to suppress CONTEXT: messages to elog machinery.
Hiding context messages usually is not a good idea - except for rather
verbose debugging/development utensils like LOG_DEBUG. There the
amount of repeated context messages just bloats the log without adding
information.
2014-12-25 17:24:30 +01:00
Fujii Masao 60838df922 Move pg_lzcompress.c to src/common.
Exposing compression and decompression APIs of pglz makes possible its
use by extensions and contrib modules. pglz_decompress contained a call
to elog to emit an error message in case of corrupted data. This function
is changed to return a status code to let its callers return an error instead.

This commit is required for upcoming WAL compression feature so that
the WAL reader facility can decompress the WAL data by using pglz_decompress.

Michael Paquier
2014-12-25 20:46:14 +09:00
Alvaro Herrera a609d96778 Revert "Use a bitmask to represent role attributes"
This reverts commit 1826987a46.

The overall design was deemed unacceptable, in discussion following the
previous commit message; we might find some parts of it still
salvageable, but I don't want to be on the hook for fixing it, so let's
wait until we have a new patch.
2014-12-23 15:35:49 -03:00
Alvaro Herrera 1826987a46 Use a bitmask to represent role attributes
The previous representation using a boolean column for each attribute
would not scale as well as we want to add further attributes.

Extra auxiliary functions are added to go along with this change, to
make up for the lost convenience of access of the old representation.

Catalog version bumped due to change in catalogs and the new functions.

Author: Adam Brightwell, minor tweaks by Álvaro
Reviewed by: Stephen Frost, Andres Freund, Álvaro Herrera
2014-12-23 10:22:09 -03:00
Peter Eisentraut 584e35d17c Change local_preload_libraries to PGC_USERSET
This allows it to be used with ALTER ROLE SET.

Although the old setting of PGC_BACKEND prevented changes after session
start, after discussion it was more useful to allow ALTER ROLE SET
instead and just document that changes during a session have no effect.
This is similar to how session_preload_libraries works already.

An alternative would be to change things to allow PGC_BACKEND and
PGC_SU_BACKEND settings to be changed by ALTER ROLE SET.  But that might
need further research (e.g., log_connections would probably not work).

based on patch by Kyotaro Horiguchi
2014-12-22 23:05:46 -05:00
Heikki Linnakangas 955557ddcc Move rbtree.c from src/backend/utils/misc to src/backend/lib.
We have other general-purpose data structures in src/backend/lib, so it
seems like a better home for the red-black tree as well.
2014-12-22 17:52:08 +02:00
Tom Lane 4a14f13a0a Improve hash_create's API for selecting simple-binary-key hash functions.
Previously, if you wanted anything besides C-string hash keys, you had to
specify a custom hashing function to hash_create().  Nearly all such
callers were specifying tag_hash or oid_hash; which is tedious, and rather
error-prone, since a caller could easily miss the opportunity to optimize
by using hash_uint32 when appropriate.  Replace this with a design whereby
callers using simple binary-data keys just specify HASH_BLOBS and don't
need to mess with specific support functions.  hash_create() itself will
take care of optimizing when the key size is four bytes.
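
A sketch of the simplified call for a 4-byte binary key (MyEntry is a
hypothetical entry struct):

    HASHCTL     ctl;
    HTAB       *htab;

    MemSet(&ctl, 0, sizeof(ctl));
    ctl.keysize = sizeof(Oid);
    ctl.entrysize = sizeof(MyEntry);

    /* HASH_BLOBS replaces hand-picking tag_hash/oid_hash/hash_uint32;
     * hash_create() chooses the optimized function for 4-byte keys. */
    htab = hash_create("my oid table", 128, &ctl, HASH_ELEM | HASH_BLOBS);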

This nets out saving a few hundred bytes of code space, and offers
a measurable performance improvement in tidbitmap.c (which was not
exploiting the opportunity to use hash_uint32 for its 4-byte keys).
There might be some wins elsewhere too, I didn't analyze closely.

In future we could look into offering a similar optimized hashing function
for 8-byte keys.  Under this design that could be done in a centralized
and machine-independent fashion, whereas getting it right for keys of
platform-dependent sizes would've been notationally painful before.

For the moment, the old way still works fine, so as not to break source
code compatibility for loadable modules.  Eventually we might want to
remove tag_hash and friends from the exported API altogether, since there's
no real need for them to be explicitly referenced from outside dynahash.c.

Teodor Sigaev and Tom Lane
2014-12-18 13:36:36 -05:00
Fujii Masao 19e065c049 Ensure variables live across calls in generate_series(numeric, numeric).
In generate_series_step_numeric(), the variables "start_num"
and "stop_num" could potentially be freed before the next call,
so they should be kept in a location that survives across calls.
Previously they were not, which could cause incorrect behavior of
generate_series(numeric, numeric). This commit fixes the problem
by copying them into multi_call_memory_ctx.
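
The general pattern being applied, as a sketch of a set-returning
function's first call:

    if (SRF_IS_FIRSTCALL())
    {
        FuncCallContext *funcctx = SRF_FIRSTCALL_INIT();
        MemoryContext   oldcontext;

        /* State that must survive until later calls has to live in
         * multi_call_memory_ctx, not in the per-call context: */
        oldcontext = MemoryContextSwitchTo(funcctx->multi_call_memory_ctx);
        /* ... copy the start/stop numerics here ... */
        MemoryContextSwitchTo(oldcontext);
    }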

Andrew Gierth
2014-12-18 21:13:52 +09:00
Andrew Dunstan c8315930e6 Fix some jsonb issues found by Coverity in recent commits.
Mostly these issues concern the non-use of function results. These
have been changed to use (void) pushJsonbValue(...) instead of assigning
the result to a variable that gets overwritten before it is used.

There is a larger issue that we should possibly examine the API for
pushJsonbValue(), so that instead of returning a value it modifies a
state argument. The current idiom is rather clumsy. However, changing
that requires quite a bit more work, so this change should do for the
moment.
2014-12-16 10:32:06 -05:00
Tom Lane 9418820efb Fix point <-> polygon code for zero-distance case.
"PG_RETURN_FLOAT8(x)" is not "return x", except perhaps by accident
on some platforms.
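
The point in sketch form: a V1 function returns a Datum, so a float8
result must go through the conversion macro (my_distance is hypothetical):

    Datum
    my_distance(PG_FUNCTION_ARGS)
    {
        float8  result = 0.0;

        /* WRONG:  return result;   -- a double is not a Datum */
        PG_RETURN_FLOAT8(result);   /* i.e., Float8GetDatum(result) */
    }
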
2014-12-15 14:04:27 -05:00
Heikki Linnakangas 4520ba6769 Add point <-> polygon distance operator.
Alexander Korotkov, reviewed by Emre Hasegeli.
2014-12-15 17:06:21 +02:00
Tom Lane b0f479113a Repair corner-case bug in array version of percentile_cont().
The code for advancing through the input rows overlooked the case that we
might already be past the first row of the row pair now being considered,
in case the previous percentile also fell between the same two input rows.

Report and patch by Andrew Gierth; logic rewritten a bit for clarity by me.
2014-12-13 11:49:41 -05:00
Andrew Dunstan 7e354ab9fe Add several generator functions for jsonb that exist for json.
The functions are:
    to_jsonb()
    jsonb_object()
    jsonb_build_object()
    jsonb_build_array()
    jsonb_agg()
    jsonb_object_agg()

Also along the way some better logic is implemented in
json_categorize_type() to match that in the newly implemented
jsonb_categorize_type().

Andrew Dunstan, reviewed by Pavel Stehule and Alvaro Herrera.
2014-12-12 15:31:14 -05:00
Andrew Dunstan 237a882443 Add json_strip_nulls and jsonb_strip_nulls functions.
The functions remove object fields, including in nested objects, that
have null as a value. In certain cases this can lead to considerably
smaller datums, with no loss of semantic information.

Andrew Dunstan, reviewed by Pavel Stehule.
2014-12-12 09:00:43 -05:00
Tom Lane 24688f4e5a Fix minor thinko in convertToJsonb().
The amount of space to reserve for the value's varlena header is
VARHDRSZ, not sizeof(VARHDRSZ).  The latter coding accidentally
failed to fail because of the way the VARHDRSZ macro is currently
defined; but if we ever change it to return size_t (as one might
reasonably expect it to do), convertToJsonb() would have failed.

Spotted by Mark Dilger.
2014-12-10 19:06:27 -05:00
Simon Riggs 618c9430a8 Event Trigger for table_rewrite
Generate a table_rewrite event when ALTER TABLE
attempts to rewrite a table. Provide helper
functions to identify the table and the reason.

Intended use case is to help assess or to react
to schema changes that might hold exclusive locks
for long periods.

Dimitri Fontaine, triggering an edit by Simon Riggs

Reviewed in detail by Michael Paquier
2014-12-08 00:55:28 +09:00
Heikki Linnakangas c846e67c46 Print wal_log_hints in the rm_desc routine of a parameter-change record.
It was an oversight in the original commit.

Also note in the sample config file that changing wal_log_hints requires a
restart.

Michael Paquier. Backpatch to 9.4, where wal_log_hints was added.
2014-12-05 12:00:48 +02:00
Alvaro Herrera 73c986adde Keep track of transaction commit timestamps
Transactions can now set their commit timestamp directly as they commit,
or an external transaction commit timestamp can be fed from an outside
system using the new function TransactionTreeSetCommitTsData().  This
data is crash-safe, and truncated at Xid freeze point, same as pg_clog.

This module is disabled by default because it causes a performance hit,
but can be enabled in postgresql.conf requiring only a server restart.

A new test in src/test/modules is included.

Catalog version bumped due to the new subdirectory within PGDATA and a
couple of new SQL functions.

Authors: Álvaro Herrera and Petr Jelínek

Reviewed to varying degrees by Michael Paquier, Andres Freund, Robert
Haas, Amit Kapila, Fujii Masao, Jaime Casanova, Simon Riggs, Steven
Singer, Peter Eisentraut
2014-12-03 11:53:02 -03:00
Tom Lane 475aedd1ef Improve error messages for malformed array input strings.
Make the error messages issued by array_in() uniformly follow the style
	ERROR: malformed array literal: "actual input string"
	DETAIL: specific complaint here
and rewrite many of the specific complaints to be clearer.

The immediate motivation for doing this is a complaint from Josh Berkus
that json_to_record() produced an unintelligible error message when
dealing with an array item, because it tries to feed the JSON-format
array value to array_in().  Really it ought to be smart enough to
perform JSON-to-Postgres array conversion, but that's a future feature
not a bug fix.  In the meantime, this change is something we agreed
we could back-patch into 9.4, and it should help de-confuse things a bit.
2014-12-02 18:23:27 -05:00
Tom Lane 75ef435218 Fix JSON aggregates to work properly when final function is re-executed.
Davide S. reported that json_agg() sometimes produced multiple trailing
right brackets.  This turns out to be because json_agg_finalfn() attaches
the final right bracket, and was doing so by modifying the aggregate state
in-place.  That's verboten, though unfortunately it seems there's no way
for nodeAgg.c to check for such mistakes.

Fix that back to 9.3 where the broken code was introduced.  In 9.4 and
HEAD, likewise fix json_object_agg(), which had copied the erroneous logic.
Make some cosmetic cleanups as well.
2014-12-02 15:02:37 -05:00
Tom Lane 0927bf8060 Guard against bad "dscale" values in numeric_recv().
We were not checking to see if the supplied dscale was valid for the given
digit array when receiving binary-format numeric values.  While dscale can
validly be more than the number of nonzero fractional digits, it shouldn't
be less; that case causes fractional digits to be hidden on display even
though they're there and participate in arithmetic.

Bug #12053 from Tommaso Sala indicates that there's at least one broken
client library out there that sometimes supplies an incorrect dscale value,
leading to strange behavior.  This suggests that simply throwing an error
might not be the best response; it would lead to failures in applications
that might seem to be working fine today.  What seems the least risky fix
is to truncate away any digits that would be hidden by dscale.  This
preserves the existing behavior in terms of what will be printed for the
transmitted value, while preventing subsequent arithmetic from producing
results inconsistent with that.

In passing, throw a specific error for the case of dscale being outside
the range that will fit into a numeric's header.  Before you got "value
overflows numeric format", which is a bit misleading.

Back-patch to all supported branches.
2014-12-01 15:25:02 -05:00
Andrew Dunstan e09996ff8d Fix hstore_to_json_loose's detection of valid JSON number values.
We expose a function IsValidJsonNumber that internally calls the lexer
for json numbers. That allows us to use the same test everywhere,
instead of inventing a broken test for hstore conversions. The new
function is also used in datum_to_json, replacing the code that is now
moved to the new function.
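
A usage sketch along the lines of the hstore fix (buf and str are
illustrative):

    /* Emit the value as a bare JSON number only if the lexer agrees
     * it is one; otherwise emit a properly escaped JSON string. */
    if (IsValidJsonNumber(str, strlen(str)))
        appendStringInfoString(&buf, str);
    else
        escape_json(&buf, str);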

Backpatch to 9.3 where hstore_to_json_loose was introduced.
2014-12-01 11:28:45 -05:00
Tom Lane e384ed6cde Improve typcache: cache negative lookup results, add invalidation logic.
Previously, if the typcache had for example tried and failed to find a hash
opclass for a given data type, it would nonetheless repeat the unsuccessful
catalog lookup each time it was asked again.  This can lead to a
significant amount of useless bufmgr traffic, as in a recent report from
Scott Marlowe.  Like the catalog caches, typcache should be able to cache
negative results.  This patch arranges that by making use of separate flag
bits to remember whether a particular item has been looked up, rather than
treating a zero OID as an indicator that no lookup has been done.

Also, install a credible invalidation mechanism, namely watching for inval
events in pg_opclass.  The sole advantage of the lack of negative caching
was that the code would cope if operators or opclasses got added for a type
mid-session; to preserve that behavior we have to be able to invalidate
stale lookup results.  Updates in pg_opclass should be pretty rare in
production systems, so it seems sufficient to just invalidate all the
dependent data whenever one happens.

Adding proper invalidation also means that this code will now react sanely
if an opclass is dropped mid-session.  Arguably, that's a back-patchable
bug fix, but in view of the lack of complaints from the field I'll refrain
from back-patching.  (Probably, in most cases where an opclass is dropped,
the data type itself is dropped soon after, so that this misfeasance has
no bad consequences.)
2014-11-28 12:19:14 -05:00
Stephen Frost 143b39c185 Rename pg_rowsecurity -> pg_policy and other fixes
As pointed out by Robert, we should really have named pg_rowsecurity
pg_policy, as the objects stored in that catalog are policies.  This
patch fixes that and updates the column names to start with 'pol' to
match the new catalog name.

The security consideration for COPY with row level security, also
pointed out by Robert, has also been addressed by remembering and
re-checking the OID of the relation initially referenced during COPY
processing, to make sure it hasn't changed under us by the time we
finish planning out the query which has been built.

Robert and Alvaro also commented on missing OCLASS and OBJECT entries
for POLICY (formerly ROWSECURITY or POLICY, depending) in various
places.  This patch fixes that too, which also happens to add the
ability to COMMENT on policies.

In passing, attempt to improve the consistency of messages, comments,
and documentation as well.  This removes various incarnations of
'row-security', 'row-level security', 'Row-security', etc, in favor
of 'policy', 'row level security' or 'row_security' as appropriate.

Happy Thanksgiving!
2014-11-27 01:15:57 -05:00
Robert Haas a6c84c770e Attempt to suppress uninitialized variable warning.
Report by Heikki Linnakangas.
2014-11-25 20:07:07 -05:00
Tom Lane d934a05234 Fix uninitialized-variable warning.
In passing, add an Assert defending the presumption that bytes_left
is positive to start with.  (I'm not exactly convinced that using an
unsigned type was such a bright thing here, but let's at least do
this much.)
2014-11-25 15:17:16 -05:00
Tom Lane bac27394a1 Support arrays as input to array_agg() and ARRAY(SELECT ...).
These cases formerly failed with errors about "could not find array type
for data type".  Now they yield arrays of the same element type and one
higher dimension.

The implementation involves creating functions with API similar to the
existing accumArrayResult() family.  I (tgl) also extended the base family
by adding an initArrayResult() function, which allows callers to avoid
special-casing the zero-inputs case if they just want an empty array as
result.  (Not all do, so the previous calling convention remains valid.)
This allowed simplifying some existing code in xml.c and plperl.c.

Ali Akbar, reviewed by Pavel Stehule, significantly modified by me
2014-11-25 12:21:28 -05:00
Robert Haas f5d9698a84 Add infrastructure to save and restore GUC values.
This is further infrastructure for parallelism.

Amit Khandekar, Noah Misch, Robert Haas
2014-11-24 16:37:56 -05:00
Tom Lane 447770404c Rearrange CustomScan API.
Make it work more like FDW plans do: instead of assuming that there are
expressions in a CustomScan plan node that the core code doesn't know
about, insist that all subexpressions that need planner attention be in
a "custom_exprs" list in the Plan representation.  (Of course, the
custom plugin can break the list apart again at executor initialization.)
This lets us revert the parts of the patch that exposed setrefs.c and
subselect.c processing to the outside world.

Also revert the GetSpecialCustomVar stuff in ruleutils.c; that concept
may work in future, but it's far from fully baked right now.
2014-11-21 18:21:46 -05:00
Heikki Linnakangas 2c03216d83 Revamp the WAL record format.
Each WAL record now carries information about the modified relation and
block(s) in a standardized format. That makes it easier to write tools that
need that information, like pg_rewind, prefetching the blocks to speed up
recovery, etc.

There's a whole new API for building WAL records, replacing the XLogRecData
chains used previously. The new API consists of XLogRegister* functions,
which are called for each buffer and chunk of data that is added to the
record. The new API also gives more control over when a full-page image is
written, by passing flags to the XLogRegisterBuffer function.

This also simplifies the XLogReadBufferForRedo() calls. The function can dig
the relation and block number from the WAL record, so they no longer need to
be passed as arguments.

For the convenience of redo routines, XLogReader now dissects each WAL record
after reading it, copying the main data part and the per-block data into
MAXALIGNed buffers. The data chunks are not aligned within the WAL record,
but the redo routines can assume that the pointers returned by XLogRecGet*
functions are. Redo routines are now passed the XLogReaderState, which
contains the record in the already-dissected format, instead of the plain
XLogRecord.

The new record format also makes the fixed size XLogRecord header smaller,
by removing the xl_len field. The length of the "main data" portion is now
stored at the end of the WAL record, and there's a separate header after
XLogRecord for it. The alignment padding at the end of XLogRecord is also
removed. This compensates for the fact that the new format would otherwise
be more bulky than the old format.

Reviewed by Andres Freund, Amit Kapila, Michael Paquier, Alvaro Herrera,
Fujii Masao.
2014-11-20 18:46:41 +02:00
Alvaro Herrera 0f9692b40d Fix relpersistence setting in reindex_index
Buildfarm members with CLOBBER_CACHE_ALWAYS advised us that commit
85b506bbfc was mistaken in setting the relpersistence value of the
index directly in the relcache entry, within reindex_index.  The reason
for the failure is that an invalidation message that comes after mucking
with the relcache entry directly, but before writing it to the catalogs,
would cause the entry to become rebuilt in place from catalogs with the
old contents, losing the update.

Fix by passing the correct persistence value to
RelationSetNewRelfilenode instead; this routine also writes the updated
tuple to pg_class, avoiding the problem.  Suggested by Tom Lane.
2014-11-17 11:23:35 -03:00
Alvaro Herrera 85b506bbfc Get rid of SET LOGGED indexes persistence kludge
This removes ATChangeIndexesPersistence() introduced by f41872d0c1
which was too ugly to live for long.  Instead, the correct persistence
marking is passed all the way down to reindex_index, so that the
transient relation built to contain the index relfilenode can
get marked correctly right from the start.

Author: Fabrízio de Royes Mello
Review and editorialization by Michael Paquier and Álvaro Herrera
2014-11-15 01:19:49 -03:00
Stephen Frost 80eacaa3cd Clean up includes from RLS patch
The initial patch for RLS mistakenly included headers associated with
the executor and planner bits in rewrite/rowsecurity.h.  Per policy and
general good sense, executor headers should not be included in planner
headers or vice versa.

The include of execnodes.h was a mistaken holdover from previous
versions, while the include of relation.h was used for Relation's
definition, which should have been coming from utils/relcache.h.  This
patch cleans these issues up, adds comments to the RowSecurityPolicy
struct and the RowSecurityConfigType enum, and changes Relation->rsdesc
to Relation->rd_rsdesc to follow Relation field naming convention.

Additionally, utils/rel.h was including rewrite/rowsecurity.h, which
wasn't a great idea since that was pulling in things not really needed
in utils/rel.h (which gets included in quite a few places).  Instead,
use 'struct RowSecurityDesc' for the rd_rsdesc field and add comments
explaining why.

Lastly, add an include into access/nbtree/nbtsort.c for
utils/sortsupport.h, which was evidently missed due to the above mess.

Pointed out by Tom in 16970.1415838651@sss.pgh.pa.us; note that the
concerns regarding a similar situation in the custom-path commit still
need to be addressed.
2014-11-14 17:05:17 -05:00
Robert Haas c0828b78e9 Move the guts of our Levenshtein implementation into core.
The hope is that we can use this to produce better diagnostics in
some cases.

Peter Geoghegan, reviewed by Michael Paquier, with some further
changes by me.
2014-11-13 12:33:26 -05:00
Fujii Masao c291503b1c Rename pending_list_cleanup_size to gin_pending_list_limit.
Since this parameter applies only to GIN indexes, it's clearer to
include "gin" in the parameter name.
2014-11-13 12:14:48 +09:00
Tom Lane 677708032c Explicitly support the case that a plancache's raw_parse_tree is NULL.
This only happens if a client issues a Parse message with an empty query
string, which is a bit odd; but since it is explicitly called out as legal
by our FE/BE protocol spec, we'd probably better continue to allow it.

Fix by adding tests everywhere that the raw_parse_tree field is passed to
functions that don't or shouldn't accept NULL.  Also make it clear in the
relevant comments that NULL is an expected case.

This reverts commits a73c9dbab0 and
2e9650cbcf, which fixed specific crash
symptoms by hacking things at what now seems to be the wrong end, ie the
callee functions.  Making the callees allow NULL is superficially more
robust, but it's not always true that there is a defensible thing for the
callee to do in such cases.  The caller has more context and is better
able to decide what the empty-query case ought to do.

Per followup discussion of bug #11335.  Back-patch to 9.2.  The code
before that is sufficiently different that it would require development
of a separate patch, which doesn't seem worthwhile for what is believed
to be an essentially cosmetic change.
2014-11-12 15:59:01 -05:00
Peter Eisentraut 8339f33d68 Message improvements 2014-11-11 20:02:30 -05:00
Robert Haas f1abd78be7 Remove incorrect comment.
This was introduced by commit 5ea86e6e65.

Peter Geoghegan
2014-11-11 18:41:29 -05:00
Fujii Masao 1871c89202 Add generate_series(numeric, numeric).
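
For example (illustrative; the two-argument form steps by 1):

    SELECT generate_series(1.1, 4.1);
    -- 1.1, 2.1, 3.1, 4.1
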
Платон Малюгин
Reviewed by Michael Paquier, Ali Akbar and Marti Raudsepp
2014-11-11 21:44:46 +09:00
Fujii Masao a1b395b6a2 Add GUC and storage parameter to set the maximum size of GIN pending list.
Previously the maximum size of the GIN pending list was controlled only by
work_mem. But the reasonable value of work_mem and the reasonable size
of the list are basically not the same, so it was not appropriate to
control both of them by only one GUC, i.e., work_mem. This commit
adds a new GUC, pending_list_cleanup_size, separate from work_mem, to allow
users to control only the size of the list.

Also, this commit adds pending_list_cleanup_size as a new storage parameter
to allow users to specify the size of the list per index. This is useful,
for example, when users want to increase the size of the list only for
the GIN index which can be updated heavily, and decrease it otherwise.
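
For example (illustrative; values are in kilobytes unless a unit is
given, and this parameter was later renamed to gin_pending_list_limit,
per the commit above):

    SET pending_list_cleanup_size = '4MB';
    CREATE INDEX t_fts_idx ON t USING gin (tsv)
        WITH (pending_list_cleanup_size = 8192);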

Reviewed by Etsuro Fujita.
2014-11-11 21:08:21 +09:00
Robert Haas 0b03e5951b Introduce custom path and scan providers.
This allows extension modules to define their own methods for
scanning a relation, and get the core code to use them.  It's
unclear as yet how much use this capability will find, but we
won't find out if we never commit it.

KaiGai Kohei, reviewed at various times and in various levels
of detail by Shigeru Hanada, Tom Lane, Andres Freund, Álvaro
Herrera, and myself.
2014-11-07 17:34:36 -05:00
Robert Haas 5ea86e6e65 Use the sortsupport infrastructure in more cases.
This removes some fmgr overhead from cases such as btree index builds.

Peter Geoghegan, reviewed by Andreas Karlsson and me.
2014-11-07 15:50:55 -05:00
Alvaro Herrera 7516f52594 BRIN: Block Range Indexes
BRIN is a new index access method intended to accelerate scans of very
large tables, without the maintenance overhead of btrees or other
traditional indexes.  They work by maintaining "summary" data about
block ranges.  Bitmap index scans work by reading each summary tuple and
comparing it with the query quals; all pages in the range are returned
in a lossy TID bitmap if the quals are consistent with the values in the
summary tuple, otherwise not.  Normal index scans are not supported
because these indexes do not store TIDs.

As new tuples are added into the index, the summary information is
updated (if the block range in which the tuple is added is already
summarized) or not; in the latter case, a subsequent pass of VACUUM or
the brin_summarize_new_values() function will create the summary
information.
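
A minimal usage sketch (table and column names invented):

    CREATE INDEX t_brin_idx ON t USING brin (created_at);
    SELECT brin_summarize_new_values('t_brin_idx'::regclass);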

For data types with natural 1-D sort orders, the summary info consists
of the maximum and the minimum values of each indexed column within each
page range.  This type of operator class we call "Minmax", and we
supply a bunch of them for most data types with B-tree opclasses.
Since the BRIN code is generalized, other approaches are possible for
things such as arrays, geometric types, ranges, etc; even for things
such as enum types we could do something different than minmax with
better results.  In this commit I only include minmax.

Catalog version bumped due to new builtin catalog entries.

There's more that could be done here, but this is a good step forwards.

Loosely based on ideas from Simon Riggs; code mostly by Álvaro Herrera,
with contribution by Heikki Linnakangas.

Patch reviewed by: Amit Kapila, Heikki Linnakangas, Robert Haas.
Testing help from Jeff Janes, Erik Rijkers, Emanuel Calvo.

PS:
  The research leading to these results has received funding from the
  European Union's Seventh Framework Programme (FP7/2007-2013) under
  grant agreement n° 318633.
2014-11-07 16:38:14 -03:00
Tom Lane 4875931938 Fix normalization of numeric values in JSONB GIN indexes.
The default JSONB GIN opclass (jsonb_ops) converts numeric data values
to strings for storage in the index.  It must ensure that numeric values
that would compare equal (such as 12 and 12.00) produce identical strings,
else index searches would have behavior different from regular JSONB
comparisons.  Unfortunately the function charged with doing this was
completely wrong: it could reduce distinct numeric values to the same
string, or reduce equivalent numeric values to different strings.  The
former type of error would only lead to search inefficiency, but the
latter type of error would cause index entries that should be found by
a search to not be found.

Repairing this bug therefore means that it will be necessary for 9.4 beta
testers to reindex GIN jsonb_ops indexes, if they care about getting
correct results from index searches involving numeric data values within
the comparison JSONB object.
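
For example, equal numerics must behave identically through the index
(the index name here is invented):

    SELECT '{"x": 12}'::jsonb @> '{"x": 12.00}'::jsonb;  -- true
    REINDEX INDEX my_jsonb_idx;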

Per report from Thomas Fanghaenel.
2014-11-06 11:41:06 -05:00
Heikki Linnakangas 2076db2aea Move the backup-block logic from XLogInsert to a new file, xloginsert.c.
xlog.c is huge, this makes it a little bit smaller, which is nice. Functions
related to putting together the WAL record are in xloginsert.c, and the
lower level stuff for managing WAL buffers and such are in xlog.c.

Also move the definition of XLogRecord to a separate header file. This
causes churn in the #includes of all the files that write WAL records, and
redo routines, but it avoids pulling in xlog.h into most places.

Reviewed by Michael Paquier, Alvaro Herrera, Andres Freund and Amit Kapila.
2014-11-06 13:55:36 +02:00
Tom Lane 525a489915 Remove the last vestige of server-side autocommit.
Long ago we briefly had an "autocommit" GUC that turned server-side
autocommit on and off.  That behavior was removed in 7.4 after concluding
that it broke far too much client-side logic, and making clients cope with
both behaviors was impractical.  But the GUC variable was left behind, so
as not to break any client code that might be trying to read its value.
Enough time has now passed that we should remove the GUC completely.
Whatever vestigial backwards-compatibility benefit it had is outweighed by
the risk of confusion for newbies who assume it ought to do something,
as per a recent complaint from Wolfgang Wilhelm.

In passing, adjust what seemed to me a rather confusing documentation
reference to libpq's autocommit behavior.  libpq as such knows nothing
about autocommit, so psql is probably what was meant.
2014-11-05 19:35:23 -05:00
Heikki Linnakangas 5028f22f6e Switch to CRC-32C in WAL and other places.
The old algorithm was found to not be the usual CRC-32 algorithm, used by
Ethernet et al. We were using a non-reflected lookup table with code meant
for a reflected lookup table. That's a strange combination that AFAICS does
not correspond to any bit-wise CRC calculation, which makes it difficult to
reason about its properties. Although it has worked well in practice, it
seems safer to use a well-known algorithm.

Since we're changing the algorithm anyway, we might as well choose a
different polynomial. The Castagnoli polynomial has better error-correcting
properties than the traditional CRC-32 polynomial, even if we had
implemented it correctly. Another reason for picking that is that some new
CPUs have hardware support for calculating CRC-32C, but not CRC-32, let
alone our strange variant of it. This patch doesn't add any support for such
hardware, but a future patch could now do that.

The old algorithm is kept around for tsquery and pg_trgm, which use the
values in indexes that need to remain compatible so that pg_upgrade works.
While we're at it, share the old lookup table for CRC-32 calculation
between hstore, ltree and core. They all use the same table, so might as
well.
2014-11-04 11:39:48 +02:00
Robert Haas 2bd9e412f9 Support frontend-backend protocol communication using a shm_mq.
A background worker can use pq_redirect_to_shm_mq() to direct protocol
messages that would normally be sent to the frontend into a shm_mq so
that another process may read them.

The receiving process may use pq_parse_errornotice() to parse an
ErrorResponse or NoticeResponse from the background worker and, if
it wishes, ThrowErrorData() to propagate the error (with or without
further modification).

Patch by me.  Review by Andres Freund.
2014-10-31 12:02:40 -04:00
Robert Haas 6cb4afff33 Avoid setup work for invalidation messages at start-of-(sub)xact.
Instead of initializing a new TransInvalidationInfo for every
transaction or subtransaction, we can just do it for those
transactions or subtransactions that actually need to queue
invalidation messages.  That also avoids needing to free those
entries at the end of a transaction or subtransaction that does
not generate any invalidation messages, which is by far the
common case.

Patch by me.  Review by Simon Riggs and Andres Freund.
2014-10-29 12:35:19 -04:00
Heikki Linnakangas 22926e00f7 Fix two bugs in tsquery @> operator.
1. The comparison for matching terms used only the CRC to decide if there's
a match. Two different terms with the same CRC gave a match.

2. It assumed that if the second operand has more terms than the first, it's
never a match. That assumption is bogus, because there can be duplicate
terms in either operand.
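
For example, term counts prove nothing when duplicates are present
(illustrative):

    SELECT 'a & b'::tsquery @> 'a & a & b'::tsquery;  -- true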

Rewrite the implementation in a way that doesn't have those bugs.

Backpatch to all supported versions.
2014-10-27 10:50:41 +02:00
Robert Haas 5ac372fc1a Add a function to get the authenticated user ID.
Previously, this was not exposed outside of miscinit.c.  It is needed
for the pending pg_background patch, and will also be needed for
parallelism.  Without it, there's no way for a background worker to
re-create the exact authentication environment that was present in the
process that started it, which could lead to security exposures.
2014-10-23 08:18:45 -04:00
Heikki Linnakangas 98b3743779 Update comment.
The _bt_tuplecompare() function mentioned in the comment hasn't existed for a
long time.

Peter Geoghegan
2014-10-22 15:44:07 +03:00
Peter Eisentraut 6f04368cfc Allow input format xxxx-xxxx-xxxx for macaddr type
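
For example (illustrative):

    SELECT '0800-2b01-0203'::macaddr;
    -- 08:00:2b:01:02:03
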
Author: Herwin Weststrate <herwin@quarantainenet.nl>
Reviewed-by: Ali Akbar <the.apaan@gmail.com>
2014-10-21 16:16:39 -04:00
Robert Haas 0f565c0743 Fix typos.
Etsuro Fujita
2014-10-20 10:23:40 -04:00
Peter Eisentraut 7feaccc217 Allow setting effective_io_concurrency even on unsupported systems
This matches the behavior of other parameters that are unsupported on
some systems (e.g., ssl).

Also document the default value.
2014-10-18 21:35:46 -04:00
Tom Lane b2cbced9ee Support timezone abbreviations that sometimes change.
Up to now, PG has assumed that any given timezone abbreviation (such as
"EDT") represents a constant GMT offset in the usage of any particular
region; we had a way to configure what that offset was, but not for it
to be changeable over time.  But, as with most things horological, this
view of the world is too simplistic: there are numerous regions that have
at one time or another switched to a different GMT offset but kept using
the same timezone abbreviation.  Almost the entire Russian Federation did
that a few years ago, and later this month they're going to do it again.
And there are similar examples all over the world.

To cope with this, invent the notion of a "dynamic timezone abbreviation",
which is one that is referenced to a particular underlying timezone
(as defined in the IANA timezone database) and means whatever it currently
means in that zone.  For zones that use or have used daylight-savings time,
the standard and DST abbreviations continue to have the property that you
can specify standard or DST time and get that time offset whether or not
DST was theoretically in effect at the time.  However, the abbreviations
mean what they meant at the time in question (or most recently before that
time) rather than being absolutely fixed.
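
For example, with Moscow moving from UTC+4 to UTC+3 on 2014-10-26, a
dynamic MSK abbreviation resolves differently on either side of the
transition (illustrative):

    SELECT '2014-10-25 12:00 MSK'::timestamptz;  -- taken as UTC+4
    SELECT '2014-10-27 12:00 MSK'::timestamptz;  -- taken as UTC+3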

The standard abbreviation-list files have been changed to use this behavior
for abbreviations that have actually varied in meaning since 1970.  The
old simple-numeric definitions are kept for abbreviations that have not
changed, since they are a bit faster to resolve.

While this is clearly a new feature, it seems necessary to back-patch it
into all active branches, because otherwise use of Russian zone
abbreviations is going to become even more problematic than it already was.
This change supersedes the changes in commit 513d06ded et al to modify the
fixed meanings of the Russian abbreviations; since we've not shipped that
yet, this will avoid an undesirably incompatible (not to mention incorrect)
change in behavior for timestamps between 2011 and 2014.

This patch makes some cosmetic changes in ecpglib to keep its usage of
datetime lookup tables as similar as possible to the backend code, but
doesn't do anything about the increasingly obsolete set of timezone
abbreviation definitions that are hard-wired into ecpglib.  Whatever we
do about that will likely not be appropriate material for back-patching.
Also, a potential free() of a garbage pointer after an out-of-memory
failure in ecpglib has been fixed.

This patch also fixes pre-existing bugs in DetermineTimeZoneOffset() that
caused it to produce unexpected results near a timezone transition, if
both the "before" and "after" states are marked as standard time.  We'd
only ever thought about or tested transitions between standard and DST
time, but that's not what's happening when a zone simply redefines their
base GMT offset.

In passing, update the SGML documentation to refer to the Olson/zoneinfo/
zic timezone database as the "IANA" database, since it's now being
maintained under the auspices of IANA.
2014-10-16 15:22:10 -04:00
Bruce Momjian d6c8e8d7ac Consistently use NULL for invalid GUC unit strings
Patch by Euler Taveira
2014-10-13 16:11:43 -04:00
Tom Lane 4a50de1312 Fix bogus optimization in JSONB containment tests.
When determining whether one JSONB object contains another, it's okay to
make a quick exit if the first object has fewer pairs than the second:
because we de-duplicate keys within objects, it is impossible that the
first object has all the keys the second does.  However, the code was
applying this rule to JSONB arrays as well, where it does *not* hold
because arrays can contain duplicate entries.  The test was really in
the wrong place anyway; we should do it within JsonbDeepContains, where
it can be applied to nested objects not only top-level ones.
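
For example (illustrative):

    SELECT '[1]'::jsonb @> '[1,1]'::jsonb;  -- true, despite fewer elements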

Report and test cases by Alexander Korotkov; fix by Peter Geoghegan and
Tom Lane.
2014-10-11 14:13:51 -04:00
Alvaro Herrera 7b1c2a0f20 Split builtins.h to a new header ruleutils.h
The new header contains many prototypes for functions in ruleutils.c
that are not exposed to the SQL level.

Reviewed by Andres Freund and Michael Paquier.
2014-10-08 18:10:47 -03:00
Alvaro Herrera df630b0dd5 Implement SKIP LOCKED for row-level locks
This clause changes the behavior of SELECT locking clauses in the
presence of locked rows: instead of causing a process to block waiting
for the locks held by other processes (or raise an error, with NOWAIT),
SKIP LOCKED makes the new reader skip over such rows.  While this is not
appropriate behavior for general purposes, there are some cases in which
it is useful, such as queue-like tables.
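
A queue-consumer sketch (table and column names invented):

    SELECT id FROM jobs
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED;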

Catalog version bumped because this patch changes the representation of
stored rules.

Reviewed by Craig Ringer (based on a previous attempt at an
implementation by Simon Riggs, who also provided input on the syntax
used in the current patch), David Rowley, and Álvaro Herrera.

Author: Thomas Munro
2014-10-07 17:23:34 -03:00
Tom Lane 513d06ded1 Update time zone data files to tzdata release 2014h.
Most zones in the Russian Federation are subtracting one or two hours
as of 2014-10-26.  Update the meanings of the abbreviations IRKT, KRAT,
MAGT, MSK, NOVT, OMST, SAKT, VLAT, YAKT, YEKT to match.

The IANA timezone database has adopted abbreviations of the form AxST/AxDT
for all Australian time zones, reflecting what they believe to be current
majority practice Down Under.  These names do not conflict with usage
elsewhere (other than ACST for Acre Summer Time, which has been in disuse
since 1994).  Accordingly, adopt these names into our "Default" timezone
abbreviation set.  The "Australia" abbreviation set now contains only
CST,EAST,EST,SAST,SAT,WST, all of which are thought to be mostly historical
usage.  Note that SAST has also been changed to be South Africa Standard
Time in the "Default" abbreviation set.

Add zone abbreviations SRET (Asia/Srednekolymsk) and XJT (Asia/Urumqi),
and use WSST/WSDT for western Samoa.

Also a DST law change in the Turks & Caicos Islands (America/Grand_Turk),
and numerous corrections for historical time zone data.
2014-10-04 14:18:19 -04:00
Heikki Linnakangas 5fa6c81a43 Remove num_xloginsert_locks GUC, replace with a #define
I left the GUC in place for the beta period, so that people could experiment
with different values. No one has come up with any data showing that a
different value would be better under some circumstances, so rather than
try to document the GUC for users, let's just hard-code the current value, 8.
2014-10-01 16:42:26 +03:00
Stephen Frost c8a026e4f1 Revert 95d737ff to add 'ignore_nulls'
Per discussion, revert the commit which added 'ignore_nulls' to
row_to_json.  This capability would be better added as an independent
function rather than being bolted on to row_to_json.  Additionally,
the implementation didn't address complex JSON objects, and so was
incomplete anyway.

Pointed out by Tom and discussed with Andrew and Robert.
2014-09-29 13:32:22 -04:00
Tom Lane def4c28cf9 Change JSONB's on-disk format for improved performance.
The original design used an array of offsets into the variable-length
portion of a JSONB container.  However, such an array is basically
uncompressible by simple compression techniques such as TOAST's LZ
compressor.  That's bad enough, but because the offset array is at the
front, it tended to trigger the give-up-after-1KB heuristic in the TOAST
code, so that the entire JSONB object was stored uncompressed; which was
the root cause of bug #11109 from Larry White.

To fix without losing the ability to extract a random array element in O(1)
time, change this scheme so that most of the JEntry array elements hold
lengths rather than offsets.  With data that's compressible at all, there
tend to be fewer distinct element lengths, so that there is scope for
compression of the JEntry array.  Every N'th entry is still an offset.
To determine the length or offset of any specific element, we might have
to examine up to N preceding JEntrys, but that's still O(1) so far as the
total container size is concerned.  Testing shows that this cost is
negligible compared to other costs of accessing a JSONB field, and that
the method does largely fix the incompressible-data problem.

While at it, rearrange the order of elements in a JSONB object so that
it's "all the keys, then all the values" not alternating keys and values.
This doesn't really make much difference right at the moment, but it will
allow providing a fast path for extracting individual object fields from
large JSONB values stored EXTERNAL (ie, uncompressed), analogously to the
existing optimization for substring extraction from large EXTERNAL text
values.

Bump catversion to denote the incompatibility in on-disk format.
We will need to fix pg_upgrade to disallow upgrading jsonb data stored
with 9.4 betas 1 and 2.

Heikki Linnakangas and Tom Lane
2014-09-29 12:29:21 -04:00
Stephen Frost ff27fcfa0a Fix relcache for policies, and doc updates
Andres pointed out that there was an extra ';' in equalPolicies, which
made me realize that my prior testing with CLOBBER_CACHE_ALWAYS was
insufficient (it didn't always catch the issue, just most of the time).
Thanks to that, a different issue was discovered, specifically in
equalRSDescs.  This change corrects equalRSDescs to return 'true' once
all policies have been confirmed logically identical.  After stepping
through both functions to ensure correct behavior, I ran this for
about 12 hours of CLOBBER_CACHE_ALWAYS runs of the regression tests
with no failures.

In addition, correct a few typos in the documentation which were pointed
out by Thom Brown (thanks!) and improve the policy documentation further
by adding a fleshed-out usage example based on a Unix passwd file.

Lastly, clean up a few comments in the regression tests and pg_dump.h.
2014-09-26 12:46:26 -04:00
Peter Eisentraut d11339c099 Fix whitespace 2014-09-26 02:43:46 -04:00
Andrew Dunstan 9111d46351 Remove ill-conceived ban on zero length json object keys.
We removed a similar ban on this in json_object recently, but the ban in
datum_to_json was left, which generated spurious errors in other json
generators, notably json_build_object.
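
For example (illustrative):

    SELECT json_build_object('', 1);  -- now yields {"" : 1} instead of erroring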

Along the way, add an assertion that datum_to_json is not passed a null
key. All current callers comply with this rule, but the assertion will
catch any possible future misbehaviour.
2014-09-25 15:08:42 -04:00
Andrew Dunstan ecacbdbcee Return NULL from json_object_agg if it gets no rows.
This makes it consistent with the docs and with all other builtin
aggregates apart from count().
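
For example (illustrative):

    SELECT json_object_agg(k, v) FROM (VALUES (1, 2)) t(k, v) WHERE false;
    -- NULL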
2014-09-25 08:18:18 -04:00
Stephen Frost 6550b901fe Code review for row security.
Buildfarm member tick identified an issue where the policies in the
relcache for a relation were being replaced underneath a running
query, leading to segfaults while processing the policies to be added
to a query.  Similar to how TupleDesc RuleLocks are handled, add an
equalRSDesc() function to check if the policies have actually changed
and, if not, swap back the rsdesc field (using the original instead of
the temporairly built one; the whole structure is swapped and then
specific fields swapped back).  This now passes a CLOBBER_CACHE_ALWAYS
for me and should resolve the buildfarm error.

In addition to addressing this, add a new chapter in Data Definition
under Privileges which explains row security and provides examples of
its usage, change \d to always list policies (even if row security is
disabled, noting that it is disabled or enabled with no policies),
rework check_role_for_policy (it really didn't need the entire policy,
but it did need to be using has_privs_of_role()), and change the field
in pg_class to relrowsecurity from relhasrowsecurity, based on
Heikki's suggestion.  Also from Heikki, only issue SET ROW_SECURITY in
pg_restore when talking to a 9.5+ server, list Bypass RLS in \du, and
document --enable-row-security options for pg_dump and pg_restore.

Lastly, fix a number of minor whitespace and typo issues from Heikki,
Dimitri, add a missing #include, per Peter E, fix a few minor
variable-assigned-but-not-used and resource leak issues from Coverity
and add tab completion for role attribute bypassrls as well.
2014-09-24 16:32:22 -04:00
Robert Haas e246b3d6ea Add a fast pre-check for equality of equal-length strings.
Testing reveals that doing a memcmp() before the strcoll() costs
practically nothing, at least on the systems we tested, and it speeds
up sorts containing many equal strings significantly.

Peter Geoghegan.  Review by myself and Heikki Linnakangas.  Comments
rewritten by me.
2014-09-19 12:39:00 -04:00
Stephen Frost 491c029dbc Row-Level Security Policies (RLS)
Building on the updatable security-barrier views work, add the
ability to define policies on tables to limit the set of rows
which are returned from a query and which are allowed to be added
to a table.  Expressions defined by the policy for filtering are
added to the security barrier quals of the query, while expressions
defined to check records being added to a table are added to the
with-check options of the query.

New top-level commands are CREATE/ALTER/DROP POLICY and are
controlled by the table owner.  Row Security is able to be enabled
and disabled by the owner on a per-table basis using
ALTER TABLE .. ENABLE/DISABLE ROW SECURITY.

Per discussion, ROW SECURITY is disabled on tables by default and
must be enabled for policies on the table to be used.  If no
policies exist on a table with ROW SECURITY enabled, a default-deny
policy is used and no records will be visible.
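
A minimal sketch (table, column, and policy names invented):

    ALTER TABLE accounts ENABLE ROW SECURITY;
    CREATE POLICY account_owner ON accounts
        USING (owner = current_user)
        WITH CHECK (owner = current_user);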

By default, row security is applied at all times except for the
table owner and the superuser.  A new GUC, row_security, is added
which can be set to ON, OFF, or FORCE.  When set to FORCE, row
security will be applied even for the table owner and superusers.
When set to OFF, row security will be disabled when allowed and an
error will be thrown if the user does not have rights to bypass row
security.

Per discussion, pg_dump sets row_security = OFF by default to ensure
that exports and backups will have all data in the table or will
error if there are insufficient privileges to bypass row security.
A new option has been added to pg_dump, --enable-row-security, to
ask pg_dump to export with row security enabled.

A new role capability, BYPASSRLS, which can only be set by the
superuser, is added to allow other users to be able to bypass row
security using row_security = OFF.

Many thanks to the various individuals who have helped with the
design, particularly Robert Haas for his feedback.

Authors include Craig Ringer, KaiGai Kohei, Adam Brightwell, Dean
Rasheed, with additional changes and rework by me.

Reviewers have included all of the above, Greg Smith,
Jeff McCormick, and Robert Haas.
2014-09-19 11:18:35 -04:00
Tom Lane fe550b2ac2 Invent PGC_SU_BACKEND and mark log_connections/log_disconnections that way.
This new GUC context option allows GUC parameters to have the combined
properties of PGC_BACKEND and PGC_SUSET, ie, they don't change after
session start and non-superusers can't change them.  This is a more
appropriate choice for log_connections and log_disconnections than their
previous context of PGC_BACKEND, because we don't want non-superusers
to be able to affect whether their sessions get logged.

Note: the behavior for log_connections is still a bit odd, in that when
a superuser attempts to set it from PGOPTIONS, the setting takes effect
but it's too late to enable or suppress connection startup logging.
It's debatable whether that's worth fixing, and in any case there is
a reasonable argument for PGC_SU_BACKEND to exist.

In passing, re-pgindent the files touched by this commit.

Fujii Masao, reviewed by Joe Conway and Amit Kapila
2014-09-13 21:01:57 -04:00
Bruce Momjian 95c38a9895 Revert f68dc5d86b
Renaming will have to be more comprehensive, so I need approval.
2014-09-12 20:42:19 -04:00
Bruce Momjian f68dc5d86b More formatting.c variable renaming, for clarity 2014-09-12 20:35:07 -04:00
Fujii Masao 4ad2a54805 Add GUC to enable logging of replication commands.
Previously replication commands like IDENTIFY_SYSTEM were not logged
even when log_statement is set to all. Some users who want to audit
all types of statements were not satisfied with this situation. To
address the problem, this commit adds a new GUC, log_replication_commands.
If it's enabled, all replication commands are logged in the server log.

There are many ways we could enable that logging. For example,
we can extend log_statement so that replication commands are logged
when it's set to all. But per discussion in the community, we reached
the consensus to add separate GUC for that.

Reviewed by Ian Barwick, Robert Haas and Heikki Linnakangas.
2014-09-13 02:55:45 +09:00
Tom Lane 1d352325b8 Fix power_var_int() for large integer exponents.
The code for raising a NUMERIC value to an integer power wasn't very
careful about large powers.  It got an outright wrong answer for an
exponent of INT_MIN, due to failure to consider overflow of the Abs(exp)
operation; which is fixable by using an unsigned rather than signed
exponent value after that point.  Also, even though the number of
iterations of the power-computation loop is pretty limited, it's easy for
the repeated squarings to result in ridiculously enormous intermediate
values, which can take unreasonable amounts of time/memory to process,
or even overflow the internal "weight" field and so produce a wrong answer.
We can forestall misbehaviors of that sort by bailing out as soon as the
weight value exceeds what will fit in int16, since then the final answer
must overflow (if exp > 0) or underflow (if exp < 0) the packed numeric
format.
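
For example, a query like this is now expected to fail with a numeric
overflow error rather than grind through enormous intermediate values
(illustrative):

    SELECT 2::numeric ^ 2147483647;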

Per off-list report from Pavel Stehule.  Back-patch to all supported
branches.
2014-09-11 23:30:51 -04:00
Stephen Frost 95d737ff45 Add 'ignore_nulls' option to row_to_json
Provide an option to skip NULL values in a row when generating a JSON
object from that row with row_to_json.  This can reduce the size of the
JSON object in cases where columns are NULL without really reducing the
information in the JSON object.

This also makes row_to_json into a single function with default values,
rather than having multiple functions.  In passing, change array_to_json
to also be a single function with default values (we don't add an
'ignore_nulls' option yet- it's not clear that there is a sensible
use-case there, and it hasn't been asked for in any case).

Pavel Stehule
2014-09-11 21:23:51 -04:00
Heikki Linnakangas 0ed41529f6 Silence compiler warning on Windows.
David Rowley.
2014-09-11 13:50:14 +03:00
Bruce Momjian 36ad1a87a3 Implement mxid_age() to compute multi-xid age
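
For example (illustrative):

    SELECT relname, mxid_age(relminmxid) FROM pg_class WHERE relkind = 'r';
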
Report by Josh Berkus
2014-09-10 17:13:04 -04:00
Tom Lane e80252d424 Add width_bucket(anyelement, anyarray).
This provides a convenient method of classifying input values into buckets
that are not necessarily equal-width.  It works on any sortable data type.
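
For example, with bucket lower bounds 1, 3, 4 and 10 (illustrative):

    SELECT width_bucket(5, ARRAY[1, 3, 4, 10]);  -- 3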

The choice of function name is a bit debatable, perhaps, but showing that
there's a relationship to the SQL standard's width_bucket() function seems
more attractive than the other proposals.

Petr Jelinek, reviewed by Pavel Stehule
2014-09-09 15:34:14 -04:00
Peter Eisentraut 57b1085df5 Allow empty content in xml type
The xml type previously rejected "content" that is empty or consists
only of spaces.  But the SQL/XML standard allows it, so accept it.
The accepted values for XML "documents" are not changed.
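
For example, with the default XMLOPTION of CONTENT (illustrative):

    SELECT ''::xml;               -- now accepted
    SELECT xmlparse(CONTENT '');  -- likewise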

Reviewed-by: Ali Akbar <the.apaan@gmail.com>
2014-09-09 11:34:52 -04:00
Bruce Momjian a9c22d1480 Rename C variables in formatting.c, for clarity
Also add C comments.  This should help future debugging of this
notorious file.
2014-09-05 09:52:31 -04:00
Fujii Masao bd3b7a9eef Support ALTER SYSTEM RESET command.
This patch allows the ALTER SYSTEM RESET command to be used to remove
a configuration entry from postgresql.auto.conf.
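
For example (illustrative):

    ALTER SYSTEM RESET work_mem;
    SELECT pg_reload_conf();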

Vik Fearing, reviewed by Amit Kapila and me.
2014-09-02 16:06:58 +09:00
Tom Lane 01b6976c13 Fix unportable use of isspace().
Introduced in commit 11a020eb6.
2014-09-01 18:37:45 -04:00
Peter Eisentraut 65c9dc231a Assorted message improvements 2014-08-29 00:26:17 -04:00
Tom Lane 6c40f8316e Add min and max aggregates for inet/cidr data types.
Haribabu Kommi, reviewed by Muhammad Asif Naeem
2014-08-28 22:37:58 -04:00
Andres Freund 11a020eb6e Allow escaping of option values for options passed at connection start.
This is useful to allow setting GUCs to values that include spaces,
something that wasn't previously possible. The primary case motivating
this is the desire to set default_transaction_isolation to 'repeatable
read' on a per-connection basis, but other use cases like search_path
also exist.

This introduces a slight backward incompatibility: Previously a \ in
an option value would have been passed on literally, now it'll be
taken as an escape.

The relevant mailing list discussion starts with
20140204125823.GJ12016@awork2.anarazel.de.
2014-08-28 13:59:29 +02:00
Jeff Davis 8167a3883a Allow multibyte characters as escape in SIMILAR TO and SUBSTRING.
Previously, only a single-byte character was allowed as an
escape. This patch allows it to be a multi-byte character, though it
still must be a single character.
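
For example, in a UTF-8 database (illustrative):

    SELECT 'a_c' SIMILAR TO 'a§_c' ESCAPE '§';  -- true; the escaped _ is literal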

Reviewed by Heikki Linnakangas and Tom Lane.
2014-08-27 21:07:36 -07:00
Robert Haas 9522ec3e70 Fix typo in b34e37bfef.
Spotted by Peter Geoghegan.
2014-08-26 15:58:50 -04:00
Bruce Momjian 73fe87503f rename macro isTempOrToastNamespace to isTempOrTempToastNamespace
Done for clarity
2014-08-25 21:28:19 -04:00
Tom Lane 73eba19aeb Fix another ancient memory-leak bug in relcache.c.
CheckConstraintFetch() leaked a cstring in the caller's context for each
CHECK constraint expression it copied into the relcache.  Ordinarily that
isn't problematic, but it can be during CLOBBER_CACHE testing because so
many reloads can happen during a single query; so complicate the code
slightly to allow freeing the cstring after use.  Per testing on buildfarm
member barnacle.

This is exactly like the leak fixed in AttrDefaultFetch() by commit
078b2ed291.  (Yes, this time I did look for
other instances of the same coding pattern :-(.)  Like that patch, no
back-patch, since it seems unlikely that there's any problem except under
very artificial test conditions.

BTW, it strikes me that both of these places would require further work
comparable to commit ab8c84db2f, if we ever
supported defaults or check constraints on system catalogs: they both
assume they are copying into an empty relcache data structure, and that
conceivably wouldn't be the case during recursive reloading of a system
catalog.  This does not seem worth worrying about for the moment, since
there is no near-term prospect of supporting any such thing.  So I'll
just note the possibility for the archives' sake.
2014-08-24 11:56:52 -04:00
Tom Lane 41dd50e84d Fix corner-case behaviors in JSON/JSONB field extraction operators.
Cause the path extraction operators to return their lefthand input,
not NULL, if the path array has no elements.  This seems more consistent
since the case ought to correspond to applying the simple extraction
operator (->) zero times.
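
For example (illustrative; the right-hand literal is an empty text array):

    SELECT '{"a": 1}'::jsonb #> '{}';  -- {"a": 1}, not NULL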

Cause other corner cases in field/element/path extraction to return NULL
rather than failing.  This behavior is arguably more useful than throwing
an error, since it allows an expression index using these operators to be
built even when not all values in the column are suitable for the
extraction being indexed.  Moreover, we already had multiple
inconsistencies between the path extraction operators and the simple
extraction operators, as well as inconsistencies between the JSON and
JSONB code paths.  Adopt a uniform rule of returning NULL rather than
throwing an error when the JSON input does not have a structure that
permits the request to be satisfied.

Back-patch to 9.4.  Update the release notes to list this as a behavior
change since 9.3.
2014-08-22 13:17:58 -04:00
Tom Lane 9bac66020d Fix core dump in jsonb #> operator, and add regression test cases.
jsonb's #> operator segfaulted (dereferencing a null pointer) if the RHS
was a zero-length array, as reported in bug #11207 from Justin Van Winkle.
json's #> operator returns NULL in such cases, so for the moment let's
make jsonb act likewise.

Also add a bunch of regression test queries memorializing the -> and #>
operators' behavior for this and other corner cases.

There is a good argument for changing some of these behaviors, as they
are not very consistent with each other, and throwing an error isn't
necessarily a desirable behavior for operators that are likely to be
used in indexes.  However, everybody can agree that a core dump is the
Wrong Thing, and we need test cases even if we decide to change their
expected output later.
2014-08-20 16:48:53 -04:00
Heikki Linnakangas 02587dcddc Use comma+space as the separator in the default search_path.
While the space is optional, it seems nicer to be consistent with what
you get if you do "SET search_path=...". SET always normalizes the
separator to be comma+space.
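
For example (illustrative):

    SET search_path = a,b;
    SHOW search_path;  -- a, b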

Christoph Martin
2014-08-20 12:06:08 +03:00