Commit Graph

14846 Commits

Peter Eisentraut 30982be4e5 Integrate pg_upgrade_support module into backend
Previously, these functions were created in a schema "binary_upgrade",
which was deleted after pg_upgrade was finished.  Because we don't want
to keep that schema around permanently, move them to pg_catalog but
rename them with a binary_upgrade_... prefix.

The provided functions are only small wrappers around global variables
that were added specifically for pg_upgrade use, so keeping the module
separate adds no real modularity.

The functions still check that they are only called in binary upgrade
mode, so it is not possible to call these during normal operation.
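
As a sketch of the guard each wrapper performs (the function shown is one
of the real wrappers, but its body here is illustrative, not the committed
code):

    /* illustrative; IsBinaryUpgrade is the backend global gating these */
    Datum
    binary_upgrade_set_next_pg_type_oid(PG_FUNCTION_ARGS)
    {
        if (!IsBinaryUpgrade)
            ereport(ERROR,
                    (errcode(ERRCODE_CANT_CHANGE_RUNTIME_PARAM),
                     errmsg("function can only be called when server is in binary upgrade mode")));

        binary_upgrade_next_pg_type_oid = PG_GETARG_OID(0);
        PG_RETURN_VOID();
    }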

Reviewed-by: Michael Paquier <michael.paquier@gmail.com>
2015-04-14 19:26:37 -04:00
Alvaro Herrera 0a52fafce4 Fix typo in comment
SLRU_SEGMENTS_PER_PAGE -> SLRU_PAGES_PER_SEGMENT

I introduced this ancient typo in subtrans.c and later propagated it to
multixact.c.  I fixed the latter in f741300c, but only back to 9.3;
backpatch to all supported branches for consistency.
2015-04-14 12:12:18 -03:00
Heikki Linnakangas 4f700bcd20 Reorganize our CRC source files again.
Now that we use CRC-32C in WAL and the control file, the "traditional" and
"legacy" CRC-32 variants are not used in any frontend programs anymore.
Move the code for those back from src/common to src/backend/utils/hash.

Also move the slicing-by-8 implementation (back) to src/port. This is in
preparation for the next patch, which will add another implementation that
uses Intel SSE 4.2 instructions to calculate CRC-32C, where available.
2015-04-14 17:03:42 +03:00
Alvaro Herrera b5213e14a4 Remove duplicated word in README 2015-04-13 14:28:21 -03:00
Heikki Linnakangas b2a5545bd6 Don't archive bogus recycled or preallocated files after timeline switch.
After a timeline switch, we would leave behind recycled WAL segments that
are in the future, but on the old timeline. After promotion, and after they
become old enough to be recycled again, we would notice that they don't have
a .ready or .done file, create a .ready file for them, and archive them.
That's bogus, because the files contain garbage, recycled from an older
timeline (or preallocated as zeros). We shouldn't archive such files.

This could happen when we're following a timeline switch during replay, or
when we switch to a new timeline at end-of-recovery.

To fix, whenever we switch to a new timeline, scan the data directory for
WAL segments on the old timeline, but with a higher segment number, and
remove them. Those don't belong to our timeline history, and are most
likely bogus recycled or preallocated files. They could also be valid files
that we streamed from the primary ahead of time, but in any case, they're
not needed to recover to the new timeline.
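
In outline, the cleanup scan looks something like this (variable names and
the exact removal call are illustrative, not the committed code):

    /* after switching to newTLI at segment switchSegNo */
    DIR        *xldir = AllocateDir(XLOGDIR);
    struct dirent *xlde;

    while ((xlde = ReadDir(xldir, XLOGDIR)) != NULL)
    {
        uint32      tli, log, seg;
        XLogSegNo   segNo;

        if (strlen(xlde->d_name) != 24 ||
            sscanf(xlde->d_name, "%08X%08X%08X", &tli, &log, &seg) != 3)
            continue;           /* not a WAL segment file */

        segNo = (uint64) log * XLogSegmentsPerXLogId + seg;

        /* old timeline but beyond the switch point: bogus recycled/preallocated */
        if (tli < newTLI && segNo > switchSegNo)
        {
            char        path[MAXPGPATH];

            snprintf(path, sizeof(path), XLOGDIR "/%s", xlde->d_name);
            unlink(path);
        }
    }
    FreeDir(xldir);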
2015-04-13 16:53:49 +03:00
Magnus Hagander 9029f4b374 Add system view pg_stat_ssl
This view shows information about all connections, such as whether the
connection is using SSL, which cipher is used, and which client
certificate (if any) is in use.

Reviews by Alex Shulgin, Heikki Linnakangas, Andres Freund & Michael Paquier
2015-04-12 19:07:46 +02:00
Heikki Linnakangas a10589a512 Remove duplicated words in comments.
David Rowley
2015-04-12 10:46:17 +03:00
Alvaro Herrera 27846f02c1 Optimize locking a tuple already locked by another subxact
Locking and updating the same tuple repeatedly led to some strange
multixacts being created which had several subtransactions of the same
parent transaction holding locks of the same strength.  However,
once a subxact of the current transaction holds a lock of a given
strength, it's not necessary to acquire the same lock again.  This made
some coding patterns much slower than required.

The fix is twofold.  First we change HeapTupleSatisfiesUpdate to return
HeapTupleBeingUpdated for the case where the current transaction is
already a single-xid locker for the given tuple; it used to return
HeapTupleMayBeUpdated for that case.  The new logic is simpler, and the
change to pgrowlocks is a testament to that: previously we needed to
check for the single-xid locker separately in a very ugly way.  That
test is simpler now.

As fallout from the HTSU change, some of its callers need to be amended
so that tuple-locked-by-own-transaction is taken into account in the
BeingUpdated case rather than the MayBeUpdated case.  For many of them
there is no difference; but heap_delete() and heap_update() now check
explicitly and do not grab the tuple lock in that case.

The HTSU change also means that routine MultiXactHasRunningRemoteMembers
introduced in commit 11ac4c73cb is no longer necessary and can be
removed; the case that used to require it is now handled naturally as a
result of the changes to heap_delete and heap_update.

The second part of the fix to the performance issue is to adjust
heap_lock_tuple to avoid the slowness:

1. Previously we checked for the case that our own transaction already
held a strong enough lock and returned MayBeUpdated, but only in the
multixact case.  Now we do it for the plain Xid case as well, which
saves having to LockTuple.

2. If the current transaction is the only locker of the tuple (but with
a lock not as strong as what we need; otherwise it would have been
caught in the check mentioned above), we can skip sleeping on the
multixact, and instead go straight to create an updated multixact with
the additional lock strength.

3. Most importantly, make sure that both the single-xid-locker case and
the multixact-locker case optimization are applied always.  We do this
by checking both in a single place, rather than them appearing in two
separate portions of the routine -- something that is made possible by
the HeapTupleSatisfiesUpdate API change.  Previously we would only check
for the single-xid case when HTSU returned MayBeUpdated, and only
checked for the multixact case when HTSU returned BeingUpdated.  This
was at odds with what HTSU actually returned in one case: if our own
transaction was locker in a multixact, it returned MayBeUpdated, so the
optimization never applied.  This is what led to the large multixacts in
the first place.
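
Schematically, the unified early test looks like this (helper name is
hypothetical; the real check is spelled out inline in heap_lock_tuple):

    /*
     * Whether xmax is a plain Xid or a multixact, first ask if our own
     * transaction (or a subxact of it) already holds a lock at least as
     * strong as "mode"; if so, we're done without LockTuple or any
     * multixact sleep.
     */
    if (own_xact_already_holds_lock(xwait, infomask, mode))    /* hypothetical */
        return HeapTupleMayBeUpdated;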

Per bug report #8470 by Oskari Saarenmaa.
2015-04-10 13:47:15 -03:00
Andres Freund 06d36fa40c Fix typo in eb68379c3.
I'd accidentally failed to rename PG_FORCE_NULL to BKI_FORCE_NULL in one
place.

Author: Jeevan Chalke
Discussion: CAM2+6=VPoow5PqgqiTjPX4QNeokb7op8aD_8Zg3QnHZMvvU0GQ@mail.gmail.com
2015-04-09 13:29:22 +02:00
Fujii Masao 17d436d2e8 Remove obsolete FORCE option from REINDEX.
The FORCE option has been marked "obsolete" since the very old release 7.4,
but was kept for backwards compatibility. Per discussion on pgsql-hackers,
we concluded that it's no longer worth supporting.
2015-04-09 11:31:42 +09:00
Alvaro Herrera 73206812cd Change SQLSTATE for event triggers "wrong context" message
When certain event-trigger-only functions are called outside the context
they require, they were reporting the "feature not supported" SQLSTATE,
which is somewhat misleading.  Create a new custom error code for such
uses instead.

Not backpatched since it may be seen as an undesirable behavioral
change.

Author: Michael Paquier
Discussion: https://www.postgresql.org/message-id/CAB7nPqQ-5NAkHQHh_NOm7FPep37NCiLKwPoJ2Yxb8TDoGgbYYA@mail.gmail.com
2015-04-08 15:26:50 -03:00
Alvaro Herrera 5df64f298d Fix autovacuum launcher shutdown sequence
It was previously possible to have the launcher re-execute its main loop
before shutting down if some other signal was received or an error
occurred after getting SIGTERM, as reported by Qingqing Zhou.

While investigating, Tom Lane further noticed that if autovacuum had
been disabled in the config file, it would misbehave by trying to start
a new worker instead of bailing out immediately -- it would consider
itself as invoked in emergency mode.

Fix both problems by checking the shutdown flag in a few more places.
These problems have existed since autovacuum was introduced, so
backpatch all the way back.
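
The shape of the fix, sketched (got_SIGTERM stands for the launcher's
shutdown flag; the real code re-checks it in several spots):

    /* after any wait, signal, or error-recovery path in the main loop */
    if (got_SIGTERM)
        proc_exit(0);       /* bail out; don't loop around or start a worker */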
2015-04-08 13:19:49 -03:00
Fujii Masao 026fafde91 Fix typo in comment. 2015-04-08 20:55:43 +09:00
Robert Haas aea652abd3 Make trace_sort control abbreviation debug output for the text opclass.
This is consistent with what the new numeric support for abbreviated keys
now does, and seems much more convenient than having a separate compiler
define to control this debug output.
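
So the instrumentation follows the usual trace_sort pattern, roughly (the
message and counter are illustrative):

    #ifdef TRACE_SORT
        if (trace_sort)
            elog(LOG, "abbreviation of text keys aborted after %d rows", nrows);
    #endif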

Peter Geoghegan
2015-04-07 22:45:17 -04:00
Alvaro Herrera 4e17e32f53 Remove variable shadowing
Commit a2e35b53 should have removed the variable declaration in the
inner block, but didn't.  As a result, the returned address might end up
not being what was intended.
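
The bug pattern in miniature (illustrative, not the committed code):

    char   *addr = NULL;

    {
        char   *addr = palloc(len);     /* shadows the outer variable */
        /* the inner addr is assigned, then goes out of scope */
    }
    return addr;        /* still NULL; the inner assignment never escaped */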
2015-04-07 17:14:00 -03:00
Alvaro Herrera e9a077cad3 pg_event_trigger_dropped_objects: add is_temp column
It now also reports dropped temporary objects that are local to the
backend.  Previously we weren't reporting any temp objects because it
was deemed unnecessary; but as it turns out, it is necessary if we want
to keep close track of DDL command execution inside one session.  Temp
objects are reported as living in schema pg_temp, which works because
such a schema-qualification always refers to the temp objects of the
current session.
2015-04-06 11:40:55 -03:00
Alvaro Herrera 70dc2db7f1 Fix object identities for pg_conversion objects
This was already fixed in 0d906798f, but I failed to update the
array-formatted case.  This is not backpatched, since this only affects
the code path introduced by commit a676201490.
2015-04-06 11:15:13 -03:00
Simon Riggs 0ef0396ae1 Reduce lock levels of some trigger DDL and add FKs
Reduce lock levels to ShareRowExclusive for the following SQL
 CREATE TRIGGER (but not DROP or ALTER)
 ALTER TABLE ENABLE TRIGGER
 ALTER TABLE DISABLE TRIGGER
 ALTER TABLE … ADD CONSTRAINT FOREIGN KEY

Original work by Simon Riggs, extracted and refreshed by Andreas Karlsson
New test cases added by Andreas Karlsson
Reviewed by Noah Misch, Andres Freund, Michael Paquier and Simon Riggs
2015-04-05 11:37:08 -04:00
Tom Lane ca6805338f Fix incorrect matching of subexpressions in outer-join plan nodes.
Previously we would re-use input subexpressions in all expression trees
attached to a Join plan node.  However, if it's an outer join and the
subexpression appears in the nullable-side input, this is potentially
incorrect for apparently-matching subexpressions that came from above
the outer join (ie, targetlist and qpqual expressions), because the
executor will treat the subexpression value as NULL when maybe it should
not be.

The case is fairly hard to hit because (a) you need a non-strict
subexpression (else NULL is correct), and (b) we don't usually compute
expressions in the outputs of non-toplevel plan nodes.  But we might do
so if the expressions are sort keys for a mergejoin, for example.

Probably in the long run we should make a more explicit distinction between
Vars appearing above and below an outer join, but that will be a major
planner redesign and not at all back-patchable.  For the moment, just hack
set_join_references so that it will not match any non-Var expressions
coming from nullable inputs to expressions that came from above the join.
(This is somewhat overkill, in that a strict expression could still be
matched, but it doesn't seem worth the effort to check that.)

Per report from Qingqing Zhou.  The added regression test case is based
on his example.

This has been broken for a very long time, so back-patch to all active
branches.
2015-04-04 19:55:15 -04:00
Robert Haas 368b7c601e Fix numeric abbreviation for --disable-float8-byval.
When committing abd94bcac4, I tried to make
it decide what kind of abbreviation to use based only on SIZEOF_DATUM,
without regard to USE_FLOAT8_BYVAL.  That attempt was a few bricks short
of a load, so try to fix it, and add a comment explaining what we're
about.

Patch by me; review (but not a full endorsement) by Andrew Gierth.
2015-04-03 22:34:37 -04:00
Tom Lane b7e1652d5d Remove unnecessary variables in _hash_splitbucket().
Commit ed9cc2b5df made it unnecessary to pass
start_nblkno to _hash_splitbucket(), and for that matter unnecessary to
have the internal nblkno variable either.  My compiler didn't complain
about that, but some did.  I also rearranged the use of oblkno a bit to
make that case more parallel.

Report and initial patch by Petr Jelinek, rearranged a bit by me.
Back-patch to all branches, like the previous patch.
2015-04-03 16:49:44 -04:00
Alvaro Herrera 9550e8348b Transform ALTER TABLE/SET TYPE/USING expr during parse analysis
This lets later stages have access to the transformed expression; in
particular it allows DDL-deparsing code during event triggers to pass
the transformed expression to ruleutils.c, so that the complete command
can be deparsed.

This shuffles the timing of the transform calls a bit: previously,
nothing was transformed during parse analysis, and only the
RELKIND_RELATION case was being handled during execution.  After this
patch, all expressions are transformed during parse analysis (including
those for relkinds other than RELATION), and the error for other
relation kinds is thrown only during execution.  So we do more work than
before to reject some bogus cases.  That seems acceptable.
2015-04-03 17:33:05 -03:00
Alvaro Herrera 4ff695b17d Add log_min_autovacuum_duration per-table option
This is useful for controlling autovacuum log volume, in situations where
only a subset of tables needs monitoring.

Author: Michael Paquier
Reviewed by: A team led by Naoya Anzai (also including Akira Kurosawa,
Taiki Kondo, Huong Dangminh), Fujii Masao.
2015-04-03 11:55:50 -03:00
Alvaro Herrera a75fb9b335 Have autovacuum workers listen to SIGHUP, too
They have historically ignored it, but it's been said to be useful at
times to change their settings mid-flight.
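
They now follow the standard backend pattern in their main loop,
approximately:

    if (got_SIGHUP)
    {
        got_SIGHUP = false;
        ProcessConfigFile(PGC_SIGHUP);      /* re-read postgresql.conf */
    }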

Author: Michael Paquier
2015-04-03 11:52:55 -03:00
Fujii Masao 6e4bf4ecd3 Fix error handling of XLogReaderAllocate in case of OOM
Similarly to the previous fix 9b8d478, commit 2c03216 switched
XLogReaderAllocate() to use a set of palloc calls instead of malloc,
causing any caller of this function to fail with an error instead of
receiving a NULL pointer in case of an out-of-memory error. Fix this by
using palloc_extended with MCXT_ALLOC_NO_OOM, which safely returns NULL
in case of an OOM.

Michael Paquier, slightly modified by me.
2015-04-03 21:55:37 +09:00
Robert Haas f85155e18c Change the way we decide whether to give up on abbreviated text keys.
Be more aggressive about aborting early on if it looks like it's not
helping, but be less aggressive about aborting later on, since it's
more expensive at that point, and also since we're currently aborting
in some cases where abbreviation can still deliver a substantial win.

Peter Geoghegan. Extensive testing by Tomas Vondra.
2015-04-03 08:32:05 -04:00
Fujii Masao 9b8d4782ba Rework handling of OOM when allocating record buffer in XLOG reader.
Commit 2c03216 changed allocate_recordbuf() so that it uses palloc to
allocate the read buffer and fails immediately when an out-of-memory error
shows up, even though its callers still expect NULL to be returned in that
case. This bug is fixed by making allocate_recordbuf() use palloc_extended
with the MCXT_ALLOC_NO_OOM flag and return NULL in the OOM case.

Michael Paquier
2015-04-03 18:29:38 +09:00
Fujii Masao 8c8a886268 Add palloc_extended for frontend and backend.
This commit also adds pg_malloc_extended for frontend use. These interfaces
allow lower-level control over memory allocation, using an interface
similar to MemoryContextAllocExtended. For example, callers can specify
MCXT_ALLOC_NO_OOM if they want to suppress the "out of memory" error while
allocating the memory and handle a NULL return value instead.
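
Typical use, as a sketch:

    char   *buf = palloc_extended(size, MCXT_ALLOC_NO_OOM);

    if (buf == NULL)
        return false;       /* soft failure: let the caller cope, no ERROR */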

Michael Paquier, reviewed by me.
2015-04-03 17:36:12 +09:00
Tom Lane bc49d9324a Fix rare startup failure induced by MVCC-catalog-scans patch.
While a new backend nominally participates in sinval signaling starting
from the SharedInvalBackendInit call near the top of InitPostgres, it
cannot recognize sinval messages for unshared catalogs of its database
until it has set up MyDatabaseId.  This is not problematic for the catcache
or relcache, which by definition won't have loaded any data from or about
such catalogs before that point.  However, commit 568d4138c6
introduced a mechanism for re-using MVCC snapshots for catalog scans, and
made invalidation of those depend on recognizing relevant sinval messages.
So it's possible to establish a catalog snapshot to read pg_authid and
pg_database, then before we set MyDatabaseId, receive sinval messages that
should result in invalidating that snapshot --- but do not, because we
don't realize they are for our database.  This mechanism explains the
intermittent buildfarm failures we've seen since commit 31eae6028e.
That commit was not itself at fault, but it introduced a new regression
test that does reconnections concurrently with the "vacuum full pg_am"
command in vacuum.sql.  This allowed the pre-existing error to be exposed,
given just the right timing, because we'd fail to update our information
about how to access pg_am.  In principle any VACUUM FULL on a system
catalog could have created a similar hazard for concurrent incoming
connections.  Perhaps there are more subtle failure cases as well.

To fix, force invalidation of the catalog snapshot as soon as we've
set MyDatabaseId.
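
In outline (InvalidateCatalogSnapshot is the snapmgr entry point; the
surrounding InitPostgres code is paraphrased):

    MyDatabaseId = dboid;           /* sinval messages for this DB are now recognizable */
    InvalidateCatalogSnapshot();    /* discard any snapshot taken before this point */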

Back-patch to 9.4 where the error was introduced.
2015-04-03 00:07:29 -04:00
Robert Haas 05cce2f903 Repair stupid mistake in preprocessor directive. 2015-04-02 15:57:17 -04:00
Robert Haas b3a5e76e12 After a crash, don't restart workers with BGW_NEVER_RESTART.
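A worker opts out of restarts at registration time, roughly like so
(worker details are illustrative; bgw_main is the 9.x-era entry-point field):

    BackgroundWorker worker;

    memset(&worker, 0, sizeof(worker));
    snprintf(worker.bgw_name, BGW_MAXLEN, "one-shot worker");
    worker.bgw_flags = BGWORKER_SHMEM_ACCESS;
    worker.bgw_start_time = BgWorkerStart_RecoveryFinished;
    worker.bgw_restart_time = BGW_NEVER_RESTART;    /* honored even after a crash, with this fix */
    worker.bgw_main = oneshot_worker_main;          /* hypothetical function */
    RegisterBackgroundWorker(&worker);
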
Amit Khandekar
2015-04-02 14:38:06 -04:00
Robert Haas abd94bcac4 Use abbreviated keys for faster sorting of numeric datums.
Andrew Gierth, reviewed by Peter Geoghegan, with further tweaks by me.
2015-04-02 14:04:26 -04:00
Alvaro Herrera 00ee6c7672 autovacuum: Fix polarity of "wraparound" variable
Commit 0d83138974 inadvertently reversed the meaning of the
wraparound variable.  This causes vacuums which are not required for
wraparound to wait for locks to be acquired, and what is worse, it
allows wraparound vacuums to skip locked pages.

Bug reported by Jeff Janes in
http://www.postgresql.org/message-id/CAMkU=1xmTEiaY=5oMHsSQo5vd9V1Ze4kNLL0qN2eH0P_GXOaYw@mail.gmail.com
Analysis and patch by Kyotaro HORIGUCHI
2015-04-02 13:34:50 -03:00
Robert Haas c02ef232c1 Add missing calls to DatumGetUInt32.
These were inadvertently omitted from the commit that introduced
abbreviated keys, commit 4ea51cdfe8.
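
That is, comparator code must unwrap the packed Datum explicitly before
comparing, e.g. (sketch):

    uint32  ax = DatumGetUInt32(x);
    uint32  ay = DatumGetUInt32(y);

    return (ax > ay) - (ax < ay);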

Peter Geoghegan
2015-04-02 11:57:35 -04:00
Andres Freund 62e2a8dc2c Define integer limits independently from the system definitions.
In 83ff1618 we defined integer limits only if they're not provided by the
system. That turns out not to be the greatest idea because there are
different ways some datatypes can be represented. E.g. on OS X PG's 64bit
datatype will be a 'long int', but OS X's own headers unconditionally use
'long long'. That disparity can then lead to warnings, e.g. around printf
formats.

One way to fix that would be to back int64 using stdint.h's
int64_t. While a good idea it's not that easy to implement. We would
e.g. need to include stdint.h in our external headers, which we don't
today. Also computing the correct int64 printf formats in that case is
nontrivial.

Instead simply prefix the integer limits with PG_ and define them
unconditionally. I've adjusted all the references to them in code, but
not the ones in comments; the latter seems unnecessary to me.
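
Usage is unchanged apart from the prefix, e.g. (sketch):

    if (value > PG_INT32_MAX || value < PG_INT32_MIN)
        ereport(ERROR,
                (errcode(ERRCODE_NUMERIC_VALUE_OUT_OF_RANGE),
                 errmsg("value out of range for integer")));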

Discussion: 20150331141423.GK4878@alap3.anarazel.de
2015-04-02 17:43:35 +02:00
Robert Haas f272098e91 Fix another bug in DSM_CREATE_NULL_IF_MAXSEGMENTS handling.
Amit Kapila
2015-04-02 10:39:24 -04:00
Simon Riggs 7dae3cf68c Correct comment to use RS_EPHEMERAL 2015-04-02 07:45:19 -04:00
Tom Lane 89840d7d3f Provide real selectivity estimators for inet/cidr operators.
This patch fills in the formerly-stub networksel() and networkjoinsel()
estimation functions.  Those are used for << <<= >> >>= and && operators
on inet/cidr types.  The estimation is not perfect, certainly, because
we rely on the existing statistics collected for the inet btree operators.
But it's a long way better than nothing, and it's not clear that asking
ANALYZE to collect separate stats for these operators would be a win.

Emre Hasegeli, with reviews from Dilip Kumar and Heikki Linnakangas,
and some further hacking by me
2015-04-01 17:11:21 -04:00
Heikki Linnakangas 1d0db8de04 Remove spurious semicolons.
Petr Jelinek
2015-03-31 15:12:27 +03:00
Tom Lane ed9cc2b5df Fix bogus concurrent use of _hash_getnewbuf() in bucket split code.
_hash_splitbucket() obtained the base page of the new bucket by calling
_hash_getnewbuf(), but it held no exclusive lock that would prevent some
other process from calling _hash_getnewbuf() at the same time.  This is
contrary to _hash_getnewbuf()'s API spec and could in fact cause failures.
In practice, we must only call that function while holding write lock on
the hash index's metapage.

An additional problem was that we'd already modified the metapage's bucket
mapping data, meaning that failure to extend the index would leave us with
a corrupt index.

Fix both issues by moving the _hash_getnewbuf() call to just before we
modify the metapage in _hash_expandtable().

Unfortunately there's still a large problem here, which is that we could
also incur ENOSPC while trying to get an overflow page for the new bucket.
That would leave the index corrupt in a more subtle way, namely that some
index tuples that should be in the new bucket might still be in the old
one.  Fixing that seems substantially more difficult; even preallocating as
many pages as we could possibly need wouldn't entirely guarantee that the
bucket split would complete successfully.  So for today let's just deal
with the base case.

Per report from Antonin Houska.  Back-patch to all active branches.
2015-03-30 16:40:05 -04:00
Alvaro Herrera 97690ea6e8 Change array_offset to return subscripts, not offsets
... and rename it and its sibling array_offsets to array_position and
array_positions, to account for the changed behavior.

Having the functions return subscripts better matches existing practice,
and is better suited to using the result value as a subscript into the
array directly.  For one-based arrays, the new definition is identical
to what was originally committed.

(We use the term "subscript" in the documentation, which is what we use
whenever we talk about arrays; but the functions themselves are named
using the word "position" to match the standard-defined POSITION()
functions.)

Author: Pavel Stěhule
Behavioral problem noted by Dean Rasheed.
2015-03-30 16:13:21 -03:00
Alvaro Herrera 0853630159 Fix lost persistence setting during REINDEX INDEX
ReindexIndex() trusts a parser-built RangeVar with the persistence to
use for the new copy of the index; but the parser naturally does not
know the persistence of the original index.  To find out the correct
persistence, grab it from the relcache.

This bug was introduced by commit 85b506bbfc, and therefore no
backpatch is necessary.

Bug reported by Thom Brown, analysis and patch by Michael Paquier; test
case provided by Fabrízio de Royes Mello.
2015-03-30 16:01:44 -03:00
Tom Lane 542320c2bd Be more careful about printing constants in ruleutils.c.
The previous coding in get_const_expr() tried to avoid quoting integer,
float, and numeric literals if at all possible.  While that looks nice,
it means that dumped expressions might re-parse to something that's
semantically equivalent but not the exact same parsetree; for example
a FLOAT8 constant would re-parse as a NUMERIC constant with a cast to
FLOAT8.  Though the result would be the same after constant-folding,
this is problematic in certain contexts.  In particular, Jeff Davis
pointed out that this could cause unexpected failures in ALTER INHERIT
operations because of child tables having not-exactly-equivalent CHECK
expressions.  Therefore, favor correctness over legibility and dump
such constants in quotes except in the limited cases where they'll
be interpreted as the same type even without any casting.

This results in assorted small changes in the regression test outputs,
and will affect display of user-defined views and rules similarly.
The odds of that causing problems in the field seem non-negligible;
given the lack of previous complaints, it seems best not to change
this in the back branches.
2015-03-30 14:59:49 -04:00
Tom Lane 701dcc983e Fix rare core dump in BackendIdGetTransactionIds().
BackendIdGetTransactionIds() neglected the possibility that the PROC
pointer in a ProcState array entry is null.  In current usage, this could
only crash if the other backend had exited since pgstat_read_current_status
saw it as active, which is a pretty narrow window.  But it's reachable in
the field, per bug #12918 from Vladimir Borodin.
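
The defense is a plain null test before touching the PROC (sketched from
memory; field names may differ slightly):

    PGPROC     *proc = stateP->proc;    /* may be NULL if the backend exited */

    if (proc != NULL)
    {
        /* safe to read the backend's xid/xmin via its PGPROC here */
    }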

Back-patch to 9.4 where the faulty code was introduced.
2015-03-30 13:05:27 -04:00
Heikki Linnakangas 0633a60f4d Add index-only scan support to range type GiST opclass.
Andreas Karlsson
2015-03-30 13:22:38 +03:00
Tom Lane e4cbfd673d Add vacuum_delay_point call in compute_index_stats's per-sample-row loop.
Slow functions in index expressions might cause this loop to take long
enough to make it worth being cancellable.  Probably it would be enough
to call CHECK_FOR_INTERRUPTS here, but for consistency with other
per-sample-row loops in this file, let's use vacuum_delay_point.
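
The per-row loop simply gains the usual throttle-and-cancel call, roughly:

    for (i = 0; i < numrows; i++)
    {
        vacuum_delay_point();   /* cost-based delay plus CHECK_FOR_INTERRUPTS */
        /* evaluate the index expressions for sample row i */
    }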

Report and patch by Jeff Janes.  Back-patch to all supported branches.
2015-03-29 15:04:09 -04:00
Tom Lane 1601830ec2 Make ginbuild's funcCtx be independent of its tmpCtx.
Previously the funcCtx was a child of the tmpCtx, but that was broken
by commit eaa5808e8e, which made
MemoryContextReset() delete, not reset, child contexts.  The behavior of
having a tmpCtx reset also clear the other context seems rather dubious
anyway, so let's just disentangle them.  Per report from Erik Rijkers.
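
Disentangled, the setup looks roughly like this (context name illustrative;
9.x-era five-argument AllocSetContextCreate):

    /* funcCtx now hangs off CurrentMemoryContext, not off tmpCtx */
    buildstate.funcCtx = AllocSetContextCreate(CurrentMemoryContext,
                                               "Gin build temporary context for user-defined function",
                                               ALLOCSET_DEFAULT_MINSIZE,
                                               ALLOCSET_DEFAULT_INITSIZE,
                                               ALLOCSET_DEFAULT_MAXSIZE);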

In passing, fix badly-inaccurate comments about these contexts.
2015-03-29 14:02:58 -04:00
Tom Lane 9a8e23311c Remove a couple other vestigial yylex() declarations.
These were workarounds for a long-gone flex bug; all supported versions
of flex emit an extern declaration as expected.
2015-03-29 13:12:28 -04:00
Heikki Linnakangas 3a20b0e7b6 Add index-only scan support to inet GiST opclass.
Andreas Karlsson
2015-03-28 15:11:53 +02:00
Heikki Linnakangas 55b59eda13 Fix GiST index-only scans for opclasses with different storage type.
We cannot use the index's tuple descriptor directly to describe the index
tuples returned in an index-only scan. That's because the index might use
a different datatype for the values stored on disk than the type originally
indexed. As long as they were both pass-by-ref, it worked, but it will not
work for pass-by-value types of different sizes. I noticed this as a crash when I
started hacking a patch to add fetch methods to btree_gist.
2015-03-26 23:07:52 +02:00