/*
 *	version.c
 *
 *	Postgres-version-specific routines
 *
 *	Copyright (c) 2010-2017, PostgreSQL Global Development Group
 *	src/bin/pg_upgrade/version.c
 */

#include "postgres_fe.h"

#include "pg_upgrade.h"

#include "catalog/pg_class.h"
#include "fe_utils/string_utils.h"


/*
 * new_9_0_populate_pg_largeobject_metadata()
 *	new >= 9.0, old <= 8.4
 *	9.0 has a new pg_largeobject permission table
 */
void
new_9_0_populate_pg_largeobject_metadata(ClusterInfo *cluster, bool check_mode)
{
	int			dbnum;
	FILE	   *script = NULL;
	bool		found = false;
	char		output_path[MAXPGPATH];

	prep_status("Checking for large objects");

	snprintf(output_path, sizeof(output_path), "pg_largeobject.sql");

	for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
	{
		PGresult   *res;
		int			i_count;
		DbInfo	   *active_db = &cluster->dbarr.dbs[dbnum];
		PGconn	   *conn = connectToServer(cluster, active_db->db_name);

		/* find if there are any large objects */
		res = executeQueryOrDie(conn,
								"SELECT count(*) "
								"FROM pg_catalog.pg_largeobject ");

		i_count = PQfnumber(res, "count");
		if (atoi(PQgetvalue(res, 0, i_count)) != 0)
		{
			found = true;
			if (!check_mode)
			{
				PQExpBufferData connectbuf;

				if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
					pg_fatal("could not open file \"%s\": %s\n", output_path,
							 strerror(errno));

				initPQExpBuffer(&connectbuf);
				appendPsqlMetaConnect(&connectbuf, active_db->db_name);
				fputs(connectbuf.data, script);
				termPQExpBuffer(&connectbuf);

				fprintf(script,
						"SELECT pg_catalog.lo_create(t.loid)\n"
						"FROM (SELECT DISTINCT loid FROM pg_catalog.pg_largeobject) AS t;\n");
			}
		}

		PQclear(res);
		PQfinish(conn);
	}

	if (script)
		fclose(script);

	if (found)
	{
		report_status(PG_WARNING, "warning");
		if (check_mode)
			pg_log(PG_WARNING, "\n"
				   "Your installation contains large objects. The new database has an\n"
				   "additional large object permission table. After upgrading, you will be\n"
				   "given a command to populate the pg_largeobject_metadata table with\n"
				   "default permissions.\n\n");
		else
			pg_log(PG_WARNING, "\n"
				   "Your installation contains large objects. The new database has an\n"
				   "additional large object permission table, so default permissions must be\n"
				   "defined for all large objects. The file\n"
				   "    %s\n"
				   "when executed by psql by the database superuser will set the default\n"
				   "permissions.\n\n",
				   output_path);
	}
	else
		check_ok();
}

/*
 * old_9_3_check_for_line_data_type_usage()
 *	9.3 -> 9.4
 *	Fully implement the 'line' data type in 9.4, which previously returned
 *	"not enabled" by default and was only functionally enabled with a
 *	compile-time switch; 9.4 "line" has different binary and text
 *	representation formats; checks tables and indexes.
 */
void
old_9_3_check_for_line_data_type_usage(ClusterInfo *cluster)
{
	int			dbnum;
	FILE	   *script = NULL;
	bool		found = false;
	char		output_path[MAXPGPATH];

	prep_status("Checking for incompatible \"line\" data type");

	snprintf(output_path, sizeof(output_path), "tables_using_line.txt");

	for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
	{
		PGresult   *res;
		bool		db_used = false;
		int			ntups;
		int			rowno;
		int			i_nspname,
					i_relname,
					i_attname;
		DbInfo	   *active_db = &cluster->dbarr.dbs[dbnum];
		PGconn	   *conn = connectToServer(cluster, active_db->db_name);

		res = executeQueryOrDie(conn,
								"SELECT n.nspname, c.relname, a.attname "
								"FROM	pg_catalog.pg_class c, "
								"		pg_catalog.pg_namespace n, "
								"		pg_catalog.pg_attribute a "
								"WHERE	c.oid = a.attrelid AND "
								"		NOT a.attisdropped AND "
								"		a.atttypid = 'pg_catalog.line'::pg_catalog.regtype AND "
								"		c.relnamespace = n.oid AND "
		/* exclude possible orphaned temp tables */
								"		n.nspname !~ '^pg_temp_' AND "
								"		n.nspname !~ '^pg_toast_temp_' AND "
								"		n.nspname NOT IN ('pg_catalog', 'information_schema')");

		ntups = PQntuples(res);
		i_nspname = PQfnumber(res, "nspname");
		i_relname = PQfnumber(res, "relname");
		i_attname = PQfnumber(res, "attname");
		for (rowno = 0; rowno < ntups; rowno++)
		{
			found = true;
			if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
				pg_fatal("could not open file \"%s\": %s\n", output_path,
						 strerror(errno));
			if (!db_used)
			{
				fprintf(script, "Database: %s\n", active_db->db_name);
				db_used = true;
			}
			fprintf(script, "  %s.%s.%s\n",
					PQgetvalue(res, rowno, i_nspname),
					PQgetvalue(res, rowno, i_relname),
					PQgetvalue(res, rowno, i_attname));
		}

		PQclear(res);

		PQfinish(conn);
	}

	if (script)
		fclose(script);

	if (found)
	{
		pg_log(PG_REPORT, "fatal\n");
		pg_fatal("Your installation contains the \"line\" data type in user tables. This\n"
				 "data type changed its internal and input/output format between your old\n"
				 "and new clusters so this cluster cannot currently be upgraded. You can\n"
				 "remove the problem tables and restart the upgrade. A list of the problem\n"
				 "columns is in the file:\n"
				 "    %s\n\n", output_path);
	}
	else
		check_ok();
}

/*
 * old_9_6_check_for_unknown_data_type_usage()
 *	9.6 -> 10
 *	It's no longer allowed to create tables or views with "unknown"-type
 *	columns.  We do not complain about views with such columns, because
 *	they should get silently converted to "text" columns during the DDL
 *	dump and reload; it seems unlikely to be worth making users do that
 *	by hand.  However, if there's a table with such a column, the DDL
 *	reload will fail, so we should pre-detect that rather than failing
 *	mid-upgrade.  Worse, if there's a matview with such a column, the
 *	DDL reload will silently change it to "text" which won't match the
 *	on-disk storage (which is like "cstring").  So we *must* reject that.
 *	Also check composite types, in case they are used for table columns.
 *	We needn't check indexes, because "unknown" has no opclasses.
 */
void
old_9_6_check_for_unknown_data_type_usage(ClusterInfo *cluster)
{
	int			dbnum;
	FILE	   *script = NULL;
	bool		found = false;
	char		output_path[MAXPGPATH];

	prep_status("Checking for invalid \"unknown\" user columns");

	snprintf(output_path, sizeof(output_path), "tables_using_unknown.txt");

	for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
	{
		PGresult   *res;
		bool		db_used = false;
		int			ntups;
		int			rowno;
		int			i_nspname,
					i_relname,
					i_attname;
		DbInfo	   *active_db = &cluster->dbarr.dbs[dbnum];
		PGconn	   *conn = connectToServer(cluster, active_db->db_name);

		res = executeQueryOrDie(conn,
								"SELECT n.nspname, c.relname, a.attname "
								"FROM	pg_catalog.pg_class c, "
								"		pg_catalog.pg_namespace n, "
								"		pg_catalog.pg_attribute a "
								"WHERE	c.oid = a.attrelid AND "
								"		NOT a.attisdropped AND "
								"		a.atttypid = 'pg_catalog.unknown'::pg_catalog.regtype AND "
								"		c.relkind IN ("
								CppAsString2(RELKIND_RELATION) ", "
								CppAsString2(RELKIND_COMPOSITE_TYPE) ", "
								CppAsString2(RELKIND_MATVIEW) ") AND "
								"		c.relnamespace = n.oid AND "
		/* exclude possible orphaned temp tables */
								"		n.nspname !~ '^pg_temp_' AND "
								"		n.nspname !~ '^pg_toast_temp_' AND "
								"		n.nspname NOT IN ('pg_catalog', 'information_schema')");

		ntups = PQntuples(res);
		i_nspname = PQfnumber(res, "nspname");
		i_relname = PQfnumber(res, "relname");
		i_attname = PQfnumber(res, "attname");
		for (rowno = 0; rowno < ntups; rowno++)
		{
			found = true;
			if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
				pg_fatal("could not open file \"%s\": %s\n", output_path,
						 strerror(errno));
			if (!db_used)
			{
				fprintf(script, "Database: %s\n", active_db->db_name);
				db_used = true;
			}
			fprintf(script, "  %s.%s.%s\n",
					PQgetvalue(res, rowno, i_nspname),
					PQgetvalue(res, rowno, i_relname),
					PQgetvalue(res, rowno, i_attname));
		}

		PQclear(res);

		PQfinish(conn);
	}

	if (script)
		fclose(script);

	if (found)
	{
		pg_log(PG_REPORT, "fatal\n");
		pg_fatal("Your installation contains the \"unknown\" data type in user tables. This\n"
				 "data type is no longer allowed in tables, so this cluster cannot currently\n"
				 "be upgraded. You can remove the problem tables and restart the upgrade.\n"
				 "A list of the problem columns is in the file:\n"
				 "    %s\n\n", output_path);
	}
	else
		check_ok();
}

/*
 * old_9_6_invalidate_hash_indexes()
 *	9.6 -> 10
 *	Hash index binary format has changed from 9.6->10.0
 */
void
old_9_6_invalidate_hash_indexes(ClusterInfo *cluster, bool check_mode)
{
	int			dbnum;
	FILE	   *script = NULL;
	bool		found = false;
	char	   *output_path = "reindex_hash.sql";

	prep_status("Checking for hash indexes");

	for (dbnum = 0; dbnum < cluster->dbarr.ndbs; dbnum++)
	{
		PGresult   *res;
		bool		db_used = false;
		int			ntups;
		int			rowno;
		int			i_nspname,
					i_relname;
		DbInfo	   *active_db = &cluster->dbarr.dbs[dbnum];
		PGconn	   *conn = connectToServer(cluster, active_db->db_name);

		/* find hash indexes */
		res = executeQueryOrDie(conn,
								"SELECT n.nspname, c.relname "
								"FROM	pg_catalog.pg_class c, "
								"		pg_catalog.pg_index i, "
								"		pg_catalog.pg_am a, "
								"		pg_catalog.pg_namespace n "
								"WHERE	i.indexrelid = c.oid AND "
								"		c.relam = a.oid AND "
								"		c.relnamespace = n.oid AND "
								"		a.amname = 'hash'"
			);

		ntups = PQntuples(res);
		i_nspname = PQfnumber(res, "nspname");
		i_relname = PQfnumber(res, "relname");
		for (rowno = 0; rowno < ntups; rowno++)
		{
			found = true;
			if (!check_mode)
			{
				if (script == NULL && (script = fopen_priv(output_path, "w")) == NULL)
					pg_fatal("could not open file \"%s\": %s\n", output_path,
							 strerror(errno));
				if (!db_used)
				{
					PQExpBufferData connectbuf;

					initPQExpBuffer(&connectbuf);
					appendPsqlMetaConnect(&connectbuf, active_db->db_name);
					fputs(connectbuf.data, script);
					termPQExpBuffer(&connectbuf);
					db_used = true;
				}
				fprintf(script, "REINDEX INDEX %s.%s;\n",
						quote_identifier(PQgetvalue(res, rowno, i_nspname)),
						quote_identifier(PQgetvalue(res, rowno, i_relname)));
			}
		}

		PQclear(res);

		if (!check_mode && db_used)
		{
			/* mark hash indexes as invalid */
			PQclear(executeQueryOrDie(conn,
									  "UPDATE pg_catalog.pg_index i "
									  "SET	indisvalid = false "
									  "FROM	pg_catalog.pg_class c, "
									  "		pg_catalog.pg_am a, "
									  "		pg_catalog.pg_namespace n "
									  "WHERE	i.indexrelid = c.oid AND "
									  "		c.relam = a.oid AND "
									  "		c.relnamespace = n.oid AND "
									  "		a.amname = 'hash'"));
		}

		PQfinish(conn);
	}

	if (script)
		fclose(script);

	if (found)
	{
		report_status(PG_WARNING, "warning");
		if (check_mode)
			pg_log(PG_WARNING, "\n"
				   "Your installation contains hash indexes. These indexes have different\n"
				   "internal formats between your old and new clusters, so they must be\n"
				   "reindexed with the REINDEX command. After upgrading, you will be given\n"
				   "REINDEX instructions.\n\n");
		else
			pg_log(PG_WARNING, "\n"
				   "Your installation contains hash indexes. These indexes have different\n"
				   "internal formats between your old and new clusters, so they must be\n"
				   "reindexed with the REINDEX command. The file\n"
				   "    %s\n"
				   "when executed by psql by the database superuser will recreate all invalid\n"
				   "indexes; until then, none of these indexes will be used.\n\n",
				   output_path);
	}
	else
		check_ok();
}