/*-------------------------------------------------------------------------
 *
 * dbcommands.c
 *		Database management commands (create/drop database).
 *
 * Note: database creation/destruction commands use exclusive locks on
 * the database objects (as expressed by LockSharedObject()) to avoid
 * stepping on each others' toes.  Formerly we used table-level locks
 * on pg_database, but that's too coarse-grained.
 *
 * Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group
 * Portions Copyright (c) 1994, Regents of the University of California
 *
 *
 * IDENTIFICATION
 *	  src/backend/commands/dbcommands.c
 *
 *-------------------------------------------------------------------------
 */
#include "postgres.h"

#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

#include "access/genam.h"
#include "access/heapam.h"
#include "access/htup_details.h"
#include "access/multixact.h"
#include "access/tableam.h"
#include "access/xact.h"
#include "access/xloginsert.h"
#include "access/xlogutils.h"
#include "catalog/catalog.h"
#include "catalog/dependency.h"
#include "catalog/indexing.h"
#include "catalog/objectaccess.h"
#include "catalog/pg_authid.h"
#include "catalog/pg_database.h"
#include "catalog/pg_db_role_setting.h"
#include "catalog/pg_subscription.h"
#include "catalog/pg_tablespace.h"
#include "commands/comment.h"
#include "commands/dbcommands.h"
#include "commands/dbcommands_xlog.h"
#include "commands/defrem.h"
#include "commands/seclabel.h"
#include "commands/tablespace.h"
#include "mb/pg_wchar.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "postmaster/bgwriter.h"
#include "replication/slot.h"
#include "storage/copydir.h"
#include "storage/fd.h"
#include "storage/ipc.h"
#include "storage/lmgr.h"
#include "storage/md.h"
#include "storage/procarray.h"
#include "storage/smgr.h"
#include "utils/acl.h"
#include "utils/builtins.h"
#include "utils/fmgroids.h"
#include "utils/pg_locale.h"
#include "utils/snapmgr.h"
#include "utils/syscache.h"

typedef struct
{
	Oid			src_dboid;		/* source (template) DB */
	Oid			dest_dboid;		/* DB we are trying to create */
} createdb_failure_params;

typedef struct
{
	Oid			dest_dboid;		/* DB we are trying to move */
	Oid			dest_tsoid;		/* tablespace we are trying to move to */
} movedb_failure_params;

/* non-export function prototypes */
static void createdb_failure_callback(int code, Datum arg);
static void movedb(const char *dbname, const char *tblspcname);
static void movedb_failure_callback(int code, Datum arg);
static bool get_db_info(const char *name, LOCKMODE lockmode,
						Oid *dbIdP, Oid *ownerIdP,
						int *encodingP, bool *dbIsTemplateP, bool *dbAllowConnP,
						Oid *dbLastSysOidP, TransactionId *dbFrozenXidP,
						MultiXactId *dbMinMultiP,
						Oid *dbTablespace, char **dbCollate, char **dbCtype);
static bool have_createdb_privilege(void);
static void remove_dbtablespaces(Oid db_id);
static bool check_db_file_conflict(Oid db_id);
static int	errdetail_busy_db(int notherbackends, int npreparedxacts);

/*
 * CREATE DATABASE
 */
Oid
createdb(ParseState *pstate, const CreatedbStmt *stmt)
{
	TableScanDesc scan;
	Relation	rel;
	Oid			src_dboid;
	Oid			src_owner;
	int			src_encoding = -1;
	char	   *src_collate = NULL;
	char	   *src_ctype = NULL;
	bool		src_istemplate;
	bool		src_allowconn;
	Oid			src_lastsysoid = InvalidOid;
	TransactionId src_frozenxid = InvalidTransactionId;
	MultiXactId src_minmxid = InvalidMultiXactId;
	Oid			src_deftablespace;
	volatile Oid dst_deftablespace;
	Relation	pg_database_rel;
	HeapTuple	tuple;
	Datum		new_record[Natts_pg_database];
	bool		new_record_nulls[Natts_pg_database];
	Oid			dboid;
	Oid			datdba;
	ListCell   *option;
	DefElem    *dtablespacename = NULL;
	DefElem    *downer = NULL;
	DefElem    *dtemplate = NULL;
	DefElem    *dencoding = NULL;
	DefElem    *dlocale = NULL;
	DefElem    *dcollate = NULL;
	DefElem    *dctype = NULL;
	DefElem    *distemplate = NULL;
	DefElem    *dallowconnections = NULL;
	DefElem    *dconnlimit = NULL;
	char	   *dbname = stmt->dbname;
	char	   *dbowner = NULL;
	const char *dbtemplate = NULL;
	char	   *dbcollate = NULL;
	char	   *dbctype = NULL;
	char	   *canonname;
	int			encoding = -1;
	bool		dbistemplate = false;
	bool		dballowconnections = true;
	int			dbconnlimit = -1;
	int			notherbackends;
	int			npreparedxacts;
	createdb_failure_params fparms;

	/* Extract options from the statement node tree */
	foreach(option, stmt->options)
	{
		DefElem    *defel = (DefElem *) lfirst(option);

		if (strcmp(defel->defname, "tablespace") == 0)
		{
			if (dtablespacename)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dtablespacename = defel;
		}
		else if (strcmp(defel->defname, "owner") == 0)
		{
			if (downer)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			downer = defel;
		}
		else if (strcmp(defel->defname, "template") == 0)
		{
			if (dtemplate)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dtemplate = defel;
		}
		else if (strcmp(defel->defname, "encoding") == 0)
		{
			if (dencoding)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dencoding = defel;
		}
		else if (strcmp(defel->defname, "locale") == 0)
		{
			if (dlocale)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dlocale = defel;
		}
		else if (strcmp(defel->defname, "lc_collate") == 0)
		{
			if (dcollate)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dcollate = defel;
		}
		else if (strcmp(defel->defname, "lc_ctype") == 0)
		{
			if (dctype)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dctype = defel;
		}
		else if (strcmp(defel->defname, "is_template") == 0)
		{
			if (distemplate)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			distemplate = defel;
		}
		else if (strcmp(defel->defname, "allow_connections") == 0)
		{
			if (dallowconnections)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dallowconnections = defel;
		}
		else if (strcmp(defel->defname, "connection_limit") == 0)
		{
			if (dconnlimit)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dconnlimit = defel;
		}
		else if (strcmp(defel->defname, "location") == 0)
		{
			ereport(WARNING,
					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					 errmsg("LOCATION is not supported anymore"),
					 errhint("Consider using tablespaces instead."),
					 parser_errposition(pstate, defel->location)));
		}
		else
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("option \"%s\" not recognized", defel->defname),
					 parser_errposition(pstate, defel->location)));
	}

	if (dlocale && (dcollate || dctype))
		ereport(ERROR,
				(errcode(ERRCODE_SYNTAX_ERROR),
				 errmsg("conflicting or redundant options"),
				 errdetail("LOCALE cannot be specified together with LC_COLLATE or LC_CTYPE.")));

	if (downer && downer->arg)
		dbowner = defGetString(downer);
	if (dtemplate && dtemplate->arg)
		dbtemplate = defGetString(dtemplate);
	if (dencoding && dencoding->arg)
	{
		const char *encoding_name;

		if (IsA(dencoding->arg, Integer))
		{
			encoding = defGetInt32(dencoding);
			encoding_name = pg_encoding_to_char(encoding);
			if (strcmp(encoding_name, "") == 0 ||
				pg_valid_server_encoding(encoding_name) < 0)
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_OBJECT),
						 errmsg("%d is not a valid encoding code",
								encoding),
						 parser_errposition(pstate, dencoding->location)));
		}
		else
		{
			encoding_name = defGetString(dencoding);
			encoding = pg_valid_server_encoding(encoding_name);
			if (encoding < 0)
				ereport(ERROR,
						(errcode(ERRCODE_UNDEFINED_OBJECT),
						 errmsg("%s is not a valid encoding name",
								encoding_name),
						 parser_errposition(pstate, dencoding->location)));
		}
	}
	if (dlocale && dlocale->arg)
	{
		dbcollate = defGetString(dlocale);
		dbctype = defGetString(dlocale);
	}
	if (dcollate && dcollate->arg)
		dbcollate = defGetString(dcollate);
	if (dctype && dctype->arg)
		dbctype = defGetString(dctype);
	if (distemplate && distemplate->arg)
		dbistemplate = defGetBoolean(distemplate);
	if (dallowconnections && dallowconnections->arg)
		dballowconnections = defGetBoolean(dallowconnections);
	if (dconnlimit && dconnlimit->arg)
	{
		dbconnlimit = defGetInt32(dconnlimit);
		if (dbconnlimit < -1)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("invalid connection limit: %d", dbconnlimit)));
	}

	/* obtain OID of proposed owner */
	if (dbowner)
		datdba = get_role_oid(dbowner, false);
	else
		datdba = GetUserId();

	/*
	 * To create a database, must have createdb privilege and must be able to
	 * become the target role (this does not imply that the target role itself
	 * must have createdb privilege).  The latter provision guards against
	 * "giveaway" attacks.  Note that a superuser will always have both of
	 * these privileges a fortiori.
	 */
	if (!have_createdb_privilege())
		ereport(ERROR,
				(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
				 errmsg("permission denied to create database")));

	check_is_member_of_role(GetUserId(), datdba);

	/*
	 * Lookup database (template) to be cloned, and obtain share lock on it.
	 * ShareLock allows two CREATE DATABASEs to work from the same template
	 * concurrently, while ensuring no one is busy dropping it in parallel
	 * (which would be Very Bad since we'd likely get an incomplete copy
	 * without knowing it).  This also prevents any new connections from being
	 * made to the source until we finish copying it, so we can be sure it
	 * won't change underneath us.
	 */
	if (!dbtemplate)
		dbtemplate = "template1";	/* Default template database name */

	if (!get_db_info(dbtemplate, ShareLock,
					 &src_dboid, &src_owner, &src_encoding,
					 &src_istemplate, &src_allowconn, &src_lastsysoid,
					 &src_frozenxid, &src_minmxid, &src_deftablespace,
|
|
|
&src_collate, &src_ctype))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_UNDEFINED_DATABASE),
|
2006-05-04 18:07:29 +02:00
|
|
|
errmsg("template database \"%s\" does not exist",
|
|
|
|
dbtemplate)));
	/*
	 * Permission check: to copy a DB that's not marked datistemplate, you
	 * must be superuser or the owner thereof.
	 */
	if (!src_istemplate)
	{
		if (!pg_database_ownercheck(src_dboid, GetUserId()))
			ereport(ERROR,
					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
					 errmsg("permission denied to copy database \"%s\"",
							dbtemplate)));
	}
	/* If encoding or locales are defaulted, use source's setting */
	if (encoding < 0)
		encoding = src_encoding;
	if (dbcollate == NULL)
		dbcollate = src_collate;
	if (dbctype == NULL)
		dbctype = src_ctype;
	/* Some encodings are client only */
	if (!PG_VALID_BE_ENCODING(encoding))
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("invalid server encoding %d", encoding)));
	/* Check that the chosen locales are valid, and get canonical spellings */
	if (!check_locale(LC_COLLATE, dbcollate, &canonname))
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("invalid locale name: \"%s\"", dbcollate)));
	dbcollate = canonname;
	if (!check_locale(LC_CTYPE, dbctype, &canonname))
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("invalid locale name: \"%s\"", dbctype)));
	dbctype = canonname;
	check_encoding_locale_matches(encoding, dbcollate, dbctype);

	/*
	 * Check that the new encoding and locale settings match the source
	 * database.  We insist on this because we simply copy the source data ---
	 * any non-ASCII data would be wrongly encoded, and any indexes sorted
	 * according to the source locale would be wrong.
	 *
	 * However, we assume that template0 doesn't contain any non-ASCII data
	 * nor any indexes that depend on collation or ctype, so template0 can be
	 * used as template for creating a database with any encoding or locale.
	 */
	if (strcmp(dbtemplate, "template0") != 0)
	{
		if (encoding != src_encoding)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("new encoding (%s) is incompatible with the encoding of the template database (%s)",
							pg_encoding_to_char(encoding),
							pg_encoding_to_char(src_encoding)),
					 errhint("Use the same encoding as in the template database, or use template0 as template.")));

		if (strcmp(dbcollate, src_collate) != 0)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("new collation (%s) is incompatible with the collation of the template database (%s)",
							dbcollate, src_collate),
					 errhint("Use the same collation as in the template database, or use template0 as template.")));

		if (strcmp(dbctype, src_ctype) != 0)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("new LC_CTYPE (%s) is incompatible with the LC_CTYPE of the template database (%s)",
							dbctype, src_ctype),
					 errhint("Use the same LC_CTYPE as in the template database, or use template0 as template.")));
	}
	/* Resolve default tablespace for new database */
	if (dtablespacename && dtablespacename->arg)
	{
		char	   *tablespacename;
		AclResult	aclresult;

		tablespacename = defGetString(dtablespacename);
		dst_deftablespace = get_tablespace_oid(tablespacename, false);
		/* check permissions */
		aclresult = pg_tablespace_aclcheck(dst_deftablespace, GetUserId(),
										   ACL_CREATE);
		if (aclresult != ACLCHECK_OK)
			aclcheck_error(aclresult, OBJECT_TABLESPACE,
						   tablespacename);

		/* pg_global must never be the default tablespace */
		if (dst_deftablespace == GLOBALTABLESPACE_OID)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("pg_global cannot be used as default tablespace")));

		/*
		 * If we are trying to change the default tablespace of the template,
		 * we require that the template not have any files in the new default
		 * tablespace.  This is necessary because otherwise the copied
		 * database would contain pg_class rows that refer to its default
		 * tablespace both explicitly (by OID) and implicitly (as zero), which
		 * would cause problems.  For example another CREATE DATABASE using
		 * the copied database as template, and trying to change its default
		 * tablespace again, would yield outright incorrect results (it would
		 * improperly move tables to the new default tablespace that should
		 * stay in the same tablespace).
		 */
		if (dst_deftablespace != src_deftablespace)
		{
			char	   *srcpath;
			struct stat st;

			srcpath = GetDatabasePath(src_dboid, dst_deftablespace);

			if (stat(srcpath, &st) == 0 &&
				S_ISDIR(st.st_mode) &&
				!directory_is_empty(srcpath))
				ereport(ERROR,
						(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
						 errmsg("cannot assign new default tablespace \"%s\"",
								tablespacename),
						 errdetail("There is a conflict because database \"%s\" already has some tables in this tablespace.",
								   dbtemplate)));
			pfree(srcpath);
		}
	}
	else
	{
		/* Use template database's default tablespace */
		dst_deftablespace = src_deftablespace;
		/* Note there is no additional permission check in this path */
	}
	/*
	 * If built with appropriate switch, whine when regression-testing
	 * conventions for database names are violated.  But don't complain during
	 * initdb.
	 */
#ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS
	if (IsUnderPostmaster && strstr(dbname, "regression") == NULL)
		elog(WARNING, "databases created by regression test cases should have names including \"regression\"");
#endif
	/*
	 * Check for db name conflict.  This is just to give a more friendly error
	 * message than "unique index violation".  There's a race condition but
	 * we're willing to accept the less friendly message in that case.
	 */
	if (OidIsValid(get_database_oid(dbname, true)))
		ereport(ERROR,
				(errcode(ERRCODE_DUPLICATE_DATABASE),
				 errmsg("database \"%s\" already exists", dbname)));
	/*
	 * The source DB can't have any active backends, except this one
	 * (exception is to allow CREATE DB while connected to template1).
	 * Otherwise we might copy inconsistent data.
	 *
	 * This should be last among the basic error checks, because it involves
	 * potential waiting; we may as well throw an error first if we're gonna
	 * throw one.
	 */
	if (CountOtherDBBackends(src_dboid, &notherbackends, &npreparedxacts))
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("source database \"%s\" is being accessed by other users",
						dbtemplate),
				 errdetail_busy_db(notherbackends, npreparedxacts)));
	/*
	 * Select an OID for the new database, checking that it doesn't have a
	 * filename conflict with anything already existing in the tablespace
	 * directories.
	 */
	pg_database_rel = table_open(DatabaseRelationId, RowExclusiveLock);

	do
	{
		dboid = GetNewOidWithIndex(pg_database_rel, DatabaseOidIndexId,
								   Anum_pg_database_oid);
	} while (check_db_file_conflict(dboid));
	/*
	 * Insert a new tuple into pg_database.  This establishes our ownership of
	 * the new database name (anyone else trying to insert the same name will
	 * block on the unique index, and fail after we commit).
	 */

	/* Form tuple */
	MemSet(new_record, 0, sizeof(new_record));
	MemSet(new_record_nulls, false, sizeof(new_record_nulls));
	new_record[Anum_pg_database_oid - 1] = ObjectIdGetDatum(dboid);
	new_record[Anum_pg_database_datname - 1] =
		DirectFunctionCall1(namein, CStringGetDatum(dbname));
	new_record[Anum_pg_database_datdba - 1] = ObjectIdGetDatum(datdba);
	new_record[Anum_pg_database_encoding - 1] = Int32GetDatum(encoding);
	new_record[Anum_pg_database_datcollate - 1] =
		DirectFunctionCall1(namein, CStringGetDatum(dbcollate));
	new_record[Anum_pg_database_datctype - 1] =
		DirectFunctionCall1(namein, CStringGetDatum(dbctype));
	new_record[Anum_pg_database_datistemplate - 1] = BoolGetDatum(dbistemplate);
	new_record[Anum_pg_database_datallowconn - 1] = BoolGetDatum(dballowconnections);
	new_record[Anum_pg_database_datconnlimit - 1] = Int32GetDatum(dbconnlimit);
	new_record[Anum_pg_database_datlastsysoid - 1] = ObjectIdGetDatum(src_lastsysoid);
	new_record[Anum_pg_database_datfrozenxid - 1] = TransactionIdGetDatum(src_frozenxid);
|
Improve concurrency of foreign key locking
This patch introduces two additional lock modes for tuples: "SELECT FOR
KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each
other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
FOR UPDATE". UPDATE commands that do not modify the values stored in
the columns that are part of the key of the tuple now grab a SELECT FOR
NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
with tuple locks of the FOR KEY SHARE variety.
Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
means the concurrency improvement applies to them, which is the whole
point of this patch.
The added tuple lock semantics require some rejiggering of the multixact
module, so that the locking level that each transaction is holding can
be stored alongside its Xid. Also, multixacts now need to persist
across server restarts and crashes, because they can now represent not
only tuple locks, but also tuple updates. This means we need more
careful tracking of lifetime of pg_multixact SLRU files; since they now
persist longer, we require more infrastructure to figure out when they
can be removed. pg_upgrade also needs to be careful to copy
pg_multixact files over from the old server to the new, or at least part
of multixact.c state, depending on the versions of the old and new
servers.
Tuple time qualification rules (HeapTupleSatisfies routines) need to be
careful not to consider tuples with the "is multi" infomask bit set as
being only locked; they might need to look up MultiXact values (i.e.
possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
whereas they previously were assured to only use information readily
available from the tuple header. This is considered acceptable, because
the extra I/O would involve cases that would previously cause some
commands to block waiting for concurrent transactions to finish.
Another important change is the fact that locking tuples that have
previously been updated causes the future versions to be marked as
locked, too; this is essential for correctness of foreign key checks.
This causes additional WAL-logging, also (there was previously a single
WAL record for a locked tuple; now there are as many as updated copies
of the tuple there exist.)
With all this in place, contention related to tuples being checked by
foreign key rules should be much reduced.
As a bonus, the old behavior that a subtransaction grabbing a stronger
tuple lock than the parent (sub)transaction held on a given tuple and
later aborting caused the weaker lock to be lost, has been fixed.
Many new spec files were added for isolation tester framework, to ensure
overall behavior is sane. There's probably room for several more tests.
There were several reviewers of this patch; in particular, Noah Misch
and Andres Freund spent considerable time in it. Original idea for the
patch came from Simon Riggs, after a problem report by Joel Jacobson.
Most code is from me, with contributions from Marti Raudsepp, Alexander
Shulgin, Noah Misch and Andres Freund.
This patch was discussed in several pgsql-hackers threads; the most
important start at the following message-ids:
AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
1290721684-sup-3951@alvh.no-ip.org
1294953201-sup-2099@alvh.no-ip.org
1320343602-sup-2290@alvh.no-ip.org
1339690386-sup-8927@alvh.no-ip.org
4FE5FF020200002500048A3D@gw.wicourts.gov
4FEAB90A0200002500048B7D@gw.wicourts.gov
2013-01-23 16:04:59 +01:00
|
|
|
new_record[Anum_pg_database_datminmxid - 1] = TransactionIdGetDatum(src_minmxid);
|
2006-05-04 18:07:29 +02:00
|
|
|
new_record[Anum_pg_database_dattablespace - 1] = ObjectIdGetDatum(dst_deftablespace);
|
|
|
|
|
|
|
|
/*
|
2009-10-08 00:14:26 +02:00
|
|
|
* We deliberately set datacl to default (NULL), rather than copying it
|
2014-05-06 18:12:18 +02:00
|
|
|
* from the template database. Copying it would be a bad idea when the
|
2009-10-08 00:14:26 +02:00
|
|
|
* owner is not the same as the template's owner.
|
2006-05-04 18:07:29 +02:00
|
|
|
*/
|
2008-11-02 02:45:28 +01:00
|
|
|
new_record_nulls[Anum_pg_database_datacl - 1] = true;
|
2006-05-04 18:07:29 +02:00
|
|
|
|
2008-11-02 02:45:28 +01:00
|
|
|
tuple = heap_form_tuple(RelationGetDescr(pg_database_rel),
|
2009-06-11 16:49:15 +02:00
|
|
|
new_record, new_record_nulls);
|
2006-05-04 18:07:29 +02:00
|
|
|
|
2017-01-31 22:42:24 +01:00
|
|
|
CatalogTupleInsert(pg_database_rel, tuple);
|
2006-05-04 18:07:29 +02:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Now generate additional catalog entries associated with the new DB
|
|
|
|
*/
|
|
|
|
|
|
|
|
/* Register owner dependency */
|
|
|
|
recordDependencyOnOwner(DatabaseRelationId, dboid, datdba);
|
|
|
|
|
|
|
|
/* Create pg_shdepend entries for objects within database */
|
|
|
|
copyTemplateDependencies(src_dboid, dboid);
|
2000-10-22 19:55:49 +02:00
|
|
|
|
2010-11-25 17:48:49 +01:00
|
|
|
/* Post creation hook for new database */
|
2013-03-07 02:52:06 +01:00
|
|
|
InvokeObjectPostCreateHook(DatabaseRelationId, dboid, 0);
|
2010-11-25 17:48:49 +01:00
|
|
|
|
2001-01-14 23:14:10 +01:00
|
|
|
/*
|
2014-10-20 23:43:46 +02:00
|
|
|
* Force a checkpoint before starting the copy. This will force all dirty
|
|
|
|
* buffers, including those of unlogged tables, out to disk, to ensure
|
|
|
|
* source database is up-to-date on disk for the copy.
|
2015-05-24 03:35:49 +02:00
|
|
|
* FlushDatabaseBuffers() would suffice for that, but we also want to
|
|
|
|
* process any pending unlink requests. Otherwise, if a checkpoint
|
2014-10-20 23:43:46 +02:00
|
|
|
* happened while we're copying files, a file might be deleted just when
|
|
|
|
* we're about to copy it, causing the lstat() call in copydir() to fail
|
|
|
|
* with ENOENT.
|
2000-11-18 04:36:48 +01:00
|
|
|
*/
|
2014-10-20 23:43:46 +02:00
|
|
|
RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT
|
|
|
|
| CHECKPOINT_FLUSH_ALL);
|
1998-08-24 03:14:24 +02:00
|
|
|
|
2000-06-02 06:04:54 +02:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* Once we start copying subdirectories, we need to be able to clean 'em
|
2008-04-17 01:59:40 +02:00
|
|
|
* up if we fail. Use an ENSURE block to make sure this happens. (This
|
2005-10-15 04:49:52 +02:00
|
|
|
* is not a 100% solution, because of the possibility of failure during
|
|
|
|
* transaction commit after we leave this routine, but it should handle
|
|
|
|
* most scenarios.)
|
2000-06-02 06:04:54 +02:00
|
|
|
*/
|
2008-04-17 01:59:40 +02:00
|
|
|
fparms.src_dboid = src_dboid;
|
|
|
|
fparms.dest_dboid = dboid;
|
|
|
|
PG_ENSURE_ERROR_CLEANUP(createdb_failure_callback,
|
|
|
|
PointerGetDatum(&fparms));
|
2004-06-18 08:14:31 +02:00
|
|
|
{
        /*
         * Iterate through all tablespaces of the template database, and copy
         * each one to the new database.
         */
        rel = table_open(TableSpaceRelationId, AccessShareLock);
        scan = table_beginscan_catalog(rel, 0, NULL);
        while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
        {
            Form_pg_tablespace spaceform = (Form_pg_tablespace) GETSTRUCT(tuple);
            Oid         srctablespace = spaceform->oid;
            Oid         dsttablespace;
            char       *srcpath;
            char       *dstpath;
            struct stat st;

            /* No need to copy global tablespace */
            if (srctablespace == GLOBALTABLESPACE_OID)
                continue;

            srcpath = GetDatabasePath(src_dboid, srctablespace);

            if (stat(srcpath, &st) < 0 || !S_ISDIR(st.st_mode) ||
                directory_is_empty(srcpath))
            {
                /* Assume we can ignore it */
                pfree(srcpath);
                continue;
            }

            if (srctablespace == src_deftablespace)
                dsttablespace = dst_deftablespace;
            else
                dsttablespace = srctablespace;

            dstpath = GetDatabasePath(dboid, dsttablespace);

            /*
             * Copy this subdirectory to the new location
             *
             * We don't need to copy subdirectories
             */
            copydir(srcpath, dstpath, false);

            /* Record the filesystem change in XLOG */
            {
                xl_dbase_create_rec xlrec;

                xlrec.db_id = dboid;
                xlrec.tablespace_id = dsttablespace;
                xlrec.src_db_id = src_dboid;
                xlrec.src_tablespace_id = srctablespace;

                XLogBeginInsert();
                XLogRegisterData((char *) &xlrec, sizeof(xl_dbase_create_rec));

                (void) XLogInsert(RM_DBASE_ID,
                                  XLOG_DBASE_CREATE | XLR_SPECIAL_REL_UPDATE);
            }
        }
        table_endscan(scan);
        table_close(rel, AccessShareLock);

        /*
         * We force a checkpoint before committing. This effectively means
         * that committed XLOG_DBASE_CREATE operations will never need to be
         * replayed (at least not in ordinary crash recovery; we still have to
         * make the XLOG entry for the benefit of PITR operations). This
         * avoids two nasty scenarios:
         *
         * #1: When PITR is off, we don't XLOG the contents of newly created
         * indexes; therefore the drop-and-recreate-whole-directory behavior
         * of DBASE_CREATE replay would lose such indexes.
         *
         * #2: Since we have to recopy the source database during DBASE_CREATE
         * replay, we run the risk of copying changes in it that were
         * committed after the original CREATE DATABASE command but before the
         * system crash that led to the replay. This is at least unexpected
         * and at worst could lead to inconsistencies, eg duplicate table
         * names.
         *
         * (Both of these were real bugs in releases 8.0 through 8.0.3.)
         *
         * In PITR replay, the first of these isn't an issue, and the second
         * is only a risk if the CREATE DATABASE and subsequent template
         * database change both occur while a base backup is being taken.
         * There doesn't seem to be much we can do about that except document
         * it as a limitation.
         *
         * Perhaps if we ever implement CREATE DATABASE in a less cheesy way,
         * we can avoid this.
         */
        RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);

        /*
         * Close pg_database, but keep lock till commit.
         */
        table_close(pg_database_rel, NoLock);

        /*
         * Force synchronous commit, thus minimizing the window between
         * creation of the database files and committal of the transaction. If
         * we crash before committing, we'll have a DB that's taking up disk
         * space but is not in pg_database, which is not good.
         */
        ForceSyncCommit();
    }
    PG_END_ENSURE_ERROR_CLEANUP(createdb_failure_callback,
                                PointerGetDatum(&fparms));

    return dboid;
}

/*
 * Check whether chosen encoding matches chosen locale settings.  This
 * restriction is necessary because libc's locale-specific code usually
 * fails when presented with data in an encoding it's not expecting.  We
 * allow mismatch in four cases:
 *
 * 1. locale encoding = SQL_ASCII, which means that the locale is C/POSIX
 * which works with any encoding.
 *
 * 2. locale encoding = -1, which means that we couldn't determine the
 * locale's encoding and have to trust the user to get it right.
 *
 * 3. selected encoding is UTF8 and platform is win32. This is because
 * UTF8 is a pseudo codepage that is supported in all locales since it's
 * converted to UTF16 before being used.
 *
 * 4. selected encoding is SQL_ASCII, but only if you're a superuser. This
 * is risky but we have historically allowed it --- notably, the
 * regression tests require it.
 *
 * Note: if you change this policy, fix initdb to match.
 */
void
check_encoding_locale_matches(int encoding, const char *collate, const char *ctype)
{
    int         ctype_encoding = pg_get_encoding_from_locale(ctype, true);
    int         collate_encoding = pg_get_encoding_from_locale(collate, true);

    if (!(ctype_encoding == encoding ||
          ctype_encoding == PG_SQL_ASCII ||
          ctype_encoding == -1 ||
#ifdef WIN32
          encoding == PG_UTF8 ||
#endif
          (encoding == PG_SQL_ASCII && superuser())))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("encoding \"%s\" does not match locale \"%s\"",
                        pg_encoding_to_char(encoding),
                        ctype),
                 errdetail("The chosen LC_CTYPE setting requires encoding \"%s\".",
                           pg_encoding_to_char(ctype_encoding))));

    if (!(collate_encoding == encoding ||
          collate_encoding == PG_SQL_ASCII ||
          collate_encoding == -1 ||
#ifdef WIN32
          encoding == PG_UTF8 ||
#endif
          (encoding == PG_SQL_ASCII && superuser())))
        ereport(ERROR,
                (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
                 errmsg("encoding \"%s\" does not match locale \"%s\"",
                        pg_encoding_to_char(encoding),
                        collate),
                 errdetail("The chosen LC_COLLATE setting requires encoding \"%s\".",
                           pg_encoding_to_char(collate_encoding))));
}

/* Error cleanup callback for createdb */
static void
createdb_failure_callback(int code, Datum arg)
{
	createdb_failure_params *fparms = (createdb_failure_params *) DatumGetPointer(arg);

	/*
	 * Release lock on source database before doing recursive remove. This is
	 * not essential but it seems desirable to release the lock as soon as
	 * possible.
	 */
	UnlockSharedObject(DatabaseRelationId, fparms->src_dboid, 0, ShareLock);

	/* Throw away any successfully copied subdirectories */
	remove_dbtablespaces(fparms->dest_dboid);
}


/*
 * DROP DATABASE
 */
void
dropdb(const char *dbname, bool missing_ok, bool force)
{
	Oid			db_id;
	bool		db_istemplate;
	Relation	pgdbrel;
	HeapTuple	tup;
	int			notherbackends;
	int			npreparedxacts;
	int			nslots,
				nslots_active;
	int			nsubscriptions;

	/*
	 * Look up the target database's OID, and get exclusive lock on it. We
	 * need this to ensure that no new backend starts up in the target
	 * database while we are deleting it (see postinit.c), and that no one is
	 * using it as a CREATE DATABASE template or trying to delete it for
	 * themselves.
	 */
	pgdbrel = table_open(DatabaseRelationId, RowExclusiveLock);

	if (!get_db_info(dbname, AccessExclusiveLock, &db_id, NULL, NULL,
					 &db_istemplate, NULL, NULL, NULL, NULL, NULL, NULL, NULL))
	{
		if (!missing_ok)
		{
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_DATABASE),
					 errmsg("database \"%s\" does not exist", dbname)));
		}
		else
		{
			/* Close pg_database, release the lock, since we changed nothing */
			table_close(pgdbrel, RowExclusiveLock);
			ereport(NOTICE,
					(errmsg("database \"%s\" does not exist, skipping",
							dbname)));
			return;
		}
	}

	/*
	 * Permission checks
	 */
	if (!pg_database_ownercheck(db_id, GetUserId()))
		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
					   dbname);

	/* DROP hook for the database being removed */
	InvokeObjectDropHook(DatabaseRelationId, db_id, 0);

	/*
	 * Disallow dropping a DB that is marked istemplate. This is just to
	 * prevent people from accidentally dropping template0 or template1; they
	 * can do so if they're really determined ...
	 */
	if (db_istemplate)
		ereport(ERROR,
				(errcode(ERRCODE_WRONG_OBJECT_TYPE),
				 errmsg("cannot drop a template database")));

	/* Obviously can't drop my own database */
	if (db_id == MyDatabaseId)
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("cannot drop the currently open database")));

	/*
	 * Check whether there are active logical slots that refer to the
	 * to-be-dropped database. The database lock we are holding prevents the
	 * creation of new slots using the database or existing slots becoming
	 * active.
	 */
	(void) ReplicationSlotsCountDBSlots(db_id, &nslots, &nslots_active);
	if (nslots_active)
	{
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("database \"%s\" is used by an active logical replication slot",
						dbname),
				 errdetail_plural("There is %d active slot.",
								  "There are %d active slots.",
								  nslots_active, nslots_active)));
	}

	/*
	 * Check if there are subscriptions defined in the target database.
	 *
	 * We can't drop them automatically because they might be holding
	 * resources in other databases/instances.
	 */
	if ((nsubscriptions = CountDBSubscriptions(db_id)) > 0)
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("database \"%s\" is being used by logical replication subscription",
						dbname),
				 errdetail_plural("There is %d subscription.",
								  "There are %d subscriptions.",
								  nsubscriptions, nsubscriptions)));

	/*
	 * Attempt to terminate all existing connections to the target database if
	 * the user has requested to do so.
	 */
	if (force)
		TerminateOtherDBBackends(db_id);

	/*
	 * Check for other backends in the target database. (Because we hold the
	 * database lock, no new ones can start after this.)
	 *
	 * As in CREATE DATABASE, check this after other error conditions.
	 */
	if (CountOtherDBBackends(db_id, &notherbackends, &npreparedxacts))
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("database \"%s\" is being accessed by other users",
						dbname),
				 errdetail_busy_db(notherbackends, npreparedxacts)));

	/*
	 * Remove the database's tuple from pg_database.
	 */
	tup = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(db_id));
	if (!HeapTupleIsValid(tup))
		elog(ERROR, "cache lookup failed for database %u", db_id);

	CatalogTupleDelete(pgdbrel, &tup->t_self);

	ReleaseSysCache(tup);

	/*
	 * Delete any comments or security labels associated with the database.
	 */
	DeleteSharedComments(db_id, DatabaseRelationId);
	DeleteSharedSecurityLabel(db_id, DatabaseRelationId);

	/*
	 * Remove settings associated with this database
	 */
	DropSetting(db_id, InvalidOid);

	/*
	 * Remove shared dependency references for the database.
	 */
	dropDatabaseDependencies(db_id);

	/*
	 * Drop db-specific replication slots.
	 */
	ReplicationSlotsDropDBSlots(db_id);

	/*
	 * Drop pages for this database that are in the shared buffer cache. This
	 * is important to ensure that no remaining backend tries to write out a
	 * dirty buffer to the dead database later...
	 */
	DropDatabaseBuffers(db_id);

	/*
	 * Tell the stats collector to forget it immediately, too.
	 */
	pgstat_drop_database(db_id);

	/*
	 * Tell checkpointer to forget any pending fsync and unlink requests for
	 * files in the database; else the fsyncs will fail at next checkpoint, or
	 * worse, it will delete files that belong to a newly created database
	 * with the same OID.
	 */
	ForgetDatabaseSyncRequests(db_id);

	/*
	 * Force a checkpoint to make sure the checkpointer has received the
	 * message sent by ForgetDatabaseSyncRequests. On Windows, this also
	 * ensures that background procs don't hold any open files, which would
	 * cause rmdir() to fail.
	 */
	RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);

	/*
	 * Remove all tablespace subdirs belonging to the database.
	 */
	remove_dbtablespaces(db_id);

	/*
	 * Close pg_database, but keep lock till commit.
	 */
	table_close(pgdbrel, NoLock);

	/*
	 * Force synchronous commit, thus minimizing the window between removal of
	 * the database files and committal of the transaction. If we crash before
	 * committing, we'll have a DB that's gone on disk but still there
	 * according to pg_database, which is not good.
	 */
	ForceSyncCommit();
}
|
|
|
|
|
1999-12-12 06:15:10 +01:00
|
|
|
|
2003-06-27 16:45:32 +02:00
|
|
|
/*
|
|
|
|
* Rename database
|
|
|
|
*/
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress
|
2003-06-27 16:45:32 +02:00
|
|
|
RenameDatabase(const char *oldname, const char *newname)
|
|
|
|
{
|
2006-05-04 18:07:29 +02:00
|
|
|
Oid db_id;
|
|
|
|
HeapTuple newtup;
|
2003-06-27 16:45:32 +02:00
|
|
|
Relation rel;
|
2008-08-04 20:03:46 +02:00
|
|
|
int notherbackends;
|
|
|
|
int npreparedxacts;
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
implementing the event trigger changes it turned out that ObjectAddress
is more widely useful.
Additionally, some command execution routines grew an output argument
that's an object address which provides further info about the executed
command. To wit:
* for ALTER DOMAIN / ADD CONSTRAINT, it corresponds to the address of
the new constraint
* for ALTER OBJECT / SET SCHEMA, it corresponds to the address of the
schema that originally contained the object.
* for ALTER EXTENSION {ADD, DROP} OBJECT, it corresponds to the address
of the object added to or dropped from the extension.
There's no user-visible change in this commit, and no functional change
either.
Discussion: 20150218213255.GC6717@tamriel.snowman.net
Reviewed-By: Stephen Frost, Andres Freund
2015-03-03 18:10:50 +01:00
|
|
|
ObjectAddress address;
|
2003-06-27 16:45:32 +02:00
|
|
|
|
|
|
|
/*
|
2006-10-04 02:30:14 +02:00
|
|
|
* Look up the target database's OID, and get exclusive lock on it. We
|
|
|
|
* need this for the same reasons as DROP DATABASE.
|
2003-06-27 16:45:32 +02:00
|
|
|
*/
|
2019-01-21 19:32:19 +01:00
|
|
|
rel = table_open(DatabaseRelationId, RowExclusiveLock);
|
2003-06-27 16:45:32 +02:00
|
|
|
|
2006-05-04 18:07:29 +02:00
|
|
|
if (!get_db_info(oldname, AccessExclusiveLock, &db_id, NULL, NULL,
|
Improve concurrency of foreign key locking
This patch introduces two additional lock modes for tuples: "SELECT FOR
KEY SHARE" and "SELECT FOR NO KEY UPDATE". These don't block each
other, in contrast with already existing "SELECT FOR SHARE" and "SELECT
FOR UPDATE". UPDATE commands that do not modify the values stored in
the columns that are part of the key of the tuple now grab a SELECT FOR
NO KEY UPDATE lock on the tuple, allowing them to proceed concurrently
with tuple locks of the FOR KEY SHARE variety.
Foreign key triggers now use FOR KEY SHARE instead of FOR SHARE; this
means the concurrency improvement applies to them, which is the whole
point of this patch.
The added tuple lock semantics require some rejiggering of the multixact
module, so that the locking level that each transaction is holding can
be stored alongside its Xid. Also, multixacts now need to persist
across server restarts and crashes, because they can now represent not
only tuple locks, but also tuple updates. This means we need more
careful tracking of lifetime of pg_multixact SLRU files; since they now
persist longer, we require more infrastructure to figure out when they
can be removed. pg_upgrade also needs to be careful to copy
pg_multixact files over from the old server to the new, or at least part
of multixact.c state, depending on the versions of the old and new
servers.
Tuple time qualification rules (HeapTupleSatisfies routines) need to be
careful not to consider tuples with the "is multi" infomask bit set as
being only locked; they might need to look up MultiXact values (i.e.
possibly do pg_multixact I/O) to find out the Xid that updated a tuple,
whereas they previously were assured to only use information readily
available from the tuple header. This is considered acceptable, because
the extra I/O would involve cases that would previously cause some
commands to block waiting for concurrent transactions to finish.
Another important change is the fact that locking tuples that have
previously been updated causes the future versions to be marked as
locked, too; this is essential for correctness of foreign key checks.
This causes additional WAL-logging, also (there was previously a single
WAL record for a locked tuple; now there are as many as updated copies
of the tuple there exist.)
With all this in place, contention related to tuples being checked by
foreign key rules should be much reduced.
As a bonus, the old behavior that a subtransaction grabbing a stronger
tuple lock than the parent (sub)transaction held on a given tuple and
later aborting caused the weaker lock to be lost, has been fixed.
Many new spec files were added for isolation tester framework, to ensure
overall behavior is sane. There's probably room for several more tests.
There were several reviewers of this patch; in particular, Noah Misch
and Andres Freund spent considerable time in it. Original idea for the
patch came from Simon Riggs, after a problem report by Joel Jacobson.
Most code is from me, with contributions from Marti Raudsepp, Alexander
Shulgin, Noah Misch and Andres Freund.
This patch was discussed in several pgsql-hackers threads; the most
important start at the following message-ids:
AANLkTimo9XVcEzfiBR-ut3KVNDkjm2Vxh+t8kAmWjPuv@mail.gmail.com
1290721684-sup-3951@alvh.no-ip.org
1294953201-sup-2099@alvh.no-ip.org
1320343602-sup-2290@alvh.no-ip.org
1339690386-sup-8927@alvh.no-ip.org
4FE5FF020200002500048A3D@gw.wicourts.gov
4FEAB90A0200002500048B7D@gw.wicourts.gov
2013-01-23 16:04:59 +01:00
|
|
|
NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL))
|
2003-06-27 16:45:32 +02:00
|
|
|
ereport(ERROR,
|
2003-07-19 01:20:33 +02:00
|
|
|
(errcode(ERRCODE_UNDEFINED_DATABASE),
|
2003-06-27 16:45:32 +02:00
|
|
|
errmsg("database \"%s\" does not exist", oldname)));
|
|
|
|
|
2007-06-01 21:38:07 +02:00
|
|
|
/* must be owner */
|
|
|
|
if (!pg_database_ownercheck(db_id, GetUserId()))
|
2017-12-02 15:26:34 +01:00
|
|
|
aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
|
2007-06-01 21:38:07 +02:00
|
|
|
oldname);
|
|
|
|
|
|
|
|
/* must have createdb rights */
|
2014-12-23 19:35:49 +01:00
|
|
|
if (!have_createdb_privilege())
|
2007-06-01 21:38:07 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
|
|
|
|
errmsg("permission denied to rename database")));
|
|
|
|
|
Add an enforcement mechanism for global object names in regression tests.
In commit 18555b132 we tentatively established a rule that regression
tests should use names containing "regression" for databases, and names
starting with "regress_" for all other globally-visible object names, so
as to circumscribe the side-effects that "make installcheck" could have
on an existing installation.
This commit adds a simple enforcement mechanism for that rule: if the code
is compiled with ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS defined, it
will emit a warning (not an error) whenever a database, role, tablespace,
subscription, or replication origin name is created that doesn't obey the
rule. Running one or more buildfarm members with that symbol defined
should be enough to catch new violations, at least in the regular
regression tests. Most TAP tests wouldn't notice such warnings, but
that's actually fine because TAP tests don't execute against an existing
server anyway.
Since it's already the case that running src/test/modules/ tests in
installcheck mode is deprecated, we can use that as a home for tests
that seem unsafe to run against an existing server, such as tests that
might have side-effects on existing roles. Document that (though this
commit doesn't in itself make it any less safe than before).
Update regress.sgml to define these restrictions more clearly, and
to clean up assorted lack-of-up-to-date-ness in its descriptions of
the available regression tests.
Discussion: https://postgr.es/m/16638.1468620817@sss.pgh.pa.us
2019-06-29 17:34:00 +02:00
|
|
|
/*
|
|
|
|
* If built with appropriate switch, whine when regression-testing
|
|
|
|
* conventions for database names are violated.
|
|
|
|
*/
|
|
|
|
#ifdef ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS
|
|
|
|
if (strstr(newname, "regression") == NULL)
|
|
|
|
elog(WARNING, "databases created by regression test cases should have names including \"regression\"");
|
|
|
|
#endif
|
|
|
|
|
2007-06-01 21:38:07 +02:00
|
|
|
/*
|
|
|
|
* Make sure the new name doesn't exist. See notes for same error in
|
|
|
|
* CREATE DATABASE.
|
|
|
|
*/
|
2010-08-05 16:45:09 +02:00
|
|
|
if (OidIsValid(get_database_oid(newname, true)))
|
2007-06-01 21:38:07 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_DUPLICATE_DATABASE),
|
|
|
|
errmsg("database \"%s\" already exists", newname)));
|
|
|
|
|
2003-06-27 16:45:32 +02:00
|
|
|
/*
|
2005-10-15 04:49:52 +02:00
|
|
|
* XXX Client applications probably store the current database somewhere,
|
|
|
|
* so renaming it could cause confusion. On the other hand, there may not
|
|
|
|
* be an actual problem besides a little confusion, so think about this
|
|
|
|
* and decide.
|
2003-06-27 16:45:32 +02:00
|
|
|
*/
|
2006-05-04 18:07:29 +02:00
|
|
|
if (db_id == MyDatabaseId)
|
2003-06-27 16:45:32 +02:00
|
|
|
ereport(ERROR,
|
2003-07-19 01:20:33 +02:00
|
|
|
(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
|
Wording cleanup for error messages. Also change can't -> cannot.
Standard English uses "may", "can", and "might" in different ways:
may - permission, "You may borrow my rake."
can - ability, "I can lift that log."
might - possibility, "It might rain today."
Unfortunately, in conversational English, their use is often mixed, as
in, "You may use this variable to do X", when in fact, "can" is a better
choice. Similarly, "It may crash" is better stated, "It might crash".
2007-02-01 20:10:30 +01:00
|
|
|
errmsg("current database cannot be renamed")));
|
2003-06-27 16:45:32 +02:00
|
|
|
|
|
|
|
/*
|
2006-10-04 02:30:14 +02:00
|
|
|
* Make sure the database does not have active sessions. This is the same
|
|
|
|
* concern as above, but applied to other sessions.
|
2007-06-01 21:38:07 +02:00
|
|
|
*
|
|
|
|
* As in CREATE DATABASE, check this after other error conditions.
|
2003-06-27 16:45:32 +02:00
|
|
|
*/
|
2008-08-04 20:03:46 +02:00
|
|
|
if (CountOtherDBBackends(db_id, ¬herbackends, &npreparedxacts))
|
2003-07-19 01:20:33 +02:00
|
|
|
ereport(ERROR,
|
|
|
|
(errcode(ERRCODE_OBJECT_IN_USE),
|
2005-10-15 04:49:52 +02:00
|
|
|
errmsg("database \"%s\" is being accessed by other users",
|
2008-08-04 20:03:46 +02:00
|
|
|
oldname),
|
|
|
|
errdetail_busy_db(notherbackends, npreparedxacts)));
|
2003-06-27 16:45:32 +02:00
|
|
|
|
|
|
|
/* rename */
|
2010-02-14 19:42:19 +01:00
|
|
|
newtup = SearchSysCacheCopy1(DATABASEOID, ObjectIdGetDatum(db_id));
|
2006-05-04 18:07:29 +02:00
|
|
|
if (!HeapTupleIsValid(newtup))
|
|
|
|
elog(ERROR, "cache lookup failed for database %u", db_id);
|
2003-06-27 16:45:32 +02:00
|
|
|
namestrcpy(&(((Form_pg_database) GETSTRUCT(newtup))->datname), newname);
|
2017-01-31 22:42:24 +01:00
|
|
|
CatalogTupleUpdate(rel, &newtup->t_self, newtup);
|
2003-06-27 16:45:32 +02:00
|
|
|
|
2013-03-18 03:55:14 +01:00
|
|
|
InvokeObjectPostAlterHook(DatabaseRelationId, db_id, 0);
|
|
|
|
|
Change many routines to return ObjectAddress rather than OID
The changed routines are mostly those that can be directly called by
ProcessUtilitySlow; the intention is to make the affected object
information more precise, in support for future event trigger changes.
Originally it was envisioned that the OID of the affected object would
be enough, and in most cases that is correct, but upon actually
	ObjectAddressSet(address, DatabaseRelationId, db_id);

	/*
	 * Close pg_database, but keep lock till commit.
	 */
	table_close(rel, NoLock);

	return address;
}
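The ObjectAddressSet() call above fills in the (classId, objectId, objectSubId) triple that routines like this one return to describe the affected object. A minimal stand-alone sketch of that shape, under illustrative names (ObjectAddressSketch, ObjectAddressSetSketch — the real definitions live in catalog/objectaddress.h, and Oid is an unsigned int in the server):

```c
/* Minimal analog of ObjectAddress: which catalog, which row, which column. */
typedef struct
{
	unsigned int classId;		/* OID of the system catalog */
	unsigned int objectId;		/* OID of the object itself */
	int			objectSubId;	/* column number, or 0 for the whole object */
} ObjectAddressSketch;

/* Analog of ObjectAddressSet(): whole-object form, so subId is zeroed. */
#define ObjectAddressSetSketch(addr, class_id, object_id) \
	do { \
		(addr).classId = (class_id); \
		(addr).objectId = (object_id); \
		(addr).objectSubId = 0; \
	} while (0)
```

Setting objectSubId to 0 is what distinguishes "this database" from "this column of this relation" in the real API.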

/*
 * ALTER DATABASE SET TABLESPACE
 */
static void
movedb(const char *dbname, const char *tblspcname)
{
	Oid			db_id;
	Relation	pgdbrel;
	int			notherbackends;
	int			npreparedxacts;
	HeapTuple	oldtuple,
				newtuple;
	Oid			src_tblspcoid,
				dst_tblspcoid;
	Datum		new_record[Natts_pg_database];
	bool		new_record_nulls[Natts_pg_database];
	bool		new_record_repl[Natts_pg_database];
	ScanKeyData scankey;
	SysScanDesc sysscan;
	AclResult	aclresult;
	char	   *src_dbpath;
	char	   *dst_dbpath;
	DIR		   *dstdir;
	struct dirent *xlde;
	movedb_failure_params fparms;

	/*
	 * Look up the target database's OID, and get exclusive lock on it. We
	 * need this to ensure that no new backend starts up in the database while
	 * we are moving it, and that no one is using it as a CREATE DATABASE
	 * template or trying to delete it.
	 */
	pgdbrel = table_open(DatabaseRelationId, RowExclusiveLock);

	if (!get_db_info(dbname, AccessExclusiveLock, &db_id, NULL, NULL,
					 NULL, NULL, NULL, NULL, NULL, &src_tblspcoid, NULL, NULL))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_DATABASE),
				 errmsg("database \"%s\" does not exist", dbname)));

	/*
	 * We actually need a session lock, so that the lock will persist across
	 * the commit/restart below.  (We could almost get away with letting the
	 * lock be released at commit, except that someone could try to move
	 * relations of the DB back into the old directory while we rmtree() it.)
	 */
	LockSharedObjectForSession(DatabaseRelationId, db_id, 0,
							   AccessExclusiveLock);

	/*
	 * Permission checks
	 */
	if (!pg_database_ownercheck(db_id, GetUserId()))
		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
					   dbname);

	/*
	 * Obviously can't move the tables of my own database
	 */
	if (db_id == MyDatabaseId)
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("cannot change the tablespace of the currently open database")));

	/*
	 * Get tablespace's oid
	 */
	dst_tblspcoid = get_tablespace_oid(tblspcname, false);

	/*
	 * Permission checks
	 */
	aclresult = pg_tablespace_aclcheck(dst_tblspcoid, GetUserId(),
									   ACL_CREATE);
	if (aclresult != ACLCHECK_OK)
		aclcheck_error(aclresult, OBJECT_TABLESPACE,
					   tblspcname);

	/*
	 * pg_global must never be the default tablespace
	 */
	if (dst_tblspcoid == GLOBALTABLESPACE_OID)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("pg_global cannot be used as default tablespace")));

	/*
	 * No-op if same tablespace
	 */
	if (src_tblspcoid == dst_tblspcoid)
	{
		table_close(pgdbrel, NoLock);
		UnlockSharedObjectForSession(DatabaseRelationId, db_id, 0,
									 AccessExclusiveLock);
		return;
	}

	/*
	 * Check for other backends in the target database.  (Because we hold the
	 * database lock, no new ones can start after this.)
	 *
	 * As in CREATE DATABASE, check this after other error conditions.
	 */
	if (CountOtherDBBackends(db_id, &notherbackends, &npreparedxacts))
		ereport(ERROR,
				(errcode(ERRCODE_OBJECT_IN_USE),
				 errmsg("database \"%s\" is being accessed by other users",
						dbname),
				 errdetail_busy_db(notherbackends, npreparedxacts)));

	/*
	 * Get old and new database paths
	 */
	src_dbpath = GetDatabasePath(db_id, src_tblspcoid);
	dst_dbpath = GetDatabasePath(db_id, dst_tblspcoid);

	/*
	 * Force a checkpoint before proceeding. This will force all dirty
	 * buffers, including those of unlogged tables, out to disk, to ensure
	 * source database is up-to-date on disk for the copy.
	 * FlushDatabaseBuffers() would suffice for that, but we also want to
	 * process any pending unlink requests. Otherwise, the check for existing
	 * files in the target directory might fail unnecessarily, not to mention
	 * that the copy might fail due to source files getting deleted under it.
	 * On Windows, this also ensures that background procs don't hold any open
	 * files, which would cause rmdir() to fail.
	 */
	RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT
					  | CHECKPOINT_FLUSH_ALL);

	/*
	 * Now drop all buffers holding data of the target database; they should
	 * no longer be dirty so DropDatabaseBuffers is safe.
	 *
	 * It might seem that we could just let these buffers age out of shared
	 * buffers naturally, since they should not get referenced anymore.  The
	 * problem with that is that if the user later moves the database back to
	 * its original tablespace, any still-surviving buffers would appear to
	 * contain valid data again --- but they'd be missing any changes made in
	 * the database while it was in the new tablespace.  In any case, freeing
	 * buffers that should never be used again seems worth the cycles.
	 *
	 * Note: it'd be sufficient to get rid of buffers matching db_id and
	 * src_tblspcoid, but bufmgr.c presently provides no API for that.
	 */
	DropDatabaseBuffers(db_id);

	/*
	 * Check for existence of files in the target directory, i.e., objects of
	 * this database that are already in the target tablespace.  We can't
	 * allow the move in such a case, because we would need to change those
	 * relations' pg_class.reltablespace entries to zero, and we don't have
	 * access to the DB's pg_class to do so.
	 */
	dstdir = AllocateDir(dst_dbpath);
	if (dstdir != NULL)
	{
		while ((xlde = ReadDir(dstdir, dst_dbpath)) != NULL)
		{
			if (strcmp(xlde->d_name, ".") == 0 ||
				strcmp(xlde->d_name, "..") == 0)
				continue;

			ereport(ERROR,
					(errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
					 errmsg("some relations of database \"%s\" are already in tablespace \"%s\"",
							dbname, tblspcname),
					 errhint("You must move them back to the database's default tablespace before using this command.")));
		}

		FreeDir(dstdir);

		/*
		 * The directory exists but is empty. We must remove it before using
		 * the copydir function.
		 */
		if (rmdir(dst_dbpath) != 0)
			elog(ERROR, "could not remove directory \"%s\": %m",
				 dst_dbpath);
	}

	/*
	 * Use an ENSURE block to make sure we remove the debris if the copy fails
	 * (eg, due to out-of-disk-space).  This is not a 100% solution, because
	 * of the possibility of failure during transaction commit, but it should
	 * handle most scenarios.
	 */
	fparms.dest_dboid = db_id;
	fparms.dest_tsoid = dst_tblspcoid;
	PG_ENSURE_ERROR_CLEANUP(movedb_failure_callback,
							PointerGetDatum(&fparms));
	{
		/*
		 * Copy files from the old tablespace to the new one
		 */
		copydir(src_dbpath, dst_dbpath, false);

		/*
		 * Record the filesystem change in XLOG
		 */
		{
			xl_dbase_create_rec xlrec;

			xlrec.db_id = db_id;
			xlrec.tablespace_id = dst_tblspcoid;
			xlrec.src_db_id = db_id;
			xlrec.src_tablespace_id = src_tblspcoid;

			XLogBeginInsert();
			XLogRegisterData((char *) &xlrec, sizeof(xl_dbase_create_rec));

			(void) XLogInsert(RM_DBASE_ID,
							  XLOG_DBASE_CREATE | XLR_SPECIAL_REL_UPDATE);
		}

		/*
		 * Update the database's pg_database tuple
		 */
		ScanKeyInit(&scankey,
					Anum_pg_database_datname,
					BTEqualStrategyNumber, F_NAMEEQ,
					CStringGetDatum(dbname));
		sysscan = systable_beginscan(pgdbrel, DatabaseNameIndexId, true,
									 NULL, 1, &scankey);
		oldtuple = systable_getnext(sysscan);
		if (!HeapTupleIsValid(oldtuple))	/* shouldn't happen... */
			ereport(ERROR,
					(errcode(ERRCODE_UNDEFINED_DATABASE),
					 errmsg("database \"%s\" does not exist", dbname)));

		MemSet(new_record, 0, sizeof(new_record));
		MemSet(new_record_nulls, false, sizeof(new_record_nulls));
		MemSet(new_record_repl, false, sizeof(new_record_repl));

		new_record[Anum_pg_database_dattablespace - 1] = ObjectIdGetDatum(dst_tblspcoid);
		new_record_repl[Anum_pg_database_dattablespace - 1] = true;

		newtuple = heap_modify_tuple(oldtuple, RelationGetDescr(pgdbrel),
									 new_record,
									 new_record_nulls, new_record_repl);
		CatalogTupleUpdate(pgdbrel, &oldtuple->t_self, newtuple);

		InvokeObjectPostAlterHook(DatabaseRelationId, db_id, 0);

		systable_endscan(sysscan);

		/*
		 * Force another checkpoint here.  As in CREATE DATABASE, this is to
		 * ensure that we don't have to replay a committed XLOG_DBASE_CREATE
		 * operation, which would cause us to lose any unlogged operations
		 * done in the new DB tablespace before the next checkpoint.
		 */
		RequestCheckpoint(CHECKPOINT_IMMEDIATE | CHECKPOINT_FORCE | CHECKPOINT_WAIT);

		/*
		 * Force synchronous commit, thus minimizing the window between
		 * copying the database files and committal of the transaction. If we
		 * crash before committing, we'll leave an orphaned set of files on
		 * disk, which is not fatal but not good either.
		 */
		ForceSyncCommit();

		/*
		 * Close pg_database, but keep lock till commit.
		 */
		table_close(pgdbrel, NoLock);
	}
	PG_END_ENSURE_ERROR_CLEANUP(movedb_failure_callback,
								PointerGetDatum(&fparms));

	/*
	 * Commit the transaction so that the pg_database update is committed. If
	 * we crash while removing files, the database won't be corrupt, we'll
	 * just leave some orphaned files in the old directory.
	 *
	 * (This is OK because we know we aren't inside a transaction block.)
	 *
	 * XXX would it be safe/better to do this inside the ensure block? Not
	 * convinced it's a good idea; consider elog just after the transaction
	 * really commits.
	 */
	PopActiveSnapshot();
	CommitTransactionCommand();

	/* Start new transaction for the remaining work; don't need a snapshot */
	StartTransactionCommand();

	/*
	 * Remove files from the old tablespace
	 */
	if (!rmtree(src_dbpath, true))
		ereport(WARNING,
				(errmsg("some useless files may be left behind in old database directory \"%s\"",
						src_dbpath)));

	/*
	 * Record the filesystem change in XLOG
	 */
	{
		xl_dbase_drop_rec xlrec;

		xlrec.db_id = db_id;
		xlrec.ntablespaces = 1;

		XLogBeginInsert();
		XLogRegisterData((char *) &xlrec, sizeof(xl_dbase_drop_rec));
		XLogRegisterData((char *) &src_tblspcoid, sizeof(Oid));

		(void) XLogInsert(RM_DBASE_ID,
						  XLOG_DBASE_DROP | XLR_SPECIAL_REL_UPDATE);
	}

	/* Now it's safe to release the database lock */
	UnlockSharedObjectForSession(DatabaseRelationId, db_id, 0,
								 AccessExclusiveLock);
}
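movedb() decides whether the target directory blocks the move by scanning it and treating only entries other than "." and ".." as real contents. AllocateDir()/ReadDir() are PostgreSQL's error-reporting wrappers around the POSIX dirent API; outside the backend the same check can be sketched with plain opendir()/readdir() (dir_is_empty is a hypothetical name, not a server function):

```c
#include <dirent.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/*
 * Return true if 'path' contains no entries besides "." and "..",
 * mirroring the scan movedb() does over the target directory.
 */
bool
dir_is_empty(const char *path)
{
	DIR		   *dir = opendir(path);
	struct dirent *de;
	bool		empty = true;

	if (dir == NULL)
		return true;			/* nonexistent counts as empty for this sketch */

	while ((de = readdir(dir)) != NULL)
	{
		if (strcmp(de->d_name, ".") == 0 ||
			strcmp(de->d_name, "..") == 0)
			continue;			/* skip the self and parent links */
		empty = false;			/* any other entry means "not empty" */
		break;
	}
	closedir(dir);
	return empty;
}
```

The real code reports the first offending entry via ereport(ERROR) instead of returning a flag, and then rmdir()s the (empty) directory before copydir() runs.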

/* Error cleanup callback for movedb */
static void
movedb_failure_callback(int code, Datum arg)
{
	movedb_failure_params *fparms = (movedb_failure_params *) DatumGetPointer(arg);
	char	   *dstpath;

	/* Get rid of anything we managed to copy to the target directory */
	dstpath = GetDatabasePath(fparms->dest_dboid, fparms->dest_tsoid);

	(void) rmtree(dstpath, true);
}
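movedb() wraps the copy in PG_ENSURE_ERROR_CLEANUP so that movedb_failure_callback() runs if anything errors out mid-copy; under the hood that macro pair rests on the server's sigsetjmp-based error jump (plus a shmem-exit registration, and it re-throws after cleanup). A minimal stand-alone analog of the control flow using plain setjmp/longjmp — all names here (run_protected, raise_error, and so on) are illustrative, not the server's API:

```c
#include <setjmp.h>
#include <stdbool.h>

static jmp_buf error_jmp;			/* stand-in for elog.h's jump buffer */
static bool cleanup_ran = false;

/* Analog of movedb_failure_callback: undo the partial work. */
static void
failure_callback(void)
{
	cleanup_ran = true;				/* the real callback rmtree()s the copy */
}

/* Analog of ereport(ERROR, ...): abandon the protected section. */
static void
raise_error(void)
{
	longjmp(error_jmp, 1);
}

/*
 * Run 'body' with cleanup-on-error protection.  Returns true if the body
 * completed; on error, runs the cleanup and returns false (the real macro
 * re-throws the error instead of returning).
 */
static bool
run_protected(void (*body) (void))
{
	if (setjmp(error_jmp) == 0)
	{
		body();						/* the PG_ENSURE_ERROR_CLEANUP block */
		return true;
	}
	failure_callback();				/* reached only when body "errored" */
	return false;
}

static void
ok_body(void)
{
}

static void
failing_body(void)
{
	raise_error();
}
```

The comment in movedb() is honest about the limits of this pattern: a crash during transaction commit can still leave debris, which is why the function also forces a checkpoint and a synchronous commit.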

/*
 * Process options and call dropdb function.
 */
void
DropDatabase(ParseState *pstate, DropdbStmt *stmt)
{
	bool		force = false;
	ListCell   *lc;

	foreach(lc, stmt->options)
	{
		DefElem    *opt = (DefElem *) lfirst(lc);

		if (strcmp(opt->defname, "force") == 0)
			force = true;
		else
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("unrecognized DROP DATABASE option \"%s\"", opt->defname),
					 parser_errposition(pstate, opt->location)));
	}

	dropdb(stmt->dbname, stmt->missing_ok, force);
}
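DropDatabase() reduces the statement's option list to a single boolean: "force" is the only recognized option, and anything else is a syntax error. Stripped of the List/DefElem machinery, the same scan can be sketched over a plain string array (parse_dropdb_options is a hypothetical name for illustration):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Scan option names the way DropDatabase() walks stmt->options:
 * "force" sets the flag, any other name is rejected.  Returns true on
 * success; the real code ereport(ERROR)s instead of returning false.
 */
bool
parse_dropdb_options(const char **opts, int nopts, bool *force)
{
	*force = false;
	for (int i = 0; i < nopts; i++)
	{
		if (strcmp(opts[i], "force") == 0)
			*force = true;
		else
			return false;		/* unrecognized DROP DATABASE option */
	}
	return true;
}
```

The force flag is simply forwarded to dropdb(), which is where the actual force semantics (terminating other sessions using the database) live.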

/*
 * ALTER DATABASE name ...
 */
Oid
AlterDatabase(ParseState *pstate, AlterDatabaseStmt *stmt, bool isTopLevel)
{
	Relation	rel;
	Oid			dboid;
	HeapTuple	tuple,
				newtuple;
	Form_pg_database datform;
	ScanKeyData scankey;
	SysScanDesc scan;
	ListCell   *option;
	bool		dbistemplate = false;
	bool		dballowconnections = true;
	int			dbconnlimit = -1;
	DefElem    *distemplate = NULL;
	DefElem    *dallowconnections = NULL;
	DefElem    *dconnlimit = NULL;
	DefElem    *dtablespace = NULL;
	Datum		new_record[Natts_pg_database];
	bool		new_record_nulls[Natts_pg_database];
	bool		new_record_repl[Natts_pg_database];

	/* Extract options from the statement node tree */
	foreach(option, stmt->options)
	{
		DefElem    *defel = (DefElem *) lfirst(option);

		if (strcmp(defel->defname, "is_template") == 0)
		{
			if (distemplate)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			distemplate = defel;
		}
		else if (strcmp(defel->defname, "allow_connections") == 0)
		{
			if (dallowconnections)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dallowconnections = defel;
		}
		else if (strcmp(defel->defname, "connection_limit") == 0)
		{
			if (dconnlimit)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dconnlimit = defel;
		}
		else if (strcmp(defel->defname, "tablespace") == 0)
		{
			if (dtablespace)
				ereport(ERROR,
						(errcode(ERRCODE_SYNTAX_ERROR),
						 errmsg("conflicting or redundant options"),
						 parser_errposition(pstate, defel->location)));
			dtablespace = defel;
		}
		else
			ereport(ERROR,
					(errcode(ERRCODE_SYNTAX_ERROR),
					 errmsg("option \"%s\" not recognized", defel->defname),
					 parser_errposition(pstate, defel->location)));
	}

	if (dtablespace)
	{
		/*
		 * While the SET TABLESPACE syntax doesn't allow any other options,
		 * somebody could write "WITH TABLESPACE ...".  Forbid any other
		 * options from being specified in that case.
		 */
		if (list_length(stmt->options) != 1)
			ereport(ERROR,
					(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
					 errmsg("option \"%s\" cannot be specified with other options",
							dtablespace->defname),
					 parser_errposition(pstate, dtablespace->location)));
		/* this case isn't allowed within a transaction block */
		PreventInTransactionBlock(isTopLevel, "ALTER DATABASE SET TABLESPACE");
		movedb(stmt->dbname, defGetString(dtablespace));
		return InvalidOid;
	}

	if (distemplate && distemplate->arg)
		dbistemplate = defGetBoolean(distemplate);
	if (dallowconnections && dallowconnections->arg)
		dballowconnections = defGetBoolean(dallowconnections);
	if (dconnlimit && dconnlimit->arg)
	{
		dbconnlimit = defGetInt32(dconnlimit);
		if (dbconnlimit < -1)
			ereport(ERROR,
					(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
					 errmsg("invalid connection limit: %d", dbconnlimit)));
	}

	/*
	 * Get the old tuple.  We don't need a lock on the database per se,
	 * because we're not going to do anything that would mess up incoming
	 * connections.
	 */
	rel = table_open(DatabaseRelationId, RowExclusiveLock);
	ScanKeyInit(&scankey,
				Anum_pg_database_datname,
				BTEqualStrategyNumber, F_NAMEEQ,
				CStringGetDatum(stmt->dbname));
	scan = systable_beginscan(rel, DatabaseNameIndexId, true,
							  NULL, 1, &scankey);
	tuple = systable_getnext(scan);
	if (!HeapTupleIsValid(tuple))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_DATABASE),
				 errmsg("database \"%s\" does not exist", stmt->dbname)));

	datform = (Form_pg_database) GETSTRUCT(tuple);
	dboid = datform->oid;

	if (!pg_database_ownercheck(dboid, GetUserId()))
		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
					   stmt->dbname);

	/*
	 * In order to avoid getting locked out and having to go through
	 * standalone mode, we refuse to disallow connections to the database
	 * we're currently connected to.  Lockout can still happen with concurrent
	 * sessions but the likeliness of that is not high enough to worry about.
	 */
	if (!dballowconnections && dboid == MyDatabaseId)
		ereport(ERROR,
				(errcode(ERRCODE_INVALID_PARAMETER_VALUE),
				 errmsg("cannot disallow connections for current database")));

	/*
	 * Build an updated tuple, perusing the information just obtained
	 */
	MemSet(new_record, 0, sizeof(new_record));
	MemSet(new_record_nulls, false, sizeof(new_record_nulls));
	MemSet(new_record_repl, false, sizeof(new_record_repl));

	if (distemplate)
	{
		new_record[Anum_pg_database_datistemplate - 1] = BoolGetDatum(dbistemplate);
		new_record_repl[Anum_pg_database_datistemplate - 1] = true;
	}
	if (dallowconnections)
	{
		new_record[Anum_pg_database_datallowconn - 1] = BoolGetDatum(dballowconnections);
		new_record_repl[Anum_pg_database_datallowconn - 1] = true;
	}
	if (dconnlimit)
	{
		new_record[Anum_pg_database_datconnlimit - 1] = Int32GetDatum(dbconnlimit);
		new_record_repl[Anum_pg_database_datconnlimit - 1] = true;
	}

	newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), new_record,
								 new_record_nulls, new_record_repl);
	CatalogTupleUpdate(rel, &tuple->t_self, newtuple);

	InvokeObjectPostAlterHook(DatabaseRelationId, dboid, 0);

	systable_endscan(scan);

	/* Close pg_database, but keep lock till commit */
	table_close(rel, NoLock);

	return dboid;
}


/*
 * ALTER DATABASE name SET ...
 */
Oid
AlterDatabaseSet(AlterDatabaseSetStmt *stmt)
{
	Oid			datid = get_database_oid(stmt->dbname, false);

	/*
	 * Obtain a lock on the database and make sure it didn't go away in the
	 * meantime.
	 */
	shdepLockAndCheckObject(DatabaseRelationId, datid);

	if (!pg_database_ownercheck(datid, GetUserId()))
		aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
					   stmt->dbname);

	AlterSetting(datid, InvalidOid, stmt->setstmt);

	UnlockSharedObject(DatabaseRelationId, datid, 0, AccessShareLock);

	return datid;
}


/*
 * ALTER DATABASE name OWNER TO newowner
 */
ObjectAddress
AlterDatabaseOwner(const char *dbname, Oid newOwnerId)
{
	Oid			db_id;
	HeapTuple	tuple;
	Relation	rel;
	ScanKeyData scankey;
	SysScanDesc scan;
	Form_pg_database datForm;
	ObjectAddress address;

	/*
	 * Get the old tuple.  We don't need a lock on the database per se,
	 * because we're not going to do anything that would mess up incoming
	 * connections.
	 */
	rel = table_open(DatabaseRelationId, RowExclusiveLock);
	ScanKeyInit(&scankey,
				Anum_pg_database_datname,
				BTEqualStrategyNumber, F_NAMEEQ,
				CStringGetDatum(dbname));
	scan = systable_beginscan(rel, DatabaseNameIndexId, true,
							  NULL, 1, &scankey);
	tuple = systable_getnext(scan);
	if (!HeapTupleIsValid(tuple))
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_DATABASE),
				 errmsg("database \"%s\" does not exist", dbname)));

	datForm = (Form_pg_database) GETSTRUCT(tuple);
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get merge this
now. It's painful to maintain externally, too complicated to commit
after the code code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
db_id = datForm->oid;
	/*
	 * If the new owner is the same as the existing owner, consider the
	 * command to have succeeded.  This is to be consistent with other
	 * objects.
	 */
	if (datForm->datdba != newOwnerId)
	{
		Datum		repl_val[Natts_pg_database];
		bool		repl_null[Natts_pg_database];
		bool		repl_repl[Natts_pg_database];
		Acl		   *newAcl;
		Datum		aclDatum;
		bool		isNull;
		HeapTuple	newtuple;

		/* Otherwise, must be owner of the existing object */
		if (!pg_database_ownercheck(db_id, GetUserId()))
			aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,
						   dbname);

		/* Must be able to become new owner */
		check_is_member_of_role(GetUserId(), newOwnerId);
		/*
		 * must have createdb rights
		 *
		 * NOTE: This is different from other alter-owner checks in that the
		 * current user is checked for createdb privileges instead of the
		 * destination owner.  This is consistent with the CREATE case for
		 * databases.  Because superusers will always have this right, we need
		 * no special case for them.
		 */
		if (!have_createdb_privilege())
			ereport(ERROR,
					(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
					 errmsg("permission denied to change owner of database")));

		memset(repl_null, false, sizeof(repl_null));
		memset(repl_repl, false, sizeof(repl_repl));

		repl_repl[Anum_pg_database_datdba - 1] = true;
		repl_val[Anum_pg_database_datdba - 1] = ObjectIdGetDatum(newOwnerId);
		/*
		 * Determine the modified ACL for the new owner.  This is only
		 * necessary when the ACL is non-null.
		 */
		aclDatum = heap_getattr(tuple,
								Anum_pg_database_datacl,
								RelationGetDescr(rel),
								&isNull);
		if (!isNull)
		{
			newAcl = aclnewowner(DatumGetAclP(aclDatum),
								 datForm->datdba, newOwnerId);
			repl_repl[Anum_pg_database_datacl - 1] = true;
			repl_val[Anum_pg_database_datacl - 1] = PointerGetDatum(newAcl);
		}

		newtuple = heap_modify_tuple(tuple, RelationGetDescr(rel), repl_val,
									 repl_null, repl_repl);
		CatalogTupleUpdate(rel, &newtuple->t_self, newtuple);

		heap_freetuple(newtuple);

		/* Update owner dependency reference */
		changeDependencyOnOwner(DatabaseRelationId, db_id, newOwnerId);
	}
	InvokeObjectPostAlterHook(DatabaseRelationId, db_id, 0);

	ObjectAddressSet(address, DatabaseRelationId, db_id);

	systable_endscan(scan);

	/* Close pg_database, but keep lock till commit */
	table_close(rel, NoLock);

	return address;
}

/*
 * Helper functions
 */

/*
 * Look up info about the database named "name".  If the database exists,
 * obtain the specified lock type on it, fill in any of the remaining
 * parameters that aren't NULL, and return true.  If no such database,
 * return false.
 */
static bool
get_db_info(const char *name, LOCKMODE lockmode,
			Oid *dbIdP, Oid *ownerIdP,
			int *encodingP, bool *dbIsTemplateP, bool *dbAllowConnP,
			Oid *dbLastSysOidP, TransactionId *dbFrozenXidP,
			MultiXactId *dbMinMultiP,
			Oid *dbTablespace, char **dbCollate, char **dbCtype)
{
	bool		result = false;
	Relation	relation;

	AssertArg(name);

	/* Caller may wish to grab a better lock on pg_database beforehand... */
	relation = table_open(DatabaseRelationId, AccessShareLock);

	/*
	 * Loop covers the rare case where the database is renamed before we can
	 * lock it.  We try again just in case we can find a new one of the same
	 * name.
	 */
	for (;;)
	{
		ScanKeyData scanKey;
		SysScanDesc scan;
		HeapTuple	tuple;
		Oid			dbOid;

		/*
		 * there's no syscache for database-indexed-by-name, so must do it the
		 * hard way
		 */
		ScanKeyInit(&scanKey,
					Anum_pg_database_datname,
					BTEqualStrategyNumber, F_NAMEEQ,
					CStringGetDatum(name));

		scan = systable_beginscan(relation, DatabaseNameIndexId, true,
								  NULL, 1, &scanKey);

		tuple = systable_getnext(scan);

		if (!HeapTupleIsValid(tuple))
		{
			/* definitely no database of that name */
			systable_endscan(scan);
			break;
		}
		dbOid = ((Form_pg_database) GETSTRUCT(tuple))->oid;

		systable_endscan(scan);

		/*
		 * Now that we have a database OID, we can try to lock the DB.
		 */
		if (lockmode != NoLock)
			LockSharedObject(DatabaseRelationId, dbOid, 0, lockmode);

		/*
		 * And now, re-fetch the tuple by OID.  If it's still there and still
		 * the same name, we win; else, drop the lock and loop back to try
		 * again.
		 */
		tuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(dbOid));
		if (HeapTupleIsValid(tuple))
		{
			Form_pg_database dbform = (Form_pg_database) GETSTRUCT(tuple);

			if (strcmp(name, NameStr(dbform->datname)) == 0)
			{
				/* oid of the database */
				if (dbIdP)
					*dbIdP = dbOid;
				/* oid of the owner */
				if (ownerIdP)
					*ownerIdP = dbform->datdba;
				/* character encoding */
				if (encodingP)
					*encodingP = dbform->encoding;
				/* allowed as template? */
				if (dbIsTemplateP)
					*dbIsTemplateP = dbform->datistemplate;
				/* allowing connections? */
				if (dbAllowConnP)
					*dbAllowConnP = dbform->datallowconn;
				/* last system OID used in database */
				if (dbLastSysOidP)
					*dbLastSysOidP = dbform->datlastsysoid;
				/* limit of frozen XIDs */
				if (dbFrozenXidP)
					*dbFrozenXidP = dbform->datfrozenxid;
				/* minimum MultiXactId */
				if (dbMinMultiP)
					*dbMinMultiP = dbform->datminmxid;
				/* default tablespace for this database */
				if (dbTablespace)
					*dbTablespace = dbform->dattablespace;
				/* default locale settings for this database */
				if (dbCollate)
					*dbCollate = pstrdup(NameStr(dbform->datcollate));
				if (dbCtype)
					*dbCtype = pstrdup(NameStr(dbform->datctype));
				ReleaseSysCache(tuple);
				result = true;
				break;
			}
			/* can only get here if it was just renamed */
			ReleaseSysCache(tuple);
		}

		if (lockmode != NoLock)
			UnlockSharedObject(DatabaseRelationId, dbOid, 0, lockmode);
	}

	table_close(relation, AccessShareLock);

	return result;
}

/* Check if current user has createdb privileges */
static bool
have_createdb_privilege(void)
{
	bool		result = false;
	HeapTuple	utup;

	/* Superusers can always do everything */
	if (superuser())
		return true;

	utup = SearchSysCache1(AUTHOID, ObjectIdGetDatum(GetUserId()));
	if (HeapTupleIsValid(utup))
	{
		result = ((Form_pg_authid) GETSTRUCT(utup))->rolcreatedb;
		ReleaseSysCache(utup);
	}
	return result;
}

/*
 * Remove tablespace directories
 *
 * We don't know what tablespaces db_id is using, so iterate through all
 * tablespaces removing <tablespace>/db_id
 */
static void
remove_dbtablespaces(Oid db_id)
{
	Relation	rel;
	TableScanDesc scan;
	HeapTuple	tuple;
	List	   *ltblspc = NIL;
	ListCell   *cell;
	int			ntblspc;
	int			i;
	Oid		   *tablespace_ids;

	rel = table_open(TableSpaceRelationId, AccessShareLock);
	scan = table_beginscan_catalog(rel, 0, NULL);
	while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
	{
		Form_pg_tablespace spcform = (Form_pg_tablespace) GETSTRUCT(tuple);
		Oid			dsttablespace = spcform->oid;
		char	   *dstpath;
		struct stat st;

		/* Don't mess with the global tablespace */
		if (dsttablespace == GLOBALTABLESPACE_OID)
			continue;

		dstpath = GetDatabasePath(db_id, dsttablespace);

		if (lstat(dstpath, &st) < 0 || !S_ISDIR(st.st_mode))
		{
			/* Assume we can ignore it */
			pfree(dstpath);
			continue;
		}

		if (!rmtree(dstpath, true))
			ereport(WARNING,
					(errmsg("some useless files may be left behind in old database directory \"%s\"",
							dstpath)));

		ltblspc = lappend_oid(ltblspc, dsttablespace);
		pfree(dstpath);
	}

	ntblspc = list_length(ltblspc);
	if (ntblspc == 0)
	{
		table_endscan(scan);
		table_close(rel, AccessShareLock);
		return;
	}

	tablespace_ids = (Oid *) palloc(ntblspc * sizeof(Oid));
	i = 0;
	foreach(cell, ltblspc)
		tablespace_ids[i++] = lfirst_oid(cell);

	/* Record the filesystem change in XLOG */
	{
		xl_dbase_drop_rec xlrec;

		xlrec.db_id = db_id;
		xlrec.ntablespaces = ntblspc;

		XLogBeginInsert();
		XLogRegisterData((char *) &xlrec, MinSizeOfDbaseDropRec);
		XLogRegisterData((char *) tablespace_ids, ntblspc * sizeof(Oid));

		(void) XLogInsert(RM_DBASE_ID,
						  XLOG_DBASE_DROP | XLR_SPECIAL_REL_UPDATE);
	}

	list_free(ltblspc);
	pfree(tablespace_ids);

	table_endscan(scan);
	table_close(rel, AccessShareLock);
}

/*
 * Check for existing files that conflict with a proposed new DB OID;
 * return true if there are any
 *
 * If there were a subdirectory in any tablespace matching the proposed new
 * OID, we'd get a create failure due to the duplicate name ... and then we'd
 * try to remove that already-existing subdirectory during the cleanup in
 * remove_dbtablespaces.  Nuking existing files seems like a bad idea, so
 * instead we make this extra check before settling on the OID of the new
 * database.  This exactly parallels what GetNewRelFileNode() does for table
 * relfilenode values.
 */
static bool
check_db_file_conflict(Oid db_id)
{
	bool		result = false;
	Relation	rel;
slots capable of holding a tuple of the AMs
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
|
|
|
TableScanDesc scan;
|
2006-10-19 00:44:12 +02:00
|
|
|
HeapTuple tuple;
|
|
|
|
|
2019-01-21 19:32:19 +01:00
|
|
|
rel = table_open(TableSpaceRelationId, AccessShareLock);
|
tableam: Add and use scan APIs.
Too allow table accesses to be not directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for a other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends need
to be able to do so without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
intiialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensible adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AMs
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
|
|
|
scan = table_beginscan_catalog(rel, 0, NULL);
|
2006-10-19 00:44:12 +02:00
|
|
|
while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
|
|
|
|
{
|
Remove WITH OIDS support, change oid catalog column visibility.
Previously tables declared WITH OIDS, including a significant fraction
of the catalog tables, stored the oid column not as a normal column,
but as part of the tuple header.
This special column was not shown by default, which was somewhat odd,
as it's often (consider e.g. pg_class.oid) one of the more important
parts of a row. Neither pg_dump nor COPY included the contents of the
oid column by default.
The fact that the oid column was not an ordinary column necessitated a
significant amount of special case code to support oid columns. That
already was painful for the existing, but upcoming work aiming to make
table storage pluggable, would have required expanding and duplicating
that "specialness" significantly.
WITH OIDS has been deprecated since 2005 (commit ff02d0a05280e0).
Remove it.
Removing includes:
- CREATE TABLE and ALTER TABLE syntax for declaring the table to be
WITH OIDS has been removed (WITH (oids[ = true]) will error out)
- pg_dump does not support dumping tables declared WITH OIDS and will
issue a warning when dumping one (and ignore the oid column).
- restoring an pg_dump archive with pg_restore will warn when
restoring a table with oid contents (and ignore the oid column)
- COPY will refuse to load binary dump that includes oids.
- pg_upgrade will error out when encountering tables declared WITH
OIDS, they have to be altered to remove the oid column first.
- Functionality to access the oid of the last inserted row (like
plpgsql's RESULT_OID, spi's SPI_lastoid, ...) has been removed.
The syntax for declaring a table WITHOUT OIDS (or WITH (oids = false)
for CREATE TABLE) is still supported. While that requires a bit of
support code, it seems unnecessary to break applications / dumps that
do not use oids, and are explicit about not using them.
The biggest user of WITH OID columns was postgres' catalog. This
commit changes all 'magic' oid columns to be columns that are normally
declared and stored. To reduce unnecessary query breakage all the
newly added columns are still named 'oid', even if a table's column
naming scheme would indicate 'reloid' or such. This obviously
requires adapting a lot code, mostly replacing oid access via
HeapTupleGetOid() with access to the underlying Form_pg_*->oid column.
The bootstrap process now assigns oids for all oid columns in
genbki.pl that do not have an explicit value (starting at the largest
oid previously used), only oids assigned later by oids will be above
FirstBootstrapObjectId. As the oid column now is a normal column the
special bootstrap syntax for oids has been removed.
Oids are not automatically assigned during insertion anymore, all
backend code explicitly assigns oids with GetNewOidWithIndex(). For
the rare case that insertions into the catalog via SQL are called for
the new pg_nextoid() function can be used (which only works on catalog
tables).
The fact that oid columns on system tables are now normal columns
means that they will be included in the set of columns expanded
by * (i.e. SELECT * FROM pg_class will now include the table's oid,
previously it did not). It'd not technically be hard to hide oid
column by default, but that'd mean confusing behavior would either
have to be carried forward forever, or it'd cause breakage down the
line.
While it's not unlikely that further adjustments are needed, the
scope/invasiveness of the patch makes it worthwhile to get merge this
now. It's painful to maintain externally, too complicated to commit
after the code code freeze, and a dependency of a number of other
patches.
Catversion bump, for obvious reasons.
Author: Andres Freund, with contributions by John Naylor
Discussion: https://postgr.es/m/20180930034810.ywp2c7awz7opzcfr@alap3.anarazel.de
2018-11-21 00:36:57 +01:00
|
|
|
Form_pg_tablespace spcform = (Form_pg_tablespace) GETSTRUCT(tuple);
|
|
|
|
Oid dsttablespace = spcform->oid;
|
2006-10-19 00:44:12 +02:00
|
|
|
char *dstpath;
|
|
|
|
struct stat st;
|
|
|
|
|
|
|
|
/* Don't mess with the global tablespace */
|
|
|
|
if (dsttablespace == GLOBALTABLESPACE_OID)
|
|
|
|
continue;
|
|
|
|
|
|
|
|
dstpath = GetDatabasePath(db_id, dsttablespace);
|
|
|
|
|
|
|
|
if (lstat(dstpath, &st) == 0)
|
|
|
|
{
|
|
|
|
/* Found a conflicting file (or directory, whatever) */
|
|
|
|
pfree(dstpath);
|
|
|
|
result = true;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
pfree(dstpath);
|
|
|
|
}
|
|
|
|
|
tableam: Add and use scan APIs.
Too allow table accesses to be not directly dependent on heap, several
new abstractions are needed. Specifically:
1) Heap scans need to be generalized into table scans. Do this by
introducing TableScanDesc, which will be the "base class" for
individual AMs. This contains the AM independent fields from
HeapScanDesc.
The previous heap_{beginscan,rescan,endscan} et al. have been
replaced with a table_ version.
There's no direct replacement for heap_getnext(), as that returned
a HeapTuple, which is undesirable for a other AMs. Instead there's
table_scan_getnextslot(). But note that heap_getnext() lives on,
it's still used widely to access catalog tables.
This is achieved by new scan_begin, scan_end, scan_rescan,
scan_getnextslot callbacks.
2) The portion of parallel scans that's shared between backends need
to be able to do so without the user doing per-AM work. To achieve
that new parallelscan_{estimate, initialize, reinitialize}
callbacks are introduced, which operate on a new
ParallelTableScanDesc, which again can be subclassed by AMs.
As it is likely that several AMs are going to be block oriented,
block oriented callbacks that can be shared between such AMs are
provided and used by heap. table_block_parallelscan_{estimate,
intiialize, reinitialize} as callbacks, and
table_block_parallelscan_{nextpage, init} for use in AMs. These
operate on a ParallelBlockTableScanDesc.
3) Index scans need to be able to access tables to return a tuple, and
there needs to be state across individual accesses to the heap to
store state like buffers. That's now handled by introducing a
sort-of-scan IndexFetchTable, which again is intended to be
subclassed by individual AMs (for heap IndexFetchHeap).
The relevant callbacks for an AM are index_fetch_{end, begin,
reset} to create the necessary state, and index_fetch_tuple to
retrieve an indexed tuple. Note that index_fetch_tuple
implementations need to be smarter than just blindly fetching the
tuples for AMs that have optimizations similar to heap's HOT - the
currently alive tuple in the update chain needs to be fetched if
appropriate.
Similar to table_scan_getnextslot(), it's undesirable to continue
to return HeapTuples. Thus index_fetch_heap (might want to rename
that later) now accepts a slot as an argument. Core code doesn't
have a lot of call sites performing index scans without going
through the systable_* API (in contrast to loads of heap_getnext
calls and working directly with HeapTuples).
Index scans now store the result of a search in
IndexScanDesc->xs_heaptid, rather than xs_ctup->t_self. As the
target is not generally a HeapTuple anymore that seems cleaner.
To be able to sensible adapt code to use the above, two further
callbacks have been introduced:
a) slot_callbacks returns a TupleTableSlotOps* suitable for creating
slots capable of holding a tuple of the AMs
type. table_slot_callbacks() and table_slot_create() are based
upon that, but have additional logic to deal with views, foreign
tables, etc.
While this change could have been done separately, nearly all the
call sites that needed to be adapted for the rest of this commit
also would have been needed to be adapted for
table_slot_callbacks(), making separation not worthwhile.
b) tuple_satisfies_snapshot checks whether the tuple in a slot is
currently visible according to a snapshot. That's required as a few
places now don't have a buffer + HeapTuple around, but a
slot (which in heap's case internally has that information).
Additionally a few infrastructure changes were needed:
I) SysScanDesc, as used by systable_{beginscan, getnext} et al. now
internally uses a slot to keep track of tuples. While
systable_getnext() still returns HeapTuples, and will so for the
foreseeable future, the index API (see 1) above) now only deals with
slots.
The remainder, and largest part, of this commit is then adjusting all
scans in postgres to use the new APIs.
Author: Andres Freund, Haribabu Kommi, Alvaro Herrera
Discussion:
https://postgr.es/m/20180703070645.wchpu5muyto5n647@alap3.anarazel.de
https://postgr.es/m/20160812231527.GA690404@alvherre.pgsql
2019-03-11 20:46:41 +01:00
|
|
|
table_endscan(scan);
|
2019-01-21 19:32:19 +01:00
|
|
|
table_close(rel, AccessShareLock);
|
2013-01-19 00:06:20 +01:00
|
|
|
|
2006-10-19 00:44:12 +02:00
|
|
|
return result;
|
|
|
|
}

/*
 * Issue a suitable errdetail message for a busy database
 */
static int
errdetail_busy_db(int notherbackends, int npreparedxacts)
{
	if (notherbackends > 0 && npreparedxacts > 0)

		/*
		 * We don't deal with singular versus plural here, since gettext
		 * doesn't support multiple plurals in one string.
		 */
		errdetail("There are %d other session(s) and %d prepared transaction(s) using the database.",
				  notherbackends, npreparedxacts);
	else if (notherbackends > 0)
		errdetail_plural("There is %d other session using the database.",
						 "There are %d other sessions using the database.",
						 notherbackends,
						 notherbackends);
	else
		errdetail_plural("There is %d prepared transaction using the database.",
						 "There are %d prepared transactions using the database.",
						 npreparedxacts,
						 npreparedxacts);
	return 0;					/* just to keep ereport macro happy */
}

/*
 * get_database_oid - given a database name, look up the OID
 *
 * If missing_ok is false, throw an error if database name not found.
 * If true, just return InvalidOid.
 */
Oid
get_database_oid(const char *dbname, bool missing_ok)
{
	Relation	pg_database;
	ScanKeyData entry[1];
	SysScanDesc scan;
	HeapTuple	dbtuple;
	Oid			oid;

	/*
	 * There's no syscache for pg_database indexed by name, so we must look
	 * the hard way.
	 */
	pg_database = table_open(DatabaseRelationId, AccessShareLock);
	ScanKeyInit(&entry[0],
				Anum_pg_database_datname,
				BTEqualStrategyNumber, F_NAMEEQ,
				CStringGetDatum(dbname));
	scan = systable_beginscan(pg_database, DatabaseNameIndexId, true,
							  NULL, 1, entry);

	dbtuple = systable_getnext(scan);

	/* We assume that there can be at most one matching tuple */
	if (HeapTupleIsValid(dbtuple))
		oid = ((Form_pg_database) GETSTRUCT(dbtuple))->oid;
	else
		oid = InvalidOid;

	systable_endscan(scan);
	table_close(pg_database, AccessShareLock);

	if (!OidIsValid(oid) && !missing_ok)
		ereport(ERROR,
				(errcode(ERRCODE_UNDEFINED_DATABASE),
				 errmsg("database \"%s\" does not exist",
						dbname)));

	return oid;
}

/*
 * get_database_name - given a database OID, look up the name
 *
 * Returns a palloc'd string, or NULL if no such database.
 */
char *
get_database_name(Oid dbid)
{
	HeapTuple	dbtuple;
	char	   *result;

	dbtuple = SearchSysCache1(DATABASEOID, ObjectIdGetDatum(dbid));
	if (HeapTupleIsValid(dbtuple))
	{
		result = pstrdup(NameStr(((Form_pg_database) GETSTRUCT(dbtuple))->datname));
		ReleaseSysCache(dbtuple);
	}
	else
		result = NULL;

	return result;
}

/*
 * DATABASE resource manager's routines
 */
void
dbase_redo(XLogReaderState *record)
{
	uint8		info = XLogRecGetInfo(record) & ~XLR_INFO_MASK;

	/* Backup blocks are not used in dbase records */
	Assert(!XLogRecHasAnyBlockRefs(record));

	if (info == XLOG_DBASE_CREATE)
	{
		xl_dbase_create_rec *xlrec = (xl_dbase_create_rec *) XLogRecGetData(record);
		char	   *src_path;
		char	   *dst_path;
		struct stat st;

		src_path = GetDatabasePath(xlrec->src_db_id, xlrec->src_tablespace_id);
		dst_path = GetDatabasePath(xlrec->db_id, xlrec->tablespace_id);

		/*
		 * Our theory for replaying a CREATE is to forcibly drop the target
		 * subdirectory if present, then re-copy the source data.  This may
		 * be more work than needed, but it is simple to implement.
		 */
		if (stat(dst_path, &st) == 0 && S_ISDIR(st.st_mode))
		{
			if (!rmtree(dst_path, true))
				/* If this failed, copydir() below is going to error. */
				ereport(WARNING,
						(errmsg("some useless files may be left behind in old database directory \"%s\"",
								dst_path)));
		}

		/*
		 * Force dirty buffers out to disk, to ensure source database is
		 * up-to-date for the copy.
		 */
		FlushDatabaseBuffers(xlrec->src_db_id);

		/*
		 * Copy this subdirectory to the new location
		 *
		 * We don't need to copy subdirectories
		 */
		copydir(src_path, dst_path, false);
	}
	else if (info == XLOG_DBASE_DROP)
	{
		xl_dbase_drop_rec *xlrec = (xl_dbase_drop_rec *) XLogRecGetData(record);
		char	   *dst_path;
		int			i;

		if (InHotStandby)
		{
			/*
			 * Lock database while we resolve conflicts to ensure that
			 * InitPostgres() cannot fully re-execute concurrently.  This
			 * avoids backends re-connecting automatically to same database,
			 * which can happen in some cases.
			 *
			 * This will lock out walsenders trying to connect to db-specific
			 * slots for logical decoding too, so it's safe for us to drop
			 * slots.
			 */
			LockSharedObjectForSession(DatabaseRelationId, xlrec->db_id, 0, AccessExclusiveLock);
			ResolveRecoveryConflictWithDatabase(xlrec->db_id);
		}
		/* Drop any database-specific replication slots */
		ReplicationSlotsDropDBSlots(xlrec->db_id);
		/* Drop pages for this database that are in the shared buffer cache */
		DropDatabaseBuffers(xlrec->db_id);

		/* Also, clean out any fsync requests that might be pending in md.c */
		ForgetDatabaseSyncRequests(xlrec->db_id);

		/* Clean out the xlog relcache too */
		XLogDropDatabase(xlrec->db_id);
		for (i = 0; i < xlrec->ntablespaces; i++)
		{
			dst_path = GetDatabasePath(xlrec->db_id, xlrec->tablespace_ids[i]);

			/* And remove the physical files */
			if (!rmtree(dst_path, true))
				ereport(WARNING,
						(errmsg("some useless files may be left behind in old database directory \"%s\"",
								dst_path)));
			pfree(dst_path);
		}
		if (InHotStandby)
		{
			/*
			 * Release locks prior to commit. XXX There is a race condition
			 * here that may allow backends to reconnect, but the window
			 * for this is small because the gap between here and commit
			 * is fairly small and it is unlikely that people will be
			 * dropping databases that we are trying to connect to anyway.
			 */
			UnlockSharedObjectForSession(DatabaseRelationId, xlrec->db_id, 0, AccessExclusiveLock);
		}
	}
	else
		elog(PANIC, "dbase_redo: unknown op code %u", info);
}