doc: Fix spacing in verbatim environments

commit 53166fe248
parent 6048896e4c
Author: Peter Eisentraut
Date: 2014-07-08 11:39:07 -04:00
7 changed files with 96 additions and 95 deletions

doc/src/sgml/config.sgml

@@ -238,9 +238,9 @@ include 'filename'
The <filename>postgresql.conf</> file can also contain
<literal>include_dir</literal> directives, which specify an entire directory
of configuration files to include. It is used similarly:
-<programlisting>
-include_dir 'directory'
-</programlisting>
+<programlisting>
+include_dir 'directory'
+</programlisting>
Non-absolute directory names follow the same rules as single file include
directives: they are relative to the directory containing the referencing
configuration file. Within that directory, only non-directory files whose
@@ -263,11 +263,11 @@ include 'filename'
situation is to break the custom configuration changes for your site into
three files. You could add this to the end of your
<filename>postgresql.conf</> file to include them:
-<programlisting>
-include 'shared.conf'
-include 'memory.conf'
-include 'server.conf'
-</programlisting>
+<programlisting>
+include 'shared.conf'
+include 'memory.conf'
+include 'server.conf'
+</programlisting>
All systems would have the same <filename>shared.conf</>. Each server
with a particular amount of memory could share the same
<filename>memory.conf</>; you might have one for all servers with 8GB of RAM,
@@ -279,15 +279,15 @@ include 'filename'
Another possibility is to create a configuration file directory and
put this information into files there. For example, a <filename>conf.d</>
directory could be referenced at the end of <filename>postgresql.conf</>:
-<screen>
-include_dir 'conf.d'
-</screen>
+<programlisting>
+include_dir 'conf.d'
+</programlisting>
Then you could name the files in the <filename>conf.d</> directory like this:
-<screen>
-00shared.conf
-01memory.conf
-02server.conf
-</screen>
+<programlisting>
+00shared.conf
+01memory.conf
+02server.conf
+</programlisting>
This shows a clear order in which these files will be loaded. This is
important because only the last setting encountered when the server is
reading its configuration will be used. Something set in
@@ -298,11 +298,11 @@ include 'filename'
<para>
You might instead use this configuration directory approach while naming
these files more descriptively:
-<screen>
-00shared.conf
-01memory-8GB.conf
-02server-foo.conf
-</screen>
+<programlisting>
+00shared.conf
+01memory-8GB.conf
+02server-foo.conf
+</programlisting>
This sort of arrangement gives a unique name for each configuration file
variation. This can help eliminate ambiguity when several servers have
their configurations all stored in one place, such as in a version
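Once an include layout like the conf.d one above is in place, the pg_settings view shows which file supplied each setting that is currently in effect. A minimal sketch (run as a superuser so sourcefile and sourceline are populated):

SELECT name, setting, sourcefile, sourceline
  FROM pg_settings
 WHERE sourcefile IS NOT NULL
 ORDER BY sourcefile, sourceline;

Because later files override earlier ones, a setting that appears in both 00shared.conf and 02server.conf is reported with the 02server.conf location.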

doc/src/sgml/extend.sgml

@@ -1167,12 +1167,12 @@ include $(PGXS)
This procedure is also called a
<indexterm><primary>VPATH</primary></indexterm><firstterm>VPATH</firstterm>
build. Here's how:
-<screen>
-<userinput>mkdir build_dir</userinput>
-<userinput>cd build_dir</userinput>
-<userinput>make -f /path/to/extension/source/tree/Makefile</userinput>
-<userinput>make -f /path/to/extension/source/tree/Makefile install</userinput>
-</screen>
+<programlisting>
+mkdir build_dir
+cd build_dir
+make -f /path/to/extension/source/tree/Makefile
+make -f /path/to/extension/source/tree/Makefile install
+</programlisting>
</para>
<para>
@@ -1181,10 +1181,10 @@ include $(PGXS)
core script <filename>config/prep_buildtree</>. Once this has been done
you can build by setting the <literal>make</literal> variable
<varname>USE_VPATH</varname> like this:
-<screen>
-<userinput>make USE_VPATH=/path/to/extension/source/tree</userinput>
-<userinput>make USE_VPATH=/path/to/extension/source/tree install</userinput>
-</screen>
+<programlisting>
+make USE_VPATH=/path/to/extension/source/tree
+make USE_VPATH=/path/to/extension/source/tree install
+</programlisting>
This procedure can work with a greater variety of directory layouts.
</para>
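After the make install step, one quick sanity check from SQL is that the server can now see the extension's control file; my_extension below is only a placeholder for whatever extension the source tree actually builds:

SELECT name, default_version, installed_version
  FROM pg_available_extensions
 WHERE name = 'my_extension';  -- placeholder extension name

CREATE EXTENSION my_extension;  -- placeholder; installs it into the current database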

doc/src/sgml/func.sgml

@@ -10495,7 +10495,7 @@ table2-mapping
-----+-------
a | "foo"
b | "bar"
-</programlisting>
+</programlisting>
</entry>
</row>
<row>
@@ -10514,7 +10514,7 @@ table2-mapping
-----+-------
a | foo
b | bar
-</programlisting>
+</programlisting>
</entry>
</row>
<row>
@@ -10598,7 +10598,7 @@ table2-mapping
---+---
1 | 2
3 | 4
-</programlisting>
+</programlisting>
</entry>
</row>
<row>
@@ -10671,7 +10671,7 @@ table2-mapping
a | b | d
---+---------+---
1 | [1,2,3] |
-</programlisting>
+</programlisting>
</entry>
</row>
<row>
@@ -10692,7 +10692,7 @@ table2-mapping
---+-----
1 | foo
2 |
-</programlisting>
+</programlisting>
</entry>
</row>
</tbody>
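The quoting difference between the first two result snippets above is the json-versus-text distinction between json_each and json_each_text: the former returns the values as json, so strings keep their double quotes, while the latter returns plain text. A small sketch that reproduces both:

SELECT * FROM json_each('{"a":"foo", "b":"bar"}');
SELECT * FROM json_each_text('{"a":"foo", "b":"bar"}');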

doc/src/sgml/json.sgml

@@ -163,7 +163,7 @@
</para>
<para>
The following are all valid <type>json</> (or <type>jsonb</>) expressions:
-<programlisting>
+<programlisting>
-- Simple scalar/primitive value
-- Primitive values can be numbers, quoted strings, true, false, or null
SELECT '5'::json;
@@ -177,7 +177,7 @@ SELECT '{"bar": "baz", "balance": 7.77, "active": false}'::json;
-- Arrays and objects can be nested arbitrarily
SELECT '{"foo": [true, "bar"], "tags": {"a": 1, "b": null}}'::json;
-</programlisting>
+</programlisting>
</para>
<para>
@@ -262,7 +262,7 @@ SELECT '{"reading": 1.230e-5}'::json, '{"reading": 1.230e-5}'::jsonb;
one <type>jsonb</> document has contained within it another one.
These examples return true except as noted:
</para>
-<programlisting>
+<programlisting>
-- Simple scalar/primitive values contain only the identical value:
SELECT '"foo"'::jsonb @> '"foo"'::jsonb;
@@ -282,7 +282,7 @@ SELECT '[1, 2, [1, 3]]'::jsonb @> '[[1, 3]]'::jsonb;
-- Similarly, containment is not reported here:
SELECT '{"foo": {"bar": "baz"}}'::jsonb @> '{"bar": "baz"}'::jsonb; -- yields false
-</programlisting>
+</programlisting>
<para>
The general principle is that the contained object must match the
@@ -296,13 +296,13 @@ SELECT '{"foo": {"bar": "baz"}}'::jsonb @> '{"bar": "baz"}'::jsonb; -- yields f
As a special exception to the general principle that the structures
must match, an array may contain a primitive value:
</para>
-<programlisting>
+<programlisting>
-- This array contains the primitive string value:
SELECT '["foo", "bar"]'::jsonb @> '"bar"'::jsonb;
-- This exception is not reciprocal -- non-containment is reported here:
SELECT '"bar"'::jsonb @> '["bar"]'::jsonb; -- yields false
-</programlisting>
+</programlisting>
<para>
<type>jsonb</> also has an <firstterm>existence</> operator, which is
@@ -363,22 +363,22 @@ SELECT '"foo"'::jsonb ? 'foo';
(For details of the semantics that these operators
implement, see <xref linkend="functions-jsonb-op-table">.)
An example of creating an index with this operator class is:
-<programlisting>
+<programlisting>
CREATE INDEX idxgin ON api USING gin (jdoc);
-</programlisting>
+</programlisting>
The non-default GIN operator class <literal>jsonb_path_ops</>
supports indexing the <literal>@&gt;</> operator only.
An example of creating an index with this operator class is:
-<programlisting>
+<programlisting>
CREATE INDEX idxginp ON api USING gin (jdoc jsonb_path_ops);
-</programlisting>
+</programlisting>
</para>
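With idxgin and idxginp from the examples above in place, the practical difference is which WHERE clauses each one can serve; a sketch against the same api table and jdoc column:

-- Can use idxgin (default jsonb_ops); jsonb_path_ops indexes do not support ?
SELECT count(*) FROM api WHERE jdoc ? 'tags';

-- Can use either index, since both operator classes support @>
SELECT count(*) FROM api WHERE jdoc @> '{"company": "Magnafone"}';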
<para>
Consider the example of a table that stores JSON documents
retrieved from a third-party web service, with a documented schema
definition. A typical document is:
-<programlisting>
+<programlisting>
{
"guid": "9c36adc1-7fb5-4d5b-83b4-90356a46061a",
"name": "Angela Barton",
@@ -394,32 +394,32 @@ CREATE INDEX idxginp ON api USING gin (jdoc jsonb_path_ops);
"qui"
]
}
-</programlisting>
+</programlisting>
We store these documents in a table named <structname>api</>,
in a <type>jsonb</> column named <structfield>jdoc</>.
If a GIN index is created on this column,
queries like the following can make use of the index:
-<programlisting>
+<programlisting>
-- Find documents in which the key "company" has value "Magnafone"
SELECT jdoc-&gt;'guid', jdoc-&gt;'name' FROM api WHERE jdoc @&gt; '{"company": "Magnafone"}';
-</programlisting>
+</programlisting>
However, the index could not be used for queries like the
following, because though the operator <literal>?</> is indexable,
it is not applied directly to the indexed column <structfield>jdoc</>:
-<programlisting>
+<programlisting>
-- Find documents in which the key "tags" contains key or array element "qui"
SELECT jdoc-&gt;'guid', jdoc-&gt;'name' FROM api WHERE jdoc -&gt; 'tags' ? 'qui';
-</programlisting>
+</programlisting>
Still, with appropriate use of expression indexes, the above
query can use an index. If querying for particular items within
the <literal>"tags"</> key is common, defining an index like this
may be worthwhile:
-<programlisting>
+<programlisting>
-- Note that the "jsonb -&gt; text" operator can only be called on a JSON
-- object, so as a consequence of creating this index the root of each
-- "jdoc" value must be an object. This is enforced during insertion.
CREATE INDEX idxgintags ON api USING gin ((jdoc -&gt; 'tags'));
-</programlisting>
+</programlisting>
Now, the <literal>WHERE</> clause <literal>jdoc -&gt; 'tags' ? 'qui'</>
will be recognized as an application of the indexable
operator <literal>?</> to the indexed
@@ -429,10 +429,10 @@ CREATE INDEX idxgintags ON api USING gin ((jdoc -&gt; 'tags'));
</para>
<para>
Another approach to querying is to exploit containment, for example:
-<programlisting>
+<programlisting>
-- Find documents in which the key "tags" contains array element "qui"
SELECT jdoc-&gt;'guid', jdoc-&gt;'name' FROM api WHERE jdoc @&gt; '{"tags": ["qui"]}';
-</programlisting>
+</programlisting>
A simple GIN index on the <structfield>jdoc</> column can support this
query. But note that such an index will store copies of every key and
value in the <structfield>jdoc</> column, whereas the expression index
@@ -460,7 +460,7 @@ SELECT jdoc-&gt;'guid', jdoc-&gt;'name' FROM api WHERE jdoc @&gt; '{"tags": ["qu
and a <literal>jsonb_path_ops</literal> GIN index is that the former
creates independent index items for each key and value in the data,
while the latter creates index items only for each value in the
-data.
+data.
<footnote>
<para>
For this purpose, the term <quote>value</> includes array elements,
@@ -501,17 +501,17 @@ SELECT jdoc-&gt;'guid', jdoc-&gt;'name' FROM api WHERE jdoc @&gt; '{"tags": ["qu
equality of complete JSON documents.
The <literal>btree</> ordering for <type>jsonb</> datums is seldom
of great interest, but for completeness it is:
-<synopsis>
-<replaceable>Object</replaceable> > <replaceable>Array</replaceable> > <replaceable>Boolean</replaceable> > <replaceable>Number</replaceable> > <replaceable>String</replaceable> > <replaceable>Null</replaceable>
+<synopsis>
+<replaceable>Object</replaceable> > <replaceable>Array</replaceable> > <replaceable>Boolean</replaceable> > <replaceable>Number</replaceable> > <replaceable>String</replaceable> > <replaceable>Null</replaceable>

-<replaceable>Object with n pairs</replaceable> > <replaceable>object with n - 1 pairs</replaceable>
+<replaceable>Object with n pairs</replaceable> > <replaceable>object with n - 1 pairs</replaceable>

-<replaceable>Array with n elements</replaceable> > <replaceable>array with n - 1 elements</replaceable>
-</synopsis>
+<replaceable>Array with n elements</replaceable> > <replaceable>array with n - 1 elements</replaceable>
+</synopsis>
Objects with equal numbers of pairs are compared in the order:
-<synopsis>
-<replaceable>key-1</replaceable>, <replaceable>value-1</replaceable>, <replaceable>key-2</replaceable> ...
-</synopsis>
+<synopsis>
+<replaceable>key-1</replaceable>, <replaceable>value-1</replaceable>, <replaceable>key-2</replaceable> ...
+</synopsis>
Note that object keys are compared in their storage order;
in particular, since shorter keys are stored before longer keys, this
can lead to results that might be unintuitive, such as:
@@ -520,9 +520,9 @@ SELECT jdoc-&gt;'guid', jdoc-&gt;'name' FROM api WHERE jdoc @&gt; '{"tags": ["qu
</programlisting>
Similarly, arrays with equal numbers of elements are compared in the
order:
-<synopsis>
-<replaceable>element-1</replaceable>, <replaceable>element-2</replaceable> ...
-</synopsis>
+<synopsis>
+<replaceable>element-1</replaceable>, <replaceable>element-2</replaceable> ...
+</synopsis>
Primitive JSON values are compared using the same
comparison rules as for the underlying
<productname>PostgreSQL</productname> data type. Strings are
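A concrete instance of the storage-order effect on object keys described above, shorter keys being stored and therefore compared first:

SELECT '{"aa": 1, "c": 1}'::jsonb > '{"b": 1, "d": 1}'::jsonb;
-- true: the stored key orders are (c, aa) and (b, d), and 'c' > 'b',
-- even though 'aa' sorts before 'b' in ordinary text comparison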

doc/src/sgml/logicaldecoding.sgml

@@ -51,7 +51,7 @@
Then, you should connect to the target database (in the example
below, <literal>postgres</literal>) as a superuser.
</para>
-<programlisting>
+<programlisting>
postgres=# -- Create a slot named 'regression_slot' using the output plugin 'test_decoding'
postgres=# SELECT * FROM pg_create_logical_replication_slot('regression_slot', 'test_decoding');
slot_name | xlog_position
@@ -139,7 +139,7 @@ postgres=# SELECT pg_drop_replication_slot('regression_slot');
-----------------------
(1 row)
-</programlisting>
+</programlisting>
<para>
The following example shows usage of the walsender interface using
the <link linkend="app-pgrecvlogical"><command>pg_recvlogical</command></link>
@@ -148,7 +148,7 @@ postgres=# SELECT pg_drop_replication_slot('regression_slot');
and <varname>max_wal_senders</varname> to be set sufficiently high for
another connection.
</para>
-<programlisting>
+<programlisting>
# pg_recvlogical -d postgres --slot test --create
# pg_recvlogical -d postgres --slot test --start -f -
CTRL-Z
@@ -159,7 +159,7 @@ table public.data: INSERT: id[integer]:4 data[text]:'4'
COMMIT 693
CTRL-C
# pg_recvlogical -d postgres --slot test --drop
-</programlisting>
+</programlisting>
</sect1>
<sect1 id="logicaldecoding-explanation">
<title>Logical Decoding Concepts</title>
@@ -317,7 +317,7 @@ CTRL-C
<function>_PG_output_plugin_init</function>. This function is passed a
struct that needs to be filled with the callback function pointers for
individual actions.
-<programlisting>
+<programlisting>
typedef struct OutputPluginCallbacks
{
LogicalDecodeStartupCB startup_cb;
@@ -326,8 +326,9 @@ typedef struct OutputPluginCallbacks
LogicalDecodeCommitCB commit_cb;
LogicalDecodeShutdownCB shutdown_cb;
} OutputPluginCallbacks;
typedef void (*LogicalOutputPluginInit)(struct OutputPluginCallbacks *cb);
-</programlisting>
+</programlisting>
The <function>begin_cb</function>, <function>change_cb</function>
and <function>commit_cb</function> callbacks are required,
while <function>startup_cb</function>
@@ -344,10 +345,10 @@ typedef void (*LogicalOutputPluginInit)(struct OutputPluginCallbacks *cb);
accessed that either have been created by <command>initdb</command> in
the <literal>pg_catalog</literal> schema, or have been marked as user
provided catalog tables using
-<programlisting>
+<programlisting>
ALTER TABLE user_catalog_table SET (user_catalog_table = true);
CREATE TABLE another_catalog_table(data text) WITH (user_catalog_table = true);
-</programlisting>
+</programlisting>
Any actions leading to xid assignment are prohibited. That, among others,
includes writing to tables, performing DDL changes and
calling <literal>txid_current()</literal>.
@@ -385,23 +386,23 @@ CREATE TABLE another_catalog_table(data text) WITH (user_catalog_table = true);
The optional <function>startup_cb</function> callback is called whenever
a replication slot is created or asked to stream changes, independent
of the number of changes that are ready to be put out.
-<programlisting>
+<programlisting>
typedef void (*LogicalDecodeStartupCB) (
struct LogicalDecodingContext *ctx,
OutputPluginOptions *options,
bool is_init
);
-</programlisting>
+</programlisting>
The <literal>is_init</literal> parameter will be true when the
replication slot is being created and false
otherwise. <parameter>options</parameter> points to a struct of options
that output plugins can set:
-<programlisting>
+<programlisting>
typedef struct OutputPluginOptions
{
OutputPluginOutputType output_type;
} OutputPluginOptions;
-</programlisting>
+</programlisting>
<literal>output_type</literal> has to either be set to
<literal>OUTPUT_PLUGIN_TEXTUAL_OUTPUT</literal>
or <literal>OUTPUT_PLUGIN_BINARY_OUTPUT</literal>.
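When a plugin declares OUTPUT_PLUGIN_BINARY_OUTPUT, the text-returning SQL functions refuse to stream from it, so the bytea-returning variants have to be used instead; a sketch with the slot name from the example at the top of this file:

-- text-returning function: only valid for plugins with textual output
SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL);

-- bytea-returning variants: work regardless of the declared output type
SELECT * FROM pg_logical_slot_peek_binary_changes('regression_slot', NULL, NULL);
SELECT * FROM pg_logical_slot_get_binary_changes('regression_slot', NULL, NULL);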
@@ -420,11 +421,11 @@ typedef struct OutputPluginOptions
whenever a formerly active replication slot is not used anymore and can
be used to deallocate resources private to the output plugin. The slot
isn't necessarily being dropped, streaming is just being stopped.
-<programlisting>
+<programlisting>
typedef void (*LogicalDecodeShutdownCB) (
struct LogicalDecodingContext *ctx
);
-</programlisting>
+</programlisting>
</para>
</sect3>
<sect3 id="logicaldecoding-output-plugin-begin">
@@ -433,12 +434,12 @@ typedef void (*LogicalDecodeShutdownCB) (
The required <function>begin_cb</function> callback is called whenever a
start of a committed transaction has been decoded. Aborted transactions
and their contents never get decoded.
-<programlisting>
+<programlisting>
typedef void (*LogicalDecodeBeginCB) (
struct LogicalDecodingContext *,
ReorderBufferTXN *txn
);
-</programlisting>
+</programlisting>
The <parameter>txn</parameter> parameter contains meta information about
the transaction, like the timestamp at which it has been committed and
its XID.
@@ -452,12 +453,12 @@ typedef void (*LogicalDecodeBeginCB) (
decoded. The <function>change_cb</function> callbacks for all modified
rows will have been called before this, if there have been any modified
rows.
-<programlisting>
+<programlisting>
typedef void (*LogicalDecodeCommitCB) (
struct LogicalDecodingContext *,
ReorderBufferTXN *txn
);
-</programlisting>
+</programlisting>
</para>
</sect3>
<sect3 id="logicaldecoding-output-plugin-change">
@@ -470,14 +471,14 @@ typedef void (*LogicalDecodeCommitCB) (
or <command>DELETE</command>. Even if the original command modified
several rows at once, the callback will be called individually for each
row.
-<programlisting>
+<programlisting>
typedef void (*LogicalDecodeChangeCB) (
struct LogicalDecodingContext *ctx,
ReorderBufferTXN *txn,
Relation relation,
ReorderBufferChange *change
);
-</programlisting>
+</programlisting>
The <parameter>ctx</parameter> and <parameter>txn</parameter> parameters
have the same contents as for the <function>begin_cb</function>
and <function>commit_cb</function> callbacks, but additionally the
@@ -513,11 +514,11 @@ typedef void (*LogicalDecodeChangeCB) (
<para>
The following example shows how to output data to the consumer of an
output plugin:
-<programlisting>
+<programlisting>
OutputPluginPrepareWrite(ctx, true);
appendStringInfo(ctx->out, "BEGIN %u", txn->xid);
OutputPluginWrite(ctx, true);
-</programlisting>
+</programlisting>
</para>
</sect2>
</sect1>
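As a complement to the SQL-level example at the top of this file: pg_logical_slot_get_changes consumes what it returns, so a second call will not show the same rows again, while the peek variant leaves the slot untouched, which is convenient while iterating on an output plugin. A sketch against that slot:

-- returns pending changes without consuming them; can be repeated
SELECT * FROM pg_logical_slot_peek_changes('regression_slot', NULL, NULL, 'include-xids', '0');

-- returns the same changes and advances the slot past them
SELECT * FROM pg_logical_slot_get_changes('regression_slot', NULL, NULL, 'include-xids', '0');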

doc/src/sgml/protocol.sgml

@@ -1315,9 +1315,9 @@ the connection to be used for logical replication from that database.
connection via <application>psql</application> or any other <literal>libpq</literal>-using
tool with a connection string including the <literal>replication</literal> option,
e.g.:
-<programlisting>
-psql "dbname=postgres replication=database" -c "IDENTIFY_SYSTEM;"
-</programlisting>
+<programlisting>
+psql "dbname=postgres replication=database" -c "IDENTIFY_SYSTEM;"
+</programlisting>
However it is often more useful to use
<application>pg_receivexlog</application> (for physical replication) or
<application>pg_recvlogical</application> (for logical replication).

doc/src/sgml/test-decoding.sgml

@@ -23,7 +23,7 @@
Typical output from this plugin, used over the SQL logical decoding
interface, might be:
-<programlisting>
+<programlisting>
postgres=# SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'include-xids', '0');
location | xid | data
-----------+-----+--------------------------------------------------
@@ -36,7 +36,7 @@ postgres=# SELECT * FROM pg_logical_slot_get_changes('test_slot', NULL, NULL, 'i
0/16D3398 | 692 | table public.data: DELETE: id[int4]:3
0/16D3398 | 692 | COMMIT
(8 rows)
-</programlisting>
+</programlisting>
</para>
</sect1>
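In the call shown above, the two NULL arguments are the upto_lsn and upto_nchanges limits; giving upto_nchanges a small value is a cheap way to sample the start of a large backlog without consuming it (decoding stops once the limit has been reached, at the next transaction boundary, so a few extra rows may still come back):

SELECT * FROM pg_logical_slot_peek_changes('test_slot', NULL, 1, 'include-xids', '0');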