CREATE EXTENSION pgstattuple;
--
-- It's difficult to come up with platform-independent test cases for
-- the pgstattuple functions, but the results for empty tables and
-- indexes should be platform-independent.
--
create table test (a int primary key, b int[]);
select * from pgstattuple('test');
 table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
         0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
(1 row)

select * from pgstattuple('test'::text);
 table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
         0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
(1 row)

select * from pgstattuple('test'::name);
 table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
         0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
(1 row)

select * from pgstattuple('test'::regclass);
 table_len | tuple_count | tuple_len | tuple_percent | dead_tuple_count | dead_tuple_len | dead_tuple_percent | free_space | free_percent
-----------+-------------+-----------+---------------+------------------+----------------+--------------------+------------+--------------
         0 |           0 |         0 |             0 |                0 |              0 |                  0 |          0 |            0
(1 row)

select pgstattuple(oid) from pg_class where relname = 'test';
     pgstattuple
---------------------
 (0,0,0,0,0,0,0,0,0)
(1 row)

select pgstattuple(relname) from pg_class where relname = 'test';
     pgstattuple
---------------------
 (0,0,0,0,0,0,0,0,0)
(1 row)

select version, tree_level,
    index_size / current_setting('block_size')::int as index_size,
    root_block_no, internal_pages, leaf_pages, empty_pages, deleted_pages,
    avg_leaf_density, leaf_fragmentation
    from pgstatindex('test_pkey');
 version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
       3 |          0 |          1 |             0 |              0 |          0 |           0 |             0 |              NaN |                NaN
(1 row)

select version, tree_level,
    index_size / current_setting('block_size')::int as index_size,
    root_block_no, internal_pages, leaf_pages, empty_pages, deleted_pages,
    avg_leaf_density, leaf_fragmentation
    from pgstatindex('test_pkey'::text);
 version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
       3 |          0 |          1 |             0 |              0 |          0 |           0 |             0 |              NaN |                NaN
(1 row)

select version, tree_level,
    index_size / current_setting('block_size')::int as index_size,
    root_block_no, internal_pages, leaf_pages, empty_pages, deleted_pages,
    avg_leaf_density, leaf_fragmentation
    from pgstatindex('test_pkey'::name);
 version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
       3 |          0 |          1 |             0 |              0 |          0 |           0 |             0 |              NaN |                NaN
(1 row)

select version, tree_level,
    index_size / current_setting('block_size')::int as index_size,
    root_block_no, internal_pages, leaf_pages, empty_pages, deleted_pages,
    avg_leaf_density, leaf_fragmentation
    from pgstatindex('test_pkey'::regclass);
 version | tree_level | index_size | root_block_no | internal_pages | leaf_pages | empty_pages | deleted_pages | avg_leaf_density | leaf_fragmentation
---------+------------+------------+---------------+----------------+------------+-------------+---------------+------------------+--------------------
       3 |          0 |          1 |             0 |              0 |          0 |           0 |             0 |              NaN |                NaN
(1 row)

select pg_relpages('test');
 pg_relpages
-------------
           0
(1 row)

select pg_relpages('test_pkey');
 pg_relpages
-------------
           1
(1 row)

select pg_relpages('test_pkey'::text);
 pg_relpages
-------------
           1
(1 row)

select pg_relpages('test_pkey'::name);
 pg_relpages
-------------
           1
(1 row)

select pg_relpages('test_pkey'::regclass);
 pg_relpages
-------------
           1
(1 row)

select pg_relpages(oid) from pg_class where relname = 'test_pkey';
 pg_relpages
-------------
           1
(1 row)

select pg_relpages(relname) from pg_class where relname = 'test_pkey';
 pg_relpages
-------------
           1
(1 row)

create index test_ginidx on test using gin (b);
select * from pgstatginindex('test_ginidx');
 version | pending_pages | pending_tuples
---------+---------------+----------------
       2 |             0 |              0
(1 row)

create index test_hashidx on test using hash (b);
select * from pgstathashindex('test_hashidx');
 version | bucket_pages | overflow_pages | bitmap_pages | unused_pages | live_items | dead_items | free_percent
---------+--------------+----------------+--------------+--------------+------------+------------+--------------
       4 |            4 |              0 |            1 |            0 |          0 |          0 |          100
(1 row)

-- these should error with the wrong type
select pgstatginindex('test_pkey');
ERROR:  relation "test_pkey" is not a GIN index
select pgstathashindex('test_pkey');
ERROR:  relation "test_pkey" is not a hash index
select pgstatindex('test_ginidx');
ERROR:  relation "test_ginidx" is not a btree index
select pgstathashindex('test_ginidx');
ERROR:  relation "test_ginidx" is not a hash index
select pgstatindex('test_hashidx');
ERROR:  relation "test_hashidx" is not a btree index
select pgstatginindex('test_hashidx');
ERROR:  relation "test_hashidx" is not a GIN index
-- check that using any of these functions with unsupported relations will fail
create table test_partitioned (a int) partition by range (a);
create index test_partitioned_index on test_partitioned(a);
-- these should all fail
select pgstattuple('test_partitioned');
ERROR:  "test_partitioned" (partitioned table) is not supported
select pgstattuple('test_partitioned_index');
ERROR:  "test_partitioned_index" (partitioned index) is not supported
select pgstattuple_approx('test_partitioned');
ERROR:  "test_partitioned" is not a table or materialized view
select pg_relpages('test_partitioned');
ERROR:  "test_partitioned" is not a table, index, materialized view, sequence, or TOAST table
select pgstatindex('test_partitioned');
ERROR:  relation "test_partitioned" is not a btree index
select pgstatginindex('test_partitioned');
ERROR:  relation "test_partitioned" is not a GIN index
select pgstathashindex('test_partitioned');
ERROR:  "test_partitioned" is not an index
create view test_view as select 1;
-- these should all fail
select pgstattuple('test_view');
ERROR:  "test_view" (view) is not supported
select pgstattuple_approx('test_view');
ERROR:  "test_view" is not a table or materialized view
select pg_relpages('test_view');
ERROR:  "test_view" is not a table, index, materialized view, sequence, or TOAST table
select pgstatindex('test_view');
ERROR:  relation "test_view" is not a btree index
select pgstatginindex('test_view');
ERROR:  relation "test_view" is not a GIN index
select pgstathashindex('test_view');
ERROR:  "test_view" is not an index
create foreign data wrapper dummy;
create server dummy_server foreign data wrapper dummy;
create foreign table test_foreign_table () server dummy_server;
-- these should all fail
select pgstattuple('test_foreign_table');
ERROR:  "test_foreign_table" (foreign table) is not supported
select pgstattuple_approx('test_foreign_table');
ERROR:  "test_foreign_table" is not a table or materialized view
select pg_relpages('test_foreign_table');
ERROR:  "test_foreign_table" is not a table, index, materialized view, sequence, or TOAST table
select pgstatindex('test_foreign_table');
ERROR:  relation "test_foreign_table" is not a btree index
select pgstatginindex('test_foreign_table');
ERROR:  relation "test_foreign_table" is not a GIN index
select pgstathashindex('test_foreign_table');
ERROR:  "test_foreign_table" is not an index
-- a partition of a partitioned table should work though
create table test_partition partition of test_partitioned for values from (1) to (100);
select pgstattuple('test_partition');
     pgstattuple
---------------------
 (0,0,0,0,0,0,0,0,0)
(1 row)

select pgstattuple_approx('test_partition');
  pgstattuple_approx
-----------------------
 (0,0,0,0,0,0,0,0,0,0)
(1 row)

select pg_relpages('test_partition');
 pg_relpages
-------------
           0
(1 row)

-- not for the index calls though, of course
select pgstatindex('test_partition');
ERROR:  relation "test_partition" is not a btree index
select pgstatginindex('test_partition');
ERROR:  relation "test_partition" is not a GIN index
select pgstathashindex('test_partition');
ERROR:  "test_partition" is not an index
-- an actual index of a partitioned table should work though
create index test_partition_idx on test_partition(a);
create index test_partition_hash_idx on test_partition using hash (a);
-- these should work
select pgstatindex('test_partition_idx');
         pgstatindex
------------------------------
 (3,0,8192,0,0,0,0,0,NaN,NaN)
(1 row)

select pgstathashindex('test_partition_hash_idx');
   pgstathashindex
---------------------
 (4,8,0,1,0,0,0,100)
(1 row)

drop table test_partitioned;
drop view test_view;
drop foreign table test_foreign_table;
drop server dummy_server;
drop foreign data wrapper dummy;