Commit Graph

4644 Commits

Author SHA1 Message Date
David Rowley cf96907aad Fix incorrect comment in pg_get_partkeydef()
The comment claimed the output of the function was prefixed by "PARTITION
BY".  This is incorrect.

Author: Japin Li
Reviewed-by: Ashutosh Bapat
Discussion: https://postgr.es/m/MEYP282MB166923B446FF5FE55B9DACB7B6B69@MEYP282MB1669.AUSP282.PROD.OUTLOOK.COM
2023-03-07 14:33:28 +13:00
Peter Eisentraut 102a5c164a SQL JSON path enhanced numeric literals
Add support for non-decimal integer literals and underscores in
numeric literals to SQL JSON path language.  This follows the rules of
ECMAScript, as referred to by the SQL standard.

Internally, all the numeric literal parsing of jsonpath goes through
numeric_in, which already supports all this, so this patch is just a
bit of lexer work and some tests and documentation.
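
For illustration (examples assumed from the ECMAScript-style rules described
above, not part of the original message), such literals can now appear in path
expressions:

    SELECT jsonb_path_query('{"x": 255}', '$.x ? (@ == 0xff)');
    SELECT jsonb_path_query('{"n": 1500000}', '$.n ? (@ == 1_500_000)');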

Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b11b25bb-6ec1-d42f-cedd-311eae59e1fb@enterprisedb.com
2023-03-05 15:19:58 +01:00
Peter Eisentraut b1307b8b60 Fix incorrect format placeholders 2023-03-03 07:01:18 +01:00
Tom Lane d7056bc1c7 Avoid fetching one past the end of translate()'s "to" parameter.
This is usually harmless, but if you were very unlucky it could
provoke a segfault due to the "to" string being right up against
the end of memory.  Found via valgrind testing (so we might've
found it earlier, except that our regression tests lacked any
exercise of translate()'s deletion feature).
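
For context, translate()'s deletion feature drops "from" characters that have
no counterpart in "to"; a quick illustration (not from the original message):

    SELECT translate('abcdef', 'abc', 'xy');   -- 'c' is deleted, yielding 'xydef'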

Fix by switching the order of the test-for-end-of-string and
advance-pointer steps.  While here, compute "to_ptr + tolen"
just once.  (Smarter compilers might figure that out for
themselves, but let's just make sure.)

Report and fix by Daniil Anisimov, in bug #17816.

Discussion: https://postgr.es/m/17816-70f3d2764e88a108@postgresql.org
2023-03-01 11:30:31 -05:00
Michael Paquier b8da37b3ad Rework pg_input_error_message(), now renamed pg_input_error_info()
pg_input_error_info() is now a SQL function able to return a row with
more than just the error message generated for incorrect data type
inputs when these are able to handle soft failures, returning more
contents of ErrorData, as of:
- The error message (same as before).
- The error detail, if set.
- The error hint, if set.
- SQL error code.

All the regression tests that relied on pg_input_error_message() are
updated to reflect the effects of the rename.
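
A usage sketch (assuming the function takes the input string and the target
type name, per the description above):

    SELECT * FROM pg_input_error_info('not-a-number', 'integer');
    -- returns one row with the message, detail, hint and SQL error code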

Per discussion with Tom Lane and Andrew Dunstan.

Author: Nathan Bossart
Discussion: https://postgr.es/m/139a68e1-bd1f-a9a7-b5fe-0be9845c6311@dunslane.net
2023-02-28 08:04:13 +09:00
Tom Lane 728560db7d Suppress compiler warnings in new pgstats code.
Some clang versions whine about comparing an enum variable to
a value outside the range of the enum, on the grounds that the
result must be constant.  In the cases we fix here, the loops
will terminate only if the enum variable can in fact hold a
value one beyond its declared range.  While that's very likely
to always be true for these enum types, it still seems like a
poor coding practice to assume it; so use "int" loop variables
instead to silence the warnings.  (This matches what we've done
in other places, for example loops over the range of ForkNumber.)

While at it, let's drop the XXX_FIRST macros for these enums and just
write zeroes for the loop start values.  The apparent flexibility
seems rather illusory given that iterating up to one-less-than-
the-number-of-values is only correct for a zero-based range.

Melanie Plageman

Discussion: https://postgr.es/m/20520.1677435600@sss.pgh.pa.us
2023-02-27 17:21:31 -05:00
Tom Lane ded7b7bbc3 Silence more compiler warnings introduced by d87d548cd0.
Per buildfarm, there are still a couple of functions where we
get warnings from compilers that don't know that elog(ERROR)
doesn't return.
2023-02-26 14:05:03 -05:00
Jeff Davis 05fc551796 Silence compiler warnings introduced by d87d548cd0.
Reported-by: Justin Pryzby
Discussion: https://postgr.es/m/20230224002029.GQ1653@telsasoft.com
2023-02-24 09:11:35 -08:00
Jeff Davis e0b3074e89 Remove unnecessary #ifdef USE_ICU and branch.
Now that the provider-independent API pg_strnxfrm() is available, we
no longer need the special cases for ICU in hashfunc.c and varchar.c.

Reviewed-by: Peter Eisentraut, Peter Geoghegan
Discussion: https://postgr.es/m/a581136455c940d7bd0ff482d3a2bd51af25a94f.camel%40j-davis.com
2023-02-23 11:20:00 -08:00
Jeff Davis 6974a8f768 Refactor to introduce pg_locale_deterministic().
Avoids the need for callers to test for NULL, and also avoids the need
to access the pg_locale_t structure directly.

Reviewed-by: Peter Eisentraut, Peter Geoghegan
Discussion: https://postgr.es/m/a581136455c940d7bd0ff482d3a2bd51af25a94f.camel%40j-davis.com
2023-02-23 11:17:41 -08:00
Jeff Davis d87d548cd0 Refactor to add pg_strcoll(), pg_strxfrm(), and variants.
Offers a generally better separation of responsibilities for collation
code. Also, a step towards multi-lib ICU, which should be based on a
clean separation of the routines required for collation providers.

Callers with NUL-terminated strings should call pg_strcoll() or
pg_strxfrm(); callers with strings and their length should call the
variants pg_strncoll() or pg_strnxfrm().

Reviewed-by: Peter Eisentraut, Peter Geoghegan
Discussion: https://postgr.es/m/a581136455c940d7bd0ff482d3a2bd51af25a94f.camel%40j-davis.com
2023-02-23 10:55:20 -08:00
Peter Eisentraut 2ddab010c2 Implement ANY_VALUE aggregate
SQL:2023 defines an ANY_VALUE aggregate whose purpose is to emit an
implementation-dependent (i.e. non-deterministic) value from the
aggregated rows.
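
A minimal illustration (table and column names hypothetical):

    SELECT dept, any_value(employee) AS sample_employee
    FROM staff
    GROUP BY dept;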

Author: Vik Fearing <vik@postgresfriends.org>
Reviewed-by: Peter Eisentraut <peter.eisentraut@enterprisedb.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/5cff866c-10a8-d2df-32cb-e9072e6b04a2@postgresfriends.org
2023-02-22 09:33:07 +01:00
Tom Lane 8028e294b4 Detect overflow in timestamp[tz] subtraction.
It's possible to overflow the int64 microseconds field of the
output interval when subtracting two timestamps.  Detect that
instead of silently returning a bogus result.
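
An illustrative case (boundary values chosen only to force the overflow):

    SELECT timestamp '294276-12-31' - timestamp '4714-11-24 00:00:00 BC';
    -- previously returned a bogus interval; now raises an overflow error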

Nick Babadzhanian

Discussion: https://postgr.es/m/CABw73Uq2oJ3E+kYvvDuY04EkhhkChim2e-PaghBDjOmgUAMWGw@mail.gmail.com
2023-02-20 17:26:25 -05:00
Tom Lane f0d0394e84 Fix parsing of ISO-8601 interval fields with exponential notation.
Historically we've accepted interval input like 'P.1e10D'.  This
is probably an accident of having used strtod() to do the parsing,
rather than something anyone intended, but it's been that way for
a long time.  Commit e39f99046 broke this by trying to parse the
integer and fractional parts separately, without accounting for
the possibility of an exponent.  In principle that coding allowed
for precise conversions of field values wider than 15 decimal
digits, but that does not seem like a goal worth sweating bullets
for.  So, rather than trying to manage an exponent on top of the
existing complexity, let's just revert to the previous coding that
used strtod() by itself.  We can still improve on the old code to
the extent of allowing the value to range up to 1.0e15 rather than
only INT_MAX.  (Allowing more than that risks creating problems
due to precision loss: the converted fractional part might have
absolute value more than 1.  Perhaps that could be dealt with in
some way, but it really does not seem worth additional effort.)
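
For example, the historical spelling accepted again by this change (illustrative):

    SELECT interval 'P.1e10D';   -- fractional day count given in exponential notation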

Per bug #17795 from Alexander Lakhin.  Back-patch to v15 where
the faulty code came in.

Discussion: https://postgr.es/m/17795-748d6db3ed95d313@postgresql.org
2023-02-20 16:55:59 -05:00
Tom Lane 393430f575 Print the correct aliases for DML target tables in ruleutils.
ruleutils.c blindly printed the user-given alias (or nothing if there
hadn't been one) for the target table of INSERT/UPDATE/DELETE queries.
That works a large percentage of the time, but not always: for queries
appearing in WITH, it's possible that we chose a different alias to
avoid conflict with outer-scope names.  Since the chosen alias would
be used in any Var references to the target table, this'd lead to an
inconsistent printout with consequences such as dump/restore failures.

The correct logic for printing (or not) a relation alias was embedded
in get_from_clause_item.  Factor it out to a separate function so that
we don't need a jointree node to use it.  (Only a limited part of that
function can be reached from these new call sites, but this seems like
the cleanest non-duplicative factorization.)

In passing, I got rid of a redundant "\d+ rules_src" step in rules.sql.

Initial report from Jonathan Katz; thanks to Vignesh C for analysis.
This has been broken for a long time, so back-patch to all supported
branches.

Discussion: https://postgr.es/m/e947fa21-24b2-f922-375a-d4f763ef3e4b@postgresql.org
Discussion: https://postgr.es/m/CALDaNm1MMntjmT_NJGp-Z=xbF02qHGAyuSHfYHias3TqQbPF2w@mail.gmail.com
2023-02-17 16:40:34 -05:00
Peter Eisentraut 3b12e68a5c Change argument type of pq_sendbytes from char * to void *
This is a follow-up to 1f605b82ba.  It
allows getting rid of further casts at call sites.

Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/783a4edb-84f9-6df2-7470-2ef5ccc6607a@enterprisedb.com
2023-02-14 13:32:19 +01:00
Peter Eisentraut bd944884e9 Consolidate ItemPointer to Datum conversion functions
Instead of defining the same set of macros several times, define it
once in an appropriate header file.  In passing, convert to inline
functions.

Reviewed-by: Heikki Linnakangas <hlinnaka@iki.fi>
Discussion: https://www.postgresql.org/message-id/flat/844dd4c5-e5a1-3df1-bfaf-d1e1c2a16e45%40enterprisedb.com
2023-02-13 09:57:15 +01:00
Tom Lane 5e80d35154 Avoid dereferencing an undefined pointer in DecodeInterval().
Commit e39f99046 moved some code up closer to the start of
DecodeInterval(), without noticing that it had been implicitly
relying on previous checks to reject the case of empty input.
Given empty input, we'd now dereference a pointer that hadn't been
set, possibly leading to a core dump.  (But if we fail to provoke
a SIGSEGV, nothing bad happens, and the expected syntax error is
thrown a bit later.)

Per bug #17788 from Alexander Lakhin.  Back-patch to v15 where
the fault was introduced.

Discussion: https://postgr.es/m/17788-dabac9f98f7eafd5@postgresql.org
2023-02-12 12:50:55 -05:00
Andres Freund a9c70b46db Add pg_stat_io view, providing more detailed IO statistics
Builds on 28e626bde0 and f30d62c2fc. See the former for motivation.

Each row of the view covers a particular combination of backend type, IO target
object, and IO context (e.g. a client backend's operations on permanent
relations in shared buffers), and each column is the total number of IO
operations of one kind (e.g. writes).  So a cell in the view would be, for
example, the number of blocks of relation data written from shared buffers by
client backends since the last stats reset.

In anticipation of tracking WAL IO and non-block-oriented IO (such as
temporary file IO), the "op_bytes" column specifies the unit of the "reads",
"writes", and "extends" columns for a given row.

Rows for combinations of IO operation, backend type, target object and context
that never occur are omitted entirely. For example, checkpointer will never
operate on temporary relations.

Similarly, if an IO operation never occurs for such a combination, the IO
operation's cell will be null, to distinguish from 0 observed IO
operations. For example, bgwriter should not perform reads.

Note that some of the cells in the view are redundant with fields in
pg_stat_bgwriter (e.g. buffers_backend). For now, these have been kept for
backwards compatibility.
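
A usage sketch (column names as assumed from the description above):

    SELECT backend_type, object, context, reads, writes, extends, op_bytes
    FROM pg_stat_io
    WHERE writes IS NOT NULL;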

Bumps catversion.

Author: Melanie Plageman <melanieplageman@gmail.com>
Author: Samay Sharma <smilingsamay@gmail.com>
Reviewed-by: Maciek Sakrejda <m.sakrejda@gmail.com>
Reviewed-by: Lukas Fittl <lukas@fittl.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
2023-02-11 09:52:15 -08:00
Andres Freund 28e626bde0 pgstat: Infrastructure for more detailed IO statistics
This commit adds the infrastructure for more detailed IO statistics. The calls
to actually count IOs, a system view to access the new statistics,
documentation and tests will be added in subsequent commits, to make review
easier.

While we already had some IO statistics, e.g. in pg_stat_bgwriter and
pg_stat_database, they did not provide sufficient detail to understand what
the main sources of IO are, or whether configuration changes could avoid
IO. E.g., pg_stat_bgwriter.buffers_backend does contain the number of buffers
written out by a backend, but as that includes extending relations (always
done by backends) and writes triggered by the use of buffer access strategies,
it cannot easily be used to tune background writer or checkpointer. Similarly,
pg_stat_database.blks_read cannot easily be used to tune shared_buffers /
compute a cache hit ratio, as the use of buffer access strategies will often
prevent a large fraction of the read blocks from ending up in shared_buffers.

The new IO statistics count IO operations (evict, extend, fsync, read, reuse,
and write), and are aggregated for each combination of backend type (backend,
autovacuum worker, bgwriter, etc), target object of the IO (relations, temp
relations) and context of the IO (normal, vacuum, bulkread, bulkwrite).

What is tracked in this series of patches is sufficient to perform the
aforementioned analyses. Further details, e.g. tracking the number of buffer
hits, would make that even easier, but was left out for now, to keep the scope
of the already large patchset manageable.

Bumps PGSTAT_FILE_FORMAT_ID.

Author: Melanie Plageman <melanieplageman@gmail.com>
Reviewed-by: Andres Freund <andres@anarazel.de>
Reviewed-by: Justin Pryzby <pryzby@telsasoft.com>
Reviewed-by: Kyotaro Horiguchi <horikyota.ntt@gmail.com>
Discussion: https://postgr.es/m/20200124195226.lth52iydq2n2uilq@alap3.anarazel.de
2023-02-08 20:53:42 -08:00
Peter Eisentraut aa69541046 Remove useless casts to (void *) in arguments of some system functions
The affected functions are: bsearch, memcmp, memcpy, memset, memmove,
qsort, repalloc

Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/fd9adf5d-b1aa-e82f-e4c7-263c30145807%40enterprisedb.com
2023-02-07 06:57:59 +01:00
Peter Eisentraut 54a177a948 Remove useless casts to (void *) in hash_search() calls
Some of these appear to be leftovers from when hash_search() took a
char * argument (changed in 5999e78fc4).

Since after this there is some more horizontal space available, do
some light reformatting where suitable.

Reviewed-by: Corey Huinker <corey.huinker@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/fd9adf5d-b1aa-e82f-e4c7-263c30145807%40enterprisedb.com
2023-02-06 09:41:01 +01:00
Dean Rasheed faff8f8e47 Allow underscores in integer and numeric constants.
This allows underscores to be used in integer and numeric literals,
and their corresponding type input functions, for visual grouping.
For example:

    1_500_000_000
    3.14159_26535_89793
    0xffff_ffff
    0b_1001_0001

A single underscore is allowed between any 2 digits, or immediately
after the base prefix indicator of non-decimal integers, per SQL:202x
draft.

Peter Eisentraut and Dean Rasheed

Discussion: https://postgr.es/m/84aae844-dc55-a4be-86d9-4f0fa405cc97%40enterprisedb.com
2023-02-04 09:48:51 +00:00
Peter Eisentraut 1b6f632a35 Remove unused code related to unknown type
These are leftovers obsoleted by
cfd9be939e.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/e7887965-9e70-fd01-c2d1-5bc02f9169aa%40enterprisedb.com
2023-02-04 07:56:09 +01:00
Dean Rasheed b2d4792890 Make int64_div_fast_to_numeric() more robust.
The prior coding of int64_div_fast_to_numeric() had a number of bugs
that would cause it to fail under different circumstances, such as
with log10val2 <= 0, or log10val2 a multiple of 4, or in the "slow"
numeric path with log10val2 >= 10.

None of those could be triggered by any of our current code, which
only uses log10val2 = 3 or 6. However, they made it a hazard for any
future code that might use it. Also, since this is exported by
numeric.c, users writing their own C code might choose to use it.

Therefore fix, and back-patch to v14, where it was introduced.

Dean Rasheed, reviewed by Tom Lane.

Discussion: https://postgr.es/m/CAEZATCW8gXgW0tgPxPgHDPhVX71%2BSWFRkhnXy%2BTfGDsKLepu2g%40mail.gmail.com
2023-02-03 11:13:34 +00:00
Dean Rasheed 0736fc1ceb Clarify the choice of rscale in numeric_sqrt().
Improve the comment explaining the choice of rscale in numeric_sqrt(),
and ensure that the code works consistently when other values of
NBASE/DEC_DIGITS are used.

Note that, in practice, we always expect DEC_DIGITS == 4, and this
does not change the computation in that case.

Joel Jacobson and Dean Rasheed

Discussion: https://postgr.es/m/06712c29-98e9-43b3-98da-f234d81c6e49%40app.fastmail.com
2023-02-02 09:41:22 +00:00
Dean Rasheed 9a84f2947b Ensure that numeric.c compiles with other NBASE values.
As noted in the comments, support for different NBASE values is really
only of historical interest, but as long as we're keeping it, we might
as well make sure that it compiles.

Joel Jacobson

Discussion: https://postgr.es/m/06712c29-98e9-43b3-98da-f234d81c6e49%40app.fastmail.com
2023-02-02 09:39:08 +00:00
Tom Lane 2489d76c49 Make Vars be outer-join-aware.
Traditionally we used the same Var struct to represent the value
of a table column everywhere in parse and plan trees.  This choice
predates our support for SQL outer joins, and it's really a pretty
bad idea with outer joins, because the Var's value can depend on
where it is in the tree: it might go to NULL above an outer join.
So expression nodes that are equal() per equalfuncs.c might not
represent the same value, which is a huge correctness hazard for
the planner.

To improve this, decorate Var nodes with a bitmapset showing
which outer joins (identified by RTE indexes) may have nulled
them at the point in the parse tree where the Var appears.
This allows us to trust that equal() Vars represent the same value.
A certain amount of klugery is still needed to cope with cases
where we re-order two outer joins, but it's possible to make it
work without sacrificing that core principle.  PlaceHolderVars
receive similar decoration for the same reason.

In the planner, we include these outer join bitmapsets into the relids
that an expression is considered to depend on, and in consequence also
add outer-join relids to the relids of join RelOptInfos.  This allows
us to correctly perceive whether an expression can be calculated above
or below a particular outer join.

This change affects FDWs that want to plan foreign joins.  They *must*
follow suit when labeling foreign joins in order to match with the
core planner, but for many purposes (if postgres_fdw is any guide)
they'd prefer to consider only base relations within the join.
To support both requirements, redefine ForeignScan.fs_relids as
base+OJ relids, and add a new field fs_base_relids that's set up by
the core planner.

Large though it is, this commit just does the minimum necessary to
install the new mechanisms and get check-world passing again.
Follow-up patches will perform some cleanup.  (The README additions
and comments mention some stuff that will appear in the follow-up.)

Patch by me; thanks to Richard Guo for review.

Discussion: https://postgr.es/m/830269.1656693747@sss.pgh.pa.us
2023-01-30 13:16:20 -05:00
David Rowley 456fa635a9 Teach planner about more monotonic window functions
9d9c02ccd introduced runConditions for window functions to allow
monotonic window function evaluation to be made more efficient when the
window function value went beyond some value that it would never go back
from due to its monotonic nature.  That commit added prosupport functions
to inform the planner that row_number(), rank(), dense_rank() and some
forms of count(*) were monotonic.  Here we add support for ntile(),
cume_dist() and percent_rank().
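
A query shape that can now benefit (table name hypothetical):

    SELECT *
    FROM (SELECT id, ntile(4) OVER (ORDER BY id) AS quartile FROM events) AS sub
    WHERE quartile = 1;   -- the run condition lets evaluation stop once ntile() exceeds 1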

Reviewed-by: Melanie Plageman
Discussion: https://postgr.es/m/CAApHDvqR+VqB8s+xR-24bzJbU8xyFrBszJ17qKgECf7cWxLCaA@mail.gmail.com
2023-01-27 16:08:41 +13:00
Tom Lane 3a28d78089 Improve TimestampDifferenceMilliseconds to cope with overflow sanely.
We'd like to use TimestampDifferenceMilliseconds with the stop_time
possibly being TIMESTAMP_INFINITY, but up to now it's disclaimed
responsibility for overflow cases.  Define it to clamp its output to
the range [0, INT_MAX], handling overflow correctly.  (INT_MAX rather
than LONG_MAX seems appropriate, because the function is already
described as being intended for calculating wait times for WaitLatch
et al, and that infrastructure only handles waits up to INT_MAX.
Also, this choice gets rid of cross-platform behavioral differences.)

Having done that, we can replace some ad-hoc code in walreceiver.c
with a simple call to TimestampDifferenceMilliseconds.

While at it, fix some buglets in existing callers of
TimestampDifferenceMilliseconds: basebackup_copy.c had not read the
memo about TimestampDifferenceMilliseconds never returning a negative
value, and postmaster.c had not read the memo about Min() and Max()
being macros with multiple-evaluation hazards.  Neither of these
quite seem worth back-patching.

Patch by me; thanks to Nathan Bossart for review.

Discussion: https://postgr.es/m/3126727.1674759248@sss.pgh.pa.us
2023-01-26 17:09:12 -05:00
Dean Rasheed 6dfacbf72b Add non-decimal integer support to type numeric.
This enhances the numeric type input function, adding support for
hexadecimal, octal, and binary integers of any size, up to the limits
of the numeric type.

Since 6fcda9aba8, such non-decimal integers have been accepted by the
parser as integer literals and passed through to numeric_in(). This
commit gives numeric_in() the ability to handle them.
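
For example (illustrative inputs):

    SELECT '0x42f'::numeric, '0o273'::numeric, '0b100101'::numeric;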

While at it, simplify the handling of NaN and infinities, reducing the
number of calls to pg_strncasecmp(), and arrange for pg_strncasecmp()
to not be called at all for regular numbers. This gives a significant
performance improvement for decimal inputs, more than offsetting the
small performance hit of checking for non-decimal input.

Discussion: https://postgr.es/m/CAEZATCV8XShnmT9HZy25C%2Bo78CVOFmUN5EM9FRAZ5xvYTggPMg%40mail.gmail.com
2023-01-23 19:21:22 +00:00
Dean Rasheed 0aa38db56b Optimise numeric division for 3 and 4 base-NBASE digit divisors.
On platforms with 128-bit integer support, introduce a new function
div_var_int64(), along the same lines as div_var_int() added in
d1b307eef2 for divisors with 1 or 2 base-NBASE digits, and use it to
speed up div_var() and div_var_fast() in a similar way when the
divisor has 3 or 4 base-NBASE digits.

This gives significant performance gains for divisors with 9-16
decimal digits.

Joel Jacobson.

Discussion:
  https://postgr.es/m/b7a5893d-af18-4c0b-8918-96de5f1bbf39%40app.fastmail.com
  https://postgr.es/m/CAEZATCXGm%3DDyTq%3DFrcOqC0gPMVveKUYTaD5KRRoajrUTiWxVMw%40mail.gmail.com
2023-01-23 11:58:28 +00:00
David Rowley 16fd03e956 Allow parallel aggregate on string_agg and array_agg
This adds combine, serial and deserial functions for the array_agg() and
string_agg() aggregate functions, thus allowing these aggregates to
partake in partial aggregations.  This allows both parallel aggregation to
take place when these aggregates are present and also allows additional
partition-wise aggregation plan shapes to include plans that require
additional aggregation once the partially aggregated results from the
partitions have been combined.

Author: David Rowley
Reviewed-by: Andres Freund, Tomas Vondra, Stephen Frost, Tom Lane
Discussion: https://postgr.es/m/CAKJS1f9sx_6GTcvd6TMuZnNtCh0VhBzhX6FZqw17TgVFH-ga_A@mail.gmail.com
2023-01-23 17:35:01 +13:00
David Rowley 9f1ca6ce65 Use appendStringInfoSpaces in more places
This adjusts a few places which were appending a string constant
containing spaces onto a StringInfo.  We have appendStringInfoSpaces for
that job, so let's use that instead.

For the change to jsonb.c's add_indent() function, appendStringInfoString
was being called inside a loop to append 4 spaces on each loop.  This
meant that enlargeStringInfo would get called once per loop.  Here it
should be much more efficient to get rid of the loop and just calculate
the number of spaces with "level * 4" and just append all the spaces in
one go.

Here we additionally adjust the appendStringInfoSpaces function so it
makes use of memset rather than a while loop to apply the required spaces
to the StringInfo.  One of the problems with the while loop was that it
was incrementing one variable and decrementing another variable once per
loop.  That's more work than what's required to get the job done.  We may
as well use memset for this rather than trying to optimize the existing
loop.  Some testing has shown memset is faster even for very small sizes.

Discussion: https://postgr.es/m/CAApHDvp_rKkvwudBKgBHniNRg67bzXVjyvVKfX0G2zS967K43A@mail.gmail.com
2023-01-20 13:07:24 +13:00
Tom Lane 5a617d75d3 Fix ts_headline() to handle ORs and phrase queries more honestly.
This patch largely reverts what I did in commits c9b0c678d and
78e73e875.  The maximum cover length limit that I added in 78e73e875
(to band-aid over c9b0c678d's performance issues) creates too many
user-visible behavior discrepancies, as complained of for example in
bug #17691.  The real problem with hlCover() is not what I thought
at the time, but more that it seems to have been designed with only
AND tsquery semantics in mind.  It doesn't work quite right for OR,
and even less so for NOT or phrase queries.  However, we can improve
that situation by building a variant of TS_execute() that returns a
list of match locations.  We already get an ExecPhraseData struct
representing match locations for the primitive case of a simple match,
as well as one for a phrase match; we just need to add some logic to
combine these for AND and OR operators.  The result is a list of
ExecPhraseDatas, which hlCover can regard as having simple AND
semantics, so that its old algorithm works correctly.
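
An example of the kind of OR query affected (output intentionally not shown here):

    SELECT ts_headline('english',
                       'The quick brown fox jumps over the lazy dog',
                       to_tsquery('english', 'fox | cat'));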

There's still a lot not to like about ts_headline's behavior, but
I think the remaining issues have to do with the heuristics used
in mark_hl_words and mark_hl_fragments (which, likewise, were not
revisited when phrase search was added).  Improving those is a task
for another day.

Patch by me; thanks to Alvaro Herrera for review.

Discussion: https://postgr.es/m/840.1669405935@sss.pgh.pa.us
2023-01-19 16:21:44 -05:00
Alvaro Herrera 438e6b7240 Remove some dead code in selfuncs.c
RelOptInfo.userid is the same for all relations in a given inheritance
tree, so the code in examine_variable() and examine_simple_variable()
that repeats the ACL checks on the root parent rel instead of a given
leaf child relation need not recompute userid each time.

Author: Amit Langote <amitlangote09@gmail.com>
Reported-by: Justin Pryzby <pryzby@telsasoft.com>
Discussion: https://postgr.es/m/20221210201753.GA27893@telsasoft.com
2023-01-19 12:54:15 +01:00
Amit Kapila d540a02a72 Display the leader apply worker's PID for parallel apply workers.
Add leader_pid to pg_stat_subscription. leader_pid is the process ID of
the leader apply worker if this process is a parallel apply worker. If
this field is NULL, it indicates that the process is a leader apply
worker or a synchronization worker. The new column makes it easier to
distinguish parallel apply workers from other kinds of workers and helps
to identify the leader for the parallel workers corresponding to a
particular subscription.

Additionally, update the leader_pid column in pg_stat_activity as well to
display the PID of the leader apply worker for parallel apply workers.
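
A usage sketch (columns per the description above):

    SELECT subname, pid, leader_pid FROM pg_stat_subscription;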

Author: Hou Zhijie
Reviewed-by: Peter Smith, Sawada Masahiko, Amit Kapila, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
2023-01-18 09:03:12 +05:30
John Naylor e29c565343 Remove dead code in formatting.c
Remove some code guarded by IS_MINUS() or IS_PLUS(), where the entire
stanza is inside an else-block where both of these are false. This
should slightly improve test coverage.

While at it, remove coding that apparently assumes that unsetting a
bit is so expensive that we have to first check if it's already set
in the first place.

Per Coverity report from Ranier Vilela
Analysis and review by Justin Pryzby

Discussion: https://www.postgresql.org/message-id/20221223010818.GP1153%40telsasoft.com
2023-01-17 13:55:49 +07:00
Michael Paquier 02d3448f4f Store IdentLine->pg_user as an AuthToken
While system_user was stored as an AuthToken in IdentLine, pg_user was
stored as a plain string.  This commit changes the code as we start
storing pg_user as an AuthToken too.

This does not have any functional changes, as all the operations on
pg_user only use the string from the AuthToken.  There is no regexp
compiled and no check based on its quoting, yet.  This is in preparation
of more features that intend to extend its capabilities, like support
for regexps and group membership.

Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQRNow4MwkBjgPxywXdJU_K3a9+Pm78JB7De3yQwwkTDew@mail.gmail.com
2023-01-16 13:58:07 +09:00
Tom Lane 647fa50054 Remove arbitrary FUNC_MAX_ARGS limit in int2vectorin and oidvectorin.
int2vectorin limited the number of array elements it'd take to
FUNC_MAX_ARGS, which is probably fine for the traditional use-cases.
But now that pg_publication_rel.prattrs is an int2vector, it's not
fine at all: it's easy to construct cases where that can have up to
about MaxTupleAttributeNumber entries.  Trying to replicate such
tables leads to logical-replication failures.

As long as we have to touch this code anyway, let's just remove
the a-priori limit altogether, and let it accept any size that'll
be allowed by repalloc.  (Note that since int2vector isn't toastable,
we cannot store arrays longer than about BLCKSZ/2; but there is no
good excuse for letting int2vectorin depend on that.  Perhaps we
will lift the no-toast restriction someday.)
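
For instance, an int2vector longer than FUNC_MAX_ARGS (typically 100) entries
now parses (illustrative):

    SELECT array_to_string(ARRAY(SELECT g FROM generate_series(1, 200) g), ' ')::int2vector;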

While at it, also improve the equivalent logic in oidvectorin.
I don't know of any practical use-case for long oidvectors right
now, but doing it right actually makes the code shorter.

Per report from Erik Rijkers.  Back-patch to v15 where
pg_publication_rel.prattrs was added.

Discussion: https://postgr.es/m/668ba539-33c5-8190-ca11-def2913cb94b@xs4all.nl
2023-01-15 17:32:09 -05:00
Andres Freund 250c8ee07e Manual cleanup and pgindent of pgstat and bufmgr related code
This is in preparation for committing a larger patch series in the area.

Discussion: https://postgr.es/m/CAAKRu_bHwGEbzNxxy+MQDkrsgog6aO6iUvajJ4d6PD98gFU7+w@mail.gmail.com
2023-01-13 15:23:17 -08:00
Alexander Korotkov 3161ae86ce Fix jsonpath existence checking of missing variables
The current jsonpath code assumes that the referenced variable always exists.
It could only throw an error at value evaluation time.  At the same time,
existence checking assumes the variable is present without evaluating it, and
error suppression doesn't work for missing variables.

This commit makes existence checking trigger an error for missing variables.
This makes the overall behavior consistent.
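
For example (illustrative), referencing an undefined variable in an existence
check now raises an error rather than assuming the variable exists:

    SELECT jsonb_path_exists('{"a": 1}', '$.a ? (@ == $missing)');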

Backpatch to 12 where jsonpath was introduced.

Reported-by: David G. Johnston
Discussion: https://postgr.es/m/CAKFQuwbeytffJkVnEqDyLZ%3DrQsznoTh1OgDoOF3VmOMkxcTMjA%40mail.gmail.com
Author: Alexander Korotkov, David G. Johnston
Backpatch-through: 12
2023-01-12 18:16:34 +03:00
Michael Paquier 8607630d74 Rename some variables related to ident files in hba.{c,h}
The code that handles authentication for user maps was pretty confusing
with its choice of variable names.  It involves two types of users: a
system user and a Postgres user (well, role), and these were not named
consistently throughout the code that processes the user maps loaded
from pg_ident.conf at authentication.

This commit changes the following things to improve the situation:
- Rename "pg_role" to "pg_user" and "token" to "system_user" in
IdentLine.  These choices are more consistent with the pg_ident.conf
example in the docs, as well.  "token" has been introduced recently in
fc579e1, and it is way worse than the choice before that, "ident_user".
- Switch the order of the fields in IdentLine to map with the order of
the items in the ident files, as of map name, system user and PG user.
- In check_ident_usermap(), rename "regexp_pgrole" to "expanded_pg_user"
when processing a regexp for the system user entry in a user map.  This
variable does not store a regular expression at all: it would be either
a string or a substitution to \1 if the Postgres role is specified as
such.

Author: Jelte Fennema
Discussion: https://postgr.es/m/CAGECzQTkwELHUOAKhvdA+m3tWbUQySHHkExJV8GAZ1pwgbEgXg@mail.gmail.com
2023-01-12 14:23:20 +09:00
Peter Eisentraut d952373a98 New header varatt.h split off from postgres.h
This new header contains all the variable-length data types support
(TOAST support) from postgres.h, which isn't needed by large parts of
the backend code.

Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us>
Discussion: https://www.postgresql.org/message-id/flat/ddcce239-0f29-6e62-4b47-1f8ca742addf%40enterprisedb.com
2023-01-10 05:54:36 +01:00
Amit Kapila f745739697 Fix the display of lock information for spectoken.
A transaction id is now displayed in the transactionid field and the
speculative insertion token is displayed in the objid field.

Author: Sawada Masahiko
Reviewed-by: Amit Kapila
Discussion: https://postgr.es/m/CAD21AoCEKxZztULP1CDm45aSNNR1QO-Bh1q6LMTspQ78PBuJrw@mail.gmail.com
2023-01-10 08:53:47 +05:30
Tom Lane 38d81760c4 Invent random_normal() to provide normally-distributed random numbers.
There is already a version of this in contrib/tablefunc, but it
seems sufficiently widely useful to justify having it in core.

Paul Ramsey

Discussion: https://postgr.es/m/CACowWR0DqHAvOKUCNxTrASFkWsDLqKMd6WiXvVvaWg4pV1BMnQ@mail.gmail.com
2023-01-09 12:44:00 -05:00
David Rowley 3c569049b7 Allow left join removals and unique joins on partitioned tables
This allows left join removals and unique joins to work with partitioned
tables.  The planner just lacked sufficient proofs that a given join
would not cause any row duplication.  Unique indexes currently serve as
that proof, so have get_relation_info() populate the indexlist for
partitioned tables too.

Author: Arne Roland
Reviewed-by: Alvaro Herrera, Zhihong Yu, Amit Langote, David Rowley
Discussion: https://postgr.es/m/c3b2408b7a39433b8230bbcd02e9f302@index.de
2023-01-09 17:15:08 +13:00
Amit Kapila 216a784829 Perform apply of large transactions by parallel workers.
Currently, for large transactions, the publisher sends the data in
multiple streams (changes divided into chunks depending upon
logical_decoding_work_mem), and then on the subscriber-side, the apply
worker writes the changes into temporary files and once it receives the
commit, it reads from those files and applies the entire transaction. To
improve the performance of such transactions, we can instead allow them to
be applied via parallel workers.

In this approach, we assign a new parallel apply worker (if available) as
soon as the xact's first stream is received and the leader apply worker
will send changes to this new worker via shared memory. The parallel apply
worker will directly apply the change instead of writing it to temporary
files. However, if the leader apply worker times out while attempting to
send a message to the parallel apply worker, it will switch to
"partial serialize" mode -  in this mode, the leader serializes all
remaining changes to a file and notifies the parallel apply workers to
read and apply them at the end of the transaction. We use a non-blocking
way to send the messages from the leader apply worker to the parallel
apply to avoid deadlocks. We keep this parallel apply assigned till the
transaction commit is received and also wait for the worker to finish at
commit. This preserves commit ordering and avoid writing to and reading
from files in most cases. We still need to spill if there is no worker
available.

This patch also extends the SUBSCRIPTION 'streaming' parameter so that the
user can control whether to apply the streaming transaction in a parallel
apply worker or spill the change to disk. The user can set the streaming
parameter to 'on/off', or 'parallel'. The parameter value 'parallel' means
the streaming will be applied via a parallel apply worker, if available.
The parameter value 'on' means the streaming transaction will be spilled
to disk. The default value is 'off' (same as current behaviour).

In addition, the patch extends the logical replication STREAM_ABORT
message so that abort_lsn and abort_time can also be sent which can be
used to update the replication origin in parallel apply worker when the
streaming transaction is aborted. Because this message extension is needed
to support parallel streaming, parallel streaming is not supported for
publications on servers < PG16.

Author: Hou Zhijie, Wang wei, Amit Kapila with design inputs from Sawada Masahiko
Reviewed-by: Sawada Masahiko, Peter Smith, Dilip Kumar, Shi yu, Kuroda Hayato, Shveta Mallik
Discussion: https://postgr.es/m/CAA4eK1+wyN6zpaHUkCLorEWNx75MG0xhMwcFhvjqm2KURZEAGw@mail.gmail.com
2023-01-09 07:52:45 +05:30
Alexander Korotkov cd9479af2a Improve GIN cost estimation
GIN index scans were not taking any descent CPU-based cost into account.  That
made them look cheaper than other types of indexes when they shouldn't be.

We use the same heuristic as for btree indexes, but multiply it by the number
of searched entries.

Additionally, the CPU cost for the tree was based largely on a
genericcostestimate.  For a GIN index, we should not charge index quals per
tuple, but per entry. On top of this, charge cpu_index_tuple_cost per actual
tuple.

This should fix the cases where a GIN index is preferred over a btree and
the ones where a memoize node is not added on top of the GIN index scan
because it seemed too cheap.

We don't backpatch this, to avoid unexpected plan changes in stable versions.

Discussion: https://postgr.es/m/CABs3KGQnOkyQ42-zKQqiE7M0Ks9oWDSee%3D%2BJx3-TGq%3D68xqWYw%40mail.gmail.com
Discussion: https://postgr.es/m/3188617.44csPzL39Z%40aivenronan
Author: Ronan Dunklau
Reported-By: Hung Nguyen
Reviewed-by: Tom Lane, Alexander Korotkov
2023-01-08 22:51:43 +03:00
Alexander Korotkov eb5c4e953b Extract the multiplier for CPU process cost of index page into a macro
B-tree, GiST and SP-GiST all charge 50.0 * cpu_operator_cost for processing
an index page.  Extract this to a macro to avoid repeated magic numbers.

Discussion: https://mail.google.com/mail/u/0/?ik=a20b091faa&view=om&permmsgid=msg-f%3A1751459697261369543
Author: Ronan Dunklau
2023-01-08 22:37:33 +03:00
Michael Paquier 33ab0a2a52 Fix typos in comments, code and documentation
While on it, newlines are removed from the end of two elog() strings.
The others are simple grammar mistakes.  One comment in pg_upgrade
referred incorrectly to sequences since a7e5457.

Author: Justin Pryzby
Discussion: https://postgr.es/m/20221230231257.GI1153@telsasoft.com
Backpatch-through: 11
2023-01-03 16:26:14 +09:00
Bruce Momjian c8e1ba736b Update copyright for 2023
Backpatch-through: 11
2023-01-02 15:00:37 -05:00
Tom Lane 2ceea5adb0 Accept "+infinity" in date and timestamp[tz] input.
The float and numeric types accept this variant spelling of
"infinity", so it seems like the datetime types should too.

Vik Fearing, some cosmetic mods by me

Discussion: https://postgr.es/m/d0bef637-2dbd-0a5d-e539-48243b6f6c5e@postgresfriends.org
2023-01-01 14:16:07 -05:00
Michael Paquier 7aa81c61ec Fix precision handling for some COERCE_SQL_SYNTAX functions
f193883 has been incorrectly setting up the precision used in the
timestamp values computed by the following functions:
- LOCALTIME
- LOCALTIMESTAMP
- CURRENT_TIME
- CURRENT_TIMESTAMP

Specifying an out-of-range precision for CURRENT_TIMESTAMP and
LOCALTIMESTAMP was raising a WARNING without adjusting the precision,
leading to a subsequent error.  LOCALTIME and CURRENT_TIME raised a
WARNING without an error, but the precision given to the internal
routines was still not correct, so let's be clean.
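
For instance (illustrative), an out-of-range precision now warns and is clamped
consistently for all four constructs:

    SELECT CURRENT_TIMESTAMP(7);   -- WARNING, precision reduced to the maximum of 6
    SELECT LOCALTIME(10);          -- likewise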

Ian has reported the problems in timestamp.c, while I have noticed the
ones in date.c.  Regression tests are added for all of them with
precisions high enough to provide coverage for the warnings, something
that went missing up to this commit.

Author: Ian Lawrence Barwick, Michael Paquier
Discussion: https://postgr.es/m/CAB8KJ=jQEnn9sYG+N752spt68wMrhmT-ocHCh4oeNmHF82QMWA@mail.gmail.com
2022-12-30 20:47:57 +09:00
Peter Eisentraut 1f605b82ba Change argument of appendBinaryStringInfo from char * to void *
There is some code that uses this function to assemble some kind of
packed binary layout, which requires a bunch of casts because of this.
Functions taking binary data plus length should take void * instead,
like memcpy() for example.

Discussion: https://www.postgresql.org/message-id/flat/a0086cfc-ff0f-2827-20fe-52b591d2666c%40enterprisedb.com
2022-12-30 11:05:09 +01:00
Peter Eisentraut 33a33f0ba4 Use appendStringInfoString instead of appendBinaryStringInfo where possible
For the jsonpath output, we don't need to squeeze out every bit of
performance, so instead use a more robust coding style.  There are
similar calls in jsonb.c, which we leave alone here since there is
indeed a performance impact for bulk exports.

Discussion: https://www.postgresql.org/message-id/flat/a0086cfc-ff0f-2827-20fe-52b591d2666c%40enterprisedb.com
2022-12-30 11:05:09 +01:00
Tom Lane a5434c5258 Remove new locale dependency in regproc regression test.
The modified error message for regcollationin failure includes
the database encoding, which it should've occurred to me is a
portability hazard for the regression tests.  Adjust the test
so the expected output doesn't include that.

In passing, fix a comment typo introduced in b8c0ffbd2.

Per buildfarm.
2022-12-27 13:06:42 -05:00
Tom Lane 3ea7329c9a Simplify the implementations of the to_reg* functions.
Given the soft-input-error feature, we can reduce these functions
to be just thin wrappers around a soft-error call of the
corresponding datatype input function.  This means less code and
more certainty that the to_reg* functions match the normal input
behavior.

Notably, it also means that they will accept numeric OID input,
which they didn't before.  It's not clear to me if that omission
had more than laziness behind it, but it doesn't seem like
something we need to work hard to preserve.
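
For example (OIDs shown are the well-known ones for pg_class and int4):

    SELECT to_regclass('1259'), to_regtype('23');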

Discussion: https://postgr.es/m/3910031.1672095600@sss.pgh.pa.us
2022-12-27 12:33:04 -05:00
Tom Lane 858e776c84 Convert the reg* input functions to report (most) errors softly.
This is not really complete, but it catches most cases of practical
interest.  The main omissions are:

* regtype, regprocedure, and regoperator parse type names by
calling the main grammar, so any grammar-detected syntax error
will still be a hard error.  Also, if one includes a type
modifier in such a type specification, errors detected by the
typmodin function will be hard errors.

* Lookup errors are handled just by passing missing_ok = true
to the relevant catalog lookup function.  Because we've used
quite a restrictive definition of "missing_ok", this means that
edge cases such as "the named schema exists, but you lack
USAGE permission on it" are still hard errors.

It would make sense to me to replace most/all missing_ok
parameters with an escontext parameter and then allow these
additional lookup failure cases to be trapped too.  But that's
a job for some other day.

Discussion: https://postgr.es/m/3342239.1671988406@sss.pgh.pa.us
2022-12-27 12:26:01 -05:00
Tom Lane 78212f2101 Convert tsqueryin and tsvectorin to report errors softly.
This is slightly tedious because the adjustments cascade through
a couple of levels of subroutines, but it's not very hard.
I chose to avoid changing function signatures more than absolutely
necessary, by passing the escontext pointer in existing structs
where possible.

tsquery's nuisance NOTICEs about empty queries are suppressed in
soft-error mode, since they're not errors and we surely don't want
them to be shown to the user anyway.  Maybe that whole behavior
should be reconsidered.

Discussion: https://postgr.es/m/3824377.1672076822@sss.pgh.pa.us
2022-12-27 12:00:31 -05:00
Tom Lane eb8312a22a Detect bad input for types xid, xid8, and cid.
Historically these input functions just called strtoul or strtoull
and returned the result, with no error detection whatever.  Upgrade
them to reject garbage input and out-of-range values, similarly to
our other numeric input routines.

To share the code for this with type oid, adjust the existing
"oidin_subr" to be agnostic about the SQL name of the type it is
handling, and move it to numutils.c; then clone it for 64-bit types.

Because the xid types previously accepted hex and octal input by
reason of calling strtoul[l] with third argument zero, I made the
common subroutine do that too, with the consequence that type oid
now also accepts hex and octal input.  In view of 6fcda9aba, that
seems like a good thing.
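
For example (illustrative):

    SELECT '0x10'::oid;     -- now accepted, yielding 16
    SELECT 'junk'::xid;     -- now rejected with an error instead of returning garbage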

While at it, simplify the existing over-complicated handling of
syntax errors from strtoul: we only need one ereturn not three.

Discussion: https://postgr.es/m/3526121.1672000729@sss.pgh.pa.us
2022-12-27 11:40:01 -05:00
Tom Lane 442e25d248 Convert enum_in() to report errors softly.
I missed this in my initial survey, probably because I examined
the contents of pg_type in the postgres database, which lacks
any enumerated types.

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-25 14:32:30 -05:00
Andrew Dunstan e37fe1db6e Convert jsonpath's input function to report errors softly
Reviewed by Tom Lane

Discussion: https://postgr.es/m/a8dc5700-c341-3ba8-0507-cc09881e6200@dunslane.net
2022-12-24 15:21:20 -05:00
Tom Lane 780ec9f1b2 Make the numeric-OID cases of regprocin and friends be non-throwing.
While at it, use a common subroutine already.

This doesn't move the needle very far in terms of making these
functions non-throwing; the only case we're now able to trap is
numeric-OID-is-out-of-range.  Still, it seems like a pretty
non-controversial step in that direction.
2022-12-24 15:01:21 -05:00
David Rowley ed1a88ddac Allow window functions to adjust their frameOptions
WindowFuncs such as row_number() don't care whether they're called with ROWS
UNBOUNDED PRECEDING AND CURRENT ROW or with RANGE UNBOUNDED PRECEDING AND
CURRENT ROW.  The latter is less efficient as the RANGE option requires
that the executor check for peer rows, so using the ROW option instead
would cause less overhead.  Because RANGE is part of the default frame
options for WindowClauses, it means WindowAgg is, by default, working much
harder than it needs to for window functions where the ROWS / RANGE option
has no effect on the window function's result.

On a test query from the discussion thread, a performance improvement of
344% was seen by using ROWS instead of RANGE.

Here we add a new support function node type to allow support functions to
be called for window functions so that the most optimal version of the
frame options can be set.  The planner has been adjusted so that the frame
options are changed only if all window functions sharing the same window
clause agree on what the optimized frame options are.

Here we give the ability for row_number(), rank(), dense_rank(),
percent_rank(), cume_dist() and ntile() to alter their WindowClause's
frameOptions.

Reviewed-by: Vik Fearing, Erwin Brandstetter, Zhihong Yu
Discussion: https://postgr.es/m/CAGHENJ7LBBszxS+SkWWFVnBmOT2oVsBhDMB1DFrgerCeYa_DyA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvohAKEtTXxq7Pc-ic2dKT8oZfbRKeEJP64M0B6+S88z+A@mail.gmail.com
2022-12-23 12:43:52 +13:00
Andrew Dunstan 33dd895ef3 Introduce float4in_internal
This is the guts of float4in, callable as a routine to input floats,
which will be useful in an upcoming patch for allowing soft errors in
the seg module's input function.

A similar operation was performed some years ago for float8in in
commit 50861cd683.

Reviewed by Tom Lane

Discussion: https://postgr.es/m/cee4e426-d014-c0b7-aa22-a659f2cd9130@dunslane.net
2022-12-21 16:55:52 -05:00
Michael Paquier 22e3b55805 Switch some system functions to use get_call_result_type()
This shaves some code by replacing the combinations of
CreateTemplateTupleDesc()/TupleDescInitEntry() hardcoding a mapping of
the attributes listed in pg_proc.dat by get_call_result_type() to build
the TupleDesc needed for the rows generated.

get_call_result_type() is more expensive than the former style, but this
removes some duplication with the lists of OUT parameters (pg_proc.dat
and the attributes hardcoded in these code paths).  This is applied to
functions that are not considered as critical (aka that could be called
repeatedly for monitoring purposes).

Author: Bharath Rupireddy
Reviewed-by: Robert Haas, Álvaro Herrera, Tom Lane, Michael Paquier
Discussion: https://postgr.es/m/CALj2ACV23HW5HP5hFjd89FNS-z5X8r2jNXdMXcpN2BgTtKd87w@mail.gmail.com
2022-12-21 10:11:22 +09:00
Andrew Dunstan 8284cf5f74 Add copyright notices to meson files
Discussion: https://postgr.es/m/222b43a5-2fb3-2c1b-9cd0-375d376c8246@dunslane.net
2022-12-20 07:54:39 -05:00
Robert Haas 10ea0f924a Expose some information about backend subxact status.
A new function pg_stat_get_backend_subxact() can be used to get
information about the number of subtransactions in the cache of
a particular backend and whether that cache has overflowed. This
can be useful for tracking down performance problems that can
result from overflowed snapshots.
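
A usage sketch (assuming the function takes a backend ID, in the style of the
other pg_stat_get_backend_*() functions):

    SELECT id, pg_stat_get_backend_subxact(id)
    FROM pg_stat_get_backend_idset() AS id;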

Dilip Kumar, reviewed by Zhihong Yu, Nikolay Samokhvalov,
Justin Pryzby, Nathan Bossart, Ashutosh Sharma, Julien
Rouhaud. Additional design comments from Andres Freund,
Tom Lane, Bruce Momjian, and David G. Johnston.

Discussion: http://postgr.es/m/CAFiTN-ut0uwkRJDQJeDPXpVyTWD46m3gt3JDToE02hTfONEN=Q@mail.gmail.com
2022-12-19 14:43:09 -05:00
Tom Lane c4939f1215 Clean up dubious error handling in wellformed_xml().
This ancient bit of code was summarily trapping any ereport longjmp
whatsoever and assuming that it must represent an invalid-XML report.
It's not really appropriate to handle OOM-like situations that way:
maybe the input is valid or maybe not, but we couldn't find out.
And it'd be a seriously bad idea to ignore, say, a query cancel
error that way.  (Perhaps that can't happen because there is no
CHECK_FOR_INTERRUPTS anywhere within xml_parse, but even if that's
true today it's obviously a very fragile assumption.)

But in the wake of the previous commit, we can drop the PG_TRY
here altogether, and use the soft error mechanism to catch only
the kinds of errors that are legitimate to treat as invalid-XML.

(This is our first use of the soft error mechanism for something
not directly related to a datatype input function.  It won't be
the last.)

xml_is_document can be converted in the same way.  That one is
not actively broken, because it was checking specifically for
ERRCODE_INVALID_XML_DOCUMENT rather than trapping everything;
but the code is still shorter and probably faster this way.

Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
2022-12-16 11:10:40 -05:00
Tom Lane 37bef842f5 Convert xml_in to report errors softly.
The key idea here is that xml_parse must distinguish hard errors
from soft errors.  We want to throw a hard error for libxml
initialization failures: those might be out-of-memory, or something
else, but in any case they are not the fault of the input string.
If we get to the point of parsing the input, and something goes
wrong, we can fairly consider that to mean bad input.

One thing that arguably does mean bad input, but I didn't trouble
to handle softly, is encoding conversion failure while converting
the server encoding to UTF8.  This might be something to improve
later, but it seems like a pretty low-probability scenario.

Discussion: https://postgr.es/m/3564577.1671142683@sss.pgh.pa.us
2022-12-16 11:10:40 -05:00
Tom Lane d35a1af468 Convert range_in and multirange_in to report errors softly.
This is mostly straightforward, except that if the range type
has a canonical function, that might throw an error during range
input.  (Such errors probably only occur for edge cases: in the
in-core canonical functions, it happens only if a bound has the
maximum valid value for the underlying type.)  Hence, this patch
extends the soft-error regime to allow canonical functions to
return errors softly as well.  Extensions implementing range
canonical functions will need modification anyway because of the
API change for range_serialize(); while at it, they might want
to do something similar to what's been done here in the in-core
canonical functions.

Discussion: https://postgr.es/m/3284599.1671075185@sss.pgh.pa.us
2022-12-15 12:18:36 -05:00
Peter Eisentraut 75f49221c2 Static assertions cleanup
Because we added StaticAssertStmt() first before StaticAssertDecl(),
some uses as well as the instructions in c.h are now a bit backwards
from the "native" way static assertions are meant to be used in C.
This updates the guidance and moves some static assertions to better
places.

Specifically, since the addition of StaticAssertDecl(), we can put
static assertions at the file level.  This moves a number of static
assertions out of function bodies, where they might have been stuck
out of necessity, to perhaps better places at the file level or in
header files.

Also, when the static assertion appears in a position where a
declaration is allowed, then using StaticAssertDecl() is more native
than StaticAssertStmt().

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
2022-12-15 10:10:32 +01:00
Tom Lane 3b9d2deb67 Convert a few more datatype input functions to report errors softly.
Convert the remaining string-category input functions
(bpcharin, varcharin, byteain) to the new style.

Discussion: https://postgr.es/m/3038346.1671060258@sss.pgh.pa.us
2022-12-14 19:42:05 -05:00
Tom Lane 90161dad4d Convert a few more datatype input functions to report errors softly.
Convert cash_in and uuid_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 18:03:11 -05:00
Tom Lane 47f3f97fcd Convert a few more datatype input functions to report errors softly.
Convert assorted internal-ish datatypes, namely aclitemin,
int2vectorin, oidin, oidvectorin, pg_lsn_in, pg_snapshot_in,
and tidin to the new style.

(Some others you might expect to find in this group, such as
cidin and xidin, need no changes because they never throw
errors at all.  That seems a little cheesy ... but it is not in
the charter of this patch series to add new error conditions.)

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 17:50:24 -05:00
Tom Lane 332741e739 Convert the geometric input functions to report errors softly.
Convert box_in, circle_in, line_in, lseg_in, path_in, point_in,
and poly_in to the new style.

line_in still throws hard errors for overflows/underflows that can occur
when the input is specified as two points rather than in the canonical
"Ax + By + C = 0" style.  I'm not too concerned about that: it won't be
reached in normal dump/restore cases, and it's fairly debatable that
such conversion should ever have been made part of a type input function
in the first place.  But in any case, I don't want to extend the soft
error conventions into float.h without more discussion than this patch
has had.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 16:10:20 -05:00
Tom Lane 17407a8eaa Convert a few more datatype input functions to report errors softly.
Convert bit_in, varbit_in, inet_in, cidr_in, macaddr_in, and
macaddr8_in to the new style.

Amul Sul, minor mods by me

Discussion: https://postgr.es/m/CAAJ_b97KeDWUdpTKGOaFYPv0OicjOu6EW+QYWj-Ywrgj_aEy1g@mail.gmail.com
2022-12-14 13:22:08 -05:00
Peter Eisentraut 6fcda9aba8 Non-decimal integer literals
Add support for hexadecimal, octal, and binary integer literals:

    0x42F
    0o273
    0b100101

per SQL:202x draft.

This adds support in the lexer as well as in the integer type input
functions.

Reviewed-by: John Naylor <john.naylor@enterprisedb.com>
Reviewed-by: Zhihong Yu <zyu@yugabyte.com>
Reviewed-by: David Rowley <dgrowleyml@gmail.com>
Reviewed-by: Dean Rasheed <dean.a.rasheed@gmail.com>
Discussion: https://www.postgresql.org/message-id/flat/b239564c-cad0-b23e-c57e-166d883cb97d@enterprisedb.com
2022-12-14 06:17:07 +01:00
Jeff Davis 60684dd834 Add grantable MAINTAIN privilege and pg_maintain role.
Allows VACUUM, ANALYZE, REINDEX, REFRESH MATERIALIZED VIEW, CLUSTER,
and LOCK TABLE.

Effectively reverts 4441fc704d. Instead of creating separate
privileges for VACUUM, ANALYZE, and other maintenance commands, group
them together under a single MAINTAIN privilege.

Author: Nathan Bossart
Discussion: https://postgr.es/m/20221212210136.GA449764@nathanxps13
Discussion: https://postgr.es/m/45224.1670476523@sss.pgh.pa.us
2022-12-13 17:33:28 -08:00
Tom Lane b0feda79fd Fix jsonb subscripting to cope with toasted subscript values.
jsonb_get_element() was incautious enough to use VARDATA() and
VARSIZE() directly on an arbitrary text Datum.  That of course
fails if the Datum is short-header, compressed, or out-of-line.
The typical result would be failing to match any element of a
jsonb object, though matching the wrong one seems possible as well.

setPathObject() was slightly brighter, in that it used VARDATA_ANY
and VARSIZE_ANY_EXHDR, but that only kept it out of trouble for
short-header Datums.  push_path() had the same issue.  This could
result in faulty subscripted insertions, though keys long enough to
cause a problem are likely rare in the wild.

Having seen these, I looked around for unsafe usages in the rest
of the adt/json* files.  There are a couple of places where it's not
immediately obvious that the Datum can't be compressed or out-of-line,
so I added pg_detoast_datum_packed() to cope if it is.  Also, remove
some other usages of VARDATA/VARSIZE on Datums we just extracted from
a text array.  Those aren't actively broken, but they will become so
if we ever start allowing short-header array elements, which does not
seem like a terribly unreasonable thing to do.  In any case they are
not great coding examples, and they could also do with comments
pointing out that we're assuming we don't need pg_detoast_datum_packed.
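
A minimal sketch of the two access patterns, assuming postgres.h and
fmgr.h and a text Datum named "datum" (the variable names are
illustrative, not the actual jsonb code):

    /* unsafe: assumes a full 4-byte header, uncompressed and inline */
    text       *t = (text *) DatumGetPointer(datum);
    char       *data = VARDATA(t);
    int         len = VARSIZE(t) - VARHDRSZ;

    /* safe: detoast first (keeping a possible short header), then use
     * the _ANY variants that cope with either header form */
    text       *t2 = (text *) pg_detoast_datum_packed((struct varlena *) DatumGetPointer(datum));
    char       *data2 = VARDATA_ANY(t2);
    int         len2 = VARSIZE_ANY_EXHDR(t2);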

Per report from exe-dealer@yandex.ru.  Patch by me, but thanks to
David Johnston for initial investigation.  Back-patch to v14 where
jsonb subscripting was introduced.

Discussion: https://postgr.es/m/205321670615953@mail.yandex.ru
2022-12-12 16:17:54 -05:00
Tom Lane b8c0ffbd2c Convert domain_in to report errors softly.
This is straightforward as far as it goes.  However, it does not
attempt to trap errors occurring during the execution of domain
CHECK constraints.  Since those are general user-defined
expressions, the only way to do that would involve starting up a
subtransaction for each check.  Of course the entire point of
the soft-errors feature is to not need subtransactions, so that
would be self-defeating.  For now, we'll rely on the assumption
that domain checks are written to avoid throwing errors.

Discussion: https://postgr.es/m/1181028.1670635727@sss.pgh.pa.us
2022-12-11 12:56:54 -05:00
Tom Lane c60c9badba Convert json_in and jsonb_in to report errors softly.
This requires a bit of further infrastructure-extension to allow
trapping errors reported by numeric_in and pg_unicode_to_server,
but otherwise it's pretty straightforward.

In the case of jsonb_in, we are only capturing errors reported
during the initial "parse" phase.  The value-construction phase
(JsonbValueToJsonb) can also throw errors if assorted implementation
limits are exceeded.  We should improve that, but it seems like a
separable project.

Andrew Dunstan and Tom Lane

Discussion: https://postgr.es/m/3bac9841-fe07-713d-fa42-606c225567d6@dunslane.net
2022-12-11 11:28:15 -05:00
Tom Lane 50428a301d Change JsonSemAction to allow non-throw error reporting.
Formerly, semantic action functions for the JSON parser returned void,
so that there was no way for them to affect the parser's behavior.
That means in particular that they can't force an error exit except by
longjmp'ing.  That won't do in the context of our project to make input
functions return errors softly.  Hence, change them to return the same
JsonParseErrorType enum value as the parser itself uses.  If an action
function returns anything besides JSON_SUCCESS, the parse is abandoned
and that error code is returned.

Action functions can thus easily return the same error conditions that
the parser already knows about.  As an escape hatch for expansion, also
invent a code JSON_SEM_ACTION_FAILED that the core parser does not know
the exact meaning of.  When returning this code, an action function
must use some out-of-band mechanism for reporting the error details.
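
A rough sketch of what an action function can now look like; the state
type and the length limit are hypothetical, only the JsonParseErrorType
values come from the parser:

    static JsonParseErrorType
    my_object_field_start(void *state, char *fname, bool isnull)
    {
        MyParseState *ps = (MyParseState *) state;  /* hypothetical state */

        if (strlen(fname) > ps->max_key_len)
        {
            /* stash details out of band, then abandon the parse */
            ps->failed_key = pstrdup(fname);
            return JSON_SEM_ACTION_FAILED;
        }
        return JSON_SUCCESS;
    }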

This commit simply makes the API change and causes all the existing
action functions to return JSON_SUCCESS, so that there is no actual
change in behavior here.  This is long enough and boring enough that
it seemed best to commit it separately from the changes that make
real use of the new mechanism.

In passing, remove a duplicate assignment of
transform_string_values_scalar.

Discussion: https://postgr.es/m/1436686.1670701118@sss.pgh.pa.us
2022-12-11 10:39:05 -05:00
Tom Lane d02ef65bce Standardize error reports in unimplemented I/O functions.
We chose a specific wording of the not-implemented errors for
pseudotype I/O functions and other cases where there's little
value in implementing input and/or output.  gtsvectorin never
got that memo though, nor did most of contrib.  Make these all
fall in line, mostly because I'm a neatnik but also to remove
unnecessary translatable strings.
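
For reference, the standardized form is roughly the following (shown here
for gtsvectorin; the exact wording is what the existing pseudotype I/O
functions already use):

    Datum
    gtsvectorin(PG_FUNCTION_ARGS)
    {
        ereport(ERROR,
                (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),
                 errmsg("cannot accept a value of type %s", "gtsvector")));

        PG_RETURN_VOID();       /* keep compiler quiet */
    }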

gbtreekey_in needs a bit of extra love since it supports
multiple SQL types.  Sadly, gbtreekey_out doesn't have the
ability to do that, but I think it's unreachable anyway.

Noted while surveying datatype input functions to see what we
have left to fix.
2022-12-10 18:26:43 -05:00
Tom Lane e730718072 Use the macro, not handwritten code, to construct anymultirange_in().
Apparently anymultirange_in was written before we converted all
these pseudotype input functions to use a common macro, and it didn't
get fixed before committing.  Sloppy merging probably explains its
unintuitive ordering, too, so rearrange.

Noted while surveying datatype input functions to see what we
have left to fix.  I'm inclined to leave the pseudotypes as
throwing hard errors, because it's difficult to see a reason why
anyone would need something else.  But in any case, if we want
to change that, we shouldn't have to change multiple copies of
the code.
2022-12-10 17:22:16 -05:00
Michael Paquier 66dcb09246 Fix macro definitions in pgstatfuncs.c
Buildfarm member wrasse has been complaining about empty declarations
as an effect of 8018ffb and 83a1a1b due to extra semicolons.

While at it, also remove the trailing backslash of the macro definitions,
which caused more lines than necessary to be consumed, per comment from
Tom Lane.

Reported-by: Tom Lane, and buildfarm member wrasse
Author: Nathan Bossart, Michael Paquier
Discussion: https://postgr.es/m/1188769.1670640236@sss.pgh.pa.us
2022-12-10 13:28:02 +09:00
Tom Lane 4dd687502d Restructure soft-error handling in formatting.c.
Replace the error trapping scheme introduced in 5bc450629 with our
shiny new errsave/ereturn mechanism.  This doesn't have any real
functional impact (although I think that the new coding is able
to report a few more errors softly than v15 did).  And I doubt
there's any measurable performance difference either.  But this
gets rid of an ad-hoc, one-of-a-kind design in favor of a mechanism
that will be widely used going forward, so it should be a net win
for code readability.
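
For orientation, a sketch of the errsave() side of the mechanism in a
helper that cannot simply return a Datum (the type, function, and check
are invented; errsave() and SOFT_ERROR_OCCURRED() are the real macros):

    static bool
    parse_widget(const char *str, Widget *out, Node *escontext)
    {
        if (!looks_like_widget(str))        /* hypothetical check */
        {
            errsave(escontext,
                    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                     errmsg("invalid widget syntax: \"%s\"", str)));
            return false;
        }
        /* ... fill *out ... */
        return true;
    }

Callers can then test the return value, or SOFT_ERROR_OCCURRED(escontext),
instead of relying on a thrown error.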

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 20:15:56 -05:00
Tom Lane c60488b474 Convert datetime input functions to use "soft" error reporting.
This patch converts the input functions for date, time, timetz,
timestamp, timestamptz, and interval to the new soft-error style.
There's some related stuff in formatting.c that remains to be
cleaned up, but that seems like a separable project.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 16:07:49 -05:00
Tom Lane 2661469d86 Allow DateTimeParseError to handle bad-timezone error messages.
Pay down some ancient technical debt (dating to commit 022fd9966):
fix a couple of places in datetime parsing that were throwing
ereport's immediately instead of returning a DTERR code that could be
interpreted by DateTimeParseError.  The reason for that was that there
was no mechanism for passing any auxiliary data (such as a zone name)
to DateTimeParseError, and these errors seemed to really need it.
Up to now it didn't matter that much just where the error got thrown,
but now we'd like to have a hard policy that datetime parse errors
get thrown from just the one place.

Hence, invent a "DateTimeErrorExtra" struct that can be used to
carry any extra values needed for specific DTERR codes.  Perhaps
in the future somebody will be motivated to use this to improve
the specificity of other DateTimeParseError messages, but for now
just deal with the timezone-error cases.

This is on the way to making the datetime input functions report
parse errors softly; but it's really an independent change, so
commit separately.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 13:30:47 -05:00
Tom Lane bad5116957 Const-ify a couple of datetime parsing subroutines.
More could be done in this line, but I just grabbed some low-hanging
fruit.  Principal objective was to remove the need for several ugly
unconstify() usages in formatting.c.
2022-12-09 10:43:45 -05:00
Tom Lane ccff2d20ed Convert a few datatype input functions to use "soft" error reporting.
This patch converts the input functions for bool, int2, int4, int8,
float4, float8, numeric, and contrib/cube to the new soft-error style.
array_in and record_in are also converted.  There's lots more to do,
but this is enough to provide proof-of-concept that the soft-error
API is usable, as well as reference examples for how to convert
input functions.
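
A very condensed sketch of the conversion pattern; the type and function
here are invented, but ereturn() and the escontext handling follow the
new API:

    Datum
    mytype_in(PG_FUNCTION_ARGS)
    {
        char       *str = PG_GETARG_CSTRING(0);
        Node       *escontext = fcinfo->context;
        char       *endptr;
        long        val;

        errno = 0;
        val = strtol(str, &endptr, 10);
        if (errno == ERANGE || *endptr != '\0' ||
            val < PG_INT32_MIN || val > PG_INT32_MAX)
            ereturn(escontext, (Datum) 0,
                    (errcode(ERRCODE_INVALID_TEXT_REPRESENTATION),
                     errmsg("invalid input syntax for type %s: \"%s\"",
                            "mytype", str)));

        PG_RETURN_INT32((int32) val);
    }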

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:14:53 -05:00
Tom Lane 1939d26282 Add test scaffolding for soft error reporting from input functions.
pg_input_is_valid() returns boolean, while pg_input_error_message()
returns the primary error message if the input is bad, or NULL
if the input is OK.  The main reason for having two functions is
so that we can test both the details-wanted and the no-details-wanted
code paths.

Although these are primarily designed with testing in mind,
it could well be that they'll be useful to end users as well.

This patch is mostly by me, but it owes very substantial debt to
earlier work by Nikita Glukhov, Andrew Dunstan, and Amul Sul.
Thanks to Andres Freund for review.

Discussion: https://postgr.es/m/3bbbb0df-7382-bf87-9737-340ba096e034@postgrespro.ru
2022-12-09 10:08:44 -05:00
Alexander Korotkov 096dd80f3c Add USER SET parameter values for pg_db_role_setting
The USER SET flag specifies that the variable should be set on behalf of an
ordinary role.  That lets ordinary roles set placeholder variables whose
permission requirements are not known yet.  Such a value would not be used if
the variable later turns out to require superuser privileges.

The new flags are stored in the pg_db_role_setting.setuser array.  Catversion
is bumped.

This commit is inspired by the previous work by Steve Chavez.

Discussion: https://postgr.es/m/CAPpHfdsLd6E--epnGqXENqLP6dLwuNZrPMcNYb3wJ87WR7UBOQ%40mail.gmail.com
Author: Alexander Korotkov, Steve Chavez
Reviewed-by: Pavel Borisov, Steve Chavez
2022-12-09 13:12:20 +03:00
Michael Paquier 8018ffbf58 Generate pg_stat_get*() functions for databases using macros
The same code pattern is repeated 21 times for int64 counters (0 for
missing entry) and 5 times for doubles (0 for missing entry) on database
entries.  This code is switched to use macros for the basic code
instead, shaving a few hundred lines of originally-duplicated code
patterns.  The function names remain the same, but some fields of
PgStat_StatDBEntry have to be renamed to cope with the new style.
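
The generated functions all share roughly this shape (the macro below is
a simplified illustration, not necessarily the exact definition used in
pgstatfuncs.c):

    #define PG_STAT_GET_DBENTRY_INT64(stat)                             \
    Datum                                                               \
    CppConcat(pg_stat_get_db_,stat)(PG_FUNCTION_ARGS)                   \
    {                                                                   \
        Oid         dbid = PG_GETARG_OID(0);                            \
        int64       result;                                             \
        PgStat_StatDBEntry *dbentry = pgstat_fetch_stat_dbentry(dbid);  \
                                                                        \
        if (dbentry == NULL)                                            \
            result = 0;                                                 \
        else                                                            \
            result = (int64) (dbentry->stat);                           \
                                                                        \
        PG_RETURN_INT64(result);                                        \
    }

so that, for example, PG_STAT_GET_DBENTRY_INT64(xact_commit) defines
pg_stat_get_db_xact_commit().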

This is in the same spirit as 83a1a1b.

Author: Michael Paquier
Reviewed-by: Nathan Bossart, Bertrand Drouvot
Discussion: https://postgr.es/m/Y46stlxQ2LQE20Na@paquier.xyz
2022-12-07 09:11:48 +09:00
Alvaro Herrera a61b1f7482
Rework query relation permission checking
Currently, information about the permissions to be checked on relations
mentioned in a query is stored in their range table entries.  So the
executor must scan the entire range table looking for relations that
need to have permissions checked.  This can make the permission checking
part of the executor initialization needlessly expensive when many
inheritance children are present in the range table.  While the
permissions need not be checked on the individual child relations, the
executor still must visit every range table entry to filter them out.

This commit moves the permission checking information out of the range
table entries into a new plan node called RTEPermissionInfo.  Every
top-level (inheritance "root") RTE_RELATION entry in the range table
gets one and a list of those is maintained alongside the range table.
This new list is initialized by the parser when initializing the range
table.  The rewriter can add more entries to it as rules/views are
expanded.  Finally, the planner combines the lists of the individual
subqueries into one flat list that is passed to the executor for
checking.

To make it quick to find the RTEPermissionInfo entry belonging to a
given relation, RangeTblEntry gets a new Index field 'perminfoindex'
that stores the corresponding RTEPermissionInfo's index in the query's
list of the latter.

ExecutorCheckPerms_hook has gained another List * argument; the
signature is now:
typedef bool (*ExecutorCheckPerms_hook_type) (List *rangeTable,
					      List *rtePermInfos,
					      bool ereport_on_violation);
The first argument is no longer used by any in-core callers of the hook,
but we leave it in place because out-of-core implementations may still
rely on it.  Implementations should likely scan the rtePermInfos list to
determine which operations to allow or deny.
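
As a rough illustration (assuming the relevant headers), an out-of-core
implementation might now look something like this; the policy and the
is_blocked_relation() helper are made up, while the RTEPermissionInfo
fields are the real ones:

    static bool
    my_executor_check_perms(List *rangeTable, List *rtePermInfos,
                            bool ereport_on_violation)
    {
        ListCell   *lc;

        foreach(lc, rtePermInfos)
        {
            RTEPermissionInfo *perminfo = lfirst_node(RTEPermissionInfo, lc);

            /* e.g. forbid anything beyond SELECT on flagged relations */
            if (is_blocked_relation(perminfo->relid) &&
                (perminfo->requiredPerms & ~ACL_SELECT) != 0)
            {
                if (ereport_on_violation)
                    aclcheck_error(ACLCHECK_NO_PRIV, OBJECT_TABLE,
                                   get_rel_name(perminfo->relid));
                return false;
            }
        }
        return true;
    }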

Author: Amit Langote <amitlangote09@gmail.com>
Discussion: https://postgr.es/m/CA+HiwqGjJDmUhDSfv-U2qhKJjt9ST7Xh9JXC_irsAQ1TAUsJYg@mail.gmail.com
2022-12-06 16:09:24 +01:00
Michael Paquier 83a1a1b566 Generate pg_stat_get*() functions for tables using macros
The same code pattern is repeated 17 times for int64 counters (0 for
missing entry) and 5 times for timestamps (NULL for missing entry) on
table entries.  This code is switched to use a macro for the basic code
instead, shaving a few hundred lines of originally-duplicated code.  The
function names remain the same, but some fields of PgStat_StatTabEntry
have to be renamed to cope with the new style.

Author: Bertrand Drouvot
Reviewed-by: Nathan Bossart
Discussion: https://postgr.es/m/20221204173207.GA2669116@nathanxps13
2022-12-06 10:46:35 +09:00
David Rowley 8692f6644e Fix thinko introduced in 6b423ec67
As pointed out by Dean Rasheed, we really should be using tmp >
-(PG_INTNN_MIN / 10) rather than tmp > (PG_INTNN_MAX / 10) for checking
for overflows in the accumulation in the pg_strtointNN functions.  This
does happen to be the same number when dividing by 10, but there is a
pending patch which adds other bases and this is not the same number if we
were to divide by 2 rather than 10, for example.  If the base-2 parsing
were to follow this example, we could accidentally think a string
containing the value of PG_INT32_MIN was an overflow in pg_strtoint32.
Clearly that shouldn't overflow.

This does not fix any actual live bugs, only some bad examples of overflow
checks for future bases.

Reported-by: Dean Rasheed
Discussion: https://postgr.es/m/CAEZATCVEtwfhdm-K-etZYFB0=qsR0nT6qXta_W+GQx4RYph1dg@mail.gmail.com
2022-12-05 11:55:05 +13:00
Tom Lane 92c4dafe1e Re-pgindent a few files.
Just because I'm a neatnik, and I'm currently working on
code in this area.  It annoys me to not be able to pgindent
my patches without working around unrelated changes.
2022-12-04 14:25:53 -05:00
David Rowley 6b423ec677 Improve performance of pg_strtointNN functions
Experiments have shown that modern versions of both gcc and clang are
unable to fully optimize the multiplication by 10 that we're doing in the
pg_strtointNN functions.  Both compilers seem to be making use of "imul",
which is not the most efficient way to multiply by 10.  This seems to be
due to the overflow checking that we're doing.  Without the overflow
checks, both those compilers switch to a more efficient method of
multiplying by 10.  In absence of overflow concern, integer multiplication
by 10 can be done by bit-shifting left 3 places to multiply by 8 and then
adding the original value twice.

To allow compilers this flexibility, here we adjust the code so that we
accumulate the number as an unsigned version of the type and remove the
use of pg_mul_sNN_overflow() and pg_sub_sNN_overflow().  The overflow
checking can be done simply by checking if the accumulated value has gone
beyond a 10th of the maximum *signed* value for the given type.  If it has
then the accumulation of the next digit will cause an overflow.  After
this is done, we do a final overflow check before converting the unsigned
version of the number back to its signed counterpart.
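
A condensed standalone sketch of the idea for int32 (base 10 only, with
no whitespace or error-message handling, so much simpler than the real
pg_strtoint32):

    #include <ctype.h>
    #include <stdbool.h>
    #include <stdint.h>

    static bool
    strtoint32_sketch(const char *s, int32_t *result)
    {
        uint32_t    tmp = 0;
        bool        neg = false;

        if (*s == '-')
        {
            neg = true;
            s++;
        }
        else if (*s == '+')
            s++;

        while (isdigit((unsigned char) *s))
        {
            /* beyond a tenth of |INT32_MIN|, another digit must go out of range */
            if (tmp > ((uint32_t) INT32_MAX + 1) / 10)
                return false;
            tmp = tmp * 10 + (uint32_t) (*s++ - '0');
        }

        /* final check: fold the unsigned accumulator back into the signed range */
        if (neg)
        {
            if (tmp > (uint32_t) INT32_MAX + 1)
                return false;
            *result = (tmp == (uint32_t) INT32_MAX + 1) ? INT32_MIN : -(int32_t) tmp;
        }
        else
        {
            if (tmp > (uint32_t) INT32_MAX)
                return false;
            *result = (int32_t) tmp;
        }
        return true;
    }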

Testing has shown about an 8% speedup of a COPY into a table containing 2
INT columns.

Author: David Rowley, Dean Rasheed
Discussion: https://postgr.es/m/CAApHDvrL6_+wKgPqRHr7gH_6xy3hXM6a3QCsZ5ForurjDFfenA@mail.gmail.com
Discussion: https://postgr.es/m/CAApHDvrdYByjfj-=WbmVNFgmVZg88-dE7heukw8p55aJ+W=qxQ@mail.gmail.com
2022-12-04 16:18:18 +13:00