Fix conversion of JSON strings to JSON output columns in json_to_record().

json_to_record(), when an output column is declared as type json or jsonb,
should emit the corresponding field of the input JSON object.  But it got
this slightly wrong when the field is just a string literal: it failed to
escape the contents of the string.  That typically resulted in syntax
errors if the string contained any double quotes or backslashes.

jsonb_to_record() handles such cases correctly, but I added corresponding
test cases for it too, to prevent future backsliding.
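
For instance, one of the new test cases passes a string field whose contents
are themselves a small JSON document; with the fix, json_to_record() returns
it as a correctly escaped json literal:

select * from json_to_record('{"out": "{\"key\": 1}"}') as x(out json);
      out
----------------
 "{\"key\": 1}"
(1 row)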

Improve the documentation, as it provided only a very hand-wavy
description of the conversion rules used by these functions.

Per bug report from Robert Vollmert.  Back-patch to v10 where the
error was introduced (by commit cf35346e8).

Note that PG 9.4 - 9.6 also get this case wrong, but differently so:
they feed the de-escaped contents of the string literal to json[b]_in.
That behavior is less obviously wrong, so possibly it's being depended on
in the field, so I won't risk trying to make the older branches behave
like the newer ones.

Discussion: https://postgr.es/m/D6921B37-BD8E-4664-8D5F-DB3525765DCD@vllmrt.net
Tom Lane 2019-06-11 13:33:08 -04:00
parent c015560176
commit 1c9034579c
6 changed files with 167 additions and 43 deletions

doc/src/sgml/func.sgml

@@ -12006,29 +12006,72 @@ table2-mapping
   </note>
   <note>
    <para>
-    While the examples for the functions
-    <function>json_populate_record</function>,
-    <function>json_populate_recordset</function>,
-    <function>json_to_record</function> and
-    <function>json_to_recordset</function> use constants, the typical use
-    would be to reference a table in the <literal>FROM</literal> clause
-    and use one of its <type>json</type> or <type>jsonb</type> columns
-    as an argument to the function. Extracted key values can then be
-    referenced in other parts of the query, like <literal>WHERE</literal>
-    clauses and target lists. Extracting multiple values in this
-    way can improve performance over extracting them separately with
-    per-key operators.
+    The functions
+    <function>json[b]_populate_record</function>,
+    <function>json[b]_populate_recordset</function>,
+    <function>json[b]_to_record</function> and
+    <function>json[b]_to_recordset</function>
+    operate on a JSON object, or array of objects, and extract the values
+    associated with keys whose names match column names of the output row
+    type.
+    Object fields that do not correspond to any output column name are
+    ignored, and output columns that do not match any object field will be
+    filled with nulls.
+    To convert a JSON value to the SQL type of an output column, the
+    following rules are applied in sequence:
+    <itemizedlist spacing="compact">
+     <listitem>
+      <para>
+       A JSON null value is converted to a SQL null in all cases.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       If the output column is of type <type>json</type>
+       or <type>jsonb</type>, the JSON value is just reproduced exactly.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       If the output column is a composite (row) type, and the JSON value is
+       a JSON object, the fields of the object are converted to columns of
+       the output row type by recursive application of these rules.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Likewise, if the output column is an array type and the JSON value is
+       a JSON array, the elements of the JSON array are converted to elements
+       of the output array by recursive application of these rules.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Otherwise, if the JSON value is a string literal, the contents of the
+       string are fed to the input conversion function for the column's data
+       type.
+      </para>
+     </listitem>
+     <listitem>
+      <para>
+       Otherwise, the ordinary text representation of the JSON value is fed
+       to the input conversion function for the column's data type.
+      </para>
+     </listitem>
+    </itemizedlist>
    </para>
    <para>
-    JSON keys are matched to identical column names in the target
-    row type. JSON type coercion for these functions is <quote>best
-    effort</quote> and may not result in desired values for some types.
-    JSON fields that do not appear in the target row type will be
-    omitted from the output, and target columns that do not match any
-    JSON field will simply be NULL.
+    While the examples for these functions use constants, the typical use
+    would be to reference a table in the <literal>FROM</literal> clause
+    and use one of its <type>json</type> or <type>jsonb</type> columns
+    as an argument to the function. Extracted key values can then be
+    referenced in other parts of the query, like <literal>WHERE</literal>
+    clauses and target lists. Extracting multiple values in this
+    way can improve performance over extracting them separately with
+    per-key operators.
    </para>
   </note>
   <note>
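
As a quick illustration of the conversion rules spelled out above (this query
is not part of the patch; the output shown is what the patched functions are
expected to produce): a JSON null becomes a SQL null, an object aimed at a
json column is reproduced as-is, a string literal's contents are fed to the
integer input function, and unmatched fields and columns are ignored or
nulled, respectively:

select * from json_to_record('{"a": null, "b": {"c": 1}, "d": "42", "extra": true}')
         as x(a text, b json, d int, missing text);
 a |    b     | d  | missing
---+----------+----+---------
   | {"c": 1} | 42 |
(1 row)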

src/backend/utils/adt/jsonfuncs.c

@@ -2803,26 +2803,7 @@ populate_scalar(ScalarIOData *io, Oid typid, int32 typmod, JsValue *jsv)
 
         json = jsv->val.json.str;
         Assert(json);
-
-        /* already done the hard work in the json case */
-        if ((typid == JSONOID || typid == JSONBOID) &&
-            jsv->val.json.type == JSON_TOKEN_STRING)
-        {
-            /*
-             * Add quotes around string value (should be already escaped) if
-             * converting to json/jsonb.
-             */
-
-            if (len < 0)
-                len = strlen(json);
-
-            str = palloc(len + sizeof(char) * 3);
-            str[0] = '"';
-            memcpy(&str[1], json, len);
-            str[len + 1] = '"';
-            str[len + 2] = '\0';
-        }
-        else if (len >= 0)
+        if (len >= 0)
         {
             /* Need to copy non-null-terminated string */
             str = palloc(len + 1 * sizeof(char));
@@ -2830,7 +2811,21 @@ populate_scalar(ScalarIOData *io, Oid typid, int32 typmod, JsValue *jsv)
             str[len] = '\0';
         }
         else
-            str = json;         /* null-terminated string */
+            str = json;         /* string is already null-terminated */
+
+        /* If converting to json/jsonb, make string into valid JSON literal */
+        if ((typid == JSONOID || typid == JSONBOID) &&
+            jsv->val.json.type == JSON_TOKEN_STRING)
+        {
+            StringInfoData buf;
+
+            initStringInfo(&buf);
+            escape_json(&buf, str);
+            /* free temporary buffer */
+            if (str != json)
+                pfree(str);
+            str = buf.data;
+        }
     }
     else
     {
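
To see what routing the string through escape_json() buys at the SQL level,
consider a field value containing both double quotes and backslashes (this
input literal is made up for illustration; it is not one of the committed
test cases). With the patch, both statements below return the same properly
escaped literal, "a \"quoted\" \\ value"; previously the json_to_record()
form typically failed with a JSON syntax error:

select * from json_to_record('{"out": "a \"quoted\" \\ value"}') as x(out json);
select * from jsonb_to_record('{"out": "a \"quoted\" \\ value"}') as x(out jsonb);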

src/test/regress/expected/json.out

@@ -2276,6 +2276,42 @@ select * from json_to_record('{"ia2": [[[1], [2], [3]]]}') as x(ia2 int4[][]);
  {{{1},{2},{3}}}
 (1 row)
 
+select * from json_to_record('{"out": {"key": 1}}') as x(out json);
+    out
+------------
+ {"key": 1}
+(1 row)
+
+select * from json_to_record('{"out": [{"key": 1}]}') as x(out json);
+     out
+--------------
+ [{"key": 1}]
+(1 row)
+
+select * from json_to_record('{"out": "{\"key\": 1}"}') as x(out json);
+      out
+----------------
+ "{\"key\": 1}"
+(1 row)
+
+select * from json_to_record('{"out": {"key": 1}}') as x(out jsonb);
+    out
+------------
+ {"key": 1}
+(1 row)
+
+select * from json_to_record('{"out": [{"key": 1}]}') as x(out jsonb);
+     out
+--------------
+ [{"key": 1}]
+(1 row)
+
+select * from json_to_record('{"out": "{\"key\": 1}"}') as x(out jsonb);
+      out
+----------------
+ "{\"key\": 1}"
+(1 row)
+
 -- json_strip_nulls
 select json_strip_nulls(null);
  json_strip_nulls

src/test/regress/expected/jsonb.out

@@ -2652,6 +2652,42 @@ select * from jsonb_to_record('{"ia2": [[[1], [2], [3]]]}') as x(ia2 int4[][]);
  {{{1},{2},{3}}}
 (1 row)
 
+select * from jsonb_to_record('{"out": {"key": 1}}') as x(out json);
+    out
+------------
+ {"key": 1}
+(1 row)
+
+select * from jsonb_to_record('{"out": [{"key": 1}]}') as x(out json);
+     out
+--------------
+ [{"key": 1}]
+(1 row)
+
+select * from jsonb_to_record('{"out": "{\"key\": 1}"}') as x(out json);
+      out
+----------------
+ "{\"key\": 1}"
+(1 row)
+
+select * from jsonb_to_record('{"out": {"key": 1}}') as x(out jsonb);
+    out
+------------
+ {"key": 1}
+(1 row)
+
+select * from jsonb_to_record('{"out": [{"key": 1}]}') as x(out jsonb);
+     out
+--------------
+ [{"key": 1}]
+(1 row)
+
+select * from jsonb_to_record('{"out": "{\"key\": 1}"}') as x(out jsonb);
+      out
+----------------
+ "{\"key\": 1}"
+(1 row)
+
 -- test type info caching in jsonb_populate_record()
 CREATE TEMP TABLE jsbpoptest (js jsonb);
 INSERT INTO jsbpoptest

src/test/regress/sql/json.sql

@@ -742,6 +742,13 @@ select * from json_to_record('{"ia2": [1, 2, 3]}') as x(ia2 int[][]);
 select * from json_to_record('{"ia2": [[1, 2], [3, 4]]}') as x(ia2 int4[][]);
 select * from json_to_record('{"ia2": [[[1], [2], [3]]]}') as x(ia2 int4[][]);
 
+select * from json_to_record('{"out": {"key": 1}}') as x(out json);
+select * from json_to_record('{"out": [{"key": 1}]}') as x(out json);
+select * from json_to_record('{"out": "{\"key\": 1}"}') as x(out json);
+select * from json_to_record('{"out": {"key": 1}}') as x(out jsonb);
+select * from json_to_record('{"out": [{"key": 1}]}') as x(out jsonb);
+select * from json_to_record('{"out": "{\"key\": 1}"}') as x(out jsonb);
+
 -- json_strip_nulls
 select json_strip_nulls(null);

src/test/regress/sql/jsonb.sql

@@ -709,6 +709,13 @@ select * from jsonb_to_record('{"ia2": [1, 2, 3]}') as x(ia2 int[][]);
 select * from jsonb_to_record('{"ia2": [[1, 2], [3, 4]]}') as x(ia2 int4[][]);
 select * from jsonb_to_record('{"ia2": [[[1], [2], [3]]]}') as x(ia2 int4[][]);
 
+select * from jsonb_to_record('{"out": {"key": 1}}') as x(out json);
+select * from jsonb_to_record('{"out": [{"key": 1}]}') as x(out json);
+select * from jsonb_to_record('{"out": "{\"key\": 1}"}') as x(out json);
+select * from jsonb_to_record('{"out": {"key": 1}}') as x(out jsonb);
+select * from jsonb_to_record('{"out": [{"key": 1}]}') as x(out jsonb);
+select * from jsonb_to_record('{"out": "{\"key\": 1}"}') as x(out jsonb);
+
 -- test type info caching in jsonb_populate_record()
 CREATE TEMP TABLE jsbpoptest (js jsonb);