<!-- doc/src/sgml/textsearch.sgml -->

<chapter id="textsearch">

<title>Full Text Search</title>

<indexterm zone="textsearch">
<primary>full text search</primary>
</indexterm>

<indexterm zone="textsearch">
<primary>text search</primary>
</indexterm>

<sect1 id="textsearch-intro">

<title>Introduction</title>

<para>
Full Text Searching (or just <firstterm>text search</firstterm>) provides
the capability to identify natural-language <firstterm>documents</firstterm> that
satisfy a <firstterm>query</firstterm>, and optionally to sort them by
relevance to the query. The most common type of search
is to find all documents containing given <firstterm>query terms</firstterm>
and return them in order of their <firstterm>similarity</firstterm> to the
query. Notions of <varname>query</varname> and
<varname>similarity</varname> are very flexible and depend on the specific
application. The simplest search considers <varname>query</varname> as a
set of words and <varname>similarity</varname> as the frequency of query
words in the document.
</para>

<para>
Textual search operators have existed in databases for years.
<productname>PostgreSQL</productname> has
<literal>~</literal>, <literal>~*</literal>, <literal>LIKE</literal>, and
<literal>ILIKE</literal> operators for textual data types, but they lack
many essential properties required by modern information systems:
</para>

<itemizedlist spacing="compact" mark="bullet">
<listitem>
<para>
There is no linguistic support, even for English. Regular expressions
are not sufficient because they cannot easily handle derived words, e.g.,
<literal>satisfies</literal> and <literal>satisfy</literal>. You might
miss documents that contain <literal>satisfies</literal>, although you
probably would like to find them when searching for
<literal>satisfy</literal>. It is possible to use <literal>OR</literal>
to search for multiple derived forms, but this is tedious and error-prone
(some words can have several thousand derivatives).
</para>
</listitem>

<listitem>
<para>
They provide no ordering (ranking) of search results, which makes them
ineffective when thousands of matching documents are found.
</para>
</listitem>

<listitem>
<para>
They tend to be slow because there is no index support, so they must
process all documents for every search.
</para>
</listitem>
</itemizedlist>

<para>
Full text indexing allows documents to be <emphasis>preprocessed</emphasis>
and an index saved for later rapid searching. Preprocessing includes:
</para>

<itemizedlist mark="none">
<listitem>
<para>
<emphasis>Parsing documents into <firstterm>tokens</firstterm></emphasis>. It is
useful to identify various classes of tokens, e.g., numbers, words,
complex words, email addresses, so that they can be processed
differently. In principle token classes depend on the specific
application, but for most purposes it is adequate to use a predefined
set of classes.
<productname>PostgreSQL</productname> uses a <firstterm>parser</firstterm> to
perform this step. A standard parser is provided, and custom parsers
can be created for specific needs.
</para>
</listitem>

<listitem>
<para>
<emphasis>Converting tokens into <firstterm>lexemes</firstterm></emphasis>.
A lexeme is a string, just like a token, but it has been
<firstterm>normalized</firstterm> so that different forms of the same word
are made alike. For example, normalization almost always includes
folding upper-case letters to lower-case, and often involves removal
of suffixes (such as <literal>s</literal> or <literal>es</literal> in English).
This allows searches to find variant forms of the
same word, without tediously entering all the possible variants.
Also, this step typically eliminates <firstterm>stop words</firstterm>, which
are words that are so common that they are useless for searching.
(In short, then, tokens are raw fragments of the document text, while
lexemes are words that are believed useful for indexing and searching.)
<productname>PostgreSQL</productname> uses <firstterm>dictionaries</firstterm> to
perform this step. Various standard dictionaries are provided, and
custom ones can be created for specific needs.
</para>
</listitem>

<listitem>
<para>
<emphasis>Storing preprocessed documents optimized for
searching</emphasis>. For example, each document can be represented
as a sorted array of normalized lexemes. Along with the lexemes it is
often desirable to store positional information to use for
<firstterm>proximity ranking</firstterm>, so that a document that
contains a more <quote>dense</quote> region of query words is
assigned a higher rank than one with scattered query words.
</para>
</listitem>
</itemizedlist>
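
<para>
As a quick illustration of these steps, here is a short document reduced
to its preprocessed form, assuming the standard <literal>english</literal>
configuration (the numbers record the position of each lexeme in the
document):

<programlisting>
SELECT to_tsvector('english', 'The Fat Rats');
   to_tsvector
-----------------
 'fat':2 'rat':3
</programlisting>

The stop word <literal>The</literal> was discarded, upper case was folded,
and the plural <literal>rats</literal> was normalized to the lexeme
<literal>rat</literal>.
</para>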

<para>
Dictionaries allow fine-grained control over how tokens are normalized.
With appropriate dictionaries, you can:
</para>

<itemizedlist spacing="compact" mark="bullet">
<listitem>
<para>
Define stop words that should not be indexed.
</para>
</listitem>

<listitem>
<para>
Map synonyms to a single word using <application>Ispell</application>.
</para>
</listitem>

<listitem>
<para>
Map phrases to a single word using a thesaurus.
</para>
</listitem>

<listitem>
<para>
Map different variations of a word to a canonical form using
an <application>Ispell</application> dictionary.
</para>
</listitem>

<listitem>
<para>
Map different variations of a word to a canonical form using
<application>Snowball</application> stemmer rules.
</para>
</listitem>
</itemizedlist>
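
<para>
The effect of an individual dictionary can be checked with the
<function>ts_lexize</function> function, described later in this chapter.
For example, the built-in <application>Snowball</application> stemmer
dictionary <literal>english_stem</literal> maps a plural to its canonical
form:

<programlisting>
SELECT ts_lexize('english_stem', 'stars');
 ts_lexize
-----------
 {star}
</programlisting>
</para>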

<para>
A data type <type>tsvector</type> is provided for storing preprocessed
documents, along with a type <type>tsquery</type> for representing processed
queries (<xref linkend="datatype-textsearch"/>). There are many
functions and operators available for these data types
(<xref linkend="functions-textsearch"/>), the most important of which is
the match operator <literal>@@</literal>, which we introduce in
<xref linkend="textsearch-matching"/>. Full text searches can be accelerated
using indexes (<xref linkend="textsearch-indexes"/>).
</para>

<sect2 id="textsearch-document">

<title>What Is a Document?</title>

<indexterm zone="textsearch-document">
<primary>document</primary>
<secondary>text search</secondary>
</indexterm>

<para>
A <firstterm>document</firstterm> is the unit of searching in a full text search
system; for example, a magazine article or email message. The text search
engine must be able to parse documents and store associations of lexemes
(key words) with their parent document. Later, these associations are
used to search for documents that contain query words.
</para>

<para>
For searches within <productname>PostgreSQL</productname>,
a document is normally a textual field within a row of a database table,
or possibly a combination (concatenation) of such fields, perhaps stored
in several tables or obtained dynamically. In other words, a document can
be constructed from different parts for indexing and it might not be
stored anywhere as a whole. For example:

<programlisting>
SELECT title || ' ' || author || ' ' || abstract || ' ' || body AS document
FROM messages
WHERE mid = 12;

SELECT m.title || ' ' || m.author || ' ' || m.abstract || ' ' || d.body AS document
FROM messages m, docs d
WHERE m.mid = d.did AND m.mid = 12;
</programlisting>
</para>

<note>
<para>
Actually, in these example queries, <function>coalesce</function>
should be used to prevent a single <literal>NULL</literal> attribute from
causing a <literal>NULL</literal> result for the whole document.
</para>
</note>
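
<para>
For example, the first query above can be written more safely as:

<programlisting>
SELECT coalesce(title, '') || ' ' || coalesce(author, '') || ' ' ||
       coalesce(abstract, '') || ' ' || coalesce(body, '') AS document
FROM messages
WHERE mid = 12;
</programlisting>
</para>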

<para>
Another possibility is to store the documents as simple text files in the
file system. In this case, the database can be used to store the full text
index and to execute searches, and some unique identifier can be used to
retrieve the document from the file system. However, retrieving files
from outside the database requires superuser permissions or special
function support, so this is usually less convenient than keeping all
the data inside <productname>PostgreSQL</productname>. Also, keeping
everything inside the database allows easy access
to document metadata to assist in indexing and display.
</para>

<para>
For text search purposes, each document must be reduced to the
preprocessed <type>tsvector</type> format. Searching and ranking
are performed entirely on the <type>tsvector</type> representation
of a document — the original text need only be retrieved
when the document has been selected for display to a user.
We therefore often speak of the <type>tsvector</type> as being the
document, but of course it is only a compact representation of
the full document.
</para>

</sect2>

<sect2 id="textsearch-matching">

<title>Basic Text Matching</title>

<para>
Full text searching in <productname>PostgreSQL</productname> is based on
the match operator <literal>@@</literal>, which returns
<literal>true</literal> if a <type>tsvector</type>
(document) matches a <type>tsquery</type> (query).
It doesn't matter which data type is written first:

<programlisting>
SELECT 'a fat cat sat on a mat and ate a fat rat'::tsvector @@ 'cat & rat'::tsquery;
 ?column?
----------
 t

SELECT 'fat & cow'::tsquery @@ 'a fat cat sat on a mat and ate a fat rat'::tsvector;
 ?column?
----------
 f
</programlisting>
</para>

<para>
As the above example suggests, a <type>tsquery</type> is not just raw
text, any more than a <type>tsvector</type> is. A <type>tsquery</type>
contains search terms, which must be already-normalized lexemes, and
may combine multiple terms using AND, OR, NOT, and FOLLOWED BY operators.
(For syntax details see <xref linkend="datatype-tsquery"/>.) There are
functions <function>to_tsquery</function>, <function>plainto_tsquery</function>,
and <function>phraseto_tsquery</function>
that are helpful in converting user-written text into a proper
<type>tsquery</type>, primarily by normalizing words appearing in
the text. Similarly, <function>to_tsvector</function> is used to parse and
normalize a document string. So in practice a text search match would
look more like this:

<programlisting>
SELECT to_tsvector('fat cats ate fat rats') @@ to_tsquery('fat & rat');
 ?column?
----------
 t
</programlisting>

Observe that this match would not succeed if written as

<programlisting>
SELECT 'fat cats ate fat rats'::tsvector @@ to_tsquery('fat & rat');
 ?column?
----------
 f
</programlisting>

since here no normalization of the word <literal>rats</literal> will occur.
The elements of a <type>tsvector</type> are lexemes, which are assumed
already normalized, so <literal>rats</literal> does not match <literal>rat</literal>.
</para>

<para>
The <literal>@@</literal> operator also
supports <type>text</type> input, allowing explicit conversion of a text
string to <type>tsvector</type> or <type>tsquery</type> to be skipped
in simple cases. The variants available are:

<programlisting>
tsvector @@ tsquery
tsquery @@ tsvector
text @@ tsquery
text @@ text
</programlisting>
</para>

<para>
The first two of these we saw already.
The form <type>text</type> <literal>@@</literal> <type>tsquery</type>
is equivalent to <literal>to_tsvector(x) @@ y</literal>.
The form <type>text</type> <literal>@@</literal> <type>text</type>
is equivalent to <literal>to_tsvector(x) @@ plainto_tsquery(y)</literal>.
</para>
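
<para>
For example, assuming the default text search configuration is
<literal>english</literal>, the document in the earlier example can be
given directly as a <type>text</type> string:

<programlisting>
SELECT 'fat cats ate fat rats'::text @@ to_tsquery('fat & rat');
 ?column?
----------
 t
</programlisting>
</para>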

<para>
Within a <type>tsquery</type>, the <literal>&</literal> (AND) operator
specifies that both its arguments must appear in the document to have a
match. Similarly, the <literal>|</literal> (OR) operator specifies that
at least one of its arguments must appear, while the <literal>!</literal> (NOT)
operator specifies that its argument must <emphasis>not</emphasis> appear in
order to have a match.
For example, the query <literal>fat & ! rat</literal> matches documents that
contain <literal>fat</literal> but not <literal>rat</literal>.
</para>

<para>
Searching for phrases is possible with the help of
the <literal><-></literal> (FOLLOWED BY) <type>tsquery</type> operator, which
matches only if its arguments have matches that are adjacent and in the
given order. For example:

<programlisting>
SELECT to_tsvector('fatal error') @@ to_tsquery('fatal <-> error');
 ?column?
----------
 t

SELECT to_tsvector('error is not fatal') @@ to_tsquery('fatal <-> error');
 ?column?
----------
 f
</programlisting>

There is a more general version of the FOLLOWED BY operator having the
form <literal><<replaceable>N</replaceable>></literal>,
where <replaceable>N</replaceable> is an integer standing for the difference between
the positions of the matching lexemes. <literal><1></literal> is
the same as <literal><-></literal>, while <literal><2></literal>
allows exactly one other lexeme to appear between the matches, and so
on. The <literal>phraseto_tsquery</literal> function makes use of this
operator to construct a <literal>tsquery</literal> that can match a multi-word
phrase when some of the words are stop words. For example:

<programlisting>
SELECT phraseto_tsquery('cats ate rats');
       phraseto_tsquery
-------------------------------
 'cat' <-> 'ate' <-> 'rat'

SELECT phraseto_tsquery('the cats ate the rats');
       phraseto_tsquery
-------------------------------
 'cat' <-> 'ate' <2> 'rat'
</programlisting>
</para>

<para>
A special case that's sometimes useful is that <literal><0></literal>
can be used to require that two patterns match the same word.
</para>
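
<para>
For instance, assuming the default <literal>english</literal>
configuration, combining <literal><0></literal> with a prefix-match
pattern (see <xref linkend="textsearch-parsing-queries"/>) requires both
patterns to match at the very same position:

<programlisting>
SELECT to_tsvector('fatal error') @@ to_tsquery('fat:* <0> fatal');
 ?column?
----------
 t
</programlisting>
</para>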

<para>
Parentheses can be used to control nesting of the <type>tsquery</type>
operators. Without parentheses, <literal>|</literal> binds least tightly,
then <literal>&</literal>, then <literal><-></literal>,
and <literal>!</literal> most tightly.
</para>
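
<para>
For example, <literal>!fat & rat | cat</literal> is therefore parsed as
<literal>(!fat & rat) | cat</literal>, and parentheses must be written
explicitly to obtain any other grouping, such as
<literal>!fat & (rat | cat)</literal>.
</para>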

<para>
It's worth noticing that the AND/OR/NOT operators mean something subtly
different when they are within the arguments of a FOLLOWED BY operator
than when they are not, because within FOLLOWED BY the exact position of
the match is significant. For example, normally <literal>!x</literal> matches
only documents that do not contain <literal>x</literal> anywhere.
But <literal>!x <-> y</literal> matches <literal>y</literal> if it is not
immediately after an <literal>x</literal>; an occurrence of <literal>x</literal>
elsewhere in the document does not prevent a match. Another example is
that <literal>x & y</literal> normally only requires that <literal>x</literal>
and <literal>y</literal> both appear somewhere in the document, but
<literal>(x & y) <-> z</literal> requires <literal>x</literal>
and <literal>y</literal> to match at the same place, immediately before
a <literal>z</literal>. Thus this query behaves differently from
<literal>x <-> z & y <-> z</literal>, which will match a
document containing two separate sequences <literal>x z</literal> and
<literal>y z</literal>. (This specific query is useless as written,
since <literal>x</literal> and <literal>y</literal> could not match at the same place;
but with more complex situations such as prefix-match patterns, a query
of this form could be useful.)
</para>

</sect2>

<sect2 id="textsearch-intro-configurations">

<title>Configurations</title>

<para>
The above are all simple text search examples. As mentioned before, full
text search functionality includes the ability to do many more things:
skip indexing certain words (stop words), process synonyms, and use
sophisticated parsing, e.g., parse based on more than just white space.
This functionality is controlled by <firstterm>text search
configurations</firstterm>. <productname>PostgreSQL</productname> comes with predefined
configurations for many languages, and you can easily create your own
configurations. (<application>psql</application>'s <command>\dF</command> command
shows all available configurations.)
</para>

<para>
During installation an appropriate configuration is selected and
<xref linkend="guc-default-text-search-config"/> is set accordingly
in <filename>postgresql.conf</filename>. If you are using the same text search
configuration for the entire cluster you can use the value in
<filename>postgresql.conf</filename>. To use different configurations
throughout the cluster but the same configuration within any one database,
use <command>ALTER DATABASE ... SET</command>. Otherwise, you can set
<varname>default_text_search_config</varname> in each session.
</para>

<para>
Each text search function that depends on a configuration has an optional
<type>regconfig</type> argument, so that the configuration to use can be
specified explicitly. <varname>default_text_search_config</varname>
is used only when this argument is omitted.
</para>
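
<para>
For example, the following two calls are equivalent when
<varname>default_text_search_config</varname> is set to
<literal>english</literal>:

<programlisting>
SELECT to_tsvector('english', 'The Fat Rats');  -- explicit configuration
SELECT to_tsvector('The Fat Rats');             -- uses default_text_search_config
</programlisting>
</para>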

<para>
To make it easier to build custom text search configurations, a
configuration is built up from simpler database objects.
<productname>PostgreSQL</productname>'s text search facility provides
four types of configuration-related database objects:
</para>

<itemizedlist spacing="compact" mark="bullet">
<listitem>
<para>
<firstterm>Text search parsers</firstterm> break documents into tokens
and classify each token (for example, as words or numbers).
</para>
</listitem>

<listitem>
<para>
<firstterm>Text search dictionaries</firstterm> convert tokens to normalized
form and reject stop words.
</para>
</listitem>

<listitem>
<para>
<firstterm>Text search templates</firstterm> provide the functions underlying
dictionaries. (A dictionary simply specifies a template and a set
of parameters for the template.)
</para>
</listitem>

<listitem>
<para>
<firstterm>Text search configurations</firstterm> select a parser and a set
of dictionaries to use to normalize the tokens produced by the parser.
</para>
</listitem>
</itemizedlist>

<para>
Text search parsers and templates are built from low-level C functions;
therefore it requires C programming ability to develop new ones, and
superuser privileges to install one into a database. (There are examples
of add-on parsers and templates in the <filename>contrib/</filename> area of the
<productname>PostgreSQL</productname> distribution.) Since dictionaries and
configurations just parameterize and connect together some underlying
parsers and templates, no special privilege is needed to create a new
dictionary or configuration. Examples of creating custom dictionaries and
configurations appear later in this chapter.
</para>

</sect2>

</sect1>

<sect1 id="textsearch-tables">

<title>Tables and Indexes</title>

<para>
The examples in the previous section illustrated full text matching using
simple constant strings. This section shows how to search table data,
optionally using indexes.
</para>

<sect2 id="textsearch-tables-search">

<title>Searching a Table</title>

<para>
It is possible to do a full text search without an index. A simple query
to print the <structname>title</structname> of each row that contains the word
<literal>friend</literal> in its <structfield>body</structfield> field is:

<programlisting>
SELECT title
FROM pgweb
WHERE to_tsvector('english', body) @@ to_tsquery('english', 'friend');
</programlisting>

This will also find related words such as <literal>friends</literal>
and <literal>friendly</literal>, since all these are reduced to the same
normalized lexeme.
</para>
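
<para>
For instance, both variant words are reduced to the lexeme
<literal>friend</literal>:

<programlisting>
SELECT to_tsvector('english', 'friends friendly');
 to_tsvector
--------------
 'friend':1,2
</programlisting>
</para>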

<para>
The query above specifies that the <literal>english</literal> configuration
is to be used to parse and normalize the strings. Alternatively we
could omit the configuration parameters:

<programlisting>
SELECT title
FROM pgweb
WHERE to_tsvector(body) @@ to_tsquery('friend');
</programlisting>

This query will use the configuration set by <xref
linkend="guc-default-text-search-config"/>.
</para>

<para>
A more complex example is to
select the ten most recent documents that contain <literal>create</literal> and
<literal>table</literal> in the <structname>title</structname> or <structname>body</structname>:

<programlisting>
SELECT title
FROM pgweb
WHERE to_tsvector(title || ' ' || body) @@ to_tsquery('create & table')
ORDER BY last_mod_date DESC
LIMIT 10;
</programlisting>

For clarity we omitted the <function>coalesce</function> function calls
which would be needed to find rows that contain <literal>NULL</literal>
in one of the two fields.
</para>
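
<para>
With those calls included, the same query becomes:

<programlisting>
SELECT title
FROM pgweb
WHERE to_tsvector(coalesce(title, '') || ' ' || coalesce(body, '')) @@ to_tsquery('create & table')
ORDER BY last_mod_date DESC
LIMIT 10;
</programlisting>
</para>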

<para>
Although these queries will work without an index, most applications
will find this approach too slow, except perhaps for occasional ad-hoc
searches. Practical use of text searching usually requires creating
an index.
</para>

</sect2>

<sect2 id="textsearch-tables-index">

<title>Creating Indexes</title>

<para>
We can create a <acronym>GIN</acronym> index (<xref
linkend="textsearch-indexes"/>) to speed up text searches:

<programlisting>
CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', body));
</programlisting>

Notice that the 2-argument version of <function>to_tsvector</function> is
used. Only text search functions that specify a configuration name can
be used in expression indexes (<xref linkend="indexes-expressional"/>).
This is because the index contents must be unaffected by <xref
linkend="guc-default-text-search-config"/>. If they were affected, the
index contents might be inconsistent because different entries could
contain <type>tsvector</type>s that were created with different text search
configurations, and there would be no way to guess which was which. It
would be impossible to dump and restore such an index correctly.
</para>

<para>
Because the two-argument version of <function>to_tsvector</function> was
used in the index above, only a query reference that uses the 2-argument
version of <function>to_tsvector</function> with the same configuration
name will use that index. That is, <literal>WHERE
to_tsvector('english', body) @@ 'a & b'</literal> can use the index,
but <literal>WHERE to_tsvector(body) @@ 'a & b'</literal> cannot.
This ensures that an index will be used only with the same configuration
used to create the index entries.
</para>

<para>
It is possible to set up more complex expression indexes wherein the
configuration name is specified by another column, e.g.:

<programlisting>
CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector(config_name, body));
</programlisting>

where <literal>config_name</literal> is a column in the <literal>pgweb</literal>
table. This allows mixed configurations in the same index while
recording which configuration was used for each index entry. This
would be useful, for example, if the document collection contained
documents in different languages. Again,
queries that are meant to use the index must be phrased to match, e.g.,
<literal>WHERE to_tsvector(config_name, body) @@ 'a & b'</literal>.
</para>

<para>
Indexes can even concatenate columns:

<programlisting>
CREATE INDEX pgweb_idx ON pgweb USING GIN (to_tsvector('english', title || ' ' || body));
</programlisting>
</para>

<para>
Another approach is to create a separate <type>tsvector</type> column
to hold the output of <function>to_tsvector</function>. To keep this
column automatically up to date with its source data, use a stored
generated column. This example is a
concatenation of <literal>title</literal> and <literal>body</literal>,
using <function>coalesce</function> to ensure that one field will still be
indexed when the other is <literal>NULL</literal>:

<programlisting>
ALTER TABLE pgweb
  ADD COLUMN textsearchable_index_col tsvector
    GENERATED ALWAYS AS (to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))) STORED;
</programlisting>

Then we create a <acronym>GIN</acronym> index to speed up the search:

<programlisting>
CREATE INDEX textsearch_idx ON pgweb USING GIN (textsearchable_index_col);
</programlisting>

Now we are ready to perform a fast full text search:

<programlisting>
SELECT title
FROM pgweb
WHERE textsearchable_index_col @@ to_tsquery('create & table')
ORDER BY last_mod_date DESC
LIMIT 10;
</programlisting>
</para>

<para>
One advantage of the separate-column approach over an expression index
is that it is not necessary to explicitly specify the text search
configuration in queries in order to make use of the index. As shown
in the example above, the query can depend on
<varname>default_text_search_config</varname>. Another advantage is that
searches will be faster, since it will not be necessary to redo the
<function>to_tsvector</function> calls to verify index matches. (This is more
important when using a GiST index than a GIN index; see <xref
linkend="textsearch-indexes"/>.) The expression-index approach is
simpler to set up, however, and it requires less disk space since the
<type>tsvector</type> representation is not stored explicitly.
</para>

</sect2>

</sect1>

<sect1 id="textsearch-controls">

<title>Controlling Text Search</title>

<para>
To implement full text searching there must be a function to create a
<type>tsvector</type> from a document and a <type>tsquery</type> from a
user query. Also, we need to return results in a useful order, so we need
a function that compares documents with respect to their relevance to
the query. It's also important to be able to display the results nicely.
<productname>PostgreSQL</productname> provides support for all of these
functions.
</para>

<sect2 id="textsearch-parsing-documents">

<title>Parsing Documents</title>

<para>
<productname>PostgreSQL</productname> provides the
function <function>to_tsvector</function> for converting a document to
the <type>tsvector</type> data type.
</para>

<indexterm>
<primary>to_tsvector</primary>
</indexterm>

<synopsis>
to_tsvector(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>) returns <type>tsvector</type>
</synopsis>

<para>
<function>to_tsvector</function> parses a textual document into tokens,
reduces the tokens to lexemes, and returns a <type>tsvector</type> which
lists the lexemes together with their positions in the document.
The document is processed according to the specified or default
text search configuration.
Here is a simple example:

<screen>
SELECT to_tsvector('english', 'a fat cat sat on a mat - it ate a fat rats');
                  to_tsvector
-----------------------------------------------------
 'ate':9 'cat':3 'fat':2,11 'mat':7 'rat':12 'sat':4
</screen>
</para>

<para>
In the example above we see that the resulting <type>tsvector</type> does not
contain the words <literal>a</literal>, <literal>on</literal>, or
<literal>it</literal>, the word <literal>rats</literal> became
<literal>rat</literal>, and the punctuation sign <literal>-</literal> was
ignored.
</para>

<para>
The <function>to_tsvector</function> function internally calls a parser
which breaks the document text into tokens and assigns a type to
each token. For each token, a list of
dictionaries (<xref linkend="textsearch-dictionaries"/>) is consulted,
where the list can vary depending on the token type. The first dictionary
that <firstterm>recognizes</firstterm> the token emits one or more normalized
<firstterm>lexemes</firstterm> to represent the token. For example,
<literal>rats</literal> became <literal>rat</literal> because one of the
dictionaries recognized that the word <literal>rats</literal> is a plural
form of <literal>rat</literal>. Some words are recognized as
<firstterm>stop words</firstterm> (<xref linkend="textsearch-stopwords"/>), which
causes them to be ignored since they occur too frequently to be useful in
searching. In our example these are
<literal>a</literal>, <literal>on</literal>, and <literal>it</literal>.
If no dictionary in the list recognizes the token then it is also ignored.
In this example that happened to the punctuation sign <literal>-</literal>
because there are in fact no dictionaries assigned for its token type
(<literal>Space symbols</literal>), meaning space tokens will never be
indexed. The choices of parser, dictionaries and which types of tokens to
index are determined by the selected text search configuration (<xref
linkend="textsearch-configuration"/>). It is possible to have
many different configurations in the same database, and predefined
configurations are available for various languages. In our example
we used the default configuration <literal>english</literal> for the
English language.
</para>

<para>
The function <function>setweight</function> can be used to label the
entries of a <type>tsvector</type> with a given <firstterm>weight</firstterm>,
where a weight is one of the letters <literal>A</literal>, <literal>B</literal>,
<literal>C</literal>, or <literal>D</literal>.
This is typically used to mark entries coming from
different parts of a document, such as title versus body. Later, this
information can be used for ranking of search results.
</para>
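
<para>
For example:

<screen>
SELECT setweight(to_tsvector('english', 'The Fat Rats'), 'A');
     setweight
-------------------
 'fat':2A 'rat':3A
</screen>
</para>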

<para>
Because <function>to_tsvector</function>(<literal>NULL</literal>) will
return <literal>NULL</literal>, it is recommended to use
<function>coalesce</function> whenever a field might be null.
Here is the recommended method for creating
a <type>tsvector</type> from a structured document:

<programlisting>
UPDATE tt SET ti =
    setweight(to_tsvector(coalesce(title,'')), 'A')    ||
    setweight(to_tsvector(coalesce(keyword,'')), 'B')  ||
    setweight(to_tsvector(coalesce(abstract,'')), 'C') ||
    setweight(to_tsvector(coalesce(body,'')), 'D');
</programlisting>

Here we have used <function>setweight</function> to label the source
of each lexeme in the finished <type>tsvector</type>, and then merged
the labeled <type>tsvector</type> values using the <type>tsvector</type>
concatenation operator <literal>||</literal>. (<xref
linkend="textsearch-manipulate-tsvector"/> gives details about these
operations.)
</para>

</sect2>

<sect2 id="textsearch-parsing-queries">

<title>Parsing Queries</title>

<para>
<productname>PostgreSQL</productname> provides the
functions <function>to_tsquery</function>,
<function>plainto_tsquery</function>,
<function>phraseto_tsquery</function> and
<function>websearch_to_tsquery</function>
for converting a query to the <type>tsquery</type> data type.
<function>to_tsquery</function> offers access to more features
than either <function>plainto_tsquery</function> or
<function>phraseto_tsquery</function>, but it is less forgiving about its
input. <function>websearch_to_tsquery</function> is a simplified version
of <function>to_tsquery</function> with an alternative syntax, similar
to the one used by web search engines.
</para>
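
<para>
For example, assuming the <literal>english</literal> configuration,
<function>websearch_to_tsquery</function> accepts unformatted text with
some web search conventions, such as <literal>or</literal> and quoted
phrases:

<screen>
SELECT websearch_to_tsquery('english', '"fat rat" or cat');
  websearch_to_tsquery
-------------------------
 'fat' <-> 'rat' | 'cat'
</screen>
</para>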
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>to_tsquery</primary>
|
|
|
|
</indexterm>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<synopsis>
|
2017-10-09 03:44:17 +02:00
|
|
|
to_tsquery(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">querytext</replaceable> <type>text</type>) returns <type>tsquery</type>
|
2010-07-29 21:34:41 +02:00
|
|
|
</synopsis>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
<function>to_tsquery</function> creates a <type>tsquery</type> value from
|
2007-10-21 22:04:37 +02:00
|
|
|
<replaceable>querytext</replaceable>, which must consist of single tokens
|
2017-10-09 03:44:17 +02:00
|
|
|
separated by the <type>tsquery</type> operators <literal>&</literal> (AND),
|
2016-06-09 06:30:59 +02:00
|
|
|
<literal>|</literal> (OR), <literal>!</literal> (NOT), and
|
|
|
|
<literal><-></literal> (FOLLOWED BY), possibly grouped
|
|
|
|
using parentheses. In other words, the input to
|
2007-10-21 22:04:37 +02:00
|
|
|
<function>to_tsquery</function> must already follow the general rules for
|
2017-10-09 03:44:17 +02:00
|
|
|
<type>tsquery</type> input, as described in <xref
|
2017-11-23 15:39:47 +01:00
|
|
|
linkend="datatype-tsquery"/>. The difference is that while basic
|
2017-10-09 03:44:17 +02:00
|
|
|
<type>tsquery</type> input takes the tokens at face value,
|
2016-06-09 06:30:59 +02:00
|
|
|
<function>to_tsquery</function> normalizes each token into a lexeme using
|
2007-10-21 22:04:37 +02:00
|
|
|
the specified or default configuration, and discards any tokens that are
|
|
|
|
stop words according to the configuration. For example:
|
2007-08-31 07:04:03 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT to_tsquery('english', 'The & Fat & Rats');
|
2022-04-20 17:04:28 +02:00
|
|
|
to_tsquery
|
2007-10-21 22:04:37 +02:00
|
|
|
---------------
|
|
|
|
'fat' & 'rat'
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
As in basic <type>tsquery</type> input, one or more weight letters can be
|
|
|
|
attached to each lexeme to restrict it to match only <type>tsvector</type>
|
2007-10-21 22:04:37 +02:00
|
|
|
lexemes having one of those weights. For example:
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT to_tsquery('english', 'Fat | Rats:AB');
|
2022-04-20 17:04:28 +02:00
|
|
|
to_tsquery
|
2007-10-21 22:04:37 +02:00
|
|
|
------------------
|
|
|
|
'fat' | 'rat':AB
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
Also, <literal>*</literal> can be attached to a lexeme to specify prefix matching:
|
2008-05-16 18:31:02 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2008-05-16 18:31:02 +02:00
|
|
|
SELECT to_tsquery('supern:*A & star:A*B');
|
2022-04-20 17:04:28 +02:00
|
|
|
to_tsquery
|
2008-05-16 18:31:02 +02:00
|
|
|
--------------------------
|
|
|
|
'supern':*A & 'star':*AB
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2008-05-16 18:31:02 +02:00
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
Such a lexeme will match any word in a <type>tsvector</type> that begins
|
2008-05-16 18:31:02 +02:00
|
|
|
with the given string.
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
<function>to_tsquery</function> can also accept single-quoted
|
|
|
|
phrases. This is primarily useful when the configuration includes a
|
|
|
|
thesaurus dictionary that may trigger on such phrases.
|
|
|
|
In the example below, a thesaurus contains the rule <literal>supernovae
|
|
|
|
stars : sn</literal>:
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT to_tsquery('''supernovae stars'' & !crab');
|
|
|
|
to_tsquery
|
|
|
|
---------------
|
|
|
|
'sn' & !'crab'
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
Without quotes, <function>to_tsquery</function> will generate a syntax
|
2016-06-09 06:30:59 +02:00
|
|
|
error for tokens that are not separated by an AND, OR, or FOLLOWED BY
|
|
|
|
operator.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<indexterm>
|
|
|
|
<primary>plainto_tsquery</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<synopsis>
|
2017-10-09 03:44:17 +02:00
|
|
|
plainto_tsquery(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">querytext</replaceable> <type>text</type>) returns <type>tsquery</type>
|
2010-07-29 21:34:41 +02:00
|
|
|
</synopsis>
|
2007-10-21 22:04:37 +02:00
|
|
|
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
<function>plainto_tsquery</function> transforms the unformatted text
|
2016-06-09 06:30:59 +02:00
|
|
|
<replaceable>querytext</replaceable> to a <type>tsquery</type> value.
|
2017-10-09 03:44:17 +02:00
|
|
|
The text is parsed and normalized much as for <function>to_tsvector</function>,
|
2016-06-09 06:30:59 +02:00
|
|
|
then the <literal>&</literal> (AND) <type>tsquery</type> operator is
|
|
|
|
inserted between surviving words.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Example:
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
|
|
|
SELECT plainto_tsquery('english', 'The Fat Rats');
|
2022-04-20 17:04:28 +02:00
|
|
|
plainto_tsquery
|
2007-10-21 22:04:37 +02:00
|
|
|
-----------------
|
|
|
|
'fat' & 'rat'
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
Note that <function>plainto_tsquery</function> will not
|
2016-06-09 06:30:59 +02:00
|
|
|
recognize <type>tsquery</type> operators, weight labels,
|
2016-04-07 17:44:18 +02:00
|
|
|
or prefix-match labels in its input:
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT plainto_tsquery('english', 'The Fat & Rats:C');
|
2022-04-20 17:04:28 +02:00
|
|
|
plainto_tsquery
|
2007-10-21 22:04:37 +02:00
|
|
|
---------------------
|
|
|
|
'fat' & 'rat' & 'c'
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
|
2020-04-26 17:45:54 +02:00
|
|
|
Here, all the input punctuation was discarded.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2016-04-07 17:44:18 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>phraseto_tsquery</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
|
|
|
<synopsis>
|
2017-10-09 03:44:17 +02:00
|
|
|
phraseto_tsquery(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">querytext</replaceable> <type>text</type>) returns <type>tsquery</type>
|
2016-04-07 17:44:18 +02:00
|
|
|
</synopsis>
|
|
|
|
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
<function>phraseto_tsquery</function> behaves much like
|
|
|
|
<function>plainto_tsquery</function>, except that it inserts
|
2016-06-09 06:30:59 +02:00
|
|
|
the <literal><-></literal> (FOLLOWED BY) operator between
|
|
|
|
surviving words instead of the <literal>&</literal> (AND) operator.
|
|
|
|
Also, stop words are not simply discarded, but are accounted for by
|
2017-10-09 03:44:17 +02:00
|
|
|
inserting <literal><<replaceable>N</replaceable>></literal> operators rather
|
2016-06-09 06:30:59 +02:00
|
|
|
than <literal><-></literal> operators. This function is useful
|
|
|
|
when searching for exact lexeme sequences, since the FOLLOWED BY
|
|
|
|
operators check lexeme order, not just the presence of all the lexemes.
|
2016-04-07 17:44:18 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Example:
|
|
|
|
|
|
|
|
<screen>
|
|
|
|
SELECT phraseto_tsquery('english', 'The Fat Rats');
|
|
|
|
phraseto_tsquery
|
|
|
|
------------------
|
|
|
|
'fat' <-> 'rat'
|
|
|
|
</screen>
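
A stop word between two indexed words is accounted for with a
<literal><<replaceable>N</replaceable>></literal> operator; for example,
assuming <literal>and</literal> is a stop word in the
<literal>english</literal> configuration:

<screen>
SELECT phraseto_tsquery('english', 'The Cat and Rats');
 phraseto_tsquery
-------------------
 'cat' <2> 'rat'
</screen>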
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
Like <function>plainto_tsquery</function>, the
|
|
|
|
<function>phraseto_tsquery</function> function will not
|
2016-06-09 06:30:59 +02:00
|
|
|
recognize <type>tsquery</type> operators, weight labels,
|
2016-04-07 17:44:18 +02:00
|
|
|
or prefix-match labels in its input:
|
|
|
|
|
|
|
|
<screen>
|
|
|
|
SELECT phraseto_tsquery('english', 'The Fat & Rats:C');
|
|
|
|
phraseto_tsquery
|
|
|
|
-----------------------------
|
2016-06-29 16:59:36 +02:00
|
|
|
'fat' <-> 'rat' <-> 'c'
|
2016-04-07 17:44:18 +02:00
|
|
|
</screen>
|
|
|
|
</para>
|
|
|
|
|
2018-04-05 18:55:11 +02:00
|
|
|
<indexterm>
<primary>websearch_to_tsquery</primary>
</indexterm>

<synopsis>
|
|
|
|
websearch_to_tsquery(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">querytext</replaceable> <type>text</type>) returns <type>tsquery</type>
|
|
|
|
</synopsis>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
<function>websearch_to_tsquery</function> creates a <type>tsquery</type>
|
|
|
|
value from <replaceable>querytext</replaceable> using an alternative
|
|
|
|
syntax in which simple unformatted text is a valid query.
|
|
|
|
Unlike <function>plainto_tsquery</function>
|
|
|
|
and <function>phraseto_tsquery</function>, it also recognizes certain
|
2020-04-26 17:45:54 +02:00
|
|
|
operators. Moreover, this function will never raise syntax errors,
|
2018-04-05 18:55:11 +02:00
|
|
|
which makes it possible to use raw user-supplied input for search.
|
|
|
|
The following syntax is supported:
|
2020-04-26 17:45:54 +02:00
|
|
|
|
2018-04-05 18:55:11 +02:00
|
|
|
<itemizedlist spacing="compact" mark="bullet">
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>unquoted text</literal>: text not inside quote marks will be
|
|
|
|
converted to terms separated by <literal>&</literal> operators, as
|
2020-04-26 17:45:54 +02:00
|
|
|
if processed by <function>plainto_tsquery</function>.
|
2018-04-05 18:55:11 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
<literal>"quoted text"</literal>: text inside quote marks will be
|
|
|
|
converted to terms separated by <literal><-></literal>
|
|
|
|
operators, as if processed by <function>phraseto_tsquery</function>.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2020-04-26 17:45:54 +02:00
|
|
|
<literal>OR</literal>: the word <quote>or</quote> will be converted to
|
2018-04-05 18:55:11 +02:00
|
|
|
the <literal>|</literal> operator.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2020-04-26 17:45:54 +02:00
|
|
|
<literal>-</literal>: a dash will be converted to
|
2018-04-05 18:55:11 +02:00
|
|
|
the <literal>!</literal> operator.
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</itemizedlist>
|
2020-04-26 17:45:54 +02:00
|
|
|
|
|
|
|
Other punctuation is ignored. So
|
|
|
|
like <function>plainto_tsquery</function>
|
|
|
|
and <function>phraseto_tsquery</function>,
|
|
|
|
the <function>websearch_to_tsquery</function> function will not
|
|
|
|
recognize <type>tsquery</type> operators, weight labels, or prefix-match
|
|
|
|
labels in its input.
|
2018-04-05 18:55:11 +02:00
|
|
|
</para>
|
2020-04-26 17:45:54 +02:00
|
|
|
|
2018-04-05 18:55:11 +02:00
|
|
|
<para>
|
|
|
|
Examples:
|
2018-05-21 17:41:37 +02:00
|
|
|
<screen>
|
|
|
|
SELECT websearch_to_tsquery('english', 'The fat rats');
|
|
|
|
websearch_to_tsquery
|
|
|
|
----------------------
|
|
|
|
'fat' & 'rat'
|
|
|
|
(1 row)
|
|
|
|
|
|
|
|
SELECT websearch_to_tsquery('english', '"supernovae stars" -crab');
|
2018-04-05 18:55:11 +02:00
|
|
|
websearch_to_tsquery
|
2018-05-21 17:41:37 +02:00
|
|
|
----------------------------------
|
|
|
|
'supernova' <-> 'star' & !'crab'
|
|
|
|
(1 row)
|
|
|
|
|
|
|
|
SELECT websearch_to_tsquery('english', '"sad cat" or "fat rat"');
|
2018-04-05 18:55:11 +02:00
|
|
|
websearch_to_tsquery
|
2018-05-21 17:41:37 +02:00
|
|
|
-----------------------------------
|
|
|
|
'sad' <-> 'cat' | 'fat' <-> 'rat'
|
|
|
|
(1 row)
|
|
|
|
|
|
|
|
SELECT websearch_to_tsquery('english', 'signal -"segmentation fault"');
|
|
|
|
websearch_to_tsquery
|
|
|
|
---------------------------------------
|
|
|
|
'signal' & !( 'segment' <-> 'fault' )
|
|
|
|
(1 row)
|
|
|
|
|
|
|
|
SELECT websearch_to_tsquery('english', '""" )( dummy \\ query <->');
|
|
|
|
websearch_to_tsquery
|
|
|
|
----------------------
|
|
|
|
'dummi' & 'queri'
|
|
|
|
(1 row)
|
|
|
|
</screen>
|
2018-04-05 18:55:11 +02:00
|
|
|
</para>
|
2007-08-29 04:37:04 +02:00
|
|
|
</sect2>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<sect2 id="textsearch-ranking">
|
2007-08-29 22:37:14 +02:00
|
|
|
<title>Ranking Search Results</title>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<para>
|
|
|
|
Ranking attempts to measure how relevant documents are to a particular
|
2007-10-21 22:04:37 +02:00
|
|
|
query, so that when there are many matches the most relevant ones can be
|
|
|
|
shown first. <productname>PostgreSQL</productname> provides two
|
|
|
|
predefined ranking functions, which take into account lexical, proximity,
|
|
|
|
and structural information; that is, they consider how often the query
|
|
|
|
terms appear in the document, how close together the terms are in the
|
|
|
|
document, and the importance of the part of the document where they occur.
|
|
|
|
However, the concept of relevancy is vague and very application-specific.
|
|
|
|
Different applications might require additional information for ranking,
|
2009-04-27 18:27:36 +02:00
|
|
|
e.g., document modification time. The built-in ranking functions are only
|
2007-10-21 22:04:37 +02:00
|
|
|
examples. You can write your own ranking functions and/or combine their
|
|
|
|
results with additional factors to fit your specific needs.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<para>
|
|
|
|
The two ranking functions currently available are:
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<variablelist>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<varlistentry>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<term>
|
2014-05-07 03:28:58 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>ts_rank</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>ts_rank(<optional> <replaceable class="parameter">weights</replaceable> <type>float4[]</type>, </optional> <replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">query</replaceable> <type>tsquery</type> <optional>, <replaceable class="parameter">normalization</replaceable> <type>integer</type> </optional>) returns <type>float4</type></literal>
|
2007-08-29 04:37:04 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2012-09-12 02:46:17 +02:00
|
|
|
Ranks vectors based on the frequency of their matching lexemes.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
2014-05-07 03:28:58 +02:00
|
|
|
<term>
|
2007-08-31 07:04:03 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>ts_rank_cd</primary>
|
2007-08-29 04:37:04 +02:00
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>ts_rank_cd(<optional> <replaceable class="parameter">weights</replaceable> <type>float4[]</type>, </optional> <replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">query</replaceable> <type>tsquery</type> <optional>, <replaceable class="parameter">normalization</replaceable> <type>integer</type> </optional>) returns <type>float4</type></literal>
|
2007-08-29 04:37:04 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
This function computes the <firstterm>cover density</firstterm>
|
|
|
|
ranking for the given document vector and query, as described in
|
|
|
|
Clarke, Cormack, and Tudhope's "Relevance Ranking for One to Three
|
|
|
|
Term Queries" in the journal "Information Processing and Management",
|
2017-10-09 03:44:17 +02:00
|
|
|
1999. Cover density is similar to <function>ts_rank</function> ranking
|
2014-03-24 20:46:59 +01:00
|
|
|
except that the proximity of matching lexemes to each other is
|
|
|
|
taken into consideration.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
2014-03-24 19:36:36 +01:00
|
|
|
This function requires lexeme positional information to perform
|
2017-10-09 03:44:17 +02:00
|
|
|
its calculation. Therefore, it ignores any <quote>stripped</quote>
|
|
|
|
lexemes in the <type>tsvector</type>. If there are no unstripped
|
2014-03-24 19:36:36 +01:00
|
|
|
lexemes in the input, the result will be zero. (See <xref
|
2017-11-23 15:39:47 +01:00
|
|
|
linkend="textsearch-manipulate-tsvector"/> for more information
|
2017-10-09 03:44:17 +02:00
|
|
|
about the <function>strip</function> function and positional information
|
|
|
|
in <type>tsvector</type>s.)
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
</variablelist>
|
|
|
|
|
|
|
|
</para>
|
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<para>
|
|
|
|
For both these functions,
|
2017-10-09 04:00:57 +02:00
|
|
|
the optional <replaceable class="parameter">weights</replaceable>
|
2007-10-21 22:04:37 +02:00
|
|
|
argument offers the ability to weigh word instances more or less
|
|
|
|
heavily depending on how they are labeled. The weight arrays specify
|
|
|
|
how heavily to weigh each category of word, in the order:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<synopsis>
|
2007-10-21 22:04:37 +02:00
|
|
|
{D-weight, C-weight, B-weight, A-weight}
|
2010-07-29 21:34:41 +02:00
|
|
|
</synopsis>
|
2007-10-21 22:04:37 +02:00
|
|
|
|
2017-10-09 04:00:57 +02:00
|
|
|
If no <replaceable class="parameter">weights</replaceable> are provided,
|
2007-10-21 22:04:37 +02:00
|
|
|
then these defaults are used:
|
|
|
|
|
|
|
|
<programlisting>
|
|
|
|
{0.1, 0.2, 0.4, 1.0}
|
|
|
|
</programlisting>
|
|
|
|
|
|
|
|
Typically weights are used to mark words from special areas of the
|
2009-04-27 18:27:36 +02:00
|
|
|
document, like the title or an initial abstract, so they can be
|
|
|
|
treated with more or less importance than words in the document body.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<para>
|
|
|
|
Since a longer document has a greater chance of containing a query term,
|
2009-04-27 18:27:36 +02:00
|
|
|
it is reasonable to take into account document size, e.g., a hundred-word
|
2007-08-29 04:37:04 +02:00
|
|
|
document with five instances of a search word is probably more relevant
|
|
|
|
than a thousand-word document with five instances. Both ranking functions
|
|
|
|
take an integer <replaceable>normalization</replaceable> option that
|
2007-10-21 22:04:37 +02:00
|
|
|
specifies whether and how a document's length should impact its rank.
|
|
|
|
The integer option controls several behaviors, so it is a bit mask:
|
|
|
|
you can specify one or more behaviors using
|
2007-10-15 23:39:57 +02:00
|
|
|
<literal>|</literal> (for example, <literal>2|4</literal>).
|
2007-08-29 04:37:04 +02:00
|
|
|
|
|
|
|
<itemizedlist spacing="compact" mark="bullet">
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
0 (the default) ignores the document length
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
1 divides the rank by 1 + the logarithm of the document length
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
2 divides the rank by the document length
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
4 divides the rank by the mean harmonic distance between extents
|
2017-10-09 03:44:17 +02:00
|
|
|
(this is implemented only by <function>ts_rank_cd</function>)
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
8 divides the rank by the number of unique words in the document
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
16 divides the rank by 1 + the logarithm of the number
|
|
|
|
of unique words in the document
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
2007-11-15 00:43:27 +01:00
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
32 divides the rank by itself + 1
|
|
|
|
</para>
|
|
|
|
</listitem>
|
2007-08-29 04:37:04 +02:00
|
|
|
</itemizedlist>
|
|
|
|
|
2007-11-15 00:43:27 +01:00
|
|
|
If more than one flag bit is specified, the transformations are
|
|
|
|
applied in the order listed.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
2007-10-15 23:39:57 +02:00
|
|
|
It is important to note that the ranking functions do not use any global
|
2007-11-15 00:43:27 +01:00
|
|
|
information, so it is impossible to produce a fair normalization to 1% or
|
|
|
|
100% as sometimes desired. Normalization option 32
|
|
|
|
(<literal>rank/(rank+1)</literal>) can be applied to scale all ranks
|
|
|
|
into the range zero to one, but of course this is just a cosmetic change;
|
|
|
|
it will not affect the ordering of the search results.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
Here is an example that selects only the ten highest-ranked matches:
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT title, ts_rank_cd(textsearch, query) AS rank
|
2007-08-21 23:08:47 +02:00
|
|
|
FROM apod, to_tsquery('neutrino|(dark & matter)') query
|
|
|
|
WHERE query @@ textsearch
|
2009-04-27 18:27:36 +02:00
|
|
|
ORDER BY rank DESC
|
|
|
|
LIMIT 10;
|
2007-10-21 22:04:37 +02:00
|
|
|
title | rank
|
2007-08-21 23:08:47 +02:00
|
|
|
-----------------------------------------------+----------
|
|
|
|
Neutrinos in the Sun | 3.1
|
|
|
|
The Sudbury Neutrino Detector | 2.4
|
|
|
|
A MACHO View of Galactic Dark Matter | 2.01317
|
|
|
|
Hot Gas and Dark Matter | 1.91171
|
|
|
|
The Virgo Cluster: Hot Plasma and Dark Matter | 1.90953
|
|
|
|
Rafting for Solar Neutrinos | 1.9
|
|
|
|
NGC 4650A: Strange Galaxy and Dark Matter | 1.85774
|
|
|
|
Hot Gas and Dark Matter | 1.6123
|
|
|
|
Ice Fishing for Cosmic Neutrinos | 1.6
|
|
|
|
Weak Lensing Distorts the Universe | 0.818218
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
This is the same example using normalized ranking:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-11-15 00:43:27 +01:00
|
|
|
SELECT title, ts_rank_cd(textsearch, query, 32 /* rank/(rank+1) */ ) AS rank
|
2007-08-21 23:08:47 +02:00
|
|
|
FROM apod, to_tsquery('neutrino|(dark & matter)') query
|
|
|
|
WHERE query @@ textsearch
|
2009-04-27 18:27:36 +02:00
|
|
|
ORDER BY rank DESC
|
|
|
|
LIMIT 10;
|
2007-10-21 22:04:37 +02:00
|
|
|
title | rank
|
2007-08-21 23:08:47 +02:00
|
|
|
-----------------------------------------------+-------------------
|
|
|
|
Neutrinos in the Sun | 0.756097569485493
|
|
|
|
The Sudbury Neutrino Detector | 0.705882361190954
|
|
|
|
A MACHO View of Galactic Dark Matter | 0.668123210574724
|
|
|
|
Hot Gas and Dark Matter | 0.65655958650282
|
|
|
|
The Virgo Cluster: Hot Plasma and Dark Matter | 0.656301290640973
|
|
|
|
Rafting for Solar Neutrinos | 0.655172410958162
|
|
|
|
NGC 4650A: Strange Galaxy and Dark Matter | 0.650072921219637
|
|
|
|
Hot Gas and Dark Matter | 0.617195790024749
|
|
|
|
Ice Fishing for Cosmic Neutrinos | 0.615384618911517
|
|
|
|
Weak Lensing Distorts the Universe | 0.450010798361481
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Ranking can be expensive since it requires consulting the
|
2007-10-21 22:04:37 +02:00
|
|
|
<type>tsvector</type> of each matching document, which can be I/O bound and
|
|
|
|
therefore slow. Unfortunately, it is almost impossible to avoid since
|
2009-06-17 23:58:49 +02:00
|
|
|
practical queries often result in large numbers of matches.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
</sect2>
|
|
|
|
|
|
|
|
<sect2 id="textsearch-headline">
|
2007-08-29 22:37:14 +02:00
|
|
|
<title>Highlighting Results</title>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
|
|
|
<para>
|
|
|
|
To present search results it is ideal to show a part of each document and
|
|
|
|
how it is related to the query. Usually, search engines show fragments of
|
2017-10-09 03:44:17 +02:00
|
|
|
the document with marked search terms. <productname>PostgreSQL</productname>
|
2007-10-21 22:04:37 +02:00
|
|
|
provides a function <function>ts_headline</function> that
|
2007-10-15 23:39:57 +02:00
|
|
|
implements this functionality.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>ts_headline</primary>
|
|
|
|
</indexterm>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<synopsis>
|
2017-10-09 03:44:17 +02:00
|
|
|
ts_headline(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>, <replaceable class="parameter">query</replaceable> <type>tsquery</type> <optional>, <replaceable class="parameter">options</replaceable> <type>text</type> </optional>) returns <type>text</type>
|
2010-07-29 21:34:41 +02:00
|
|
|
</synopsis>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<para>
|
|
|
|
<function>ts_headline</function> accepts a document along
|
2009-06-17 23:58:49 +02:00
|
|
|
with a query, and returns an excerpt from
|
2007-10-21 22:04:37 +02:00
|
|
|
the document in which terms from the query are highlighted. The
|
|
|
|
configuration to be used to parse the document can be specified by
|
|
|
|
<replaceable>config</replaceable>; if <replaceable>config</replaceable>
|
|
|
|
is omitted, the
|
|
|
|
<varname>default_text_search_config</varname> configuration is used.
|
|
|
|
</para>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
If an <replaceable>options</replaceable> string is specified it must
|
2007-10-15 23:39:57 +02:00
|
|
|
consist of a comma-separated list of one or more
|
2017-10-09 03:44:17 +02:00
|
|
|
<replaceable>option</replaceable><literal>=</literal><replaceable>value</replaceable> pairs.
|
2007-08-29 04:37:04 +02:00
|
|
|
The available options are:
|
|
|
|
|
|
|
|
<itemizedlist spacing="compact" mark="bullet">
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2020-04-09 21:11:08 +02:00
|
|
|
<literal>MaxWords</literal>, <literal>MinWords</literal> (integers):
|
|
|
|
these numbers determine the longest and shortest headlines to output.
|
|
|
|
The default values are 35 and 15.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
2011-08-30 19:32:49 +02:00
|
|
|
<listitem>
|
2007-08-29 04:37:04 +02:00
|
|
|
<para>
|
2020-04-09 21:11:08 +02:00
|
|
|
<literal>ShortWord</literal> (integer): words of this length or less
|
|
|
|
will be dropped at the start and end of a headline, unless they are
|
|
|
|
query terms. The default value of three eliminates common English
|
|
|
|
articles.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2020-04-09 21:11:08 +02:00
|
|
|
<literal>HighlightAll</literal> (boolean): if
|
|
|
|
<literal>true</literal> the whole document will be used as the
|
|
|
|
headline, ignoring the preceding three parameters. The default
|
|
|
|
is <literal>false</literal>.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
2008-10-17 20:05:19 +02:00
|
|
|
<listitem>
|
|
|
|
<para>
|
2020-04-09 21:11:08 +02:00
|
|
|
<literal>MaxFragments</literal> (integer): maximum number of text
|
|
|
|
fragments to display. The default value of zero selects a
|
|
|
|
non-fragment-based headline generation method. A value greater
|
|
|
|
than zero selects fragment-based headline generation (see below).
|
2008-10-17 20:05:19 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2020-04-09 21:11:08 +02:00
|
|
|
<literal>StartSel</literal>, <literal>StopSel</literal> (strings):
|
|
|
|
the strings with which to delimit query words appearing in the
|
|
|
|
document, to distinguish them from other excerpted words. The
|
|
|
|
default values are <quote><literal><b></literal></quote> and
|
|
|
|
<quote><literal></b></literal></quote>, which can be suitable
|
|
|
|
for HTML output.
|
2008-10-17 20:05:19 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
2007-08-29 04:37:04 +02:00
|
|
|
<listitem>
|
|
|
|
<para>
|
2020-04-09 21:11:08 +02:00
|
|
|
<literal>FragmentDelimiter</literal> (string): When more than one
|
|
|
|
fragment is displayed, the fragments will be separated by this string.
|
|
|
|
The default is <quote><literal> ... </literal></quote>.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</itemizedlist>
|
|
|
|
|
2018-01-27 00:25:02 +01:00
|
|
|
These option names are recognized case-insensitively.
|
2020-04-09 21:11:08 +02:00
|
|
|
You must double-quote string values if they contain spaces or commas.
|
|
|
|
</para>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2020-04-09 21:11:08 +02:00
|
|
|
<para>
|
|
|
|
In non-fragment-based headline
|
|
|
|
generation, <function>ts_headline</function> locates matches for the
|
|
|
|
given <replaceable class="parameter">query</replaceable> and chooses a
|
|
|
|
single one to display, preferring matches that have more query words
|
|
|
|
within the allowed headline length.
|
|
|
|
In fragment-based headline generation, <function>ts_headline</function>
|
|
|
|
locates the query matches and splits each match
|
|
|
|
into <quote>fragments</quote> of no more than <literal>MaxWords</literal>
|
|
|
|
words each, preferring fragments with more query words, and when
|
|
|
|
possible <quote>stretching</quote> fragments to include surrounding
|
|
|
|
words. The fragment-based mode is thus more useful when the query
|
|
|
|
matches span large sections of the document, or when it's desirable to
|
|
|
|
display multiple matches.
|
|
|
|
In either mode, if no query matches can be identified, then a single
|
|
|
|
fragment of the first <literal>MinWords</literal> words in the document
|
|
|
|
will be displayed.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<para>
|
|
|
|
For example:
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2009-04-14 02:49:56 +02:00
|
|
|
SELECT ts_headline('english',
|
|
|
|
'The most common type of search
|
2010-08-25 23:42:55 +02:00
|
|
|
is to find all documents containing given query terms
|
2007-10-29 02:55:11 +01:00
|
|
|
and return them in order of their similarity to the
|
2009-04-14 02:49:56 +02:00
|
|
|
query.',
|
2020-04-09 21:11:08 +02:00
|
|
|
to_tsquery('english', 'query & similarity'));
|
|
|
|
ts_headline
|
2007-10-29 02:55:11 +01:00
|
|
|
------------------------------------------------------------
|
2020-04-09 21:11:08 +02:00
|
|
|
containing given <b>query</b> terms +
|
|
|
|
and return them in order of their <b>similarity</b> to the+
|
2007-10-29 02:55:11 +01:00
|
|
|
<b>query</b>.
|
|
|
|
|
2009-04-14 02:49:56 +02:00
|
|
|
SELECT ts_headline('english',
|
2020-04-09 21:11:08 +02:00
|
|
|
'Search terms may occur
|
|
|
|
many times in a document,
|
|
|
|
requiring ranking of the search matches to decide which
|
|
|
|
occurrences to display in the result.',
|
|
|
|
to_tsquery('english', 'search & term'),
|
|
|
|
'MaxFragments=10, MaxWords=7, MinWords=3, StartSel=<<, StopSel=>>');
|
|
|
|
ts_headline
|
|
|
|
------------------------------------------------------------
|
|
|
|
<<Search>> <<terms>> may occur +
|
|
|
|
many times ... ranking of the <<search>> matches to decide
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
<function>ts_headline</function> uses the original document, not a
|
2007-10-21 22:04:37 +02:00
|
|
|
<type>tsvector</type> summary, so it can be slow and should be used with
|
2016-11-13 19:12:35 +01:00
|
|
|
care.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
</sect2>
|
|
|
|
|
|
|
|
</sect1>
|
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<sect1 id="textsearch-features">
|
|
|
|
<title>Additional Features</title>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
This section describes additional functions and operators that are
|
|
|
|
useful in connection with text search.
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<sect2 id="textsearch-manipulate-tsvector">
|
|
|
|
<title>Manipulating Documents</title>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<para>
|
2017-11-23 15:39:47 +01:00
|
|
|
<xref linkend="textsearch-parsing-documents"/> showed how raw textual
|
2017-10-09 03:44:17 +02:00
|
|
|
documents can be converted into <type>tsvector</type> values.
|
2007-10-21 22:04:37 +02:00
|
|
|
<productname>PostgreSQL</productname> also provides functions and
|
|
|
|
operators that can be used to manipulate documents that are already
|
2017-10-09 03:44:17 +02:00
|
|
|
in <type>tsvector</type> form.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<variablelist>
|
2007-08-29 04:37:04 +02:00
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<varlistentry>
|
|
|
|
|
2014-05-07 03:28:58 +02:00
|
|
|
<term>
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>tsvector concatenation</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal><type>tsvector</type> || <type>tsvector</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
The <type>tsvector</type> concatenation operator
|
2007-10-21 22:04:37 +02:00
|
|
|
returns a vector which combines the lexemes and positional information
|
|
|
|
of the two vectors given as arguments. Positions and weight labels
|
|
|
|
are retained during the concatenation.
|
|
|
|
Positions appearing in the right-hand vector are offset by the largest
|
|
|
|
position mentioned in the left-hand vector, so that the result is
|
2017-10-09 03:44:17 +02:00
|
|
|
nearly equivalent to the result of performing <function>to_tsvector</function>
|
2007-10-21 22:04:37 +02:00
|
|
|
on the concatenation of the two original document strings. (The
|
|
|
|
equivalence is not exact, because any stop-words removed from the
|
|
|
|
end of the left-hand argument will not affect the result, whereas
|
|
|
|
they would have affected the positions of the lexemes in the
|
|
|
|
right-hand argument if textual concatenation were used.)
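For example:

<screen>
SELECT 'a:1 b:2'::tsvector || 'c:1 d:2'::tsvector;
        ?column?
-------------------------
 'a':1 'b':2 'c':3 'd':4
</screen>

Here the positions in the right-hand vector have been offset by 2, the
largest position in the left-hand vector.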
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
One advantage of using concatenation in the vector form, rather than
|
2017-10-09 03:44:17 +02:00
|
|
|
concatenating text before applying <function>to_tsvector</function>, is that
|
2007-10-21 22:04:37 +02:00
|
|
|
you can use different configurations to parse different sections
|
2017-10-09 03:44:17 +02:00
|
|
|
of the document. Also, because the <function>setweight</function> function
|
2007-10-21 22:04:37 +02:00
|
|
|
marks all lexemes of the given vector the same way, it is necessary
|
2017-10-09 03:44:17 +02:00
|
|
|
to parse the text and do <function>setweight</function> before concatenating
|
2007-10-21 22:04:37 +02:00
|
|
|
if you want to label different parts of the document with different
|
|
|
|
weights.
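For example, here is a minimal sketch of labeling a title more heavily
than the body text (see <function>setweight</function> below; the
<literal>D</literal> labels are not shown on output, since
<literal>D</literal> is the default weight):

<screen>
SELECT setweight(to_tsvector('english', 'The title'), 'A') ||
       setweight(to_tsvector('english', 'the body text'), 'D');
          ?column?
-----------------------------
 'bodi':4 'text':5 'titl':2A
</screen>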
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
2014-05-07 03:28:58 +02:00
|
|
|
<term>
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>setweight</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>setweight(<replaceable class="parameter">vector</replaceable> <type>tsvector</type>, <replaceable class="parameter">weight</replaceable> <type>"char"</type>) returns <type>tsvector</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
<function>setweight</function> returns a copy of the input vector in which every
|
|
|
|
position has been labeled with the given <replaceable>weight</replaceable>, either
|
2007-10-21 22:04:37 +02:00
|
|
|
<literal>A</literal>, <literal>B</literal>, <literal>C</literal>, or
|
|
|
|
<literal>D</literal>. (<literal>D</literal> is the default for new
|
|
|
|
vectors and as such is not displayed on output.) These labels are
|
|
|
|
retained when vectors are concatenated, allowing words from different
|
|
|
|
parts of a document to be weighted differently by ranking functions.
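For example:

<screen>
SELECT setweight('fat:2,4 cat:3 rat:5B'::tsvector, 'A');
           setweight
-------------------------------
 'cat':3A 'fat':2A,4A 'rat':5A
</screen>

Note that any pre-existing weight labels, such as the
<literal>B</literal> on <literal>rat</literal> here, are overwritten.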
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
Note that weight labels apply to <emphasis>positions</emphasis>, not
|
|
|
|
<emphasis>lexemes</emphasis>. If the input vector has been stripped of
|
|
|
|
positions then <function>setweight</function> does nothing.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
2014-05-07 03:28:58 +02:00
|
|
|
<term>
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>length(tsvector)</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>length(<replaceable class="parameter">vector</replaceable> <type>tsvector</type>) returns <type>integer</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
Returns the number of lexemes stored in the vector.
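For example:

<screen>
SELECT length('fat:2,4 cat:3 rat:5A'::tsvector);
 length
--------
      3
</screen>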
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
2014-05-07 03:28:58 +02:00
|
|
|
<term>
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>strip</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>strip(<replaceable class="parameter">vector</replaceable> <type>tsvector</type>) returns <type>tsvector</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2016-06-09 06:30:59 +02:00
|
|
|
Returns a vector that lists the same lexemes as the given vector, but
|
|
|
|
lacks any position or weight information. The result is usually much
|
|
|
|
smaller than an unstripped vector, but it is also less useful.
|
|
|
|
Relevance ranking does not work as well on stripped vectors as
|
2016-06-29 21:00:25 +02:00
|
|
|
unstripped ones. Also,
|
2017-10-09 03:44:17 +02:00
|
|
|
the <literal><-></literal> (FOLLOWED BY) <type>tsquery</type> operator
|
2016-06-29 21:00:25 +02:00
|
|
|
will never match stripped input, since it cannot determine the
|
|
|
|
distance between lexeme occurrences.
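For example:

<screen>
SELECT strip('fat:2,4 cat:3 rat:5A'::tsvector);
       strip
-------------------
 'cat' 'fat' 'rat'
</screen>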
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
</variablelist>
|
|
|
|
|
2016-03-11 17:22:36 +01:00
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
A full list of <type>tsvector</type>-related functions is available
|
2017-11-23 15:39:47 +01:00
|
|
|
in <xref linkend="textsearch-functions-table"/>.
|
2016-03-11 17:22:36 +01:00
|
|
|
</para>
|
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
</sect2>
|
|
|
|
|
|
|
|
<sect2 id="textsearch-manipulate-tsquery">
|
|
|
|
<title>Manipulating Queries</title>
|
|
|
|
|
|
|
|
<para>
|
2017-11-23 15:39:47 +01:00
|
|
|
<xref linkend="textsearch-parsing-queries"/> showed how raw textual
|
2017-10-09 03:44:17 +02:00
|
|
|
queries can be converted into <type>tsquery</type> values.
|
2007-10-21 22:04:37 +02:00
|
|
|
<productname>PostgreSQL</productname> also provides functions and
|
|
|
|
operators that can be used to manipulate queries that are already
|
2017-10-09 03:44:17 +02:00
|
|
|
in <type>tsquery</type> form.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<variablelist>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
|
|
|
<term>
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal><type>tsquery</type> && <type>tsquery</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
Returns the AND-combination of the two given queries.
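For example:

<screen>
SELECT 'fat | rat'::tsquery && 'cat'::tsquery;
         ?column?
---------------------------
 ( 'fat' | 'rat' ) & 'cat'
</screen>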
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
|
|
|
<term>
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal><type>tsquery</type> || <type>tsquery</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
Returns the OR-combination of the two given queries.
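For example:

<screen>
SELECT 'fat | rat'::tsquery || 'cat'::tsquery;
        ?column?
------------------------
 'fat' | 'rat' | 'cat'
</screen>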
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
|
|
|
<term>
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>!! <type>tsquery</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
Returns the negation (NOT) of the given query.
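For example:

<screen>
SELECT !! 'cat'::tsquery;
 ?column?
----------
 !'cat'
</screen>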
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
|
|
|
|
</varlistentry>
|
|
|
|
|
2016-04-07 17:44:18 +02:00
|
|
|
<varlistentry>
|
|
|
|
|
|
|
|
<term>
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal><type>tsquery</type> <-> <type>tsquery</type></literal>
|
2016-04-07 17:44:18 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2016-06-09 06:30:59 +02:00
|
|
|
Returns a query that searches for a match to the first given query
|
|
|
|
immediately followed by a match to the second given query, using
|
2017-10-09 03:44:17 +02:00
|
|
|
the <literal><-></literal> (FOLLOWED BY)
|
|
|
|
<type>tsquery</type> operator. For example:
|
2016-04-07 17:44:18 +02:00
|
|
|
|
|
|
|
<screen>
|
|
|
|
SELECT to_tsquery('fat') <-> to_tsquery('cat | rat');
|
2020-10-19 17:50:33 +02:00
|
|
|
?column?
|
|
|
|
----------------------------
|
|
|
|
'fat' <-> ( 'cat' | 'rat' )
|
2016-04-07 17:44:18 +02:00
|
|
|
</screen>
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
|
|
|
<term>
|
|
|
|
<indexterm>
|
|
|
|
<primary>tsquery_phrase</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>tsquery_phrase(<replaceable class="parameter">query1</replaceable> <type>tsquery</type>, <replaceable class="parameter">query2</replaceable> <type>tsquery</type> [, <replaceable class="parameter">distance</replaceable> <type>integer</type> ]) returns <type>tsquery</type></literal>
|
2016-04-07 17:44:18 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2016-06-09 06:30:59 +02:00
|
|
|
Returns a query that searches for a match to the first given query
|
2020-10-19 16:58:38 +02:00
|
|
|
followed by a match to the second given query at a distance of exactly
|
2016-06-27 19:41:00 +02:00
|
|
|
<replaceable>distance</replaceable> lexemes, using
|
2017-10-09 03:44:17 +02:00
|
|
|
the <literal><<replaceable>N</replaceable>></literal>
|
|
|
|
<type>tsquery</type> operator. For example:
|
2016-04-07 17:44:18 +02:00
|
|
|
|
|
|
|
<screen>
|
|
|
|
SELECT tsquery_phrase(to_tsquery('fat'), to_tsquery('cat'), 10);
|
|
|
|
tsquery_phrase
|
|
|
|
------------------
|
|
|
|
'fat' <10> 'cat'
|
|
|
|
</screen>
|
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
|
|
|
|
</varlistentry>
|
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<varlistentry>
|
|
|
|
|
2014-05-07 03:28:58 +02:00
|
|
|
<term>
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>numnode</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>numnode(<replaceable class="parameter">query</replaceable> <type>tsquery</type>) returns <type>integer</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
|
|
|
Returns the number of nodes (lexemes plus operators) in a
|
2017-10-09 03:44:17 +02:00
|
|
|
<type>tsquery</type>. This function is useful
|
2007-10-21 22:04:37 +02:00
|
|
|
for determining whether the <replaceable>query</replaceable> is meaningful
|
|
|
|
(returns > 0) or contains only stop words (returns 0).
|
|
|
|
Examples:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT numnode(plainto_tsquery('the any'));
|
|
|
|
NOTICE: query contains only stopword(s) or doesn't contain lexeme(s), ignored
|
|
|
|
numnode
|
|
|
|
---------
|
|
|
|
0
|
|
|
|
|
|
|
|
SELECT numnode('foo & bar'::tsquery);
|
|
|
|
numnode
|
|
|
|
---------
|
|
|
|
3
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
2014-05-07 03:28:58 +02:00
|
|
|
<term>
|
2007-10-21 22:04:37 +02:00
|
|
|
<indexterm>
|
|
|
|
<primary>querytree</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>querytree(<replaceable class="parameter">query</replaceable> <type>tsquery</type>) returns <type>text</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
Returns the portion of a <type>tsquery</type> that can be used for
|
2007-10-21 22:04:37 +02:00
|
|
|
searching an index. This function is useful for detecting
|
|
|
|
unindexable queries, for example those containing only stop words
|
|
|
|
or only negated terms. For example:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2020-04-24 21:51:35 +02:00
|
|
|
SELECT querytree(to_tsquery('defined'));
|
2007-10-21 22:04:37 +02:00
|
|
|
querytree
|
|
|
|
-----------
|
2020-04-24 21:51:35 +02:00
|
|
|
'defin'
|
2007-10-21 22:04:37 +02:00
|
|
|
|
2020-04-24 21:51:35 +02:00
|
|
|
SELECT querytree(to_tsquery('!defined'));
|
|
|
|
querytree
|
|
|
|
-----------
|
|
|
|
T
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
</variablelist>
|
|
|
|
|
|
|
|
<sect3 id="textsearch-query-rewriting">
|
|
|
|
<title>Query Rewriting</title>
|
|
|
|
|
|
|
|
<indexterm zone="textsearch-query-rewriting">
|
|
|
|
<primary>ts_rewrite</primary>
|
|
|
|
</indexterm>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
The <function>ts_rewrite</function> family of functions searches a
|
2017-10-09 03:44:17 +02:00
|
|
|
given <type>tsquery</type> for occurrences of a target
|
2009-04-27 18:27:36 +02:00
|
|
|
subquery, and replaces each occurrence with a
|
2007-10-21 22:04:37 +02:00
|
|
|
substitute subquery. In essence this operation is a
|
2017-10-09 03:44:17 +02:00
|
|
|
<type>tsquery</type>-specific version of substring replacement.
|
2007-10-21 22:04:37 +02:00
|
|
|
A target and substitute combination can be
|
2017-10-09 03:44:17 +02:00
|
|
|
thought of as a <firstterm>query rewrite rule</firstterm>. A collection
|
2007-10-21 22:04:37 +02:00
|
|
|
of such rewrite rules can be a powerful search aid.
|
|
|
|
For example, you can expand the search using synonyms
|
2017-10-09 03:44:17 +02:00
|
|
|
(e.g., <literal>new york</literal>, <literal>big apple</literal>, <literal>nyc</literal>,
|
|
|
|
<literal>gotham</literal>) or narrow the search to direct the user to some hot
|
2007-10-21 22:04:37 +02:00
|
|
|
topic. There is some overlap in functionality between this feature
|
2017-11-23 15:39:47 +01:00
|
|
|
and thesaurus dictionaries (<xref linkend="textsearch-thesaurus"/>).
|
2007-10-21 22:04:37 +02:00
|
|
|
However, you can modify a set of rewrite rules on-the-fly without
|
|
|
|
reindexing, whereas updating a thesaurus requires reindexing to be
|
|
|
|
effective.
|
|
|
|
</para>
|
|
|
|
|
|
|
|
<variablelist>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
|
|
|
<term>
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>ts_rewrite (<replaceable class="parameter">query</replaceable> <type>tsquery</type>, <replaceable class="parameter">target</replaceable> <type>tsquery</type>, <replaceable class="parameter">substitute</replaceable> <type>tsquery</type>) returns <type>tsquery</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
This form of <function>ts_rewrite</function> simply applies a single
|
2017-10-09 04:00:57 +02:00
|
|
|
rewrite rule: <replaceable class="parameter">target</replaceable>
|
|
|
|
is replaced by <replaceable class="parameter">substitute</replaceable>
|
2007-10-21 22:04:37 +02:00
|
|
|
wherever it appears in <replaceable
|
2017-10-09 04:00:57 +02:00
|
|
|
class="parameter">query</replaceable>. For example:
|
2007-10-21 22:04:37 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT ts_rewrite('a & b'::tsquery, 'a'::tsquery, 'c'::tsquery);
|
|
|
|
ts_rewrite
|
|
|
|
------------
|
|
|
|
'b' & 'c'
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
|
|
|
|
|
|
|
<term>
|
2017-10-09 03:44:17 +02:00
|
|
|
<literal>ts_rewrite (<replaceable class="parameter">query</replaceable> <type>tsquery</type>, <replaceable class="parameter">select</replaceable> <type>text</type>) returns <type>tsquery</type></literal>
|
2007-10-21 22:04:37 +02:00
|
|
|
</term>
|
|
|
|
|
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
This form of <function>ts_rewrite</function> accepts a starting
|
2021-06-11 03:38:04 +02:00
|
|
|
<replaceable>query</replaceable> and an SQL <replaceable>select</replaceable> command, which
|
2017-10-09 03:44:17 +02:00
|
|
|
is given as a text string. The <replaceable>select</replaceable> must yield two
|
|
|
|
columns of <type>tsquery</type> type. For each row of the
|
|
|
|
<replaceable>select</replaceable> result, occurrences of the first column value
|
2007-10-21 22:04:37 +02:00
|
|
|
(the target) are replaced by the second column value (the substitute)
|
2017-10-09 03:44:17 +02:00
|
|
|
within the current <replaceable>query</replaceable> value. For example:
|
2007-10-21 22:04:37 +02:00
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
CREATE TABLE aliases (t tsquery PRIMARY KEY, s tsquery);
|
|
|
|
INSERT INTO aliases VALUES('a', 'c');
|
|
|
|
|
|
|
|
SELECT ts_rewrite('a & b'::tsquery, 'SELECT t,s FROM aliases');
|
|
|
|
ts_rewrite
|
|
|
|
------------
|
|
|
|
'b' & 'c'
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Note that when multiple rewrite rules are applied in this way,
|
|
|
|
the order of application can be important; so in practice you will
|
2017-10-09 03:44:17 +02:00
|
|
|
want the source query to <literal>ORDER BY</literal> some ordering key.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
</variablelist>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Let's consider a real-life astronomical example. We'll expand query
|
|
|
|
<literal>supernovae</literal> using table-driven rewriting rules:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
CREATE TABLE aliases (t tsquery primary key, s tsquery);
|
|
|
|
INSERT INTO aliases VALUES(to_tsquery('supernovae'), to_tsquery('supernovae|sn'));
|
|
|
|
|
|
|
|
SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT * FROM aliases');
|
2022-04-20 17:04:28 +02:00
|
|
|
ts_rewrite
|
2007-10-21 22:04:37 +02:00
|
|
|
---------------------------------
|
|
|
|
'crab' & ( 'supernova' | 'sn' )
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
|
|
|
|
We can change the rewriting rules just by updating the table:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2009-04-27 18:27:36 +02:00
|
|
|
UPDATE aliases
|
|
|
|
SET s = to_tsquery('supernovae|sn & !nebulae')
|
|
|
|
WHERE t = to_tsquery('supernovae');
|
2007-10-21 22:04:37 +02:00
|
|
|
|
|
|
|
SELECT ts_rewrite(to_tsquery('supernovae & crab'), 'SELECT * FROM aliases');
|
2022-04-20 17:04:28 +02:00
|
|
|
ts_rewrite
|
2007-10-21 22:04:37 +02:00
|
|
|
---------------------------------------------
|
|
|
|
'crab' & ( 'supernova' | 'sn' & !'nebula' )
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
Rewriting can be slow when there are many rewriting rules, since it
|
2009-04-27 18:27:36 +02:00
|
|
|
checks every rule for a possible match. To filter out obvious non-candidate
|
2007-10-21 22:04:37 +02:00
|
|
|
rules we can use the containment operators for the <type>tsquery</type>
|
|
|
|
type. In the example below, we select only those rules which might match
|
|
|
|
the original query:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
SELECT ts_rewrite('a & b'::tsquery,
|
|
|
|
'SELECT t,s FROM aliases WHERE ''a & b''::tsquery @> t');
|
|
|
|
ts_rewrite
|
|
|
|
------------
|
|
|
|
'b' & 'c'
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
</sect3>
|
|
|
|
|
|
|
|
</sect2>
|
|
|
|
|
|
|
|
<sect2 id="textsearch-update-triggers">
|
|
|
|
<title>Triggers for Automatic Updates</title>
|
|
|
|
|
|
|
|
<indexterm>
|
|
|
|
<primary>trigger</primary>
|
|
|
|
<secondary>for updating a derived tsvector column</secondary>
|
|
|
|
</indexterm>
|
|
|
|
|
2019-03-30 08:13:09 +01:00
|
|
|
<note>
|
|
|
|
<para>
|
|
|
|
The method described in this section has been obsoleted by the use of
|
|
|
|
stored generated columns, as described in <xref
|
|
|
|
linkend="textsearch-tables-index"/>.
|
|
|
|
</para>
|
|
|
|
</note>
|
|
|
|
|
2007-10-21 22:04:37 +02:00
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
When using a separate column to store the <type>tsvector</type> representation
|
2007-10-21 22:04:37 +02:00
|
|
|
of your documents, it is necessary to create a trigger to update the
|
2017-10-09 03:44:17 +02:00
|
|
|
<type>tsvector</type> column when the document content columns change.
|
2007-10-21 22:04:37 +02:00
|
|
|
Two built-in trigger functions are available for this, or you can write
|
|
|
|
your own.
|
|
|
|
</para>
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<synopsis>
|
2020-05-10 22:20:28 +02:00
|
|
|
tsvector_update_trigger(<replaceable class="parameter">tsvector_column_name</replaceable>, <replaceable class="parameter">config_name</replaceable>, <replaceable class="parameter">text_column_name</replaceable> <optional>, ... </optional>)
|
|
|
|
tsvector_update_trigger_column(<replaceable class="parameter">tsvector_column_name</replaceable>, <replaceable class="parameter">config_column_name</replaceable>, <replaceable class="parameter">text_column_name</replaceable> <optional>, ... </optional>)
|
2010-07-29 21:34:41 +02:00
|
|
|
</synopsis>
|
2007-10-21 22:04:37 +02:00
|
|
|
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
These trigger functions automatically compute a <type>tsvector</type>
|
2007-10-21 22:04:37 +02:00
|
|
|
column from one or more textual columns, under the control of
|
2017-10-09 03:44:17 +02:00
|
|
|
parameters specified in the <command>CREATE TRIGGER</command> command.
|
2007-10-21 22:04:37 +02:00
|
|
|
An example of their use is:
|
|
|
|
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
CREATE TABLE messages (
|
|
|
|
title text,
|
|
|
|
body text,
|
|
|
|
tsv tsvector
|
|
|
|
);
|
|
|
|
|
|
|
|
CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
|
2018-08-15 23:08:34 +02:00
|
|
|
ON messages FOR EACH ROW EXECUTE FUNCTION
|
2007-10-21 22:04:37 +02:00
|
|
|
tsvector_update_trigger(tsv, 'pg_catalog.english', title, body);
|
|
|
|
|
|
|
|
INSERT INTO messages VALUES('title here', 'the body text is here');
|
|
|
|
|
|
|
|
SELECT * FROM messages;
|
2022-04-20 17:04:28 +02:00
|
|
|
title | body | tsv
|
2007-10-21 22:04:37 +02:00
|
|
|
------------+-----------------------+----------------------------
|
|
|
|
title here | the body text is here | 'bodi':4 'text':5 'titl':1
|
|
|
|
|
|
|
|
SELECT title, body FROM messages WHERE tsv @@ to_tsquery('title & body');
|
2022-04-20 17:04:28 +02:00
|
|
|
title | body
|
2007-10-21 22:04:37 +02:00
|
|
|
------------+-----------------------
|
|
|
|
title here | the body text is here
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-10-21 22:04:37 +02:00
|
|
|
|
2017-10-09 03:44:17 +02:00
|
|
|
Having created this trigger, any change in <structfield>title</structfield> or
|
|
|
|
<structfield>body</structfield> will automatically be reflected into
|
|
|
|
<structfield>tsv</structfield>, without the application having to worry about it.
|
2007-10-21 22:04:37 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
The first trigger argument must be the name of the <type>tsvector</type>
|
2007-10-21 22:04:37 +02:00
|
|
|
column to be updated. The second argument specifies the text search
|
|
|
|
configuration to be used to perform the conversion. For
|
2017-10-09 03:44:17 +02:00
|
|
|
<function>tsvector_update_trigger</function>, the configuration name is simply
|
2007-10-21 22:04:37 +02:00
|
|
|
given as the second trigger argument. It must be schema-qualified as
|
|
|
|
shown above, so that the trigger behavior will not change with changes
|
2017-10-09 03:44:17 +02:00
|
|
|
in <varname>search_path</varname>. For
|
|
|
|
<function>tsvector_update_trigger_column</function>, the second trigger argument
|
2007-10-21 22:04:37 +02:00
|
|
|
is the name of another table column, which must be of type
|
2017-10-09 03:44:17 +02:00
|
|
|
<type>regconfig</type>. This allows a per-row selection of configuration
|
2007-10-21 22:04:37 +02:00
|
|
|
to be made. The remaining argument(s) are the names of textual columns
|
2017-10-09 03:44:17 +02:00
|
|
|
(of type <type>text</type>, <type>varchar</type>, or <type>char</type>). These
|
2007-10-21 22:04:37 +02:00
|
|
|
will be included in the document in the order given. NULL values will
|
|
|
|
be skipped (but the other columns will still be indexed).
|
|
|
|
</para>
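
 <para>
  For instance, a per-row configuration choice with
  <function>tsvector_update_trigger_column</function> might look like this
  (a sketch only; the table and column names are illustrative):

<programlisting>
CREATE TABLE messages_i18n (
    title text,
    body text,
    config regconfig,   -- text search configuration chosen per row
    tsv tsvector
);

CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
ON messages_i18n FOR EACH ROW EXECUTE FUNCTION
tsvector_update_trigger_column(tsv, config, title, body);

-- each row is parsed with the configuration named in its config column
INSERT INTO messages_i18n VALUES('title here', 'the body text is here',
                                 'pg_catalog.english');
</programlisting>
 </para>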

 <para>
  A limitation of these built-in triggers is that they treat all the
  input columns alike. To process columns differently &mdash; for
  example, to weight title differently from body &mdash; it is necessary
  to write a custom trigger. Here is an example using
  <application>PL/pgSQL</application> as the trigger language:

<programlisting>
CREATE FUNCTION messages_trigger() RETURNS trigger AS $$
begin
  new.tsv :=
     setweight(to_tsvector('pg_catalog.english', coalesce(new.title,'')), 'A') ||
     setweight(to_tsvector('pg_catalog.english', coalesce(new.body,'')), 'D');
  return new;
end
$$ LANGUAGE plpgsql;

CREATE TRIGGER tsvectorupdate BEFORE INSERT OR UPDATE
ON messages FOR EACH ROW EXECUTE FUNCTION messages_trigger();
</programlisting>
 </para>

 <para>
  Keep in mind that it is important to specify the configuration name
  explicitly when creating <type>tsvector</type> values inside triggers,
  so that the column's contents will not be affected by changes to
  <varname>default_text_search_config</varname>. Failure to do this is likely to
  lead to problems such as search results changing after a dump and restore.
 </para>

 </sect2>

 <sect2 id="textsearch-statistics">
  <title>Gathering Document Statistics</title>

  <indexterm>
   <primary>ts_stat</primary>
  </indexterm>

  <para>
   The function <function>ts_stat</function> is useful for checking your
   configuration and for finding stop-word candidates.
  </para>

<synopsis>
ts_stat(<replaceable class="parameter">sqlquery</replaceable> <type>text</type>, <optional> <replaceable class="parameter">weights</replaceable> <type>text</type>, </optional>
        OUT <replaceable class="parameter">word</replaceable> <type>text</type>, OUT <replaceable class="parameter">ndoc</replaceable> <type>integer</type>,
        OUT <replaceable class="parameter">nentry</replaceable> <type>integer</type>) returns <type>setof record</type>
</synopsis>

  <para>
   <replaceable>sqlquery</replaceable> is a text value containing an SQL
   query which must return a single <type>tsvector</type> column.
   <function>ts_stat</function> executes the query and returns statistics about
   each distinct lexeme (word) contained in the <type>tsvector</type>
   data. The columns returned are

   <itemizedlist spacing="compact" mark="bullet">
    <listitem>
     <para>
      <replaceable>word</replaceable> <type>text</type> &mdash; the value of a lexeme
     </para>
    </listitem>
    <listitem>
     <para>
      <replaceable>ndoc</replaceable> <type>integer</type> &mdash; number of documents
      (<type>tsvector</type>s) the word occurred in
     </para>
    </listitem>
    <listitem>
     <para>
      <replaceable>nentry</replaceable> <type>integer</type> &mdash; total number of
      occurrences of the word
     </para>
    </listitem>
   </itemizedlist>

   If <replaceable>weights</replaceable> is supplied, only occurrences
   having one of those weights are counted.
  </para>

  <para>
   For example, to find the ten most frequent words in a document collection:

<programlisting>
SELECT * FROM ts_stat('SELECT vector FROM apod')
ORDER BY nentry DESC, ndoc DESC, word
LIMIT 10;
</programlisting>

   The same, but counting only word occurrences with weight <literal>A</literal>
   or <literal>B</literal>:

<programlisting>
SELECT * FROM ts_stat('SELECT vector FROM apod', 'ab')
ORDER BY nentry DESC, ndoc DESC, word
LIMIT 10;
</programlisting>
  </para>

 </sect2>

</sect1>

<sect1 id="textsearch-parsers">
 <title>Parsers</title>

 <para>
  Text search parsers are responsible for splitting raw document text
  into <firstterm>tokens</firstterm> and identifying each token's type, where
  the set of possible types is defined by the parser itself.
  Note that a parser does not modify the text at all &mdash; it simply
  identifies plausible word boundaries. Because of this limited scope,
  there is less need for application-specific custom parsers than there is
  for custom dictionaries. At present <productname>PostgreSQL</productname>
  provides just one built-in parser, which has been found to be useful for a
  wide range of applications.
 </para>

 <para>
  The built-in parser is named <literal>pg_catalog.default</literal>.
  It recognizes 23 token types, shown in <xref linkend="textsearch-default-parser"/>.
 </para>

 <table id="textsearch-default-parser">
  <title>Default Parser's Token Types</title>
  <tgroup cols="3">
   <colspec colname="col1" colwidth="2*"/>
   <colspec colname="col2" colwidth="2*"/>
   <colspec colname="col3" colwidth="3*"/>
   <thead>
    <row>
     <entry>Alias</entry>
     <entry>Description</entry>
     <entry>Example</entry>
    </row>
   </thead>
   <tbody>
    <row>
     <entry><literal>asciiword</literal></entry>
     <entry>Word, all ASCII letters</entry>
     <entry><literal>elephant</literal></entry>
    </row>
    <row>
     <entry><literal>word</literal></entry>
     <entry>Word, all letters</entry>
     <entry><literal>mañana</literal></entry>
    </row>
    <row>
     <entry><literal>numword</literal></entry>
     <entry>Word, letters and digits</entry>
     <entry><literal>beta1</literal></entry>
    </row>
    <row>
     <entry><literal>asciihword</literal></entry>
     <entry>Hyphenated word, all ASCII</entry>
     <entry><literal>up-to-date</literal></entry>
    </row>
    <row>
     <entry><literal>hword</literal></entry>
     <entry>Hyphenated word, all letters</entry>
     <entry><literal>lógico-matemática</literal></entry>
    </row>
    <row>
     <entry><literal>numhword</literal></entry>
     <entry>Hyphenated word, letters and digits</entry>
     <entry><literal>postgresql-beta1</literal></entry>
    </row>
    <row>
     <entry><literal>hword_asciipart</literal></entry>
     <entry>Hyphenated word part, all ASCII</entry>
     <entry><literal>postgresql</literal> in the context <literal>postgresql-beta1</literal></entry>
    </row>
    <row>
     <entry><literal>hword_part</literal></entry>
     <entry>Hyphenated word part, all letters</entry>
     <entry><literal>lógico</literal> or <literal>matemática</literal>
      in the context <literal>lógico-matemática</literal></entry>
    </row>
    <row>
     <entry><literal>hword_numpart</literal></entry>
     <entry>Hyphenated word part, letters and digits</entry>
     <entry><literal>beta1</literal> in the context
      <literal>postgresql-beta1</literal></entry>
    </row>
    <row>
     <entry><literal>email</literal></entry>
     <entry>Email address</entry>
     <entry><literal>foo@example.com</literal></entry>
    </row>
    <row>
     <entry><literal>protocol</literal></entry>
     <entry>Protocol head</entry>
     <entry><literal>http://</literal></entry>
    </row>
    <row>
     <entry><literal>url</literal></entry>
     <entry>URL</entry>
     <entry><literal>example.com/stuff/index.html</literal></entry>
    </row>
    <row>
     <entry><literal>host</literal></entry>
     <entry>Host</entry>
     <entry><literal>example.com</literal></entry>
    </row>
    <row>
     <entry><literal>url_path</literal></entry>
     <entry>URL path</entry>
     <entry><literal>/stuff/index.html</literal>, in the context of a URL</entry>
    </row>
    <row>
     <entry><literal>file</literal></entry>
     <entry>File or path name</entry>
     <entry><literal>/usr/local/foo.txt</literal>, if not within a URL</entry>
    </row>
    <row>
     <entry><literal>sfloat</literal></entry>
     <entry>Scientific notation</entry>
     <entry><literal>-1.234e56</literal></entry>
    </row>
    <row>
     <entry><literal>float</literal></entry>
     <entry>Decimal notation</entry>
     <entry><literal>-1.234</literal></entry>
    </row>
    <row>
     <entry><literal>int</literal></entry>
     <entry>Signed integer</entry>
     <entry><literal>-1234</literal></entry>
    </row>
    <row>
     <entry><literal>uint</literal></entry>
     <entry>Unsigned integer</entry>
     <entry><literal>1234</literal></entry>
    </row>
    <row>
     <entry><literal>version</literal></entry>
     <entry>Version number</entry>
     <entry><literal>8.3.0</literal></entry>
    </row>
    <row>
     <entry><literal>tag</literal></entry>
     <entry>XML tag</entry>
     <entry><literal>&lt;a href="dictionaries.html"&gt;</literal></entry>
    </row>
    <row>
     <entry><literal>entity</literal></entry>
     <entry>XML entity</entry>
     <entry><literal>&amp;amp;</literal></entry>
    </row>
    <row>
     <entry><literal>blank</literal></entry>
     <entry>Space symbols</entry>
     <entry>(any whitespace or punctuation not otherwise recognized)</entry>
    </row>
   </tbody>
  </tgroup>
 </table>
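
 <para>
  The token types known to a parser can also be listed at run time with the
  <function>ts_token_type</function> function; a quick way to inspect them
  (output abbreviated here) might be:

<screen>
SELECT * FROM ts_token_type('default');
 tokid |      alias      |        description
-------+-----------------+---------------------------
     1 | asciiword       | Word, all ASCII
     2 | word            | Word, all letters
     3 | numword         | Word, letters and digits
...
</screen>
 </para>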

 <note>
  <para>
   The parser's notion of a <quote>letter</quote> is determined by the database's
   locale setting, specifically <varname>lc_ctype</varname>. Words containing
   only the basic ASCII letters are reported as a separate token type,
   since it is sometimes useful to distinguish them. In most European
   languages, token types <literal>word</literal> and <literal>asciiword</literal>
   should be treated alike.
  </para>

  <para>
   <literal>email</literal> does not support all valid email characters as
   defined by <ulink url="https://tools.ietf.org/html/rfc5322">RFC 5322</ulink>.
   Specifically, the only non-alphanumeric characters supported for
   email user names are period, dash, and underscore.
  </para>
 </note>

 <para>
  It is possible for the parser to produce overlapping tokens from the same
  piece of text. As an example, a hyphenated word will be reported both
  as the entire word and as each component:

<screen>
SELECT alias, description, token FROM ts_debug('foo-bar-beta1');
      alias      |               description                |     token
-----------------+------------------------------------------+---------------
 numhword        | Hyphenated word, letters and digits      | foo-bar-beta1
 hword_asciipart | Hyphenated word part, all ASCII          | foo
 blank           | Space symbols                            | -
 hword_asciipart | Hyphenated word part, all ASCII          | bar
 blank           | Space symbols                            | -
 hword_numpart   | Hyphenated word part, letters and digits | beta1
</screen>

  This behavior is desirable since it allows searches to work for both
  the whole compound word and for components. Here is another
  instructive example:

<screen>
SELECT alias, description, token FROM ts_debug('http://example.com/stuff/index.html');
  alias   |  description  |            token
----------+---------------+------------------------------
 protocol | Protocol head | http://
 url      | URL           | example.com/stuff/index.html
 host     | Host          | example.com
 url_path | URL path      | /stuff/index.html
</screen>
 </para>

</sect1>

<sect1 id="textsearch-dictionaries">
 <title>Dictionaries</title>

 <para>
  Dictionaries are used to eliminate words that should not be considered in a
  search (<firstterm>stop words</firstterm>), and to <firstterm>normalize</firstterm> words so
  that different derived forms of the same word will match. A successfully
  normalized word is called a <firstterm>lexeme</firstterm>. Aside from
  improving search quality, normalization and removal of stop words reduce the
  size of the <type>tsvector</type> representation of a document, thereby
  improving performance. Normalization does not always have linguistic meaning
  and usually depends on application semantics.
 </para>

 <para>
  Some examples of normalization:

  <itemizedlist spacing="compact" mark="bullet">
   <listitem>
    <para>
     Linguistic &mdash; Ispell dictionaries try to reduce input words to a
     normalized form; stemmer dictionaries remove word endings
    </para>
   </listitem>
   <listitem>
    <para>
     <acronym>URL</acronym> locations can be canonicalized to make
     equivalent URLs match:

     <itemizedlist spacing="compact" mark="bullet">
      <listitem>
       <para>
        http://www.pgsql.ru/db/mw/index.html
       </para>
      </listitem>
      <listitem>
       <para>
        http://www.pgsql.ru/db/mw/
       </para>
      </listitem>
      <listitem>
       <para>
        http://www.pgsql.ru/db/../db/mw/index.html
       </para>
      </listitem>
     </itemizedlist>
    </para>
   </listitem>
   <listitem>
    <para>
     Color names can be replaced by their hexadecimal values, e.g.,
     <literal>red, green, blue, magenta -> FF0000, 00FF00, 0000FF, FF00FF</literal>
    </para>
   </listitem>
   <listitem>
    <para>
     If indexing numbers, we can
     remove some fractional digits to reduce the range of possible
     numbers, so for example <emphasis>3.14</emphasis>159265359,
     <emphasis>3.14</emphasis>15926, <emphasis>3.14</emphasis> will be the same
     after normalization if only two digits are kept after the decimal point.
    </para>
   </listitem>
  </itemizedlist>
 </para>

 <para>
  A dictionary is a program that accepts a token as
  input and returns:
  <itemizedlist spacing="compact" mark="bullet">
   <listitem>
    <para>
     an array of lexemes if the input token is known to the dictionary
     (notice that one token can produce more than one lexeme)
    </para>
   </listitem>
   <listitem>
    <para>
     a single lexeme with the <literal>TSL_FILTER</literal> flag set, to replace
     the original token with a new token to be passed to subsequent
     dictionaries (a dictionary that does this is called a
     <firstterm>filtering dictionary</firstterm>)
    </para>
   </listitem>
   <listitem>
    <para>
     an empty array if the dictionary knows the token, but it is a stop word
    </para>
   </listitem>
   <listitem>
    <para>
     <literal>NULL</literal> if the dictionary does not recognize the input token
    </para>
   </listitem>
  </itemizedlist>
 </para>
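
 <para>
  The first and third behaviors are easy to observe with
  <function>ts_lexize</function> and the built-in <literal>english_stem</literal>
  dictionary (the exact results depend on the stop-word list shipped with your
  installation):

<screen>
SELECT ts_lexize('english_stem', 'stars');   -- known word, one lexeme
 ts_lexize
-----------
 {star}

SELECT ts_lexize('english_stem', 'the');     -- stop word, empty array
 ts_lexize
-----------
 {}
</screen>
 </para>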

 <para>
  <productname>PostgreSQL</productname> provides predefined dictionaries for
  many languages. There are also several predefined templates that can be
  used to create new dictionaries with custom parameters. Each predefined
  dictionary template is described below. If no existing
  template is suitable, it is possible to create new ones; see the
  <filename>contrib/</filename> area of the <productname>PostgreSQL</productname> distribution
  for examples.
 </para>

 <para>
  A text search configuration binds a parser together with a set of
  dictionaries to process the parser's output tokens. For each token
  type that the parser can return, a separate list of dictionaries is
  specified by the configuration. When a token of that type is found
  by the parser, each dictionary in the list is consulted in turn,
  until some dictionary recognizes it as a known word. If it is identified
  as a stop word, or if no dictionary recognizes the token, it will be
  discarded and not indexed or searched for.
  Normally, the first dictionary that returns a non-<literal>NULL</literal>
  output determines the result, and any remaining dictionaries are not
  consulted; but a filtering dictionary can replace the given word
  with a modified word, which is then passed to subsequent dictionaries.
 </para>

 <para>
  The general rule for configuring a list of dictionaries
  is to place first the most narrow, most specific dictionary, then the more
  general dictionaries, finishing with a very general dictionary, like
  a <application>Snowball</application> stemmer or <literal>simple</literal>, which
  recognizes everything. For example, for an astronomy-specific search
  (<literal>astro_en</literal> configuration) one could bind token type
  <type>asciiword</type> (ASCII word) to a synonym dictionary of astronomical
  terms, a general English dictionary and a <application>Snowball</application> English
  stemmer:

<programlisting>
ALTER TEXT SEARCH CONFIGURATION astro_en
    ADD MAPPING FOR asciiword WITH astrosyn, english_ispell, english_stem;
</programlisting>
 </para>

 <para>
  A filtering dictionary can be placed anywhere in the list, except at the
  end where it'd be useless. Filtering dictionaries are useful to partially
  normalize words to simplify the task of later dictionaries. For example,
  a filtering dictionary could be used to remove accents from accented
  letters, as is done by the <xref linkend="unaccent"/> module.
 </para>
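
 <para>
  For instance, a sketch of such a setup using the <xref linkend="unaccent"/>
  module (assuming the extension is installed; the configuration name is
  illustrative):

<programlisting>
CREATE EXTENSION unaccent;

CREATE TEXT SEARCH CONFIGURATION fr (COPY = french);

-- strip accents first, then stem the unaccented word
ALTER TEXT SEARCH CONFIGURATION fr
    ALTER MAPPING FOR hword, hword_part, word
    WITH unaccent, french_stem;
</programlisting>
 </para>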

 <sect2 id="textsearch-stopwords">
  <title>Stop Words</title>

  <para>
   Stop words are words that are very common, appear in almost every
   document, and have no discrimination value. Therefore, they can be ignored
   in the context of full text searching. For example, every English text
   contains words like <literal>a</literal> and <literal>the</literal>, so it is
   useless to store them in an index. However, stop words do affect the
   positions in <type>tsvector</type>, which in turn affect ranking:

<screen>
SELECT to_tsvector('english', 'in the list of stop words');
        to_tsvector
----------------------------
 'list':3 'stop':5 'word':6
</screen>

   The missing positions 1, 2, and 4 are due to stop words. Ranks
   calculated for documents with and without stop words are quite different:

<screen>
SELECT ts_rank_cd (to_tsvector('english', 'in the list of stop words'), to_tsquery('list &amp; stop'));
 ts_rank_cd
------------
       0.05

SELECT ts_rank_cd (to_tsvector('english', 'list stop words'), to_tsquery('list &amp; stop'));
 ts_rank_cd
------------
        0.1
</screen>
  </para>

  <para>
   It is up to the specific dictionary how it treats stop words. For example,
   <literal>ispell</literal> dictionaries first normalize words and then
   look at the list of stop words, while <literal>Snowball</literal> stemmers
   first check the list of stop words. The reason for the different
   behavior is an attempt to decrease noise.
  </para>

 </sect2>

 <sect2 id="textsearch-simple-dictionary">
  <title>Simple Dictionary</title>

  <para>
   The <literal>simple</literal> dictionary template operates by converting the
   input token to lower case and checking it against a file of stop words.
   If it is found in the file then an empty array is returned, causing
   the token to be discarded. If not, the lower-cased form of the word
   is returned as the normalized lexeme. Alternatively, the dictionary
   can be configured to report non-stop-words as unrecognized, allowing
   them to be passed on to the next dictionary in the list.
  </para>

  <para>
   Here is an example of a dictionary definition using the <literal>simple</literal>
   template:

<programlisting>
CREATE TEXT SEARCH DICTIONARY public.simple_dict (
    TEMPLATE = pg_catalog.simple,
    STOPWORDS = english
);
</programlisting>

   Here, <literal>english</literal> is the base name of a file of stop words.
   The file's full name will be
   <filename>$SHAREDIR/tsearch_data/english.stop</filename>,
   where <literal>$SHAREDIR</literal> means the
   <productname>PostgreSQL</productname> installation's shared-data directory,
   often <filename>/usr/local/share/postgresql</filename> (use <command>pg_config
   --sharedir</command> to determine it if you're not sure).
   The file format is simply a list
   of words, one per line. Blank lines and trailing spaces are ignored,
   and upper case is folded to lower case, but no other processing is done
   on the file contents.
  </para>

  <para>
   Now we can test our dictionary:

<screen>
SELECT ts_lexize('public.simple_dict', 'YeS');
 ts_lexize
-----------
 {yes}

SELECT ts_lexize('public.simple_dict', 'The');
 ts_lexize
-----------
 {}
</screen>
  </para>

  <para>
   We can also choose to return <literal>NULL</literal>, instead of the lower-cased
   word, if it is not found in the stop words file. This behavior is
   selected by setting the dictionary's <literal>Accept</literal> parameter to
   <literal>false</literal>. Continuing the example:

<screen>
ALTER TEXT SEARCH DICTIONARY public.simple_dict ( Accept = false );

SELECT ts_lexize('public.simple_dict', 'YeS');
 ts_lexize
-----------

SELECT ts_lexize('public.simple_dict', 'The');
 ts_lexize
-----------
 {}
</screen>
  </para>

  <para>
   With the default setting of <literal>Accept</literal> = <literal>true</literal>,
   it is only useful to place a <literal>simple</literal> dictionary at the end
   of a list of dictionaries, since it will never pass on any token to
   a following dictionary. Conversely, <literal>Accept</literal> = <literal>false</literal>
   is only useful when there is at least one following dictionary.
  </para>

  <caution>
   <para>
    Most types of dictionaries rely on configuration files, such as files of
    stop words. These files <emphasis>must</emphasis> be stored in UTF-8 encoding.
    They will be translated to the actual database encoding, if that is
    different, when they are read into the server.
   </para>
  </caution>

  <caution>
   <para>
    Normally, a database session will read a dictionary configuration file
    only once, when it is first used within the session. If you modify a
    configuration file and want to force existing sessions to pick up the
    new contents, issue an <command>ALTER TEXT SEARCH DICTIONARY</command> command
    on the dictionary. This can be a <quote>dummy</quote> update that doesn't
    actually change any parameter values.
   </para>
  </caution>

 </sect2>

 <sect2 id="textsearch-synonym-dictionary">
  <title>Synonym Dictionary</title>

  <para>
   This dictionary template is used to create dictionaries that replace a
   word with a synonym. Phrases are not supported (use the thesaurus
   template (<xref linkend="textsearch-thesaurus"/>) for that). A synonym
   dictionary can be used to overcome linguistic problems, for example, to
   prevent an English stemmer dictionary from reducing the word <quote>Paris</quote> to
   <quote>pari</quote>. It is enough to have a <literal>Paris paris</literal> line in the
   synonym dictionary and put it before the <literal>english_stem</literal>
   dictionary. For example:

<screen>
SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes
-----------+-----------------+-------+----------------+--------------+---------
 asciiword | Word, all ASCII | Paris | {english_stem} | english_stem | {pari}

CREATE TEXT SEARCH DICTIONARY my_synonym (
    TEMPLATE = synonym,
    SYNONYMS = my_synonyms
);

ALTER TEXT SEARCH CONFIGURATION english
    ALTER MAPPING FOR asciiword
    WITH my_synonym, english_stem;

SELECT * FROM ts_debug('english', 'Paris');
   alias   |   description   | token |       dictionaries        | dictionary | lexemes
-----------+-----------------+-------+---------------------------+------------+---------
 asciiword | Word, all ASCII | Paris | {my_synonym,english_stem} | my_synonym | {paris}
</screen>
  </para>

  <para>
   The only parameter required by the <literal>synonym</literal> template is
   <literal>SYNONYMS</literal>, which is the base name of its configuration file
   &mdash; <literal>my_synonyms</literal> in the above example.
   The file's full name will be
   <filename>$SHAREDIR/tsearch_data/my_synonyms.syn</filename>
   (where <literal>$SHAREDIR</literal> means the
   <productname>PostgreSQL</productname> installation's shared-data directory).
   The file format is just one line
   per word to be substituted, with the word followed by its synonym,
   separated by white space. Blank lines and trailing spaces are ignored.
  </para>

  <para>
   The <literal>synonym</literal> template also has an optional parameter
   <literal>CaseSensitive</literal>, which defaults to <literal>false</literal>. When
   <literal>CaseSensitive</literal> is <literal>false</literal>, words in the synonym file
   are folded to lower case, as are input tokens. When it is
   <literal>true</literal>, words and tokens are not folded to lower case,
   but are compared as-is.
  </para>

  <para>
   An asterisk (<literal>*</literal>) can be placed at the end of a synonym
   in the configuration file. This indicates that the synonym is a prefix.
   The asterisk is ignored when the entry is used in
   <function>to_tsvector()</function>, but when it is used in
   <function>to_tsquery()</function>, the result will be a query item with
   the prefix match marker (see
   <xref linkend="textsearch-parsing-queries"/>).
   For example, suppose we have these entries in
   <filename>$SHAREDIR/tsearch_data/synonym_sample.syn</filename>:
<programlisting>
postgres        pgsql
postgresql      pgsql
postgre         pgsql
gogle           googl
indices         index*
</programlisting>

   Then we will get these results:
<screen>
mydb=# CREATE TEXT SEARCH DICTIONARY syn (template=synonym, synonyms='synonym_sample');
mydb=# SELECT ts_lexize('syn', 'indices');
 ts_lexize
-----------
 {index}
(1 row)

mydb=# CREATE TEXT SEARCH CONFIGURATION tst (copy=simple);
mydb=# ALTER TEXT SEARCH CONFIGURATION tst ALTER MAPPING FOR asciiword WITH syn;
mydb=# SELECT to_tsvector('tst', 'indices');
 to_tsvector
-------------
 'index':1
(1 row)

mydb=# SELECT to_tsquery('tst', 'indices');
 to_tsquery
------------
 'index':*
(1 row)

mydb=# SELECT 'indexes are very useful'::tsvector;
            tsvector
---------------------------------
 'are' 'indexes' 'useful' 'very'
(1 row)

mydb=# SELECT 'indexes are very useful'::tsvector @@ to_tsquery('tst', 'indices');
 ?column?
----------
 t
(1 row)
</screen>
  </para>
 </sect2>

 <sect2 id="textsearch-thesaurus">
  <title>Thesaurus Dictionary</title>

  <para>
   A thesaurus dictionary (sometimes abbreviated as <acronym>TZ</acronym>) is
   a collection of words that includes information about the relationships
   of words and phrases, i.e., broader terms (<acronym>BT</acronym>), narrower
   terms (<acronym>NT</acronym>), preferred terms, non-preferred terms, related
   terms, etc.
  </para>

  <para>
   Basically a thesaurus dictionary replaces all non-preferred terms by one
   preferred term and, optionally, preserves the original terms for indexing
   as well. <productname>PostgreSQL</productname>'s current implementation of the
   thesaurus dictionary is an extension of the synonym dictionary with added
   <firstterm>phrase</firstterm> support. A thesaurus dictionary requires
   a configuration file of the following format:

<programlisting>
# this is a comment
sample word(s) : indexed word(s)
more sample word(s) : more indexed word(s)
...
</programlisting>

   where the colon (<symbol>:</symbol>) symbol acts as a delimiter between a
   phrase and its replacement.
  </para>

  <para>
   A thesaurus dictionary uses a <firstterm>subdictionary</firstterm> (which
   is specified in the dictionary's configuration) to normalize the input
   text before checking for phrase matches. It is only possible to select one
   subdictionary. An error is reported if the subdictionary fails to
   recognize a word. In that case, you should remove the use of the word or
   teach the subdictionary about it. You can place an asterisk
   (<symbol>*</symbol>) at the beginning of an indexed word to skip applying
   the subdictionary to it, but all sample words <emphasis>must</emphasis> be known
   to the subdictionary.
  </para>

  <para>
   The thesaurus dictionary chooses the longest match if there are multiple
   phrases matching the input, and ties are broken by using the last
   definition.
  </para>

  <para>
   Specific stop words recognized by the subdictionary cannot be
   specified; instead use <literal>?</literal> to mark the location where any
   stop word can appear. For example, assuming that <literal>a</literal> and
   <literal>the</literal> are stop words according to the subdictionary:

<programlisting>
? one ? two : swsw
</programlisting>

   matches <literal>a one the two</literal> and <literal>the one a two</literal>;
   both would be replaced by <literal>swsw</literal>.
  </para>

  <para>
   Since a thesaurus dictionary has the capability to recognize phrases it
   must remember its state and interact with the parser. A thesaurus dictionary
   uses these assignments to check if it should handle the next word or stop
   accumulation. The thesaurus dictionary must be configured
   carefully. For example, if the thesaurus dictionary is assigned to handle
   only the <literal>asciiword</literal> token, then a thesaurus dictionary
   definition like <literal>one 7</literal> will not work since token type
   <literal>uint</literal> is not assigned to the thesaurus dictionary.
  </para>

  <caution>
   <para>
    Thesauruses are used during indexing, so any change in the thesaurus
    dictionary's parameters <emphasis>requires</emphasis> reindexing.
    For most other dictionary types, small changes such as adding or
    removing stop words do not force reindexing.
   </para>
  </caution>

  <sect3 id="textsearch-thesaurus-config">
   <title>Thesaurus Configuration</title>

   <para>
    To define a new thesaurus dictionary, use the <literal>thesaurus</literal>
    template. For example:

<programlisting>
CREATE TEXT SEARCH DICTIONARY thesaurus_simple (
    TEMPLATE = thesaurus,
    DictFile = mythesaurus,
    Dictionary = pg_catalog.english_stem
);
</programlisting>

    Here:
    <itemizedlist spacing="compact" mark="bullet">
     <listitem>
      <para>
       <literal>thesaurus_simple</literal> is the new dictionary's name
      </para>
     </listitem>
     <listitem>
      <para>
       <literal>mythesaurus</literal> is the base name of the thesaurus
       configuration file.
       (Its full name will be <filename>$SHAREDIR/tsearch_data/mythesaurus.ths</filename>,
       where <literal>$SHAREDIR</literal> means the installation shared-data
       directory.)
      </para>
     </listitem>
     <listitem>
      <para>
       <literal>pg_catalog.english_stem</literal> is the subdictionary (here,
       a Snowball English stemmer) to use for thesaurus normalization.
       Notice that the subdictionary will have its own
       configuration (for example, stop words), which is not shown here.
      </para>
     </listitem>
    </itemizedlist>

    Now it is possible to bind the thesaurus dictionary <literal>thesaurus_simple</literal>
    to the desired token types in a configuration, for example:

<programlisting>
ALTER TEXT SEARCH CONFIGURATION russian
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
    WITH thesaurus_simple;
</programlisting>
   </para>

  </sect3>

  <sect3 id="textsearch-thesaurus-examples">
   <title>Thesaurus Example</title>

   <para>
    Consider a simple astronomical thesaurus <literal>thesaurus_astro</literal>,
    which contains some astronomical word combinations:

<programlisting>
supernovae stars : sn
crab nebulae : crab
</programlisting>

    Below we create a dictionary and bind some token types to
    an astronomical thesaurus and English stemmer:

<programlisting>
CREATE TEXT SEARCH DICTIONARY thesaurus_astro (
    TEMPLATE = thesaurus,
    DictFile = thesaurus_astro,
    Dictionary = english_stem
);

ALTER TEXT SEARCH CONFIGURATION russian
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart
    WITH thesaurus_astro, english_stem;
</programlisting>

    Now we can see how it works.
    <function>ts_lexize</function> is not very useful for testing a thesaurus,
    because it treats its input as a single token. Instead we can use
    <function>plainto_tsquery</function> and <function>to_tsvector</function>,
    which will break their input strings into multiple tokens:

<screen>
SELECT plainto_tsquery('supernova star');
 plainto_tsquery
-----------------
 'sn'

SELECT to_tsvector('supernova star');
 to_tsvector
-------------
 'sn':1
</screen>

    In principle, you can use <function>to_tsquery</function> if you quote
    the argument:

<screen>
SELECT to_tsquery('''supernova star''');
 to_tsquery
------------
 'sn'
</screen>

    Notice that <literal>supernova star</literal> matches <literal>supernovae
    stars</literal> in <literal>thesaurus_astro</literal> because we specified
    the <literal>english_stem</literal> stemmer in the thesaurus definition.
    The stemmer removed the <literal>e</literal> and <literal>s</literal>.
   </para>

   <para>
    To index the original phrase as well as the substitute, just include it
    in the right-hand part of the definition:

<screen>
supernovae stars : sn supernovae stars

SELECT plainto_tsquery('supernova star');
       plainto_tsquery
-----------------------------
 'sn' &amp; 'supernova' &amp; 'star'
</screen>
   </para>

  </sect3>

 </sect2>

 <sect2 id="textsearch-ispell-dictionary">
  <title><application>Ispell</application> Dictionary</title>

  <para>
   The <application>Ispell</application> dictionary template supports
   <firstterm>morphological dictionaries</firstterm>, which can normalize many
   different linguistic forms of a word into the same lexeme. For example,
   an English <application>Ispell</application> dictionary can match all declensions and
   conjugations of the search term <literal>bank</literal>, e.g.,
   <literal>banking</literal>, <literal>banked</literal>, <literal>banks</literal>,
   <literal>banks'</literal>, and <literal>bank's</literal>.
  </para>

  <para>
   The standard <productname>PostgreSQL</productname> distribution does
   not include any <application>Ispell</application> configuration files.
   Dictionaries for a large number of languages are available from <ulink
   url="https://www.cs.hmc.edu/~geoff/ispell.html">Ispell</ulink>.
   Also, some more modern dictionary file formats are supported &mdash; <ulink
   url="https://en.wikipedia.org/wiki/MySpell">MySpell</ulink> (OO &lt; 2.0.1)
   and <ulink url="https://hunspell.github.io/">Hunspell</ulink>
   (OO &gt;= 2.0.2). A large list of dictionaries is available on the <ulink
   url="https://wiki.openoffice.org/wiki/Dictionaries">OpenOffice
   Wiki</ulink>.
  </para>

  <para>
   To create an <application>Ispell</application> dictionary perform these steps:
  </para>
  <itemizedlist spacing="compact" mark="bullet">
   <listitem>
    <para>
     download dictionary configuration files. <productname>OpenOffice</productname>
     extension files have the <filename>.oxt</filename> extension. It is necessary
     to extract the <filename>.aff</filename> and <filename>.dic</filename> files and change
     their extensions to <filename>.affix</filename> and <filename>.dict</filename>. For some
     dictionary files it is also necessary to convert characters to the UTF-8
     encoding with commands (for example, for a Norwegian language dictionary):
<programlisting>
iconv -f ISO_8859-1 -t UTF-8 -o nn_no.affix nn_NO.aff
iconv -f ISO_8859-1 -t UTF-8 -o nn_no.dict nn_NO.dic
</programlisting>
    </para>
   </listitem>
   <listitem>
    <para>
     copy files to the <filename>$SHAREDIR/tsearch_data</filename> directory
    </para>
   </listitem>
   <listitem>
    <para>
     load files into PostgreSQL with the following command:
<programlisting>
CREATE TEXT SEARCH DICTIONARY english_hunspell (
    TEMPLATE = ispell,
    DictFile = en_us,
    AffFile = en_us,
    Stopwords = english);
</programlisting>
    </para>
   </listitem>
  </itemizedlist>

  <para>
   Here, <literal>DictFile</literal>, <literal>AffFile</literal>, and <literal>StopWords</literal>
   specify the base names of the dictionary, affixes, and stop-words files.
   The stop-words file has the same format explained above for the
   <literal>simple</literal> dictionary type. The format of the other files is
   not specified here but is available from the above-mentioned web sites.
  </para>
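
  <para>
   Once the files are in place, the new dictionary can be exercised with
   <function>ts_lexize</function>. The exact output depends on the contents of
   the dictionary files, but with a typical <literal>en_us</literal> dictionary
   it might look like this:

<screen>
SELECT ts_lexize('english_hunspell', 'banked');
 ts_lexize
-----------
 {bank}
</screen>
  </para>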

  <para>
   Ispell dictionaries usually recognize a limited set of words, so they
   should be followed by another broader dictionary; for
   example, a Snowball dictionary, which recognizes everything.
  </para>

  <para>
   The <filename>.affix</filename> file of <application>Ispell</application> has the following
   structure:
<programlisting>
prefixes
flag *A:
    .           >   RE      # As in enter > reenter
suffixes
flag T:
    E           >   ST      # As in late > latest
    [^AEIOU]Y   >   -Y,IEST # As in dirty > dirtiest
    [AEIOU]Y    >   EST     # As in gray > grayest
    [^EY]       >   EST     # As in small > smallest
</programlisting>
  </para>

  <para>
   And the <filename>.dict</filename> file has the following structure:
<programlisting>
lapse/ADGRS
lard/DGRS
large/PRTY
lark/MRS
</programlisting>
  </para>

  <para>
   Format of the <filename>.dict</filename> file is:
<programlisting>
basic_form/affix_class_name
</programlisting>
  </para>

  <para>
   In the <filename>.affix</filename> file every affix flag is described in the
   following format:
<programlisting>
condition > [-stripping_letters,] adding_affix
</programlisting>
  </para>

  <para>
   Here, condition has a format similar to that of regular expressions.
   It can use groupings <literal>[...]</literal> and <literal>[^...]</literal>.
   For example, <literal>[AEIOU]Y</literal> means that the last letter of the word
   is <literal>"y"</literal> and the penultimate letter is <literal>"a"</literal>,
   <literal>"e"</literal>, <literal>"i"</literal>, <literal>"o"</literal> or <literal>"u"</literal>.
   <literal>[^EY]</literal> means that the last letter is neither <literal>"e"</literal>
   nor <literal>"y"</literal>.
  </para>

  <para>
   Ispell dictionaries support splitting compound words, which is a useful
   feature.
   Notice that the affix file should specify a special flag using the
   <literal>compoundwords controlled</literal> statement that marks dictionary
   words that can participate in compound formation:

<programlisting>
compoundwords  controlled z
</programlisting>

   Here are some examples for the Norwegian language:

<programlisting>
SELECT ts_lexize('norwegian_ispell', 'overbuljongterningpakkmesterassistent');
   {over,buljong,terning,pakk,mester,assistent}
SELECT ts_lexize('norwegian_ispell', 'sjokoladefabrikk');
   {sjokoladefabrikk,sjokolade,fabrikk}
</programlisting>
  </para>
|
|
|
|
|
2016-03-04 18:08:10 +01:00
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
<application>MySpell</application> format is a subset of <application>Hunspell</application>.
|
|
|
|
The <filename>.affix</filename> file of <application>Hunspell</application> has the following
|
2016-03-04 18:08:10 +01:00
|
|
|
structure:
|
|
|
|
<programlisting>
|
|
|
|
PFX A Y 1
|
|
|
|
PFX A 0 re .
|
|
|
|
SFX T N 4
|
|
|
|
SFX T 0 st e
|
|
|
|
SFX T y iest [^aeiou]y
|
|
|
|
SFX T 0 est [aeiou]y
|
|
|
|
SFX T 0 est [^ey]
|
|
|
|
</programlisting>
|
|
|
|
</para>

  <para>
   The first line of an affix class is the header.  The fields of an affix
   rule are listed after the header (a worked reading of one rule follows
   the list):
  </para>
  <itemizedlist spacing="compact" mark="bullet">
   <listitem>
    <para>
     parameter name (PFX or SFX)
    </para>
   </listitem>
   <listitem>
    <para>
     flag (name of the affix class)
    </para>
   </listitem>
   <listitem>
    <para>
     stripping characters from beginning (at prefix) or end (at suffix) of the
     word
    </para>
   </listitem>
   <listitem>
    <para>
     adding affix
    </para>
   </listitem>
   <listitem>
    <para>
     condition, in a format similar to that of regular expressions
    </para>
   </listitem>
  </itemizedlist>
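
  <para>
   For instance, the rule <literal>SFX T y iest [^aeiou]y</literal> above
   reads: for suffix class <literal>T</literal>, strip a trailing
   <literal>y</literal> and append <literal>iest</literal> when the word ends
   in a consonant followed by <literal>y</literal>, so a dictionary entry such
   as <literal>dirty</literal> would produce <literal>dirtiest</literal>.
  </para>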

  <para>
   The <filename>.dict</filename> file looks like the <filename>.dict</filename> file of
   <application>Ispell</application>:
<programlisting>
larder/M
lardy/RT
large/RSPMYT
largehearted
</programlisting>
  </para>
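
  <para>
   Such files are registered through the <literal>ispell</literal> template,
   just as for an Ispell dictionary.  A minimal sketch, assuming the Hunspell
   files have been installed as <filename>en_us.dict</filename> and
   <filename>en_us.affix</filename> in
   <filename>$SHAREDIR/tsearch_data</filename> (the dictionary name is
   hypothetical):
<programlisting>
CREATE TEXT SEARCH DICTIONARY english_hunspell (
    TEMPLATE = ispell,
    DictFile = en_us,
    AffFile = en_us,
    StopWords = english
);
</programlisting>
  </para>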

  <note>
   <para>
    <application>MySpell</application> does not support compound words.
    <application>Hunspell</application> has sophisticated support for compound words.  At
    present, <productname>PostgreSQL</productname> implements only the basic
    compound word operations of Hunspell.
   </para>
  </note>

 </sect2>

 <sect2 id="textsearch-snowball-dictionary">
  <title><application>Snowball</application> Dictionary</title>

  <para>
   The <application>Snowball</application> dictionary template is based on a project
   by Martin Porter, inventor of the popular Porter's stemming algorithm
   for the English language.  Snowball now provides stemming algorithms for
   many languages (see the <ulink url="https://snowballstem.org/">Snowball
   site</ulink> for more information).  Each algorithm understands how to
   reduce common variant forms of words to a base, or stem, spelling within
   its language.  A Snowball dictionary requires a <literal>language</literal>
   parameter to identify which stemmer to use, and optionally can specify a
   <literal>stopword</literal> file name that gives a list of words to eliminate.
   (<productname>PostgreSQL</productname>'s standard stopword lists are also
   provided by the Snowball project.)
   For example, there is a built-in definition equivalent to

<programlisting>
CREATE TEXT SEARCH DICTIONARY english_stem (
    TEMPLATE = snowball,
    Language = english,
    StopWords = english
);
</programlisting>

   The stopword file format is the same as already explained.
  </para>
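
  <para>
   As a sketch of a nondefault setup, since <literal>StopWords</literal> is
   optional you could define a stemmer with no stop word list at all, so that
   words like <literal>the</literal> are indexed rather than discarded (the
   dictionary name here is hypothetical):
<programlisting>
CREATE TEXT SEARCH DICTIONARY english_stem_nostop (
    TEMPLATE = snowball,
    Language = english
);
</programlisting>
  </para>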

  <para>
   A <application>Snowball</application> dictionary recognizes everything, whether
   or not it is able to simplify the word, so it should be placed
   at the end of the dictionary list.  It is useless to have it
   before any other dictionary because a token will never pass through it to
   the next dictionary.
  </para>

 </sect2>

</sect1>

<sect1 id="textsearch-configuration">
 <title>Configuration Example</title>

 <para>
  A text search configuration specifies all options necessary to transform a
  document into a <type>tsvector</type>: the parser to use to break text
  into tokens, and the dictionaries to use to transform each token into a
  lexeme.  Every call of
  <function>to_tsvector</function> or <function>to_tsquery</function>
  needs a text search configuration to perform its processing.
  The configuration parameter
  <xref linkend="guc-default-text-search-config"/>
  specifies the name of the default configuration, which is the
  one used by text search functions if an explicit configuration
  parameter is omitted.
  It can be set in <filename>postgresql.conf</filename>, or set for an
  individual session using the <command>SET</command> command.
 </para>

 <para>
  Several predefined text search configurations are available, and
  you can create custom configurations easily.  To facilitate management
  of text search objects, a set of <acronym>SQL</acronym> commands
  is available, and there are several <application>psql</application> commands that display information
  about text search objects (<xref linkend="textsearch-psql"/>).
 </para>

 <para>
  As an example we will create a configuration
  <literal>pg</literal>, starting by duplicating the built-in
  <literal>english</literal> configuration:

<programlisting>
CREATE TEXT SEARCH CONFIGURATION public.pg ( COPY = pg_catalog.english );
</programlisting>
 </para>

 <para>
  We will use a PostgreSQL-specific synonym list
  and store it in <filename>$SHAREDIR/tsearch_data/pg_dict.syn</filename>.
  The file contents look like:

<programlisting>
postgres    pg
pgsql       pg
postgresql  pg
</programlisting>

  We define the synonym dictionary like this:

<programlisting>
CREATE TEXT SEARCH DICTIONARY pg_dict (
    TEMPLATE = synonym,
    SYNONYMS = pg_dict
);
</programlisting>

  Next we register the <productname>Ispell</productname> dictionary
  <literal>english_ispell</literal>, which has its own configuration files:

<programlisting>
CREATE TEXT SEARCH DICTIONARY english_ispell (
    TEMPLATE = ispell,
    DictFile = english,
    AffFile = english,
    StopWords = english
);
</programlisting>

  Now we can set up the mappings for words in configuration
  <literal>pg</literal>:

<programlisting>
ALTER TEXT SEARCH CONFIGURATION pg
    ALTER MAPPING FOR asciiword, asciihword, hword_asciipart,
                      word, hword, hword_part
    WITH pg_dict, english_ispell, english_stem;
</programlisting>

  We choose not to index or search some token types that the built-in
  configuration does handle:

<programlisting>
ALTER TEXT SEARCH CONFIGURATION pg
    DROP MAPPING FOR email, url, url_path, sfloat, float;
</programlisting>
 </para>

 <para>
  Now we can test our configuration:

<programlisting>
SELECT * FROM ts_debug('public.pg', '
PostgreSQL, the highly scalable, SQL compliant, open source object-relational
database management system, is now undergoing beta testing of the next
version of our software.
');
</programlisting>
 </para>

 <para>
  The next step is to set the session to use the new configuration, which was
  created in the <literal>public</literal> schema:

<screen>
=> \dF
   List of text search configurations
 Schema  | Name | Description
---------+------+-------------
 public  | pg   |

SET default_text_search_config = 'public.pg';
SET

SHOW default_text_search_config;
 default_text_search_config
-----------------------------
 public.pg
</screen>
 </para>

</sect1>

<sect1 id="textsearch-debugging">
 <title>Testing and Debugging Text Search</title>

 <para>
  The behavior of a custom text search configuration can easily become
  confusing.  The functions described
  in this section are useful for testing text search objects.  You can
  test a complete configuration, or test parsers and dictionaries separately.
 </para>

 <sect2 id="textsearch-configuration-testing">
  <title>Configuration Testing</title>

  <para>
   The function <function>ts_debug</function> allows easy testing of a
   text search configuration.
  </para>

  <indexterm>
   <primary>ts_debug</primary>
  </indexterm>

<synopsis>
ts_debug(<optional> <replaceable class="parameter">config</replaceable> <type>regconfig</type>, </optional> <replaceable class="parameter">document</replaceable> <type>text</type>,
         OUT <replaceable class="parameter">alias</replaceable> <type>text</type>,
         OUT <replaceable class="parameter">description</replaceable> <type>text</type>,
         OUT <replaceable class="parameter">token</replaceable> <type>text</type>,
         OUT <replaceable class="parameter">dictionaries</replaceable> <type>regdictionary[]</type>,
         OUT <replaceable class="parameter">dictionary</replaceable> <type>regdictionary</type>,
         OUT <replaceable class="parameter">lexemes</replaceable> <type>text[]</type>)
         returns setof record
</synopsis>

  <para>
   <function>ts_debug</function> displays information about every token of
   <replaceable class="parameter">document</replaceable> as produced by the
   parser and processed by the configured dictionaries.  It uses the
   configuration specified by <replaceable class="parameter">config</replaceable>,
   or <varname>default_text_search_config</varname> if that argument is
   omitted.
  </para>

  <para>
   <function>ts_debug</function> returns one row for each token identified in the text
   by the parser.  The columns returned are

   <itemizedlist spacing="compact" mark="bullet">
    <listitem>
     <para>
      <replaceable>alias</replaceable> <type>text</type> — short name of the token type
     </para>
    </listitem>
    <listitem>
     <para>
      <replaceable>description</replaceable> <type>text</type> — description of the
      token type
     </para>
    </listitem>
    <listitem>
     <para>
      <replaceable>token</replaceable> <type>text</type> — text of the token
     </para>
    </listitem>
    <listitem>
     <para>
      <replaceable>dictionaries</replaceable> <type>regdictionary[]</type> — the
      dictionaries selected by the configuration for this token type
     </para>
    </listitem>
    <listitem>
     <para>
      <replaceable>dictionary</replaceable> <type>regdictionary</type> — the dictionary
      that recognized the token, or <literal>NULL</literal> if none did
     </para>
    </listitem>
    <listitem>
     <para>
      <replaceable>lexemes</replaceable> <type>text[]</type> — the lexeme(s) produced
      by the dictionary that recognized the token, or <literal>NULL</literal> if
      none did; an empty array (<literal>{}</literal>) means it was recognized as a
      stop word
     </para>
    </listitem>
   </itemizedlist>
  </para>

  <para>
   Here is a simple example:

<screen>
SELECT * FROM ts_debug('english', 'a fat cat sat on a mat - it ate a fat rats');
   alias   |   description   | token |  dictionaries  |  dictionary  | lexemes
-----------+-----------------+-------+----------------+--------------+---------
 asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | cat   | {english_stem} | english_stem | {cat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | sat   | {english_stem} | english_stem | {sat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | on    | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | mat   | {english_stem} | english_stem | {mat}
 blank     | Space symbols   |       | {}             |              |
 blank     | Space symbols   | -     | {}             |              |
 asciiword | Word, all ASCII | it    | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | ate   | {english_stem} | english_stem | {ate}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | a     | {english_stem} | english_stem | {}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | fat   | {english_stem} | english_stem | {fat}
 blank     | Space symbols   |       | {}             |              |
 asciiword | Word, all ASCII | rats  | {english_stem} | english_stem | {rat}
</screen>
  </para>

  <para>
   For a more extensive demonstration, we
   first create a <literal>public.english</literal> configuration and
   Ispell dictionary for the English language:
  </para>

<programlisting>
CREATE TEXT SEARCH CONFIGURATION public.english ( COPY = pg_catalog.english );

CREATE TEXT SEARCH DICTIONARY english_ispell (
    TEMPLATE = ispell,
    DictFile = english,
    AffFile = english,
    StopWords = english
);

ALTER TEXT SEARCH CONFIGURATION public.english
   ALTER MAPPING FOR asciiword WITH english_ispell, english_stem;
</programlisting>

<screen>
SELECT * FROM ts_debug('public.english', 'The Brightest supernovaes');
   alias   |   description   |    token    |         dictionaries          |   dictionary   |   lexemes
-----------+-----------------+-------------+-------------------------------+----------------+-------------
 asciiword | Word, all ASCII | The         | {english_ispell,english_stem} | english_ispell | {}
 blank     | Space symbols   |             | {}                            |                |
 asciiword | Word, all ASCII | Brightest   | {english_ispell,english_stem} | english_ispell | {bright}
 blank     | Space symbols   |             | {}                            |                |
 asciiword | Word, all ASCII | supernovaes | {english_ispell,english_stem} | english_stem   | {supernova}
</screen>

  <para>
   In this example, the word <literal>Brightest</literal> was recognized by the
   parser as an <literal>ASCII word</literal> (alias <literal>asciiword</literal>).
   For this token type the dictionary list is
   <literal>english_ispell</literal> and
   <literal>english_stem</literal>.  The word was recognized by
   <literal>english_ispell</literal>, which reduced it to the noun
   <literal>bright</literal>.  The word <literal>supernovaes</literal> is
   unknown to the <literal>english_ispell</literal> dictionary so it
   was passed to the next dictionary, and, fortunately, was recognized (in
   fact, <literal>english_stem</literal> is a Snowball dictionary which
   recognizes everything; that is why it was placed at the end of the
   dictionary list).
  </para>

  <para>
   The word <literal>The</literal> was recognized by the
   <literal>english_ispell</literal> dictionary as a stop word (<xref
   linkend="textsearch-stopwords"/>) and will not be indexed.
   The spaces are discarded too, since the configuration provides no
   dictionaries at all for them.
  </para>

  <para>
   You can reduce the width of the output by explicitly specifying which columns
   you want to see:

<screen>
SELECT alias, token, dictionary, lexemes
FROM ts_debug('public.english', 'The Brightest supernovaes');
   alias   |    token    |   dictionary   |   lexemes
-----------+-------------+----------------+-------------
 asciiword | The         | english_ispell | {}
 blank     |             |                |
 asciiword | Brightest   | english_ispell | {bright}
 blank     |             |                |
 asciiword | supernovaes | english_stem   | {supernova}
</screen>
  </para>

 </sect2>

 <sect2 id="textsearch-parser-testing">
  <title>Parser Testing</title>

  <para>
   The following functions allow direct testing of a text search parser.
  </para>

  <indexterm>
   <primary>ts_parse</primary>
  </indexterm>

<synopsis>
ts_parse(<replaceable class="parameter">parser_name</replaceable> <type>text</type>, <replaceable class="parameter">document</replaceable> <type>text</type>,
         OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>, OUT <replaceable class="parameter">token</replaceable> <type>text</type>) returns <type>setof record</type>
ts_parse(<replaceable class="parameter">parser_oid</replaceable> <type>oid</type>, <replaceable class="parameter">document</replaceable> <type>text</type>,
         OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>, OUT <replaceable class="parameter">token</replaceable> <type>text</type>) returns <type>setof record</type>
</synopsis>

  <para>
   <function>ts_parse</function> parses the given <replaceable>document</replaceable>
   and returns a series of records, one for each token produced by
   parsing.  Each record includes a <varname>tokid</varname> showing the
   assigned token type and a <varname>token</varname> which is the text of the
   token.  For example:

<screen>
SELECT * FROM ts_parse('default', '123 - a number');
 tokid | token
-------+--------
    22 | 123
    12 |
    12 | -
     1 | a
    12 |
     1 | number
</screen>
  </para>

  <indexterm>
   <primary>ts_token_type</primary>
  </indexterm>

<synopsis>
ts_token_type(<replaceable class="parameter">parser_name</replaceable> <type>text</type>, OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>,
              OUT <replaceable class="parameter">alias</replaceable> <type>text</type>, OUT <replaceable class="parameter">description</replaceable> <type>text</type>) returns <type>setof record</type>
ts_token_type(<replaceable class="parameter">parser_oid</replaceable> <type>oid</type>, OUT <replaceable class="parameter">tokid</replaceable> <type>integer</type>,
              OUT <replaceable class="parameter">alias</replaceable> <type>text</type>, OUT <replaceable class="parameter">description</replaceable> <type>text</type>) returns <type>setof record</type>
</synopsis>

  <para>
   <function>ts_token_type</function> returns a table which describes each type of
   token the specified parser can recognize.  For each token type, the table
   gives the integer <varname>tokid</varname> that the parser uses to label a
   token of that type, the <varname>alias</varname> that names the token type
   in configuration commands, and a short <varname>description</varname>.  For
   example:

<screen>
SELECT * FROM ts_token_type('default');
 tokid |      alias      |               description
-------+-----------------+------------------------------------------
     1 | asciiword       | Word, all ASCII
     2 | word            | Word, all letters
     3 | numword         | Word, letters and digits
     4 | email           | Email address
     5 | url             | URL
     6 | host            | Host
     7 | sfloat          | Scientific notation
     8 | version         | Version number
     9 | hword_numpart   | Hyphenated word part, letters and digits
    10 | hword_part      | Hyphenated word part, all letters
    11 | hword_asciipart | Hyphenated word part, all ASCII
    12 | blank           | Space symbols
    13 | tag             | XML tag
    14 | protocol        | Protocol head
    15 | numhword        | Hyphenated word, letters and digits
    16 | asciihword      | Hyphenated word, all ASCII
    17 | hword           | Hyphenated word, all letters
    18 | url_path        | URL path
    19 | file            | File or path name
    20 | float           | Decimal notation
    21 | int             | Signed integer
    22 | uint            | Unsigned integer
    23 | entity          | XML entity
</screen>
  </para>

 </sect2>

 <sect2 id="textsearch-dictionary-testing">
  <title>Dictionary Testing</title>

  <para>
   The <function>ts_lexize</function> function facilitates dictionary testing.
  </para>

  <indexterm>
   <primary>ts_lexize</primary>
  </indexterm>

<synopsis>
ts_lexize(<replaceable class="parameter">dict</replaceable> <type>regdictionary</type>, <replaceable class="parameter">token</replaceable> <type>text</type>) returns <type>text[]</type>
</synopsis>

  <para>
   <function>ts_lexize</function> returns an array of lexemes if the input
   <replaceable>token</replaceable> is known to the dictionary,
   or an empty array if the token
   is known to the dictionary but it is a stop word, or
   <literal>NULL</literal> if it is an unknown word.
  </para>

  <para>
   Examples:

<screen>
SELECT ts_lexize('english_stem', 'stars');
 ts_lexize
-----------
 {star}

SELECT ts_lexize('english_stem', 'a');
 ts_lexize
-----------
 {}
</screen>
  </para>

  <note>
   <para>
    The <function>ts_lexize</function> function expects a single
    <emphasis>token</emphasis>, not text.  Here is a case
    where this can be confusing:

<screen>
SELECT ts_lexize('thesaurus_astro', 'supernovae stars') is null;
 ?column?
----------
 t
</screen>

    The thesaurus dictionary <literal>thesaurus_astro</literal> does know the
    phrase <literal>supernovae stars</literal>, but <function>ts_lexize</function>
    fails since it does not parse the input text but treats it as a single
    token.  Use <function>plainto_tsquery</function> or <function>to_tsvector</function> to
    test thesaurus dictionaries, for example:

<screen>
SELECT plainto_tsquery('supernovae stars');
 plainto_tsquery
-----------------
 'sn'
</screen>
   </para>
  </note>

 </sect2>

</sect1>

<sect1 id="textsearch-indexes">
 <title>Preferred Index Types for Text Search</title>

 <indexterm zone="textsearch-indexes">
  <primary>text search</primary>
  <secondary>indexes</secondary>
 </indexterm>

 <para>
  There are two kinds of indexes that can be used to speed up full text
  searches:
  <link linkend="gin"><acronym>GIN</acronym></link> and
  <link linkend="gist"><acronym>GiST</acronym></link>.
  Note that indexes are not mandatory for full text searching, but in
  cases where a column is searched on a regular basis, an index is
  usually desirable.
 </para>

 <para>
  To create such an index, do one of:

  <variablelist>

   <varlistentry>

    <term>
     <indexterm zone="textsearch-indexes">
      <primary>index</primary>
      <secondary>GIN</secondary>
      <tertiary>text search</tertiary>
     </indexterm>

     <literal>CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable> USING GIN (<replaceable>column</replaceable>);</literal>
    </term>

    <listitem>
     <para>
      Creates a GIN (Generalized Inverted Index)-based index.
      The <replaceable>column</replaceable> must be of <type>tsvector</type> type.
     </para>
    </listitem>
   </varlistentry>

   <varlistentry>

    <term>
     <indexterm zone="textsearch-indexes">
      <primary>index</primary>
      <secondary>GiST</secondary>
      <tertiary>text search</tertiary>
     </indexterm>

     <literal>CREATE INDEX <replaceable>name</replaceable> ON <replaceable>table</replaceable> USING GIST (<replaceable>column</replaceable> [ { DEFAULT | tsvector_ops } (siglen = <replaceable>number</replaceable>) ] );</literal>
    </term>

    <listitem>
     <para>
      Creates a GiST (Generalized Search Tree)-based index.
      The <replaceable>column</replaceable> can be of <type>tsvector</type> or
      <type>tsquery</type> type.
      Optional integer parameter <literal>siglen</literal> determines
      signature length in bytes (see below for details).
     </para>
    </listitem>
   </varlistentry>

  </variablelist>
 </para>
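
 <para>
  For instance, a minimal sketch (the table and column names here are
  hypothetical) that builds a GIN index over a precomputed
  <type>tsvector</type> column:
<programlisting>
CREATE INDEX pgweb_textsearch_idx ON pgweb USING GIN (textsearch);
</programlisting>
 </para>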

 <para>
  GIN indexes are the preferred text search index type.  As inverted
  indexes, they contain an index entry for each word (lexeme), with a
  compressed list of matching locations.  Multi-word searches can find
  the first match, then use the index to remove rows that are lacking
  additional words.  GIN indexes store only the words (lexemes) of
  <type>tsvector</type> values, and not their weight labels.  Thus a table
  row recheck is needed when using a query that involves weights.
 </para>

 <para>
  A GiST index is <firstterm>lossy</firstterm>, meaning that the index
  might produce false matches, and it is necessary
  to check the actual table row to eliminate such false matches.
  (<productname>PostgreSQL</productname> does this automatically when needed.)
  GiST indexes are lossy because each document is represented in the
  index by a fixed-length signature.  The signature length in bytes is determined
  by the value of the optional integer parameter <literal>siglen</literal>.
  The default signature length (when <literal>siglen</literal> is not specified) is
  124 bytes; the maximum signature length is 2024 bytes.  The signature is generated by hashing
  each word into a single bit in an n-bit string, with all these bits OR-ed
  together to produce an n-bit document signature.  When two words hash to
  the same bit position there will be a false match.  If all words in
  the query have matches (real or false) then the table row must be
  retrieved to see if the match is correct.  Longer signatures lead to a more
  precise search (scanning a smaller fraction of the index and fewer heap
  pages), at the cost of a larger index.
 </para>
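
 <para>
  As an illustration (the index, table, and column names are hypothetical),
  a GiST index with a longer-than-default signature could be created with:
<programlisting>
CREATE INDEX pgweb_textsearch_gist_idx ON pgweb
    USING GIST (textsearch tsvector_ops (siglen = 256));
</programlisting>
 </para>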

 <para>
  A GiST index can be covering, i.e., use the <literal>INCLUDE</literal>
  clause.  Included columns can have data types without any GiST operator
  class.  Included attributes will be stored uncompressed.
 </para>
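
 <para>
  A sketch of such a covering index, again with hypothetical names; the
  included column does not need a GiST operator class of its own:
<programlisting>
CREATE INDEX pgweb_textsearch_cover_idx ON pgweb
    USING GIST (textsearch) INCLUDE (title);
</programlisting>
 </para>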

 <para>
  Lossiness causes performance degradation due to unnecessary fetches of table
  records that turn out to be false matches.  Since random access to table
  records is slow, this limits the usefulness of GiST indexes.  The
  likelihood of false matches depends on several factors, in particular the
  number of unique words, so using dictionaries to reduce this number is
  recommended.
 </para>

 <para>
  Note that <acronym>GIN</acronym> index build time can often be improved
  by increasing <xref linkend="guc-maintenance-work-mem"/>, while
  <acronym>GiST</acronym> index build time is not sensitive to that
  parameter.
 </para>
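
 <para>
  For example, one might raise that limit for the current session before
  building a large GIN index (the setting value and index definition are
  only illustrative):
<programlisting>
SET maintenance_work_mem = '1GB';
CREATE INDEX pgweb_textsearch_idx ON pgweb USING GIN (textsearch);
</programlisting>
 </para>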

 <para>
  Partitioning of big collections and the proper use of GIN and GiST indexes
  allows the implementation of very fast searches with online update.
  Partitioning can be done at the database level using table inheritance,
  or by distributing documents over
  servers and collecting external search results, e.g., via <link
  linkend="ddl-foreign-data">Foreign Data</link> access.
  The latter is possible because ranking functions use
  only local information.
 </para>

</sect1>

<sect1 id="textsearch-psql">
 <title><application>psql</application> Support</title>

 <para>
  Information about text search configuration objects can be obtained
  in <application>psql</application> using a set of commands:
<synopsis>
\dF{d,p,t}<optional>+</optional> <optional>PATTERN</optional>
</synopsis>
  An optional <literal>+</literal> produces more details.
 </para>

 <para>
  The optional parameter <replaceable>PATTERN</replaceable> can be the name of
  a text search object, optionally schema-qualified.  If
  <replaceable>PATTERN</replaceable> is omitted then information about all
  visible objects will be displayed.  <replaceable>PATTERN</replaceable> can be a
  regular expression and can provide <emphasis>separate</emphasis> patterns
  for the schema and object names.  The following examples illustrate this:

<screen>
=> \dF *fulltext*
       List of text search configurations
 Schema |     Name     | Description
--------+--------------+-------------
 public | fulltext_cfg |
</screen>

<screen>
=> \dF *.fulltext*
       List of text search configurations
  Schema  |     Name     | Description
----------+--------------+-------------
 fulltext | fulltext_cfg |
 public   | fulltext_cfg |
</screen>

  The available commands are:
 </para>

 <variablelist>

  <varlistentry>
   <term><literal>\dF<optional>+</optional> <optional>PATTERN</optional></literal></term>
   <listitem>
    <para>
     List text search configurations (add <literal>+</literal> for more detail).
<screen>
=> \dF russian
            List of text search configurations
   Schema   |  Name   |            Description
------------+---------+------------------------------------
 pg_catalog | russian | configuration for russian language

=> \dF+ russian
    Text search configuration "pg_catalog.russian"
Parser: "pg_catalog.default"
      Token      | Dictionaries
-----------------+--------------
 asciihword      | english_stem
 asciiword       | english_stem
 email           | simple
 file            | simple
 float           | simple
 host            | simple
 hword           | russian_stem
 hword_asciipart | english_stem
 hword_numpart   | simple
 hword_part      | russian_stem
 int             | simple
 numhword        | simple
 numword         | simple
 sfloat          | simple
 uint            | simple
 url             | simple
 url_path        | simple
 version         | simple
 word            | russian_stem
</screen>
    </para>
   </listitem>
  </varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
2013-06-08 04:00:59 +02:00
|
|
|
<term><literal>\dFd<optional>+</optional> <optional>PATTERN</optional></literal></term>
|
2007-08-21 23:08:47 +02:00
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
List text search dictionaries (add <literal>+</literal> for more detail).
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-08-21 23:08:47 +02:00
|
|
|
=> \dFd
|
Sync our Snowball stemmer dictionaries with current upstream.
We haven't touched these since text search functionality landed in core
in 2007 :-(. While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time. Update our
copies of these files.
Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days. Fortunately,
their source code turns out to be a breeze to build.
Notable changes:
* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.
* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil. These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.
(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)
Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.
initdb forced because the contents of snowball_create.sql have
changed.
Still TODO: see about updating the stopword lists.
Arthur Zakirov, minor mods and doc work by me
Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
2018-09-24 23:29:08 +02:00
|
|
|
List of text search dictionaries
|
|
|
|
Schema | Name | Description
|
2007-09-04 05:46:36 +02:00
|
|
|
------------+-----------------+-----------------------------------------------------------
|
Sync our Snowball stemmer dictionaries with current upstream.
We haven't touched these since text search functionality landed in core
in 2007 :-(. While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time. Update our
copies of these files.
Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days. Fortunately,
their source code turns out to be a breeze to build.
Notable changes:
* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.
* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil. These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.
(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)
Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.
initdb forced because the contents of snowball_create.sql have
changed.
Still TODO: see about updating the stopword lists.
Arthur Zakirov, minor mods and doc work by me
Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
2018-09-24 23:29:08 +02:00
|
|
|
pg_catalog | arabic_stem | snowball stemmer for arabic language
|
2021-02-19 07:57:42 +01:00
|
|
|
pg_catalog | armenian_stem | snowball stemmer for armenian language
|
2020-06-08 22:44:15 +02:00
|
|
|
pg_catalog | basque_stem | snowball stemmer for basque language
|
|
|
|
pg_catalog | catalan_stem | snowball stemmer for catalan language
|
2007-09-04 05:46:36 +02:00
|
|
|
pg_catalog | danish_stem | snowball stemmer for danish language
|
|
|
|
pg_catalog | dutch_stem | snowball stemmer for dutch language
|
|
|
|
pg_catalog | english_stem | snowball stemmer for english language
|
|
|
|
pg_catalog | finnish_stem | snowball stemmer for finnish language
|
|
|
|
pg_catalog | french_stem | snowball stemmer for french language
|
|
|
|
pg_catalog | german_stem | snowball stemmer for german language
|
2019-07-04 13:10:41 +02:00
|
|
|
pg_catalog | greek_stem | snowball stemmer for greek language
|
2020-06-08 22:44:15 +02:00
|
|
|
pg_catalog | hindi_stem | snowball stemmer for hindi language
|
2007-09-04 05:46:36 +02:00
|
|
|
pg_catalog | hungarian_stem | snowball stemmer for hungarian language
|
Sync our Snowball stemmer dictionaries with current upstream.
We haven't touched these since text search functionality landed in core
in 2007 :-(. While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time. Update our
copies of these files.
Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days. Fortunately,
their source code turns out to be a breeze to build.
Notable changes:
* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.
* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil. These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.
(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)
Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.
initdb forced because the contents of snowball_create.sql have
changed.
Still TODO: see about updating the stopword lists.
Arthur Zakirov, minor mods and doc work by me
Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
2018-09-24 23:29:08 +02:00
|
|
|
pg_catalog | indonesian_stem | snowball stemmer for indonesian language
|
|
|
|
pg_catalog | irish_stem | snowball stemmer for irish language
|
2007-09-04 05:46:36 +02:00
|
|
|
pg_catalog | italian_stem | snowball stemmer for italian language
|
Sync our Snowball stemmer dictionaries with current upstream.
We haven't touched these since text search functionality landed in core
in 2007 :-(. While the upstream project isn't a beehive of activity,
they do make additions and bug fixes from time to time. Update our
copies of these files.
Also update our documentation about how to keep things in sync, since
they're not making distribution tarballs these days. Fortunately,
their source code turns out to be a breeze to build.
Notable changes:
* The non-UTF8 version of the hungarian stemmer now works in LATIN2
not LATIN1.
* New stemmers have appeared for arabic, indonesian, irish, lithuanian,
nepali, and tamil. These all work in UTF8, and the indonesian and
irish ones also work in LATIN1.
(There are some new stemmers that I did not incorporate, mainly because
their names don't match the underlying languages, suggesting that they're
not to be considered mainstream.)
Worth noting: the upstream Nepali dictionary was contributed by
Arthur Zakirov.
initdb forced because the contents of snowball_create.sql have
changed.
Still TODO: see about updating the stopword lists.
Arthur Zakirov, minor mods and doc work by me
Discussion: https://postgr.es/m/20180626122025.GA12647@zakirov.localdomain
Discussion: https://postgr.es/m/20180219140849.GA9050@zakirov.localdomain
2018-09-24 23:29:08 +02:00
|
|
|
pg_catalog | lithuanian_stem | snowball stemmer for lithuanian language
|
|
|
|
pg_catalog | nepali_stem | snowball stemmer for nepali language
|
2007-09-04 05:46:36 +02:00
|
|
|
pg_catalog | norwegian_stem | snowball stemmer for norwegian language
|
|
|
|
pg_catalog | portuguese_stem | snowball stemmer for portuguese language
|
|
|
|
pg_catalog | romanian_stem | snowball stemmer for romanian language
|
|
|
|
pg_catalog | russian_stem | snowball stemmer for russian language
|
2021-02-19 07:57:42 +01:00
|
|
|
pg_catalog | serbian_stem | snowball stemmer for serbian language
|
2007-09-04 05:46:36 +02:00
|
|
|
pg_catalog | simple | simple dictionary: just lower case and check for stopword
|
|
|
|
pg_catalog | spanish_stem | snowball stemmer for spanish language
|
|
|
|
pg_catalog | swedish_stem | snowball stemmer for swedish language
|
2018-09-24 23:29:08 +02:00
|
|
|
pg_catalog | tamil_stem | snowball stemmer for tamil language
|
2007-09-04 05:46:36 +02:00
|
|
|
pg_catalog | turkish_stem | snowball stemmer for turkish language
|
2021-02-19 07:57:42 +01:00
|
|
|
pg_catalog | yiddish_stem | snowball stemmer for yiddish language
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
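    <para>
     A quick way to exercise any of these dictionaries is the
     <function>ts_lexize</function> function; for example, the English
     Snowball stemmer reduces an inflected word to its stem:
<screen>
=> SELECT ts_lexize('english_stem', 'stars');
 ts_lexize
-----------
 {star}
</screen>
    </para>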
2007-08-21 23:08:47 +02:00
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
|
|
|
<varlistentry>
|
2013-06-08 04:00:59 +02:00
|
|
|
<term><literal>\dFp<optional>+</optional> <optional>PATTERN</optional></literal></term>
|
2007-08-21 23:08:47 +02:00
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
List text search parsers (add <literal>+</literal> for more detail).
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-09-04 05:46:36 +02:00
|
|
|
=> \dFp
|
|
|
|
List of text search parsers
|
2022-04-20 17:04:28 +02:00
|
|
|
Schema | Name | Description
|
2007-08-21 23:08:47 +02:00
|
|
|
------------+---------+---------------------
|
|
|
|
pg_catalog | default | default word parser
|
|
|
|
=> \dFp+
|
2007-10-17 03:01:28 +02:00
|
|
|
Text search parser "pg_catalog.default"
|
2022-04-20 17:04:28 +02:00
|
|
|
Method | Function | Description
|
2007-10-17 03:01:28 +02:00
|
|
|
-----------------+----------------+-------------
|
2022-04-20 17:04:28 +02:00
|
|
|
Start parse | prsd_start |
|
|
|
|
Get next token | prsd_nexttoken |
|
|
|
|
End parse | prsd_end |
|
|
|
|
Get headline | prsd_headline |
|
|
|
|
Get token types | prsd_lextype |
|
2007-09-04 05:46:36 +02:00
|
|
|
|
2007-10-23 22:46:12 +02:00
|
|
|
Token types for parser "pg_catalog.default"
|
2022-04-20 17:04:28 +02:00
|
|
|
Token name | Description
|
2007-10-23 22:46:12 +02:00
|
|
|
-----------------+------------------------------------------
|
|
|
|
asciihword | Hyphenated word, all ASCII
|
|
|
|
asciiword | Word, all ASCII
|
|
|
|
blank | Space symbols
|
|
|
|
email | Email address
|
2007-11-20 16:58:52 +01:00
|
|
|
entity | XML entity
|
2007-10-23 22:46:12 +02:00
|
|
|
file | File or path name
|
|
|
|
float | Decimal notation
|
|
|
|
host | Host
|
|
|
|
hword | Hyphenated word, all letters
|
|
|
|
hword_asciipart | Hyphenated word part, all ASCII
|
|
|
|
hword_numpart | Hyphenated word part, letters and digits
|
|
|
|
hword_part | Hyphenated word part, all letters
|
|
|
|
int | Signed integer
|
|
|
|
numhword | Hyphenated word, letters and digits
|
|
|
|
numword | Word, letters and digits
|
|
|
|
protocol | Protocol head
|
|
|
|
sfloat | Scientific notation
|
2007-11-20 16:58:52 +01:00
|
|
|
tag | XML tag
|
2007-10-23 22:46:12 +02:00
|
|
|
uint | Unsigned integer
|
|
|
|
url | URL
|
2007-10-27 18:01:09 +02:00
|
|
|
url_path | URL path
|
2007-10-23 22:46:12 +02:00
|
|
|
version | Version number
|
|
|
|
word | Word, all letters
|
2007-08-21 23:08:47 +02:00
|
|
|
(23 rows)
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-08-29 04:37:04 +02:00
|
|
|
</para>
|
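    <para>
     To see the parser in action, it can be invoked directly with
     <function>ts_parse</function>.  In this brief illustration, token
     type <literal>1</literal> is <literal>asciiword</literal> and
     token type <literal>12</literal> is <literal>blank</literal>:
<screen>
=> SELECT * FROM ts_parse('default', 'foo bar');
 tokid | token
-------+-------
     1 | foo
    12 |
     1 | bar
(3 rows)
</screen>
    </para>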
2007-08-21 23:08:47 +02:00
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
|
|
|
|
2007-09-04 05:46:36 +02:00
|
|
|
<varlistentry>
|
2013-06-08 04:00:59 +02:00
|
|
|
<term><literal>\dFt<optional>+</optional> <optional>PATTERN</optional></literal></term>
|
2007-09-04 05:46:36 +02:00
|
|
|
<listitem>
|
|
|
|
<para>
|
2017-10-09 03:44:17 +02:00
|
|
|
List text search templates (add <literal>+</literal> for more detail).
|
2010-07-29 21:34:41 +02:00
|
|
|
<screen>
|
2007-09-04 05:46:36 +02:00
|
|
|
=> \dFt
|
|
|
|
List of text search templates
|
2022-04-20 17:04:28 +02:00
|
|
|
Schema | Name | Description
|
2007-09-04 05:46:36 +02:00
|
|
|
------------+-----------+-----------------------------------------------------------
|
|
|
|
pg_catalog | ispell | ispell dictionary
|
|
|
|
pg_catalog | simple | simple dictionary: just lower case and check for stopword
|
|
|
|
pg_catalog | snowball | snowball stemmer
|
|
|
|
pg_catalog | synonym | synonym dictionary: replace word by its synonym
|
|
|
|
pg_catalog | thesaurus | thesaurus dictionary: phrase by phrase substitution
|
2010-07-29 21:34:41 +02:00
|
|
|
</screen>
|
2007-09-04 05:46:36 +02:00
|
|
|
</para>
|
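    <para>
     A template supplies the functions from which working dictionaries
     are built.  As a minimal sketch, a stopword-filtering dictionary
     based on the <literal>simple</literal> template can be created
     like this (the dictionary name <literal>my_simple</literal> is
     chosen only for illustration):
<programlisting>
CREATE TEXT SEARCH DICTIONARY my_simple (
    TEMPLATE = pg_catalog.simple,
    STOPWORDS = english
);
</programlisting>
    </para>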
|
|
|
</listitem>
|
|
|
|
</varlistentry>
|
2007-08-21 23:08:47 +02:00
|
|
|
</variablelist>
|
|
|
|
|
2007-08-29 04:37:04 +02:00
|
|
|
</sect1>
|
2007-08-21 23:08:47 +02:00
|
|
|
|
2007-10-15 23:39:57 +02:00
|
|
|
<sect1 id="textsearch-limitations">
|
|
|
|
<title>Limitations</title>
|
|
|
|
|
|
|
|
<para>
|
|
|
|
The current limitations of <productname>PostgreSQL</productname>'s
|
|
|
|
text search features are:
|
|
|
|
<itemizedlist spacing="compact" mark="bullet">
|
|
|
|
<listitem>
|
2019-09-03 06:03:29 +02:00
|
|
|
<para>The length of each lexeme must be less than 2 kilobytes</para>
|
2007-10-15 23:39:57 +02:00
|
|
|
</listitem>
|
|
|
|
<listitem>
|
2007-10-21 22:04:37 +02:00
|
|
|
<para>The length of a <type>tsvector</type> (lexemes + positions) must be
|
|
|
|
less than 1 megabyte (see the example below)</para>
|
2007-10-15 23:39:57 +02:00
|
|
|
</listitem>
|
|
|
|
<listitem>
|
2007-10-21 22:04:37 +02:00
|
|
|
<!-- TODO: number of lexemes in what? This is unclear -->
|
|
|
|
<para>The number of lexemes must be less than
|
|
|
|
2<superscript>64</superscript></para>
|
2007-10-15 23:39:57 +02:00
|
|
|
</listitem>
|
|
|
|
<listitem>
|
2017-10-09 03:44:17 +02:00
|
|
|
<para>Position values in <type>tsvector</type> must be greater than 0 and
|
2007-10-21 22:04:37 +02:00
|
|
|
no more than 16,383</para>
|
2007-10-15 23:39:57 +02:00
|
|
|
</listitem>
|
2016-06-09 06:30:59 +02:00
|
|
|
<listitem>
|
2017-10-09 03:44:17 +02:00
|
|
|
<para>The match distance in a <literal><<replaceable>N</replaceable>></literal>
|
|
|
|
(FOLLOWED BY) <type>tsquery</type> operator cannot be more than
|
2016-06-09 06:30:59 +02:00
|
|
|
16,384</para>
|
|
|
|
</listitem>
|
2007-10-15 23:39:57 +02:00
|
|
|
<listitem>
|
2007-10-17 03:01:28 +02:00
|
|
|
<para>No more than 256 positions per lexeme</para>
|
2007-10-15 23:39:57 +02:00
|
|
|
</listitem>
|
|
|
|
<listitem>
|
2007-10-21 22:04:37 +02:00
|
|
|
<para>The number of nodes (lexemes + operators) in a <type>tsquery</type>
|
|
|
|
must be less than 32,768</para>
|
2007-10-15 23:39:57 +02:00
|
|
|
</listitem>
|
|
|
|
</itemizedlist>
|
|
|
|
</para>
|
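  <para>
   The number of lexemes actually stored in a <type>tsvector</type>
   can be checked with the <function>length</function> function.  Note
   that stop words are discarded before storage and so do not count
   toward these limits:
<screen>
=> SELECT length(to_tsvector('english', 'a fat cat sat on a mat'));
 length
--------
      4
</screen>
  </para>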
|
|
|
|
|
|
|
<para>
|
|
|
|
For comparison, the <productname>PostgreSQL</productname> 8.1 documentation
|
2007-10-21 22:04:37 +02:00
|
|
|
contained 10,441 unique words, a total of 335,420 words, and the most
|
2017-10-09 03:44:17 +02:00
|
|
|
frequent word <quote>postgresql</quote> was mentioned 6,127 times in 655
|
2007-10-21 22:04:37 +02:00
|
|
|
documents.
|
2007-10-15 23:39:57 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
<!-- TODO we need to put a date on these numbers? -->
|
|
|
|
<para>
|
2007-10-21 22:04:37 +02:00
|
|
|
Another example — the <productname>PostgreSQL</productname> mailing
|
|
|
|
list archives contained 910,989 unique words with 57,491,343 lexemes in
|
|
|
|
461,020 messages.
|
2007-10-15 23:39:57 +02:00
|
|
|
</para>
|
|
|
|
|
|
|
|
</sect1>
|
|
|
|
|
2007-08-21 23:08:47 +02:00
|
|
|
</chapter>
|