postgresql/src/backend/regex/regcomp.c

/*
* re_*comp and friends - compile REs
* This file #includes several others (see the bottom).
*
* Copyright (c) 1998, 1999 Henry Spencer. All rights reserved.
*
* Development of this software was funded, in part, by Cray Research Inc.,
* UUNET Communications Services Inc., Sun Microsystems Inc., and Scriptics
* Corporation, none of whom are responsible for the results. The author
* thanks all of them.
*
* Redistribution and use in source and binary forms -- with or without
* modification -- are permitted for any purpose, provided that
* redistributions in source form retain this entire copyright notice and
* indicate the origin and nature of any modifications.
*
* I'd appreciate being given credit for this package in the documentation
* of software which uses it, but that is not a requirement.
*
* THIS SOFTWARE IS PROVIDED ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES,
* INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY
* AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
* HENRY SPENCER BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
* EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
* PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS;
* OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
* WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR
* OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
* ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*
* src/backend/regex/regcomp.c
*
*/
#include "regex/regguts.h"
/*
* forward declarations, up here so forward datatypes etc. are defined early
*/
/* === regcomp.c === */
static void moresubs(struct vars *, int);
static int freev(struct vars *, int);
static void makesearch(struct vars *, struct nfa *);
static struct subre *parse(struct vars *, int, int, struct state *, struct state *);
static struct subre *parsebranch(struct vars *, int, int, struct state *, struct state *, int);
static void parseqatom(struct vars *, int, int, struct state *, struct state *, struct subre *);
static void nonword(struct vars *, int, struct state *, struct state *);
static void word(struct vars *, int, struct state *, struct state *);
static int scannum(struct vars *);
static void repeat(struct vars *, struct state *, struct state *, int, int);
static void bracket(struct vars *, struct state *, struct state *);
static void cbracket(struct vars *, struct state *, struct state *);
static void brackpart(struct vars *, struct state *, struct state *);
static const chr *scanplain(struct vars *);
static void onechr(struct vars *, chr, struct state *, struct state *);
static void wordchrs(struct vars *);
static void processlacon(struct vars *, struct state *, struct state *, int,
struct state *, struct state *);
static struct subre *subre(struct vars *, int, int, struct state *, struct state *);
static void freesubre(struct vars *, struct subre *);
static void freesrnode(struct vars *, struct subre *);
static void optst(struct vars *, struct subre *);
static int numst(struct subre *, int);
static void markst(struct subre *);
static void cleanst(struct vars *);
static long nfatree(struct vars *, struct subre *, FILE *);
static long nfanode(struct vars *, struct subre *, int, FILE *);
static int newlacon(struct vars *, struct state *, struct state *, int);
static void freelacons(struct subre *, int);
static void rfree(regex_t *);
static int rcancelrequested(void);
static int rstacktoodeep(void);
#ifdef REG_DEBUG
static void dump(regex_t *, FILE *);
static void dumpst(struct subre *, FILE *, int);
static void stdump(struct subre *, FILE *, int);
static const char *stid(struct subre *, char *, size_t);
#endif
/* === regc_lex.c === */
static void lexstart(struct vars *);
static void prefixes(struct vars *);
static void lexnest(struct vars *, const chr *, const chr *);
static void lexword(struct vars *);
static int next(struct vars *);
static int lexescape(struct vars *);
static chr lexdigits(struct vars *, int, int, int);
static int brenext(struct vars *, chr);
static void skip(struct vars *);
static chr newline(void);
static chr chrnamed(struct vars *, const chr *, const chr *, chr);
/* === regc_color.c === */
static void initcm(struct vars *, struct colormap *);
static void freecm(struct colormap *);
static color maxcolor(struct colormap *);
static color newcolor(struct colormap *);
static void freecolor(struct colormap *, color);
static color pseudocolor(struct colormap *);
static color subcolor(struct colormap *, chr);
static color subcolorhi(struct colormap *, color *);
static color newsub(struct colormap *, color);
static int newhicolorrow(struct colormap *, int);
static void newhicolorcols(struct colormap *);
static void subcolorcvec(struct vars *, struct cvec *, struct state *, struct state *);
static void subcoloronechr(struct vars *, chr, struct state *, struct state *, color *);
static void subcoloronerange(struct vars *, chr, chr, struct state *, struct state *, color *);
static void subcoloronerow(struct vars *, int, struct state *, struct state *, color *);
static void okcolors(struct nfa *, struct colormap *);
static void colorchain(struct colormap *, struct arc *);
static void uncolorchain(struct colormap *, struct arc *);
static void rainbow(struct nfa *, struct colormap *, int, color, struct state *, struct state *);
static void colorcomplement(struct nfa *, struct colormap *, int, struct state *, struct state *, struct state *);
#ifdef REG_DEBUG
static void dumpcolors(struct colormap *, FILE *);
static void dumpchr(chr, FILE *);
#endif
/* === regc_nfa.c === */
static struct nfa *newnfa(struct vars *, struct colormap *, struct nfa *);
static void freenfa(struct nfa *);
static struct state *newstate(struct nfa *);
static struct state *newfstate(struct nfa *, int flag);
static void dropstate(struct nfa *, struct state *);
static void freestate(struct nfa *, struct state *);
static void destroystate(struct nfa *, struct state *);
static void newarc(struct nfa *, int, color, struct state *, struct state *);
static void createarc(struct nfa *, int, color, struct state *, struct state *);
static struct arc *allocarc(struct nfa *, struct state *);
static void freearc(struct nfa *, struct arc *);
static void changearctarget(struct arc *, struct state *);
static int hasnonemptyout(struct state *);
static struct arc *findarc(struct state *, int, color);
static void cparc(struct nfa *, struct arc *, struct state *, struct state *);
static void sortins(struct nfa *, struct state *);
static int sortins_cmp(const void *, const void *);
static void sortouts(struct nfa *, struct state *);
static int sortouts_cmp(const void *, const void *);
static void moveins(struct nfa *, struct state *, struct state *);
static void copyins(struct nfa *, struct state *, struct state *);
static void mergeins(struct nfa *, struct state *, struct arc **, int);
static void moveouts(struct nfa *, struct state *, struct state *);
static void copyouts(struct nfa *, struct state *, struct state *);
static void cloneouts(struct nfa *, struct state *, struct state *, struct state *, int);
static void delsub(struct nfa *, struct state *, struct state *);
static void deltraverse(struct nfa *, struct state *, struct state *);
static void dupnfa(struct nfa *, struct state *, struct state *, struct state *, struct state *);
static void duptraverse(struct nfa *, struct state *, struct state *);
static void cleartraverse(struct nfa *, struct state *);
static struct state *single_color_transition(struct state *, struct state *);
static void specialcolors(struct nfa *);
static long optimize(struct nfa *, FILE *);
static void pullback(struct nfa *, FILE *);
static int pull(struct nfa *, struct arc *, struct state **);
static void pushfwd(struct nfa *, FILE *);
static int push(struct nfa *, struct arc *, struct state **);
#define INCOMPATIBLE 1 /* destroys arc */
#define SATISFIED 2 /* constraint satisfied */
#define COMPATIBLE 3 /* compatible but not satisfied yet */
static int combine(struct arc *, struct arc *);
static void fixempties(struct nfa *, FILE *);
static struct state *emptyreachable(struct nfa *, struct state *,
struct state *, struct arc **);
static int isconstraintarc(struct arc *);
static int hasconstraintout(struct state *);
static void fixconstraintloops(struct nfa *, FILE *);
static int findconstraintloop(struct nfa *, struct state *);
static void breakconstraintloop(struct nfa *, struct state *);
static void clonesuccessorstates(struct nfa *, struct state *, struct state *,
struct state *, struct arc *,
char *, char *, int);
static void cleanup(struct nfa *);
static void markreachable(struct nfa *, struct state *, struct state *, struct state *);
static void markcanreach(struct nfa *, struct state *, struct state *, struct state *);
static long analyze(struct nfa *);
static void compact(struct nfa *, struct cnfa *);
static void carcsort(struct carc *, size_t);
static int carc_cmp(const void *, const void *);
static void freecnfa(struct cnfa *);
static void dumpnfa(struct nfa *, FILE *);
#ifdef REG_DEBUG
static void dumpstate(struct state *, FILE *);
static void dumparcs(struct state *, FILE *);
static void dumparc(struct arc *, struct state *, FILE *);
static void dumpcnfa(struct cnfa *, FILE *);
static void dumpcstate(int, struct cnfa *, FILE *);
#endif
/* === regc_cvec.c === */
static struct cvec *newcvec(int, int);
static struct cvec *clearcvec(struct cvec *);
static void addchr(struct cvec *, chr);
static void addrange(struct cvec *, chr, chr);
static struct cvec *getcvec(struct vars *, int, int);
static void freecvec(struct cvec *);
/* === regc_pg_locale.c === */
static int pg_wc_isdigit(pg_wchar c);
static int pg_wc_isalpha(pg_wchar c);
static int pg_wc_isalnum(pg_wchar c);
static int pg_wc_isupper(pg_wchar c);
static int pg_wc_islower(pg_wchar c);
static int pg_wc_isgraph(pg_wchar c);
static int pg_wc_isprint(pg_wchar c);
static int pg_wc_ispunct(pg_wchar c);
static int pg_wc_isspace(pg_wchar c);
static pg_wchar pg_wc_toupper(pg_wchar c);
static pg_wchar pg_wc_tolower(pg_wchar c);
/* === regc_locale.c === */
static chr element(struct vars *, const chr *, const chr *);
static struct cvec *range(struct vars *, chr, chr, int);
static int before(chr, chr);
static struct cvec *eclass(struct vars *, chr, int);
static struct cvec *cclass(struct vars *, const chr *, const chr *, int);
static int cclass_column_index(struct colormap *, chr);
static struct cvec *allcases(struct vars *, chr);
static int cmp(const chr *, const chr *, size_t);
static int casecmp(const chr *, const chr *, size_t);
/* internal variables, bundled for easy passing around */
struct vars
{
regex_t *re;
const chr *now; /* scan pointer into string */
const chr *stop; /* end of string */
const chr *savenow; /* saved now and stop for "subroutine call" */
const chr *savestop;
int err; /* error code (0 if none) */
int cflags; /* copy of compile flags */
int lasttype; /* type of previous token */
int nexttype; /* type of next token */
chr nextvalue; /* value (if any) of next token */
int lexcon; /* lexical context type (see lex.c) */
int nsubexp; /* subexpression count */
struct subre **subs; /* subRE pointer vector */
size_t nsubs; /* length of vector */
struct subre *sub10[10]; /* initial vector, enough for most */
struct nfa *nfa; /* the NFA */
struct colormap *cm; /* character color map */
color nlcolor; /* color of newline */
struct state *wordchrs; /* state in nfa holding word-char outarcs */
struct subre *tree; /* subexpression tree */
struct subre *treechain; /* all tree nodes allocated */
struct subre *treefree; /* any free tree nodes */
int ntree; /* number of tree nodes, plus one */
struct cvec *cv; /* interface cvec */
struct cvec *cv2; /* utility cvec */
struct subre *lacons; /* lookaround-constraint vector */
int nlacons; /* size of lacons[]; note that only slots
* numbered 1 .. nlacons-1 are used */
size_t spaceused; /* approx. space used for compilation */
};
/* parsing macros; most know that `v' is the struct vars pointer */
#define NEXT() (next(v)) /* advance by one token */
#define SEE(t) (v->nexttype == (t)) /* is next token this? */
#define EAT(t) (SEE(t) && next(v)) /* if next is this, swallow it */
#define VISERR(vv) ((vv)->err != 0) /* have we seen an error yet? */
#define ISERR() VISERR(v)
#define VERR(vv,e) ((vv)->nexttype = EOS, \
(vv)->err = ((vv)->err ? (vv)->err : (e)))
#define ERR(e) VERR(v, e) /* record an error */
#define NOERR() {if (ISERR()) return;} /* if error seen, return */
#define NOERRN() {if (ISERR()) return NULL;} /* NOERR with retval */
#define NOERRZ() {if (ISERR()) return 0;} /* NOERR with retval */
#define INSIST(c, e) do { if (!(c)) ERR(e); } while (0) /* error if c false */
#define NOTE(b) (v->re->re_info |= (b)) /* note visible condition */
#define EMPTYARC(x, y) newarc(v->nfa, EMPTY, 0, x, y)
/* token type codes, some also used as NFA arc types */
#define EMPTY 'n' /* no token present */
#define EOS 'e' /* end of string */
#define PLAIN 'p' /* ordinary character */
#define DIGIT 'd' /* digit (in bound) */
#define BACKREF 'b' /* back reference */
#define COLLEL 'I' /* start of [. */
#define ECLASS 'E' /* start of [= */
#define CCLASS 'C' /* start of [: */
#define END 'X' /* end of [. [= [: */
#define RANGE 'R' /* - within [] which might be range delim. */
#define LACON 'L' /* lookaround constraint subRE */
#define AHEAD 'a' /* color-lookahead arc */
#define BEHIND 'r' /* color-lookbehind arc */
#define WBDRY 'w' /* word boundary constraint */
#define NWBDRY 'W' /* non-word-boundary constraint */
#define SBEGIN 'A' /* beginning of string (even if not BOL) */
#define SEND 'Z' /* end of string (even if not EOL) */
/* is an arc colored, and hence on a color chain? */
#define COLORED(a) \
((a)->type == PLAIN || (a)->type == AHEAD || (a)->type == BEHIND)
/* static function list */
static const struct fns functions = {
rfree, /* regfree insides */
rcancelrequested, /* check for cancel request */
rstacktoodeep /* check for stack getting dangerously deep */
};
/*
* pg_regcomp - compile regular expression
*
* Note: on failure, no resources remain allocated, so pg_regfree()
* need not be applied to re.
*/
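/*
 * A minimal caller sketch (illustrative only; "pattern_chrs", "pattern_len",
 * and "errbuf" are placeholder names, and the surrounding API --
 * pg_regerror(), pg_regexec(), pg_regfree() -- is assumed from
 * PostgreSQL's regex/regex.h, where chr is pg_wchar and REG_OKAY is 0):
 *
 *		regex_t re;
 *		int err = pg_regcomp(&re, pattern_chrs, pattern_len,
 *							 REG_ADVANCED, DEFAULT_COLLATION_OID);
 *		if (err != REG_OKAY)
 *			pg_regerror(err, &re, errbuf, sizeof(errbuf));
 *		else
 *			... pg_regexec() as needed, then pg_regfree(&re) ...
 */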
int
pg_regcomp(regex_t *re,
const chr *string,
size_t len,
int flags,
Oid collation)
{
struct vars var;
struct vars *v = &var;
struct guts *g;
int i;
size_t j;
#ifdef REG_DEBUG
FILE *debug = (flags & REG_PROGRESS) ? stdout : (FILE *) NULL;
#else
FILE *debug = (FILE *) NULL;
#endif
#define CNOERR() { if (ISERR()) return freev(v, v->err); }
/* sanity checks */
if (re == NULL || string == NULL)
return REG_INVARG;
if ((flags & REG_QUOTE) &&
(flags & (REG_ADVANCED | REG_EXPANDED | REG_NEWLINE)))
return REG_INVARG;
if (!(flags & REG_EXTENDED) && (flags & REG_ADVF))
return REG_INVARG;
/* Initialize locale-dependent support */
pg_set_regex_collation(collation);
/* initial setup (after which freev() is callable) */
v->re = re;
v->now = string;
v->stop = v->now + len;
v->savenow = v->savestop = NULL;
v->err = 0;
v->cflags = flags;
v->nsubexp = 0;
v->subs = v->sub10;
v->nsubs = 10;
for (j = 0; j < v->nsubs; j++)
v->subs[j] = NULL;
v->nfa = NULL;
v->cm = NULL;
v->nlcolor = COLORLESS;
v->wordchrs = NULL;
v->tree = NULL;
v->treechain = NULL;
v->treefree = NULL;
v->cv = NULL;
v->cv2 = NULL;
v->lacons = NULL;
v->nlacons = 0;
v->spaceused = 0;
re->re_magic = REMAGIC;
re->re_info = 0; /* bits get set during parse */
re->re_csize = sizeof(chr);
re->re_collation = collation;
re->re_guts = NULL;
re->re_fns = VS(&functions);
/* more complex setup, malloced things */
re->re_guts = VS(MALLOC(sizeof(struct guts)));
if (re->re_guts == NULL)
return freev(v, REG_ESPACE);
g = (struct guts *) re->re_guts;
g->tree = NULL;
initcm(v, &g->cmap);
v->cm = &g->cmap;
g->lacons = NULL;
g->nlacons = 0;
ZAPCNFA(g->search);
v->nfa = newnfa(v, v->cm, (struct nfa *) NULL);
CNOERR();
/* set up a reasonably-sized transient cvec for getcvec usage */
v->cv = newcvec(100, 20);
if (v->cv == NULL)
return freev(v, REG_ESPACE);
/* parsing */
lexstart(v); /* also handles prefixes */
if ((v->cflags & REG_NLSTOP) || (v->cflags & REG_NLANCH))
{
/* assign newline a unique color */
v->nlcolor = subcolor(v->cm, newline());
okcolors(v->nfa, v->cm);
}
CNOERR();
v->tree = parse(v, EOS, PLAIN, v->nfa->init, v->nfa->final);
assert(SEE(EOS)); /* even if error; ISERR() => SEE(EOS) */
CNOERR();
assert(v->tree != NULL);
/* finish setup of nfa and its subre tree */
specialcolors(v->nfa);
CNOERR();
#ifdef REG_DEBUG
if (debug != NULL)
{
fprintf(debug, "\n\n\n========= RAW ==========\n");
dumpnfa(v->nfa, debug);
dumpst(v->tree, debug, 1);
}
#endif
optst(v, v->tree);
v->ntree = numst(v->tree, 1);
markst(v->tree);
cleanst(v);
#ifdef REG_DEBUG
if (debug != NULL)
{
fprintf(debug, "\n\n\n========= TREE FIXED ==========\n");
dumpst(v->tree, debug, 1);
}
#endif
/* build compacted NFAs for tree and lacons */
re->re_info |= nfatree(v, v->tree, debug);
CNOERR();
assert(v->nlacons == 0 || v->lacons != NULL);
for (i = 1; i < v->nlacons; i++)
{
struct subre *lasub = &v->lacons[i];
#ifdef REG_DEBUG
if (debug != NULL)
fprintf(debug, "\n\n\n========= LA%d ==========\n", i);
#endif
/* Prepend .* to pattern if it's a lookbehind LACON */
nfanode(v, lasub, !LATYPE_IS_AHEAD(lasub->subno), debug);
}
CNOERR();
if (v->tree->flags & SHORTER)
NOTE(REG_USHORTEST);
/* build compacted NFAs for tree, lacons, fast search */
#ifdef REG_DEBUG
if (debug != NULL)
fprintf(debug, "\n\n\n========= SEARCH ==========\n");
#endif
/* can sacrifice main NFA now, so use it as work area */
(DISCARD) optimize(v->nfa, debug);
CNOERR();
makesearch(v, v->nfa);
CNOERR();
compact(v->nfa, &g->search);
CNOERR();
/* looks okay, package it up */
re->re_nsub = v->nsubexp;
v->re = NULL; /* freev no longer frees re */
g->magic = GUTSMAGIC;
g->cflags = v->cflags;
g->info = re->re_info;
g->nsub = re->re_nsub;
g->tree = v->tree;
v->tree = NULL;
g->ntree = v->ntree;
g->compare = (v->cflags & REG_ICASE) ? casecmp : cmp;
g->lacons = v->lacons;
v->lacons = NULL;
g->nlacons = v->nlacons;
#ifdef REG_DEBUG
if (flags & REG_DUMP)
dump(re, stdout);
#endif
assert(v->err == 0);
return freev(v, 0);
}
/*
* moresubs - enlarge subRE vector
*/
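/*
 * Growth note (a sketch of the arithmetic below): the vector is resized to
 * about 1.5x the requested slot count -- e.g. wanted = 20 yields
 * n = 20 * 3 / 2 + 1 = 31 -- so repeated enlargements stay cheap.
 */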
static void
moresubs(struct vars *v,
int wanted) /* want enough room for this one */
{
struct subre **p;
size_t n;
assert(wanted > 0 && (size_t) wanted >= v->nsubs);
n = (size_t) wanted * 3 / 2 + 1;
if (v->subs == v->sub10)
{
p = (struct subre **) MALLOC(n * sizeof(struct subre *));
if (p != NULL)
memcpy(VS(p), VS(v->subs),
v->nsubs * sizeof(struct subre *));
}
else
p = (struct subre **) REALLOC(v->subs, n * sizeof(struct subre *));
if (p == NULL)
{
ERR(REG_ESPACE);
return;
}
v->subs = p;
for (p = &v->subs[v->nsubs]; v->nsubs < n; p++, v->nsubs++)
*p = NULL;
assert(v->nsubs == n);
assert((size_t) wanted < v->nsubs);
}
/*
* freev - free vars struct's substructures where necessary
*
* Optionally does error-number setting, and always returns error code
* (if any), to make error-handling code terser.
*/
static int
freev(struct vars *v,
int err)
{
if (v->re != NULL)
rfree(v->re);
if (v->subs != v->sub10)
FREE(v->subs);
if (v->nfa != NULL)
freenfa(v->nfa);
if (v->tree != NULL)
freesubre(v, v->tree);
if (v->treechain != NULL)
cleanst(v);
if (v->cv != NULL)
freecvec(v->cv);
if (v->cv2 != NULL)
freecvec(v->cv2);
if (v->lacons != NULL)
freelacons(v->lacons, v->nlacons);
ERR(err); /* nop if err==0 */
return v->err;
}
/*
* makesearch - turn an NFA into a search NFA (implicit prepend of .*?)
* NFA must have been optimize()d already.
*/
static void
makesearch(struct vars *v,
struct nfa *nfa)
{
struct arc *a;
struct arc *b;
struct state *pre = nfa->pre;
struct state *s;
struct state *s2;
struct state *slist;
/* no loops are needed if it's anchored */
for (a = pre->outs; a != NULL; a = a->outchain)
{
assert(a->type == PLAIN);
if (a->co != nfa->bos[0] && a->co != nfa->bos[1])
break;
}
if (a != NULL)
{
/* add implicit .* in front */
rainbow(nfa, v->cm, PLAIN, COLORLESS, pre, pre);
/* and ^* and \A* too -- not always necessary, but harmless */
newarc(nfa, PLAIN, nfa->bos[0], pre, pre);
newarc(nfa, PLAIN, nfa->bos[1], pre, pre);
}
/*
* Now here's the subtle part. Because many REs have no lookback
* constraints, often knowing when you were in the pre state tells you
* little; it's the next state(s) that are informative. But some of them
* may have other inarcs, i.e. it may be possible to make actual progress
* and then return to one of them. We must de-optimize such cases,
* splitting each such state into progress and no-progress states.
*/
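/*
 * Concretely (an illustrative sketch, not tied to any particular RE):
 * for each such state s we make a twin s2 carrying a copy of s's
 * out-arcs, then move every in-arc of s that does not come from pre
 * over to s2.  Afterward s is reachable only from pre ("no progress
 * yet") while s2 is reached only after consuming input ("progress
 * made"), so the two situations are distinguishable again.
 */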
/* first, make a list of the states reachable from pre and elsewhere */
slist = NULL;
for (a = pre->outs; a != NULL; a = a->outchain)
{
s = a->to;
for (b = s->ins; b != NULL; b = b->inchain)
{
if (b->from != pre)
break;
}
/*
* We want to mark states as being in the list already by having non
* NULL tmp fields, but we can't just store the old slist value in tmp
* because that doesn't work for the first such state. Instead, the
* first list entry gets its own address in tmp.
*/
if (b != NULL && s->tmp == NULL)
{
s->tmp = (slist != NULL) ? slist : s;
slist = s;
}
}
/* do the splits */
for (s = slist; s != NULL; s = s2)
{
s2 = newstate(nfa);
NOERR();
copyouts(nfa, s, s2);
NOERR();
for (a = s->ins; a != NULL; a = b)
{
b = a->inchain;
if (a->from != pre)
{
cparc(nfa, a, a->from, s2);
freearc(nfa, a);
}
}
s2 = (s->tmp != s) ? s->tmp : NULL;
s->tmp = NULL; /* clean up while we're at it */
}
}
/*
* parse - parse an RE
*
* This is actually just the top level, which parses a bunch of branches
* tied together with '|'. They appear in the tree as the left children
* of a chain of '|' subres.
*/
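/*
 * For example (a sketch of the resulting tree shape): "a|b|c" becomes a
 * right-leaning chain of '|' nodes, each holding one branch as its left
 * child:
 *
 *		'|' --left--> subtree for "a"
 *		  \--right--> '|' --left--> subtree for "b"
 *					   \--right--> '|' --left--> subtree for "c"
 */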
static struct subre *
parse(struct vars *v,
int stopper, /* EOS or ')' */
int type, /* LACON (lookaround subRE) or PLAIN */
struct state *init, /* initial state */
struct state *final) /* final state */
{
struct state *left; /* scaffolding for branch */
struct state *right;
struct subre *branches; /* top level */
struct subre *branch; /* current branch */
struct subre *t; /* temporary */
int firstbranch; /* is this the first branch? */
assert(stopper == ')' || stopper == EOS);
branches = subre(v, '|', LONGER, init, final);
NOERRN();
branch = branches;
firstbranch = 1;
do
{ /* a branch */
if (!firstbranch)
{
/* need a place to hang it */
branch->right = subre(v, '|', LONGER, init, final);
NOERRN();
branch = branch->right;
}
firstbranch = 0;
left = newstate(v->nfa);
right = newstate(v->nfa);
NOERRN();
EMPTYARC(init, left);
EMPTYARC(right, final);
NOERRN();
branch->left = parsebranch(v, stopper, type, left, right, 0);
NOERRN();
branch->flags |= UP(branch->flags | branch->left->flags);
if ((branch->flags & ~branches->flags) != 0) /* new flags */
for (t = branches; t != branch; t = t->right)
t->flags |= branch->flags;
} while (EAT('|'));
assert(SEE(stopper) || SEE(EOS));
if (!SEE(stopper))
{
assert(stopper == ')' && SEE(EOS));
ERR(REG_EPAREN);
}
/* optimize out simple cases */
if (branch == branches)
{ /* only one branch */
assert(branch->right == NULL);
t = branch->left;
branch->left = NULL;
freesubre(v, branches);
branches = t;
}
else if (!MESSY(branches->flags))
{ /* no interesting innards */
freesubre(v, branches->left);
branches->left = NULL;
freesubre(v, branches->right);
branches->right = NULL;
branches->op = '=';
}
return branches;
}
/*
* parsebranch - parse one branch of an RE
*
* This mostly manages concatenation, working closely with parseqatom().
* Concatenated things are bundled up as much as possible, with separate
* ',' nodes introduced only when necessary due to substructure.
*/
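/*
 * Roughly (an illustrative sketch): a branch like "abc" can stay a
 * single '=' node covering the whole string, while "a(b)c" needs ','
 * concatenation nodes so the capturing subexpression gets a subtree of
 * its own.
 */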
static struct subre *
parsebranch(struct vars *v,
int stopper, /* EOS or ')' */
int type, /* LACON (lookaround subRE) or PLAIN */
struct state *left, /* leftmost state */
struct state *right, /* rightmost state */
2003-08-04 02:43:34 +02:00
int partial) /* is this only part of a branch? */
{
struct state *lp; /* left end of current construct */
int seencontent; /* is there anything in this branch yet? */
struct subre *t;
lp = left;
seencontent = 0;
t = subre(v, '=', 0, left, right); /* op '=' is tentative */
NOERRN();
while (!SEE('|') && !SEE(stopper) && !SEE(EOS))
{
if (seencontent)
{ /* implicit concat operator */
lp = newstate(v->nfa);
NOERRN();
moveins(v->nfa, right, lp);
}
seencontent = 1;
/* NB, recursion in parseqatom() may swallow rest of branch */
parseqatom(v, stopper, type, lp, right, t);
NOERRN();
}
if (!seencontent)
{ /* empty branch */
if (!partial)
NOTE(REG_UUNSPEC);
assert(lp == left);
EMPTYARC(left, right);
}
return t;
}
/*
* parseqatom - parse one quantified atom or constraint of an RE
*
* The bookkeeping near the end cooperates very closely with parsebranch();
* in particular, it contains a recursion that can involve parsing the rest
* of the branch, making this function's name somewhat inaccurate.
*/
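/*
 * For example (a sketch): when parsing "(a)*bc", handling the quantified
 * capture re-enters parsebranch() on the trailing "bc", so by the time
 * this function returns, the remainder of the branch may already have
 * been consumed.
 */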
static void
2017-06-21 20:39:04 +02:00
parseqatom(struct vars *v,
int stopper, /* EOS or ')' */
int type, /* LACON (lookaround subRE) or PLAIN */
struct state *lp, /* left state to hang it on */
struct state *rp, /* right state to hang it on */
struct subre *top) /* subtree top */
{
struct state *s; /* temporaries for new states */
struct state *s2;
#define ARCV(t, val) newarc(v->nfa, t, val, lp, rp)
int m,
n;
struct subre *atom; /* atom's subtree */
struct subre *t;
int cap; /* capturing parens? */
int latype; /* lookaround constraint type */
int subno; /* capturing-parens or backref number */
int atomtype;
int qprefer; /* quantifier short/long preference */
int f;
struct subre **atomp; /* where the pointer to atom is */
/* initial bookkeeping */
atom = NULL;
assert(lp->nouts == 0); /* must string new code */
assert(rp->nins == 0); /* between lp and rp */
subno = 0; /* just to shut lint up */
/* an atom or constraint... */
atomtype = v->nexttype;
switch (atomtype)
{
/* first, constraints, which end by returning */
case '^':
ARCV('^', 1);
if (v->cflags & REG_NLANCH)
ARCV(BEHIND, v->nlcolor);
NEXT();
return;
break;
case '$':
ARCV('$', 1);
if (v->cflags & REG_NLANCH)
ARCV(AHEAD, v->nlcolor);
NEXT();
return;
break;
case SBEGIN:
ARCV('^', 1); /* BOL */
ARCV('^', 0); /* or BOS */
NEXT();
return;
break;
case SEND:
ARCV('$', 1); /* EOL */
ARCV('$', 0); /* or EOS */
NEXT();
return;
break;
case '<':
wordchrs(v); /* does NEXT() */
s = newstate(v->nfa);
NOERR();
nonword(v, BEHIND, lp, s);
word(v, AHEAD, s, rp);
return;
break;
case '>':
wordchrs(v); /* does NEXT() */
s = newstate(v->nfa);
NOERR();
word(v, BEHIND, lp, s);
nonword(v, AHEAD, s, rp);
return;
break;
case WBDRY:
wordchrs(v); /* does NEXT() */
s = newstate(v->nfa);
NOERR();
nonword(v, BEHIND, lp, s);
word(v, AHEAD, s, rp);
s = newstate(v->nfa);
NOERR();
word(v, BEHIND, lp, s);
nonword(v, AHEAD, s, rp);
return;
break;
case NWBDRY:
wordchrs(v); /* does NEXT() */
s = newstate(v->nfa);
NOERR();
word(v, BEHIND, lp, s);
word(v, AHEAD, s, rp);
s = newstate(v->nfa);
NOERR();
nonword(v, BEHIND, lp, s);
nonword(v, AHEAD, s, rp);
return;
break;
case LACON: /* lookaround constraint */
latype = v->nextvalue;
NEXT();
s = newstate(v->nfa);
s2 = newstate(v->nfa);
NOERR();
t = parse(v, ')', LACON, s, s2);
freesubre(v, t); /* internal structure irrelevant */
NOERR();
assert(SEE(')'));
NEXT();
processlacon(v, s, s2, latype, lp, rp);
return;
break;
/* then errors, to get them out of the way */
case '*':
case '+':
case '?':
case '{':
ERR(REG_BADRPT);
return;
break;
default:
ERR(REG_ASSERT);
return;
break;
/* then plain characters, and minor variants on that theme */
case ')': /* unbalanced paren */
if ((v->cflags & REG_ADVANCED) != REG_EXTENDED)
{
ERR(REG_EPAREN);
return;
}
/* legal in EREs due to specification botch */
NOTE(REG_UPBOTCH);
/* fall through into case PLAIN */
/* FALLTHROUGH */
case PLAIN:
onechr(v, v->nextvalue, lp, rp);
okcolors(v->nfa, v->cm);
NOERR();
NEXT();
break;
case '[':
if (v->nextvalue == 1)
bracket(v, lp, rp);
else
cbracket(v, lp, rp);
assert(SEE(']') || ISERR());
NEXT();
break;
case '.':
rainbow(v->nfa, v->cm, PLAIN,
(v->cflags & REG_NLSTOP) ? v->nlcolor : COLORLESS,
lp, rp);
NEXT();
break;
/* and finally the ugly stuff */
case '(': /* value flags as capturing or non */
cap = (type == LACON) ? 0 : v->nextvalue;
if (cap)
{
v->nsubexp++;
subno = v->nsubexp;
if ((size_t) subno >= v->nsubs)
moresubs(v, subno);
assert((size_t) subno < v->nsubs);
}
else
atomtype = PLAIN; /* something that's not '(' */
NEXT();
/* need new endpoints because tree will contain pointers */
s = newstate(v->nfa);
s2 = newstate(v->nfa);
NOERR();
EMPTYARC(lp, s);
EMPTYARC(s2, rp);
NOERR();
atom = parse(v, ')', type, s, s2);
assert(SEE(')') || ISERR());
NEXT();
NOERR();
if (cap)
{
v->subs[subno] = atom;
t = subre(v, '(', atom->flags | CAP, lp, rp);
NOERR();
t->subno = subno;
t->left = atom;
atom = t;
}
/* postpone everything else pending possible {0} */
break;
case BACKREF: /* the Feature From The Black Lagoon */
INSIST(type != LACON, REG_ESUBREG);
INSIST(v->nextvalue < v->nsubs, REG_ESUBREG);
INSIST(v->subs[v->nextvalue] != NULL, REG_ESUBREG);
NOERR();
assert(v->nextvalue > 0);
atom = subre(v, 'b', BACKR, lp, rp);
NOERR();
subno = v->nextvalue;
atom->subno = subno;
EMPTYARC(lp, rp); /* temporarily, so there's something */
NEXT();
break;
}
/* ...and an atom may be followed by a quantifier */
switch (v->nexttype)
{
case '*':
m = 0;
n = DUPINF;
qprefer = (v->nextvalue) ? LONGER : SHORTER;
NEXT();
break;
case '+':
m = 1;
n = DUPINF;
qprefer = (v->nextvalue) ? LONGER : SHORTER;
NEXT();
break;
case '?':
m = 0;
n = 1;
qprefer = (v->nextvalue) ? LONGER : SHORTER;
NEXT();
break;
case '{':
NEXT();
m = scannum(v);
if (EAT(','))
{
if (SEE(DIGIT))
n = scannum(v);
else
n = DUPINF;
if (m > n)
{
ERR(REG_BADBR);
return;
}
/* {m,n} exercises preference, even if it's {m,m} */
qprefer = (v->nextvalue) ? LONGER : SHORTER;
}
else
{
n = m;
/* {m} passes operand's preference through */
qprefer = 0;
}
if (!SEE('}'))
{ /* catches errors too */
ERR(REG_BADBR);
return;
}
NEXT();
break;
default: /* no quantifier */
m = n = 1;
qprefer = 0;
break;
}
/* annoying special case: {0} or {0,0} cancels everything */
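/* (e.g. "x{0}y" is in effect compiled the same as plain "y") */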
if (m == 0 && n == 0)
{
if (atom != NULL)
freesubre(v, atom);
if (atomtype == '(')
v->subs[subno] = NULL;
delsub(v->nfa, lp, rp);
EMPTYARC(lp, rp);
return;
}
/* if not a messy case, avoid hard part */
assert(!MESSY(top->flags));
f = top->flags | qprefer | ((atom != NULL) ? atom->flags : 0);
if (atomtype != '(' && atomtype != BACKREF && !MESSY(UP(f)))
{
if (!(m == 1 && n == 1))
repeat(v, lp, rp, m, n);
if (atom != NULL)
freesubre(v, atom);
top->flags = f;
return;
}
/*
* hard part: something messy
*
* That is, capturing parens, back reference, short/long clash, or an atom
* with substructure containing one of those.
*/
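/*
 * Concrete cases that land here include "(a)*" (capturing parens),
 * "\1{2,3}" (a back reference), and a branch such as "a*?b*" whose
 * adjacent quantifiers have clashing short/long preferences (MIXED).
 */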
/* now we'll need a subre for the contents even if they're boring */
if (atom == NULL)
{
atom = subre(v, '=', 0, lp, rp);
NOERR();
}
/*----------
* Prepare a general-purpose state skeleton.
*
* In the no-backrefs case, we want this:
*
* [lp] ---> [s] ---prefix---> [begin] ---atom---> [end] ---rest---> [rp]
*
* where prefix is some repetitions of atom. In the general case we need
*
* [lp] ---> [s] ---iterator---> [s2] ---rest---> [rp]
*
* where the iterator wraps around [begin] ---atom---> [end]
*
* We make the s state here for both cases; s2 is made below if needed
*----------
*/
s = newstate(v->nfa); /* first, new endpoints for the atom */
s2 = newstate(v->nfa);
NOERR();
moveouts(v->nfa, lp, s);
moveins(v->nfa, rp, s2);
NOERR();
atom->begin = s;
atom->end = s2;
s = newstate(v->nfa); /* set up starting state */
NOERR();
EMPTYARC(lp, s);
NOERR();
/* break remaining subRE into x{...} and what follows */
t = subre(v, '.', COMBINE(qprefer, atom->flags), lp, rp);
NOERR();
t->left = atom;
atomp = &t->left;
/* here we should recurse... but we must postpone that to the end */
/* split top into prefix and remaining */
assert(top->op == '=' && top->left == NULL && top->right == NULL);
top->left = subre(v, '=', top->flags, top->begin, lp);
NOERR();
top->op = '.';
top->right = t;
/* if it's a backref, now is the time to replicate the subNFA */
if (atomtype == BACKREF)
{
assert(atom->begin->nouts == 1); /* just the EMPTY */
delsub(v->nfa, atom->begin, atom->end);
assert(v->subs[subno] != NULL);
/*
* And here's why the recursion got postponed: it must wait until the
* skeleton is filled in, because it may hit a backref that wants to
* copy the filled-in skeleton.
*/
dupnfa(v->nfa, v->subs[subno]->begin, v->subs[subno]->end,
atom->begin, atom->end);
NOERR();
}
/*
* It's quantifier time. If the atom is just a backref, we'll let it deal
* with quantifiers internally.
*/
if (atomtype == BACKREF)
{
/* special case: backrefs have internal quantifiers */
EMPTYARC(s, atom->begin); /* empty prefix */
/* just stuff everything into atom */
repeat(v, atom->begin, atom->end, m, n);
atom->min = (short) m;
atom->max = (short) n;
atom->flags |= COMBINE(qprefer, atom->flags);
/* rest of branch can be strung starting from atom->end */
s2 = atom->end;
}
else if (m == 1 && n == 1 &&
(qprefer == 0 ||
(atom->flags & (LONGER | SHORTER | MIXED)) == 0 ||
qprefer == (atom->flags & (LONGER | SHORTER | MIXED))))
{
/* no/vacuous quantifier: done */
EMPTYARC(s, atom->begin); /* empty prefix */
/* rest of branch can be strung starting from atom->end */
s2 = atom->end;
}
else if (m > 0 && !(atom->flags & BACKR))
{
/*
* If there's no backrefs involved, we can turn x{m,n} into
* x{m-1,n-1}x, with capturing parens in only the second x. This is
* valid because we only care about capturing matches from the final
* iteration of the quantifier. It's a win because we can implement
* the backref-free left side as a plain DFA node, since we don't
* really care where its submatches are.
*/
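/* e.g. "(x){3,5}" becomes x{2,4} followed by the capturing (x) */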
dupnfa(v->nfa, atom->begin, atom->end, s, atom->begin);
assert(m >= 1 && m != DUPINF && n >= 1);
repeat(v, s, atom->begin, m - 1, (n == DUPINF) ? n : n - 1);
f = COMBINE(qprefer, atom->flags);
t = subre(v, '.', f, s, atom->end); /* prefix and atom */
NOERR();
t->left = subre(v, '=', PREF(f), s, atom->begin);
NOERR();
t->right = atom;
*atomp = t;
/* rest of branch can be strung starting from atom->end */
s2 = atom->end;
}
else
{
/* general case: need an iteration node */
s2 = newstate(v->nfa);
NOERR();
moveouts(v->nfa, atom->end, s2);
NOERR();
dupnfa(v->nfa, atom->begin, atom->end, s, s2);
repeat(v, s, s2, m, n);
f = COMBINE(qprefer, atom->flags);
t = subre(v, '*', f, s, s2);
NOERR();
t->min = (short) m;
t->max = (short) n;
t->left = atom;
*atomp = t;
/* rest of branch is to be strung from iteration's end state */
}
/* and finally, look after that postponed recursion */
t = top->right;
if (!(SEE('|') || SEE(stopper) || SEE(EOS)))
t->right = parsebranch(v, stopper, type, s2, rp, 1);
else
{
EMPTYARC(s2, rp);
t->right = subre(v, '=', 0, s2, rp);
}
NOERR();
assert(SEE('|') || SEE(stopper) || SEE(EOS));
t->flags |= COMBINE(t->flags, t->right->flags);
top->flags |= COMBINE(top->flags, t->flags);
}
/*
* nonword - generate arcs for non-word-character ahead or behind
*/
static void
nonword(struct vars *v,
int dir, /* AHEAD or BEHIND */
struct state *lp,
struct state *rp)
{
int anchor = (dir == AHEAD) ? '$' : '^';
assert(dir == AHEAD || dir == BEHIND);
newarc(v->nfa, anchor, 1, lp, rp);
newarc(v->nfa, anchor, 0, lp, rp);
colorcomplement(v->nfa, v->cm, dir, v->wordchrs, lp, rp);
/* (no need for special attention to \n) */
}
/*
* word - generate arcs for word character ahead or behind
*/
static void
word(struct vars *v,
int dir, /* AHEAD or BEHIND */
struct state *lp,
struct state *rp)
{
assert(dir == AHEAD || dir == BEHIND);
cloneouts(v->nfa, v->wordchrs, lp, rp, dir);
/* (no need for special attention to \n) */
}
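/*
 * Together these two helpers implement the word-boundary constraints
 * above: e.g. the '<' case in parseqatom() strings nonword(BEHIND) and
 * then word(AHEAD) through an intermediate state, so that a word must
 * begin exactly at the current point.
 */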
/*
* scannum - scan a number
*/
static int /* value, <= DUPMAX */
scannum(struct vars *v)
{
int n = 0;
while (SEE(DIGIT) && n < DUPMAX)
{
n = n * 10 + v->nextvalue;
NEXT();
}
if (SEE(DIGIT) || n > DUPMAX)
{
ERR(REG_BADBR);
return 0;
}
return n;
}
/*
* repeat - replicate subNFA for quantifiers
*
* The sub-NFA strung from lp to rp is modified to represent m to n
* repetitions of its initial contents.
*
* The duplication sequences used here are chosen carefully so that any
* pointers starting out pointing into the subexpression end up pointing into
* the last occurrence. (Note that it may not be strung between the same
* left and right end states, however!) This used to be important for the
* subRE tree, although the important bits are now handled by the in-line
* code in parse(), and when this is called, it doesn't matter any more.
*/
static void
repeat(struct vars *v,
struct state *lp,
struct state *rp,
int m,
int n)
{
#define SOME 2
#define INF 3
#define PAIR(x, y) ((x)*4 + (y))
#define REDUCE(x) ( ((x) == DUPINF) ? INF : (((x) > 1) ? SOME : (x)) )
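/*
 * For example, x{2,5} REDUCEs to PAIR(SOME, SOME) and is unrolled below
 * as x{1,4}x; x{0,3} becomes PAIR(0, SOME), built as x{1,3}|; and x{3,}
 * becomes PAIR(SOME, INF), built as x{2,}x.  Each recursive call shrinks
 * m and/or n, so the recursion always reaches one of the base cases.
 */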
const int rm = REDUCE(m);
const int rn = REDUCE(n);
struct state *s;
struct state *s2;
switch (PAIR(rm, rn))
{
case PAIR(0, 0): /* empty string */
delsub(v->nfa, lp, rp);
EMPTYARC(lp, rp);
break;
case PAIR(0, 1): /* do as x| */
EMPTYARC(lp, rp);
break;
case PAIR(0, SOME): /* do as x{1,n}| */
repeat(v, lp, rp, 1, n);
NOERR();
EMPTYARC(lp, rp);
break;
case PAIR(0, INF): /* loop x around */
s = newstate(v->nfa);
NOERR();
moveouts(v->nfa, lp, s);
moveins(v->nfa, rp, s);
EMPTYARC(lp, s);
EMPTYARC(s, rp);
break;
case PAIR(1, 1): /* no action required */
break;
case PAIR(1, SOME): /* do as x{0,n-1}x = (x{1,n-1}|)x */
s = newstate(v->nfa);
NOERR();
moveouts(v->nfa, lp, s);
dupnfa(v->nfa, s, rp, lp, s);
NOERR();
repeat(v, lp, s, 1, n - 1);
NOERR();
EMPTYARC(lp, s);
break;
case PAIR(1, INF): /* add loopback arc */
s = newstate(v->nfa);
s2 = newstate(v->nfa);
NOERR();
moveouts(v->nfa, lp, s);
moveins(v->nfa, rp, s2);
EMPTYARC(lp, s);
EMPTYARC(s2, rp);
EMPTYARC(s2, s);
break;
case PAIR(SOME, SOME): /* do as x{m-1,n-1}x */
s = newstate(v->nfa);
NOERR();
moveouts(v->nfa, lp, s);
dupnfa(v->nfa, s, rp, lp, s);
NOERR();
repeat(v, lp, s, m - 1, n - 1);
break;
case PAIR(SOME, INF): /* do as x{m-1,}x */
s = newstate(v->nfa);
NOERR();
moveouts(v->nfa, lp, s);
dupnfa(v->nfa, s, rp, lp, s);
NOERR();
repeat(v, lp, s, m - 1, n);
break;
default:
ERR(REG_ASSERT);
break;
}
}
/*
* bracket - handle non-complemented bracket expression
* Also called from cbracket for complemented bracket expressions.
*/
static void
bracket(struct vars *v,
struct state *lp,
struct state *rp)
{
assert(SEE('['));
NEXT();
while (!SEE(']') && !SEE(EOS))
brackpart(v, lp, rp);
assert(SEE(']') || ISERR());
okcolors(v->nfa, v->cm);
}
/*
* cbracket - handle complemented bracket expression
* We do it by calling bracket() with dummy endpoints, and then complementing
* the result. The alternative would be to invoke rainbow(), and then delete
* arcs as the b.e. is seen... but that gets messy.
*/
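/*
 * For example, "[^abc]" is compiled by building "[abc]" between the two
 * dummy states (first adding a newline arc to that set under REG_NLSTOP,
 * so the complement won't match newline) and then adding, from lp to rp,
 * arcs for the complement of every color the bracket used.
 */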
static void
cbracket(struct vars *v,
struct state *lp,
struct state *rp)
{
struct state *left = newstate(v->nfa);
struct state *right = newstate(v->nfa);
NOERR();
bracket(v, left, right);
if (v->cflags & REG_NLSTOP)
newarc(v->nfa, PLAIN, v->nlcolor, left, right);
NOERR();
assert(lp->nouts == 0); /* all outarcs will be ours */
/*
* Easy part of complementing, and all there is to do since the MCCE code
* was removed.
*/
colorcomplement(v->nfa, v->cm, PLAIN, left, lp, rp);
NOERR();
dropstate(v->nfa, left);
assert(right->nins == 0);
freestate(v->nfa, right);
}
/*
* brackpart - handle one item (or range) within a bracket expression
*/
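/*
 * Such an item may be a plain chr ("a"), a range ("a-z"), a collating
 * element ("[.x.]"), an equivalence class ("[=a=]"), or a character
 * class ("[:alpha:]"), per the cases below.
 */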
static void
brackpart(struct vars *v,
struct state *lp,
struct state *rp)
{
chr startc;
chr endc;
struct cvec *cv;
const chr *startp;
const chr *endp;
chr c[1];
/* parse something, get rid of special cases, take shortcuts */
switch (v->nexttype)
{
case RANGE: /* a-b-c or other botch */
ERR(REG_ERANGE);
return;
break;
case PLAIN:
c[0] = v->nextvalue;
NEXT();
/* shortcut for ordinary chr (not range) */
if (!SEE(RANGE))
{
onechr(v, c[0], lp, rp);
return;
}
startc = element(v, c, c + 1);
NOERR();
break;
case COLLEL:
startp = v->now;
endp = scanplain(v);
INSIST(startp < endp, REG_ECOLLATE);
NOERR();
startc = element(v, startp, endp);
NOERR();
break;
case ECLASS:
startp = v->now;
endp = scanplain(v);
INSIST(startp < endp, REG_ECOLLATE);
NOERR();
startc = element(v, startp, endp);
NOERR();
cv = eclass(v, startc, (v->cflags & REG_ICASE));
NOERR();
subcolorcvec(v, cv, lp, rp);
return;
break;
case CCLASS:
startp = v->now;
endp = scanplain(v);
INSIST(startp < endp, REG_ECTYPE);
NOERR();
cv = cclass(v, startp, endp, (v->cflags & REG_ICASE));
NOERR();
subcolorcvec(v, cv, lp, rp);
return;
break;
default:
ERR(REG_ASSERT);
return;
break;
}
if (SEE(RANGE))
{
NEXT();
switch (v->nexttype)
{
case PLAIN:
case RANGE:
c[0] = v->nextvalue;
NEXT();
endc = element(v, c, c + 1);
NOERR();
break;
case COLLEL:
startp = v->now;
endp = scanplain(v);
INSIST(startp < endp, REG_ECOLLATE);
NOERR();
endc = element(v, startp, endp);
NOERR();
break;
default:
ERR(REG_ERANGE);
return;
break;
}
}
else
endc = startc;
/*
* Ranges are unportable. Actually, standard C does guarantee that digits
* are contiguous, but making that an exception is just too complicated.
*/
if (startc != endc)
NOTE(REG_UUNPORT);
cv = range(v, startc, endc, (v->cflags & REG_ICASE));
NOERR();
subcolorcvec(v, cv, lp, rp);
}
/*
* scanplain - scan PLAIN contents of [. etc.
*
* Certain bits of trickery in lex.c know that this code does not try
* to look past the final bracket of the [. etc.
*/
static const chr * /* just after end of sequence */
scanplain(struct vars *v)
{
const chr *endp;
assert(SEE(COLLEL) || SEE(ECLASS) || SEE(CCLASS));
NEXT();
endp = v->now;
while (SEE(PLAIN))
{
endp = v->now;
NEXT();
}
assert(SEE(END) || ISERR());
NEXT();
return endp;
}
/*
* onechr - fill in arcs for a plain character, and possible case complements
* This is mostly a shortcut for efficient handling of the common case.
*/
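/*
 * E.g. without REG_ICASE, 'a' gets a single subcolor arc; with
 * REG_ICASE, allcases() supplies a cvec covering both 'a' and 'A' and
 * the general cvec machinery handles it.
 */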
static void
onechr(struct vars *v,
chr c,
struct state *lp,
struct state *rp)
{
if (!(v->cflags & REG_ICASE))
{
color lastsubcolor = COLORLESS;
subcoloronechr(v, c, lp, rp, &lastsubcolor);
return;
}
/* rats, need general case anyway... */
subcolorcvec(v, allcases(v, c), lp, rp);
}
/*
* wordchrs - set up word-chr list for word-boundary stuff, if needed
*
* The list is kept as a bunch of arcs between two dummy states; it's
* disposed of by the unreachable-states sweep in NFA optimization.
* Does NEXT(). Must not be called from any unusual lexical context.
* This should be reconciled with the \w etc. handling in lex.c, and
* should be cleaned up to reduce dependencies on input scanning.
*/
static void
wordchrs(struct vars *v)
{
struct state *left;
struct state *right;
if (v->wordchrs != NULL)
{
NEXT(); /* for consistency */
return;
}
left = newstate(v->nfa);
right = newstate(v->nfa);
NOERR();
/* fine point: implemented with [::], and lexer will set REG_ULOCALE */
lexword(v);
NEXT();
assert(v->savenow != NULL && SEE('['));
bracket(v, left, right);
assert((v->savenow != NULL && SEE(']')) || ISERR());
NEXT();
NOERR();
v->wordchrs = left;
}
/*
* processlacon - generate the NFA representation of a LACON
*
* In the general case this is just newlacon() + newarc(), but some cases
* can be optimized.
*/
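/*
 * For instance, a positive lookahead whose sub-RE is a single plain
 * color arc (a lone chr or bracket expression, e.g. "(?=x)") collapses
 * to an ordinary AHEAD arc, bypassing the comparatively expensive LACON
 * machinery at match time.
 */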
static void
processlacon(struct vars *v,
struct state *begin, /* start of parsed LACON sub-re */
struct state *end, /* end of parsed LACON sub-re */
int latype,
struct state *lp, /* left state to hang it on */
struct state *rp) /* right state to hang it on */
{
struct state *s1;
int n;
/*
* Check for lookaround RE consisting of a single plain color arc (or set
* of arcs); this would typically be a simple chr or a bracket expression.
*/
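	/*
	 * For instance (informal sketch): the body of "(?=x)" or "(?=[a-z])"
	 * compiles to nothing but parallel plain arcs of one color set, and
	 * single_color_transition() then returns a state whose out-arcs are
	 * just those plain arcs; for anything more complex it returns NULL
	 * and we fall through to the general LACON case below.
	 */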
s1 = single_color_transition(begin, end);
switch (latype)
{
case LATYPE_AHEAD_POS:
/* If lookahead RE is just colorset C, convert to AHEAD(C) */
if (s1 != NULL)
{
cloneouts(v->nfa, s1, lp, rp, AHEAD);
return;
}
break;
case LATYPE_AHEAD_NEG:
/* If lookahead RE is just colorset C, convert to AHEAD(^C)|$ */
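			/* (the extra '$' arcs accept end of string, where no chr follows) */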
if (s1 != NULL)
{
colorcomplement(v->nfa, v->cm, AHEAD, s1, lp, rp);
newarc(v->nfa, '$', 1, lp, rp);
newarc(v->nfa, '$', 0, lp, rp);
return;
}
break;
case LATYPE_BEHIND_POS:
/* If lookbehind RE is just colorset C, convert to BEHIND(C) */
if (s1 != NULL)
{
cloneouts(v->nfa, s1, lp, rp, BEHIND);
return;
}
break;
case LATYPE_BEHIND_NEG:
/* If lookbehind RE is just colorset C, convert to BEHIND(^C)|^ */
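			/* (the extra '^' arcs accept start of string, where no chr precedes) */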
if (s1 != NULL)
{
colorcomplement(v->nfa, v->cm, BEHIND, s1, lp, rp);
newarc(v->nfa, '^', 1, lp, rp);
newarc(v->nfa, '^', 0, lp, rp);
return;
}
break;
default:
assert(NOTREACHED);
}
/* General case: we need a LACON subre and arc */
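	/*
	 * (The arc's value n indexes v->lacons; the constraint's sub-NFA is not
	 * checked here but later, at match time.)
	 */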
n = newlacon(v, begin, end, latype);
newarc(v->nfa, LACON, n, lp, rp);
}
/*
* subre - allocate a subre
*/
static struct subre *
subre(struct vars *v,
int op,
int flags,
struct state *begin,
struct state *end)
{
struct subre *ret = v->treefree;
/*
* Checking for stack overflow here is sufficient to protect parse() and
* its recursive subroutines.
*/
if (STACK_TOO_DEEP(v->re))
{
ERR(REG_ETOOBIG);
return NULL;
}
if (ret != NULL)
v->treefree = ret->left;
else
{
ret = (struct subre *) MALLOC(sizeof(struct subre));
if (ret == NULL)
{
ERR(REG_ESPACE);
return NULL;
}
ret->chain = v->treechain;
v->treechain = ret;
}
assert(strchr("=b|.*(", op) != NULL);
ret->op = op;
ret->flags = flags;
ret->id = 0; /* will be assigned later */
ret->subno = 0;
ret->min = ret->max = 1;
ret->left = NULL;
ret->right = NULL;
ret->begin = begin;
ret->end = end;
ZAPCNFA(ret->cnfa);
return ret;
}
/*
* freesubre - free a subRE subtree
*/
static void
freesubre(struct vars *v, /* might be NULL */
struct subre *sr)
{
if (sr == NULL)
return;
if (sr->left != NULL)
freesubre(v, sr->left);
if (sr->right != NULL)
freesubre(v, sr->right);
freesrnode(v, sr);
}
/*
* freesrnode - free one node in a subRE subtree
*/
static void
freesrnode(struct vars *v, /* might be NULL */
struct subre *sr)
{
if (sr == NULL)
return;
if (!NULLCNFA(sr->cnfa))
freecnfa(&sr->cnfa);
sr->flags = 0;
if (v != NULL && v->treechain != NULL)
{
/* we're still parsing, maybe we can reuse the subre */
sr->left = v->treefree;
v->treefree = sr;
}
else
FREE(sr);
}
/*
* optst - optimize a subRE subtree
*/
static void
optst(struct vars *v,
struct subre *t)
{
/*
* DGP (2007-11-13): I assume it was the programmer's intent to eventually
* come back and add code to optimize subRE trees, but the routine coded
* just spends effort traversing the tree and doing nothing. We can do
* nothing with less effort.
*/
return;
}
/*
* numst - number tree nodes (assigning "id" indexes)
*/
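/*
 * (Assignment is preorder: a node takes the next id, then its left subtree
 * is numbered, then its right.)
 */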
static int /* next number */
numst(struct subre *t,
int start) /* starting point for subtree numbers */
{
int i;
assert(t != NULL);
i = start;
t->id = (short) i++;
if (t->left != NULL)
i = numst(t->left, i);
if (t->right != NULL)
i = numst(t->right, i);
return i;
}
/*
* markst - mark tree nodes as INUSE
*
* Note: this is a great deal more subtle than it looks. During initial
* parsing of a regex, all subres are linked into the treechain list;
* discarded ones are also linked into the treefree list for possible reuse.
* After we are done creating all subres required for a regex, we run markst()
* then cleanst(), which results in discarding all subres not reachable from
* v->tree. We then clear v->treechain, indicating that subres must be found
* by descending from v->tree. This changes the behavior of freesubre(): it
* will henceforth FREE() unwanted subres rather than sticking them into the
* treefree list. (Doing that any earlier would result in dangling links in
* the treechain list.) This all means that freev() will clean up correctly
* if invoked before or after markst()+cleanst(); but it would not work if
* called partway through this state conversion, so we mustn't error out
* in or between these two functions.
*/
static void
markst(struct subre *t)
{
assert(t != NULL);
t->flags |= INUSE;
if (t->left != NULL)
markst(t->left);
if (t->right != NULL)
markst(t->right);
}
/*
* cleanst - free any tree nodes not marked INUSE
*/
static void
cleanst(struct vars *v)
{
struct subre *t;
struct subre *next;
for (t = v->treechain; t != NULL; t = next)
{
next = t->chain;
if (!(t->flags & INUSE))
FREE(t);
}
v->treechain = NULL;
v->treefree = NULL; /* just on general principles */
}
/*
* nfatree - turn a subRE subtree into a tree of compacted NFAs
*/
static long /* optimize results from top node */
nfatree(struct vars *v,
struct subre *t,
FILE *f) /* for debug output */
{
assert(t != NULL && t->begin != NULL);
if (t->left != NULL)
(DISCARD) nfatree(v, t->left, f);
if (t->right != NULL)
(DISCARD) nfatree(v, t->right, f);
return nfanode(v, t, 0, f);
}
/*
* nfanode - do one NFA for nfatree or lacons
*
* If converttosearch is true, apply makesearch() to the NFA.
*/
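/*
 * (In outline: duplicate the subtree's piece of the master NFA into a
 * fresh NFA, optimize it, optionally convert it for searching, and
 * compact the result into t->cnfa.)
 */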
static long /* optimize results */
nfanode(struct vars *v,
struct subre *t,
int converttosearch,
FILE *f) /* for debug output */
{
struct nfa *nfa;
long ret = 0;
assert(t->begin != NULL);
#ifdef REG_DEBUG
if (f != NULL)
{
char idbuf[50];
fprintf(f, "\n\n\n========= TREE NODE %s ==========\n",
stid(t, idbuf, sizeof(idbuf)));
}
#endif
nfa = newnfa(v, v->cm, v->nfa);
NOERRZ();
dupnfa(nfa, t->begin, t->end, nfa->init, nfa->final);
if (!ISERR())
specialcolors(nfa);
if (!ISERR())
ret = optimize(nfa, f);
if (converttosearch && !ISERR())
makesearch(v, nfa);
if (!ISERR())
compact(nfa, &t->cnfa);
freenfa(nfa);
return ret;
}
/*
* newlacon - allocate a lookaround-constraint subRE
*/
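/*
 * (Lacon numbers are 1-based: slot 0 of v->lacons is allocated but never
 * used, so a valid lacon number is always nonzero.)
 */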
static int /* lacon number */
newlacon(struct vars *v,
struct state *begin,
struct state *end,
int latype)
{
int n;
struct subre *newlacons;
struct subre *sub;
if (v->nlacons == 0)
{
n = 1; /* skip 0th */
newlacons = (struct subre *) MALLOC(2 * sizeof(struct subre));
}
else
{
n = v->nlacons;
newlacons = (struct subre *) REALLOC(v->lacons,
(n + 1) * sizeof(struct subre));
}
if (newlacons == NULL)
{
ERR(REG_ESPACE);
return 0;
}
v->lacons = newlacons;
v->nlacons = n + 1;
sub = &v->lacons[n];
sub->begin = begin;
sub->end = end;
sub->subno = latype;
ZAPCNFA(sub->cnfa);
return n;
}
/*
* freelacons - free lookaround-constraint subRE vector
*/
static void
freelacons(struct subre *subs,
int n)
{
struct subre *sub;
int i;
assert(n > 0);
for (sub = subs + 1, i = n - 1; i > 0; sub++, i--) /* no 0th */
if (!NULLCNFA(sub->cnfa))
freecnfa(&sub->cnfa);
FREE(subs);
}
/*
* rfree - free a whole RE (insides of regfree)
*/
static void
rfree(regex_t *re)
{
struct guts *g;
if (re == NULL || re->re_magic != REMAGIC)
return;
re->re_magic = 0; /* invalidate RE */
g = (struct guts *) re->re_guts;
re->re_guts = NULL;
re->re_fns = NULL;
if (g != NULL)
{
g->magic = 0;
freecm(&g->cmap);
if (g->tree != NULL)
freesubre((struct vars *) NULL, g->tree);
if (g->lacons != NULL)
freelacons(g->lacons, g->nlacons);
if (!NULLCNFA(g->search))
freecnfa(&g->search);
FREE(g);
}
}
/*
* rcancelrequested - check for external request to cancel regex operation
*
* Return nonzero to fail the operation with error code REG_CANCEL,
* zero to keep going
*
* The current implementation is Postgres-specific. If we ever get around
* to splitting the regex code out as a standalone library, there will need
* to be some API to let applications define a callback function for this.
*/
static int
rcancelrequested(void)
{
return InterruptPending && (QueryCancelPending || ProcDiePending);
}
/*
* rstacktoodeep - check for stack getting dangerously deep
*
* Return nonzero to fail the operation with error code REG_ETOOBIG,
* zero to keep going
*
* The current implementation is Postgres-specific. If we ever get around
* to splitting the regex code out as a standalone library, there will need
* to be some API to let applications define a callback function for this.
*/
static int
rstacktoodeep(void)
{
return stack_is_too_deep();
}
#ifdef REG_DEBUG
/*
* dump - dump an RE in human-readable form
*/
static void
dump(regex_t *re,
FILE *f)
{
struct guts *g;
int i;
if (re->re_magic != REMAGIC)
fprintf(f, "bad magic number (0x%x not 0x%x)\n", re->re_magic,
REMAGIC);
if (re->re_guts == NULL)
{
fprintf(f, "NULL guts!!!\n");
return;
}
g = (struct guts *) re->re_guts;
if (g->magic != GUTSMAGIC)
fprintf(f, "bad guts magic number (0x%x not 0x%x)\n", g->magic,
GUTSMAGIC);
fprintf(f, "\n\n\n========= DUMP ==========\n");
fprintf(f, "nsub %d, info 0%lo, csize %d, ntree %d\n",
(int) re->re_nsub, re->re_info, re->re_csize, g->ntree);
dumpcolors(&g->cmap, f);
if (!NULLCNFA(g->search))
{
fprintf(f, "\nsearch:\n");
dumpcnfa(&g->search, f);
}
for (i = 1; i < g->nlacons; i++)
{
struct subre *lasub = &g->lacons[i];
const char *latype;
switch (lasub->subno)
{
case LATYPE_AHEAD_POS:
latype = "positive lookahead";
break;
case LATYPE_AHEAD_NEG:
latype = "negative lookahead";
break;
case LATYPE_BEHIND_POS:
latype = "positive lookbehind";
break;
case LATYPE_BEHIND_NEG:
latype = "negative lookbehind";
break;
default:
latype = "???";
break;
}
fprintf(f, "\nla%d (%s):\n", i, latype);
dumpcnfa(&lasub->cnfa, f);
}
fprintf(f, "\n");
dumpst(g->tree, f, 0);
}
/*
* dumpst - dump a subRE tree
*/
static void
dumpst(struct subre *t,
FILE *f,
int nfapresent) /* is the original NFA still around? */
{
if (t == NULL)
fprintf(f, "null tree\n");
else
stdump(t, f, nfapresent);
fflush(f);
}
/*
* stdump - recursive guts of dumpst
*/
static void
stdump(struct subre *t,
FILE *f,
int nfapresent) /* is the original NFA still around? */
{
char idbuf[50];
fprintf(f, "%s. `%c'", stid(t, idbuf, sizeof(idbuf)), t->op);
if (t->flags & LONGER)
fprintf(f, " longest");
if (t->flags & SHORTER)
fprintf(f, " shortest");
if (t->flags & MIXED)
fprintf(f, " hasmixed");
if (t->flags & CAP)
fprintf(f, " hascapture");
if (t->flags & BACKR)
fprintf(f, " hasbackref");
if (!(t->flags & INUSE))
fprintf(f, " UNUSED");
if (t->subno != 0)
fprintf(f, " (#%d)", t->subno);
if (t->min != 1 || t->max != 1)
{
fprintf(f, " {%d,", t->min);
if (t->max != DUPINF)
fprintf(f, "%d", t->max);
fprintf(f, "}");
}
if (nfapresent)
fprintf(f, " %ld-%ld", (long) t->begin->no, (long) t->end->no);
if (t->left != NULL)
fprintf(f, " L:%s", stid(t->left, idbuf, sizeof(idbuf)));
if (t->right != NULL)
fprintf(f, " R:%s", stid(t->right, idbuf, sizeof(idbuf)));
if (!NULLCNFA(t->cnfa))
{
fprintf(f, "\n");
dumpcnfa(&t->cnfa, f);
}
fprintf(f, "\n");
if (t->left != NULL)
stdump(t->left, f, nfapresent);
if (t->right != NULL)
stdump(t->right, f, nfapresent);
}
/*
* stid - identify a subtree node for dumping
*/
static const char * /* points to buf or constant string */
stid(struct subre *t,
char *buf,
size_t bufsize)
{
/* big enough for hex int or decimal t->id? */
if (bufsize < sizeof(void *) * 2 + 3 || bufsize < sizeof(t->id) * 3 + 1)
return "unable";
if (t->id != 0)
sprintf(buf, "%d", t->id);
else
sprintf(buf, "%p", t);
return buf;
}
#endif /* REG_DEBUG */
#include "regc_lex.c"
#include "regc_color.c"
#include "regc_nfa.c"
#include "regc_cvec.c"
#include "regc_pg_locale.c"
#include "regc_locale.c"