Re: tsearch in core patch, for inclusion

| From | Florian G. Pflug |
|---|---|
| Subject | Re: tsearch in core patch, for inclusion |
| Date | |
| Msg-id | 45DC4B2A.7080309@phlo.org |
| In reply to | Re: tsearch in core patch, for inclusion (Markus Schiltknecht <markus@bluegap.ch>) |
| Replies | Re: tsearch in core patch, for inclusion; Re: tsearch in core patch, for inclusion |
| List | pgsql-hackers |
Markus Schiltknecht wrote:
> Hi,
>
> Tom Lane wrote:
>> You mean four different object types. I'm not totally clear on bison's
>> scaling behavior relative to the number of productions
>
> You really want to trade parser performance (which is *very*
> implementation specific) for ease of use?
>
> Bison generates a LALR [1] parser, which depends quite a bit on the
> number of productions. But AFAIK the dependency is mostly on memory
> consumption for the internal symbol sets, not so much on runtime
> complexity. I didn't find hard facts about the runtime complexity of
> LALR, though (pointers are very welcome).

According to http://en.wikipedia.org/wiki/LR_parser, processing one token in
any LR(1) parser in the worst case needs to
a) do a lookup in the action table with the current (state, token) pair,
b) do a lookup in the goto table with a (state, rule) pair, and
c) push one state onto the stack, and pop n states, with n being the number
   of symbols (tokens or other rules) on the right-hand side of a rule.

a) and b) should be O(1). Processing one token pushes at most one state onto
the stack, so overall no more than N states can be popped off again, making
the whole algorithm O(N), with N being the number of tokens in the input
stream. AFAIK the only difference between SLR, LALR and LR(1) lies in the
generation of the goto and action tables. (A minimal sketch of this
per-token loop is appended below.)

> Are there any ongoing efforts to rewrite the parser (i.e. using another
> algorithm, like a recursive descent parser)?

Why would you want to do that?

greetings, Florian Pflug
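For illustration, here is a minimal C sketch of the per-token loop described above: the action-table lookup (step a), the goto-table lookup (step b), and the stack push/pop (step c). This is not bison's generated code; the toy grammar, the table encoding, and all names (`action`, `goto_tab`, `SHIFT`, `REDUCE`, `ACCEPT`) are made up for this example. The O(N) argument falls out of the loop shape: each iteration does constant-time table lookups, and the stack traffic is bounded by the number of pushes.

```c
/*
 * Hand-built sketch of a table-driven LR parse loop (illustrative only).
 *
 * Toy grammar:   rule 1:  S -> 'a' S      rule 2:  S -> 'b'
 */
#include <stdio.h>

enum { TOK_A, TOK_B, TOK_EOF, NTOKENS };   /* terminals */
enum { NT_S, NNONTERM };                   /* nonterminals */

#define SHIFT(s)   (s)        /* positive: shift and go to state s */
#define REDUCE(r)  (-(r))     /* negative: reduce by rule r        */
#define ERR        0
#define ACCEPT     100

/* action[state][lookahead]: the O(1) lookup of step a) */
static const int action[5][NTOKENS] = {
    /* state 0 */ { SHIFT(2), SHIFT(3), ERR       },
    /* state 1 */ { ERR,      ERR,      ACCEPT    },
    /* state 2 */ { SHIFT(2), SHIFT(3), ERR       },
    /* state 3 */ { ERR,      ERR,      REDUCE(2) },
    /* state 4 */ { ERR,      ERR,      REDUCE(1) },
};

/* goto_tab[state][nonterminal]: the O(1) lookup of step b) */
static const int goto_tab[5][NNONTERM] = { {1}, {0}, {4}, {0}, {0} };

/* per rule: left-hand side, and number of right-hand-side symbols
 * (the "n states to pop" of step c); index 0 is unused             */
static const int rule_lhs[] = { 0, NT_S, NT_S };
static const int rule_len[] = { 0, 2,    1    };

static int
parse(const int *tokens)
{
    int stack[256];
    int top = 0;

    stack[top] = 0;                          /* start state */
    for (;;)
    {
        int act = action[stack[top]][*tokens];

        if (act == ACCEPT)
            return 1;
        if (act == ERR)
            return 0;
        if (act > 0)                         /* shift: push one state, consume token */
        {
            stack[++top] = act;
            tokens++;
        }
        else                                 /* reduce: pop rhs_len states, push goto state */
        {
            int r = -act;

            top -= rule_len[r];
            stack[top + 1] = goto_tab[stack[top]][rule_lhs[r]];
            top++;
        }
    }
}

int
main(void)
{
    int input[] = { TOK_A, TOK_A, TOK_B, TOK_EOF };   /* "a a b" */

    printf("parse %s\n", parse(input) ? "succeeded" : "failed");
    return 0;
}
```

The parser bison generates does essentially this inside yyparse(), just with compressed tables, which is why adding productions mostly grows the tables rather than slowing down the per-token work.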