Re: WIP Incremental JSON Parser

From: Andrew Dunstan
Subject: Re: WIP Incremental JSON Parser
Date:
Msg-id: CAD5tBcLi2ffZkktV2qrsKSBykE-N8CiYgrfbv0vZ-F7=xLFeqw@mail.gmail.com
In reply to: Re: WIP Incremental JSON Parser  (Jacob Champion <jacob.champion@enterprisedb.com>)
Responses: Re: WIP Incremental JSON Parser  (Andrew Dunstan <andrew@dunslane.net>)
List: pgsql-hackers


On Mon, Mar 18, 2024 at 3:35 PM Jacob Champion <jacob.champion@enterprisedb.com> wrote:
On Mon, Mar 18, 2024 at 3:32 AM Andrew Dunstan <andrew@dunslane.net> wrote:
> Not very easily. But I think and hope I've fixed the issue you've identified above about returning before lex->token_start is properly set.
>
>  Attached is a new set of patches that does that and is updated for the json_errdetail() changes.

Thanks!

>    ++           * Normally token_start would be ptok->data, but it could be later,
>    ++           * see json_lex_string's handling of invalid escapes.
>     +           */
>    -+          lex->token_start = ptok->data;
>    ++          lex->token_start = dummy_lex.token_start;
>     +          lex->token_terminator = ptok->data + ptok->len;

By the same token (ha), the lex->token_terminator needs to be updated
from dummy_lex for some error paths. (IIUC, on success, the
token_terminator should always point to the end of the buffer. If it's
not possible to combine the two code paths, maybe it'd be good to
check that and assert/error out if we've incorrectly pulled additional
data into the partial token.)


Yes, good point. Will take a look at that.
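
Probably something along these lines (a sketch only, reusing the names from the quoted hunk; "partial_result" is just a stand-in local, and the exact call site in the incremental lexer may differ):

    partial_result = json_lex_string(&dummy_lex);

    /* copy both positions back from the dummy lexer */
    lex->token_start = dummy_lex.token_start;
    lex->token_terminator = dummy_lex.token_terminator;

    if (partial_result == JSON_SUCCESS)
    {
        /*
         * On success the lex must have consumed exactly the reassembled
         * partial token; anything else means we pulled extra data in.
         */
        Assert(lex->token_terminator == ptok->data + ptok->len);
    }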


With the incremental parser, I think prev_token_terminator is not
likely to be safe to use except in very specific circumstances, since
it could be pointing into a stale chunk. Some documentation around how
to use that safely in a semantic action would be good.

Quite right. It's not safe. Should we ensure it's set to something like NULL or -1?

Also, where do you think we should put a warning about it?
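
If we go the NULL route, it could be as simple as this at the point where a new chunk is handed to the parser (sketch only; the exact placement would need care):

    /*
     * Poison prev_token_terminator whenever a new chunk arrives, so a
     * semantic action that dereferences it fails fast instead of
     * silently reading from a stale chunk.
     */
    lex->prev_token_terminator = NULL;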


It looks like some of the newly added error handling paths cannot be
hit, because the production stack makes it logically impossible to get
there. (For example, if it takes a successfully lexed comma to
transition into JSON_PROD_MORE_ARRAY_ELEMENTS to begin with, then when
we pull that production's JSON_TOKEN_COMMA off the stack, we can't
somehow fail to match that same comma.) Assuming I haven't missed a
different way to get into that situation, could the "impossible" cases
have assert calls added?

Good idea.
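
Maybe something like this in the token-matching branch (illustrative only; the control flow is my guess at the table-driven loop, with report_parse_error() and JSON_PARSE_ARRAY_NEXT borrowed from the existing RD parser):

    if (top == tok)
    {
        /* the predicted terminal matches the lexed token: consume it */
    }
    else
    {
        /*
         * JSON_PROD_MORE_ARRAY_ELEMENTS is only predicted after a comma
         * has been successfully lexed, so its JSON_TOKEN_COMMA entry can
         * never fail to match here.
         */
        Assert(top != JSON_TOKEN_COMMA);
        return report_parse_error(JSON_PARSE_ARRAY_NEXT, lex);
    }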


I've attached two diffs. One is the group of tests I've been using
locally (called 002_inline.pl; I replaced the existing inline tests
with it), and the other is a set of potential fixes to get those tests
green.

Thanks. Here's a patch set that incorporates your two patches.

It also removes the frontend exits I had. In the case of stack depth, we follow the example of the RD parser and only check stack depth for backend code. In the case of the check that the lexer is set up for incremental parsing, the exit is replaced by an Assert. That means your test for an over-nested array doesn't work any more, so I have commented it out.
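
Roughly, the two replacements look like this (placement is approximate, and "incremental" is my shorthand for however the lexer records that it was set up for incremental parsing):

    #ifndef FRONTEND
        /* like the RD parser, check stack depth only in backend builds */
        check_stack_depth();
    #endif

    /* misuse of the incremental API is a programming error */
    Assert(lex->incremental);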


cheers

andrew


Attachments
