Re: BUG #15922: Simple select with multiple exists filters returns duplicates from a primary key field
| From | Tom Lane |
|---|---|
| Subject | Re: BUG #15922: Simple select with multiple exists filters returns duplicates from a primary key field |
| Date | |
| Msg-id | 31206.1563918579@sss.pgh.pa.us |
| In reply to | RE: BUG #15922: Simple select with multiple exists filters returns duplicates from a primary key field (David Raymond <David.Raymond@tomtom.com>) |
| Responses | RE: BUG #15922: Simple select with multiple exists filters returns duplicates from a primary key field |
| List | pgsql-bugs |
David Raymond <David.Raymond@tomtom.com> writes:

> Update so far: I did manage to go and replace all the UUIDs with random ones and it's still doing it, so I do have a sanitized version now. No real luck with trimming down the record count though. When deleting too many records it would change the query plan to something not broken. Even after replacing the UUIDs and not deleting anything I ran analyze and it came up clean, and I had to vacuum analyze for it to pick the broken plan again. (That example pasted below.) The dump file is at least consistently doing the same thing, where immediately after load the plan chosen gives a consistent answer, but once analyzed it gives the bad duplicates. As it stands the dump file is 130 MB (30 MB zipped), is that too big to send in to you?

Given that the problem seems to be specific to parallel query, likely the reason is that reducing the number of rows brings it below the threshold where the planner wants to use parallel query. So you could probably reduce the parallel-query cost parameters to get a failure with a smaller test case. However, if you don't feel like doing that, that's fine. Please *don't* send a 30MB message to the whole list, but you can send it to me privately.

regards, tom lane
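For readers wanting to reproduce this with a smaller data set, a minimal sketch of the kind of parameter changes Tom suggests is below. The settings are standard PostgreSQL GUCs; the table and column names in the query are placeholders, since the actual schema exists only in the reporter's dump file.

```sql
-- Hypothetical sketch: shrink the parallel-query cost parameters so the
-- planner is willing to choose a parallel plan even on a small test case.
SET parallel_setup_cost = 0;             -- default 1000
SET parallel_tuple_cost = 0;             -- default 0.1
SET min_parallel_table_scan_size = 0;    -- default 8MB
SET min_parallel_index_scan_size = 0;    -- default 512kB
SET max_parallel_workers_per_gather = 4; -- default 2

-- Then re-run the failing query and check that the plan still uses Gather.
-- Placeholder schema; the real multi-EXISTS query is in the bug report.
EXPLAIN (ANALYZE, VERBOSE)
SELECT t.id
FROM   some_table t
WHERE  EXISTS (SELECT 1 FROM child_a a WHERE a.parent_id = t.id)
AND    EXISTS (SELECT 1 FROM child_b b WHERE b.parent_id = t.id);
```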