Re: Prevent internal error at concurrent CREATE OR REPLACE FUNCTION
From | Yugo Nagata
---|---
Subject | Re: Prevent internal error at concurrent CREATE OR REPLACE FUNCTION
Date |
Msg-id | 20250704145805.47ceefb0ea848db6654ed3f2@sraoss.co.jp
In reply to | Re: Prevent internal error at concurrent CREATE OR REPLACE FUNCTION (Daniil Davydov <3danissimo@gmail.com>)
List | pgsql-hackers
On Fri, 4 Jul 2025 10:48:26 +0700 Daniil Davydov <3danissimo@gmail.com> wrote:

> Hi,
>
> On Thu, Jul 3, 2025 at 9:18 PM Yugo Nagata <nagata@sraoss.co.jp> wrote:
> >
> > On Tue, 1 Jul 2025 18:56:11 +0700
> > Daniil Davydov <3danissimo@gmail.com> wrote:
> >
> > > CatalogIndexInsert is a rather "popular" function. It can be called in
> > > many different situations, and not all of them are ones in which a unique
> > > constraint violation is an error caused by concurrency.
> > >
> > > For example, with this patch the query "CREATE TYPE mood AS ENUM
> > > ('happy', 'sad', 'happy');" will throw this error:
> > > "operation failed due to a concurrent command".
> > > Of course, that isn't true.
> >
> > You're right — this error is not caused by a concurrent command.
> > However, I believe the error message in cases like creating an ENUM type
> > with duplicate labels could be improved to explain the issue more clearly,
> > rather than just reporting it as a unique constraint violation.
> >
> > In any case, a unique constraint violation in a system catalog is not
> > necessarily due to concurrent DDL. Therefore, the error message shouldn't
> > suggest that as the only cause. Instead, it should clearly report the
> > constraint violation as the primary issue and mention concurrent DDL as
> > just one possible explanation in a HINT.
> >
> > I've updated the patch accordingly to reflect this direction in the error
> > message:
> >
> >     ERROR:  operation failed due to duplicate key object
> >     DETAIL:  Key (proname, proargtypes, pronamespace)=(fnc, , 2200) already exists in unique index pg_proc_proname_args_nsp_index.
> >     HINT:  Another command might have created an object with the same key in a concurrent session.
> >
> > However, as a result, the message ends up being similar to the one
> > currently raised by the btree code, so the overall improvement in
> > user-friendliness might be limited.
>
> Thanks for updating the patch!
> +1 for adding such a hint for this error.
>
> > > That is why I suggested handling unique violations exactly inside
> > > ProcedureCreate, the only place where we can be sure about the reason
> > > for the error.
> >
> > If we were to fix the error message outside of CatalogIndexInsert, we would
> > need to modify CatalogTupleInsert, CatalogTupleUpdate, and related functions
> > to allow them to report the failure appropriately.
> >
> > You suggested using PG_TRY/PG_CATCH, but these do not suppress the error
> > message from the btree code, so this approach does not seem to fully
> > address the issue.
> >
> > Moreover, the affected places are not limited to ProcedureCreate: for
> > example, concurrent CREATE TABLE commands can also lead to the same
> > situation, and possibly other commands as well.
>
> Actually, we can suppress errors from btree (by flushing the error context
> and creating another one).

Right. That was my misunderstanding.

Regards,
Yugo Nagata

--
Yugo Nagata <nagata@sraoss.co.jp>