Re: pg_upgrade failing for 200+ million Large Objects

From: Kumar, Sachin
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date:
Msg-id 240D05EC-8B28-4112-BEAB-85ECBAF3F871@amazon.com
In response to: Re: pg_upgrade failing for 200+ million Large Objects  (Jan Wieck <jan@wi3ck.info>)
Responses: Re: pg_upgrade failing for 200+ million Large Objects  (Tom Lane <tgl@sss.pgh.pa.us>)
List: pgsql-hackers
> I have updated the patch to use a heuristic: during pg_upgrade we count
> large objects per database. During pg_restore execution, if a database's
> large-object count is greater than LARGE_OBJECTS_THRESOLD (1k), we use
> --restore-blob-batch-size.


I think both SECTION_DATA and SECTION_POST_DATA can be parallelized by pg_restore, so instead of
counting only large objects for the heuristic, we can count SECTION_DATA + SECTION_POST_DATA entries.
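The idea above can be sketched as a small decision helper. This is an illustrative sketch only, not the actual patch: the names `count_parallelizable_entries`, `should_use_blob_batch`, and `RESTORE_PARALLEL_THRESHOLD` are hypothetical, and the 1k cutoff is the value mentioned earlier in the thread.

```c
#include <stdbool.h>

/* Assumed threshold, following the 1k figure quoted in the thread. */
#define RESTORE_PARALLEL_THRESHOLD 1000

/* Simplified stand-in for pg_dump's archive TOC sections. */
typedef enum
{
    SECTION_PRE_DATA,
    SECTION_DATA,
    SECTION_POST_DATA
} teSection;

typedef struct
{
    teSection section;
} TocEntry;

/*
 * Count the TOC entries pg_restore can run in parallel.  Per the
 * suggestion above, the heuristic keys on SECTION_DATA plus
 * SECTION_POST_DATA entries rather than on large objects alone.
 */
static int
count_parallelizable_entries(const TocEntry *toc, int ntoc)
{
    int n = 0;

    for (int i = 0; i < ntoc; i++)
    {
        if (toc[i].section == SECTION_DATA ||
            toc[i].section == SECTION_POST_DATA)
            n++;
    }
    return n;
}

/* Decide whether batching is worth enabling for this database. */
static bool
should_use_blob_batch(const TocEntry *toc, int ntoc)
{
    return count_parallelizable_entries(toc, ntoc) > RESTORE_PARALLEL_THRESHOLD;
}
```

A caller in pg_upgrade would run this per database and append the batching option to the pg_restore command line only when it returns true.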

Regards
Sachin

