Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq
From: Cory Nemelka
Subject: Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq
Date:
Msg-id: CAMe5Gn1ksuVLRcR6DPp+nzsEsNZVVRHjMUoM-9t_JBAW2Xg4HA@mail.gmail.com
In reply to: Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq (Aldo Sarmiento <aldo@bigpurpledot.com>)
Responses:
Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq
Re: [ADMIN] Processing very large TEXT columns (300MB+) using C/libpq
List: pgsql-admin
Yes, but I should be able to read them much faster. The psql client can display an 11MB column in a little over a minute, while my C program using the libpq library takes over an hour.

Has anyone run into the same issue who can help me resolve it?
--cnemelka
On Thu, Oct 19, 2017 at 5:20 PM, Aldo Sarmiento <aldo@bigpurpledot.com> wrote:
I believe large columns get put into a TOAST table. Max page size is 8k, so a value that large spans many out-of-line pages that have to be fetched and reassembled: https://www.postgresql.org/docs/9.5/static/storage-toast.html

Aldo Sarmiento
President & CTO
On Thu, Oct 19, 2017 at 2:03 PM, Cory Nemelka <cnemelka@gmail.com> wrote:

I am getting very poor performance using libpq to process very large TEXT columns (300MB+). I suspect it is IO related but can't be sure.

Anyone had experience with the same issue that can help me resolve it?

--cnemelka