Re: Analyzing foreign tables & memory problems
From | Albe Laurenz
---|---
Subject | Re: Analyzing foreign tables & memory problems
Date |
Msg-id | D960CB61B694CF459DCFB4B0128514C2049FCE83@exadv11.host.magwien.gv.at
In reply to | Analyzing foreign tables & memory problems ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
List | pgsql-hackers
Simon Riggs wrote:
>>> During ANALYZE, in analyze.c, the functions compute_minimal_stats
>>> and compute_scalar_stats do not use values whose length exceeds
>>> WIDTH_THRESHOLD (= 1024) for calculating statistics, other than
>>> counting them as "too wide rows" and assuming they are all different.
>>
>>> This works fine with regular tables; values exceeding that threshold
>>> don't get detoasted and won't consume excessive memory.
>>
>>> With foreign tables the situation is different. Even though
>>> values exceeding WIDTH_THRESHOLD won't get used, the complete
>>> rows will be fetched from the foreign table. This can easily
>>> exhaust maintenance_work_mem.
>>
>> I'm fairly skeptical that this is a real problem
>
> AFAIK it's not possible to select all columns from an Oracle database.
> If you use an unqualified LONG column as part of the query, then you
> get an error.
>
> So there are issues with simply requesting data for analysis.

To elaborate on the specific case of Oracle: I have given up on LONG, since
a) it has been deprecated for a long time, and
b) it is not possible to retrieve a LONG column unless you know in advance
   how long it is.

But you can have several BLOB and CLOB columns in a table, each of which
can be arbitrarily large and can lead to the problem I described.

Yours,
Laurenz Albe
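[Editor's note: the following is a minimal standalone sketch of the width-threshold idea described above, not the actual analyze.c code. Only the 1024-byte WIDTH_THRESHOLD figure comes from the mail; the struct, function names, and sample data are invented for illustration.]

```c
/*
 * Sketch: values wider than WIDTH_THRESHOLD are only counted as
 * "too wide" and are not fed into detailed statistics, which is why
 * they need not be detoasted (or, for a foreign table, fetched in full).
 */
#include <stdio.h>
#include <string.h>

#define WIDTH_THRESHOLD 1024    /* threshold mentioned in the mail */

typedef struct
{
    int     sampled;        /* values actually used for statistics */
    int     toowide_cnt;    /* values skipped because they exceed the threshold */
    double  total_width;    /* accumulated width of all examined values */
} StatsAccum;

static void
accumulate_value(StatsAccum *stats, const char *value, size_t width)
{
    stats->total_width += (double) width;

    if (width > WIDTH_THRESHOLD)
    {
        /* Too wide: count it and assume it is distinct, but skip it. */
        stats->toowide_cnt++;
        return;
    }

    /* A real implementation would feed the value into MCV/histogram
     * computation here; this sketch only counts it. */
    stats->sampled++;
    (void) value;
}

int
main(void)
{
    StatsAccum  stats = {0, 0, 0.0};
    char        big[2048];

    memset(big, 'x', sizeof(big) - 1);
    big[sizeof(big) - 1] = '\0';

    accumulate_value(&stats, "short value", strlen("short value"));
    accumulate_value(&stats, big, strlen(big));

    printf("sampled=%d toowide=%d avg_width=%.0f\n",
           stats.sampled, stats.toowide_cnt,
           stats.total_width / (stats.sampled + stats.toowide_cnt));
    return 0;
}
```

The point of the complaint above is that for a foreign table this skip happens only after the whole row has already been transferred, so the memory savings of the threshold are lost.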