Re: Analyzing foreign tables & memory problems
From | Tom Lane
---|---
Subject | Re: Analyzing foreign tables & memory problems
Date | |
Msg-id | 8329.1335799405@sss.pgh.pa.us
In reply to | Re: Analyzing foreign tables & memory problems ("Albe Laurenz" <laurenz.albe@wien.gv.at>)
List | pgsql-hackers
"Albe Laurenz" <laurenz.albe@wien.gv.at> writes:
> Tom Lane wrote:
>> I'm fairly skeptical that this is a real problem, and would prefer not
>> to complicate wrappers until we see some evidence from the field that
>> it's worth worrying about.

> If I have a table with 100000 rows and default_statistics_target
> at 100, then a sample of 30000 rows will be taken.
> If each row contains binary data of 1MB (an image), then the
> data structure returned will use about 30 GB of memory, which
> will probably exceed maintenance_work_mem.

> Or is there a flaw in my reasoning?

Only that I don't believe this is a real-world scenario for a foreign
table. If you have a foreign table in which all, or even many, of the
rows are that wide, its performance is going to suck so badly that
you'll soon look for a different schema design anyway. I don't want to
complicate FDWs for this until it's an actual bottleneck in real
applications, which it may never be, and certainly won't be until we've
gone through a few rounds of performance refinement for basic
operations.

			regards, tom lane
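(The arithmetic behind the scenario above can be sketched as follows. It assumes ANALYZE's sample size of 300 * default_statistics_target rows, which is the formula std_typanalyze uses; the variable names are illustrative only.)

```python
# Sketch of the memory estimate in Laurenz's example.
# PostgreSQL's ANALYZE targets a sample of 300 * statistics_target rows.
default_statistics_target = 100
sample_rows = 300 * default_statistics_target        # 30000 rows sampled

row_width_bytes = 1 * 1024 * 1024                    # 1 MB of binary data per row
sample_size_gib = sample_rows * row_width_bytes / 1024**3

print(sample_rows)                                   # 30000
print(round(sample_size_gib, 1))                     # ~29.3 GiB, i.e. the "about 30 GB" cited
```

Note the sample size depends only on the statistics target, not on the table's row count, so the 100000-row figure doesn't change the estimate once the table exceeds 30000 rows.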