Big copy slowdown
From | Brian Hurt |
---|---|
Subject | Big copy slowdown |
Date | |
Msg-id | 471CFDA6.8050003@janestcapital.com |
Replies | Re: Big copy slowdown |
List | pgsql-novice |
I'm not sure if this is the right forum for this question, so I'll start here and escalate as necessary. One of the things I do a lot is copy large tables from one database to another. I started out using pg_dump, but that requires saving the whole table to an intermediate file (or playing games with pipes). Instead, I wrote a quick script that opens a cursor at the source and starts schlepping data from one to the other.

For "small" tables (say, a million rows or less), this works just fine, but when I try to copy a bigger table (tens of millions of rows), performance continually drops off, with the copy becoming slower and slower. On 8.3-beta1, I see memory utilization skyrocketing (specifically the non-shared resident space of the postmaster daemon), driving the whole system deep into swap sooner or later and bringing things to a screeching halt. On 8.1.4, I don't see the memory utilization problem; the daemon's memory usage pegs out at just a little larger than the shared buffers size, i.e. what I'd expect, but I still see the slowdown.

It is definitely the copy that is the problem. I've tried a variation of my script with the copy commented out (just throwing the data away) to test whether it's the cursor, and I get 20K+ rows/second from the cursor. I've also tried breaking the copy up, finishing the current COPY command and starting a new one every million rows or so, and that doesn't help. I've also fiddled with transactions, and with rate-limiting the copy, and neither of those helps.

Is this a bug in Postgres? If not, as I assume, what should I be doing to make this work fast?

Brian
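For readers unfamiliar with the cursor-to-COPY approach described above, here is a minimal sketch in Python. The original script's language and details are not given in the post, so everything here is hypothetical: the helper formats rows as COPY text format, and the wrapper class exposes a row iterator through the `read()` interface that a driver-level `COPY ... FROM STDIN` call (e.g. psycopg2's `copy_expert`) expects, so rows stream through without the whole table being buffered client-side.

```python
def row_to_copy_line(row):
    """Format one result tuple as a line of COPY text format:
    tab-separated fields, \\N for NULL, backslash escapes for
    tab/newline/carriage-return/backslash."""
    def field(v):
        if v is None:
            return r"\N"
        s = str(v)
        return (s.replace("\\", "\\\\")
                 .replace("\t", "\\t")
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))
    return "\t".join(field(v) for v in row) + "\n"

class RowStream:
    """File-like wrapper over a row iterator, so COPY ... FROM STDIN
    can pull rows incrementally rather than materializing the table."""
    def __init__(self, rows):
        self._lines = (row_to_copy_line(r).encode() for r in rows)
        self._buf = b""

    def read(self, size=-1):
        # Accumulate encoded lines until we can satisfy the request
        # (or the iterator is exhausted); return b"" at end of stream.
        while size < 0 or len(self._buf) < size:
            try:
                self._buf += next(self._lines)
            except StopIteration:
                break
        if size < 0:
            out, self._buf = self._buf, b""
        else:
            out, self._buf = self._buf[:size], self._buf[size:]
        return out
```

With psycopg2 (illustrative names, not from the post), the source side would use a *named* cursor so the server streams rows instead of sending the whole result set at once, roughly: `src_cur = src.cursor(name="big_copy"); src_cur.execute("SELECT * FROM t"); dst.cursor().copy_expert("COPY t FROM STDIN", RowStream(src_cur))`.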