Re: are there any methods to disable updating index before inserting large number tuples?
| From | Andres Freund |
|---|---|
| Subject | Re: are there any methods to disable updating index before inserting large number tuples? |
| Date | |
| Msg-id | 201111221953.36954.andres@anarazel.de |
| In reply to | Re: are there any methods to disable updating index before inserting large number tuples? (John R Pierce <pierce@hogranch.com>) |
| Responses | Re: are there any methods to disable updating index before inserting large number tuples? |
| List | pgsql-general |
Hi,

On Tuesday 22 Nov 2011 19:01:02 John R Pierce wrote:
> On 11/22/11 7:52 AM, Andrew Sullivan wrote:
> > But I think performance on that table is going to be pretty bad. I
> > suspect that COPY is going to be your friend here.
>
> indeed. 20M rows/hour is 5500 rows/second. you'd better have a
> seriously fast disk system, say, 20 15k RPM SAS drives in a RAID10 with
> a decent SAS raid controller that has 1GB of writeback battery-or-flash
> backed cache.

20M rows inserted inside one transaction doesn't cause *that* many writes. I guess the bigger problem will not be the actual disk throughput from the heap/WAL writes, but the index size once the table gets bigger. As soon as the indexes grow beyond the available shared buffers, performance will suffer quite a bit.

For that you probably need a sensible partitioning strategy, which is likely to be important anyway so you can throw away old data efficiently.

Using COPY is advantageous compared to using INSERT because it can do some operations in bulk that INSERT cannot.

How wide will those rows be, how long do you plan to store the data, and how are you querying it?

Andres
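A minimal sketch of the two suggestions above (COPY for bulk loading, and inheritance-based partitioning so old data can be dropped cheaply). The table and column names (measurements, recorded_at, sensor_id, reading) and the file path are hypothetical, not taken from this thread:

    -- Hypothetical parent table.
    CREATE TABLE measurements (
        recorded_at timestamptz NOT NULL,
        sensor_id   integer     NOT NULL,
        reading     numeric
    );

    -- One child table per month via inheritance; the CHECK constraint
    -- lets constraint_exclusion skip irrelevant partitions at query time.
    CREATE TABLE measurements_2011_11 (
        CHECK (recorded_at >= DATE '2011-11-01'
           AND recorded_at <  DATE '2011-12-01')
    ) INHERITS (measurements);

    -- Bulk-load a batch into the current partition with COPY instead of
    -- row-by-row INSERTs.
    COPY measurements_2011_11 (recorded_at, sensor_id, reading)
        FROM '/tmp/batch.csv' WITH CSV;

    -- Throwing away old data becomes a cheap DROP instead of a huge DELETE
    -- (assuming an older partition measurements_2011_10 was created the
    -- same way).
    DROP TABLE measurements_2011_10;

Dropping a retired child table avoids the bulk DELETE and the associated index churn, which is what makes partitioning attractive for time-based retention.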