Re: Performance large tables.
From | Roger Hand
---|---
Subject | Re: Performance large tables.
Date |
Msg-id | DB28E9B548192448A4E8C8A3C1B1E475FC317A@sj1-exch-01.us.corp.kailea.com
Reply to | Performance large tables. (Benjamin Arai <barai@cs.ucr.edu>)
List | pgsql-general
Benjamin Arai wrote on Saturday, December 10, 2005 3:37 PM
> ... On the other hand there is a weekly update (this is the
> problem) that updates all of the modified records for a bunch of
> financial data such as closes and etc. For the most part they are
> records of the type name, date, value. The update currently takes almost
> two days. The update does deletions, insertions, and updates depending
> on what has happened since the previous week.
>
> For the most part the updates are simple one-liners. I currently commit
> in large batches to increase performance but it still takes a while as
> stated above. From evaluating the computer's performance during an
> update, the system is thrashing both memory and disk.

I experimented with batch size and found that large batches (thousands or tens of thousands) slowed things down in our situation, while using a batch size of around 100 or so sped things up tremendously. Of course, YMMV ...

-Roger
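For illustration, here is a minimal sketch of the batched-commit pattern described above, written in Python with psycopg2. The DSN, the `prices` table, and the shape of the weekly change set are hypothetical placeholders, not anything from the original thread; the point is simply committing every `batch_size` statements rather than once per row or once at the very end.

```python
import psycopg2

def apply_weekly_update(changes, batch_size=100):
    """Apply (op, name, date, value) tuples, committing every batch_size statements."""
    conn = psycopg2.connect("dbname=finance")  # placeholder DSN
    try:
        cur = conn.cursor()
        pending = 0
        for op, name, date, value in changes:
            if op == "delete":
                cur.execute(
                    "DELETE FROM prices WHERE name = %s AND date = %s",
                    (name, date),
                )
            elif op == "insert":
                cur.execute(
                    "INSERT INTO prices (name, date, value) VALUES (%s, %s, %s)",
                    (name, date, value),
                )
            else:  # update
                cur.execute(
                    "UPDATE prices SET value = %s WHERE name = %s AND date = %s",
                    (value, name, date),
                )
            pending += 1
            if pending >= batch_size:
                conn.commit()   # small, frequent commits, per the observation above
                pending = 0
        conn.commit()           # flush the final partial batch
    finally:
        conn.close()
```

The right batch size depends on row width, indexes, and available memory, so it is worth measuring a few values (e.g. 100, 1000, 10000) against your own workload rather than assuming one number.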