Re: huge price database question..
From | Jim Green
---|---
Subject | Re: huge price database question..
Date |
Msg-id | CACAe89x_XSh=ZYZA9esjXCUQE5T3PLXKph5HgSFsat8DeDSWyQ@mail.gmail.com
In response to | Re: huge price database question.. (Michael Nolan <htfoot@gmail.com>)
List | pgsql-general
On 20 March 2012 19:45, Michael Nolan <htfoot@gmail.com> wrote:
>
>> right now I have about 7000 tables for individual stocks and I use
>> perl to do inserts; it's very slow. I would like to use COPY or another
>> bulk loading tool to load the daily raw gz data, but I need to split
>> the file into per-stock files first before I do the bulk loading. I
>> consider this a bit messy.
>
> Are you committing each insert separately or doing them in batches using
> 'begin transaction' and 'commit'?
>
> I have a database that I do inserts in from a text file. Doing a commit
> every 1000 transactions cut the time by over 90%.

I use perl dbi and prepared statements. I also set

    shared_buffers = 4GB
    work_mem = 1GB
    synchronous_commit = off
    effective_cache_size = 8GB
    fsync = off
    full_page_writes = off

when I do the inserts. Thanks!

> --
> Mike Nolan
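
For reference, a minimal Perl DBI sketch of the batched-commit approach Mike
describes: AutoCommit off, one prepared INSERT, and a commit every 1000 rows.
The connection string, table name "stock_prices", its columns, and the
comma-separated input format are assumptions for illustration, not details
from the thread:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # AutoCommit off so rows accumulate in one transaction per batch.
    my $dbh = DBI->connect('dbi:Pg:dbname=prices', 'user', 'password',
                           { AutoCommit => 0, RaiseError => 1 });

    # Hypothetical table and columns -- adjust to the real schema.
    my $sth = $dbh->prepare(
        'INSERT INTO stock_prices
             (symbol, trade_date, open, high, low, close, volume)
         VALUES (?, ?, ?, ?, ?, ?, ?)');

    my $count = 0;
    while (my $line = <STDIN>) {
        chomp $line;
        my @fields = split /,/, $line;   # assumes comma-separated daily rows
        $sth->execute(@fields);
        # Commit every 1000 rows instead of once per insert.
        $dbh->commit if ++$count % 1000 == 0;
    }
    $dbh->commit;                        # flush the final partial batch
    $dbh->disconnect;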
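
On the COPY side, DBD::Pg exposes COPY FROM STDIN through pg_putcopydata /
pg_putcopyend, so the raw daily gz file could in principle be streamed into a
single symbol-keyed table without splitting it per stock first. This is only a
sketch under the same assumed schema and a CSV input format; the one-table
layout is an assumption, not what the 7000-table setup actually uses:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=prices', 'user', 'password',
                           { AutoCommit => 0, RaiseError => 1 });

    # Decompress on the fly and feed the lines straight to COPY.
    open my $in, '-|', 'gzip', '-dc', 'daily.csv.gz' or die "gzip: $!";

    $dbh->do('COPY stock_prices
                  (symbol, trade_date, open, high, low, close, volume)
              FROM STDIN WITH (FORMAT csv)');
    while (my $line = <$in>) {
        $dbh->pg_putcopydata($line);     # each line already ends in "\n"
    }
    $dbh->pg_putcopyend();
    $dbh->commit;
    $dbh->disconnect;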