Re: huge price database question..
| From | Jim Green |
|---|---|
| Subject | Re: huge price database question.. |
| Date | |
| Msg-id | CACAe89yC1XoDfah1TZsZMn+tg1DROSOJtdA6gb4kL2oOU2q49g@mail.gmail.com |
| In reply to | Re: huge price database question.. (Jim Green <student.northwestern@gmail.com>) |
| Responses | Re: huge price database question.., Re: huge price database question.. |
| List | pgsql-general |
On 20 March 2012 22:08, Jim Green <student.northwestern@gmail.com> wrote:
> On 20 March 2012 22:03, David Kerr <dmk@mr-paradox.net> wrote:
>
>> \copy on 1.2 million rows should only take a minute or two; you could
>> make that table "unlogged" as well to speed it up more. If you could
>> truncate / drop / create / load / then index the table each time,
>> then you'll get the best throughput.
>
> Thanks. Could you explain the "truncate / drop / create / load / then
> index the table each time then you'll get the best throughput" part,
> or point me to some docs?

Also, if I use copy, I would be tempted to go the one-table route; otherwise I would need to parse my raw daily file, split it into per-symbol files, and copy each one into an individual per-symbol table (which does not sound very efficient)..

> Jim
>
>> Dave
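For concreteness, here is a minimal sketch of the drop / create / load / index cycle David describes, not taken from the thread: the `daily_prices` table, its columns, and the input file `/tmp/prices.csv` are all illustrative assumptions.

```sql
-- Recreate the table UNLOGGED each load: skipping WAL speeds up the bulk
-- insert, at the cost that the table is emptied after a crash (fine for
-- data you can simply reload from the raw file).
DROP TABLE IF EXISTS daily_prices;
CREATE UNLOGGED TABLE daily_prices (
    symbol     text    NOT NULL,
    trade_date date    NOT NULL,
    open       numeric,
    high       numeric,
    low        numeric,
    close      numeric,
    volume     bigint
);

-- Load the whole raw daily file in one pass; \copy is the psql
-- client-side variant of COPY, so the file lives on the client machine.
\copy daily_prices FROM '/tmp/prices.csv' WITH (FORMAT csv)

-- Index only after the load, so the index is built once in bulk rather
-- than maintained row by row during the insert.
CREATE INDEX daily_prices_symbol_date_idx ON daily_prices (symbol, trade_date);
```

The truncate variant keeps the table (after dropping its index) and runs `TRUNCATE daily_prices;` before each load instead of dropping and recreating it; either way the point is the same: get the rows in first, build the index once afterwards.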