Re: Performance problem with 50,000,000 rows

From: Mark Kirkwood
Subject: Re: Performance problem with 50,000,000 rows
Date:
Msg-id: 01101117575900.01048@spikey.slithery.org
In response to: Performance problem with 50,000,000 rows  (David Link <dlink@soundscan.com>)
List: pgsql-general
Previously:
>
>Table sizes .....
>
>bk_inv  : 46,790,877
>bk_title:  2,311,710
>
>Query :
>
>select   i.isbn,
>         t.vendor,
>         i.store,
>         i.qty
>from     bk_inv i,
>         bk_title t
>where    i.isbn = t.isbn
>and      t.vendor = '01672708' ;
>

It might be worth putting an index on bk_title.vendor; then the scan of this
table will quickly find the appropriate rows (assuming vendor is selective).
The join to bk_inv can then use the bk_inv.isbn index as before, but hopefully
on a much smaller set of rows, so the planner might be able to use a nested
loop from bk_title -> bk_inv instead of a huge merge join.
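
Something along these lines might do it (the index name is just one I made up,
and a VACUUM ANALYZE afterwards gives the planner fresh statistics so it can
see that the nested loop is cheap):

create index bk_title_vendor_idx on bk_title (vendor);
vacuum analyze bk_title;

explain
select   i.isbn,
         t.vendor,
         i.store,
         i.qty
from     bk_inv i,
         bk_title t
where    i.isbn = t.isbn
and      t.vendor = '01672708' ;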

It might also be worth playing with sort_mem and shared_buffers (if you have
not done so already), since you have a reasonable amount of memory.
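
For example (the values are only illustrative and depend on your RAM;
sort_mem is per backend and in kilobytes, while shared_buffers has to go in
postgresql.conf and needs a postmaster restart):

set sort_mem = 16384;            -- per-session sort memory, roughly 16 MB
-- and in postgresql.conf (restart required), something like:
-- shared_buffers = 4096         -- 8 kB pages, roughly 32 MB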

good luck

Mark


