Re: poor performance with regexp searches on large tables

From: Kevin Grittner
Subject: Re: poor performance with regexp searches on large tables
Date:
Msg-id: 4E4276E8020000250003FD52@gw.wicourts.gov
In reply to: Re: poor performance with regexp searches on large tables (Grzegorz Blinowski <g.blinowski@gmail.com>)
Responses: Re: poor performance with regexp searches on large tables (2)
List: pgsql-performance
Grzegorz Blinowski <g.blinowski@gmail.com> wrote:

> the problem is not disk transfer/access but rather the way
> Postgres handles regexp queries.

As a diagnostic step, could you figure out some non-regexp way to select about the same percentage of rows with about the same distribution across the table, and compare times? So far I haven't seen any real indication that the time is spent in evaluating the regular expressions, versus just loading pages from the OS into shared buffers and picking out individual tuples and columns from the table.

For all we know, the time is mostly spent decompressing the 2K values. Perhaps you need to save them without compression. If they are big enough after compression to be stored out-of-line by default, you might want to experiment with having them in-line in the tuple.

http://www.postgresql.org/docs/8.4/interactive/storage-toast.html

-Kevin
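The two suggestions above (a non-regexp baseline query, and changing the column's TOAST storage) might be sketched roughly as follows; the table `docs` and column `body` are hypothetical stand-ins for the poster's actual schema, and the pattern is illustrative:

```sql
-- Diagnostic: compare the regexp scan against a non-regexp predicate
-- that selects a similar fraction of rows, to see whether regexp
-- evaluation or plain page/tuple handling dominates the runtime.
EXPLAIN ANALYZE SELECT count(*) FROM docs WHERE body ~ 'some.pattern';
EXPLAIN ANALYZE SELECT count(*) FROM docs WHERE length(body) > 0;

-- Store the column uncompressed (still allows out-of-line storage):
ALTER TABLE docs ALTER COLUMN body SET STORAGE EXTERNAL;

-- Or prefer keeping it compressed but in-line in the tuple:
ALTER TABLE docs ALTER COLUMN body SET STORAGE MAIN;

-- Note: SET STORAGE only affects newly written tuples; existing rows
-- keep their current representation until they are rewritten.
```

`EXTERNAL` and `MAIN` are the standard TOAST storage strategies described in the storage-toast documentation page linked above; which one helps, if either, depends on the actual value sizes and access pattern.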