Re: [GENERAL] Large data and slow queries
| From | Samuel Williams |
|---|---|
| Subject | Re: [GENERAL] Large data and slow queries |
| Date | |
| Msg-id | CAHkN8V-3sycvA37Xv5fOU08UfuKfAD+f-VCRT4kOvC_AhX29jg@mail.gmail.com |
| In reply to | Re: [GENERAL] Large data and slow queries (Scott Marlowe <scott.marlowe@gmail.com>) |
| List | pgsql-general |
Scott, Vick, the vast majority of the data is generic, but there are some specific events we need to look up quickly, probably fewer than a few hundred thousand records. We did evaluate partial indexes vs full indexes. The partial index speeds up our specific queries significantly while taking only a very small amount of space (often < 100MB, compared to a full index, which is up around 20GB at the moment).

On 21 April 2017 at 03:01, Scott Marlowe <scott.marlowe@gmail.com> wrote:
> On Thu, Apr 20, 2017 at 6:30 AM, Vick Khera <vivek@khera.org> wrote:
>> I'm curious why you have so many partial indexes. Are you trying to make
>> custom indexes per query? It seems to me you might want to consider making
>> the indexes general, and remove the redundant ones (that have the same
>> prefix list of indexed fields).
>>
>> Secondly, your table is 102GB. Clearly there's a lot of data here. How many
>> rows does that take? I would further suggest that you partition this table
>> such that there are no more than about 10 million rows per partition (I've
>> done this by using an id % 100 computation). Maybe in your case it makes
>> sense to partition it based on the "what" field, because it appears you are
>> trying to do that with your partial indexes already.
>
> I would think a two-field index might be just as effective and not
> require a lot of maintenance etc.
>
> --
> To understand recursion, one must first understand recursion.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
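P.S. To make the partial vs. full index comparison above concrete, here is a minimal sketch of the pattern we're using. The table and column names (other than the "what" field already discussed) are invented for illustration; our real schema is different.

    -- Hypothetical events table; only a small subset of "what" values is hot.
    CREATE TABLE events (
        id         bigserial PRIMARY KEY,
        what       text NOT NULL,
        user_id    bigint NOT NULL,
        created_at timestamptz NOT NULL DEFAULT now()
    );

    -- Full index: covers every row, so it grows with the whole table.
    CREATE INDEX events_what_user_idx ON events (what, user_id);

    -- Partial index: only the rows matching the predicate, so it stays small.
    CREATE INDEX events_hot_idx ON events (user_id, created_at)
        WHERE what = 'some_specific_event';

    -- The planner can use the partial index when the query's WHERE clause
    -- implies the index predicate, e.g.:
    SELECT * FROM events
     WHERE what = 'some_specific_event' AND user_id = 42;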