Re: 600 million rows of data. Bad hardware or need partitioning?

From: Arya F
Subject: Re: 600 million rows of data. Bad hardware or need partitioning?
Date:
Msg-id: CAFoK1ayPoLXGDHsco4=fr0podW48eg43LdWppOvb-bE==T4DRw@mail.gmail.com
In response to: Re: 600 million rows of data. Bad hardware or need partitioning?  (Justin Pryzby <pryzby@telsasoft.com>)
List: pgsql-performance
On Tue, May 5, 2020 at 9:37 PM Justin Pryzby <pryzby@telsasoft.com> wrote:
>
> On Tue, May 05, 2020 at 08:31:29PM -0400, Arya F wrote:
> > On Mon, May 4, 2020 at 5:21 AM Justin Pryzby <pryzby@telsasoft.com> wrote:
> >
> > > I mentioned in February and March that you should plan to set shared_buffers
> > > to fit the indexes currently being updated.
> >
> > The following command gives me
> >
> > select pg_size_pretty (pg_indexes_size('test_table'));
> >  pg_size_pretty
> > ----------------
> >  5216 MB
> >
> > So right now the indexes on that table take about 5.2 GB. If a
> > machine has 512 GB of RAM and SSDs, is it safe to assume I can get
> > the same update that currently takes 1.5 minutes down to under 5
> > seconds with 600 million rows of data and without partitioning?
>
> I am not prepared to guarantee server performance.
>
> But, to my knowledge, you haven't configured shared_buffers at all, which I
> think might be the single most important thing to configure for loading speed
> (with indexes).
>
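For anyone following along, here is roughly what that suggestion boils
down to as I understand it; the '8GB' value below is only an illustration
for a machine with RAM to spare, not a tested recommendation:

  -- How much memory would the indexes on the table need to fit in cache?
  SELECT pg_size_pretty(pg_indexes_size('test_table'));

  -- Current shared_buffers setting
  SHOW shared_buffers;

  -- Raise shared_buffers so the indexes being updated fit in it
  -- (written to postgresql.auto.conf; takes effect only after a server restart)
  ALTER SYSTEM SET shared_buffers = '8GB';

The same setting can of course be made directly in postgresql.conf instead
of via ALTER SYSTEM.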

Just wanted to give an update: I tried this on a VPS with 8 GB of RAM and
SSDs, and the same query now takes 1.2 seconds! What a huge difference,
and that's without making any changes to the postgresql.conf file. Very
impressive.
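
And in case partitioning does become necessary down the road, here is a
minimal sketch of declarative range partitioning; the column names and
date ranges are made up purely for illustration:

  -- Hypothetical parent table partitioned by a timestamp column
  CREATE TABLE test_table (
      id         bigint NOT NULL,
      created_at timestamptz NOT NULL,
      payload    text
  ) PARTITION BY RANGE (created_at);

  -- One partition per quarter; rows are routed automatically on INSERT
  CREATE TABLE test_table_2020_q1 PARTITION OF test_table
      FOR VALUES FROM ('2020-01-01') TO ('2020-04-01');
  CREATE TABLE test_table_2020_q2 PARTITION OF test_table
      FOR VALUES FROM ('2020-04-01') TO ('2020-07-01');

  -- An index created on the parent is created on each partition as well
  -- (PostgreSQL 11 or later)
  CREATE INDEX ON test_table (created_at);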


