Re: Postgres performance with large tables.
From | Wim |
---|---|
Subject | Re: Postgres performance with large tables. |
Date | |
Msg-id | 3E423F03.7080806@belbone.be |
In reply to | Postgres performance with large tables. (Wim <wdh@belbone.be>) |
List | pgsql-novice |
Andrew McMillan wrote:

> On Thu, 2003-02-06 at 20:39, Wim wrote:
>
>> Andrew McMillan wrote:
>>
>>> If you have processes that are updating/deleting within the table in
>>> parallel, then you probably want to vacuum the table (much) more often.
>>
>> I think I'll try that first.
>> I do:
>>
>> BEGIN
>>   SELECT routers FROM routers_table WHERE blabla;
>>   UPDATE routers_table SET timestamp=blabla;
>>   INSERT INTO routers_counters VALUES blablabla;
>> END
>> COMMIT
>>
>> How often should I vacuum the table? After every run of the script, or
>> 2 or 3 times a day?
>
> I would tend to "VACUUM routers_table" after each run of the script;
> given that it is a small table (presumably 30 rows) it should be next to
> no overhead and will ensure it remains within 1-2 physical pages.
>
> If you continue to experience problems, you are best advised to provide
> the full schema and EXPLAIN output along with your questions.
>
> Cheers,
>   Andrew.

Thanx, let's work on it!

Wim
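As a rough sketch of what one run of the script could look like with Andrew's suggestion applied: the WHERE conditions, values, and the routers_counters column layout below are placeholders, since the original post only shows "blabla" and no schema is given. In PostgreSQL, END is simply an alias for COMMIT, so the quoted END/COMMIT pair collapses to a single statement, and VACUUM cannot run inside a transaction block, so it has to be issued after the commit:

    BEGIN;
        -- placeholder condition; the real WHERE clause is not shown in the post
        SELECT routers FROM routers_table WHERE router_id = 42;
        UPDATE routers_table SET timestamp = now() WHERE router_id = 42;
        -- column layout of routers_counters is assumed here
        INSERT INTO routers_counters VALUES (42, now());
    COMMIT;

    -- VACUUM must run outside the transaction; on a ~30-row table it is
    -- nearly free and keeps the table within 1-2 physical pages despite
    -- the dead tuples left behind by each UPDATE.
    VACUUM routers_table;

The lower-frequency alternative Wim asks about (2 or 3 times a day) would simply move the VACUUM statement out of the script and into a scheduled job instead.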