Re: Performance problems
From | Dennis Gearon
---|---
Subject | Re: Performance problems
Date |
Msg-id | 3EA9579B.3090009@cvc.net
In reply to | Re: Performance problems (Shridhar Daithankar <shridhar_daithankar@persistent.co.in>)
Responses | Re: Performance problems
List | pgsql-general
You know, it'd be nice if there was a system table or command that showed the number and/or percentage of tuples needing to be vacuumed. If tables are only affected by tuples formerly in them, then the table/function could show the value per table. If any discarded tuples affect all tables, then a global table/function would be warranted. A minimally compute-intensive cron job or ON DELETE trigger could then call VACUUM FULL at a certain percentage.

Also, what kind of memory management (in the postgres application) could be written that pushes deleted/unused tuples out of any caches or scopes, to at least lengthen the time between vacuums?

Shridhar Daithankar wrote:
> On Friday 25 April 2003 20:23, marco wrote:
>
>> I unfortunately don't understand the whole thing totally, but if I dump
>> the database (with pg_dump), delete it and restore it, the time values
>> for reading and writing have decreased to a normal level and begin to
>> increase again.
>>
>> It seems to me that I'm doing something fundamentally wrong :( But even
>> after searching Google and the PostgreSQL archives I don't see the light
>> at all...
>
> You need to VACUUM FULL every time you delete a large number of rows, and
> VACUUM ANALYZE every time you insert/update a large number of rows.
>
> I would say large == 50K rows is a good start. So after 20 runs of the
> tool, run vacuum once. Try it and let us know.
>
> Shridhar
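As a sketch of what such a per-table view could look like: later PostgreSQL releases (8.3 and up) expose per-table live/dead tuple counters in the `pg_stat_user_tables` statistics view, which is essentially the feature wished for above. The query below is illustrative and assumes the statistics collector is enabled:

```sql
-- Approximate per-table dead-tuple counts and percentage, from the
-- statistics collector (pg_stat_user_tables, PostgreSQL 8.3+).
SELECT relname,
       n_live_tup,
       n_dead_tup,
       round(100.0 * n_dead_tup
             / NULLIF(n_live_tup + n_dead_tup, 0), 1) AS dead_pct,
       last_vacuum,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;
```

The counters are estimates maintained by the statistics collector, not exact counts; the `pgstattuple` contrib module can compute exact dead-space figures at the cost of scanning the table.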
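Shridhar's advice (vacuum after roughly every 50K changed rows, i.e. about every 20 runs of the tool) can also be approximated with a scheduled job. A minimal sketch using a crontab entry and the standard `vacuumdb` client; the database name `mydb`, schedule, and log path are assumptions for illustration:

```shell
# Illustrative crontab entry: VACUUM ANALYZE the hypothetical database
# "mydb" nightly at 03:00; add --full after mass deletes to reclaim space.
0 3 * * * /usr/local/pgsql/bin/vacuumdb --analyze mydb >> /var/log/vacuum.log 2>&1
```

In releases with autovacuum, this kind of hand-rolled scheduling is largely unnecessary, since the server triggers vacuums itself once dead tuples pass a configurable threshold.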