Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
From: Jeff Janes
Subject: Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
Msg-id: CAMkU=1xdbaw7RSPS1pWhwj7WUiRoh+HNAhV3d2a5zuJjQo3ovQ@mail.gmail.com
In reply to: Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ] (Alvaro Herrera <alvherre@2ndquadrant.com>)
Responses: Re: TODO : Allow parallel cores to be used by vacuumdb [ WIP ]
List: pgsql-hackers
On Fri, Sep 26, 2014 at 11:47 AM, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:

> Gavin Flower wrote:
>
> > Curious: would it be both feasible and useful to have multiple
> > workers process a 'large' table, without complicating things too
> > much? They could each start at a different position in the file.
>
> Feasible: no. Useful: maybe, we don't really know. (You could just as
> well have a worker at double the speed, i.e. double vacuum_cost_limit.)
vacuum_cost_delay is already 0 by default, so unless you have changed that, vacuum_cost_limit has no effect under vacuumdb.
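
For reference, a quick way to check those two settings (the 0 and 200 shown are the stock PostgreSQL defaults):

    psql -c "SHOW vacuum_cost_delay"   # 0   -- cost-based delay disabled
    psql -c "SHOW vacuum_cost_limit"   # 200 -- ignored while the delay is 0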
It is pretty easy for vacuum to be CPU-limited, and even easier for analyze to be CPU-limited (it does a lot of sorting). I think analyzing is the main use case for this patch: shortening the pg_upgrade window. At least, that is how I anticipate using it.
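
As a sketch of that pg_upgrade use case, assuming the -j/--jobs option this patch proposes (the job count of 8 here is illustrative, not part of this message):

    # analyze every database over eight parallel connections after pg_upgrade
    vacuumdb --all --analyze-only --jobs=8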
Cheers,
Jeff