Re: Reduce the time required for a database recovery from archive.
From | Anastasia Lubennikova
---|---
Subject | Re: Reduce the time required for a database recovery from archive.
Date |
Msg-id | 134fd431-c736-7b62-0ee7-09972c799189@postgrespro.ru
In reply to | Re: Reduce the time required for a database recovery from archive. (Stephen Frost <sfrost@snowman.net>)
List | pgsql-hackers
On 09.11.2020 19:31, Stephen Frost wrote:
> Greetings,
>
> * Dmitry Shulga (d.shulga@postgrespro.ru) wrote:
>>> On 19 Oct 2020, at 23:25, Stephen Frost <sfrost@snowman.net> wrote:
>>>
>>> process finishes a WAL file but then just sit around doing nothing while
>>> waiting for the applying process to finish another segment.
>> I believe that for a typical set-up the parameter max_restore_command_workers would have a value of 2 or 3, in order to supply a delivered WAL file on time, just before its processing is started.
>>
>> This use case is for environments where the time required for delivering a WAL file from the archive is greater than the time required for applying the records contained in that WAL file.
>> If the time required for delivering a WAL file is less than the time required for handling the records contained in it, then max_restore_command_workers shouldn't be specified at all.
> That's certainly not correct at all - the two aren't really all that
> related, because any time spent waiting for a WAL file to be delivered
> is time that the applying process *could* be working to apply WAL
> instead of waiting. At a minimum, I'd expect us to want to have, by
> default, at least one worker process running out in front of the
> applying process to hopefully eliminate most, if not all, time where the
> applying process is waiting for a WAL to show up. In cases where the
> applying process is faster than a single fetching process, a user might
> want to have two or more restore workers, though ultimately I still
> contend that what they really want is as many workers as needed to make
> sure that the applying process doesn't ever need to wait - up to some
> limit based on the amount of space that's available.
>
> And back to the configuration side of this - have you considered the
> challenge that a user who is using very large WAL files might run
> into with the proposed approach that doesn't allow them to control the
> amount of space used?
> If I'm using 1G WAL files, then I need to have
> 16G available to have *any* pre-fetching done with this proposed
> approach, right? That doesn't seem great.
>
> Thanks,
>
> Stephen

Status update for a commitfest entry.

The commitfest is closed now. As this entry has been Waiting on Author for a while, I've marked it as returned with feedback. Dmitry, feel free to resubmit an updated version to a future commitfest.

--
Anastasia Lubennikova
Postgres Professional: http://www.postgrespro.com
The Russian Postgres Company
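[Editor's note: the thread above argues that prefetch workers should run ahead of the single applying process, bounded by available space rather than an implicit 16-segment window. The sketch below is not the patch's implementation; it is a minimal illustration of that bounded-prefetch idea, with hypothetical `fetch` and `apply` callbacks standing in for restore_command and WAL replay, and a `max_prefetched` slot budget standing in for a space limit.]

```python
import queue
import threading

def run_recovery(segments, fetch, apply, n_workers=2, max_prefetched=4):
    """Fetch WAL segments with n_workers running ahead of a single
    applier, holding at most max_prefetched undelivered segments
    (the "space budget" discussed in the thread)."""
    to_fetch = queue.Queue()
    for seg in segments:
        to_fetch.put(seg)

    ready = {}      # segment name -> fetched payload
    in_use = [0]    # budget slots reserved (in flight or ready)
    cond = threading.Condition()

    def fetcher():
        while True:
            with cond:
                # Reserve a budget slot *before* dequeuing, under the
                # lock, so slots are taken in segment order and the
                # applier can never be starved of the next segment.
                while in_use[0] >= max_prefetched:
                    cond.wait()
                try:
                    seg = to_fetch.get_nowait()
                except queue.Empty:
                    return
                in_use[0] += 1
            data = fetch(seg)        # e.g. run the restore command
            with cond:
                ready[seg] = data
                cond.notify_all()

    workers = [threading.Thread(target=fetcher) for _ in range(n_workers)]
    for t in workers:
        t.start()
    for seg in segments:             # apply strictly in WAL order
        with cond:
            while seg not in ready:
                cond.wait()
            data = ready.pop(seg)
            in_use[0] -= 1           # free the slot for the fetchers
            cond.notify_all()
        apply(seg, data)
    for t in workers:
        t.join()
```

With this shape, Stephen's point falls out directly: the fetchers keep working as long as budget remains, so the applier only waits when delivery is genuinely slower than replay, and the space used is capped by `max_prefetched` segments rather than a fixed count.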