Re: Improve WALRead() to suck data directly from WAL buffers when possible
From:        Bharath Rupireddy
Subject:     Re: Improve WALRead() to suck data directly from WAL buffers when possible
Date:
Msg-id:      CALj2ACXQZH90rLm2ncv-4c_a3Tkqkkczeq_tO25nUUO80eML_g@mail.gmail.com
In reply to: Re: Improve WALRead() to suck data directly from WAL buffers when possible (Nathan Bossart <nathandbossart@gmail.com>)
Responses:   Re: Improve WALRead() to suck data directly from WAL buffers when possible
             Re: Improve WALRead() to suck data directly from WAL buffers when possible
List:        pgsql-hackers
On Tue, Feb 28, 2023 at 6:14 AM Nathan Bossart <nathandbossart@gmail.com> wrote:
>
> On Wed, Feb 08, 2023 at 08:00:00PM +0530, Bharath Rupireddy wrote:
> > +    /*
> > +     * We read some of the requested bytes. Continue to read remaining
> > +     * bytes.
> > +     */
> > +    ptr += nread;
> > +    nbytes -= nread;
> > +    dst += nread;
> > +    *read_bytes += nread;
>
> Why do we only read a page at a time in XLogReadFromBuffersGuts()? What is
> preventing us from copying all the data we need in one go?

Note that most of the WALRead() callers request a single page of XLOG_BLCKSZ bytes, even when the server has fewer or more WAL pages available. It is the streaming-replication WAL sender that can request less than XLOG_BLCKSZ bytes and up to MAX_SEND_SIZE (16 * XLOG_BLCKSZ) bytes. And if we read, say, MAX_SEND_SIZE bytes at once while holding WALBufMappingLock, that might impact concurrent inserters (at least in theory) - one of the main intentions of this patch is to not impact inserters much. Therefore, I feel that reading one WAL buffer page at a time, which works for most of the cases, is better because it does not impact concurrent inserters much (a rough sketch of the loop is included below) - https://www.postgresql.org/message-id/CALj2ACWXHP6Ha1BfDB14txm%3DXP272wCbOV00mcPg9c6EXbnp5A%40mail.gmail.com.

--
Bharath Rupireddy
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com
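
A minimal sketch of the per-page loop discussed above, for illustration only: the function name, its signature, and the LookupWALBuffer() helper are assumptions standing in for the patch's actual buffer-lookup logic, not the real XLogReadFromBuffersGuts() code. What it shows is that WALBufMappingLock is taken and released once per XLOG_BLCKSZ page, so even a MAX_SEND_SIZE request never holds the lock across a 16-page memcpy:

    /*
     * Sketch: serve a read from WAL buffers one page per lock
     * acquisition.  Sets *read_bytes to the number of bytes served;
     * the caller reads any remainder from the WAL files on disk.
     */
    static void
    XLogReadFromBuffersSketch(XLogRecPtr startptr, char *buf, Size count,
                              Size *read_bytes)
    {
        XLogRecPtr  ptr = startptr;
        char       *dst = buf;
        Size        nbytes = count;

        *read_bytes = 0;
        while (nbytes > 0)
        {
            Size        page_off = ptr % XLOG_BLCKSZ;   /* offset within page */
            Size        nread = Min(nbytes, XLOG_BLCKSZ - page_off);
            char       *page;

            /*
             * Hold the lock only while locating and copying this one page,
             * so inserters that need WALBufMappingLock exclusively are not
             * stalled behind a multi-page copy.
             */
            LWLockAcquire(WALBufMappingLock, LW_SHARED);

            /* hypothetical helper: buffer page for ptr, or NULL if evicted */
            page = LookupWALBuffer(ptr);
            if (page == NULL)
            {
                LWLockRelease(WALBufMappingLock);
                break;          /* fall back to reading from disk */
            }
            memcpy(dst, page + page_off, nread);
            LWLockRelease(WALBufMappingLock);

            /* We read some of the requested bytes; continue with the rest. */
            ptr += nread;
            nbytes -= nread;
            dst += nread;
            *read_bytes += nread;
        }
    }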