Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and more?)
From: Melanie Plageman
Subject: Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and more?)
Date:
Msg-id: CAAKRu_b0vEFpTK1yA=Y7G=v3ZgpBHxbe6SjxWvbAW==_CSW1hg@mail.gmail.com
In reply to: Re: pg_stat_bgwriter.buffers_backend is pretty meaningless (and more?) (Andres Freund <andres@anarazel.de>)
List: pgsql-hackers
On Wed, Jul 20, 2022 at 12:50 PM Andres Freund <andres@anarazel.de> wrote:
On 2022-07-14 18:44:48 -0400, Melanie Plageman wrote:
> @@ -1427,8 +1445,10 @@ pgstat_read_statsfile(void)
> FILE *fpin;
> int32 format_id;
> bool found;
> + PgStat_BackendIOPathOps io_stats;
> const char *statfile = PGSTAT_STAT_PERMANENT_FILENAME;
> PgStat_ShmemControl *shmem = pgStatLocal.shmem;
> + PgStatShared_BackendIOPathOps *io_stats_shmem = &shmem->io_ops;
>
> /* shouldn't be called from postmaster */
> Assert(IsUnderPostmaster || !IsPostmasterEnvironment);
> @@ -1486,6 +1506,22 @@ pgstat_read_statsfile(void)
> if (!read_chunk_s(fpin, &shmem->checkpointer.stats))
> goto error;
>
> + /*
> + * Read IO Operations stats struct
> + */
> + if (!read_chunk_s(fpin, &io_stats))
> + goto error;
> +
> + io_stats_shmem->stat_reset_timestamp = io_stats.stat_reset_timestamp;
> +
> + for (int i = 0; i < BACKEND_NUM_TYPES; i++)
> + {
> + PgStat_IOPathOps *stats = &io_stats.stats[i];
> + PgStatShared_IOPathOps *stats_shmem = &io_stats_shmem->stats[i];
> +
> + memcpy(stats_shmem->data, stats->data, sizeof(stats->data));
> + }
Why can't the data be read directly into shared memory?
It is not the same lock. Each PgStatShared_IOPathOps has its own lock so
that the entries can be accessed individually (per BackendType within
PgStatShared_BackendIOPathOps). This is optimized for the more common
operation, flushing, at the expense of the less common snapshot and
reset operations.
> +void
> +pgstat_io_ops_snapshot_cb(void)
> +{
> + PgStatShared_BackendIOPathOps *all_backend_stats_shmem = &pgStatLocal.shmem->io_ops;
> + PgStat_BackendIOPathOps *all_backend_stats_snap = &pgStatLocal.snapshot.io_ops;
> +
> + for (int i = 0; i < BACKEND_NUM_TYPES; i++)
> + {
> + PgStatShared_IOPathOps *stats_shmem = &all_backend_stats_shmem->stats[i];
> + PgStat_IOPathOps *stats_snap = &all_backend_stats_snap->stats[i];
> +
> + LWLockAcquire(&stats_shmem->lock, LW_EXCLUSIVE);
Why acquire the same lock repeatedly for each type, rather than once for
the whole?
This is also a consequence of having an LWLock in each
PgStatShared_IOPathOps. Because I don't want a lock in the backend-local
stats, there are two data structures, PgStatShared_IOPathOps and
PgStat_IOPathOps. It seemed odd to write the lock out to the file, so
when persisting the stats I write out only the relevant data, and when
reading it back into shared memory I read it into the data member of
PgStatShared_IOPathOps.