Re: Perf Benchmarking and regression.
From: Ashutosh Sharma
Subject: Re: Perf Benchmarking and regression.
Date:
Msg-id: CAE9k0P=X2S4jScDaMYFAvqVeUHOrHJTB_Wn6mu-wDkPw_y-wGA@mail.gmail.com
In reply to: Re: Perf Benchmarking and regression. (Andres Freund <andres@anarazel.de>)
Responses: Re: Perf Benchmarking and regression.
List: pgsql-hackers
Hi Andres,
I am extremely sorry for the delayed response. As suggested by you, I have taken the performance readings at a client count of 128 after making the following two changes:
1). Removed AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL); from pq_init(). The git diff is shown below.
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index 8d6eb0b..399d54b 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -206,7 +206,9 @@ pq_init(void)
 	AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE, MyProcPort->sock,
 					  NULL, NULL);
 	AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
+#if 0
 	AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
+#endif
2). Disabled the GUC variables "bgwriter_flush_after", "checkpointer_flush_after" and "backend_flush_after" by setting them to zero (see the sketch right after this list).
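For reference, a minimal sketch of one way to zero those three GUCs for the run; this mail does not show the exact method used, so appending the extra -c options to the same postgres invocation as in the test below is an assumption (equivalent settings in postgresql.conf would work just as well):
# Assumed: disable the flush-after behaviour by passing the GUCs on the
# server command line used for the test.
./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c max_wal_size=20GB \
           -c checkpoint_timeout=900 -c maintenance_work_mem=1GB \
           -c checkpoint_completion_target=0.9 \
           -c bgwriter_flush_after=0 -c checkpointer_flush_after=0 \
           -c backend_flush_after=0 &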
After making the above two changes, below are the readings I got for 128 clients:
CASE : Read-Write Tests when data exceeds shared buffers.
Non-default settings and test:
./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c checkpoint_completion_target=0.9 &
./pgbench -i -s 1000 postgres
./pgbench -c 128 -j 128 -T 1800 -M prepared postgres
Run1 : tps = 9690.678225
Run2 : tps = 9904.320645
Run3 : tps = 9943.547176
Please let me know if I need to take readings with other client counts as well.
Note: I have taken these readings on postgres master (HEAD) at:
commit 91fd1df4aad2141859310564b498a3e28055ee28
Author: Tom Lane <tgl@sss.pgh.pa.us>
Date: Sun May 8 16:53:55 2016 -0400
With Regards,
Ashutosh Sharma
On Wed, May 11, 2016 at 3:53 AM, Andres Freund <andres@anarazel.de> wrote:
Hi,
On 2016-05-06 21:21:11 +0530, Mithun Cy wrote:
> I will try to run the tests as you have suggested and will report the same.
Any news on that front?
Regards,
Andres
--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers