Re: Random number generation, take two
From: Michael Paquier
Subject: Re: Random number generation, take two
Date:
Msg-id: CAB7nPqS81UaHe-5rRc5OB3Pp3V_BkRmeYk4vwS2H7SDyhdH3JA@mail.gmail.com
In reply to: Re: Random number generation, take two (Michael Paquier <michael.paquier@gmail.com>)
List: pgsql-hackers
On Wed, Nov 30, 2016 at 10:22 PM, Michael Paquier
<michael.paquier@gmail.com> wrote:
> On Wed, Nov 30, 2016 at 8:51 PM, Heikki Linnakangas <hlinnaka@iki.fi> wrote:
>> On 11/30/2016 09:01 AM, Michael Paquier wrote:
>>> +bool
>>> +pg_backend_random(char *dst, int len)
>>> +{
>>> +    int      i;
>>> +    char    *end = dst + len;
>>> +
>>> +    /* should not be called in postmaster */
>>> +    Assert (IsUnderPostmaster || !IsPostmasterEnvironment);
>>> +
>>> +    LWLockAcquire(BackendRandomLock, LW_EXCLUSIVE);
>>> Shouldn't an exclusive lock be taken only when the initialization
>>> phase is called? When reading the value a shared lock would be fine.
>
> Do we need to worry about performance in the case of an application doing
> small transactions and creating a new connection for each transaction?
> This would become a contention point when calculating cancel keys for
> newly-forked backends. It could be rather easy to measure the
> concurrency impact with, for example, pgbench -C with many concurrent
> transactions running something as light as SELECT 1.

I got curious about this point, so I have done a couple of tests on my
laptop using the following pgbench command:
    pgbench -f test.sql -C -c 128 -j 4 -t 100
And test.sql contains just this:
    \set aid random(1,10)
In short, a backend is spawned and a cancel key is generated, but nothing
is done with it, to avoid any extra overhead.

With HEAD and with the patch (both with and without --disable-strong-random),
I am seeing pretty close numbers. My laptop has only 4 cores, so we may see
something on a machine with a higher core count, but as far as this test
goes the differences are within the noise.
--
Michael
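The locking split asked about above is the usual LWLock idiom of doing the
one-time initialization under an exclusive lock and reading under a shared
lock afterwards. The following is a minimal sketch of that idiom, not code
from the patch: the struct, field, and function names are hypothetical,
BackendRandomLock is the lock from the quoted snippet, and the shared-memory
allocation is elided. It is only an improvement if the per-call read does not
itself modify the shared state.

#include "postgres.h"

#include "storage/lwlock.h"

/*
 * Illustrative only: a shared-memory struct holding the generator state.
 * The name and fields are made up for this sketch; the real patch defines
 * its own structure.  It is assumed to have been allocated with
 * ShmemInitStruct() at startup (not shown).
 */
typedef struct BackendRandomShmemStruct
{
	bool		initialized;
	unsigned short seed[3];
} BackendRandomShmemStruct;

static BackendRandomShmemStruct *BackendRandomShmem;

static void
backend_random_read(char *dst, int len)
{
	LWLockAcquire(BackendRandomLock, LW_SHARED);

	if (!BackendRandomShmem->initialized)
	{
		/*
		 * LWLocks cannot be upgraded in place, so drop the shared lock and
		 * reacquire it exclusively for the one-time seeding.
		 */
		LWLockRelease(BackendRandomLock);
		LWLockAcquire(BackendRandomLock, LW_EXCLUSIVE);

		if (!BackendRandomShmem->initialized)	/* re-check after reacquiring */
		{
			/* seed from timestamp, PID, etc. (elided) */
			BackendRandomShmem->initialized = true;
		}

		LWLockRelease(BackendRandomLock);
		LWLockAcquire(BackendRandomLock, LW_SHARED);
	}

	/*
	 * Read-only access happens here under the shared lock.  This split only
	 * helps if reading does not modify the shared state; a generator that
	 * advances its state on every call still needs the exclusive lock each
	 * time.
	 */
	memset(dst, 0, len);		/* placeholder for deriving len output bytes */

	LWLockRelease(BackendRandomLock);
}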