Re: [ADMIN] v7.1b4 bad performance
From    | Tom Lane
Subject | Re: [ADMIN] v7.1b4 bad performance
Date    |
Msg-id  | 2081.982385009@sss.pgh.pa.us
Replies | Re: [ADMIN] v7.1b4 bad performance
List    | pgsql-hackers
"Schmidt, Peter" <peter.schmidt@prismedia.com> writes:
> So, is it OK to use commit_delay=0?

Certainly.  In fact, I think that's about to become the default ;-)

I have now experimented with several different platforms --- HPUX,
FreeBSD, and two considerably different strains of Linux --- and I find
that the minimum delay supported by select(2) is 10 or more milliseconds
on all of them, as much as 20 msec on some popular platforms.  Try it
yourself (my test program is attached).

Thus, our past arguments about whether a few microseconds of delay
before commit are a good idea seem moot; we do not have any portable way
of implementing that, and a ten-millisecond delay for commit is clearly
Not Good.

			regards, tom lane

/* To use: gcc test.c, then
 *     time ./a.out N
 * N=0 should return almost instantly, if your select(2) does not block
 * as per spec.
 * N=1 shows the minimum achievable delay, times 1000 --- for example,
 * if time reports the elapsed time as 10 seconds, then select has
 * rounded your 1-microsecond delay request up to 10 milliseconds.
 * Some Unixen seem to throw in an extra ten millisec of delay just for
 * good measure; e.g., on FreeBSD 4.2 N=1 takes 20 sec, N=20000 takes 30.
 */

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/types.h>

int
main(int argc, char **argv)
{
	struct timeval delay;
	int			i,
				del;

	del = atoi(argv[1]);
	for (i = 0; i < 1000; i++)
	{
		delay.tv_sec = 0;
		delay.tv_usec = del;
		(void) select(0, NULL, NULL, NULL, &delay);
	}
	return 0;
}