Re: Large number of open(2) calls with bulk INSERT into empty table
From | Andres Freund
Subject | Re: Large number of open(2) calls with bulk INSERT into empty table
Date |
Msg-id | 201112070212.01651.andres@anarazel.de
In response to | Re: Large number of open(2) calls with bulk INSERT into empty table (Robert Haas <robertmhaas@gmail.com>)
Responses | Re: Large number of open(2) calls with bulk INSERT into empty table
List | pgsql-hackers
On Tuesday, December 06, 2011 08:53:42 PM Robert Haas wrote:
> On Tue, Dec 6, 2011 at 7:12 AM, Florian Weimer <fweimer@bfk.de> wrote:
> > * Robert Haas:
> >> I tried whacking out the call to GetPageWithFreeSpace() in
> >> RelationGetBufferForTuple(), and also with the unpatched code, but the
> >> run-to-run randomness was way more than any difference the change
> >> made. Is there a better test case?
> >
> > I think that if you want to exercise file system lookup performance, you
> > need a larger directory, which presumably means a large number of
> > tables.
>
> OK. I created 100,000 dummy tables, 10,000 at a time to avoid blowing up
> the lock manager. I then repeated my previous tests, and I still
> can't see any meaningful difference (on my MacBook Pro, running MacOS
> X v10.6.8). So at least on this OS, it doesn't seem to matter much.
> I'm inclined to defer putting any more work into it until such time as
> someone can demonstrate that it actually causes a problem and provides
> a reproducible test case. I don't deny that there's probably an
> effect and it would be nice to improve this, but it doesn't seem worth
> spending a lot of time on until we can find a case where the effect is
> measurable.

I think, if at all, you're going to notice differences at high concurrency, because then you would start to hit the cost of synchronizing the dcache between CPU cores in the kernel.

Andres
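For readers wanting to reproduce the setup Robert describes, the batching (10,000 tables per transaction, so no single transaction accumulates 100,000 relation locks) could be scripted roughly as follows. This is a sketch only; the `dummy_<n>` table names and single-column definition are made up for illustration, and the generated SQL would be fed to psql:

```python
# Sketch: emit CREATE TABLE statements for 100,000 dummy tables,
# split into separate transactions of 10,000 each so that one
# transaction never holds locks on all of the tables at once.
# Pipe the printed output to psql to run it.

def make_batches(total=100_000, batch=10_000):
    """Yield one SQL script per batch of table creations."""
    for start in range(0, total, batch):
        stmts = ["BEGIN;"]
        for i in range(start, start + batch):
            # the name dummy_<n> is hypothetical, not from the thread
            stmts.append(f"CREATE TABLE dummy_{i} (id int);")
        stmts.append("COMMIT;")
        yield "\n".join(stmts)

if __name__ == "__main__":
    for script in make_batches():
        print(script)
```

Committing between batches releases the locks taken so far, which is what keeps the lock manager's shared memory from being exhausted.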