Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
From: Pavan Deolasee
Subject: Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
Date:
Msg-id: 2e78013d0803121002o2b7a8531y1553ce66cb6d1537@mail.gmail.com
In reply to: Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit (Tom Lane <tgl@sss.pgh.pa.us>)
Responses:
  Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
  Re: [PERFORM] Very slow (2 tuples/second) sequential scan after bulk insert; speed returns to ~500 tuples/second after commit
List: pgsql-patches
On Wed, Mar 12, 2008 at 9:27 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>
> I didn't like it; it seemed overly complicated (consider dealing with
> XID wraparound),

We are talking about subtransactions here. I don't think we support
subtransaction wrap-around, do we?

> and it would have problems with a slow transaction
> generating a sparse set of subtransaction XIDs.

I agree that's the worst case. But is it common? That's what I was
thinking when I proposed the alternate solution. I thought it can
happen only if most of the subtransactions abort, which again I
thought is not the normal case. But frankly I am not sure if my
assumption is correct.

> I think getting rid of
> the linear search will be enough to fix the performance problem.

I wonder if a skewed binary search would help more? For example, if
we know that the range of xids stored in the array is 1 to 1000 and
we are searching for a number closer to 1000, we can break the array
into <large, small> parts instead of equal parts and then search.
Well, maybe I am making simple things complicated ;-)

Thanks,
Pavan

--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com
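[To make the "skewed binary search" idea concrete: what Pavan describes is essentially interpolation search, where the probe point is weighted by where the target falls within the stored value range rather than always taken at the midpoint. Below is a minimal, hypothetical C sketch over a sorted XID array; it is not the actual PostgreSQL patch code, and the names (xid_interpolation_search, the TransactionId stand-in) are illustrative only.]

#include <stdint.h>
#include <stddef.h>

typedef uint32_t TransactionId;   /* simplified stand-in for PostgreSQL's XID type */

/*
 * Skewed ("interpolation") search over a sorted array of XIDs.
 * Instead of probing the middle element, probe at a position
 * proportional to where the target falls within [arr[lo], arr[hi-1]].
 * For densely packed, monotonically assigned XIDs this converges in
 * O(log log n) probes on average; for a sparse, skewed set of
 * subtransaction XIDs it can degrade toward O(n), which is exactly
 * the worst case discussed in the thread above.
 */
static int
xid_interpolation_search(const TransactionId *arr, size_t n, TransactionId target)
{
    size_t lo = 0;
    size_t hi = n;                /* exclusive upper bound */

    while (lo < hi)
    {
        TransactionId lov = arr[lo];
        TransactionId hiv = arr[hi - 1];
        size_t mid;

        if (target < lov || target > hiv)
            return -1;            /* outside the stored range: not present */

        if (hiv == lov)
            mid = lo;             /* all remaining values equal; avoid div-by-zero */
        else
            mid = lo + (size_t) (((uint64_t) (target - lov) * (hi - 1 - lo))
                                 / (hiv - lov));

        if (arr[mid] == target)
            return (int) mid;     /* found */
        else if (arr[mid] < target)
            lo = mid + 1;         /* target is in the upper part */
        else
            hi = mid;             /* target is in the lower part */
    }
    return -1;                    /* not found */
}

[The design point: when XIDs are assigned densely in order, the first probe lands almost exactly on the target, so a lookup near the top of the range (Pavan's "number closer to 1000" example) costs only a probe or two instead of log2(n) midpoint splits.]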