Re: BUG #6629: Creating a gist index fails with "too many LWLocks taken"
From: Heikki Linnakangas
Subject: Re: BUG #6629: Creating a gist index fails with "too many LWLocks taken"
Msg-id: 4FA7EC37.5070703@enterprisedb.com
In reply to: BUG #6629: Creating a gist index fails with "too many LWLocks taken" (tom@tomforb.es)
Responses: Re: BUG #6629: Creating a gist index fails with "too many LWLocks taken"; Re: BUG #6629: Creating a gist index fails with "too many LWLocks taken"
List: pgsql-bugs
On 05.05.2012 22:49, tom@tomforb.es wrote:
> The following bug has been logged on the website:
>
> Bug reference: 6629
> Logged by: Tom Forbes
> Email address: tom@tomforb.es
> PostgreSQL version: 9.1.3
> Operating system: Windows 7 64bit
> Description:
>
> On a test database with 10,000 rows, each containing an array of 50 unique
> random integers from 0 to 1000, creating a gist index on the column with
> gist__int_ops as an option fails with the following error:
> "too many LWLocks taken".

I modified the way GiST page splitting works in 9.1, and this seems to be caused by those changes. When a page is split and the downlink for the new page is inserted into the parent, we keep a lock on both the child and the parent. But inserting the downlink into the parent can cause the parent to split too, and so on, all the way up to the root. There's a hard-coded limit that a backend can hold at most 100 LWLocks simultaneously, and what happens is that when the tree is very tall, about 50 levels in this case, you run into that limit when you have to do a page split at every level.

We could rearrange the page-splitting algorithm to release locks earlier, before traversing to the next parent level. I didn't do that because I thought no one would create an index that tall, and the code is a bit easier to follow when locks are released in the same function where they're acquired, but it looks like I was wrong. I'm not sure how useful such an index is in practice, but apparently it's at least easy to create one.

I wrote a quick patch to do that, and with the patch the index build finished, but it took hours. The index was 10 GB in size, where the heap is just 12 MB, and searches using the index take ages.

Do you have a real-life scenario where you run into this limit? I'm a bit reluctant to change the code unless there's an actual use case for a GiST index more than 50 levels deep.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
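For anyone following along, the lock arithmetic above can be sketched as a toy model. The 100-lock limit matches the server's hard-coded MAX_SIMUL_LWLOCKS, but the assumption that a cascading split pins roughly two locks per level (child + parent) is a simplification for illustration, not the backend's exact accounting:

```python
# Toy model of lock accumulation during a cascading GiST page split.
# MAX_SIMUL_LWLOCKS mirrors the server's hard-coded per-backend limit;
# locks_per_level=2 (child + parent kept locked per level) is an
# illustrative assumption, not PostgreSQL's exact bookkeeping.
MAX_SIMUL_LWLOCKS = 100

def split_all_the_way_up(tree_height, locks_per_level=2):
    """Return the locks held if a split propagates through every level,
    raising the same error the backend reports when the limit is hit."""
    held = 0
    for _level in range(tree_height):
        held += locks_per_level  # locks at this level stay held until the top
        if held > MAX_SIMUL_LWLOCKS:
            raise RuntimeError("too many LWLocks taken")
    return held

# A ~50-level tree sits right at the limit; one more level overflows it.
print(split_all_the_way_up(50))   # 100
try:
    split_all_the_way_up(51)
except RuntimeError as e:
    print(e)                      # too many LWLocks taken
```

Under this model a split cascading through all 50 levels holds exactly 100 locks, which is why an index of roughly that height is where the error first appears.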