Re: Large index operation crashes postgres
| From | Frans Hals |
|---|---|
| Subject | Re: Large index operation crashes postgres |
| Date | |
| Msg-id | 39af1ed21003261643k676b96c3s11bfeb9ecbc875d4@mail.gmail.com |
| In reply to | Re: Large index operation crashes postgres (Frans Hals <fhals7@googlemail.com>) |
| List | pgsql-general |
The index mentioned below has been created in some minutes without problems. Dropped it and created it again. It uses around 36 % of memory while creating; after completion, postmaster stays at 26 %.

> I'm not sure what you're thinking about generating a self-contained
> test that exhibits similar bloat.
> I have started an index creation using my data without calling postgis
> functions. Just to make it busy:
> <CREATE INDEX idx_placex_sector ON placex USING btree
> (substring(geometry,1,100), rank_address, osm_type, osm_id);>
> This is now running against the 50.000.000 rows in placex. I will
> update you about the memory usage it takes.
>
>> Can you generate a self-contained test case that exhibits similar bloat?
>> I would think it's probably not very dependent on the specific data in
>> the column, so a simple script that constructs a lot of random data
>> similar to yours might be enough, if you would rather not show us your
>> real data.
>>
>> regards, tom lane
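For readers following the thread: the self-contained test case Tom Lane asks for could be sketched roughly as below. This is a hypothetical illustration, not the poster's real schema; the table name `placex_test`, the row count, and the random-bytes construction are all assumptions, chosen only to mimic the shape of the index definition quoted above.

```sql
-- Hypothetical sketch of a self-contained test case: fill a table with
-- random bytea values of roughly the same shape as the placex geometry
-- column, then build the same style of index while watching the
-- backend's memory use. Sizes are illustrative; scale toward 50M rows
-- to approximate the original workload.
CREATE TABLE placex_test (
    osm_id       bigint,
    osm_type     char(1),
    rank_address integer,
    geometry     bytea
);

INSERT INTO placex_test
SELECT g,
       'N',
       (random() * 30)::int,
       -- 32 pseudo-random bytes per row; real geometries are larger
       decode(md5(g::text) || md5((g + 1)::text), 'hex')
FROM generate_series(1, 1000000) AS g;

CREATE INDEX idx_placex_test_sector ON placex_test USING btree
    (substring(geometry, 1, 100), rank_address, osm_type, osm_id);
```

If the memory bloat reproduces with random data like this, the problem is unlikely to depend on the specific PostGIS values in the real column.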