Re: unlogged tables
From | Jim Nasby |
---|---|
Subject | Re: unlogged tables |
Date | |
Msg-id | 552C4561.4060009@BlueTreble.com |
In reply to | Re: unlogged tables (Alvaro Herrera <alvherre@2ndquadrant.com>) |
List | pgsql-performance |
On 4/13/15 4:13 PM, Alvaro Herrera wrote:
> Jim Nasby wrote:
>
>> Yeah, this is not something that would be very easy to accomplish, because a
>> buffer can get evicted and written to disk at any point. It wouldn't be too
>> hard to read every unlogged table during recovery and see if there are any
>> pages that were written after the last checkpoint, but that obviously won't
>> be very fast.
>
> If you consider only tables, then yeah perhaps this is easy to
> accomplish (not really convinced myself). But if you consider indexes,
> things are not so easy anymore.

Are indexes not guaranteed to have LSNs? I thought they basically followed
the same write rules as heap pages in regard to WAL first. Though, if you
have an index that doesn't support logging (like hash) you're still hosed...

> In the thread from 2011 (which this started as a reply to) the OP was

I don't keep PGSQL emails from that far back... ;)

> doing frequent UPDATEs to keep track of counts of something. I think
> that would be better served by using INSERTs of deltas and periodic
> accumulation of grouped values, as suggested in
> http://www.postgresql.org/message-id/20150305211601.GW3291@alvh.no-ip.org

This has actually been suggested many times over the years. What I was
suggesting certainly wouldn't help you if you were getting any serious
amount of changes to the count.

I am wondering though what the bottleneck in HEAD is with doing an UPDATE
instead of an INSERT, at least where unlogged would help significantly. I
didn't think we logged all that much more for an UPDATE. Heck, with HOT you
might even be able to log less.

--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com
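For reference on the LSN question: page LSNs can be inspected directly with the pageinspect contrib extension. The sketch below uses made-up relation names (a table `my_counts` and its primary-key index) and simply reads the LSN stamped in a heap page header and an index page header; it is an illustration, not part of the original exchange.

```sql
-- pageinspect ships with PostgreSQL as a contrib extension (superuser only).
CREATE EXTENSION IF NOT EXISTS pageinspect;

-- Hypothetical relation names, for illustration only.
-- page_header() exposes the LSN recorded in the page header,
-- for heap and index pages alike.
SELECT lsn FROM page_header(get_raw_page('my_counts', 0));
SELECT lsn FROM page_header(get_raw_page('my_counts_pkey', 0));
```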
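And a minimal sketch of the "INSERTs of deltas plus periodic accumulation" pattern referenced above, with invented table and column names; the rollup step assumes PostgreSQL 9.5's INSERT ... ON CONFLICT, so older releases would need a separate UPDATE/INSERT.

```sql
-- Hot path: append-only deltas, one cheap INSERT per event
-- instead of repeatedly UPDATEing the same counter row.
CREATE TABLE counter_deltas (
    counter_id int         NOT NULL,
    delta      int         NOT NULL,
    logged_at  timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE counter_totals (
    counter_id int PRIMARY KEY,
    total      bigint NOT NULL DEFAULT 0
);

INSERT INTO counter_deltas (counter_id, delta) VALUES (42, 1);

-- Periodic accumulation (e.g. from cron): fold the deltas into the
-- totals and remove the folded rows in a single transaction.
BEGIN;
WITH folded AS (
    DELETE FROM counter_deltas
    RETURNING counter_id, delta
)
INSERT INTO counter_totals AS t (counter_id, total)
SELECT counter_id, sum(delta)
FROM folded
GROUP BY counter_id
ON CONFLICT (counter_id)
    DO UPDATE SET total = t.total + EXCLUDED.total;
COMMIT;
```

With this shape, only the periodic job ever touches a totals row, so the write-heavy path generates short append-only WAL records rather than repeated updates to one hot tuple.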