Re: Invalid indexes should not consume update overhead
From: Rader, David
Subject: Re: Invalid indexes should not consume update overhead
Date:
Msg-id: CAABt7R6BnEnmnuehXtXtzKi0Up3NxwvAqX0ugjCaLm52VyhH9g@mail.gmail.com
In reply to: Re: Invalid indexes should not consume update overhead (Peter Geoghegan <pg@heroku.com>)
Responses: Re: Invalid indexes should not consume update overhead
           Re: Invalid indexes should not consume update overhead
List: pgsql-bugs
On Sunday, July 17, 2016, Peter Geoghegan <pg@heroku.com> wrote:
> On Sun, Jul 17, 2016 at 1:42 PM, Rader, David <davidr@openscg.com> wrote:
> > For example, in SQL Server you can "alter index disable" if you are about
> > to do a lot of bulk operations. But there is no "re-enable"; instead you
> > have to "alter index rebuild" because, as has been said on this thread,
> > you don't know what has changed since the disable.
> >
> > Basically this is very similar to dropping and recreating indexes around
> > bulk loads/updates.
>
> That seems pretty pointless. Why not actually drop the index, then?
>
> The only reason I can think of is that there is value in representing
> that indexes should continue to have optimizer statistics (that would
> happen for expression indexes in Postgres) without actually paying for
> the ongoing maintenance of the index during write statements. Even
> that seems like kind of a stretch, though.
>
> --
> Peter Geoghegan

There's some DBA benefit in that disabling the index also disables constraints and foreign keys that depend on it. Instead of having to drop and recreate dependent objects, you can leave all the definitions in place but disabled. So it makes laziness easier. Of course, you then have to be sure your data is right when you bulk load, since the engine is not enforcing it.

--
David Rader
davidr@openscg.com
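[Editor's note: for concreteness, a minimal sketch of the pattern discussed above. The table, index, and file names (orders, idx_orders_customer, orders.dat) are hypothetical, not from the thread.]

    -- SQL Server: disable before a bulk operation; the only "re-enable"
    -- is a rebuild. Per the thread, disabling also disables constraints
    -- and foreign keys that depend on the index, so the load runs
    -- unchecked and the data must be verified separately.
    ALTER INDEX idx_orders_customer ON orders DISABLE;
    BULK INSERT orders FROM 'C:\load\orders.dat';
    ALTER INDEX idx_orders_customer ON orders REBUILD;

    -- Postgres: the closest equivalent is dropping and recreating
    -- the index around the load.
    DROP INDEX idx_orders_customer;
    COPY orders FROM '/load/orders.dat';
    CREATE INDEX idx_orders_customer ON orders (customer_id);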