Re: BUG #10329: Could not read block 0 in file "base/56100265/57047884": read only 0 of 8192 bytes
From:        David G Johnston
Subject:     Re: BUG #10329: Could not read block 0 in file "base/56100265/57047884": read only 0 of 8192 bytes
Date:
Msg-id:      1400184326838-5804123.post@n5.nabble.com
In reply to: Re: BUG #10329: Could not read block 0 in file "base/56100265/57047884": read only 0 of 8192 bytes  (Tom Lane <tgl@sss.pgh.pa.us>)
List:        pgsql-bugs
Tom Lane-2 wrote:
> Bruce Momjian <bruce@> writes:
>> On Thu, May 15, 2014 at 05:20:35PM +0200, Olivier Macchioni wrote:
>>> I guess my best bet is to replace it by another kind of indexes... and
>>> maybe one day PostgreSQL will be clever enough to issue a warning /
>>> error in such a case for the people like me who don't read *all the doc*
>>> :P
>
>> Yes, streaming replication has made our hash indexes even worse.  In the
>> past, I have suggested we issue a warning for the creation of hash
>> indexes, but did not get enough agreement.
>
> Mainly because it wouldn't be a very helpful message.
>
> I wonder though if we could throw a flat-out error for attempts to use
> a hash index on a hot standby server.  That would get people's attention
> without being mere nagging in other situations.  It's not a 100% solution
> because you'd still lose if you tried to use a hash index on a slave
> since promoted to master.  But it would help without being a large
> sink for effort.

At least a promoted slave can "REINDEX" and get back to functioning with
minimal fuss.  Side question: if one were to do this intentionally, is there
a recommended way to have the REINDEX run immediately upon the former slave
being promoted?  (A sketch of locating the hash indexes to rebuild follows
at the end of this message.)

I have to presume there is some reason why we do not currently resolve
"base/56100265/57047884" into something more useful.  It is obviously
possible, since oid2name exists.  I suspect some of it is simply "hasn't been
worth the effort" and some of it is "expensive to compute, and if the error
is happening repeatedly it could bog down the system".  But knowing what type
of relation is affected, and conditionally reporting additional diagnostic
detail based on that type, has value: as this case shows, when an error like
this arises the typical user goes into a state of panic, and very little
information is immediately at hand to temper that reaction when the situation
is not actually critical.  (A sketch of the lookup is also included at the
end of this message.)

OpenERP should be more helpful in their own right, since they know they are
using a feature with limitations; though, given the lack of complaints,
either we are not that popular with them or people are not running queries
against this particular table on hot-standby slaves.

All that said, I don't see how it would really hurt to issue a notice upon
creation of a hash index.  Providing multiple opportunities for someone to
see the message, question its meaning, and learn why it is being issued would
decrease the chances of people being surprised; and I cannot imagine the
check for the index type, and the resulting log message, would be expensive
relative to how long a CREATE INDEX typically runs.

David J.
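For the side question above, a minimal sketch of how a freshly promoted
server could find the hash indexes it needs to rebuild.  The catalog joins
are standard; feeding the generated statements back through psql from
whatever post-promotion step you already run is my assumption, not an
established recipe:

    -- Emit one REINDEX statement per hash index in the current database
    -- (repeat per database after promotion).
    SELECT 'REINDEX INDEX ' || quote_ident(n.nspname) || '.'
                            || quote_ident(c.relname) || ';'
    FROM pg_class c
    JOIN pg_am a        ON a.oid = c.relam
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relkind = 'i'       -- indexes only
      AND a.amname  = 'hash';   -- limit to the hash access method

The output can be saved and piped back into psql once the promoted server is
accepting read-write connections.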
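On the resolution point, the lookup the server would have to perform can be
illustrated with ordinary catalog queries.  The OIDs below are the ones from
this report, and the sketch assumes an ordinary (unmapped) user relation,
since mapped system catalogs store relfilenode = 0:

    -- 56100265 is the database whose directory appears under base/
    SELECT datname FROM pg_database WHERE oid = 56100265;

    -- Connected to that database, 57047884 is the on-disk relfilenode.
    SELECT n.nspname, c.relname, c.relkind
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relfilenode = 57047884;

This is essentially what oid2name automates from the command line; whether
such a lookup is cheap enough to do from inside the error path itself is the
open question raised above.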