Re: Why we panic in pglz_decompress
From        | Zdenek Kotala
Subject     | Re: Why we panic in pglz_decompress
Date        |
Msg-id      | 47C831E6.5080209@sun.com
In reply to | Re: Why we panic in pglz_decompress (Tom Lane <tgl@sss.pgh.pa.us>)
List        | pgsql-hackers
Tom Lane wrote:
> Alvaro Herrera <alvherre@commandprompt.com> writes:
>> Zdenek Kotala wrote:
>>> I'm now looking into the toast code and I found the following in
>>> pglz_decompress:
>>>
>>>     if (destsize != source->rawsize)
>>>         elog(destsize > source->rawsize ? FATAL : ERROR,
>>>              "compressed data is corrupt");
>>>
>>> I'm surprised that we panic there.
>
>> Agreed, FATAL is too strong.
>
> Did either of you read the comment just before this code?  The reason
> it's panicking is that it has possibly already tromped on some critical
> data structure inside the backend.

Yes, I did, but if you know how much memory you have for the uncompressed
data, you can check the boundaries during decompression. That is better than
overwriting data in memory. Yes, it slows the routine down a little, but you
would still be able to work with the table.

>>> My idea is to improve this piece of code and move the error logging to
>>> the callers (heap_tuple_untoast_attr() and
>>> heap_tuple_untoast_attr_slice()), where we have a little more detail
>>> (especially for external storage).
>
>> Why move it?  Just adding errcontext in the callers should be enough.
>
> AFAIR this error has never once been reported from the field, so I don't
> see the point of investing a lot of effort in it.

Please increment that counter by one :-). I'm now analyzing a core file, and
it ultimately fails inside the elog function (called from pglz_decompress),
because memory was already overwritten -> no error message in the log
file. :(

	Zdenek
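[For illustration, here is a minimal sketch of the kind of bounds checking
Zdenek is proposing: validate the destination pointer before every write, so
corrupt input yields an error return instead of scribbling past the output
buffer and then crashing in elog(). This is not the actual pglz code; the
control-byte/tag layout below is a hypothetical simplified format, and the
function and constant names are made up for the example.]

    #include <stdint.h>
    #include <stddef.h>

    #define DECOMPRESS_OK        0
    #define DECOMPRESS_CORRUPT  (-1)

    static int
    lz_decompress_checked(const uint8_t *src, size_t srclen,
                          uint8_t *dst, size_t rawsize)
    {
        const uint8_t *sp = src;
        const uint8_t *srcend = src + srclen;
        uint8_t       *dp = dst;
        uint8_t       *destend = dst + rawsize;  /* hard bound for writes */

        while (sp < srcend)
        {
            uint8_t ctrl = *sp++;

            if (ctrl == 0)
            {
                /* literal: one raw byte follows the control byte */
                if (sp >= srcend || dp >= destend)
                    return DECOMPRESS_CORRUPT;  /* would run out of bounds */
                *dp++ = *sp++;
            }
            else
            {
                /* back-reference: ctrl = length, next byte = offset */
                if (sp >= srcend)
                    return DECOMPRESS_CORRUPT;

                size_t len = ctrl;
                size_t off = *sp++;

                /* reject bad offsets and overlong copies up front,
                 * instead of detecting the damage after the fact */
                if (off == 0 || off > (size_t) (dp - dst) ||
                    len > (size_t) (destend - dp))
                    return DECOMPRESS_CORRUPT;

                /* byte-by-byte copy; source and destination may overlap */
                while (len-- > 0)
                {
                    *dp = *(dp - off);
                    dp++;
                }
            }
        }

        /* only now compare against the declared raw size, as pglz does */
        return (size_t) (dp - dst) == rawsize ? DECOMPRESS_OK
                                              : DECOMPRESS_CORRUPT;
    }

The per-write checks cost a couple of comparisons per output byte, which is
the slowdown Zdenek concedes, but the failure mode becomes a clean ERROR for
the one affected tuple rather than a backend with trampled memory.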