RE: CRC was: Re: beta testing version
From | Mikheev, Vadim
Subject | RE: CRC was: Re: beta testing version
Date |
Msg-id | 8F4C99C66D04D4118F580090272A7A234D31DB@sectorbase1.sectorbase.com
In reply to | CRC was: Re: beta testing version ("Horst Herb" <hherb@malleenet.net.au>)
Responses | Re: CRC was: Re: beta testing version
List | pgsql-hackers
> > This may be implemented very fast (if someone points me where
> > I can find CRC func). And I could implement "physical log"
> > till next Monday.
>
> I have been experimenting with CRCs for the past 6 months in
> our database for internal logging purposes. Downloaded a lot of
> hash libraries, tried different algorithms, and implemented a few
> myself. Which algorithm do you want? Have a look at the openssl
> libraries (www.openssl.org) for a start - if you don't find what
> you want let me know.

Thanks.

> As the logging might include large data blocks, especially
> now that we can TOAST our data,

TOAST breaks data into a few 2K (or so) tuples to be inserted
separately. But the first btree split after a checkpoint will
require logging a 2x8K record -:(

> I would strongly suggest to use strong hashes like RIPEMD or
> MD5 instead of CRC-32 and the like. Sure, it takes more time
> to calculate and more space on the hard disk, but then: a database
> without data integrity (and means of _proving_ integrity) is
> pretty worthless.

Other opinions?

Also, we shouldn't forget licence issues.

Vadim
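[For reference, here is a minimal table-driven CRC-32 sketch in C, using the
reflected IEEE polynomial 0xEDB88320 (the same one zlib and Ethernet use),
which is the kind of checksum function being asked about for WAL records.
This is an illustrative sketch only, not PostgreSQL code; the names
crc32_init and crc32_buf are made up for the example.]

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    static uint32_t crc32_table[256];

    /* Build the 256-entry lookup table for the reflected polynomial. */
    static void
    crc32_init(void)
    {
        for (uint32_t i = 0; i < 256; i++)
        {
            uint32_t c = i;
            for (int k = 0; k < 8; k++)
                c = (c & 1) ? (c >> 1) ^ 0xEDB88320u : (c >> 1);
            crc32_table[i] = c;
        }
    }

    /* CRC-32 of a buffer: init 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
    static uint32_t
    crc32_buf(const void *buf, size_t len)
    {
        const unsigned char *p = buf;
        uint32_t crc = 0xFFFFFFFFu;

        while (len-- > 0)
            crc = crc32_table[(crc ^ *p++) & 0xFF] ^ (crc >> 8);

        return crc ^ 0xFFFFFFFFu;
    }

    int
    main(void)
    {
        crc32_init();
        /* "123456789" is the standard test vector; expect CBF43926. */
        printf("%08X\n", crc32_buf("123456789", 9));
        return 0;
    }

[The trade-off raised in the thread: a CRC-32 like the above costs 4 bytes per
record and a table lookup per byte, and is meant to detect corruption; a
cryptographic hash such as MD5 or RIPEMD (e.g. via the OpenSSL libraries
mentioned above) costs more CPU and 16-20 bytes per record.]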