Re: setting up raid10 with more than 4 drives
From | mark@mark.mielke.cc
Subject | Re: setting up raid10 with more than 4 drives
Date |
Msg-id | 20070530155733.GA18830@mark.mielke.cc
In reply to | Re: setting up raid10 with more than 4 drives ("Luke Lonergan" <llonergan@greenplum.com>)
Responses | Re: setting up raid10 with more than 4 drives
          | Re: setting up raid10 with more than 4 drives
List | pgsql-performance
On Wed, May 30, 2007 at 08:51:45AM -0700, Luke Lonergan wrote:
> This is standard stuff, very well proven: try googling 'self healing zfs'.
> The first hit on this search is a demo of ZFS detecting corruption of one of
> the mirror pair using checksums, very cool:
>
> http://www.opensolaris.org/os/community/zfs/demos/selfheal/;jsessionid=52508D464883F194061E341F58F4E7E1
>
> The bad drive is pointed out directly using the checksum and the data
> integrity is preserved.

One part is corruption. Another is ordering and consistency. ZFS represents
both RAID-style storage *and* a journal-style file system. I imagine
consistency and ordering are handled through journalling.

Cheers,
mark

--
mark@mielke.cc / markm@ncf.ca / markm@nortel.com   __________________________
.  .  _  ._  . .   .__    .  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/    |_     |\/| |  |_  |   |/  |_    |
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them
  all and in the darkness bind them...

                           http://mark.mielke.cc/
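The self-healing behaviour described above can be sketched in a few lines: on read, compare each mirror copy against a stored checksum, serve the copy that matches, and rewrite any copy that does not. This is a toy Python model, not ZFS code; the function name, the dict-of-blocks "drives", and the use of SHA-256 are all illustrative assumptions.

```python
import hashlib

def read_with_self_heal(mirrors, block_id, checksum):
    """Toy model of checksum-verified mirrored reads: return the first
    copy whose checksum matches, then repair any corrupt copies from it."""
    good = None
    for m in mirrors:
        if hashlib.sha256(m[block_id]).hexdigest() == checksum:
            good = m[block_id]
            break
    if good is None:
        raise IOError("all mirror copies failed checksum verification")
    # "Self-heal": overwrite any copy that does not match the checksum.
    for m in mirrors:
        if hashlib.sha256(m[block_id]).hexdigest() != checksum:
            m[block_id] = good
    return good

# Two mirror "drives", each a dict of block number -> bytes.
block = b"important data"
csum = hashlib.sha256(block).hexdigest()
drive_a = {0: block}
drive_b = {0: b"bit-rotted!!!!"}  # silent corruption on one mirror

data = read_with_self_heal([drive_a, drive_b], 0, csum)
assert data == block          # the good copy was served
assert drive_b[0] == block    # the bad copy was rewritten
```

The point of the demo Luke links to is exactly this: a plain mirror can tell that the two copies *differ*, but only a checksum tells it *which* copy is the bad one.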