Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery?
| From | Scott Marlowe |
|---|---|
| Subject | Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery? |
| Date | |
| Msg-id | dcc563d10910202325q363fdbc2u3249cd76ff162d63@mail.gmail.com |
| In reply to | Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery? (Greg Smith <gsmith@gregsmith.com>) |
| Responses | Re: OT - 2 of 4 drives in a Raid10 array failed - Any chance of recovery? |
| List | pgsql-general |
On Wed, Oct 21, 2009 at 12:10 AM, Greg Smith <gsmith@gregsmith.com> wrote:
> On Tue, 20 Oct 2009, Ow Mun Heng wrote:
>
>> Raid10 is supposed to be able to withstand up to 2 drive failures if the
>> failures are from different sides of the mirror. Right now, I'm not sure
>> which drive belongs to which. How do I determine that? Does it depend on
>> the output of /proc/mdstat and in that order?
>
> You build a 4-disk RAID10 array on Linux by first building two RAID1 pairs,
> then striping both of the resulting /dev/mdX devices together via RAID0.

Actually, later versions of Linux have a native RAID10 level built in. I
haven't used it, and I'm not sure how it would look in /proc/mdstat either.

> You'll actually have 3 /dev/mdX devices around as a result. I suspect
> you're trying to execute mdadm operations on the outer RAID0, when what
> you actually should be doing is fixing the bottom-level RAID1 volumes.
> Unfortunately, I'm not too optimistic about your case, because if you had
> a repairable situation you technically shouldn't have lost the array in
> the first place--it should still be running, just in degraded mode on both
> underlying RAID1 halves.

Exactly. Sounds like both drives in a pair failed.
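For the nested RAID1+0 layout Greg describes, the mirror pairing can be read straight out of /proc/mdstat: each bottom-level `raid1` line lists the two disks that mirror each other, and the array survives as long as at least one member of *each* pair is alive. A minimal sketch of how you might pull that out (the device names, md numbers, and block counts below are made-up illustrations, not output from the original poster's machine):

```shell
# Hypothetical /proc/mdstat for a 4-disk nested RAID1+0:
# md0 and md1 are the RAID1 pairs, md2 is the RAID0 stripe over them.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1] [raid0]
md2 : active raid0 md1[1] md0[0]
      976770560 blocks 64k chunks

md1 : active raid1 sdd1[1] sdc1[0]
      488385280 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]
      488385280 blocks [2/2] [UU]

unused devices: <none>
EOF

# Print each RAID1 device followed by its member disks, stripping the
# [n] role suffix from each member name.  On a live system you would
# read /proc/mdstat itself instead of the sample file.
awk '/ : active raid1 /{
    printf "%s:", $1
    for (i = 5; i <= NF; i++) { split($i, m, "["); printf " %s", m[1] }
    print ""
}' /tmp/mdstat.sample
```

Losing, say, sda1 and sdc1 (one disk from each pair) leaves both mirrors degraded but the array up; losing sda1 and sdb1 (both members of md0) kills the stripe, which matches the failure mode Scott suspects here.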