Re: Performance Tuning Opterons / Hard Disk Layout
From | John Arbash Meinel |
---|---|
Subject | Re: Performance Tuning Opterons / Hard Disk Layout |
Date | |
Msg-id | 421CE699.1060300@arbash-meinel.com |
In reply to | Re: Performance Tuning Opterons / Hard Disk Layout (John Allgood <john@turbocorp.com>) |
Responses | Re: Performance Tuning Opterons / Hard Disk Layout |
List | pgsql-performance |
John Allgood wrote:

> This is some good info. The attached storage is a Kingston 14-bay
> Fibre Channel Infostation with 14 36GB 15,000 RPM drives. The way it
> is being explained, I think I should build a mirror with two disks for
> the pg_xlog, stripe and mirror the rest, and put all my databases into
> one cluster. I should also mention that I am running clustering using
> the Red Hat Cluster Suite.

So are these 14 disks supposed to be shared across all of your 9 databases? It seems to me that you have a few architectural issues here.

First, you can't really have two masters writing to the same disk array; I'm not sure whether Red Hat clustering gets around this. Second, you can't run two Postgres engines against the same database. Postgres doesn't support a clustered setup; there are too many issues with concurrency and keeping everyone in sync.

Since you seem to be okay with having a bunch of smaller localized databases which update a master database once a day, I would expect the hardware to look something like this:

1 master server, at least dual Opteron, with access to lots of disks (likely the whole 14 if you can get away with it). Put 2 as a RAID1 for the OS, 4 as a RAID10 for pg_xlog, and the other 8 as a RAID10 for the rest of the database.

8-9 other servers. These don't need to be as powerful, since they are local domains. A 4-disk RAID10 for the OS and pg_xlog is probably plenty, plus whatever extra disks you can get for the local database.

The master database holds all information for all domains, but the other databases hold only their local information.

Every night your script sequences through the domain databases one by one, updating the master database and synchronizing whatever data is necessary back to the local domain. I would guess this script could actually run continually, visiting each local db in turn, but you may want nighttime-only updating depending on what kind of load they have.

John =:->
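To make the nightly consolidation loop concrete, here is a minimal sketch of what such a script might look like. Nothing below comes from the thread itself: the DSN strings, the `orders`, `master_orders`, and `last_sync` tables, the `modified_at` column, and the use of the psycopg2 driver are all placeholder assumptions that would need to be adapted to the real schema.

```python
# Hypothetical nightly consolidation sketch: pull recently changed rows from
# each local domain database into the master, one domain at a time.
# All table and column names here are placeholders, not a real schema.
from datetime import datetime

import psycopg2

DOMAIN_DSNS = {
    "domain1": "host=domain1 dbname=local user=sync",
    "domain2": "host=domain2 dbname=local user=sync",
    # ... one entry per local domain server
}
MASTER_DSN = "host=master dbname=master user=sync"


def sync_domain(name, dsn, master_conn):
    """Copy rows changed since the last sync from one domain db into the master."""
    # Find out how far this domain was synced last time.
    with master_conn.cursor() as mcur:
        mcur.execute("SELECT last_run FROM last_sync WHERE domain = %s", (name,))
        row = mcur.fetchone()
        since = row[0] if row else datetime(1970, 1, 1)

    # Fetch the rows that changed locally since then.
    local_conn = psycopg2.connect(dsn)
    try:
        with local_conn.cursor() as lcur:
            lcur.execute(
                "SELECT id, payload, modified_at FROM orders WHERE modified_at > %s",
                (since,),
            )
            changed = lcur.fetchall()
    finally:
        local_conn.close()

    # Apply them to the master copy: UPDATE first, INSERT if the row is new.
    with master_conn.cursor() as mcur:
        for id_, payload, modified_at in changed:
            mcur.execute(
                "UPDATE master_orders SET payload = %s, modified_at = %s "
                "WHERE domain = %s AND id = %s",
                (payload, modified_at, name, id_),
            )
            if mcur.rowcount == 0:
                mcur.execute(
                    "INSERT INTO master_orders (domain, id, payload, modified_at) "
                    "VALUES (%s, %s, %s, %s)",
                    (name, id_, payload, modified_at),
                )
        mcur.execute(
            "UPDATE last_sync SET last_run = now() WHERE domain = %s", (name,)
        )
    master_conn.commit()


def main():
    master_conn = psycopg2.connect(MASTER_DSN)
    try:
        # Sequence through the domains one by one, as suggested above.
        for name, dsn in DOMAIN_DSNS.items():
            sync_domain(name, dsn, master_conn)
    finally:
        master_conn.close()


if __name__ == "__main__":
    main()
```

Running it continually rather than nightly, as the message suggests, would just mean wrapping `main()` in a loop with a sleep between passes; the per-domain bookkeeping in `last_sync` keeps each pass incremental either way.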