Re: SMP scaling
| From | Mark Rae |
|---|---|
| Subject | Re: SMP scaling |
| Date | |
| Msg-id | 20050318230027.GA29131@purplebat.com |
| In reply to | Re: SMP scaling (Tom Lane <tgl@sss.pgh.pa.us>) |
| Responses | Re: SMP scaling |
| List | pgsql-general |
On Fri, Mar 18, 2005 at 01:31:51PM -0500, Tom Lane wrote:
> BTW, although I know next to nothing about NUMA, I do know that it is
> configurable to some extent (eg, via numactl).  What was the
> configuration here exactly, and did you try alternatives?  Also,
> what was the OS exactly?  (I've heard that RHEL4 is a whole lot better
> than RHEL3 in managing NUMA, for example.  This may be generic to 2.6 vs
> 2.4 Linux kernels, or maybe Red Hat did some extra hacking.)

The Altix uses a 2.4.21 kernel with SGI's own modifications to support
up to 256 CPUs and their NUMALink hardware. (Some of which has become
the NUMA code in the 2.6 kernel.)

Even with the NUMA support, which makes sure any memory allocated by
malloc or on the stack ends up local to the processor which originally
requested it, and then keeps scheduling the process on that CPU, there
is still the problem that all table accesses* go through the shared
buffer cache, which resides in one location.

[* is this true in all cases?]

I was about to write a long explanation of how the only way to scale
out to this size would be to have separate buffer caches in each memory
domain, which would then require some kind of cache coherency
mechanism. But after reading a few bits of documentation, it looks like
SGI already have a solution in the form of symmetric data objects. In
particular, the symmetric heap: an area of shared memory which is
replicated across all memory domains, with the coherency handled in
hardware.

So it looks like all that might be needed is to replace the shmget
calls in src/backend/port with the equivalent SGI functions.

-Mark
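
To make that last idea concrete, here is a rough, untested sketch of
what the swap might look like. The System V half mirrors the kind of
shmget()/shmat() sequence the src/backend/port code performs today; the
SGI half is an assumption based on the SHMEM library documentation (the
<mpp/shmem.h> header, the start_pes() and shmalloc() names, and the
USE_SGI_SYMMETRIC_HEAP guard are illustrative, not taken from the
PostgreSQL tree), and it deliberately ignores how the postmaster's
forked backends would attach to the allocation.

    /*
     * Sketch: current System V shared memory setup vs. allocating the
     * buffer pool from SGI's symmetric heap instead.  Compiles as plain
     * C; the SGI path is guarded by a hypothetical build option.
     */
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    /* Today: one System V segment, which lives in a single memory domain. */
    static void *
    create_sysv_segment(key_t key, size_t size)
    {
        int   shmid = shmget(key, size, IPC_CREAT | IPC_EXCL | 0600);
        void *addr;

        if (shmid < 0)
            return NULL;
        addr = shmat(shmid, NULL, 0);
        if (addr == (void *) -1)
            return NULL;
        /* Mark for removal once all processes detach (keeps the demo tidy). */
        shmctl(shmid, IPC_RMID, NULL);
        return addr;
    }

    #ifdef USE_SGI_SYMMETRIC_HEAP       /* hypothetical build option */
    #include <mpp/shmem.h>              /* SGI SHMEM library (assumed header) */

    /*
     * Possible replacement: carve the buffer pool out of the symmetric
     * heap, which the post above describes as replicated across memory
     * domains with the coherency handled in hardware.
     */
    static void *
    create_symmetric_segment(size_t size)
    {
        start_pes(0);                   /* initialise the SHMEM library */
        return shmalloc(size);          /* symmetric-heap allocation */
    }
    #endif

    int
    main(void)
    {
        void *buf = create_sysv_segment(IPC_PRIVATE, 1024 * 1024);

        printf("SysV segment mapped at %p\n", buf);
        return (buf == NULL);
    }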