Shared buffers vs large files
From | Francisco Reyes |
---|---|
Subject | Shared buffers vs large files |
Date | |
Msg-id | 20020301163403.K6753-100000@zoraida.natserv.net |
Replies | Re: Shared buffers vs large files |
List | pgsql-general |
Does shared_buffers help at all when using large tables which do not fit in the buffer? Does it help to break tables up so their most heavily used columns are by themselves?

While we were testing PostgreSQL we were using subsets of data. Now that we are in limited production the tables have grown significantly. My settings are:

shared_buffers = 4096 (32MB if my calculations are correct)
sort_mem = 65536 # min 32
vacuum_mem = 16384 # min 1024

The machine has 1GB of RAM. I don't expect to have more than a handful of connections at a time (from 1 to 10). Should I increase the shared buffers to 64MB? 128MB?

My Sys V shared memory parameters are:

kern.ipc.shmall=65535
kern.ipc.shmmax=100663296
kern.ipc.shm_use_phys=1

(From /etc/sysctl.conf on a FreeBSD machine.)

My most commonly used tables are from 30MB to 200MB. One of them was 300MB+, but we split it and kept the most commonly used columns in one table and the rest in another, hoping this would help. This, however, was a lot of work and we are wondering whether it would be beneficial to do it with any other tables. After we split this one table in two, the most heavily used part is about 30MB and the other part about 270MB.

NOTE: For all reported sizes I am using the file size of the ASCII file we import. I am just starting to look at how the sizes of the files reported by oid2name relate to this. My assumption is that the ASCII files may be representative of the sizes they use as tables.
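The sizing arithmetic above can be sketched quickly. This is a minimal Python check, assuming PostgreSQL's default 8 KB block size (BLCKSZ), which is what makes 4096 buffer pages come out to 32 MB; the function names here are illustrative, not part of any PostgreSQL tool.

```python
# Sanity-check the shared memory arithmetic from the post.
# shared_buffers is counted in pages; assuming the default 8 KB page size.
PAGE_SIZE = 8192  # bytes per buffer page (default BLCKSZ)

def buffers_to_mb(shared_buffers: int) -> float:
    """Convert a shared_buffers page count to megabytes."""
    return shared_buffers * PAGE_SIZE / (1024 * 1024)

def mb_to_buffers(mb: int) -> int:
    """Pages needed for a target shared-buffer size in MB."""
    return mb * 1024 * 1024 // PAGE_SIZE

# Current setting from the post: 4096 pages -> 32 MB.
print(buffers_to_mb(4096))      # 32.0

# Candidate settings mentioned in the post:
print(mb_to_buffers(64))        # 8192 pages for 64 MB
print(mb_to_buffers(128))       # 16384 pages for 128 MB

# kern.ipc.shmmax from /etc/sysctl.conf, in bytes:
shmmax = 100663296
print(shmmax / (1024 * 1024))   # 96.0 MB
# A 128 MB buffer pool (plus PostgreSQL's other shared structures)
# would not fit in a 96 MB shmmax, so that sysctl would need
# raising before trying shared_buffers = 16384.
```

Note that the current kern.ipc.shmmax of 96 MB comfortably covers a 64 MB buffer pool but not a 128 MB one.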