Re: Using quicksort for every external sort run
From: Peter Geoghegan
Subject: Re: Using quicksort for every external sort run
Date:
Msg-id: CAM3SWZTGpnHw7_3-FzhT9OR=_fiTJo5nu+ghR8Ten9NdAsoHHw@mail.gmail.com
In reply to: Re: Using quicksort for every external sort run (Greg Stark <stark@mit.edu>)
List: pgsql-hackers
On Mon, Nov 30, 2015 at 5:12 PM, Greg Stark <stark@mit.edu> wrote:
> I think the take-away is that this is outside the domain where any
> interesting break points occur.

I think that these are representative of what people want to do with
external sorts. We have already had Jeff look for a regression. He
found one only with less than 4MB of work_mem (the default), with over
100 million tuples. What exactly are we looking for?

> And can you calculate an estimate where the domain would be where
> multiple passes would be needed for this table at these work_mem
> sizes? Is it feasible to test around there?

Well, you said that 1GB of work_mem was enough to avoid that within
about 4TB - 8TB of data. So, I believe the answer is "no":

[pg@hydra ~]$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
rootfs                      20G   19G  519M  98% /
devtmpfs                    31G  128K   31G   1% /dev
tmpfs                       31G  384K   31G   1% /dev/shm
/dev/mapper/vg_hydra-root   20G   19G  519M  98% /
tmpfs                       31G  127M   31G   1% /run
tmpfs                       31G     0   31G   0% /sys/fs/cgroup
tmpfs                       31G     0   31G   0% /media
/dev/md0                   497M  145M  328M  31% /boot
/dev/mapper/vg_hydra-data 1023G  322G  651G  34% /data

-- 
Peter Geoghegan
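[Editor's note: a back-of-envelope sketch of why 1GB of work_mem pushes the multi-pass threshold into the terabyte range, consistent with the 4TB - 8TB figure quoted above. This is illustrative arithmetic, not code from the patch or PostgreSQL's exact tuplesort accounting; the 256KB per-tape merge buffer and one-run-per-work_mem-fill assumptions are mine.]

```python
# Rough single-pass merge capacity estimate for an external sort.
# Assumptions (not PostgreSQL's exact accounting):
#   - each quicksorted run is about work_mem in size
#   - each input run needs a fixed per-tape merge buffer (assumed 256KB)
def single_pass_capacity(work_mem_bytes, tape_buffer_bytes=256 * 1024):
    """Largest input (in bytes) mergeable in a single pass under these assumptions."""
    run_size = work_mem_bytes                          # one run per work_mem fill
    merge_order = work_mem_bytes // tape_buffer_bytes  # runs mergeable at once
    return run_size * merge_order

GB = 1024 ** 3
TB = 1024 ** 4
cap = single_pass_capacity(1 * GB)
print(f"~{cap / TB:.0f} TB")  # ~4 TB: 4096 runs of 1GB merged in one pass
```

With a 1GB work_mem and these assumed constants, the merge order is 4096, so roughly 4TB of input still needs only a single merge pass; halving the buffer size doubles that, which brackets the 4TB - 8TB range mentioned in the thread.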