Re: [HACKERS] sort on huge table
From | Tatsuo Ishii |
---|---|
Subject | Re: [HACKERS] sort on huge table |
Date | |
Msg-id | 199910140134.KAA10375@ext16.sra.co.jp |
In reply to | Re: [HACKERS] sort on huge table (Tom Lane <tgl@sss.pgh.pa.us>) |
Responses | Re: [HACKERS] sort on huge table |
List | pgsql-hackers |
> > The current sorting code will fail if the data volume exceeds whatever
> > the maximum file size is on your OS.  (Actually, if long is 32 bits,
> > it might fail at 2gig even if your OS can handle 4gig; not sure, but
> > it is doing signed-long arithmetic with byte offsets...)
>
> > I am just about to commit code that fixes this by allowing temp files
> > to have multiple segments like tables can.
>
> OK, committed.  I have tested this code using a small RELSEG_SIZE,
> and it seems to work, but I don't have the spare disk space to try
> a full-scale test with > 4Gb of data.  Anyone care to try it?

I will test it with my 2GB table. Creating 4GB would probably be
possible, but I don't have enough sort space for that :-)

I ran my previous test on 6.5.2, not on current. I hope current is
stable enough for my testing.

> I have not yet done anything about the excessive space consumption
> (4x data volume), so plan on using 16+Gb of diskspace to sort a 4+Gb
> table --- and that's not counting where you put the output ;-)

As for -S, I used the default, since setting -S seems to consume too
much memory. For example, when I set it to 128MB, the backend process
grew past 512MB and was killed because swap space ran out. Maybe the
4x rule also applies to -S?
---
Tatsuo Ishii