Re: pg_dump and large files - is this a problem?
| | |
|---|---|
| From | Bruce Momjian |
| Subject | Re: pg_dump and large files - is this a problem? |
| Date | |
| Msg-id | 200210232150.g9NLooU27760@candle.pha.pa.us |
| In reply to | Re: pg_dump and large files - is this a problem? (Peter Eisentraut <peter_e@gmx.net>) |
| Responses | Re: pg_dump and large files - is this a problem?; Re: pg_dump and large files - is this a problem?; Re: pg_dump and large files - is this a problem? |
| List | pgsql-hackers |
Peter Eisentraut wrote:
> Bruce Momjian writes:
>
> > I think you are right that we have to not use off_t and use long if we
> > can't find a proper 64-bit seek function, but what are the failure modes
> > of doing this?  Exactly what happens for larger files?
>
> First we need to decide what we want to happen and after that think about
> how to implement it.  Given sizeof(off_t) > sizeof(long) and no fseeko(),
> we have the following options:
>
> 1. Disable access to large files.
>
> 2. Seek in some other way.
>
> What's it gonna be?

OK, well BSD/OS now works, but I wonder if there are any other quad-off_t OSes out there without fseeko(). How would we disable access to large files? Do we fstat() the file and see if it is too large? I suppose we are looking for cases where the file system supports large files but fseeko() doesn't allow us to access them.

Should we leave this issue alone, wait until we find another OS with this problem, and then rejigger fseeko.c to handle that OS too?

Looking at the pg_dump code, it seems the fseeks are optional in there anyway: it already has code to read the file sequentially rather than use fseek, and the TOC case in pg_backup_custom.c says that is optional too.

--
  Bruce Momjian                        |  http://candle.pha.pa.us
  pgman@candle.pha.pa.us               |  (610) 359-1001
  +  If your life is a hard drive,     |  13 Roberts Road
  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073