Re: pg_dump and large files - is this a problem?
From: Tom Lane
Subject: Re: pg_dump and large files - is this a problem?
Date:
Msg-id: 17007.1035209771@sss.pgh.pa.us
In reply to: Re: pg_dump and large files - is this a problem? (Philip Warner <pjw@rhyme.com.au>)
Responses: Re: pg_dump and large files - is this a problem?; Re: pg_dump and large files - is this a problem?
List: pgsql-hackers
Philip Warner <pjw@rhyme.com.au> writes:
> then checking the first byte? This should give me the endianness, and makes
> a non-destructive write (not sure if it's important). Currently the
> commonly used code does not rely on off_t arithmetic, so if possible I'd
> like to avoid shift. Does that sound reasonable? Or overly cautious?

I think it's pointless. Let's assume off_t is not an arithmetic type but some weird struct dreamed up by a crazed kernel hacker. What are the odds that dumping the bytes in it, in either order, will produce something that's compatible with any other platform? There could be padding, or the fields might be in an order that doesn't match the byte order within the fields, or something else.

The shift method requires *no* directly endian-dependent code, and I think it will work on any platform where you have any hope of portability anyway.

			regards, tom lane
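[Editor's note: for illustration, a minimal sketch of the shift method being discussed. This is not pg_dump's actual code; the function names WriteOffset/ReadOffset are invented for the example, and it assumes off_t is an ordinary integer type, which is the premise of the approach.]

#include <stdio.h>
#include <sys/types.h>

/*
 * Write an off_t as sizeof(off_t) bytes, least-significant byte first,
 * using only shifts and masks.  The byte order on disk is therefore
 * fixed and independent of the host's endianness.
 */
static void
WriteOffset(FILE *fp, off_t value)
{
	int			i;

	for (i = 0; i < (int) sizeof(off_t); i++)
	{
		fputc((int) (value & 0xFF), fp);
		value >>= 8;
	}
}

/*
 * Reassemble an off_t from the same fixed byte order.
 */
static off_t
ReadOffset(FILE *fp)
{
	off_t		value = 0;
	int			i;

	for (i = 0; i < (int) sizeof(off_t); i++)
		value |= ((off_t) fgetc(fp)) << (i * 8);

	return value;
}

(A real archive format would also record sizeof(off_t) in the header so a reader with a narrower off_t can at least detect values it cannot represent.)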