Re: [HACKERS] Priorities for 6.6
From | Bruce Momjian
---|---
Subject | Re: [HACKERS] Priorities for 6.6
Date |
Msg-id | 199907080007.UAA16640@candle.pha.pa.us
In reply to | Re: [HACKERS] Priorities for 6.6 (Hannu Krosing <hannu@trust.ee>)
List | pgsql-hackers
> C. Look over the protocol and unify the _binary_ representations of
>    datatypes on the wire.  In fact each type already has two sets of
>    in/out conversion functions in its definition tuple, one for disk and
>    another for net; it's only that until now they are the same for
>    all types and thus probably used wrong in some parts of the code.

Added to TODO:

	* remove duplicate type in/out functions for disk and net

>
> D. After B. and C., add the possibility to insert binary data
>    in a "(small)binary" field without relying on LOs or expensive
>    (4x the size) quoting.  Allow any characters in said binary field.

I will add this to the TODO list if you can tell me how the user would
pass this into the backend via a query.

	* Add non-large-object binary field

> F. As a lousy alternative to 1., fix the LO storage.  Currently _all_ of
>    the LO files are kept in the same directory as the tables and
>    indexes.
>    This can bog down the whole database quite fast if one has lots of LOs
>    and a file system that does linear scans on open (like ext2).
>    A scheme where LOs are kept in subdirectories based on the hex
>    representation of their oids would avoid that (so LO with OID
>    0x12345678 would be stored in $PG_DATA/DBNAME/LO/12/34/56/78.lo, or
>    maybe reversed, $PG_DATA/DBNAME/LO/78/56/34/12.lo, to distribute them
>    more evenly in "buckets").

I have already added a TODO item to use hash directories for large
objects.  Probably single or double-level 256 directory buckets are
enough:

	04/4A/file
	09/B3/file

--
  Bruce Momjian                        |  http://www.op.net/~candle
  maillist@candle.pha.pa.us            |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
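For illustration only, here is a minimal C sketch of the reversed-OID bucket
scheme discussed above.  Nothing in it is PostgreSQL source: the helper name
lo_bucket_path, the "LO/xx/yy" directory layout, and the base path string are
all assumptions made for the example.  It maps a large-object OID to a
two-level, 256-way bucket path using the OID's two low-order bytes, in the
spirit of the "04/4A/file" layout mentioned in the reply.

	/*
	 * Hypothetical sketch: map a large-object OID to a two-level,
	 * 256-way hash directory path such as "LO/56/78/<oid>".  The low-order
	 * bytes are used as bucket names so that consecutively assigned OIDs
	 * spread across different buckets.
	 */
	#include <stdio.h>

	typedef unsigned int Oid;       /* PostgreSQL OIDs are 32-bit unsigned */

	/* Build "<dbpath>/LO/<hh>/<hh>/<oid>" from the two low-order OID bytes. */
	static void
	lo_bucket_path(char *buf, size_t buflen, const char *dbpath, Oid lobjId)
	{
	    unsigned int b1 = (lobjId >> 8) & 0xFF;     /* second-lowest byte */
	    unsigned int b2 = lobjId & 0xFF;            /* lowest byte        */

	    snprintf(buf, buflen, "%s/LO/%02X/%02X/%u", dbpath, b1, b2, lobjId);
	}

	int
	main(void)
	{
	    char path[256];

	    lo_bucket_path(path, sizeof(path), "$PG_DATA/DBNAME", 0x12345678);
	    printf("%s\n", path);   /* prints $PG_DATA/DBNAME/LO/56/78/305419896 */
	    return 0;
	}

With a single-level scheme, only one byte of the OID would be used, giving at
most 256 buckets; the two-level variant shown here gives 65536.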