Re: db design question
From        | Josh Berkus
------------|---
Subject     | Re: db design question
Date        |
Msg-id      | web-1788669@davinci.ethosmedia.com
In reply to | Re: db design question ("Jules Alberts" <jules.alberts@arbodienst-limburg.nl>)
Responses   | Re: db design question
List        | pgsql-novice
Jules,

> I'll drop it in this case, your approach "feels" a lot safer.
> However, I plan to store BLOBs in my db, and in this case I'm afraid
> I will _have_ to use OIDs. The idea is to be able to "attach" a blob
> to _any_ row in the db. An example of how I planned to use it (this
> works BTW)

Yeah, you need to use OIDs for lo_export. I'm not personally familiar
with the issues around referencing these OIDs in tables.

Instead of using the row OID for the standard tables, though, consider
doing this:

1) Create a table-independent sequence, "global_seq".
2) Make the primary key of each significant table DEFAULT
   NEXTVAL('global_seq') instead of SERIAL.
3) Reference the primary keys in your blobs table, instead of the OID
   (see the SQL sketch after this message).

The above will work as well as using the OID, without the potential
headaches and with a greater degree of control. I used this scheme, in
fact, to collectivize modification timestamps and journaled notes for
5 tables in one DB design.

Keep in mind one other thing, though: while collectivizing your BLOBs
in the fashion above simplifies your database schema (almost always a
good thing), it can come at a substantial performance penalty if your
database contains many large tables. For example, I did *not* merge my
modification timestamps into a single table for my latest database
effort, as it contains 6 significant tables totalling 2.5 million
rows, and a single 2.5-million-row mod_data table searches and sorts
much more slowly than 6 sets of columns averaging 400,000 rows each.

-Josh Berkus
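[Editor's note: a minimal SQL sketch of steps 1-3 above. The table and
column names (customer, invoice, attachment) are hypothetical, invented
for illustration; only global_seq and the overall scheme come from the
post itself.]

```sql
-- 1) A table-independent sequence shared by every significant table.
CREATE SEQUENCE global_seq;

-- 2) Primary keys default to the shared sequence instead of SERIAL,
--    so each key value is unique across all participating tables.
CREATE TABLE customer (
    customer_id integer PRIMARY KEY DEFAULT nextval('global_seq'),
    name        text NOT NULL
);

CREATE TABLE invoice (
    invoice_id  integer PRIMARY KEY DEFAULT nextval('global_seq'),
    amount      numeric(10,2) NOT NULL
);

-- 3) The blobs table references the shared primary key rather than a
--    row OID, so a large object can be attached to any row above.
CREATE TABLE attachment (
    owner_id    integer NOT NULL,  -- a customer_id, an invoice_id, etc.
    blob_oid    oid     NOT NULL   -- large object, e.g. from lo_import()
);

-- Attach a blob to row 42, whichever table that key came from
-- (the file path is a hypothetical example):
INSERT INTO attachment VALUES (42, lo_import('/tmp/contract.pdf'));
```

Because every participating table draws its keys from the same
sequence, owner_id alone identifies a row unambiguously across the
whole database, which is what makes the single attachment table work.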