Re: [HACKERS] Brain-Dead Sort Algorithm??
| From | Bruce Momjian |
|---|---|
| Subject | Re: [HACKERS] Brain-Dead Sort Algorithm?? |
| Date | |
| Msg-id | 199912031732.MAA04230@candle.pha.pa.us |
| In reply to | Re: [HACKERS] Brain-Dead Sort Algorithm?? (Thomas Lockhart <lockhart@alumni.caltech.edu>) |
| List | pgsql-hackers |
> Sigh. Y'all like the sweeping statement, which got you in a bit of
> trouble the first time too :)
>
> Without knowing your schema, I can't say why you have *exactly* the
> storage requirement you see. But, you have chosen the absolute worst
> case for *any* relational database: a schema with only a single, very
> small column.
>
> For Postgres (and other DBs, but the details will vary) there is a 36
> byte overhead per row to manage the tuple and the transaction
> behavior. So if you stored your data as int8 (int4 is too small for 10
> digits, right?) I see an average usage of slightly over 44 bytes per
> row (36+8). So, for 6.8 million rows, you will require 300MB. I'm
> guessing that you are using char(10) fields, which gives 50 bytes/row
> or a total of 340MB, which matches your number to two digits.
>
> Note that the tuple header size will stay the same (with possibly some
> modest occasional bumps) for rows with more columns, so the overhead
> decreases as you increase the number of columns in your tables.
>
> By the way, I was going to say to RTFM, but I see a big blank spot on
> this topic (I could have sworn that some of the info posted to the
> mailing lists on this topic had made it into the manual, but maybe
> not).

This is an FAQ item.

--
Bruce Momjian                        |  http://www.op.net/~candle
maillist@candle.pha.pa.us            |  (610) 853-3000
+  If your life is a hard drive,     |  830 Blythe Avenue
+  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026
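The size estimate quoted above can be sketched as a small calculation. This is a rough back-of-the-envelope helper, not Postgres's actual storage accounting: it uses the 36-byte per-row header figure from the post, assumes char(10) is stored as a 4-byte length header plus 10 data bytes, and ignores page headers, alignment padding, and indexes (the `estimate_mb` helper name is invented for illustration).

```python
# Per-row tuple header overhead cited in the post (bytes).
TUPLE_HEADER = 36

def estimate_mb(rows, data_bytes_per_row, header=TUPLE_HEADER):
    """Approximate table size in MB: rows * (header + data bytes)."""
    return rows * (header + data_bytes_per_row) / 1_000_000

rows = 6_800_000

# int8 column: 36 + 8 = 44 bytes/row -> ~299 MB (the "300MB" in the post).
print(estimate_mb(rows, 8))

# char(10): 4-byte length word + 10 bytes = 14 -> 50 bytes/row -> 340 MB,
# matching the observed storage to two digits.
print(estimate_mb(rows, 14))
```

The same arithmetic shows why wider rows amortize the overhead: the 36-byte header is paid once per row, so its share of total storage shrinks as columns are added.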