Re: Performance issues with large amounts of time-series data
From | Tom Lane
---|---
Subject | Re: Performance issues with large amounts of time-series data
Date |
Msg-id | 18555.1251312734@sss.pgh.pa.us
In reply to | Re: Performance issues with large amounts of time-series data (Hrishikesh (हृषीकेश मेहेंदळे) <hashinclude@gmail.com>)
Responses | Re: Performance issues with large amounts of time-series data
List | pgsql-performance
Hrishikesh (हृषीकेश मेहेंदळे) <hashinclude@gmail.com> writes:
> 2009/8/26 Tom Lane <tgl@sss.pgh.pa.us>
>> Do the data columns have to be bigint, or would int be enough to hold
>> the expected range?

> For the 300-sec tables I probably can drop it to an integer, but for
> 3600 and 86400 tables (1 hr, 1 day) will probably need to be BIGINTs.
> However, given that I'm on a 64-bit platform (sorry if I didn't
> mention it earlier), does it make that much of a difference?

Even more so.

> How does a float ("REAL") compare in terms of SUM()s ?

Casting to float or float8 is certainly a useful alternative if you
don't mind the potential for roundoff error. On any non-ancient
platform those will be considerably faster than numeric.

BTW, I think that 8.4 might be noticeably faster than 8.3 for summing
floats, because of the switch to pass-by-value for them.

			regards, tom lane
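[Editor's note: to make the casting suggestion concrete, here is a minimal SQL sketch. The table and column names (stats_300, bytes) are hypothetical, assumed for illustration; they do not come from the thread.]

    -- Hypothetical 300-sec rollup table; names are for illustration only.
    CREATE TABLE stats_300 (
        bucket_start timestamptz NOT NULL,
        bytes        bigint      NOT NULL  -- int would suffice for smaller ranges
    );

    -- sum() over bigint accumulates in numeric, which is comparatively slow:
    SELECT sum(bytes) FROM stats_300;

    -- Casting to float8 first uses float arithmetic in the aggregate instead,
    -- trading exactness for speed:
    SELECT sum(bytes::float8) FROM stats_300;

The float8 variant avoids numeric arithmetic in the aggregate's transition step, which is where the speedup comes from; integer totals stay exact only while they fit in float8's 53-bit mantissa (up to 2^53).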