Re: Querying 19million records very slowly
| From | Kjell Tore Fossbakk |
|---|---|
| Subject | Re: Querying 19million records very slowly |
| Date | |
| Msg-id | e79986c5050622011828dd4b41@mail.gmail.com |
| In reply to | Re: Querying 19million records very slowly (Tobias Brox <tobias@nordicbet.com>) |
| Responses | Re: Querying 19million records very slowly |
| List | pgsql-performance |
Appreciate your time, Mr. Brox. I'll test the use of current_timestamp rather than now(). I am not sure if Pg can match a fixed timestamp against a datetime: time > current_timestamp - interval '24 hours', when time is yyyy-mm-dd hh-mm-ss+02, like 2005-06-22 16:00:00+02. If Pg can't do it, and current_timestamp is faster, I could possibly convert the time field in my database to timestamp and insert all rows as timestamp rather than a datetime. But that is quite a script to run over 19 million rows, so I need to know whether it will give me any more speed.

Kjell Tore.

On 6/22/05, Tobias Brox <tobias@nordicbet.com> wrote:
> [Kjell Tore Fossbakk - Wed at 09:45:22AM +0200]
> > database=> explain analyze select count(*) from test where p1=53 and
> > time > now() - interval '24 hours' ;
>
> Sorry to say that I have not followed the entire thread, nor read the
> entire email I'm replying to, but I have a quick hint on this one (ref my
> earlier thread about timestamp indices): the postgresql planner will
> generally behave smarter when using a fixed timestamp (typically generated
> by the app server) than logic based on now().
>
> One of my colleagues also claimed that he found the usage of
> localtimestamp faster than now().
>
> --
> Tobias Brox, +86-13521622905
> Nordicbet, IT dept
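A minimal sketch of the fixed-timestamp approach Tobias describes, assuming a hypothetical schema matching the queries quoted in the thread (the table name `test`, the columns `p1` and `time`, and the composite index are assumptions, not the poster's actual DDL):

```sql
-- Hypothetical schema matching the queries quoted in the thread:
CREATE TABLE test (
    p1   integer,
    time timestamptz
);
CREATE INDEX test_p1_time_idx ON test (p1, time);

-- The form from the thread, using now(); per Tobias's hint the planner
-- tends to do better with a constant than with this expression:
EXPLAIN ANALYZE
SELECT count(*)
FROM test
WHERE p1 = 53
  AND time > now() - interval '24 hours';

-- Fixed-timestamp variant: the application computes the cutoff and sends
-- it as a literal, which PostgreSQL casts to the column's timestamptz
-- type for the comparison:
EXPLAIN ANALYZE
SELECT count(*)
FROM test
WHERE p1 = 53
  AND time > '2005-06-21 16:00:00+02';
```

This also answers the question raised above: PostgreSQL casts a literal such as '2005-06-22 16:00:00+02' to timestamptz when comparing it with a timestamptz column, so the fixed-timestamp form works without converting the stored rows.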