Date operation efficiency
From: Oliver Smith
Subject: Date operation efficiency
Date:
Msg-id: 19990330170033.A28307@kfs.org
List: pgsql-sql
I have an activity log table which includes an event time stamp. My intention is to use this log to apply flexible quotas. The table is:

    email    text NOT NULL,
    resource text NOT NULL,
    action   char(8) NOT NULL,
    bytes    int4 DEFAULT 0,
    at       <some time value>

Right now 'at' is defined as an INT4, and I'm manipulating the database from a series of Perl scripts (<< scripts; not programs ;-P). So I can do fairly trivial operations by manually doing things like time() - 30 * 60. I then want to extract hourly and daily instance and byte usages:

    $oneHourAgo = time() - 3600;
    $oneDayAgo  = time() - 86400;

    SELECT COUNT(bytes), SUM(bytes) FROM ActivityLog
     WHERE email = ? AND action = ? AND at >= $oneHourAgo
    UNION
    SELECT COUNT(bytes), SUM(bytes) FROM ActivityLog
     WHERE email = ? AND action = ? AND at >= $oneDayAgo;

However, I'm wondering if it is better to do this with SQL dates, and whether anyone has recommendations on dealing with this sort of thing. I've been tinkering with datetime, timespan and reltime conversions, but I'm a bit wary of the processing overhead this might have. Assuming 'at' is a datetime:

    SELECT COUNT(bytes), SUM(bytes) FROM ActivityLog
     WHERE email = ? AND action = ? AND AGE(at) >= '60 mins'::timespan;

That seems wrong, since I'm performing a repeated date operation on every row. What I also don't know is whether the '60 mins'::timespan is repeatedly interpreted? I presume not? So how about:

    SELECT COUNT(bytes), SUM(bytes) FROM ActivityLog
     WHERE email = ? AND action = ?
       AND at >= datetime('now' + reltime('-60 mins'::timespan));

This seems pretty horrible, as I suspect the whole datetime(...) bit gets re-interpreted on every row? If not, then that's pretty much the solution I want, I suppose :)

My last question is a bit trickier. At the minute the quotas I enforce say that the maximum per-access byte limit is larger than the hourly byte limit.
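For illustration only, the INT4 rolling-window approach above (cutoff computed once in the client, aggregation done in SQL) can be sketched like this. This is a minimal stand-in using Python's sqlite3 module rather than Perl/DBI against PostgreSQL; the table, email address and byte values are made up:

    import sqlite3, time

    # In-memory stand-in for ActivityLog; 'at' holds epoch seconds,
    # as in the INT4 variant above.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE ActivityLog (
        email    TEXT NOT NULL,
        resource TEXT NOT NULL,
        action   TEXT NOT NULL,
        bytes    INTEGER DEFAULT 0,
        at       INTEGER NOT NULL)""")

    now = int(time.time())
    db.executemany("INSERT INTO ActivityLog VALUES (?, ?, ?, ?, ?)", [
        ("bob@kfs.org", "mail", "send", 1000, now - 30),     # within the hour
        ("bob@kfs.org", "mail", "send", 2000, now - 7200),   # within the day
        ("bob@kfs.org", "mail", "send", 4000, now - 90000),  # older than a day
    ])

    # Compute each cutoff once in the client, so the per-row test is a
    # plain integer comparison.
    one_hour_ago = now - 3600
    one_day_ago  = now - 86400
    q = ("SELECT COUNT(bytes), SUM(bytes) FROM ActivityLog"
         " WHERE email = ? AND action = ? AND at >= ?")
    hourly = db.execute(q, ("bob@kfs.org", "send", one_hour_ago)).fetchone()
    daily  = db.execute(q, ("bob@kfs.org", "send", one_day_ago)).fetchone()
    print(hourly)  # (1, 1000)
    print(daily)   # (2, 3000)

The point of the sketch is only that precomputing the cutoff keeps the WHERE clause to an indexable column comparison, whichever backend evaluates it.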
The idea being that it allows infrequent large-byte accesses, but at the cost of several hours of access (intended to stop people spamming the service by bouncing one huge request off it at regular intervals). But because I only look at the byte usage per hour and per day, there is a twilight zone where the user can bounce two large requests off the service 61 minutes apart.

Has anyone else done any kind of quota-implementation system with a database like this, and if so, do you recommend a different data model than the one I've used? With this kind of log-trawling, might it actually be better to have fixed values in the per-user record which are updated upon every successful transaction? A SELECT statement for determining whether a user is currently over quota (assuming I supply it with the quota value)?

Oliver
--
If at first you don't succeed, skydiving is not for you...
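One way the "fixed values in the per-user record" idea could look, sketched again with Python's sqlite3 as a stand-in for Perl/PostgreSQL. The UserQuota table, its columns, and the limits are all hypothetical; note this uses fixed windows rather than a rolling hour, which trades the 61-minute twilight zone for a window-boundary one:

    import sqlite3, time

    # Hypothetical per-user counter table: instead of trawling the log,
    # keep running hourly/daily byte totals plus each window's start time.
    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE UserQuota (
        email      TEXT PRIMARY KEY,
        hour_bytes INTEGER DEFAULT 0,
        hour_start INTEGER,
        day_bytes  INTEGER DEFAULT 0,
        day_start  INTEGER)""")

    def record_usage(email, bytes_used, now=None):
        """Roll expired windows forward, then add this transaction's bytes."""
        now = int(time.time()) if now is None else now
        row = db.execute("SELECT hour_bytes, hour_start, day_bytes, day_start"
                         " FROM UserQuota WHERE email = ?", (email,)).fetchone()
        if row is None:
            db.execute("INSERT INTO UserQuota VALUES (?, ?, ?, ?, ?)",
                       (email, bytes_used, now, bytes_used, now))
            return
        hour_bytes, hour_start, day_bytes, day_start = row
        if now - hour_start >= 3600:      # hourly window expired: reset it
            hour_bytes, hour_start = 0, now
        if now - day_start >= 86400:      # daily window expired: reset it
            day_bytes, day_start = 0, now
        db.execute("UPDATE UserQuota SET hour_bytes = ?, hour_start = ?,"
                   " day_bytes = ?, day_start = ? WHERE email = ?",
                   (hour_bytes + bytes_used, hour_start,
                    day_bytes + bytes_used, day_start, email))

    def over_quota(email, hour_limit, day_limit):
        row = db.execute("SELECT hour_bytes, day_bytes FROM UserQuota"
                         " WHERE email = ?", (email,)).fetchone()
        return row is not None and (row[0] > hour_limit or row[1] > day_limit)

    # Two large requests 61 minutes apart: the hourly counter resets,
    # but the daily counter still catches the second one.
    t = 1_000_000
    record_usage("bob@kfs.org", 5000, now=t)
    record_usage("bob@kfs.org", 5000, now=t + 3700)
    print(over_quota("bob@kfs.org", hour_limit=6000, day_limit=8000))  # True

Updating counters on every transaction makes the over-quota check O(1) per user, at the cost of the counters being only as trustworthy as the update path.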