Re: data dump help
From: Terry
Subject: Re: data dump help
Date:
Msg-id: 8ee061011001181549q3c990f40uf1066a768fe3e071@mail.gmail.com
In response to: Re: data dump help (Terry <td3201@gmail.com>)
Responses: Re: data dump help
List: pgsql-general
On Mon, Jan 18, 2010 at 5:07 PM, Terry <td3201@gmail.com> wrote:
> On Mon, Jan 18, 2010 at 4:48 PM, Andy Colson <andy@squeakycode.net> wrote:
>> On 1/18/2010 4:08 PM, Terry wrote:
>>>
>>> Hello,
>>>
>>> Sorry for the poor subject. Not sure how to describe what I need
>>> here. I have an application that logs to a single table in pgsql.
>>> In order for me to get it into our log management, I need to dump it out
>>> to a file on a periodic basis to pick up new logs. I am not sure how to
>>> tackle this. I thought about doing a date calculation and just
>>> grabbing the previous 6 hours of logs, writing that to a new log
>>> file, and setting up a rotation like that. Unfortunately, the log
>>> management solution can't go into pgsql directly. Thoughts?
>>>
>>> Thanks!
>>>
>>
>> How about a flag in the db, like: dumped.
>>
>> Inside one transaction you'd be safe doing:
>>
>> begin;
>> SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
>> select * from log where dumped = 0;
>> -- app code to format/write/etc
>> update log set dumped = 1 where dumped = 0;
>> commit;
>>
>> Even if other transactions insert new records, your existing transaction
>> won't see them, and the update won't touch them.
>>
>> -Andy
>>
>
> I like your thinking, but I shouldn't add a new column to this
> database. It's a 3rd-party application.
>

Although, I really like your idea, so I might create another table where I will log whether the data has been dumped or not. I just need to come up with a query to check this against the other table.
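[Editor's note: a minimal sketch of the separate tracking-table idea Terry describes, using SQLite as a stand-in for PostgreSQL since the real 3rd-party schema isn't shown in the thread. The table and column names (`app_log`, `dumped_log`, `log_id`) are invented for illustration; on Postgres the SELECT/INSERT pair would run inside a single REPEATABLE READ transaction, as Andy suggested, so rows inserted concurrently are neither exported nor marked.]

```python
import sqlite3

# In-memory database standing in for the real PostgreSQL instance.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# app_log mimics the 3rd-party application's log table (cannot be altered);
# dumped_log is our own tracking table recording which rows were exported.
cur.execute("CREATE TABLE app_log (id INTEGER PRIMARY KEY, message TEXT)")
cur.execute("CREATE TABLE dumped_log (log_id INTEGER PRIMARY KEY)")
cur.executemany("INSERT INTO app_log (message) VALUES (?)",
                [("first",), ("second",), ("third",)])

# Pretend row 1 was exported on a previous run.
cur.execute("INSERT INTO dumped_log (log_id) VALUES (1)")

# Anti-join: fetch only log rows with no entry in the tracking table.
cur.execute("""
    SELECT l.id, l.message
    FROM app_log l
    LEFT JOIN dumped_log d ON d.log_id = l.id
    WHERE d.log_id IS NULL
""")
new_rows = cur.fetchall()

# After writing new_rows to the export file, record them as dumped in the
# same transaction so the next run skips them.
cur.executemany("INSERT INTO dumped_log (log_id) VALUES (?)",
                [(row_id,) for row_id, _ in new_rows])
conn.commit()
```

On the next run the anti-join returns only rows inserted since this commit, which gives the periodic export without touching the vendor's schema.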