When should log events be captured in a database?
From | James Hartley
---|---
Subject | When should log events be captured in a database?
Date |
Msg-id | CAKeNXXvaP7h_rmGpPYjJt3iFqoWz9u47eJxarp5poMHKN5Y34g@mail.gmail.com
Replies | Re: When should log events be captured in a database?
List | pgsql-novice
This is slightly off-topic, but the PostgreSQL cognoscenti are likely to be the best audience for the question.
I am writing an application -- an application-specific (scaled-down) Web server in Node.js. A question I keep asking myself is whether it is better to simply log incoming requests or to write them to a database. Ultimately, I would like to analyze the data, so moving it into a database makes sense. However, I can also see the point of logging as a simpler, less CPU- and I/O-intensive activity. When life goes wrong, capturing data in a log file may be the easier path. Plus, resources are freed to handle requests, which is the fundamental goal of a Web server anyway.
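For concreteness, here is a minimal sketch of the two paths as I picture them, side by side in one handler only for comparison. It assumes the node-postgres ("pg") client library and a hypothetical "requests" table; the log line format is just an illustration:

const fs = require('fs');
const http = require('http');
const { Pool } = require('pg');

// Option 1: append each request to a flat log file (cheap, sequential I/O).
const logStream = fs.createWriteStream('requests.log', { flags: 'a' });

// Option 2: insert each request into PostgreSQL.
// Connection settings come from the usual PG* environment variables.
const pool = new Pool();

http.createServer((req, res) => {
  const line = `${new Date().toISOString()} ${req.method} ${req.url}\n`;

  // Path A: fire-and-forget append to the log file.
  logStream.write(line);

  // Path B: parameterized insert into the hypothetical "requests" table;
  // failures are reported but do not block the response.
  pool.query(
    'INSERT INTO requests (ts, method, url) VALUES (now(), $1, $2)',
    [req.method, req.url]
  ).catch(err => console.error('insert failed:', err));

  res.end('ok');
}).listen(8080);

The file append is a single buffered write, while the insert costs a network round trip to the server, which is roughly the trade-off I am asking about.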
Yet if the database resides on another machine, that weakens some of the CPU and I/O load argument, since much of the write cost shifts to the remote host.
I can't believe that the frequency at which data is acquired is the determining factor, but I may be wrong.
I am also aware that there are various tools available for parsing log file data into databases, and writing such a tool is not altogether complicated. Nevertheless, this seems to be a redundant exercise.
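To illustrate what I mean by "not altogether complicated", a rough sketch of such a loader, assuming one request per line in the format from the sketch above (readline and the "pg" pool are standard; the table is again a placeholder):

const fs = require('fs');
const readline = require('readline');
const { Pool } = require('pg');

// Hypothetical after-the-fact loader: parse requests.log into PostgreSQL.
async function load() {
  const pool = new Pool();
  const rl = readline.createInterface({
    input: fs.createReadStream('requests.log'),
    crlfDelay: Infinity,
  });

  for await (const line of rl) {
    // Expecting: "<ISO timestamp> <method> <url>"
    const [ts, method, url] = line.split(' ');
    if (!url) continue; // skip malformed lines
    await pool.query(
      'INSERT INTO requests (ts, method, url) VALUES ($1, $2, $3)',
      [ts, method, url]
    );
  }
  await pool.end();
}

load().catch(err => { console.error(err); process.exit(1); });

Writing this is easy enough; my point is that running it feels like doing the same capture work twice.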
So, I come back full circle. Even PostgreSQL itself has its log files; not everything is written to database tables proper. Yet at what point does data take on a new status such that it should be collected in a database rather than simply written to logs?
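(PostgreSQL's own choice is instructive here: its event capture is controlled by a few postgresql.conf settings and goes to files, not tables. The values below are only an example:)

logging_collector = on             # capture server stderr into log files
log_destination = 'stderr'         # 'csvlog' would make later parsing easier
log_directory = 'log'              # relative to the data directory
log_min_duration_statement = 250   # log statements that run longer than 250 ms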
Thanks for all candor shared.