Hi there,
I am a new sysadmin, and I am also new to PostgreSQL and databases in
general. I have inherited a very old Gentoo server that has been running
a PostgreSQL database for years, and by now the log files alone are
about 10 gigabytes.
We run an Apache web server, and we have a Ruby script that parses
Apache's access_log and puts the access data into this database. The
script runs daily as a cron job.
The problem is that we never rotated the access_log generated by
Apache, so the file has grown so big that the daily job of parsing it
and entering the data into the database now takes hours to finish, eats
a lot of CPU, and puts great pressure on the old server.
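
My first guess is that we should start rotating access_log so the
script only ever parses one day's worth of lines, but I also wonder
whether row-by-row INSERTs are part of the slowdown. I read that
batching rows inside one transaction, or bulk-loading with COPY, is
much faster than one INSERT per line. The table and column names below
are made up, since I don't know our real schema yet, and I believe the
multi-row VALUES form needs PostgreSQL 8.2 or later:

    -- hypothetical table/columns; the real schema may differ
    BEGIN;
    INSERT INTO access_log (ts, client_ip, url)
        VALUES ('2008-06-02 10:15:00', '10.0.0.1', '/index.html'),
               ('2008-06-02 10:15:01', '10.0.0.2', '/about.html');
    COMMIT;

    -- or bulk-load a prepared file in one shot
    -- (COPY FROM a server-side file needs superuser; psql's \copy
    -- does the same thing from the client without it)
    COPY access_log (ts, client_ip, url)
        FROM '/tmp/access_batch.csv' WITH CSV;

Does that sound like the right direction?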
Before I try anything, though, I would like to know how my log data is
organized in the database, and then I will see if I can do something to
optimize the data entry into the db.
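
I am guessing I can poke around with psql, something like the session
below ("mydb" and "access_log" are placeholders, since I don't know
the real database or table names, and I believe the size functions
need PostgreSQL 8.1 or later):

    psql mydb
    \dt                -- list the tables in the database
    \d access_log      -- show one table's columns and indexes
    SELECT pg_size_pretty(pg_total_relation_size('access_log'));
                       -- table size including its indexes

Is that the right way to see the schema, or is there a better place to
look?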
Any pointers are greatly appreciated.
Zhengquan