Discussion: DB Performance
I need to find a way to increase performance on my server.

We are currently using Postgres as the back-end to our web-based, corporate-wide application. The application is used for everything from collecting large amounts of data to updating current data and creating large reports based on this data. At this point we have about 3000 users on the system, and this is going to grow rapidly.

We are running Apache, mod_perl, sendmail, and Postgres on our server. The machine is a dual 900MHz processor box with 2GB of RAM and fast 10k RPM RAID drives. I have set the shared memory on the machine to 512MB.

Postgres is configured as follows:

    sort_mem = 128672
    shared_buffers = 60800
    fsync = false

We will be purchasing new machines to split off the web server from the database server.

What else can I do to help performance? Will a Beowulf cluster help to increase performance?

Any suggestions would be greatly appreciated.

Thanks,

Gary
Gary DeSorbo
isasitis@uchicago.edu
Cell: 415.606.3857
Gary DeSorbo <isasitis@uchicago.edu> writes:
> Postgres is configured as follows:
> sort_mem = 128672
> shared_buffers = 60800
> fsync = false

Yipes. Back off that sort_mem setting --- that's 128MB *per sort*, which will undoubtedly run you out of memory (or at least into serious swapping) as soon as several processes try to do concurrent sorts. Something in the vicinity of 5 or 10 meg is probably more reasonable.

If you have multiple drives, consider relocating the WAL (pg_xlog/) onto a different drive, preferably one that normally doesn't touch anything but WAL.

			regards, tom lane
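For illustration, a minimal sketch of trying the lowered setting per session before committing it to postgresql.conf (assuming a 7.x-era server; sort_mem is specified in kilobytes, so 8192 is roughly 8MB per sort):

    -- Try the lower limit in the current session only; the permanent
    -- change goes in postgresql.conf, followed by a server reload.
    SET sort_mem = 8192;

    -- Confirm the active value, then run a representative report query.
    SHOW sort_mem;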
The first step would definitely be to split off the DB server. Secondly, check the application side: are there specific queries that take too much time? In that case, consider optimising those queries. To identify the slow queries you could use the pg_stat_activity view, or log the time taken by the various queries (see the sketch below). Also make sure that regular maintenance tasks like VACUUM ANALYZE are run on a periodic basis. Lastly, you could pump lots of RAM, SCSI RAID5, and Xeons ;-) into the new machine.

Also, questions like this should be asked on the performance list.

Regds
Mallah.
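A minimal sketch of the monitoring and maintenance commands mentioned above, assuming the statistics collector is enabled in postgresql.conf (stats_start_collector and stats_command_string); exact pg_stat_activity column names vary between versions:

    -- Show what each backend is currently executing; long-running
    -- entries here are candidates for query optimisation.
    SELECT datname, procpid, usename, current_query
      FROM pg_stat_activity;

    -- Periodic maintenance: reclaim dead rows and refresh the planner's
    -- statistics. Typically scheduled from cron during off-peak hours.
    VACUUM ANALYZE;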
Also check for application logic that loops where you could retrieve a single result with a GROUP BY statement (see the sketch below), and perform nightly maintenance: recreation of indexes, and vacuums. A Beowulf cluster will not help with Postgres due to its use of shared memory. Checking the logs to see that you're not recycling WAL files too often is also useful.

On Fri, 1 Nov 2002 09:56:09 -0800, Gary DeSorbo <isasitis@uchicago.edu> wrote:
> I need to find a way to increase performance on my server. [...]
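To illustrate the loop-versus-GROUP-BY point, a hypothetical example (the orders table and its columns are made up): instead of the application issuing one count query per department, let the database aggregate everything in a single pass:

    -- Anti-pattern: a Perl loop running, once per department,
    --   SELECT count(*) FROM orders WHERE dept_id = ?
    -- Replacement: one scan, one result set.
    SELECT dept_id, count(*) AS order_count, sum(total) AS order_total
      FROM orders
     GROUP BY dept_id;

    -- Nightly index recreation mentioned above, on the same
    -- hypothetical table.
    REINDEX TABLE orders;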