This may be low-hanging fruit, but...
1) Get your WAL files onto their own disk. This is fairly easy to do and
has paid off very well for us.
2) Use pgmonitor to observe activity, identify long-running queries, and
assess indexing opportunities.
3) Keep those daily VACUUM/ANALYZE runs going.
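On point 2: even without pgmonitor you can spot long-running queries by
querying pg_stat_activity from psql (you need stats_command_string = true
in postgresql.conf for current_query to be populated). A sketch, assuming
the 7.2-style column names:

```sql
-- One row per backend; watch for queries that stay current across
-- repeated runs of this statement.
SELECT procpid, usename, current_query
FROM pg_stat_activity
ORDER BY procpid;
```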
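On point 3: a crontab entry for the postgres superuser is the simplest way
to keep the daily VACUUM/ANALYZE going. The 3 a.m. slot below is just an
assumption; pick a quiet window for your site:

```
# min hour dom mon dow   command   (in the postgres user's crontab)
0 3 * * *   vacuumdb --all --analyze --quiet
```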
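On point 1: the usual trick (at least on the 7.x series) is to stop the
server, move the pg_xlog directory onto the dedicated disk, and leave a
symlink behind in $PGDATA. A minimal sketch of the symlink technique,
using throwaway temp directories to stand in for the real data directory
and WAL disk:

```shell
# Stand-ins for the real locations; in production, PGDATA is your data
# directory, WALDISK is the mount point of the dedicated WAL disk, and
# the postmaster must be stopped before you do this.
PGDATA=$(mktemp -d)
WALDISK=$(mktemp -d)

# Simulate an existing pg_xlog with one WAL segment in it.
mkdir "$PGDATA/pg_xlog"
touch "$PGDATA/pg_xlog/000000010000000000000001"

# Move the WAL directory to the dedicated disk and symlink it back.
mv "$PGDATA/pg_xlog" "$WALDISK/pg_xlog"
ln -s "$WALDISK/pg_xlog" "$PGDATA/pg_xlog"

# $PGDATA/pg_xlog is now a symlink; the server follows it transparently.
ls -ld "$PGDATA/pg_xlog"
```
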
Hope this helps.
Marc Mitchell - Senior Application Architect
Enterprise Information Solutions, Inc.
Downers Grove, IL 60515
marcm@eisolution.com
----- Original Message -----
From: "Gary DeSorbo" <gdesorbo@pro-unlimited.com>
To: <pgsql-admin@postgresql.org>
Sent: Friday, November 01, 2002 11:24 AM
Subject: [ADMIN] DB Performance
> I need to find a way to increase performance on my server.
>
> We are currently using Postgres as the back-end to our web-based,
> corporate-wide application. The application is used for everything from
> collecting large amounts of data and updating current data to creating
> large reports based on this data. At this point we have about 3,000
> users on the system, and this number is going to grow rapidly.
>
>
> We are running Apache, mod_perl, Sendmail, and Postgres on our server.
> The machine has dual 900MHz processors with 2 GB of RAM and fast
> 10k RPM RAID drives.
>
> I have set the shared memory on the machine to 512MB.
>
> Postgres is configured as follows:
>
> sort_mem = 128672
> shared_buffers = 60800
> fsync = false
>
>
> We will be purchasing new machines to split off the web server from the
> database server.
>
> What else can I do to help performance?
> Will a Beowulf cluster help to increase performance?
>
> Any suggestions would be greatly appreciated.
>
> Thanks,
>
> Gary
>