1) Make sure you vacuum & analyze your DB on a regular basis. Note that you can effectively lose your entire DB to the XID wraparound issue if you don't vacuum. (Transaction IDs are 32-bit; once roughly 2 billion transactions have passed, old unfrozen rows wrap around into the "future" and become invisible -- vacuum freezes old rows and pushes the danger point back each time it runs.) This is roughly the schedule I use -- hourly analyze, nightly vacuum, weekly vacuum full. There's also an autovacuum daemon that detects when your tables have seen enough updates to require an analyze or vacuum.
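For reference, here's that schedule as plain SQL -- just a sketch of the same commands (run from psql, e.g. out of cron), plus a standard catalog query to see how close each database is to wraparound:

    -- Rough sketch of the maintenance schedule:
    ANALYZE;       -- hourly: refresh the planner's statistics
    VACUUM;        -- nightly: reclaim dead rows and freeze old XIDs
    VACUUM FULL;   -- weekly: rewrite tables to return space to the OS
                   -- (beware: takes an exclusive lock on each table)

    -- How far is each database from wraparound? age() counts transactions
    -- since the oldest unfrozen XID; trouble starts near 2 billion.
    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;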
2) Check your Postgres configuration. The default config is designed to run on extremely old, low-memory machines and performs very poorly in any sensible server environment. If nobody has ever touched the config, Postgres is probably running in crippled mode. The following is a pretty good link to read up on tuning.
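In the meantime, you can at least see what the server is currently running with. The SHOW commands and parameter names below are standard (on pre-8.0 releases work_mem and maintenance_work_mem were called sort_mem and vacuum_mem); the comments are rough rules of thumb, not exact recommendations:

    SHOW shared_buffers;        -- main buffer cache; the shipped default is
                                -- tiny and usually needs a big raise
    SHOW effective_cache_size;  -- planner hint: how much the OS is caching
    SHOW work_mem;              -- per-sort / per-hash memory for each query
    SHOW maintenance_work_mem;  -- memory for VACUUM and CREATE INDEX
    SHOW max_connections;       -- every connection costs memory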
3) Check your hardware config. RAID-1 for the OS + WAL and RAID-10 for the Postgres data directory is the usual optimal layout. Some people say RAID-5 reaches parity with RAID-10 once you get to 8+ hard drives.
4) Examine the queries using the EXPLAIN ANALYZE command. Even if you have indexes on the relevant columns, it's possible they aren't being used, either because of data type mismatches or because the statistics tell the query planner a seqscan would be faster. (Whether it actually is faster requires further analysis.)
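To make the type-mismatch case concrete, here's a hypothetical example (the accounts table and account_no column are made up). On older Postgres releases a bare integer literal is int4, so an index on a bigint column could be silently skipped:

    -- account_no is BIGINT and indexed; the bare literal is int4, which
    -- older planners could not match against the int8 index:
    EXPLAIN ANALYZE SELECT * FROM accounts WHERE account_no = 12345;

    -- Casting the literal to the column's type fixes it:
    EXPLAIN ANALYZE SELECT * FROM accounts WHERE account_no = 12345::bigint;

Compare the two plans: if the first shows a Seq Scan and the second an Index Scan, the mismatch was your problem.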
Originally Posted by mufeed_usman
A client of the firm where I work sent a complaint stating that their application (banking software developed in VB) has begun to slow down when they execute certain queries. The DB is maintained using PostgreSQL and the server is running Fedora. The nodes are all Windows machines.