Optimal database table optimization method

From: Roger Tannous
Subject: Optimal database table optimization method
Date:
Msg-id: f45202fa1003041132o52badd9x560dc843cb999f9@mail.gmail.com
Replies: Re: Optimal database table optimization method  (John R Pierce <pierce@hogranch.com>)
         Re: Optimal database table optimization method  (Adrian von Bidder <avbidder@fortytwo.ch>)
         Re: Optimal database table optimization method  ("Dann Corbit" <DCorbit@connx.com>)
List: pgsql-general

Hello,

I have a database table that is growing too big (a few hundred million rows) and needs to be optimized, but before I get into partitioning it, I thought I'd ask for suggestions.

Here is the usage (a rough SQL sketch follows the list):

  1. The table contains about 10 columns of roughly 20 bytes each.

  2. INSERTs are performed at a rate of hundreds of times per second.

  3. SELECTs are performed on column 'a' (WHERE a = 'xxxx') a few times per hour.

  4. DELETEs are performed on a DATE column (rows whose date is older than 1 year), usually once per day.
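To make the workload concrete, here is a rough sketch of what the table and the three statements look like (the table and column names are made up for illustration; the real table simply has about 10 such columns):

    -- illustrative schema; real column names and types differ
    CREATE TABLE events (
        a           varchar(20),   -- the column used by the hourly SELECTs
        event_date  date,          -- the column used by the daily DELETE
        col3        varchar(20),
        -- ... about 10 columns of ~20 bytes each in total
        col10       varchar(20)
    );

    -- hundreds of times per second
    INSERT INTO events (a, event_date, col3, col10)
    VALUES ('xxxx', current_date, '...', '...');

    -- a few times per hour
    SELECT * FROM events WHERE a = 'xxxx';

    -- once per day
    DELETE FROM events WHERE event_date < current_date - interval '1 year';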


The key requirement is to speed up INSERT and SELECT statements, and to be able to keep one year of history without locking the whole table while deleting.

I would guess that I need two indexes, one on column 'a' and the other on the date field, or is it possible to optimize for both?
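For reference, the two indexes I have in mind would look something like this (again, the names are illustrative):

    -- supports the WHERE a = 'xxxx' lookups
    CREATE INDEX events_a_idx ON events (a);

    -- supports the date-based DELETE without a full table scan
    CREATE INDEX events_date_idx ON events (event_date);

I realize every extra index adds some cost to each INSERT, which is part of why I'm asking about the trade-off below.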

Will there necessarily be a trade-off between SELECT speed and DELETE speed?

Is partitioning the only solution? What are good strategies for partitioning such a table?

I'm using a PostgreSQL 8.4 database.
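In case it helps frame the question: as I understand it, partitioning on 8.4 has to be done with table inheritance, CHECK constraints, and an insert-routing trigger, roughly along these lines (monthly ranges and all names here are just my own assumption, for illustration):

    -- one child table per month, inheriting from the parent above
    CREATE TABLE events_2010_03 (
        CHECK (event_date >= DATE '2010-03-01'
           AND event_date <  DATE '2010-04-01')
    ) INHERITS (events);

    CREATE INDEX events_2010_03_a_idx    ON events_2010_03 (a);
    CREATE INDEX events_2010_03_date_idx ON events_2010_03 (event_date);

    -- a trigger (or the application) routes INSERTs to the right child
    CREATE OR REPLACE FUNCTION events_insert_router() RETURNS trigger AS $$
    BEGIN
        IF NEW.event_date >= DATE '2010-03-01'
           AND NEW.event_date < DATE '2010-04-01' THEN
            INSERT INTO events_2010_03 VALUES (NEW.*);
        ELSE
            RAISE EXCEPTION 'no partition for date %', NEW.event_date;
        END IF;
        RETURN NULL;  -- suppress the insert into the parent table
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER events_insert_trigger
        BEFORE INSERT ON events
        FOR EACH ROW EXECUTE PROCEDURE events_insert_router();

    -- with constraint_exclusion enabled, SELECTs skip irrelevant partitions,
    -- and expiring old data becomes a cheap DROP of the oldest partition
    DROP TABLE events_2009_02;

The appeal is that expiring old data would become a DROP TABLE instead of a huge DELETE, but I'd like to hear whether this is the right approach for this workload.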


Best Regards,

Roger Tannous.
