Re: Postgres-7.0.2 optimization question
From | Adam Ruth
---|---
Subject | Re: Postgres-7.0.2 optimization question
Date |
Msg-id | 8s747n$j7b$1@news.aros.net
In reply to | Postgres-7.0.2 optimization question ("Igor V. Rafienko" <igorr@ifi.uio.no>)
Responses | Re: Postgres-7.0.2 optimization question
List | pgsql-general
Post the query you're using; there may be a way to rewrite it to use the
index. I've found this to be true on all kinds of DBMSs.

--
Adam Ruth
InterCation, Inc.
www.intercation.com

"Igor V. Rafienko" <igorr@ifi.uio.no> wrote in message
news:Pine.SOL.4.21.0010131345100.23627-100000@vigrid.ifi.uio.no...
>
> Hi,
>
> I've got a slight optimization problem with postgres and I was hoping
> someone could give me a clue as to what could be tweaked.
>
> I have a couple of tables which contain little data (around 500,000
> tuples each), and most operations take an insanely long time to
> complete. The primary keys in both tables are ints (int8, iirc). When I
> perform a delete (with a where clause on a part of the primary key), an
> strace shows that postgres reads the entire table sequentially (lseek()
> and read()). Since each table is around 200MB, things take time.
>
> I tried vacuumdb --analyze. It did not help. I tried creating an index
> on the part of the primary key that is used in the abovementioned
> delete. It did not help either.
>
> Has anyone encountered the same kind of problems before? In that case,
> has anyone found a solution? (The problem is that the DB can very
> quickly get 20 times larger (i.e. 10,000,000 tuples per table is a
> moderate size), and I'd rather not witness a delete that takes around
> 90 minutes (100,000 tuples were deleted) more than once.)
>
>
> TIA,
>
>
> ivr
> --
> Women wearing Wonder bras and low-cut blouses lose their right to
> complain about having their boobs stared at.
> "Things men wish women knew"
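The original query isn't shown, but with an int8 primary key on Postgres 7.x a classic cause of this symptom was comparing the column against a bare integer literal: the planner typed the constant as int4 and would not use the int8 index, falling back to a sequential scan. Quoting the literal or casting it explicitly was the usual workaround. A hypothetical sketch (the table and column names here are invented for illustration):

```sql
-- Hypothetical schema: table "events" with an int8 primary-key column "id".
-- A bare numeric literal is typed int4, so on 7.x this DELETE
-- ignores the index and scans the whole table:
DELETE FROM events WHERE id = 100000;

-- Either form lets the planner match the int8 index:
DELETE FROM events WHERE id = '100000';       -- quoted literal, typed from context
DELETE FROM events WHERE id = 100000::int8;   -- explicit cast

-- EXPLAIN shows whether an Index Scan or a Seq Scan was chosen:
EXPLAIN DELETE FROM events WHERE id = '100000';
```

Checking the plan with EXPLAIN before and after the rewrite is the quickest way to confirm whether this is the problem.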