Re: [QUESTIONS] Business cases
From | Tom |
---|---|
Subject | Re: [QUESTIONS] Business cases |
Date | |
Msg-id | Pine.BSF.3.95q.980117145905.11450C-100000@misery.sdf.com |
In reply to | Re: [QUESTIONS] Business cases (The Hermit Hacker <scrappy@hub.org>) |
List | pgsql-hackers |
On Sat, 17 Jan 1998, The Hermit Hacker wrote:

> On Sat, 17 Jan 1998, Tom wrote:
>
> > How are large users handling the vacuum problem? vacuum locks other
> > users out of tables too long. I don't need a lot of performance (a few
> > queries per minute), but I need to be able to handle queries non-stop.
>
> Not sure, but this one is about the only major thing that is continuing
> to bother me :(

Is there any method of improving this? vacuum seems to do a _lot_ of
stuff. It seems that the crash recovery features and the maintenance
features should be separated. I believe the only required maintenance
tasks are recovering the space used by deleted tuples and updating
statistics? Neither of these should need to lock the database for long
periods of time.

> > Also, how are people handling tables with lots of rows? The 8k tuple
> > size can waste a lot of space. I need to be able to handle a 2 million
> > row table, which will eat up 16GB, plus more for indexes.
>
> This one is improved upon in v6.3, where at compile time you can
> stipulate the tuple size. We are looking into making this an 'initdb'
> option instead, so that you can have the same binary for multiple
> "servers", but any database created under a particular server will be
> constrained by that tuple size.

That might help a bit, but some tables may have big rows and some not.
For example, my 2 million row table requires only two date fields and 7
integer fields. That isn't very much data. However, I'd like to be able
to join against another table with much larger rows.

> Marc G. Fournier
> Systems Administrator @ hub.org
> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org

Tom
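The storage figures in the thread can be sketched as a back-of-the-envelope calculation. This is only an illustration of the arithmetic implied by the message, assuming (as the 16GB figure suggests) one tuple per 8k block, and assuming 4-byte date and integer fields; actual per-tuple header overhead is ignored.

```python
# Hypothetical storage estimate for the 2-million-row table discussed
# above. Assumptions: one tuple occupies a full 8k block (worst case),
# and date/integer fields are 4 bytes each.

ROWS = 2_000_000
BLOCK_SIZE = 8 * 1024          # the 8k tuple size mentioned in the thread

# Worst case: every row consumes a whole block.
worst_case_bytes = ROWS * BLOCK_SIZE
print(f"worst case: {worst_case_bytes / 1024**3:.1f} GiB")  # roughly the "16GB" in the message

# Actual payload: 2 date fields + 7 integer fields, 4 bytes each.
row_payload = 2 * 4 + 7 * 4    # 36 bytes per row
payload_bytes = ROWS * row_payload
print(f"raw data: {payload_bytes / 1024**2:.0f} MiB")
```

The gap between the two numbers (gigabytes versus tens of megabytes) is the space waste the message is complaining about.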