Re: Large Tables

From: Dave Cramer
Subject: Re: Large Tables
Msg-id: 1092231018.1561.50.camel@localhost.localdomain
In reply to: Large Tables  ("Waldomiro" <wmiro@ig.com.br>)
List: pgsql-jdbc
This is sort of the wrong list, but I'll address the JDBC-specific
issues with large tables.

First of all, there are postgres databases with terabytes of data, so
large tables are not an issue.

Now you need to be aware that if you do a select * from a large table
without cursors, the server will return the entire result set at once.
This means the driver will try to build a result set with 1 million
rows in memory, and chances are you will get an out-of-memory exception.

So, you will have to use setFetchSize() to allow the driver to fetch the
data in smaller chunks, or use cursors explicitly.

Additionally, the driver will only use cursors internally if you turn
autocommit off. This is an artifact of the way cursors work in
PostgreSQL: they are only usable within a transaction.
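
For example, something like this (just a rough sketch; the connection
URL, the table name and the fetch size of 500 are only placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class LargeTableRead {
    public static void main(String[] args) throws Exception {
        // Connection details are illustrative only.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "password");

        // Cursors only live inside a transaction, so the driver will
        // only use one internally when autocommit is off.
        conn.setAutoCommit(false);

        Statement st = conn.createStatement();
        // Fetch 500 rows at a time instead of building the whole
        // result set in memory.
        st.setFetchSize(500);

        ResultSet rs = st.executeQuery("SELECT * FROM big_table");
        while (rs.next()) {
            // process one row at a time here
        }
        rs.close();
        st.close();

        conn.commit();
        conn.close();
    }
}

With this, the driver fetches the rows in batches of 500 behind the
scenes, so memory use stays flat no matter how many rows the table has.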

Dave
On Wed, 2004-08-11 at 08:15, Waldomiro wrote:
> Hi,
>
> It's my first time working with PostgreSQL, so I have some questions.
>
> My application is working very well, but I have only a few records in
> my tables.
>
> I need to build a table which I know will have more than 1 million
> records.
>
> I'm afraid it might not work; I mean, will the indexes get corrupted?
>
> Or maybe it will get slower when every computer is reading the table?
>
> I suppose someone has already worked with large tables.
>
> How is it?
>
> Is there something I should know? Because I don't want to have
> surprises.
>
> Your suggestions will be very useful.
>
> Thank you.
>
> SHX INFORMÁTICA LTDA.
> Waldomiro Caraiani
> Product Development
> + 55 11 5581 1551
> wmiro@shx.com.br
> www.shx.com.br
--
Dave Cramer
519 939 0336
ICQ # 14675561

