Re: Large Tables(>1 Gb)
From | Fred_Zellinger@seagate.com
Subject | Re: Large Tables(>1 Gb)
Date |
Msg-id | OF0A2CE73B.6C82F8EE-ON8625690E.0066595B@stsv.seagate.com
In reply to | Large Tables(>1 Gb) (Fred_Zellinger@seagate.com)
List | pgsql-general
Thanks for all the great responses on this (doing a select * from large tables and hanging psql). Here is what I have:

--- psql uses libpq, which tries to load the entire result set into memory before spooling it.
--- use cursors to FETCH a selected number of rows at a time and then spool those (see the sketch at the end of this message).
--- use "select * from big_table limit 1000 offset 0;" for simple queries. Sometimes you just want to do a simple select * from mytable to get a look at the data, and you don't care which data.

I am about to take my multiple broken-up tables and dump them back into one table (and then shut off all those BASH shell scripts I wrote that checked the system date and created new monthly tables as needed... good scripting practice, but a waste of time).

However, something is still bugging me. Even though many people related stories of 7.5 Gb+ databases, I still can't make that little voice in me quit saying "breaking things into smaller chunks means faster work." There must be a relationship between file sizes and DB performance. This relationship can be broken into three parts:

1. How the hardware is arranged to pull in large files (fragmentation, partitions, etc.)
2. How the underlying OS deals with large files
3. How Postgres deals with (or is affected by) large files

I imagine the first two are the dominant factors in the relationship, but does anyone have experience with how small a factor the Postgres internals are? Are there any internal coding concerns that have had to deal with this (like the one mentioned about tables being split into files at about 1 Gb)?

(Curious) Fred
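For reference, a minimal sketch of the cursor approach mentioned above, run inside psql; the table name big_table, the cursor name big_cur, and the 1000-row batch size are just placeholders:

    BEGIN;                            -- cursors only live inside a transaction
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 1000 FROM big_cur;          -- spool the first 1000 rows
    FETCH 1000 FROM big_cur;          -- ...then the next batch, repeating as needed
    CLOSE big_cur;
    COMMIT;

Each FETCH pulls only that batch through libpq, so psql never has to hold the whole table in memory at once.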