Re: Create/Erase 5000 Tables in PostGRE SQL in execution
From: Sergey Moiseev
Subject: Re: Create/Erase 5000 Tables in PostGRE SQL in execution
Date:
Msg-id: 43CCB8EB.3010900@maloletka.ru
In reply to: Re: Create/Erase 5000 Tables in PostGRE SQL in execution (Christopher Browne <cbbrowne@acm.org>)
List: pgsql-general
Christopher Browne wrote:
>> Orlando Giovanny Solarte Delgado wrote:
>>> It is a web system and each user can run about 50 queries per
>>> session. I can have around 100 users simultaneously, so I can have
>>> about 5000 queries in progress at once. Each query is tied to a
>>> spatial component in PostGIS, so I need to store each query in
>>> PostgreSQL in order to use the full capabilities of PostGIS. The
>>> question is whether it is efficient to create a table in PostgreSQL
>>> at execution time for each query, use it, and then drop it. Is it
>>> possible to have 5000 tables in PostgreSQL? How does it perform?
>>
>> Use TEMP tables.
>
> Hmm. To what degree do temp tables leave dead tuples lying around in
> pg_class, pg_attribute, and such?
>
> I expect that each one of these connections will leave a bunch of dead
> tuples lying around in the system tables. The system tables will need
> more vacuuming than if the data was placed in some set of
> more-persistent tables...
>
> None of this seems forcibly bad; you just need to be sure that you
> vacuum the right things :-).

Since there is pg_autovacuum, you don't need to think about it.

--
Wbr, Sergey Moiseev
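[Editor's note: for illustration, a minimal sketch of the TEMP-table approach discussed above. The table and column names are hypothetical, and the geometry column assumes PostGIS is installed.]

    -- Each web request gets a session-local scratch table for its query
    -- results; ON COMMIT DROP removes it automatically at transaction end.
    BEGIN;

    CREATE TEMP TABLE query_result (
        id   serial PRIMARY KEY,
        geom geometry              -- PostGIS geometry column (assumed)
    ) ON COMMIT DROP;

    -- ... INSERT the rows for this consultation and run PostGIS
    -- functions against query_result here ...

    COMMIT;   -- the temp table and its catalog entries are dropped

    -- The dead tuples this leaves in pg_class, pg_attribute, etc. are
    -- reclaimed by autovacuum (pg_autovacuum), so no manual VACUUM of
    -- the system catalogs is normally needed.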