Re: Re: Re: is PG able to handle a >500 GB Database?
From | Tom Lane |
---|---|
Subject | Re: Re: Re: is PG able to handle a >500 GB Database? |
Date | |
Msg-id | 1672.980008957@sss.pgh.pa.us |
In reply to | Re: Re: Re: is PG able to handle a >500 GB Database? ("Brett W. McCoy" <bmccoy@chapelperilous.net>) |
List | pgsql-general |
"Brett W. McCoy" <bmccoy@chapelperilous.net> writes: >> last_value will return whatever value was last assigned >> by any backend, therefore you might not get the value that was inserted >> into your tuple, but someone else's. > In that case you would call next_val *before* you insert and use that > value in the INSERT statement. Yup, that works too. Which one you use is a matter of style, I think. (Actually I prefer the nextval-first approach myself, just because it seems simpler and more obviously correct. But currval-after does work.) To bring this discussion back to the original topic: sequences are also 4-byte counters, at present. But there's still some value in using a sequence to label rows in a huge table, rather than OIDs. Namely, you can use a separate sequence for each large table. That way, you only get into trouble when you exceed 4G rows entered into a particular table, not 4G rows created in the entire database cluster. regards, tom lane