Re: 4 billion record limit?
| From | Paul Caskey |
|---|---|
| Subject | Re: 4 billion record limit? |
| Date | |
| Msg-id | 3981BD92.B5B479E0@nmxs.com |
| In reply to | 4 billion record limit? (Paul Caskey <paul@nmxs.com>) |
| List | pgsql-general |
Thomas Lockhart wrote:
> > FWIW, I checked into MySQL, and as far as I can tell, they have nothing
> > like this implicit 4 billion transactional "limit". So maybe competitive
> > spirit will drive the postgres hackers to fix this problem sooner than
> > later. ;-)
>
> We have *never* had a report of someone pushing this 4GB limit, and
> theoretical problems usually go into the long-term development plan, not
> in the "OHMYGODITSBROKEN" list.

That's absolutely true, and I'm sorry for pushing your MySQL button. I'm not in some sort of panic to get this fixed. I just raised this issue to (1) see if I was the only one who noticed/cared and (2) learn some details to see if/what limits were really there.

Tom Lane wrote:
> Hmph. Has anyone ever come close to shoving 4G rows into MySQL? Maybe
> they just haven't documented their limits...

The gauntlet has been dropped! Looks like I have my work cut out for me this weekend. :-)

Again, just to clarify: I'm not concerned about a database holding 4 billion records at once. I'm concerned about performing 4,000,000,000 inserts and 3,999,999,000 deletes over the course of a few years. I just ran the numbers, and that's about 50 transactions/second on an international web site active 24 hours a day for 3 years. Put 5 such sites on one server and you're down to 10 trans/sec per site. Ramp up to 30 trans/sec per site and you're down to 1 year. Juggle these numbers however you want; it's not that far-fetched.

Sure, it has a lot to do with the application. Most databases are read much more than they're written, and of course a SELECT doesn't chew up an OID. But every INSERT does. Any application that updates information frequently can be vulnerable to this problem. With my table structure, it happens to be much easier to DELETE and INSERT inside a transaction than to UPDATE. And every "row" of customer-supplied data could turn into 100 INSERTs, so I'm accelerating toward that 4G limit 100 times faster.
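The back-of-the-envelope arithmetic above can be checked with a few lines of Python (a sketch only; the 3-year lifetime, the ~4 billion 32-bit OID space, and the 100-INSERTs-per-row multiplier are the figures quoted in the message, not measured values):

```python
# Rough check of the OID burn rate described in the message.
SECONDS_PER_YEAR = 365 * 24 * 3600

oid_space = 4_000_000_000          # ~4 billion OIDs in a 32-bit counter
years = 3
total_seconds = years * SECONDS_PER_YEAR

# Sustained INSERT rate that would exhaust the OID space in 3 years.
rate = oid_space / total_seconds
print(f"{rate:.0f} inserts/sec")   # roughly 40-50, matching "about 50"

# If each customer-supplied row fans out into 100 INSERTs, the
# row rate needed to hit the limit is 100x lower.
row_rate = rate / 100
print(f"{row_rate:.2f} rows/sec")
```

So a site sustaining only a fraction of a customer row per second, with the 100x fan-out, would reach the limit on the same 3-year timescale.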
Bottom line: I can recycle my own IDs if I have to. But I need to know whether I should change my table structure and/or code to conserve OIDs, an internal system identifier particular to the RDBMS I chose. Considering the main reason I chose Postgres was for scalability, this seems a little ironic.

--
Paul Caskey  paul@nmxs.com
Software Engineer
New Mexico Software
5041 Indian School NE
Albuquerque, NM 87110