Re: stored proc and inserting hundreds of thousands of rows
From        | Greg Smith
Subject     | Re: stored proc and inserting hundreds of thousands of rows
Date        |
Msg-id      | 4DBDED43.4050303@2ndQuadrant.com
In reply to | Re: stored proc and inserting hundreds of thousands of rows (Samuel Gendler <sgendler@ideasculptor.com>)
List        | pgsql-performance
On 04/30/2011 09:00 PM, Samuel Gendler wrote:
> Some kind of in-memory cache of doc/ad mappings which the ad server
> interacts with will serve you in good stead and will be much easier to
> scale horizontally than most relational db architectures lend
> themselves to... Even something as simple as a process that pushes the
> most recent doc/ad mappings into a memcache instance could be
> sufficient - and you can scale your memcache across as many hosts as
> is necessary to deliver the lookup latencies that you require no
> matter how large the dataset.

Many of the workloads I see people switching to NoSQL key/value stores for would be served equally well, on the performance side, by a memcache layer between the application and the database. If you can map the problem onto key/value pairs for NoSQL, you can almost certainly do the same with a caching layer above PostgreSQL instead. The main downside, and what people seem to object to, is that it leaves two pieces of software to maintain, where a NoSQL solution needs only one. If you also have more complicated queries to run, though, the benefit of a more capable database should outweigh that extra complexity.
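To make that concrete, here is a minimal cache-aside sketch of such a layer in Python, assuming pymemcache and psycopg2 as the client libraries; the doc_ad_map table, its doc_id/ad_id columns, the connection details, and the TTL are all hypothetical placeholders, not anything from this thread:

    import json

    import psycopg2
    from pymemcache.client.base import Client

    CACHE_TTL = 60  # seconds; tune to how stale a doc/ad mapping may be

    cache = Client(("localhost", 11211))  # memcache host is a placeholder
    db = psycopg2.connect("dbname=ads")   # connection string is a placeholder

    def ads_for_doc(doc_id):
        """Return the ad ids mapped to a document, consulting memcache first."""
        key = "doc_ads:%d" % doc_id
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)     # cache hit: no database round trip
        with db.cursor() as cur:          # cache miss: fall back to PostgreSQL
            cur.execute("SELECT ad_id FROM doc_ad_map WHERE doc_id = %s",
                        (doc_id,))
            ad_ids = [row[0] for row in cur.fetchall()]
        cache.set(key, json.dumps(ad_ids), expire=CACHE_TTL)  # repopulate
        return ad_ids

The point is that the key/value lookup path never touches the relational schema on a cache hit, and a separate process could just as easily push fresh mappings into memcache ahead of demand, as Samuel suggests.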
--
Greg Smith   2ndQuadrant US    greg@2ndQuadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books