Re: Performance (was: The New Slashdot Setup (includes MySql server))
From: Tom Lane
Subject: Re: Performance (was: The New Slashdot Setup (includes MySql server))
Date:
Msg-id: 8774.958757789@sss.pgh.pa.us
In reply to: Re: Performance (was: The New Slashdot Setup (includes MySql server)) (Bruce Momjian <pgman@candle.pha.pa.us>)
Responses: Re: Performance (was: The New Slashdot Setup (includes MySql server))
List: pgsql-hackers
Bruce Momjian <pgman@candle.pha.pa.us> writes:
> All the sequential catalog scans that return one row are gone. What has
> not been done is adding indexes for scans returning more than one row.

I've occasionally wondered whether we can't find a way to use the catcaches for searches that can return multiple rows. It'd be easy enough to add an API for catcache that could return multiple rows given a nonunique search key. The problem is how to keep the catcache up to date with underlying reality for this kind of query. Deletions of rows will be handled by the existing catcache invalidation mechanism, but how can we know when some other backend has added a row that will match a search condition? Haven't seen an answer short of scanning the table every time, which makes the catcache no win at all.

			regards, tom lane
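[Editor's illustration, not part of the original message: a minimal, self-contained sketch of the kind of multi-row lookup being discussed. All identifiers below are hypothetical and the "catalog" is just a static array; the comments call out the invalidation gap Tom describes, namely that deletions can be handled by the existing invalidation mechanism but inserts by other backends would leave a cached list silently incomplete.]

/*
 * Hypothetical sketch (not PostgreSQL source): given a non-unique key,
 * return every matching "catalog" row, the way a multi-row catcache
 * search might.
 */
#include <stdio.h>
#include <string.h>

typedef struct CatRow
{
    const char *relname;   /* table the attribute belongs to */
    const char *attname;   /* attribute name */
} CatRow;

/* Stand-in for the underlying system catalog (think pg_attribute). */
static const CatRow catalog[] = {
    {"orders", "id"},
    {"orders", "amount"},
    {"users",  "id"},
};

/*
 * Hypothetical multi-row search: collect every row whose relname matches
 * the search key.  A real catcache version would consult the cache first
 * and fall back to a heap scan on a miss.
 */
static int
SearchCatCacheMulti(const char *relname, const CatRow **result, int maxrows)
{
    int nfound = 0;

    for (size_t i = 0; i < sizeof(catalog) / sizeof(catalog[0]); i++)
    {
        if (strcmp(catalog[i].relname, relname) == 0 && nfound < maxrows)
            result[nfound++] = &catalog[i];
    }
    return nfound;
}

int
main(void)
{
    const CatRow *rows[8];
    int n = SearchCatCacheMulti("orders", rows, 8);

    for (int i = 0; i < n; i++)
        printf("%s.%s\n", rows[i]->relname, rows[i]->attname);

    /*
     * The hard part, per the message above: if another backend inserts
     * ("orders", "shipped_at") after this list is cached, deletion-style
     * invalidation never fires, so nothing tells this cache that its
     * cached list for "orders" is now missing a row.
     */
    return 0;
}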