I've collected slow and fast query plans, and it looks like this: when the data is cleaned up, PostgreSQL no longer knows which tables are big and which are small. When the data is then regenerated inside one big transaction, the rows from this uncommitted transaction already affect SELECT queries (within that transaction), but VACUUM/ANALYZE cannot see the uncommitted data and cannot adjust the statistics => the query planner can come up with a suboptimal query plan.
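A minimal sketch of the scenario (the table and column names here are hypothetical, just to illustrate the setup, not the actual schema):

```sql
-- Hypothetical schema: a small lookup table and a large fact table.
CREATE TABLE small_lookup (id int PRIMARY KEY, code text);
CREATE TABLE big_fact (
    id        bigserial PRIMARY KEY,
    lookup_id int REFERENCES small_lookup(id),
    payload   text
);

-- Cleanup, then regenerate everything inside one big transaction.
BEGIN;
TRUNCATE big_fact, small_lookup;  -- after this, old statistics are stale

INSERT INTO small_lookup
SELECT g, 'code_' || g FROM generate_series(1, 10) g;

INSERT INTO big_fact (lookup_id, payload)
SELECT (random() * 9 + 1)::int, md5(g::text)
FROM generate_series(1, 10000000) g;

-- Inside this transaction the new rows are already visible to SELECTs,
-- but autovacuum/ANALYZE running in other backends cannot see them,
-- so pg_class.reltuples and pg_stats still describe the old contents.
```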
But the query itself contains a hint about which table has to be filtered first: there is a WHERE clause that keeps just one row from that table. Nevertheless, the planner decides to join another (very big) table first => performance degrades by orders of magnitude. A sketch of the query shape follows below.
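Roughly this query shape (again with hypothetical names): the WHERE clause pins down a single row of the small table, yet the slow plan starts from the big table.

```sql
-- Run inside the same transaction (or right after COMMIT, before ANALYZE has run):
EXPLAIN (ANALYZE, BUFFERS)
SELECT f.payload
FROM small_lookup s
JOIN big_fact f ON f.lookup_id = s.id
WHERE s.code = 'code_1';          -- matches exactly one row of small_lookup

-- With stale statistics the planner may misestimate both row counts and choose
-- a plan that scans/hashes big_fact first instead of starting from the single
-- qualifying small_lookup row. Running ANALYZE small_lookup, big_fact;
-- after COMMIT is expected to bring back the fast plan.
```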
To me it looks like a flaw in the query planner logic: having no data about the tables' contents, it ignores the WHERE clause that hints which table has to be processed first => I'm not sure whether this should be treated as a performance issue or a bug.
Query plans are attached as PEV2 standalone HTML pages.