RE: Plans for solving the VACUUM problem
From | Mikheev, Vadim |
---|---|
Subject | RE: Plans for solving the VACUUM problem |
Date | |
Msg-id | 3705826352029646A3E91C53F7189E32016658@sectorbase2.sectorbase.com |
In reply to | Plans for solving the VACUUM problem (Tom Lane <tgl@sss.pgh.pa.us>) |
List | pgsql-hackers |
> > Removing dead records from rollback segments should
> > be faster than from datafiles.
>
> Is it for better locality or are they stored in a different way?

Locality - all dead data would be localized in one place.

> Do you think that there is some fundamental performance advantage
> in making a copy to the rollback segment and then deleting it from
> there vs. reusing space in datafiles?

As shown by WAL, additional writes don't mean worse performance.
As for deleting from the RS (rollback segment) - we could remove or
reuse RS files as a whole.

> > > How does it do MVCC with an overwriting storage manager?
> >
> > 1. A System Change Number (SCN) is used: the system increments it
> >    on each transaction commit.
> > 2. When a scan meets a data block with an SCN newer than the SCN
> >    at the time the query/transaction started, the old block image
> >    is restored using rollback segments.
>
> You mean it is restored in the session that is running the transaction?
>
> I guess that it could be slower than our current way of doing it.

Yes, for older transactions which *really* need *particular* old data,
but not for newer ones. Look - right now transactions have to read dead
data again and again, even if some of them (the newer ones) need not see
those data at all, and we keep dead data around as long as it is required
for other old transactions, *just in case* they will look there. But who
knows?! Maybe those old transactions will never read from the table with
the big amount of dead data at all! So why keep dead data in the
datafiles for a long time? This obviously hurts overall system
performance.

Vadim
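[Editor's note: below is a minimal C sketch of the SCN-based visibility
check described in points 1 and 2 above. All names here (SCN,
BlockHeader, get_consistent_block, restore_block_image) are hypothetical
illustrations under the stated assumptions, not actual Oracle or
PostgreSQL internals.]

    #include <stdint.h>

    typedef uint64_t SCN;            /* System Change Number, bumped at
                                        each transaction commit */

    typedef struct BlockHeader
    {
        SCN scn;                     /* SCN of the last change applied
                                        to this data block */
        /* ... block contents follow ... */
    } BlockHeader;

    /* hypothetical helper: rebuilds an older block image by applying
       undo records from the rollback segment until the image's SCN is
       no newer than snapshot_scn */
    extern const BlockHeader *restore_block_image(const BlockHeader *blk,
                                                  SCN snapshot_scn);

    /*
     * Return a block image consistent with the snapshot taken at
     * snapshot_scn (the SCN when the query/transaction started).
     */
    const BlockHeader *
    get_consistent_block(const BlockHeader *blk, SCN snapshot_scn)
    {
        if (blk->scn <= snapshot_scn)
            return blk;              /* current image is visible as-is */

        /* block is "too new" for this snapshot: reconstruct the old
           image from the rollback segment, in the reading session */
        return restore_block_image(blk, snapshot_scn);
    }

Note how this matches the discussion above: new transactions pay nothing
for old data, while only the older transactions that actually touch a
modified block pay the cost of restoring it from the rollback segment.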