Re: The ability of postgres to determine loss of files of the main fork
From: Michael Banck
Subject: Re: The ability of postgres to determine loss of files of the main fork
Date:
Msg-id: 68dd1b79.170a0220.3c4175.198f@mx.google.com
In response to: Re: The ability of postgres to determine loss of files of the main fork (Jakub Wartak <jakub.wartak@enterprisedb.com>)
List: pgsql-hackers
Hi,

On Wed, Oct 01, 2025 at 02:05:53PM +0200, Jakub Wartak wrote:
> On Wed, Oct 1, 2025 at 1:46 PM Aleksander Alekseev
> <aleksander@tigerdata.com> wrote:
> > > IMHO all files should be opened at least on startup to check
> > > integrity, I would say s/startup/crash recovery/, if any.
> >
> > That might be a lot of files to open.
>
> I was afraid of that, but let's say modern high-end is a 200TB big DB,
> that's like 200*1024 1GB files, but I'm getting such time(1) timings
> for 204k files on ext4:
>
> $ time ./createfiles                       # real 0m2.157s, it's open(O_CREAT)+close()
> $ time ls -l many_files_dir/ > /dev/null   # real 0m0.734s
> $ time ./openfiles                         # real 0m0.297s, for already existing ones (hot)
> $ time ./openfiles                         # real 0m1.456s, for already existing ones (cold, echo 3 > drop_caches sysctl)
>
> Not bad in my book as a one time activity. It could pose a problem
> potentially with some high latency open() calls, maybe NFS or
> something remote I guess.

Yeah, did you try on SAN as well? I am doubtful that will be performant.


Michael