Re: failures on barnacle (CLOBBER_CACHE_RECURSIVELY) because of memory leaks
From | Tomas Vondra
Subject | Re: failures on barnacle (CLOBBER_CACHE_RECURSIVELY) because of memory leaks
Date |
Msg-id | 53F4E563.9040200@fuzzy.cz
In reply to | Re: failures on barnacle (CLOBBER_CACHE_RECURSIVELY) because of memory leaks (Tomas Vondra <tv@fuzzy.cz>)
Responses | Re: failures on barnacle (CLOBBER_CACHE_RECURSIVELY) because of memory leaks
List | pgsql-hackers
Hi,

On 13.8.2014 19:17, Tomas Vondra wrote:
> On 13.8.2014 17:52, Tom Lane wrote:
>
>> * I'm a bit dubious about testing -DRANDOMIZE_ALLOCATED_MEMORY in the
>> same build as -DCLOBBER_CACHE_RECURSIVELY, because each of these is
>> darned expensive and it's not clear you'd learn anything by running
>> them both together. I think you might be better advised to run two
>> separate buildfarm critters with those two options, and thereby perhaps
>> get turnaround in something less than 80 days.
>
> OK, I removed this for barnacle/addax/mite, let's see what's the impact.
>
> FWIW We have three other animals running with CLOBBER_CACHE_ALWAYS +
> RANDOMIZE_ALLOCATED_MEMORY, and it takes ~20h per branch. But maybe the
> price when combined with CLOBBER_CACHE_RECURSIVELY is much higher.
>
>> * It'd likely be a good idea to take out the TestUpgrade and TestDecoding
>> modules from the config too. Otherwise, we won't be seeing barnacle's
>> next report before 2015, judging from the runtime of the check step
>> compared to some of the other slow buildfarm machines. (I wonder whether
>> there's an easy way to skip the installcheck step, as that's going to
>> require a much longer run than it can possibly be worth too.)
>
> OK, I did this too.
>
> I stopped the already running test on addax and started the test on
> barnacle again. Let's see in a few days/weeks/months what is the result.

It seems to be running much faster (probably thanks to removing the
randomization), and apparently it got through the create_view tests
without crashing, but then crashed at 'constraints' (again, because of OOM).

  PortalMemory: 8192 total in 1 blocks; 7880 free (0 chunks); 312 used
    PortalHeapMemory: 1024 total in 1 blocks; 840 free (0 chunks); 184 used
      ExecutorState: 769654952 total in 103 blocks; 114984 free (296 chunks); 769539968 used
        ExprContext: 0 total in 0 blocks; 0 free (0 chunks); 0 used

I suppose we don't expect a ~760MB ExecutorState here. Also, there's a
~60MB MessageContext.

It's still running, so I'm attaching the relevant part of the log
(again, with MemoryContextStats output for backends with VSS >= 512MB;
see the PS below for a sketch of how such dumps can be collected).

FWIW, it's running against a844c29966d7c0cd6a457e9324f175349bb12df0.

regards
Tomas
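PS: a minimal sketch of how per-backend MemoryContextStats dumps like the
one above can be collected (assuming a Linux box where ps reports vsz in
KiB, and a cassert/debug build with symbols so gdb can call functions in
the running backend):

    # Dump memory context stats for every postgres backend whose virtual
    # size exceeds 512MB (524288 KiB). The output goes to the backend's
    # stderr, i.e. it ends up in the server log.
    for pid in $(ps -C postgres -o pid=,vsz= | awk '$2 >= 524288 {print $1}')
    do
        gdb -p "$pid" --batch -ex 'call MemoryContextStats(TopMemoryContext)'
    done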