Re: Transaction progress
From:        Manfred Koizar
Subject:     Re: Transaction progress
Date:
Msg-id:      9fdm2vssn1iih1qd49gbkcr22t3dn2gehr@4ax.com
In reply to: Re: Transaction progress (Pablo Yaggi <pyaggi@aulamagna.com.ar>)
Responses:   Re: Transaction progress
List:        pgsql-general
On Sun, 19 Jan 2003 20:22:45 -0300, Pablo Yaggi
<pyaggi@aulamagna.com.ar> wrote:

>But anyway the planner says that it will use Seq scan anyway,
>so is it not the same if enable_seqscan is on or not ?

"Seq scan anyway" applies to inars_nocontrib_perm1, but there's still
that index scan on inars_rectificaciones, which will turn into a (much
faster) seq scan if you allow it (see PS 1 below for how to try this).

>how did you realize that inars_nocontrib_perm1 is 300000 pages ?

In cost=100000000.00..100347379.07, the 100000000.00 is a fake startup
cost meant to talk the planner out of choosing the seq scan.  From
costsize.c: "The cost of reading a page sequentially is 1.0, by
definition."  347379.07 is the cost for reading all pages plus
processing all tuples.  Given the low number of tuples (43407), the
latter hardly matters, so I guessed 300K pages (PS 2 spells out the
arithmetic).

>Merge Join  (cost=209527960.44..247478144.43 rows=612 width=60)
>  ->  Sort  (cost=209527960.44..209527960.44 rows=35037092 width=26)
>        ->  Seq Scan on inars_nocontrib_perm1  (cost=100000000.00..100697315.92 rows=35037092 width=26)
>  ->  Index Scan using inars_rectificaciones_inx on inars_rectificaciones b2  (cost=0.00..37838713.13 rows=9546358 width=34)

I have absolutely no feeling for how long a sort of 35M rows might
take.  Be sure to give it enough sort_mem, but OTOH not too much: the
whole dataset doesn't fit into memory, so it has to be written to disk
anyway (PS 3 shows how to change it per session).  Sort_mem (and
shared_buffers) is discussed on -general, -admin and -performance
every now and then.  Search the list archives.

>> (c) There are almost 10M rows in inars_rectificaciones, the width of
>> which is at least 34 (d).  Counting for overhead and columns not used
>34 (d) ?  Sorry, I didn't get it.

"34 (d)" means "34 bytes; see note (d) above".

BTW, you join 35M rows to 10M rows and the planner expects to get only
612 rows.  Is this realistic?  If it is, can you change your
application to store the "candidates" in a third table as soon as they
are created (a sketch is in PS 4)?  Then you could get a plan like

Nested loop
  ->  Seq scan on candidates (rows=612)
  ->  Index scan on inars_rectificaciones_inx (rows=1 loops=612)
  ->  Index scan on inars_nocontrib_perm1_inx (rows=1 loops=612)

and an execution time of a few seconds.

Servus
 Manfred
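
PS 1: A minimal way to let the planner reconsider, assuming you had
turned seq scans off for this session.  The EXPLAIN target is your
original join query, which I don't repeat here:

  SET enable_seqscan = on;   -- undo the earlier "off" setting
  EXPLAIN
  SELECT ...;                -- your original query goes here

The fake 100000000.00 startup cost should vanish from the output, and
the index scan on inars_rectificaciones should become a seq scan.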
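
PS 2: The arithmetic behind the 300K guess, assuming the default
cpu_tuple_cost of 0.01:

  total cost = relpages * 1.0 + reltuples * cpu_tuple_cost
  347379.07  ~ relpages + 43407 * 0.01
  relpages   ~ 346945, i.e. roughly 300K pages

You can also read the planner's estimates straight from the catalog:

  SELECT relpages, reltuples
  FROM pg_class
  WHERE relname = 'inars_nocontrib_perm1';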
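
PS 3: sort_mem is specified in kilobytes and can be set per session;
the figure below is purely illustrative, not a recommendation:

  SET sort_mem = 65536;   -- 64 MB, for this session only

It falls back to the configured default when you reconnect.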
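
PS 4: A sketch of the third-table idea.  I don't know your schema, so
the key column "cuil" is purely hypothetical; substitute whatever your
indexes inars_rectificaciones_inx and inars_nocontrib_perm1_inx are
built on:

  -- hypothetical key column; adapt the names to your schema
  CREATE TABLE candidates (cuil numeric PRIMARY KEY);

  -- have the application insert each candidate as soon as it is
  -- created; the big join then only ever touches ~612 rows:
  SELECT *
  FROM candidates c
  JOIN inars_rectificaciones b2 ON b2.cuil = c.cuil
  JOIN inars_nocontrib_perm1 p1 ON p1.cuil = c.cuil;

With both big-table indexes on that key, the planner can produce the
nested loop plan shown above.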