Re: Support Parallel Query Execution in Executor
From:            Myron Scott
Subject:         Re: Support Parallel Query Execution in Executor
Date:
Msg-id:          6D5B293D-6139-4498-939E-FF44155D5080@sacadia.com
In response to:  Re: Support Parallel Query Execution in Executor ("Luke Lonergan" <llonergan@greenplum.com>)
Responses:       Re: Support Parallel Query Execution in Executor
List:            pgsql-hackers
On Apr 8, 2006, at 10:29 PM, Luke Lonergan wrote:

> Myron,
>
> First, this sounds really good!
>
> On 4/8/06 9:54 PM, "Myron Scott" <lister@sacadia.com> wrote:
>
>> I added a little hack to the buffer code to force pages read into the
>> buffer to stay at the back of the free buffer list until the master
>> thread has had a chance to use it.
>
> This is the part I'm curious about - is this using the shared_buffers
> region in a circular buffer fashion to store pre-fetched pages?

Yes, that is basically what the slave thread is trying to do, as well as
weed out any tuples/pages that don't need to be looked at because of dead
tuples.

I did several things to try to ensure that a buffer needed by the master
thread would not be pulled out of the buffer pool before the master had
seen it. I wanted to do this without holding the buffer pinned, so I
changed the buffer free list as follows:

static void
AddBufferToFreelist(BufferDesc *bf)
{
    int movebehind;

    S_LOCK(&SLockArray[FreeBufMgrLock]);

    movebehind = SharedFreeList->freePrev;

    /* find the right spot with bias */
    while (BufferDescriptors[movebehind].bias > bf->bias)
        movebehind = BufferDescriptors[movebehind].freePrev;
    ...

The bias number is removed the next time the buffer is pulled out of the
free list. Also, I force an ItemPointer transfer when the ItemPointer
transfer list is full (currently 4096 entries) or when 10% of the buffer
pool has been affected by the slave thread. Lastly, if the slave thread
gets too far ahead of the master thread, it waits for the master to catch
up; to my knowledge, this hasn't happened yet.

> One thing I've wondered about is: how much memory is required to get
> efficient overlap? Did you find that you had to tune the amount of
> buffer memory to get the performance to work out?

I haven't done much tuning yet. I think there is an optimal balance that
I most likely haven't found yet.

Myron Scott
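
To make the transfer and throttling policy above concrete, here is a
minimal, self-contained sketch. It is not code from the patch: the names
(slave_emit, master_take_batch, MAX_LEAD, NBUFFERS) and the standalone
pthread setup are assumptions invented for illustration; only the
4096-entry transfer list, the 10%-of-buffer-pool trigger, and the
wait-for-the-master-to-catch-up behaviour come from the description above.

/*
 * Illustrative sketch only -- not from the actual patch.  A prefetching
 * "slave" thread accumulates item pointers and hands a batch to the
 * "master" when the transfer list fills (4096 entries) or 10% of the
 * buffer pool has been touched; if the slave gets too far ahead of the
 * master it blocks until the master catches up.
 */
#include <pthread.h>
#include <stdio.h>

#define XFER_LIST_SIZE  4096                /* transfer when list is full  */
#define NBUFFERS        16384               /* pretend buffer-pool size    */
#define TOUCH_LIMIT     (NBUFFERS / 10)     /* ...or 10% of pool touched   */
#define MAX_LEAD        4                   /* batches slave may run ahead */

typedef struct { unsigned blkno; unsigned offset; } ItemPtr;

static ItemPtr xfer[XFER_LIST_SIZE];        /* only the slave writes this  */
static int     nxfer;       /* pointers accumulated since last transfer    */
static int     ntouched;    /* buffers touched since last transfer         */
static int     lead;        /* batches handed over but not yet consumed    */
static int     done;        /* slave has finished its scan                 */

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  more = PTHREAD_COND_INITIALIZER;  /* master waits   */
static pthread_cond_t  room = PTHREAD_COND_INITIALIZER;  /* slave waits    */

/* Slave: hand the accumulated batch to the master, throttling if ahead. */
static void
slave_flush(void)
{
    pthread_mutex_lock(&lock);
    while (lead >= MAX_LEAD)            /* too far ahead; wait for master */
        pthread_cond_wait(&room, &lock);
    lead++;
    nxfer = 0;
    ntouched = 0;
    pthread_cond_signal(&more);
    pthread_mutex_unlock(&lock);
}

/* Slave: record one qualifying tuple found on a freshly read page. */
static void
slave_emit(ItemPtr tid)
{
    xfer[nxfer++] = tid;
    ntouched++;                         /* crude stand-in for pages touched */
    if (nxfer >= XFER_LIST_SIZE || ntouched >= TOUCH_LIMIT)
        slave_flush();
}

static void *
slave_main(void *arg)
{
    (void) arg;
    for (unsigned blk = 0; blk < 100000; blk++)
    {
        ItemPtr tid = { blk, 1 };       /* pretend each page has one tuple */
        slave_emit(tid);
    }
    if (nxfer > 0)
        slave_flush();                  /* flush the final partial batch */
    pthread_mutex_lock(&lock);
    done = 1;                           /* tell the master the scan is over */
    pthread_cond_signal(&more);
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Master: take the next batch; returns 0 once the scan is finished. */
static int
master_take_batch(void)
{
    pthread_mutex_lock(&lock);
    while (lead == 0 && !done)
        pthread_cond_wait(&more, &lock);
    if (lead == 0)                      /* nothing left and slave is done */
    {
        pthread_mutex_unlock(&lock);
        return 0;
    }
    lead--;
    pthread_cond_signal(&room);         /* let the slave run ahead again */
    pthread_mutex_unlock(&lock);
    return 1;        /* a real master would now fetch and scan each tid */
}

int
main(void)
{
    pthread_t slave;
    int       batches = 0;

    pthread_create(&slave, NULL, slave_main, NULL);
    while (master_take_batch())
        batches++;
    pthread_join(slave, NULL);
    printf("master consumed %d batches\n", batches);
    return 0;
}

The lead counter is the throttle: once the slave has handed over MAX_LEAD
unconsumed batches it blocks on a condition variable until the master takes
one. A real implementation would also need to copy or double-buffer the
transfer list so the slave can keep filling it while the master drains the
previous batch.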