Re: Popcount optimization using AVX512
From | Ants Aasma
---|---
Subject | Re: Popcount optimization using AVX512
Date |
Msg-id | CANwKhkNs9WnqCZDV3bKgaqZix0_NC1T=wf77=k4jYgG=Qr-vzw@mail.gmail.com
In reply to | Re: Popcount optimization using AVX512 (Nathan Bossart <nathandbossart@gmail.com>)
Responses | Re: Popcount optimization using AVX512
List | pgsql-hackers
On Tue, 2 Apr 2024 at 00:31, Nathan Bossart <nathandbossart@gmail.com> wrote:
> On Tue, Apr 02, 2024 at 12:11:59AM +0300, Ants Aasma wrote:
> > What about using the masking capabilities of AVX-512 to handle the
> > tail in the same code path? Masked out portions of a load instruction
> > will not generate an exception. To allow byte level granularity
> > masking, -mavx512bw is needed. Based on Wikipedia this will only
> > disable this fast path on Knights Mill (Xeon Phi); in all other cases
> > VPOPCNTQ implies availability of BW.
>
> Sounds promising. IMHO we should really be sure that these kinds of loads
> won't generate segfaults and the like due to the masked-out portions. I
> searched around a little bit but haven't found anything that seemed
> definitive.

After sleeping on the problem, I think we can avoid this question altogether, while making the code faster, by using aligned accesses. Loads that straddle cache line boundaries run internally as 2 load operations. Gut feel says that there are enough out-of-order resources available to make it not matter in most cases. But even so, not doing the extra work is surely better.

Attached is another approach that does aligned accesses, and thereby avoids going out of bounds. It would be interesting to see how well that fares in the small-input case. Anything that fits into one aligned cache line should run in constant time, and there is only one branch, but the mask setup and folding the separate popcounts together should add up to about 20-ish cycles of overhead.

Regards,
Ants Aasma
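The aligned, masked approach described above can be sketched as follows. This is a portable scalar model, not the attached patch: the 64-byte chunk here stands in for one AVX-512 masked load (e.g. `_mm512_maskz_loadu_epi8` followed by VPOPCNTQ), and the byte mask is applied with a scalar loop so the alignment and mask-setup logic can be checked on any hardware. All function names are illustrative.

```c
#include <stdint.h>
#include <stddef.h>

#define CHUNK 64 /* AVX-512 register width in bytes */

/* Scalar stand-in for a masked vector load + popcount: count bits only
 * in the bytes of the 64-byte chunk whose mask bit is set. */
static uint64_t
popcount_masked_chunk(const unsigned char *aligned_p, uint64_t byte_mask)
{
    uint64_t cnt = 0;

    for (int i = 0; i < CHUNK; i++)
        if (byte_mask & (UINT64_C(1) << i))
            cnt += __builtin_popcount(aligned_p[i]);
    return cnt;
}

uint64_t
popcount_aligned(const unsigned char *buf, size_t len)
{
    /* Round the pointer down to a 64-byte boundary. The bytes before
     * buf are masked off, so they are never counted; with real masked
     * loads they cannot fault either, because an aligned 64-byte load
     * never crosses a page boundary. */
    uintptr_t   addr = (uintptr_t) buf;
    const unsigned char *p = (const unsigned char *) (addr & ~(uintptr_t) (CHUNK - 1));
    size_t      head = addr - (uintptr_t) p;    /* masked-off leading bytes */
    size_t      total = head + len;             /* span covered starting at p */
    uint64_t    cnt = 0;

    while (total > 0)
    {
        size_t      take = total < CHUNK ? total : CHUNK;

        /* Byte mask: ones for valid bytes, zeros for the masked head
         * (first chunk only) and the masked tail (last chunk only). */
        uint64_t    mask = (take == CHUNK) ? ~UINT64_C(0)
                                           : (UINT64_C(1) << take) - 1;

        mask &= ~UINT64_C(0) << head;
        cnt += popcount_masked_chunk(p, mask);
        p += CHUNK;
        total -= take;
        head = 0;   /* only the first chunk has a masked head */
    }
    return cnt;
}
```

Note that any input fitting inside one aligned 64-byte line takes a single loop iteration, matching the single-branch, constant-time claim above.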
Attachments