Re: Speed up JSON escape processing with SIMD plus other optimisations
From | David Rowley |
---|---|
Subject | Re: Speed up JSON escape processing with SIMD plus other optimisations |
Date | |
Msg-id | CAApHDvqzvKb5UmUNjyZ04sE0ad01SwKBgA5qztfS8nSx525K7g@mail.gmail.com |
In reply to | Speed up JSON escape processing with SIMD plus other optimisations (David Rowley <dgrowleyml@gmail.com>) |
Responses | Re: Speed up JSON escape processing with SIMD plus other optimisations |
List | pgsql-hackers |
On Thu, 23 May 2024 at 13:23, David Rowley <dgrowleyml@gmail.com> wrote:
> Master:
> $ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps
> tps = 362.494309 (without initial connection time)
> tps = 363.182458 (without initial connection time)
> tps = 362.679654 (without initial connection time)
>
> Master + 0001 + 0002
> $ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps
> tps = 426.456885 (without initial connection time)
> tps = 430.573046 (without initial connection time)
> tps = 431.142917 (without initial connection time)
>
> About 18% faster.
>
> It would be much faster if we could also get rid of the
> escape_json_cstring() call in the switch default case of
> datum_to_json_internal(). row_to_json() would be heaps faster with
> that done. I considered adding a special case for the "text" type
> there, but in the end felt that we should just fix that with some
> hypothetical other patch that changes how output functions work.
> Others may feel it's worthwhile. I certainly could be convinced of it.

Just to turn that into performance numbers, I tried the attached patch.
The numbers came out better than I thought.

Same test as before:

master + 0001 + 0002 + attached hacks:
$ pgbench -n -f bench.sql -T 10 -M prepared postgres | grep tps
tps = 616.094394 (without initial connection time)
tps = 615.928236 (without initial connection time)
tps = 614.175494 (without initial connection time)

About 70% faster than master.

David
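[For readers outside the thread: the speedup comes from checking many input bytes at once for characters that JSON requires escaping (quote, backslash, and control characters below 0x20), instead of branching per byte. The attached patches are not reproduced here; below is only an illustrative, portable SWAR (SIMD-within-a-register) sketch of that idea, checking 8 bytes per iteration with plain 64-bit arithmetic. The function names are hypothetical and do not correspond to anything in the patch set.]

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SWAR_ONES  UINT64_C(0x0101010101010101)
#define SWAR_HIGHS UINT64_C(0x8080808080808080)

/* True if any byte of v equals b (classic bit-twiddling "haszero" trick). */
static bool
any_byte_is(uint64_t v, uint8_t b)
{
	uint64_t	x = v ^ (SWAR_ONES * b);

	return ((x - SWAR_ONES) & ~x & SWAR_HIGHS) != 0;
}

/* True if any byte of v is less than n; valid for n <= 128. */
static bool
any_byte_lt(uint64_t v, uint8_t n)
{
	return ((v - SWAR_ONES * n) & ~v & SWAR_HIGHS) != 0;
}

/*
 * Return the index of the first byte that JSON string output must escape
 * ('"', '\\', or a control character below 0x20), or len if there is none.
 * Scans 8 bytes per iteration, falling back to a per-byte loop for the
 * chunk that triggered a match and for the tail.
 */
static size_t
first_char_needing_escape(const char *s, size_t len)
{
	size_t		i = 0;

	for (; i + 8 <= len; i += 8)
	{
		uint64_t	chunk;

		memcpy(&chunk, s + i, 8);	/* avoids alignment issues */
		if (any_byte_is(chunk, '"') ||
			any_byte_is(chunk, '\\') ||
			any_byte_lt(chunk, 0x20))
			break;				/* locate the exact byte below */
	}

	for (; i < len; i++)
	{
		unsigned char c = (unsigned char) s[i];

		if (c == '"' || c == '\\' || c < 0x20)
			return i;
	}
	return len;
}
```

With real SIMD (SSE2/NEON, as PostgreSQL's vector helpers provide), the same comparison covers 16 bytes per iteration, but the control flow is the same: copy escape-free runs wholesale and only fall back to byte-at-a-time processing when a chunk contains something to escape.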
Attachments