Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP)
From | Robert Haas |
---|---|
Subject | Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP) |
Date | |
Msg-id | AANLkTi=BWnKi+5ZxhNN1KHGvNS0ODYC5JikQdEXTYXxV@mail.gmail.com |
In reply to | Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP) (Joseph Adams <joeyadams3.14159@gmail.com>) |
Responses | Re: patch: Add JSON datatype to PostgreSQL (GSoC, WIP) |
List | pgsql-hackers |
On Tue, Oct 19, 2010 at 3:40 PM, Joseph Adams <joeyadams3.14159@gmail.com> wrote:
> On Tue, Oct 19, 2010 at 3:17 PM, Robert Haas <robertmhaas@gmail.com> wrote:
>> I think we should take a few steps back and ask why we think that
>> binary encoding is the way to go.  We store XML as text, for example,
>> and I can't remember any complaints about that on -bugs or
>> -performance, so why do we think JSON will be different?  Binary
>> encoding is a trade-off.  A well-designed binary encoding should make
>> it quicker to extract a small chunk of a large JSON object and return
>> it; however, it will also make it slower to return the whole object
>> (because you're adding serialization overhead).  I haven't seen any
>> analysis of which of those use cases is more important and why.
>
> Speculation: the overhead involved with retrieving/sending and
> receiving/storing JSON (not to mention TOAST
> compression/decompression) will be far greater than that of
> serializing/unserializing.

I speculate that your speculation is incorrect.  AIUI, we, unlike
$COMPETITOR, tend to be CPU-bound rather than IO-bound on COPY.  But
perhaps less speculation and more benchmarking is in order.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
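[Editor's note: the trade-off described above — text storage is cheap to return whole but requires a full parse to extract one field, while a parsed/binary representation is the reverse — can be illustrated with a toy Python micro-benchmark. This is not PostgreSQL code; the document shape and key names are made up, and a Python dict merely stands in for a hypothetical binary JSON representation.]

```python
import json
import timeit

# Hypothetical document: a JSON object with many keys (names are made up).
doc = {"k%d" % i: i for i in range(1000)}
text_form = json.dumps(doc)   # "stored as text"
binary_form = doc             # stand-in for a parsed/binary representation

N = 10000

# Text storage: returning the whole object is just handing back the string,
# but extracting one key requires parsing the entire document first.
t_text_whole = timeit.timeit(lambda: text_form, number=N)
t_text_key = timeit.timeit(lambda: json.loads(text_form)["k500"], number=N)

# Binary storage: extracting one key is direct access, but returning the
# whole object pays the serialization cost on every retrieval.
t_bin_whole = timeit.timeit(lambda: json.dumps(binary_form), number=N)
t_bin_key = timeit.timeit(lambda: binary_form["k500"], number=N)

print(t_text_whole < t_bin_whole)  # text wins for whole-object retrieval
print(t_bin_key < t_text_key)      # binary wins for single-key extraction
```

Which side of this trade-off dominates in practice depends on the workload mix, which is exactly the benchmarking question raised in the message.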