Re: [GENERAL] storing large files in database - performance
| From | Eric Hill |
|---|---|
| Subject | Re: [GENERAL] storing large files in database - performance |
| Date | |
| Msg-id | CY1PR05MB22651DAA17F5067BD25EBC4EF0E40@CY1PR05MB2265.namprd05.prod.outlook.com |
| In reply to | Re: [GENERAL] storing large files in database - performance (Eric Hill <Eric.Hill@jmp.com>) |
| List | pgsql-general |
My apologies: I said I ran "this query" but failed to include the query. It was merely this:

SELECT "indexFile"."_id", "indexFile"."contents"
FROM "mySchema"."indexFiles" AS "indexFile"
WHERE "indexFile"."_id" = '591c609bb56d0849404e4720';

Eric

-----Original Message-----
From: Eric Hill [mailto:Eric.Hill@jmp.com]
Sent: Thursday, May 18, 2017 8:35 AM
To: Merlin Moncure <mmoncure@gmail.com>; Thomas Kellerer <spam_eater@gmx.net>
Cc: PostgreSQL General <pgsql-general@postgresql.org>
Subject: Re: storing large files in database - performance

I would be thrilled to get 76 MB per second, and it is comforting to know that we have that as a rough upper bound on performance. I've got work to do to figure out how to approach that upper bound from Node.js.

In the meantime, I've been looking at performance on the read side. For that, I can bypass all my Node.js layers and just run a query from pgAdmin 4. I ran this query, where indexFile.contents for the row in question is 25 MB in size. The query itself took 4 seconds in pgAdmin 4. Better than the 12 seconds I'm getting in Node.js, but still on the order of 6 MB per second, not 76. Do you suppose pgAdmin 4 and I are doing similarly inefficient things in querying bytea values?

Thanks,

Eric
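For reference, the throughput figures quoted in this thread follow from simple arithmetic on the numbers reported above (the 25 MB payload and the 4 s / 12 s timings); a quick sanity check:

```javascript
// Sanity-check the MB/s figures quoted above from payload size and elapsed time.
function throughputMBps(sizeMB, seconds) {
  return sizeMB / seconds;
}

const payloadMB = 25; // size of indexFile.contents for the row in question

console.log(throughputMBps(payloadMB, 4).toFixed(2));  // pgAdmin 4: 6.25 MB/s
console.log(throughputMBps(payloadMB, 12).toFixed(2)); // Node.js:   2.08 MB/s
// Both are far below the ~76 MB/s rough upper bound mentioned earlier,
// which suggests per-row bytea handling overhead rather than raw I/O limits.
```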