Discussion:
In libpq, the lo_import function calls lo_write once for each 8KB chunk of the file. This greatly slows down lo_import, because each lo_write sends a request to the server and waits for its response. The 8KB chunk size is set by the define LO_BUFSIZE, which was last changed, from 1KB to 8KB, 24 years ago.
Why not increase the buffer size?
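For context, here is a minimal sketch of the loop being described (this is a simplified illustration, not the actual fe-lobj.c source; error handling is omitted). Each read() fills an 8KB buffer that is then sent with its own synchronous lo_write() call, so one client/server round trip happens per LO_BUFSIZE bytes:

    #include <fcntl.h>
    #include <unistd.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ, INV_WRITE */

    #define LO_BUFSIZE 8192          /* the define discussed above */

    static Oid
    import_file(PGconn *conn, const char *path)
    {
        char    buf[LO_BUFSIZE];
        ssize_t nbytes;
        int     fd = open(path, O_RDONLY);
        Oid     loid = lo_creat(conn, INV_READ | INV_WRITE);
        int     lofd = lo_open(conn, loid, INV_WRITE);

        /* each iteration blocks on a full request/response round trip */
        while ((nbytes = read(fd, buf, LO_BUFSIZE)) > 0)
            lo_write(conn, lofd, buf, (size_t) nbytes);

        lo_close(conn, lofd);
        close(fd);
        return loid;
    }

With this structure the number of round trips scales inversely with LO_BUFSIZE, which is why a larger buffer would help, especially on high-latency links.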
On 1/20/25 10:10, Lana ABADIE wrote:
> Hi all
> I bumped into a weird case that I don't really understand... maybe
> someone on this list has a clue.
> We have 2 Postgres databases configured as a master/replica pair
> (PostgreSQL 12, RHEL 8).
> We have applications which write data into the master and applications
> which read data from the replica.
> A group of applications reads data using libpq: it declares a SELECT
> statement as a cursor and then fetches, retrieving at most 25k rows.

Add the complete SELECT and CURSOR code.

> The SELECT statement contains a BETWEEN clause with T1 and T2. T1 is
> injected via an input parameter, but T2 = floor(extract(epoch from
> coalesce(pg_last_xact_replay_timestamp(), now()))) - 120 is passed
> directly like that in the query.

What does T1 represent and how is it derived?

> In other words we have something like:
> select * from ZZ where ... and timestamp between $T1 and
> floor(extract(epoch from coalesce(pg_last_xact_replay_timestamp(), now()))) - 120;
> When this query gets executed, from time to time it returns a truncated
> number of rows... fewer than if I were doing BETWEEN T1 AND T1.

I don't understand the above; add a more complete definition. Example data would be nice.

> T2 is an integer, so either T2 < T1, in which case I would get zero
> rows, or T2 >= T1, in which case I would expect at least as many rows
> as BETWEEN T1 AND T1 returns.
> Note that we are talking about a total number of rows of less than 2000.
> When I fixed T2, in other words ran the query using BETWEEN $T1 AND $T2
> (where T2 = floor(extract(epoch from
> coalesce(pg_last_xact_replay_timestamp(), now()))) - 120 was computed
> beforehand), there were no issues; the rows were retrieved correctly.
> I also confirmed via metrics collection that the data was there when
> the query was performed.
> I would appreciate any explanation of this behavior, and I hope I'm
> being clear.
> Thanks
> Doris

--
Adrian Klaver
adrian.klaver@aklaver.com
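For readers following the report, a minimal sketch of the DECLARE/FETCH pattern it describes, as it might look through libpq. The table ZZ, column ts, and cursor name are hypothetical stand-ins; the actual SELECT and CURSOR code were not posted, which is what the reply asks for. Error handling is omitted:

    #include <stdio.h>
    #include <libpq-fe.h>

    static void
    fetch_recent(PGconn *conn, long t1)
    {
        char      declare[512];
        PGresult *res;

        /* T2 is evaluated on the replica when the cursor's query runs */
        snprintf(declare, sizeof declare,
                 "DECLARE c CURSOR FOR SELECT * FROM ZZ "
                 "WHERE ts BETWEEN %ld AND floor(extract(epoch FROM "
                 "coalesce(pg_last_xact_replay_timestamp(), now()))) - 120",
                 t1);

        PQclear(PQexec(conn, "BEGIN"));     /* cursors require a transaction */
        PQclear(PQexec(conn, declare));

        /* at most 25k rows per fetch, as in the report */
        res = PQexec(conn, "FETCH 25000 FROM c");
        printf("fetched %d rows\n", PQntuples(res));
        PQclear(res);

        PQclear(PQexec(conn, "CLOSE c"));
        PQclear(PQexec(conn, "COMMIT"));
    }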