Thanks for the reply.
I'm not good at English, so I'm writing this with the help of machine
translation; some sentences may be hard to follow.
The following test code performs asynchronous operations using libpq along
with either epoll or kqueue. I wasn't sure whether it's appropriate to
include that much code in an email, so I've uploaded it to a gist.
I can easily reproduce the issue on my macOS system, which uses kqueue, but
it takes many runs to reproduce on my Linux system using epoll.
In edge-triggered mode, `consume_input()` may fail depending on the amount
of data received. This happens when `PQconsumeInput()` does not read all of
the data pending on the socket (the socket's receive queue holds more data
than the read buffer), and the subsequent call to `PQisBusy()` returns `1`.
The loop then waits for another read event on the socket, which never
arrives, and fails with a timeout.
In level-triggered mode there is no problem, because events continue to be
generated as long as data remains in the socket.
The core question is whether the main loop should wait for more data to
arrive or call `PQconsumeInput()` again, and making that decision requires
checking errno on the application side.
Is there any other way to resolve this issue?