Re: INSERTing lots of data
From:        Craig Ringer
Subject:     Re: INSERTing lots of data
Date:
Msg-id:      4BFFBD4B.3070708@postnewspapers.com.au
In reply to: INSERTing lots of data (Joachim Worringen <joachim.worringen@iathh.de>)
Responses:   Re: INSERTing lots of data
List:        pgsql-general
On 28/05/10 17:41, Joachim Worringen wrote:
> Greetings,
>
> my Python application (http://perfbase.tigris.org) repeatedly needs to
> insert lots of data into an existing, non-empty, potentially large table.
> Currently, the bottleneck is with the Python application, so I intend to
> multi-thread it.

That may not be a great idea. For why, search for "Global Interpreter
Lock" (GIL). It might help if Python is mostly blocked on network I/O,
since the GIL is released while Python blocks on the network, but even
then your results may not be great.

> will I get a speedup? Or will table-locking serialize things on the
> server side?

Concurrent inserts work *great* with PostgreSQL; it's Python I'd be
worried about.

-- 
Craig Ringer
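For illustration only (this sketch is not from the original post): one way
around the GIL is multiprocessing rather than threading, since each worker
is a separate interpreter with its own PostgreSQL connection, so nothing on
the client side serializes the inserts. The DSN and the "results" table
below are hypothetical, and psycopg2 is assumed as the driver.

import multiprocessing

import psycopg2  # assumed driver; any DB-API adapter works similarly

DSN = "dbname=perfbase"  # hypothetical connection string
NUM_WORKERS = 4

def insert_chunk(rows):
    # Each worker process opens a private connection. Separate processes
    # mean separate interpreters, so the GIL never serializes the client,
    # and PostgreSQL handles the concurrent INSERTs server-side.
    conn = psycopg2.connect(DSN)
    try:
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO results (run_id, value) VALUES (%s, %s)",
                rows,
            )
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    # Dummy data, split into one chunk per worker; replace with real rows.
    data = [(i % 10, float(i)) for i in range(100000)]
    size = len(data) // NUM_WORKERS
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with multiprocessing.Pool(NUM_WORKERS) as pool:
        pool.map(insert_chunk, chunks)

The same structure works with threads instead of processes (the GIL is
dropped while each thread waits on the network), but processes avoid the
question entirely.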