Re: INSERTing lots of data
From: Joachim Worringen
Subject: Re: INSERTing lots of data
Date:
Msg-id: 4BFF945A.70302@iathh.de
In reply to: Re: INSERTing lots of data (Szymon Guz <mabewlun@gmail.com>)
List: pgsql-general
On 05/28/2010 11:48 AM, Szymon Guz wrote:
> Remember about Python's GIL in some Python implementations so those
> threads could be serialized at the Python level.

My multi-threaded queries scale nicely with Python 2.6 on Linux, so this is
not an issue here. But the queries do not perform concurrent write accesses
on the same table.

> This is possible that those inserts will be faster. The speed depends on
> the table structure, some constraints and triggers and even database
> configuration. The best answer is: just check it on some test code, make
> a simple multithreaded aplication and try to do the inserts and check
> that out.

Sure, testing always shows something, but I wonder whether anything general
can be said about the execution of concurrent write transactions on the same
table (no triggers, some NOT NULL constraints, one index).

http://www.postgresql.org/docs/8.4/interactive/mvcc-intro.html says about MVCC:

"The main advantage of using the MVCC model of concurrency control rather
than locking is that in MVCC locks acquired for querying (reading) data do
not conflict with locks acquired for writing data, and so reading never
blocks writing and writing never blocks reading."

It does not mention whether writing may block writing, or whether it always
does.

http://bytes.com/topic/python/answers/728130-parallel-insert-postgresql-thread
indicates it should not block - can this be confirmed by some PostgreSQL guru?

 thanks, Joachim
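For reference, a minimal sketch of the kind of test suggested above: each
thread inserts into the same table through its own connection. It assumes
psycopg2 and a hypothetical table created with
CREATE TABLE insert_test (id serial PRIMARY KEY, payload text); the
connection string and row counts are placeholders, not the original setup.

import threading
import time

import psycopg2

DSN = "dbname=test user=postgres"   # placeholder connection string
N_THREADS = 4
ROWS_PER_THREAD = 10000

def insert_rows(thread_id):
    # One connection per thread: a psycopg2 connection should not be used
    # concurrently from several threads.
    conn = psycopg2.connect(DSN)
    cur = conn.cursor()
    for i in range(ROWS_PER_THREAD):
        cur.execute("INSERT INTO insert_test (payload) VALUES (%s)",
                    ("thread %d row %d" % (thread_id, i),))
    conn.commit()   # one transaction per thread; committing per row would be slower
    cur.close()
    conn.close()

start = time.time()
workers = [threading.Thread(target=insert_rows, args=(n,)) for n in range(N_THREADS)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print("%d rows inserted in %.2f s" % (N_THREADS * ROWS_PER_THREAD, time.time() - start))

Comparing the elapsed time against a single-threaded run inserting the same
total number of rows should show whether the inserts actually overlap, or
whether the GIL or contention on the table serializes them.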