Re: [INTERFACES] Performance
From: David Warnock
Subject: Re: [INTERFACES] Performance
Date:
Msg-id: 3771F636.54AA639B@sundayta.co.uk
In reply to: Performance (Steven Bradley <sbradley@llnl.gov>)
List: pgsql-interfaces
Steven,

I have not experimented much with this in PostgreSQL, but in every other DBMS I have used, the write speed is not consistent: after you run for a while there will be a pause, I guess while caches are flushed. To be certain of capturing all your data, I think you need at least two threads: one to capture the data and put it into a queue of some sort, and another to take it off the queue and insert it into PostgreSQL. Provided the average insert speed of PostgreSQL is higher than the capture rate, you are OK. (A sketch of this arrangement follows below.)

If the DBMS is being accessed by other users, I wonder if you might have even more problems. If so, maybe you should have two processes: one captures to text files and starts a new text file every x rows; the other processes one file at a time, each as a single transaction. You could use a table in PostgreSQL to keep track of the files and whether they have been read in. Obviously this is not as transactionally safe, but it should handle load fluctuations much better. (A sketch of the loader side follows the first sketch below.)

You could of course then scale up by having multiple machines, one per process, and also by running more than one instance of the insert process at a time (on different machines). With the locking scheme of v6.5 this should give higher throughput than a single insert process. You would have to experiment to find the number of processes that gives maximum performance.
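Here is a minimal sketch of the two-thread approach, assuming a modern Python driver (psycopg2, which postdates this message). The table samples(t, value), the connection string, and the demo row source are hypothetical stand-ins for the real acquisition code.

    # Producer-consumer: capture thread feeds a queue, writer thread
    # drains it into PostgreSQL at whatever pace the database allows.
    import queue
    import threading

    import psycopg2

    q = queue.Queue(maxsize=10000)   # bounded buffer between capture and insert
    STOP = object()                  # sentinel telling the writer to shut down

    def capture(rows):
        # Producer: push captured rows into the queue as fast as they arrive.
        for row in rows:
            q.put(row)               # blocks only if the writer falls far behind
        q.put(STOP)

    def writer(dsn):
        # Consumer: drain the queue and insert at the database's own pace.
        conn = psycopg2.connect(dsn)
        cur = conn.cursor()
        while True:
            row = q.get()
            if row is STOP:
                break
            cur.execute("INSERT INTO samples (t, value) VALUES (%s, %s)", row)
            conn.commit()            # one transaction per row; batching is faster
        conn.close()

    t = threading.Thread(target=writer, args=("dbname=capture",))
    t.start()
    capture((i, i * 0.5) for i in range(1000))   # stand-in for the real source
    t.join()

The bounded queue is what absorbs the pauses: when PostgreSQL stalls to flush, rows pile up in memory instead of being dropped, as long as the average insert rate stays ahead of the capture rate.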
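And a sketch of the loader half of the two-process variant, again assuming psycopg2. The capture process (not shown) would append rows to capture-*.csv files, starting a new file every x rows. All table and file names here are hypothetical. The primary key on the tracking table means a file loaded twice by racing loader instances fails the second time and nothing from it commits, which is what lets you run several loaders at once.

    # Loader: insert each pending file in a single transaction and record
    # it in a tracking table so no file is read in twice.
    import csv
    import glob

    import psycopg2

    TRACKING_DDL = """
    CREATE TABLE IF NOT EXISTS loaded_files (
        filename  text PRIMARY KEY,
        loaded_at timestamptz DEFAULT now()
    )"""

    def load_pending(dsn, pattern="capture-*.csv"):
        conn = psycopg2.connect(dsn)
        cur = conn.cursor()
        cur.execute(TRACKING_DDL)
        conn.commit()
        for path in sorted(glob.glob(pattern)):
            cur.execute("SELECT 1 FROM loaded_files WHERE filename = %s",
                        (path,))
            if cur.fetchone():
                continue             # already read in, skip it
            # Rows and the tracking entry commit together: a crash mid-file
            # leaves nothing behind, and the file is retried on the next pass.
            with open(path, newline="") as f:
                for row in csv.reader(f):
                    cur.execute(
                        "INSERT INTO samples (t, value) VALUES (%s, %s)", row)
            cur.execute("INSERT INTO loaded_files (filename) VALUES (%s)",
                        (path,))
            conn.commit()
        conn.close()

    load_pending("dbname=capture")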
Dave

--
David Warnock
Sundayta Ltd