Re: UPDATE on two large datasets is very slow
| From | Tommy Gildseth |
|---|---|
| Subject | Re: UPDATE on two large datasets is very slow |
| Date | |
| Msg-id | 4613711E.6060904@gildseth.com |
| In reply to | Re: UPDATE on two large datasets is very slow (Martijn van Oosterhout <kleptog@svana.org>) |
| Responses | Re: UPDATE on two large datasets is very slow |
| List | pgsql-general |
Martijn van Oosterhout wrote:
> On Mon, Apr 02, 2007 at 08:24:46PM -0700, Steve Gerhardt wrote:
>> I've been working for the past few weeks on porting a closed source
>> BitTorrent tracker to use PostgreSQL instead of MySQL for storing
>> statistical data, but I've run into a rather large snag. The tracker in
>> question buffers its updates to the database, then makes them all at
>> once, sending anywhere from 1-3 MiB of query data. With MySQL, this is
>> accomplished using the INSERT INTO...ON DUPLICATE KEY UPDATE query,
>> which seems to handle the insert/update very quickly; generally it only
>> takes about a second for the entire set of new data to be merged.
>
> For the record, this is what the SQL MERGE command is for... I don't
> think anyone is working on implementing that though...

This will possibly provide a solution to this question:
http://www.postgresql.org/docs/current/static/plpgsql-control-structures.html#PLPGSQL-UPSERT-EXAMPLE

-- Tommy
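The pattern at that link is a PL/pgSQL retry loop: try the UPDATE first, and fall back to INSERT if no row matched. A minimal sketch along those lines (the table `db` and function `merge_db` mirror the documentation's example and are not part of this thread's schema):

```sql
-- Assumed example schema, following the docs' upsert example:
CREATE TABLE db (a INT PRIMARY KEY, b TEXT);

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- First, try to update an existing row for this key.
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- No row to update, so try to insert one. If another
        -- session inserts the same key first, we catch the
        -- unique_violation and loop back to the UPDATE.
        BEGIN
            INSERT INTO db (a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- Do nothing; retry the UPDATE on the next iteration.
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;

-- Usage:
SELECT merge_db(1, 'david');   -- inserts a new row
SELECT merge_db(1, 'dennis');  -- updates the existing row
```

Each call handles a single key, so a buffered batch like the tracker's would invoke it once per row, ideally inside one transaction; the loop exists only to survive a concurrent insert of the same key between the UPDATE and the INSERT.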