Re: Need an idea to operate massive delete operation on big size table.
From | Ron Johnson
Subject | Re: Need an idea to operate massive delete operation on big size table.
Date |
Msg-id | CANzqJaCkSt3MkaeVT4wXxLROx3RY4dsArvGTYfn-VX6JxVTBfw@mail.gmail.com
In reply to | Re: Need an idea to operate massive delete operation on big size table. (youness bellasri <younessbellasri@gmail.com>)
Responses | Re: Need an idea to operate massive delete operation on big size table.
| Re: Need an idea to operate massive delete operation on big size table.
List | pgsql-admin
Sadly, PostgreSQL does not have (super-handy) "DISABLE" clauses.
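The closest you can get is to drop the indexes/constraints before the bulk delete and recreate them afterward. A rough sketch (untested; the table, index, and constraint names are all made up):

-- PostgreSQL has no DISABLE, so "disabling" really means drop-and-recreate.
DROP INDEX IF EXISTS idx_big_table_customer_id;
ALTER TABLE big_table DROP CONSTRAINT IF EXISTS big_table_customer_fk;

-- ... run the bulk delete here ...

ALTER TABLE big_table
    ADD CONSTRAINT big_table_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id);
CREATE INDEX idx_big_table_customer_id ON big_table (customer_id);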
On Wed, Jan 15, 2025 at 10:12 AM youness bellasri <younessbellasri@gmail.com> wrote:
1. Batch Deletion
Instead of deleting all records at once, break the operation into smaller batches. This reduces locking, transaction log growth, and the risk of timeouts.
2. Use Indexes
Ensure that the columns used in the WHERE clause of the delete queries are indexed. This speeds up the identification of rows to delete. (A sketch combining tips 1 and 2 follows this list.)

3. Disable Indexes and Constraints Temporarily
If the table has many indexes or constraints, disabling them during the delete operation can speed up the process. Re-enable them afterward.
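Putting tips 1 and 2 together, a sketch of a batched delete that leans on a supporting index (untested; the table name, column, predicate, and batch size are all made up):

-- An index on the filter column keeps each batch from seq-scanning the table.
-- CONCURRENTLY avoids blocking writes while the index builds.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_big_table_status
    ON big_table (status);

-- Delete 10000 rows per transaction. COMMIT inside DO needs PostgreSQL 11+
-- and the block must not be run inside an outer transaction.
DO $$
DECLARE
    n bigint;
BEGIN
    LOOP
        DELETE FROM big_table
        WHERE ctid IN (SELECT ctid
                       FROM big_table
                       WHERE status = 'expired'   -- made-up predicate
                       LIMIT 10000);
        GET DIAGNOSTICS n = ROW_COUNT;
        EXIT WHEN n = 0;
        COMMIT;   -- release locks and let autovacuum keep up between batches
    END LOOP;
END $$;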
On Wed, Jan 15, 2025 at 16:08, Ron Johnson <ronljohnsonjr@gmail.com> wrote:

On Wed, Jan 15, 2025 at 9:54 AM Gambhir Singh <gambhir.singh05@gmail.com> wrote:

Hi,

I received a request from a client to delete duplicate records from a table which is very large in size. Delete queries (~2 billion) are provided via a file, and we have to execute that file in the DB. Last time it took two days. I feel there must be a more efficient way to delete the records.

Maybe the delete "queries" are poorly written. Maybe there's no supporting index.
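Also: for duplicates specifically, one set-based DELETE usually beats two billion single-row statements. A sketch, assuming "duplicate" means equal values in some key columns (the column names are made up):

-- Keeps the row with the lowest ctid in each duplicate group.
DELETE FROM big_table a
USING big_table b
WHERE a.ctid > b.ctid
  AND a.key_col1 = b.key_col1
  AND a.key_col2 = b.key_col2;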
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> lobster!