Re: Read write performance check
From:           Rob Sargent
Subject:        Re: Read write performance check
Date:
Msg-id:         926bcc4e-587d-4e83-bab1-273115654442@gmail.com
In response to: Re: Read write performance check (veem v <veema0000@gmail.com>)
List:           pgsql-general
On 12/19/23 12:14, veem v wrote:
> Thank you for the confirmation.
>
> So at first we need to populate the base tables with the necessary
> data (say 100 million rows), with the required skewness, using random
> functions to generate variation in the values of the different data
> types. Then for a row-by-row write/read test we can traverse in a
> cursor loop, and for a batch write/insert we need to traverse in a
> bulk-collect loop. Something like below; this code can then be wrapped
> into a procedure, passed to pgbench, and executed from there. Please
> correct me if I'm wrong.
>
> Also, can you please advise how a batch insert (say with a batch size
> of ~1000) can be written?
>
> -- Row by row write
> FOR i IN 1..total_rows LOOP
>     data_row := (SELECT
>         md5(random()::TEXT),
>         floor(random() * 100)::INT,
>         random() * 1000::NUMERIC,
>         NOW()::TIMESTAMP
>     );
>     INSERT INTO BASE_TABLE(column1, column2, column3, column4)
>     VALUES (data_row.column1, data_row.column2, data_row.column3,
>             data_row.column4);
> END LOOP;
>
> -- Row by row read
> BEGIN
>     FOR i IN 1..total_rows LOOP
>         SELECT * INTO data_row FROM BASE_TABLE WHERE limit 1;
>     END LOOP;
> END;

This row-by-row read is guaranteed to be slow if there's no index on the
100M rows.

> -- Batch read
> BEGIN
>     OPEN data_set FOR SELECT * FROM BASE_TABLE LIMIT total_rows;
>     CLOSE data_set;
> END;

Does this batch read pull in the entire 100M-row table? And there is some
suspicious syntax in there (the bare "WHERE limit 1" above, for one).

PS: Note that top posting is frowned upon on this list.
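For the batch insert you asked about, something along these lines should
work — an untested sketch, assuming the BASE_TABLE column layout from your
quoted code, where each INSERT ... SELECT writes one batch of 1000
generated rows:

    DO $$
    DECLARE
        batch_size int    := 1000;
        total_rows bigint := 100000000;
    BEGIN
        FOR b IN 1..(total_rows / batch_size) LOOP
            -- One multi-row INSERT per iteration: a single statement
            -- writes a whole batch, instead of one INSERT per row.
            INSERT INTO BASE_TABLE (column1, column2, column3, column4)
            SELECT md5(random()::text),
                   floor(random() * 100)::int,
                   (random() * 1000)::numeric,  -- cast the product, not the literal
                   now()::timestamp
            FROM generate_series(1, batch_size);
        END LOOP;
    END $$;

If you wrap this in a procedure and CALL it (PostgreSQL 11+), you can
COMMIT after each batch to keep the transactions small.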
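And for an actual batch read, there have to be FETCH calls between OPEN
and CLOSE — a bare OPEN/CLOSE as quoted above fetches nothing at all.
With an SQL-level cursor the batch size is explicit (again an untested
sketch against the same table):

    BEGIN;
    DECLARE data_set CURSOR FOR SELECT * FROM BASE_TABLE;
    FETCH 1000 FROM data_set;   -- first batch of 1000 rows
    FETCH 1000 FROM data_set;   -- next batch, and so on
    CLOSE data_set;
    COMMIT;

Note that without an ORDER BY or a WHERE clause this still walks the whole
100M-row table if you fetch to the end; it just does so in bounded chunks.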