Re: speed up full table scan using psql
From | Adrian Klaver
Subject | Re: speed up full table scan using psql
Date |
Msg-id | 79615d08-0bbb-066b-6092-7f59703969bb@aklaver.com
In reply to | Re: speed up full table scan using psql (Lian Jiang <jiangok2006@gmail.com>)
List | pgsql-general
On 5/31/23 13:57, Lian Jiang wrote:
> The command is: psql $db_url -c "copy (select row_to_json(x_tmp_uniq)
> from public.mytable x_tmp_uniq) to stdout"
> postgres version: 14.7
> Does this mean COPY and java CopyManager may not help since my psql
> command already uses copy?
>
> Regarding pg_dump, it does not support json format which means extra
> work is needed to convert the supported format to jsonl (or parquet) so
> that they can be imported into snowflake. Still exploring but want to
> call it out early. Maybe 'custom' format can be parquet?

Oops I read this:

'...Using spark to read the postgres table...'

and missed that you are trying to load into Snowflake.

It seems Snowflake supports CSV as well:

https://docs.snowflake.com/en/user-guide/data-load-prepare

So the previous advice should still hold.

>
> Thanks
> Lian

-- 
Adrian Klaver
adrian.klaver@aklaver.com
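[As an aside on the "extra work is needed to convert the supported format to jsonl" point above: if the table is exported as CSV (which the Snowflake loader accepts directly, per the link), the CSV-to-JSONL conversion itself is small. The sketch below is a hypothetical illustration, not something from the thread; it assumes a CSV export with a header row, and all values come through as strings unless you add per-column type handling.]

```python
import csv
import io
import json

def csv_to_jsonl(csv_text: str) -> str:
    """Convert CSV text (first row = header) to JSON Lines.

    Each subsequent row becomes one JSON object keyed by the header
    names. Values are kept as strings; numeric/boolean coercion would
    need explicit per-column rules.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    return "\n".join(json.dumps(row) for row in reader)

# Tiny example with made-up data:
sample = "id,name\n1,alice\n2,bob\n"
print(csv_to_jsonl(sample))
```

In practice you would pipe the output of a `COPY ... TO STDOUT WITH (FORMAT csv, HEADER)` into a stream-based variant of this, rather than loading the whole export into memory.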