database with 1000000 rows is very slow
From | David Celjuska |
---|---|
Subject | database with 1000000 rows is very slow |
Date | |
Msg-id | 38C2C1E1.A41501BD@dcsoft.sk |
Responses | Re: [SQL] database with 1000000 rows is very slow |
List | pgsql-sql |
Hello all!

I have a database with the following structure:

    CREATE TABLE "article" (
        "id"        character varying(15) NOT NULL,
        "obj_kod"   character varying(15),
        "popis"     character varying(80),
        "net_price" float4,
        "our_price" float4,
        "quantity"  int2,
        "group1"    character varying(40) DEFAULT 'ine',
        "group2"    character varying(40),
        "pic1"      character varying(10) DEFAULT 'noname.jpg',
        "pic2"      character varying(10) DEFAULT 'noname.jpg',
        "alt1"      character varying(15),
        "alt2"      character varying(15),
        "zisk"      int2
    );
    CREATE UNIQUE INDEX "article_pkey" ON "article" USING btree ("id" "varchar_ops");

The table holds 1000000 rows. The Postgres daemon runs on a dual Pentium II 330 MHz machine with a SCSI disk, where the database is stored. Even so,

    SELECT * FROM article WHERE id LIKE 'something%';

is very slow (several minutes), and a query such as

    SELECT * FROM article WHERE id = 'something';

is very slow too. I don't know where the problem is, and I would like to optimise this, but how can I do it? When I use a hash index instead of btree, a query like WHERE id = 'something' is fast, but WHERE id LIKE 'something%' is still very slow.

Can I index some columns externally? For example: psql index database table col. Or does PostgreSQL create indexes automatically? How can I see whether Postgres does or does not use an index for a given query? Is that possible?

Thank you for every reply,
Davy!
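A minimal sketch of how the last question is usually answered: PostgreSQL's EXPLAIN command prints the plan the planner chose, so you can see whether it uses an index scan or a sequential scan. The table and column names below follow the schema in the message; the extra index name is hypothetical.

```sql
-- Show the plan for the equality query; an "Index Scan" node means
-- the article_pkey index is used, "Seq Scan" means it is not.
EXPLAIN SELECT * FROM article WHERE id = 'something';

-- A prefix LIKE (no leading wildcard) can use a btree index, though
-- in later releases this depends on the locale or on creating the
-- index with a pattern-matching operator class.
EXPLAIN SELECT * FROM article WHERE id LIKE 'something%';

-- Indexes are not created automatically (apart from those backing
-- constraints); they are created explicitly, e.g. on obj_kod:
CREATE INDEX article_obj_kod_idx ON article (obj_kod);
```

Running EXPLAIN before and after creating an index is the usual way to confirm that the planner actually picks it up for a given query.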