Re: add_path optimization
From: Robert Haas
Subject: Re: add_path optimization
Date:
Msg-id: 603c8f070902021945i183c100ch98acfbb5fe8bbe0c@mail.gmail.com
In response to: Re: add_path optimization (Stephen Frost <sfrost@snowman.net>)
Responses: Re: add_path optimization; Re: add_path optimization
List: pgsql-hackers
> A good data set, plus complex queries against it, might be the data from
> the US Census, specifically the TIGER data and the TIGER geocoder. I've
> been following this thread with the intention of putting together a
> large-data test set, but I just haven't found the time to yet. Right now
> there are a lot of dependencies on PostGIS (which aren't really required
> just to do the queries to pull out the street segment), which I figure
> people would want ripped out. It'd also be nice to include the other
> Census data besides just the road data.
>
> If people really are interested, I'll see what I can put together. It's
> *a lot* of data (around 23G total in PG), though perhaps just doing one
> state would be enough for a good test; I keep the states split up anyway
> using CHECK constraints. I don't think that would change this case,
> though there might be cases where it does affect things.

I'm interested, but I need maybe a 1GB data set, or smaller. The thing we
are benchmarking is the planner, and planning time is related to the
complexity of the database and the accompanying queries, not the raw
volume of data. (It's not size that matters, it's how you use it?) In
fact, in a large database, one could argue that there is less reason to
care about the planner, because execution time will dominate anyway.

I'm interested in complex queries in web/OLTP-type applications, where you
need the query to be planned and executed in 400 ms at the outside (and
preferably in less than half of that).

...Robert
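
[Editor's sketch, not part of the original message.] A minimal illustration of the per-state split Stephen describes, assuming hypothetical table and column names (tiger_edges, state_fips): each state lives in its own child table carrying a CHECK constraint, so constraint exclusion lets the planner prune the states a query can't touch.

    -- Sketch only: table and column names are hypothetical, not from the thread.
    CREATE TABLE tiger_edges (
        state_fips char(2) NOT NULL,   -- state FIPS code
        tlid       bigint  NOT NULL,   -- TIGER line identifier
        fullname   text
    );

    -- One child table per state, distinguished by a CHECK constraint.
    CREATE TABLE tiger_edges_ma (CHECK (state_fips = '25')) INHERITS (tiger_edges);
    CREATE TABLE tiger_edges_ri (CHECK (state_fips = '44')) INHERITS (tiger_edges);

    SET constraint_exclusion = on;

    -- With constraint exclusion on, only tiger_edges and tiger_edges_ma
    -- should appear in the plan for a Massachusetts-only query.
    EXPLAIN SELECT * FROM tiger_edges WHERE state_fips = '25';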
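
[Editor's sketch, not part of the original message.] To isolate planner cost from execution cost, in the spirit of Robert's point that the planner is what's being benchmarked, one simple approach is to time a plain EXPLAIN in psql: EXPLAIN plans the query but never runs it, so the elapsed time is dominated by parse and planner work. The query and the tiger_addr table below are hypothetical.

    -- \timing reports elapsed time for each statement in psql.
    \timing

    -- Plain EXPLAIN (no ANALYZE) plans the query without executing it,
    -- so this roughly measures planning time for a hypothetical join.
    EXPLAIN
    SELECT e.fullname
    FROM   tiger_edges e
    JOIN   tiger_addr  a ON a.tlid = e.tlid
    WHERE  a.zip = '02110';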