"
I need gather data into one table for consistency and easy for export andimport, it's ok if I split data to smaller tables, but whenexport/import/update, i must excute query on alot of table. And this waylead data to inconsistency if I forget update/export/import on 1 or moretable. It is terrible."
That statement is wrong on many levels:
- "Easy for import and export..."
  Multiple tables and one table are identical for import/export purposes.
- "export/import/update, i must excute query on alot of table."
  That's what SQL is for...
- "lead data to inconsistency if I forget update/export/import on 1 or more table. It is terrible."
  You build your process once and test it... Additional runs of the process are 'free'...
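To sketch the "that's what SQL is for" point: a single transaction that touches several normalized tables either fully commits or fully rolls back, so "forgetting" one table is a bug you fix once in the process, not a recurring consistency risk. A minimal illustration using Python's built-in sqlite3 (the table and column names here are invented for the example, not taken from the original poster's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # enforce cross-table consistency
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, balance INTEGER);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                            customer_id INTEGER REFERENCES customers(id),
                            amount INTEGER);
""")
conn.execute("INSERT INTO customers VALUES (1, 'Alice', 100)")

# One transaction updates both tables; it succeeds or fails as a unit.
try:
    with conn:  # sqlite3 wraps this block in a transaction
        conn.execute("INSERT INTO orders VALUES (1, 1, 30)")
        conn.execute("UPDATE customers SET balance = balance - 30 WHERE id = 1")
except sqlite3.Error:
    pass  # on any failure, neither statement takes effect

balance = conn.execute(
    "SELECT balance FROM customers WHERE id = 1").fetchone()[0]
print(balance)  # 70
```

If either statement inside the `with` block raised, the whole transaction would roll back and the two tables would still agree with each other.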
And as someone else mentioned, the 34 indexes are effectively 34 additional tables anyway.
There is probably a way to optimize your current system... there often is, no matter how horrible the implementation...
But I would start by normalizing the schema as much as possible and then running performance tests against the normalized version. There are lots of tools to do that... but they probably aren't much help with your current schema.
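As a hedged sketch of what normalizing buys for export/import (all names here are hypothetical, again using sqlite3): repeated values move into their own table, and the original wide shape is still recoverable with a single JOIN, so export is still one query, not one query per table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Normalized shape: the customer's name and city are stored once,
# instead of being repeated on every order row.
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE orders    (id INTEGER PRIMARY KEY,
                            customer_id INTEGER REFERENCES customers(id),
                            item TEXT);
    INSERT INTO customers VALUES (1, 'Alice', 'Boston');
    INSERT INTO orders VALUES (10, 1, 'widget'), (11, 1, 'gadget');
""")

# "Export" of the denormalized wide view is one JOIN, not N queries:
rows = conn.execute("""
    SELECT c.name, c.city, o.item
    FROM customers c JOIN orders o ON o.customer_id = c.id
    ORDER BY o.id
""").fetchall()
print(rows)  # [('Alice', 'Boston', 'widget'), ('Alice', 'Boston', 'gadget')]
```

The same JOIN could be wrapped in a VIEW, so downstream export tooling never even sees that the data lives in more than one table.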
Gary