For one of our projects we are using PostgreSQL and, for performance reasons, run complex SQL calculations directly against two tables.
Each table, let's call them TableA and TableB, has 1301 fields of type double precision (a field-count restriction imposed by PostgreSQL!). The field names for TableA are A1 to A1300 and for TableB B1 to B1300.
We are using many dynamically changing formulas which can directly access the two tables with a simple
id join to do calculations over the numeric data in the records.
An example formula would be:
A99 * ((A1 * B3) / (B5 * B88))
So the program can simply take the formula and directly use it in its SQL like:
select A99 * ((A1 * B3) / (B5 * B88)) as ResultValue from TableA A join TableB B on A.id = B.id
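To make the substitution step concrete, here is a minimal sketch of how the program could splice a formula into that select. The function name `build_query` and the token whitelist are my own illustration, not part of our actual code; the whitelist is there because pasting a user-editable formula into SQL otherwise invites injection:

```python
import re

# Only bare column references like A99 or B88 are allowed in a formula.
FORMULA_TOKEN = re.compile(r'^[AB]\d{1,4}$')

def build_query(formula: str) -> str:
    """Embed a validated arithmetic formula into the two-table join query."""
    # Whitelist-check every identifier so arbitrary SQL cannot be injected.
    for token in re.findall(r'[A-Za-z]\w*', formula):
        if not FORMULA_TOKEN.match(token):
            raise ValueError(f"illegal token in formula: {token}")
    return (f"select {formula} as ResultValue "
            f"from TableA A join TableB B on A.id = B.id")
```

So `build_query("A99 * ((A1 * B3) / (B5 * B88))")` yields exactly the select above, and the whole calculation still runs in one DB request.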
This worked fast in one DB request to 2 indexed tables. But over time, the system has grown, and all the available fields are used up.
Now the requirement is to have virtually no limit on the number of fields, to accommodate much longer and more complex calculations in a single SQL select over the two tables.

I must admit I don't know much about other, newer database systems and their capabilities for such a problem. Are there databases that would support such a concept, or another way to have those calculations done in the database? The old way was very flexible while it lasted, but is there a more efficient way to implement the same logic? Reading the data out into the program dynamically, parsing it and calculating with it will probably take much longer. I also suspect that simply switching to another database that allows tables with more than 1300 fields would only be a short-term solution, until we need even more fields.
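For reference, this is roughly the "read it out and calculate in the program" path I would like to avoid. Everything here is a hypothetical sketch (the `evaluate` helper and the sample row are mine); it parses the same formula with Python's `ast` module and applies it to values fetched from the two tables:

```python
import ast
import operator

# Map AST operator nodes to the arithmetic our formulas use.
OPS = {ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Add: operator.add, ast.Sub: operator.sub}

def evaluate(formula: str, values: dict) -> float:
    """Evaluate an arithmetic formula over column values fetched from the DB."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Name):
            return values[node.id]          # e.g. values["A99"]
        if isinstance(node, ast.Constant):
            return node.value
        raise ValueError("unsupported expression")
    return walk(ast.parse(formula, mode="eval"))

# In reality this row would come from selecting the needed columns of
# TableA and TableB; hard-coded here only to show the shape of the call.
row = {"A99": 2.0, "A1": 3.0, "B3": 4.0, "B5": 6.0, "B88": 1.0}
result = evaluate("A99 * ((A1 * B3) / (B5 * B88))", row)
```

It works, but it trades one indexed SQL request for a round-trip of raw values plus parsing and evaluation on the application side, which is exactly the overhead I expect to be slower.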