Situation
I'm using multiple storage databases as attachments to one central "manager" DB.
- The storage tables share one pseudo-AUTOINCREMENT index across all storage databases.
- I need to iterate over the shared index frequently.
- The final number and names of the storage tables are not known when a storage DB is created.
- On a certain signal, a range of entries specified at that time will be deleted.
- It is vital that no insertion fails and no entry gets deleted before its signal.
- A power outage is possible; data loss in that case is hardly, if ever, tolerable. Any solution that may cause such loss (in-memory databases etc.) is not viable.
- Database access is currently controlled using strands. This takes care of sequential access.
- Due to the high frequency of INSERT transactions, I must trigger WAL checkpoints manually (see the checkpoint pragma below). I've seen journals of up to 2 GB in size otherwise.
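For context, the manual checkpointing amounts to something like the following, whether issued as a pragma or via the equivalent sqlite3_wal_checkpoint_v2() call; the checkpoint mode (PASSIVE, RESTART, or TRUNCATE) is a tuning detail:

-- Checkpoint the WAL manually between insert batches; TRUNCATE also resets the WAL file to zero size
PRAGMA wal_checkpoint(TRUNCATE);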
Current solution
I'm inserting datasets using parameter binding to a precreated statement.
INSERT INTO datatable VALUES (:idx, ...);
While doing that, I remember the start and end index. Next, I bind the index to an insert statement into the registry table:
INSERT INTO regtable VALUES (:idx, 'datatable');
My query determines the datasets to return like this:
SELECT MIN(rowid), MAX(rowid), tablename
FROM (SELECT rowid, tablename FROM regtable LIMIT 30000)
GROUP BY tablename;
After that, I query
SELECT * FROM datatable WHERE rowid >= :minid AND rowid <= :maxid;
where I use predefined statements for each datatable and bind both variables to the first query's results.
This is too slow. As soon as I create the registry table, my insertions slow down so much that I can't meet the benchmark speed.
Possible Solutions
There are several other ways I can imagine doing this:
- Create a view of all indices as a UNION or OUTER JOIN of all table indices (see the sketch after this list). This can't be done persistently on attached databases.
- Create triggers for INSERT/DELETE on table creation that fill a registry table (also sketched below). This can't be done persistently on attached databases.
- Create a trigger for CREATE TABLE on database creation that would create the triggers described above. Requires user functions.
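To make the view/trigger variants concrete, here is a rough sketch of what I have in mind, written as if executed on a storage database while it is still the main schema of its own connection. All names (datatable_a, datatable_b, the index column idx, the registry table regtable) are placeholders, and the trigger assumes regtable lives in the same storage database, since a trigger may only reference objects in its own database:

-- Persistent view over all per-table indices, one UNION ALL branch per data table:
CREATE VIEW idxview AS
    SELECT idx, 'datatable_a' AS tablename FROM datatable_a
    UNION ALL
    SELECT idx, 'datatable_b' AS tablename FROM datatable_b;

-- Per-table trigger that mirrors every inserted index into the registry table:
CREATE TRIGGER datatable_a_reg AFTER INSERT ON datatable_a
BEGIN
    INSERT INTO regtable VALUES (NEW.idx, 'datatable_a');
END;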
Questions
Now, before I go and add user functions (something I've never done before), I'd like some advice on whether this has any chance of solving my performance issues.
- Assuming I create the databases using a separate connection before attaching them: can I create views and/or triggers on the database (as the main schema) that will still work later when I connect to the database via ATTACH?
- From what it looks like, an AFTER INSERT trigger will fire after every single inserted row. If it inserts rows into another table, does that mean I'm increasing my number of transactions from 2 to 1+N? Or is there a mechanism that speeds up triggered inserts? The first case would slow things down horribly.
- Is there any chance that a FULL OUTER JOIN (I know that I need to build it from other JOIN commands) is faster than filling a registry with insertion transactions every time? We're talking roughly ten transactions per second with an average of 1000 elements each (insert) vs. one query of 30,000 entries every two seconds (query).
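For reference, the FULL OUTER JOIN I'd have to emulate would look roughly like the usual LEFT JOIN plus UNION ALL pattern (table_a and table_b are placeholders for two data tables):

-- Emulated FULL OUTER JOIN: the LEFT JOIN in one direction, plus the rows of the
-- reversed LEFT JOIN that found no match on the other side.
SELECT a.idx AS idx_a, b.idx AS idx_b
FROM table_a AS a LEFT JOIN table_b AS b ON a.idx = b.idx
UNION ALL
SELECT a.idx, b.idx
FROM table_b AS b LEFT JOIN table_a AS a ON a.idx = b.idx
WHERE a.idx IS NULL;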