I'm writing a web app that generates a potentially large text file that the user will download, and all the processing is done in the browser. So far I'm able to read a file over 1 GB in small chunks, process each chunk, generate a large output file incrementally, and store the growing output in IndexedDB. My more naïve attempt, which kept all the results in memory and then serialized them to a file at the very end, was causing all browsers to crash.
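For reference, the chunked reading looks roughly like this (`readInChunks` and `processChunk` are just stand-ins for my actual code):

```javascript
// Roughly how I read the input in slices; processChunk is a stand-in
// for my actual per-chunk processing.
var CHUNK_SIZE = 1024 * 1024; // 1 MB slices

function readInChunks(file, processChunk, done) {
    var offset = 0;
    var reader = new FileReader();

    reader.onload = function() {
        processChunk(reader.result);   // handle this slice
        offset += CHUNK_SIZE;
        if (offset < file.size) {
            readNext();                // queue the next slice
        } else {
            done();
        }
    };

    function readNext() {
        // slice() only creates a reference; nothing is read until readAsText
        reader.readAsText(file.slice(offset, offset + CHUNK_SIZE));
    }

    readNext();
}
```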
My question is two-fold:
- Can I append to an entry in IndexedDB (either a string or an array) without reading the whole thing into memory first? Right now, this:

  ```javascript
  task.dbInputWriteQueue.push(output);
  var transaction = db.transaction("files", "readwrite");
  var objectStore = transaction.objectStore("files");
  var request = objectStore.get(file.id);
  request.onsuccess = function() {
      request.result += nextPartOfOutput;
      objectStore.put(request.result);
  };
  ```

  is causing crashes once the output starts to get big. I could just write a bunch of small entries into the database (see the sketch after this list), but then I'd have to read them all into memory later anyway to concatenate them. See part 2 of my question...
- Can I make a data object URL to reference a value in IndexedDB without loading that value into memory? For small strings I can do:

  ```javascript
  var url = window.URL.createObjectURL(new Blob([myString], { type: 'text/plain' }));
  ```

  but for large strings this doesn't go so well. In fact, it crashes before the string is even loaded: big reads using `get()` from IndexedDB seem to cause Chrome, at least, to crash (even the developer tools crash).
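For what it's worth, the "many small entries" approach I mentioned in part 1 would look something like this; the store name `"chunks"` and the composite keyPath `["fileId", "seq"]` are just assumptions for the sketch:

```javascript
// Sketch of the "many small entries" idea: one record per output chunk,
// keyed by file id + sequence number so the chunks can be read back in order.
// Assumes an object store created with { keyPath: ["fileId", "seq"] }.
var seq = 0;

function appendChunk(db, fileId, chunkText) {
    var transaction = db.transaction("chunks", "readwrite");
    transaction.objectStore("chunks").put({
        fileId: fileId,
        seq: seq++,
        data: chunkText
    });
}
```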
Would it be faster if I were using Blobs instead of strings? Is that conversion cheap?
Basically, I need a way, in JavaScript, to write a really big file to disk without loading the whole thing into memory at any one point. I know that you can give `createObjectURL` a File, but that doesn't work in my case, since I'm generating a new file from the one the user provides.
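If Blobs do help, I imagine assembling the final download would look something like this, reading the small chunk records back in order and letting the browser manage the Blob's backing storage rather than one giant string; this is just a sketch of what I have in mind, using the same assumed `"chunks"` store as above:

```javascript
// Sketch: read chunk records back in key order via a cursor and build
// up a single Blob, never holding one giant string on the JS heap.
function assembleFile(db, fileId, callback) {
    var result = new Blob([], { type: "text/plain" });
    var transaction = db.transaction("chunks", "readonly");
    var range = IDBKeyRange.bound([fileId, 0], [fileId, Infinity]);

    transaction.objectStore("chunks").openCursor(range).onsuccess = function(e) {
        var cursor = e.target.result;
        if (cursor) {
            // Blobs can be concatenated without copying everything into a string
            result = new Blob([result, cursor.value.data], { type: "text/plain" });
            cursor.continue();
        } else {
            callback(window.URL.createObjectURL(result));
        }
    };
}
```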