I have a transaction with both a write and a read to the same object store.
var t = db.transaction("my_store","readwrite");
var obj = ...
// Write object 'obj' to store here
// Read back the store, with the obj already added
Is it possible to do the above operations in one transaction? When I tested by awaiting the write, the read back does seem to return the object, but I'm not certain that is guaranteed, or whether it depends on whether the await has already committed.
Adding t.commit() after writing obj just ends the transaction, so the subsequent read of obj won't work.
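For clarity, here is a minimal sketch of what I mean (the store name and the 'id' keyPath are placeholders):
const t = db.transaction("my_store", "readwrite");
const store = t.objectStore("my_store");
const obj = { id: 1, name: "example" };

store.put(obj);                // queue the write
const readReq = store.get(1);  // queued after the write, in the same transaction
readReq.onsuccess = () => {
  console.log(readReq.result); // should this reliably reflect the write above?
};
t.oncomplete = () => console.log("transaction committed");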
I have a typical ngrx-data arrangement of 'User' entities linked to db.
I implement the standard service to handle the data:
@Injectable({ providedIn: 'root' })
export class UserService extends EntityCollectionServiceBase<UserEntity> {
  constructor(serviceElementsFactory: EntityCollectionServiceElementsFactory) {
    super('User', serviceElementsFactory);
  }
}
I read the data using:
this.data$ = this.userService.getAll();
this.data$.subscribe(d => { this.data = d; ... });
Data arrives fine. Now, I have a GUI / HTML form where the user can make changes and update them. It also works fine. Any changes the user makes in the form are applied via:
this.data[fieldName] = newValue;
This updates the data and ngrx-data automatically updates the entity cache.
I want to implement an option where the user can decide to cancel all changes before they are written to the db and get back the initial data from before they made any adjustments. However, I am somehow unable to overwrite the cached changes.
I tried:
this.userService.clearCache();
this.userService.load();
also tried to re-call:
this.data$ = this.userService.getAll();
but I keep getting the data from the cache, with the user's changes, rather than the data from the db. In the db the data is unmodified; nothing was ever written to it.
I am not able to find the approach to discard my entity cache and reload the original db data to replace the cached values.
Any input is appreciated.
You will need to subscribe to the reassigned observable whenever you change this.data$, but that gets a bit messy.
First, bind this.data$ via this.data$ = this.userService.entities$. Then, whether you use load() or getAll(), any change to entities$ will be emitted to this.data$ subscribers. You can even skip load() and getAll() if they were already called elsewhere.
You can then call clearCache() followed by load() to reset the cache.
But I strongly recommend keeping the entity data pure. If the user exits in the middle without saving or resetting, the changed data shows up everywhere you use this entity.
My recommended alternatives:
(1) Use an Angular FormGroup to populate the form with the entity data values, then reuse the same populate function to reset the form (see the sketch after the code below).
(2) Make a function that copies the data, then use that copy function as the reset, for example using _.cloneDeep.
(2.1) Using an RxJS BehaviorSubject:
resetTrigger$ = new BehaviorSubject<boolean>(false);
ngOnInit() {
  // combineLatest comes from 'rxjs'; _ is lodash
  this.dataSub = combineLatest([
    this.resetTrigger$,
    this.userService.entities$
  ]).subscribe(([trigger, data]) => {
    this.data = _.cloneDeep(data);
  });

  // can skip if already loaded before
  this.userService.load();
}
When you want to reset the data, push a new value to the trigger:
resetForm() {
  this.resetTrigger$.next(!this.resetTrigger$.value);
}
(2.2) Using native function (need to store the original):
this.dataSub = this.userService.entities$.pipe(
  tap(d => {
    this.originData = d;
    this.resetForm();
  })
).subscribe();
resetForm:
resetForm = () => {
  this.data = _.cloneDeep(this.originData);
};
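And a minimal sketch of option (1), assuming UserEntity has name and email fields and using a hypothetical import path: the same function populates the form on load and resets it on cancel, so the entity cache is never touched by the edits.
import { Component } from '@angular/core';
import { FormBuilder, FormGroup } from '@angular/forms';
import { UserEntity } from './user-entity'; // hypothetical path

@Component({
  selector: 'app-user-form',
  templateUrl: './user-form.component.html'
})
export class UserFormComponent {
  form: FormGroup;

  constructor(private fb: FormBuilder) {
    this.form = this.fb.group({ name: [''], email: [''] });
  }

  // call this both after loading the entity and on "cancel":
  // it (re)populates the form from the untouched entity data
  setForm(user: UserEntity): void {
    this.form.patchValue({ name: user.name, email: user.email });
  }
}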
I would like to load a local file (client-side) with papaparse into my React application. Unfortunately, it only loads the first chunk but never the whole file. My file contains about 500 rows and there are never more than 300 rows loaded. It seems like the complete function is already called after the first chunk.
Since I need to navigate to another page when the file is loaded completely, this is bothering me since I need the complete file for further functions.
The code I use at the moment:
async getData() {
  const self = this;
  let dataList = [];
  Papa.parse(await this.fetchCsv(),
    {
      delimiter: ',',
      header: true,
      chunk: function (result, parser) {
        parser.pause();
        dataList = dataList.concat(result.data);
        parser.resume();
      },
      complete: function () {
        self.updateData(dataList);
      }
    });
}

async fetchCsv() {
  const response = await fetch(this.props.location.state.filename);
  const reader = response.body.getReader();
  const result = await reader.read();
  const decoder = new TextDecoder('utf-8');
  return decoder.decode(result.value);
}
What I've also tried is using step instead of chunk but this did not change anything.
Can anyone tell me what I'm doing wrong here and why papaparse does not load the whole file?
You may be able to let Papa Parse do more: it can read a local File directly or stream data from a remote server.
If you only have about 500 records, you may not need to add the complexity associated with streaming. This is especially true if you're just accumulating the data (which it appears you are). Use streaming primarily to process the data 1 record at a time.
If you want to stream, I'd recommend using the "step" callback instead of the "chunk" callback so you can process each row of data.
If you use the step or chunk callbacks, then you don't need the complete callback; if it is called, it won't include the parsed data.
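For example, here is a hedged sketch of letting Papa Parse download and stream the URL itself (loadCsv and onDone are hypothetical names; if you actually have a File object rather than a URL, you can pass it to Papa.parse directly and drop the download flag):
import Papa from 'papaparse';

function loadCsv(url: string, onDone: (rows: any[]) => void): void {
  let rows: any[] = [];
  Papa.parse(url, {
    download: true,        // Papa Parse fetches and streams the file itself
    header: true,
    delimiter: ',',
    step: (result) => {    // called once per parsed row
      rows = rows.concat(result.data);
    },
    complete: () => {      // fires after the last row; no aggregated data here
      onDone(rows);
    }
  });
}
In your component this would be called as loadCsv(this.props.location.state.filename, data => this.updateData(data)).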
I'm currently working on a project that requires me to make an API call. It only allows me to make 500 requests / 10 minutes, but the data returned (an object with ~800 properties) only changes every few months, so I'd rather just cache it somewhere.
I'm very new to this whole thing, and I'm wondering how I can make the call every few months and store the data somewhere so that I can retrieve it from the client whenever needed?
Thanks in advance!
Since you want to store your object for a longer period of time, I would suggest writing it to disk rather than caching it in memory (in case your node app crashes).
You didn't mention it precisely, but I assume you're referring to a plain JavaScript object that you want to store? To store such an object to disk, you can do the following:
var fs = require("fs");

// with your object being stored in the variable "myObject", after your API call:
var myObject = ....

fs.writeFile("myFilename.json", JSON.stringify(myObject), "utf8", function(err) {
  if (err) {
    return console.log(err);
  }
  // do whatever you want to do after the file has been saved...
});
To read the object back from disk, simply do:
myObject = require("./myFilename.json");
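If you also want the "every few months" refresh, a rough sketch, assuming a hypothetical fetchFromApi function that wraps your API call, is to check the file's age before deciding whether to refetch:
import * as fs from "fs";

const CACHE_FILE = "./myFilename.json";
const MAX_AGE_MS = 60 * 24 * 60 * 60 * 1000; // roughly two months

async function getCachedData(fetchFromApi: () => Promise<object>): Promise<object> {
  try {
    const stats = fs.statSync(CACHE_FILE);
    if (Date.now() - stats.mtimeMs < MAX_AGE_MS) {
      // cache exists and is fresh enough: read it from disk
      return JSON.parse(fs.readFileSync(CACHE_FILE, "utf8"));
    }
  } catch (e) {
    // file missing or unreadable: fall through and refetch
  }
  const data = await fetchFromApi();
  fs.writeFileSync(CACHE_FILE, JSON.stringify(data), "utf8");
  return data;
}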
I need to combine a catch up and a subscribe to new feed. So first I query the database for all new records I've missed, then switch to a pub sub for all new records that are coming in.
The first part is easy: do your query, perhaps in batches of 500; that gives you an array, and you can rx.observeFrom that.
The second part is easy: you just put an rx.observe on the pubsub.
But I need to do this sequentially, so I need to play all the old records before I start playing the new ones coming in.
I figure I can start the subscription to the pubsub and put those events in an array, then start processing the old ones. When I'm done, I either remove the dups (or, since I do a dup check, allow the few dups), then play the accumulated records until they are gone, and from then on it's one in, one out.
My question is: what is the best way to do this? Should I create a subscription to start building up new records in an array, then start processing the old ones, and then in the "then" of the old-record processing subscribe to the other array?
OK, this is what I have so far. I need to build up the tests and finish some pseudocode to find out whether it even works, much less whether it's a good implementation. Feel free to stop me in my tracks before I bury myself.
var catchUpSubscription = function catchUpSubscription(startFrom) {
  EventEmitter.call(this);
  var subscription = this.getCurrentEventsSubscription();

  // calling map to start subscription and catch in an array.
  // not sure if this is right
  var events = rx.Observable.fromEvent(subscription, 'event').map(x => x);

  // getPastEvents gets batches of 500 iterates over and emits each
  // till no more are returned, then resolves a promise
  this.getPastEvents({ count: 500, start: startFrom })
    .then(function() {
      rx.Observable.fromArray(events).forEach(x => emit('event', x));
    });
};
I don't know that this is the best way. Any thoughts?
thx
I would avoid mixing your different async strategies unnecessarily. You can use concat to join together the two sequences:
var catchUpSubscription = function catchUpSubscription(startFrom) {
  var subscription = this.getCurrentEventsSubscription();
  return Rx.Observable.fromPromise(this.getPastEvents({ count: 500, start: startFrom }))
    .flatMap(x => x)
    .concat(Rx.Observable.fromEvent(subscription, 'event'));
};
///Sometime later
catchUpSubscription(startTime).subscribe(x => { /* handle event */ });
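As a hedged follow-up: if the catch-up query and the live feed can overlap, the duplicates could be dropped with distinct, assuming each event carries a unique id (an assumption about your event shape) and that your RxJS version exposes the instance-operator style used above:
catchUpSubscription(startTime)
  .distinct(x => x.id)             // keep only the first event seen per id
  .subscribe(x => handleEvent(x)); // handleEvent is a hypothetical handler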
I can use two different events as the callback to respond when an IndexedDB transaction finishes or is successful:
Let's say... db : IDBDatabase object, tr : IDBTransaction object, os : IDBObjectStore object
tr = db.transaction(os_name, 'readwrite');
os = tr.objectStore(os_name);

case 1:
r = os.openCursor();
r.onsuccess = function() {
  if (r.result) {
    callback_for_result_fetched();
    r.result.continue();
  } else callback_for_transaction_finish();
};

case 2:
tr.oncomplete = callback_for_transaction_finish;
It is a waste if both of them work similarly. So can you tell me, is there any difference between them?
Sorry for bumping quite an old thread, but its question is a good starting point...
I've looked for a similar question, but for a somewhat different use case, and actually found no good answers, or even misleading ones.
Think of a use case where you need to make several writes into an object store, or even into several of them. You definitely don't want to manage every single write and its own success and error events. That is the point of a transaction, and this is the (proper) way to use one with IndexedDB:
var trx = dbInstance.transaction([storeIdA, storeIdB], 'readwrite'),
    storeA = trx.objectStore(storeIdA),
    storeB = trx.objectStore(storeIdB);

trx.oncomplete = function(event) {
  // this code will run only when ALL of the following requests succeed,
  // and only AFTER ALL of them were processed
};

trx.onerror = function(error) {
  // this code will run if ANY of the following requests fails,
  // and only AFTER ALL of them were processed
};

storeA.put({ key: keyA, value: valueA });
storeA.put({ key: keyB, value: valueB });
storeB.put({ key: keyA, value: valueA });
storeB.put({ key: keyB, value: valueB });
The clue to this understanding is found in the following statement of the W3C spec:
To determine if a transaction has completed successfully, listen to the transaction’s complete event rather than the success event of a particular request, because the transaction may still fail after the success event fires.
While it's true that these callbacks function similarly, they are not the same: the difference between onsuccess and oncomplete is that transactions complete, but requests, which are made on those transactions, are successful.
oncomplete is only defined in the spec as related to a transaction. A transaction doesn't have an onsuccess callback.
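A small sketch of that distinction (hypothetical store name, assuming a 'key' keyPath as in the answer above): each request fires its own success event, while the transaction fires a single complete event after all of its requests have been processed, and it can still fail after an individual request's success.
var tr = db.transaction('my_store', 'readwrite');
var os = tr.objectStore('my_store');

var req = os.put({ key: 'a', value: 1 });
req.onsuccess = function() {
  console.log('request succeeded');     // fires once per request
};
tr.oncomplete = function() {
  console.log('transaction completed'); // fires once, after all requests
};
tr.onerror = function() {
  console.log('transaction failed');    // can still fire after a request's
                                        // success event (e.g. a quota error)
};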
I would only caution that there is no guarantee that a successful trx.oncomplete means the data was written to the disk/database:
We are seeing a problem with trx.oncomplete where the data is not being written to the db on disk. Firefox has an explanation of what they did that causes this problem here: https://developer.mozilla.org/en-US/docs/Web/API/IDBTransaction/oncomplete
It seems that Windows/Edge has the same issue. Basically, there is no guarantee that your app will have its data written to the database if/when the user decides to kill or power down the device. We've even tried waiting up to 15 minutes before shutting down in some cases and haven't seen the data written. For me, I'd always want to ensure that a data write completes and is committed.
Are there other solutions for a real persistent database, or enhancements to the IndexedDB beyond FF experimental add...