What is the best way to store object data in HTML5's localStorage? I haven't worked much with key-value storage.
In my research I've seen a few different approaches.
Example data:
var commands = [
  {invokes: 'Window', type: 'file', data: '/data/1'},
  {invokes: 'Action', type: 'icon', data: '/data/2'},
  {invokes: 'Window', type: 'file', data: '/data/3'}
];
Approach 1: store keys that represent each data item
// for(...) {
  localStorage["command[" + i + "].invokes"] = commands[i].invokes;
  localStorage["command[" + i + "].type"] = commands[i].type;
  localStorage["command[" + i + "].data"] = commands[i].data;
// }
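For comparison, reading approach 1 back means reassembling each object key by key. A rough sketch (it assumes the number of records is kept under a separate, hypothetical command.count key):

// Rebuild the array from the per-field keys.
var count = parseInt(localStorage["command.count"] || "0", 10); // hypothetical counter key
var restored = [];
for (var i = 0; i < count; i++) {
  restored.push({
    invokes: localStorage["command[" + i + "].invokes"],
    type: localStorage["command[" + i + "].type"],
    data: localStorage["command[" + i + "].data"]
  });
}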
Approach 2: key is the entity name, store JSON
localStorage["commands"] = JSON.stringify(commands);
The second approach requires a JSON.parse() when reading the data back.
Pros/cons?
For the record I went with approach 2. My key acts like a table name, and the value is a JSON-stringified array of records. When retrieving the table you must call JSON.parse().
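A minimal sketch of that round trip, assuming the commands array from the question:

// Write: serialize the whole "table" under one key.
localStorage["commands"] = JSON.stringify(commands);

// Read: parse it back, falling back to an empty array if the key is missing.
var storedCommands = JSON.parse(localStorage["commands"] || "[]");
console.log(storedCommands[0].invokes); // "Window"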
Technologies for client-side storage are covered here: http://madhukaudantha.blogspot.com/2011/02/client-side-storages-with-html-5.html...
Approach 2 is OK, though there are more ways to do it.
Certainly your approach works just fine; however, it does leave you a bit stuck with the convention you chose moving forward. I would recommend that you consider wrapping your localStorage access up in a class so that your convention is isolated in one place, as a true convention.
Otherwise, should you choose to change how you approach it, you will have implementation code scattered all over your code base.
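A minimal sketch of such a wrapper (the class name and method names here are hypothetical, purely to illustrate isolating the convention):

// Hypothetical wrapper that hides the storage convention behind one object.
function CommandStore(storageKey) {
  this.storageKey = storageKey || "commands";
}

// Persist an array of command records as a single JSON string.
CommandStore.prototype.saveAll = function (commands) {
  localStorage[this.storageKey] = JSON.stringify(commands);
};

// Load the records back, returning an empty array if nothing is stored yet.
CommandStore.prototype.loadAll = function () {
  return JSON.parse(localStorage[this.storageKey] || "[]");
};

// Usage:
var store = new CommandStore();
store.saveAll(commands);
var stored = store.loadAll();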
I have a problem with updating certain properties of an array of objects in a Firebase Realtime Database.
My object looks like the following (see picture).
Now I want to update the IsComing property of the second participant object.
At the moment I use the updateIsComming() function, but this is not very satisfying, because I have to rewrite the whole object.
function updateIsComming() {
  const db = getDatabase();
  update(ref(db, "your_path/EventModel/" + modelkey), {
    Location: "London",
    Participants: [
      { Name: "Bella2", IsComming: "true" },
      { Name: "Tom", IsComing: "true" },
    ],
  });
}
Instead, I just want to reference the specific property in that array. For example:
Participant[1].IsComming = false;
Is there any way I can access a specific element of that array directly?
Arrays as a data structure are not recommended in Firebase Realtime Database. To learn why, I recommend reading Best Practices: Arrays in Firebase.
One of the reasons for this is that you need to read the array to determine the index of the item to update. The pattern is:
Read the array from the database.
Update the necessary item(s) in the array in your application code.
Write the updated array back to the database.
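For illustration, that read-modify-write pattern might look something like this with the modular SDK (the helper name, path, and index handling here are placeholders):

import { getDatabase, ref, get, set } from "firebase/database";

// Read-modify-write: the entire array travels to the client and back.
async function setIsComing(modelkey, index, value) {
  const db = getDatabase();
  const participantsRef = ref(db, "your_path/EventModel/" + modelkey + "/Participants");
  const snapshot = await get(participantsRef);   // 1. read the array
  const participants = snapshot.val() || [];
  participants[index].IsComing = value;          // 2. update the item in application code
  return set(participantsRef, participants);     // 3. write the whole array back
}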
As you already discovered, this is not ideal. As is common with NoSQL databases, consider an alternative data structure that better suits the use case.
In this case, an alternative data structure to consider is:
Participants: {
  "Bella2": true,
  "Tom": true
}
Here we use the name of the participant as the key, which means:
Each participant can be present in the object only once, because keys in an object are by definition unique.
You can now update a user's status by their name, for example with update(ref(db, "your_path/EventModel/" + modelkey + "/Participants"), { Tom: false }).
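A minimal sketch of that call with the modular SDK (the path and modelkey are assumed from the question):

import { getDatabase, ref, update } from "firebase/database";

// Flip a single participant's flag without touching the rest of the object.
function setParticipantComing(modelkey, name, isComing) {
  const db = getDatabase();
  const participantsRef = ref(db, "your_path/EventModel/" + modelkey + "/Participants");
  // update() merges only the keys you pass, so the other participants stay untouched.
  return update(participantsRef, { [name]: isComing });
}

setParticipantComing(modelkey, "Tom", false);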
I have two tables: Transfers and Releases. I need to join them into one ordered array.
@transfers = Transfer.includes(:shipment, batch: %i[products material], sender: :profile).where(receiver: @dispensary, material: { id: material.id }, batch: { capacity: capacity }).where.not(shipment: { delivery_date: nil })
@outgoing_transfers = Transfer.includes(:shipment, batch: %i[products material]).where(sender: @dispensary, material: { id: material.id }, batch: { capacity: capacity }).where.not(shipment: { delivery_date: nil })
return if @releases.empty? && @transfers.empty? && @outgoing_transfers.empty?
@operations = (@releases + @transfers + @outgoing_transfers).sort_by(&:updated_at).paginate(page: params[:page], per_page: 27)
As you can see there is a problem. Each time I want to see another 27 records (I'm using the 'will_paginate' gem), it fires these queries again. Is there a way to do this only once and then operate on the resulting array?
PS: Sorry for my bad English.
"Operating on array" means loading everything in memory - this is a dangerous strategy that could literally kill your app with the dataset large enough.
You could try building a view for smth like select ... from transfers union select .. from releases and working with it (should be easily doable with ActiveRecord too, if necessary - by setting the proper table name and making a model read-only).
Or even question your data design - if transfers and releases are isomorphic to some extent and can be used in the same context, maybe they could be modeled using single table inheritance (or even the same entity with some kind-like property if the difference is really minor?)
For reference, I'm using "@apollo/client": "^3.5.5".
I've defined my typePolicies like so, as suggested in the docs:
HistoricalData: {
  keyFields: ["variable", "workspace"],
  fields: {...}
}
and when my cache is built, I am expecting my cacheId to be like
<__typename>:<id>:<id>
`HistoricalData:${props.variable}:${props.workspace}`
but instead, when I look in the Apollo cache, it's been created using the keyField names and the values in an object, such as
HistoricalData:{"variable":"GAS.TOTAL","workspace":"ABC"}
instead of
HistoricalData:GAS.TOTAL:ABC
so when I try to readFragment it returns null
client.readFragment({
  id: `HistoricalData:${props.variable}:${props.workspace}`,
  fragment: apolloGQL`fragment MyHistorical on Historical {
    variable
    workspace
  }`
});
It does actually return a value from the cache if I build my id in the structure that exists in the cache and call readFragment with that.
Has anyone else noticed that Apollo Client is not creating the cache IDs in the structure described in the docs?
After some research I came upon the correct way to handle this case. I know that you have already moved on, but just in case anyone else has the same problem in the future, here goes:
As described in the documentation for customizing the cache ID, the cache ID will be a stringified object, as you pointed out. It's not quite explicit in the documentation, but at this point in time it provides this nested example of a cache ID:
Book:{"title":"Fahrenheit 451","author":{"name":"Ray Bradbury"}}
But as users we don't have to concern ourselves with the format of this ID, because there's a helper for that, called cache.identify.
For your specific case, you could use something like this:
const identifiedId = cache.identify({
__typename: 'HistoricalData',
variable: 'GAS.TOTAL',
workspace: 'ABC'
});
cache.readFragment({
id: identifiedId,
fragment: apolloGQL`fragment MyHistorical on Historical {
variable
workspace
}`
});
My use case is a mobile app with React Native, but I guess this is about very common good practices.
I want to be able, in an app, to take an image (from the camera or the gallery) and to store it so it can be fetched by the date it was added or by some metadata added by the user.
The theory seems quite simple; one way of doing it could be:
Use any library (e.g. this great one) to get the image,
Store the image as base64, along with its metadata, in, let's say, RealmJS (some internal DB),
Query this DB to get what I want.
This should work, and should be quite simple to implement.
But I'm wondering about a few things:
Given the quality of a smartphone camera's images, isn't it quite a shame to store them as base64 (no checksum, more memory used, ...)?
Isn't base64 a bad format in general for storing images?
Is it a good idea to store the image in RealmJS? It will be a pain for the user to reuse the image (share it on Facebook, ...), but on the other hand, if I write it to the device's storage and store a URI, that can lead to a lot of problems (a missing file if the user deletes it, the need for storage access, ...).
Is this approach "clean" (OK, it works, but...)?
If you have any experience, tips, or good practice to share, I'll be happy to talk about it :)
You can store binary data (images) in Realm. But if you are using Realm locally (not sync), I would suggest that you store the image on the file system and store the path in Realm. Your model could be something like:
const ImageSchema = {
  name: 'Image',
  properties: {
    path: 'string',
    created: 'date',
    modified: 'date?',
    tags: 'Tag[]'
  }
};

const TagSchema = {
  name: 'Tag',
  properties: {
    name: 'string',
    images: { type: 'linkingObjects', objectType: 'Image', property: 'tags' }
  }
};
That is, for every image the timestamp of its creation is stored. Moreover, it has an optional timestamp for when the image was modified. The property path is where to find the image on the file system. If you prefer to store the image itself, you can use a property of type data instead. To find images less than a week old, you can use realm.objects('Image').filtered('created >= $0', new Date(Date.now() - 7*24*60*60*1000)).
Just for fun, I have added a list of tags for each image. The linkingObjects property in Tag makes it possible to find all images which have a particular tag, e.g. realm.objects('Tag').filtered('name == "Dog"')[0].images.
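A quick usage sketch for those schemas (the file path below is just a placeholder; in practice it would come from the camera or gallery library):

const Realm = require('realm');

Realm.open({ schema: [ImageSchema, TagSchema] }).then((realm) => {
  // Save the file's path and metadata; the image bytes stay on the file system.
  realm.write(() => {
    realm.create('Image', {
      path: '/path/to/photo.jpg', // placeholder: wherever the image library saved the file
      created: new Date(),
      tags: []
    });
  });

  // Query by metadata later, e.g. everything added in the last 24 hours.
  const recent = realm
    .objects('Image')
    .filtered('created >= $0', new Date(Date.now() - 24 * 60 * 60 * 1000));
  console.log(recent.length + ' recent images');
});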
I am new to Backbone-relational, and I am not sure what the right way to use HasMany is.
I have a Parent model which has many children (by "many" I mean thousands of children). In order to avoid performance issues, I query children by their foreign key: /child/?parent=1, instead of creating a huge list of child_ids in Parent. But it seems this is not the way Backbone-relational works.
So I am wondering what is the right way to handle this.
1. Change my JSON API to include the list of child ids in the parent, then send thousands of ids as Backbone-relational recommends:
url = function(models) {
  return '/child/' + ( models ? 'set/' + _.pluck( models, 'id' ).join(';') + '/' : '');
}
// this will end up with a really long url: /child/set/1;2;3;4;...;9998;9999
2. Override the HasMany collection handling in Backbone-relational and let it handle this situation. My first thought is:
relations: [{
  collectionOptions: function(model) {
    // I am not sure if I should use `this` to access my relation object
    var relation = this;
    return {
      model: relation.relatedModel,
      url: function() {
        return relation.relatedModel.urlRoot + '?' + relation.collectionKey + '=' + model.id;
      }
    };
  }
}]
// This seems to work, but it cannot be inherited by other models.
// And in this case the parent will have an empty children list at the beginning,
// so parent.fetchRelated() won't fetch anything; I need to call this URL myself.
3. Only use Backbone-relational as a store, then use a Collection to manage relations.
4. Some other magic way, pattern, or Backbone framework.
Thanks for the help.
Here's the solution that I have going on my current project. Note that Project hasMany comments, events, files, and videos. Those relations and their reverse relations are defined on those models:
Entities.Project = Backbone.RelationalModel.extend({
  updateRelation: function(relation) {
    var id = this.get('id'),
        collection = this.get(relation);
    return collection.fetch({ data: $.param({ project_id: id }) });
  }
});
I have the REST endpoint configured to take parameters that act as successive "WHERE" clauses. So project.updateRelation('comments') will send a request to /comments?project_id=4. I have some further logic on the server side to filter out stuff the user has no right to see (Laravel back-end, btw).
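A quick usage sketch (the project id and the callback are just illustrative):

// Assuming the Project model and its relations are set up as above.
var project = new Entities.Project({ id: 4 });

// Fetches /comments?project_id=4 and merges the results into the relation's collection.
project.updateRelation('comments').done(function () {
  console.log(project.get('comments').length + ' comments loaded');
});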