I am a very simple human being. I need simple things in life. I just want to get a value as int/String/float from a KNOWN, not dynamic, child in the Realtime Database. Simple: no multiple values, no complex physics theory behind networking. I have seen tons of Stack Overflow answers on how to retrieve data into a list, or from a dynamic child. Oh no, not me please. I just need a very simple single value from the database. I will show you the database structure below:
[screenshot: Realtime Database structure]
I just need to get the value of "rewardAtEvery" as an int. That's all, nothing more, nothing less. I can live in peace afterwards. Thanks a lot.
Another human being here. These are the steps you can follow to achieve it:
Add the Firebase Realtime Database dependency to your pubspec.yaml:
dependencies:
  ...
  firebase_database: ^4.1.1
Import the package in your .dart file:
import 'package:firebase_database/firebase_database.dart';
Use it like this:
// Inside an async function:
DataSnapshot snap = await FirebaseDatabase.instance
    .reference()
    .child("path/to/Attributes/rewardAtEvery")
    .once();
int rewardAtEvery = snap.value as int; // the single value, cast to int
I'm developing software that draws elements on the screen and is used by mechanical engineers.
I'm storing my project data in the Redux store. This project data has tons of objects, arrays, etc. I mean, for each element on the screen, there is data stored in the project.
When the user performs an action, I must recalculate the project and set it in the Redux store again, for example:
...
case SET_ACTIVE_UNIT:
  let unit = action.unit;
  project = state.get('project').toJS(); // I'm using Immutable.js
  project = ProjectLogic.addActiveUnit(project, unit, action.shiftKey);
  return state.set('project', fromJS(project));
...
OK, you will say that this kind of usage is not right, because I'm reading all the data and resetting the whole thing into the reducer. You will advise me to use state.setIn, but that is really impossible, because the addActiveUnit function recalculates the project and about 20% of the project data changes. So I can't express this change with state.setIn.
My problem starts here: if there are 60-80 elements drawn on the screen, rendering performance slows down after return state.set('project', fromJS(project)). Every new item makes it worse.
How can I handle this problem?
Thanks all
As a general observation, toJS() is considered to be the most expensive API in Immutable.js, and should be avoided as much as possible.
My initial advice would be to not use Immutable.js.
Instead, you might want to look at using immer to handle the immutable update logic.
Also, our new redux-starter-kit package uses Immer internally.
Beyond that, I'd suggest doing some profiling to see where exactly the perf bottlenecks actually are.
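For illustration, here is a minimal sketch of what the SET_ACTIVE_UNIT case above might look like with Immer instead of Immutable.js. It assumes the state is a plain object like { project: {...} } and that ProjectLogic.addActiveUnit still returns the recalculated project:

import { produce } from 'immer';

// Immer lets the reducer work with plain objects; produce() turns the
// "mutations" applied to the draft into an immutable update.
const reducer = (state, action) =>
  produce(state, draft => {
    switch (action.type) {
      case SET_ACTIVE_UNIT:
        draft.project = ProjectLogic.addActiveUnit(
          draft.project,
          action.unit,
          action.shiftKey
        );
        break;
      // ...other cases
    }
  });

This avoids the toJS()/fromJS() round trip on every action, which is usually where the time goes with a large project object.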
I have a React-Redux app where I have several tabs, and I keep my code in a structure of folder-per-tab. Each folder contains an actions file, service file, constants file and a reducer file.
When I fetch the data from the server, I fetch it as one big nested object, whose top level keys are sectionA, sectionB, sectionC and so on.
Each tab may use data from multiple sections, for example, tab 1 may use sectionA and sectionB, tab 2 may use sectionB and sectionC and so on.
This creates a problem in the way I split the data into reducers. If the top-level keys in the Redux store are "tab1" and "tab2", and I want to update data in sectionB, then I have to do it in two different reducers. On the other hand, if the top-level keys are "sectionA", "sectionB", etc., then my folder structure is wrong. Is there any way to solve this?
Thanks.
It sounds like you are thinking very much like a front-end developer, and categorising your state according to how it relates to the user interface.
You might want to think about how you are normalizing your state shape:
https://redux.js.org/recipes/structuring-reducers/normalizing-state-shape
Redux is really a tiny backend for your front-end. I'm sure the purists will debate this on a million levels, but it actually functions like a little, local document store.
Try thinking about your Redux structure more in terms of what the data is, rather than where you want to put it on the screen.
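As a rough sketch (the section, action, and field names here are made up for illustration), a store keyed by section rather than by tab means an update to sectionB lives in exactly one reducer, and each tab component just selects the sections it needs:

// Hypothetical normalized shape: top-level keys are the data sections, not the tabs.
const initialState = {
  sectionA: { byId: {}, allIds: [] },
  sectionB: { byId: {}, allIds: [] },
  sectionC: { byId: {}, allIds: [] },
};

// One reducer per section; tab 1 and tab 2 both read sectionB from here.
function sectionBReducer(state = initialState.sectionB, action) {
  switch (action.type) {
    case 'SECTION_B_ITEM_UPDATED':
      return {
        ...state,
        byId: { ...state.byId, [action.id]: action.data },
      };
    default:
      return state;
  }
}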
The normalizr library is some next-level-ness for that:
https://github.com/paularmstrong/normalizr
I'm still debating whether I think it's too far. My app is starting to turn from an MVVC into an MVCMVCCVMMV... (you get it, some kind of epic roman numeral).
How much data do I want to keep in a pubsub model locally, vs always hitting my API server for that?
How long does a user leave a page open, filling the redux store up with new data until there's a memory problem?
Garbage collection in redux is a whole extra conversation, and this is worth a read: https://github.com/reduxjs/redux/issues/1824
Old mate Dan Abramov jumps in with some useful thoughts on that thread.
I realise none of this is an answer per se, but it seems like Redux has more 'use case scenarios' than answers generally anyway.
I'm trying to save an object and verify that it is saved right after, and it doesn't seem to be working.
Here is my object
import java.util.ArrayList;

import com.googlecode.objectify.annotation.Entity;
import com.googlecode.objectify.annotation.Id;

@Entity
public class PlayerGroup {
    @Id public String n;        // sharks
    public ArrayList<String> m; // members [39393, 23932932, 3223]
}
Here is the code for saving and then trying to load it right after.
playerGroup = new PlayerGroup();
playerGroup.n = reqPlayerGroup.n;
playerGroup.m = reqPlayerGroup.m;
ofy().save().entity(playerGroup).now();
response.i = playerGroup;
PlayerGroup newOne = ofy().load().type(PlayerGroup.class).id(reqPlayerGroup.n).get();
But the "newOne" object is null. Even though I just got done saving it. What am I doing wrong?
--Update--
If I try later (like minutes later) sometimes I do see the object, but not right after saving. Does this have to do with the high replication storage?
I had the same behavior some time ago and asked a question on the Objectify Google Group.
Here is the answer I got:
You are seeing the eventual consistency of the High-Replication
Datastore. There has been a lot of discussion of this exact subject
on the Objectify list in Google Groups, including several links to the
Google documentation on the subject.
Basically, any kind of query which does not include an ancestor() may
return results from a stale view of the datastore.
Jeff
I also got another good answer on how to deal with the behavior:
For deletes, query for keys and then batch-get the entities. Make sure
your gets are set to strong consistency (though I believe this is the
default). The batch-get should return null for the deleted entities.
When adding, it gets a little trickier. Index updates can take a few seconds. AFAIK, there are three ways out of this:
1. Use precomputed results (avoiding the query entirely). If your next view is the user's recently created entities, keep a list of those keys in the user entity, and update that list when a new entity is created. That list will always be fresh, no query required. Besides avoiding stale indexes, this also speeds up your app. The more result sets you can reliably manage, the more queries you can avoid.
2. Hide the latency by "enhancing" the query results with the recently added entities. Depending on the rate at which you're adding entities, either inject only the most recent key, or combine this with the solution in 1.
3. Hide the latency by taking the user through some unaffected views before landing on your query-based view. This strategy definitely has a smell over it. You need to make sure those extra steps are relevant to the user, or you'll give a poor experience.
Butterflies, Joakim
You can read it all here:
How come If I dont use async api after I'm deleting an object i still get it in a query that is being done right after the delete or not getting it right after I add one
Another good answer to a similar question: Objectify doesn't store synchronously, even with now
Google proposes changing the entries to the default values one at a time:
http://code.google.com/appengine/articles/update_schema.html
I have a model with a million rows, and doing this through a web browser will take me ages. Another option is to run it using task queues, but that will cost me a lot of CPU time.
Is there any easy way to do this?
Because the datastore is schema-less, you do literally have to add or remove properties on each instance of the Model. Using Task Queues should use the exact same amount of CPU as doing it any other way, so go with that.
Before you go through all of that work, make sure that you really need to do it. As noted in the article that you link to, it is not the case that all entities of a particular model need to have the same set of properties. Why not change your Model class to check for the existence of new or removed properties and update the entity whenever you happen to be writing to it anyhow?
Instead of what the docs suggest, I would suggest using the low-level GAE datastore API to migrate.
The following code will migrate all the items of type DbMyModel:
new_attribute will be added if it does not exist.
old_attribute will be deleted if exists.
changed_attribute will be converted from boolean to string (True to Priority 1, False to Priority 3)
Please note that query.Run() returns an iterator yielding Entity objects. Entity objects behave simply like dicts:
from google.appengine.api.datastore import Query, Put

query = Query("DbMyModel")
for item in query.Run():
    if 'new_attribute' not in item:
        item['new_attribute'] = some_value
    if 'old_attribute' in item:
        del item['old_attribute']
    if item['changed_attribute'] is True:
        item['changed_attribute'] = 'Priority 1'
    elif item['changed_attribute'] is False:
        item['changed_attribute'] = 'Priority 3'
    # and so on...
    # Put the item back to the db:
    Put(item)
In case you need to select only some records, see the google.appengine.api.datastore module's source code for extensive documentation and examples of how to create a filtered query.
Using this approach, it is simpler to add/remove properties, and it avoids the issues GAE's suggested approach runs into when you have already updated your application model.
For example, now-required fields might not exist (yet), causing errors while migrating, and deleting fields does not work for static properties.
This doesn't help the OP, but it may help googlers with a tiny app: I did what Alex suggested, but simpler. Obviously this isn't appropriate for production apps.
deploy App Engine Console
write code right inside the web interpreter against your live datastore
like so:
from models import BlogPost

for item in BlogPost.all():
    item.attr = "defaultvalue"
    item.put()
Weird question, but I'm not sure if it's anti-pattern or not.
Say I have a web app that will be rendering 1000 records to an html table.
The typical approach I've seen is to send a query to the database, translate the records in some way into some abstract state (be it an array, an object, etc.), and place the translated records into a collection that is then iterated over in the view.
As the number of records grows, this approach uses up more and more memory.
Why not send along with the query a callback that performs an operation on each of the translated rows as they are read from the database? This would mean that you don't need to collect the data for further iteration in the view so the memory footprint shrinks, and you're not iterating over the data twice.
There must be something implicitly wrong with this approach, because I rarely see it used anywhere. What's wrong with this approach?
Thanks.
Actually, this is exactly how a well-developed application should behave.
There is nothing wrong with this approach, except that not all database interfaces allow you to do this easily.
If we are talking about tabulating 10 records for yet another social network, there is no need to mess with callbacks if you can get an array of hashes or whatever with a single call that is already implemented for you.
There must be something implicitly wrong with this approach, because I rarely see it used anywhere.
I use it. Frequently. Even when I wouldn't use too much memory repeatedly copying the data, using a callback just seems cleaner. In languages with closures, it also lets you keep relevant code together while factoring out the messy DB stuff.
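As a rough, self-contained sketch of the pattern (the fetchRows generator below is a made-up stand-in for a real driver cursor, not any particular library's API), the idea is to hand a per-row callback to whatever reads from the database, so the full result set is never collected in memory:

// Toy stand-in for a database driver that yields rows one at a time;
// a real driver would stream them from a server-side cursor.
function* fetchRows() {
  yield { id: 1, name: 'beam' };
  yield { id: 2, name: 'column' };
}

// The callback pattern from the question: each translated row is handled
// as it is read, instead of being pushed into an array for the view.
function forEachRow(rowSource, onRow) {
  for (const row of rowSource) {
    onRow(row);
  }
}

// Usage: render straight to the output; memory stays flat as row counts grow.
forEachRow(fetchRows(), row => {
  console.log(`<tr><td>${row.id}</td><td>${row.name}</td></tr>`);
});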
This is a "limited by your tools" class of problem: Most programming languages don't allow to say "Do something around this code". This was solved in recent years with the advent of closures. Think of a closure as a way to pass code into another method which is then executed in a context. For example, in GSQL, you can write:
def l = []
sql.execute("select id from table where time > ?", time) { row ->
    l << row[0]
}
This will open a connection to the database, create a statement and a result set, and then run l << row[0] for each row the DB returns. Note that the code runs inside of sql.execute(), but it can access local variables (l) and variables defined by sql.execute() (row).
With this kind of code, you can even generate the result of a HTTP request on the fly without keeping much of the page in RAM at any time. In my case, I'd stream a 2MB document to the browser using only a few KB of RAM and the browser would then chew 83s to parse this.
This is roughly what the iterator pattern allows you to do. In many cases this breaks down on the interface between your application and the database. Technologies like LINQ even have solutions that can send back code to the database.
I've found it easier to use an interface resolver than a deep callback where it's hooked up through several classes. MS has a much fancier version than mine called Unity. It provides a much cleaner way of accessing classes that should not be tightly coupled:
http://www.codeplex.com/unity