Firebase: Realtime Database - extreme data usage - database

So we're working with the Firebase Realtime Database and have encountered some issues regarding data usage. Currently the total database data comes to a clean 4 MB, and all we're really GET'ing with every single call made to the RTDB is a single boolean and then a set of two strings, basically.
Structure is similar to this:
- Root
  - child 1
    - child 1.2
      - child 1.3
        - value : true
  - child 2
    - child 2.1
      - child 2.2
        - child 2.3
          - amount : 1
Now all we do is attach our listener straight onto child 1.3 from the root and get its value with
firebaseRTDatabase.child("child1.1/child1.2/child1.3/value").addValueEventListener()
Then, if that value is true, we listen to the second data set like so:
firebaseRTDatabase.child("child2/child2.1/child2.2").limitToLast(1).addChildEventListener()
Then, whenever that finds a new child, the app handles it.
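For reference, the two listeners above boil down to roughly this (written here with the Firebase JavaScript SDK just for illustration; handleNewChild is a made-up name for whatever the app does with a new entry):
// Equivalent of the two listeners described above, namespaced (v8-style) API.
var db = firebase.database();
db.ref('child1.1/child1.2/child1.3/value').on('value', function (snapshot) {
  if (snapshot.val() === true) {
    // Only start listening for new children once the flag is true,
    // limited to the most recent entry.
    db.ref('child2/child2.1/child2.2').limitToLast(1).on('child_added', function (child) {
      handleNewChild(child.val());
    });
  }
});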
The issue now is: with the data that the "usage" tab gives us and a bit of math, it seems that every connection made consumes about 330 MB, almost our full database. It's as if every connection downloads the whole table, but that shouldn't really be the case, right?
I've also noticed that at least once an hour, even when idle and unused, something is still triggered, without any events happening. I thought this had to do with the Firebase token refresh and Auth stuff, but manually refreshing my token after 59 minutes does not solve it. This means we get a classic Firebase handshake once an hour on every device, which of course also leads to some big usage, as this handshake feels bigger than any data we are actually trying to retrieve.
Anyone have any thoughts on this? Does it indeed seem strange, or is this just expected and a normal usage amount for RTDB?
So, in short:
Is there a way to make sure we don't download anything off RTDB that we don't need?
Is it plausible that we are downloading the full extent of the table or child at any one point?
Is there a way around the hourly handshake happening?
I'm not sure what info to give you. I think this is sufficient but feel free to ask for stuff!
Thanks in advance peeps! :)

Related

Salesforce Apex CPU Limit

Currently we are having an issue with the CPU limit. We have a lot of processes that are most likely not optimized; I have already combined some processes for the same object, but it is not enough. I am trying to understand the logs right now - as you can see on the screenshots, there is one process that is being called multiple times (I assume once for each created record). Even if I create, for example, 60 records in one operation/DML statement, the Process Builder still gets called 60 times? (This is what I think is happening.) Is that the problem we are having right now? If so, is there a better way to do it? Because right now we need the updates from PB to run, but I expected it would get bulkified or something like that. I was also thinking there might be some looping between processes. If there is more information you need, please let me know. Thank you.
Well, yes, the Process Builder will be invoked 60 times, one record at a time. But that shouldn't be your problem. The final update / creation of child records / email send (or whatever your action is) will be bulkified; it won't save one record at a time. If the process calls some Apex actions, they're supposed to support passing a collection of records, not just a single record.
You may be looking in the wrong place. CPU time suggests code problems, not config (flow, workflow, process builder... although if you're doing updates of fields on "this" record, it's possible you'd benefit from before-save flows). Try to compare the timestamps related to METHOD_BEGIN and METHOD_END for triggers and code methods (including invocable action / process plugin interfaces).
Maybe there's code that doesn't need to run because key fields didn't change and there's nothing to recalculate or roll up. Hard to say without seeing the debug log.
Maybe the operation doesn't have to be immediate. Think about whether you can offload some of the work to "scheduled actions", "time-based workflows" or, in Apex terms, @future, Batchable, Queueable. But they'd have to be relatively safe to run: if there's an error, it won't be displayed to the user because the action runs in the background, so you'd need to handle errors manually (send an email, create a record, make a Chatter post or bell notification).
You could try uploading the log to https://apextimeline.herokuapp.com/ and try to make sense of that Gantt-chart-like output. Or capture the log the "pro" way, with https://help.salesforce.com/s/articleView?id=sf.code_dev_console_solving_problems_using_system_log.htm&type=5 or https://marketplace.visualstudio.com/items?itemName=financialforce.lana (you'll likely need a developer's help to make sense of it).

My website throws the error "Aw, snap" after some time (Probably because of Websockets) - React

My website receives a lot of data through WebSockets, as I display data in real time (I would say around 30 objects/second). It works well, but after some time (I don't know exactly how long, but something like 30 minutes to an hour) I get the "Aw, snap" error.
What I think is causing the error: I have a table, and whenever I receive one specific type of object, I add it to the table. I receive at least 15 of these objects every second, and each one loads some text but also an image. I think the images are what's causing the error.
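Roughly what I'm doing looks like this (simplified, with made-up names; the real component has more columns and data):
// Simplified sketch of the current approach. Every matching WebSocket message
// is appended to state, so the rows array (and the rendered DOM, including
// one <img> per row) grows without bound.
import React, { useEffect, useState } from 'react';

function LiveTable({ socketUrl }) {
  const [rows, setRows] = useState([]);

  useEffect(() => {
    const socket = new WebSocket(socketUrl);
    socket.onmessage = (event) => {
      const obj = JSON.parse(event.data);
      if (obj.type === 'row') {
        setRows((prev) => [...prev, obj]); // never trimmed
      }
    };
    return () => socket.close();
  }, [socketUrl]);

  return (
    <table>
      <tbody>
        {rows.map((row) => (
          <tr key={row.id}>
            <td>{row.text}</td>
            <td><img src={row.imageUrl} alt="" /></td>
          </tr>
        ))}
      </tbody>
    </table>
  );
}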
Is there anything I can do to avoid this error? I really need the data and the images, but I do not want the page crashing after some time.

GOOGAPPUID is not between 0 and 999

According to the documentation for traffic splitting, a cookie will be set to control traffic splitting, with a number between 0 and 999. See https://cloud.google.com/appengine/docs/developers-console/#traffic-splitting
This has been working fine for quite some time. But now whenever I clear my cookies and reload my solution, the GOOGAPPUID is no longer a number between 0 and 999.
Instead I am now getting a value like:
xCgkQ3wMg28WhrgU
xCgkQjAcg9sehrgU
This is a screenshot of my cookie information: http://screencast.com/t/z2fjR4xgYfB
I cannot find any information about a change in the traffic splitting, so I am a bit puzzled as to why this happens. Does anybody know, or have an idea why?
thanks,
Thomas

JMeter Think Time

Apologies if this request is similar to others - I am new to JMeter and have searched for other relevant posts but couldn't find anything - or maybe I just didn't understand them!
I'm performance testing a system with a web-based application. The front-end system will be processing records submitted into the system via MQ - the front end allows the user to pick up a record from the queue, validate some details, make changes and submit the changes.
There will be 20 users using the front end to do this message validation, update and submission.
Each user is expected to need 30 seconds to pick a message from the queue, make changes and resubmit - so we are expecting 1 user to process 120 records/hour, and therefore 20 users to process 2400 records/hour.
Picking the record up off the queue, changing it and submitting the changes will be done via 3 individual web pages.
So - think time across the 3 pages has been defined as 24 seconds (leaving 6 seconds of the 30-second limit for rendering, server responses, DB calls etc.).
However, I don't know how to specify this within JMeter. From my reading I can see that I can add a Timer as a parent to a sampler, and I assume I can add a Timer as a parent of the Recording Controller? - but I need to be able to specify that the 24-second think time is spread across those 3 different pages.
I read a post elsewhere suggesting that if I record using the proxy after adding a Gaussian Random Timer as a child of the Test Plan (parent to everything else), then the HTTP proxy will record the think time as a ${T} variable in the Gaussian Random Timer. I tried this and it didn't work (and I don't want to rely on this anyway - I'd like to understand and change think time properly rather than relying on JMeter to do it for me).
To reiterate - 20 users, 30 seconds for 1 user to complete a transaction, think time defined as 24 seconds - I am struggling with which Timer to use and where to put it so that the think time is spread across the samplers that equate to the GETs associated with the 3 pages the user will navigate through.
Apologies for the lengthy post - I just wanted to be clear and concise.
Many thanks in advance,
As per the JMeter Timers documentation:
Note that timers are processed before each sampler in the scope in which they are found; if there are several timers in the same scope, all the timers will be processed before each sampler.
Timers are only processed in conjunction with a sampler. A timer which is not in the same scope as a sampler will not be processed at all.
To apply a timer to a single sampler, add the timer as a child element of the sampler. The timer will be applied before the sampler is executed. To apply a timer after a sampler, either add it to the next sampler, or add it as the child of a Test Action Sampler.
Now, regarding "what timer to use", there are 2 scenarios:
1. Virtual-user-oriented scenario - when you try to simulate N users working together.
2. Goal-oriented scenario - when you try to produce N hits per second of load.
In case of scenario 1, even a Constant Timer can be quite enough; besides, it will provide repeatability of results. See the quote above for information on where to put your timer(s).
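For example, assuming you split the 24 seconds evenly, you could add a Constant Timer with an 8000 ms delay as a child of each of the three page samplers: 3 x 8 s = 24 s of think time per record. If you want some variation around that average, a Gaussian Random Timer per sampler with an 8000 ms Constant Delay Offset (plus a small deviation) would behave similarly.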
In case of scenario 2 you'll need the Constant Throughput Timer. If 20 users process 2400 records per hour and each record assumes 3 web page calls, it means that 7200 requests will be made in one hour, which in turn stands for 120 requests per minute (this is what you should enter into the timer's "throughput" field) or 2 requests per second.

Extend Store class to always execute a function after load on ExtJS

I am working on a project where we were asked to "patch" a system implemented in ExtJS 4.1.0 (they don't want a lot of time spent on development, as they will soon replace the system).
That system is used over a very slow and unstable network connection, so sometimes the stores don't get the expected data.
The first two things that come to my mind as patches are:
1. Every time a store is loaded for the first time, wait 5 seconds and try again. Most times, a page refresh fixes the problem of stores not loading.
2. Somehow detect that no data was received after loading a store and try to get it again.
These patches should be executed only once, to avoid infinite loops or unnecessary recursion, given that sometimes it's OK for stores not to get any data back.
I don't like this kind of solution, but it was requested by the client.
This link should help with your question.
One of the posters suggests adding the code below in an overrides.js file which is loaded in between the ExtJS source code and your application's code.
// Listen for request exceptions on all Ext.data.Connection instances.
Ext.util.Observable.observe(Ext.data.Connection);
Ext.data.Connection.on('requestexception', function (dataconn, response, options) {
    // In this example the error response is simply echoed into the page;
    // you could log it and retry the load instead.
    if (response.responseText != null) {
        window.document.body.innerHTML = response.responseText;
    }
});
Using this example, instead of echoing the error you could log the error details for debugging later and try the load again. I would suggest adding some additional logic so that it only retries a certain number of times; otherwise it could run indefinitely while the browser window is open, which would more than likely crash the browser and put additional load on your server.
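For the retry-limit part, a minimal sketch (assuming an ExtJS 4 store; the function name and the limit are made up) could look something like this:
// Minimal sketch: reload a store a limited number of times when the load
// fails or returns no records. loadWithRetry and maxRetries are made-up names.
var maxRetries = 3;

function loadWithRetry(store, attempt) {
    attempt = attempt || 0;
    store.load({
        callback: function (records, operation, success) {
            var empty = !records || records.length === 0;
            if ((!success || empty) && attempt < maxRetries) {
                // Wait 5 seconds and try again, as in patch idea 1 above.
                Ext.defer(function () {
                    loadWithRetry(store, attempt + 1);
                }, 5000);
            }
        }
    });
}

// Usage: loadWithRetry(someStore); instead of someStore.load();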
Obviously the root cause of the issue is not the code itself but your slow connection. I'd try to address that rather than anything else.
