I'm having trouble with my Matomo installation on OpenShift. This Matomo is rather unstable.
When I start the pod, Matomo runs fine for a (very) short time. Then Matomo starts to respond with HTTP 504 regularly, eventually becoming unable to process any request successfully and responding with 504 only.
My guess is that Matomo attempts (lots of) communication with the internet. My OpenShift cluster is not allowed to communicate with the internet. Could this be the cause of the trouble?
What is the recommended setup for Matomo in general, and for Matomo on OpenShift in particular?
I recently updated to Matomo 4. It looks a tiny bit more stable, but there is still a long way to go before it is ready for production use.
Best Regards
Sebastian
Not sure if you're still having the issue, but if your container has limited internet connectivity, there might be something to it. AFAIK Matomo has the Provider plugin enabled by default, which performs an external DNS lookup. That takes place as part of tracking-request processing and could fail in your case. Check this: Matomo optimisation how-to
Tracker API: if the 'Provider' plugin is activated in your Matomo, the Internet provider of each visitor is detected by doing a reverse DNS lookup, which adds a few milliseconds of overhead.
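If the Provider plugin does turn out to be the culprit and you don't need provider detection, one option is to deactivate it from the Matomo console. A minimal sketch, assuming you can run the console from the Matomo root inside the pod:

./console plugin:deactivate Provider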
Other than that, I suppose you're talking about timeouts on tracking requests; you could enable tracker debug mode (in config.ini.php) to see the output of request processing.
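For reference, tracker debug mode is toggled in the [Tracker] section of config.ini.php (this is a standard Matomo setting):

[Tracker]
debug = 1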
If this is about reporting queries, then it may be a broader subject, as it could be an issue with archiving timeouts.
If this reaches you and you get the chance to respond, please specify whether this is about tracking or reporting requests :)
We are using Cloud Tasks to call an "on-prem" API gateway (using HTTP requests). This API gateway (IBM API Connect) sits in front of an on-prem system (Oracle). This back-end system can at times be very slow (>5 s).
We are desperately trying to increase the throughput by "adjusting" the Cloud Tasks queue settings (like --max-dispatches-per-second etc.):
gcloud tasks queues update queue-1 --max-dispatches-per-second=8 --max-concurrent-dispatches=16
But all we see when we "crank up" the Cloud Tasks settings is that yellow triangle telling us that we are "enforced" to a lower rate due to "system resources".
My understanding is that the yellow triangle shows up due to "errors" from the API gateway we call. Basically, GCP/Cloud Tasks reacts "by itself" based on the return codes/errors/timeouts/latency etc. from the API endpoint we are calling, with the result being a very low rate/throughput. Is this understanding correct? Can someone verify?
The GUI does say "or because currently there is no instance available to execute a request". What instance are they talking about? To me that suggests the possibility that "GCP-specific" resources come into the picture here and affect the "enforced rate". Or?
Anyway, any help/insight would be appreciated.
Thanks
The error message you are seeing can be prompted by either of the two things you mention: "enforced rates" or "lack of GCP resources at the time of the request".
The "enforced rates" that Cloud Tasks is referring to are the ones mentioned here. As you suspected, this happens when the server is overloaded and returns too many errors. When it does, Cloud Tasks acts by itself and slows down execution until the errors stop.
The "currently there is no instance available to execute a request" message you are seeing is that GCP does not have resources to create the request. Remember that cloud tasks is a managed service so this means that requests are created by GCP fully managed compute engine instances. This is a bit rare, although it does happen from time to time.
In order to make sure which of these 2 issues is the one you are running into, I would recommend you to check your Stackdriver logs and see if you are getting a high amount of errors on the Cloud Tasks filter as if this is the case, most likely you are running into the "Enforced rates" territory.
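As a rough sketch of that check from the CLI (the exact log filter depends on what your project exports; the resource type here is an assumption on my part, so adjust it to what you see in the Logs Explorer):

gcloud logging read 'resource.type="cloud_tasks_queue" AND severity>=ERROR' --limit=50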
Hope you find this useful!
About two weeks ago, a Chrome update crippled users of my Angular app. I load a lot of data, but the entire single-page application used to load in under 4 seconds; after the Chrome update two weeks ago, every single user went to over 40 seconds. I did not experience the problem at first, but when I upgraded Chrome from 63.0.3239.132 to 64.0.3282.167, it began for me as well.
Somewhere between Chrome 63.0.3239.132 and 64.0.3282.167, there was a change that basically slowed my Angular app to a crawl. It affects loading and rendering across the board and has made the entire app almost unusable. I've been looking for the issue for a few days with no joy.
Does anyone have any insight or recommendation on what could cause such a performance degradation?
Here is a screenshot of my network tab. All of this used to be very fast before the Chrome update and now it just crawls.
If I set $httpProvider.useApplyAsync(true), it alleviates the problem, but my application is huge and this causes a lot of erratic behavior in a five-year-old application.
I'm not sure if this is still an issue, but I know that Google has continued to ramp up security measures in Chrome. This is especially true for HTTPS, and I believe Google is pushing for everything to move to HTTPS. Certificates that are not clean (there are several criteria for this) present problems and may require extra processing. I believe there is an add-on (or built-in panel) for Chrome DevTools that can break out the TLS processing to show you more detail.
A high TTFB reveals one of two primary issues: either bad network conditions between the client and server, or a slowly responding server application.
To address a high TTFB, first cut out as much network as possible. Ideally, host the application locally and see if there is still a big TTFB. If there is, then the application needs to be optimized for response speed. This could mean optimizing database queries, implementing a cache for certain portions of content, or modifying your web server configuration. There are many reasons a backend can be slow. You will need to do research into your software and figure out what is not meeting your performance budget.
If the TTFB is low locally then the networks between your client and the server are the problem. The network traversal could be hindered by any number of things. There are a lot of points between clients and servers and each one has its own connection limitations and could cause a problem. The simplest method to test reducing this is to put your application on another host and see if the TTFB improves.
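One quick way to compare those timings is curl's built-in timers; run something like this against both the local and the remote deployment (the URL is a placeholder):

curl -s -o /dev/null -w "DNS: %{time_namelookup}s  TTFB: %{time_starttransfer}s  total: %{time_total}s\n" https://example.com/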
When we switch the server code to a new version in the Google App Engine console, lots of new instances need to be spawned. Because of that, we see some 500 errors and long response times.
What is the best practice to mitigate those problems?
500 responses have not always occurred for requests during a deployment. Previously, the new version of your app was able to take over traffic from the old one without interruption; however, that seemed to stop quite some time ago. These 500s don't appear to reach your application at all (no requests show up in your logs, and they aren't served by your application's 500 page). The time window also seems to vary, from none at all up to a minute.
I'm not aware of any indication that the App Engine team is looking at solving this, although it seems like a bug (or at the least a reasonable feature request).
To get around this issue, we generally deploy to a different version and switch that to be the default version. Once that is serving traffic, we deploy to the previous version, then switch it back to default. This allows customers to be served uninterrupted, but it does require (at least in Java land) a new build.
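With the current gcloud CLI, that flow looks roughly like this (version names are placeholders; older SDKs did the same thing through appcfg and the admin console):

gcloud app deploy --version=v2 --no-promote
gcloud app services set-traffic default --splits=v2=1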
In addition to the other person's answer re: warmup requests, you should also look at traffic splitting - "App Engine's Traffic Splitting tool allows you to roll out features for your app slowly over a period of time, similar to what Google does when rolling out a new feature over a few days or weeks. Traffic Splitting also allows you to do A/B Testing. Traffic Splitting works by splitting incoming requests to different versions of your app."
docs here https://developers.google.com/appengine/docs/adminconsole/trafficsplitting
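With today's gcloud CLI, a gradual rollout along those lines would look something like this (version names are placeholders):

gcloud app services set-traffic default --splits=v1=0.9,v2=0.1 --split-by=ip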
Set up warmup requests to load your application before actual traffic is directed to the instance (a minimal Python sketch follows the links below):
Python: https://developers.google.com/appengine/docs/python/config/appconfig#Warmup_Requests
Java: https://developers.google.com/appengine/docs/java/config/appconfig#Warmup_Requests
Go: https://developers.google.com/appengine/docs/go/config/appconfig#Inbound_Services
PHP: https://developers.google.com/appengine/docs/php/config/appconfig#Warmup_Requests
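As a minimal sketch of what this looks like on the first-generation Python runtime, first register the warmup inbound service in app.yaml:

inbound_services:
- warmup

then handle /_ah/warmup in your application (the route and the inbound_services entry come from the docs above; the handler body is just an illustration):

import webapp2

class WarmupHandler(webapp2.RequestHandler):
    def get(self):
        # Prime caches, open connections, load config, etc.,
        # before live traffic is routed to this instance.
        self.response.write('warmed up')

app = webapp2.WSGIApplication([('/_ah/warmup', WarmupHandler)])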
I have a host that is unlimited in everything except the MySQL database, which allows only a limited number of simultaneous connections. I want to know if there is any way to cache the posts on the server/host, so it doesn't load them from the database every time a visitor loads the page.
This would not be a problem with little traffic, but I have a lot of traffic, and it crashed my database yesterday.
Thank you.
Use a caching solution. Memcached is one of the most widely used.
There are also some that do complete page caching - Resin and Varnish are quite popular.
In fact, there is a Varnish plugin for WordPress at http://wordpress.org/extend/plugins/wordpress-varnish/installation/.
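All of these implement the same cache-aside pattern. Here is a rough sketch of the idea in Python with pymemcache (load_post_from_database is invented for illustration; in WordPress you'd get this behavior from a caching plugin rather than writing it yourself):

from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def get_post(post_id):
    key = "post:%d" % post_id
    cached = cache.get(key)
    if cached is not None:
        return cached  # served from memory; no database connection used
    post = load_post_from_database(post_id)  # hypothetical DB call, returns bytes/str
    cache.set(key, post, expire=300)  # keep it for 5 minutes
    return post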
I'm building a mobile application in VB.NET (Compact Framework), and I'm wondering about the best way to approach potential offline interactions on the device. Basically, the devices have cellular and 802.11 but may still be offline (where there's poor reception, etc.). A driver will scan boxes as they leave his truck, and I want to update the new location: immediately if there's a network signal, or queued and handled later if the device is offline. It made me think, though, about how to handle offline-ness in general.
Do I cache as much data to the device as I can so that I can use it while offline - essentially, each device would have a copy of the (relevant) production data on it? Or is it better to disable certain functionality when offline, so as to avoid the headache of synchronization later? I know this is a pretty specific question that depends on my app, but I'm curious to see whether others have taken this route.
Do I build the application itself to act as though it's always offline, submitting everything to a local queue of sorts that's owned by a local class (essentially abstracting away the online/offline question), and then have that class submit things to the server as it can? What about data lookups - how can those be handled in a "semi-live" fashion?
Or should I have the application attempt to submit requests to the server directly, in real time, and handle it if the request itself fails? I can see the potential problem of making the user wait for the timeout, but is this the most reliable way to do it?
I'm not looking for a specific solution, but really just stories of how developers have accomplished this with the smoothest user experience possible, ideally with a link to a how-to or a here's-what-to-consider or something like that. Thanks for your pointers on this!
We can't give you a definitive answer because there is no "right" answer that fits all usage scenarios. For example if you're using SQL Server on the back end and SQL CE locally, you could always set up merge replication and have the data engine handle all of this for you. That's pretty clean. Using the offline application block might solve it. Using store and forward might be an option.
You could store locally and then roll your own synchronization with a direct connection, web service, or WCF service used when a network is detected. You could use MSMQ for delivery.
What you have to think about is not what the "right" way is, but how your implementation will affect application usability. If you disable features due to lack of connectivity, is the app still usable? If you have stale data, is that a problem? Maybe some critical data needs to be transferred when you have GSM/GPRS (which typically isn't free) and more would be done when you have 802.11. Maybe you can run all day with lookup tables pulled down in the morning and upload only transactions, with the device tracking what changes it's made.
Basically it really depends on how it's used, the nature of the data, the importance of data transactions between fielded devices, the effect of data latency, and probably other factors I can't think of offhand.
So the first step is to determine how the app needs to be used, then determine the infrastructure and architecture to provide the connectivity and data access required.
I haven't used it myself, but have you looked into the "store and forward" capabilities of the CF? It may suit your needs. I believe it uses an Exchange mailbox as a message queue to send SOAP packets to and from the device.
The best way to approach this is to always work offline, then use message queues to handle sending changes to and from the device. When the driver marks something as delivered, for example, update the item as delivered in your local store and also place a message in an outgoing queue to tell the server it's been delivered. When the connection is up, send any queued items back to the server and get any messages that have been queued up from the server.
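As a sketch of that outbox pattern (in Python rather than VB.NET/CF, with all names invented for illustration):

import json
import sqlite3

class Outbox:
    # A local store-and-forward queue that survives restarts via SQLite.
    def __init__(self, path="outbox.db"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, payload TEXT)")

    def enqueue(self, message):
        # Called by the app for every change, whether online or not.
        self.db.execute(
            "INSERT INTO outbox (payload) VALUES (?)", (json.dumps(message),))
        self.db.commit()

    def flush(self, send):
        # Called whenever connectivity is detected. 'send' delivers one
        # message to the server and raises on failure, which stops the
        # loop and leaves the remaining messages queued for the next try.
        rows = self.db.execute("SELECT id, payload FROM outbox ORDER BY id").fetchall()
        for row_id, payload in rows:
            send(json.loads(payload))
            self.db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
            self.db.commit()

On delivery, the app would call something like outbox.enqueue({"box": "12345", "status": "delivered"}) and let a background connectivity check call flush when the network comes back.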