I'm building an app which should also work offline (of course with stale data).
So I thought: if the user is online, I'll query the datastore to fetch fresh data and store it in memcache, and if he's offline, I'll serve the recently fetched data from memcache. But then it occurred to me that memcache in GAE is most probably implemented on the server side. Am I right?
Edit: I made my browser work offline and reloaded the page; nothing happened and nothing appeared in my logs. But then I disabled my laptop's WiFi instead, and somehow it started working: I got GET 200 requests in my log. Does this mean memcache is client-side?
I got the answer: memcache is server-side caching. It worked when I disabled the WiFi on my laptop because my laptop ran the dev server and also hosted the memcache. When my browser was set to offline, it couldn't send requests to anything, so it failed.
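For reference, since memcache is server-side, the usual way to use it is cache-aside on the server: check memcache first, fall back to the datastore, and then populate the cache. A minimal sketch, assuming the Python runtime and a hypothetical WeatherReport model:

```python
# Sketch of server-side cache-aside with GAE memcache (Python runtime).
# "WeatherReport" and the key format are hypothetical examples.
from google.appengine.api import memcache
from google.appengine.ext import ndb


class WeatherReport(ndb.Model):
    data = ndb.JsonProperty()


def get_report(report_id):
    cache_key = 'report:%s' % report_id
    report = memcache.get(cache_key)  # served from the server-side cache
    if report is None:
        # Cache miss: read from the datastore and populate the cache.
        report = WeatherReport.get_by_id(report_id)
        if report is not None:
            memcache.add(cache_key, report, time=600)  # cache for 10 minutes
    return report
```

This cache lives inside the App Engine runtime, so an offline browser never reaches it; offline support has to be handled on the client side instead.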
I'm a backend developer, and I've developed a WebApi service with Asp.Net Core.
I've also developed an API gateway using the Ocelot library.
On the front end, the developers use React with Axios as the HTTP client.
When the request is sent to the API method directly, it works fine.
But when the request goes through the ApiGateway, the response time is much longer (more than 1-2 minutes).
This delay occurs in Chromium-based browsers, e.g. Google Chrome, Microsoft Edge and Opera.
Everything is fine in Mozilla Firefox.
There is also no problem from Postman or JMeter.
What could be the reason for such behavior?
And where should I look for a solution, on the back end or the front end?
With API Gateway it requires going from the client to API Gateway,
which means leaving the application and going out to the internet,
then back to your application to go to your other Instance, then back
to API Gateway, which means leaving your application again and then
back to your first instance.
So this additional latency is expected. The only way to lower the
latency is to add in API caching, which is only going to be useful
if the content you are requesting is static and not updating
constantly. You will still see the longer latency when the item is
removed from the cache and needs to be fetched from the system, but
it will lower the latency of most calls.
So I guess the latency is normal, which is unfortunate.
As for why it responds well in Mozilla Firefox, that is probably down to differences in how the browsers handle the requests internally.
The above is my point of view; I hope it helps you. If my understanding is wrong, please correct me, thanks.
I followed this guide: Quickstart for Python. After deploying the "hello, world" app to App Engine (flex), I went to [project].appspot.com and noticed that it is very slow. I tried testing it on different devices and under different network conditions and I still have the same issue. I went to Cloud Trace and can't build a report due to a lack of traces. It is also slow over both HTTP and HTTPS. I deployed to us-central and I am in Texas.
I have attached some logs from Logging and a snippet from Google Chrome's Dev Tools to show the slowness.
The images don’t show anything especially unexpected. Response time will vary depending on the location of the client and its distance to the region of the App Engine Flex instance. Responses that take an especially long time are likely due to a cold boot.
You are probably using a free instance of App Engine. Because it's free, the lifespan is very short, therefore it shuts down after a short amount of time without requests. When you send a new request after some time, the instance first has to start up and then process the request, which takes time. You can keep pinging the app to keep the instance alive. A similar question is answered here.
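If you go the pinging route, here is a minimal sketch of a keep-alive pinger you could run from any always-on machine (the URL is a placeholder):

```python
# Keep-alive pinger: requests the app every few minutes so the instance
# is less likely to be spun down. The URL below is a placeholder.
import time
import urllib.request

APP_URL = 'https://your-project.appspot.com/'

while True:
    try:
        urllib.request.urlopen(APP_URL, timeout=10).read()
    except Exception as exc:
        print('ping failed: %s' % exc)
    time.sleep(300)  # ping every 5 minutes
```

App Engine's cron service can do the same job from inside the platform if you would rather not run an external script.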
After following the instructions to migrate a GAE app from Master/Slave to the High Replication Datastore (HRD), the app is returning nothing for datastore reads. I am able to see the data using the "Datastore Viewer" and it is there (migrated successfully). I have not changed any code. I'm just wondering whether there's anything I need to set or configure for the datastore reads to work. I don't see any errors in the "Log Console" on my dev machine and no errors in the server's "Logs".
The issue resolved itself after a few days, and the app is now returning the correct data. It may have been just a glitch from the migration. I have another GAE app that's stuck in the middle of the migration; searching on SO, I have found others experiencing the same problem.
I've been looking at CloudFlare as a CDN service for my Google App Engine hosting, and as a student, cost is always an issue (i.e. free services only). I read on the CF blog that when the origin server is down, CF will serve a cached version of the website to users from its own servers.
So if we hit the GAE quota limit, is the server considered "down"? Will CF display the cached website? I don't plan to have a lot of dynamic content, so serving an entire cached website is not too much of an issue for me.
If the answer to my first question is no, is it possible to get CF to serve its cached website content automatically once GAE hits any quota limit? I know it's probably unlikely, but I just wanted to throw this question out.
According to CloudFlare's wiki, the Always Online feature will return a cached page only if the backend server is unavailable or returns a response code of 502 or 504. When you hit quota limits App Engine itself will generally still be available, so whether the cache works depends on the response code in your case.
If your app exceeds its bandwidth or instance hour quota, App Engine will return a 403 Forbidden response code. It is possible to customize the content of the error response, but not the code. It seems then that CloudFlare will not serve a cached page in this case.
However, if your app hits an API usage quota, your code will receive an exception and you can choose to return one of those 50x codes and trigger the cache.
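As a rough illustration in the Python runtime, you could catch the over-quota exception from an API call and map it to one of those codes yourself (the handler and the mail call below are just placeholder examples):

```python
# Sketch: turn an API over-quota exception into a 502 so CloudFlare's
# Always Online sees one of the response codes it acts on (502/504).
# The handler and the mail call are placeholder examples.
import webapp2
from google.appengine.api import mail
from google.appengine.runtime import apiproxy_errors


class NotifyHandler(webapp2.RequestHandler):
    def get(self):
        try:
            mail.send_mail(sender='app@example.com',
                           to='user@example.com',
                           subject='Status update',
                           body='Hello from the app.')
            self.response.write('sent')
        except apiproxy_errors.OverQuotaError:
            # Mail API quota exhausted: answer with 502 instead of the default error.
            self.error(502)
            self.response.write('Temporarily over quota, please retry later.')
```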
I'm not sure if this particular case will work for CloudFlare because of the error code that App Engine returns (we are working on some enhancements for Always Online, but it really won't tackle 403 errors).
It does appear that App Engine allows you some customization of the error pages:
Tip: You can configure your application to serve a custom error page when your application exceeds a quota. For details, see Custom Error Responses documentation for Python and Java.
I'm mostly a PHP developer. Now I have a client steering me in the direction of a standalone app that runs on Mac, Windows, and Linux. It deals with pilot weather data, and he wants it to work offline during flights and then sync fresh data over the airport WiFi. Immediately I thought of Google Chrome Apps for this.
I need to know what the storage size limitation is for Google Chrome databases when used specifically in Google Chrome Apps. I've been having trouble finding this information.
Some extra, related questions are:
When someone clears their Chrome browser cache, does this mean their Databases, Local Storage, and Application Cache are wiped clean? Or is it that only some of those resources are cleared? My fear is that someone clears their cache and there goes all my offline app storage in the Google Chrome App.
I hate to sound dumb, but is "WebSQL" different than Chrome Databases?
Why would I use Local Storage versus a Chrome Database? (You can see the difference when you press CTRL+SHIFT+I in Chrome and then click the Resources tab.)
The storage limit is 5 MB by default. This limit can be removed with the unlimitedStorage permission (more details here).
I don't want to clear my cache to test it, but I am pretty sure that storage is not cleared. There is a related issue report which says that there is currently no way to delete that storage explicitly.
WebDatabase, WebSQL, and "a database API" all refer to the same thing: the web database API, which is currently based on SQLite.
A web database is pretty much a full-scale database, while localStorage is just a hash map (an associative array) that stores key-value pairs.