I have a Node.js app on the MongoDB cloud platform which will be used for posting 1 million messages to a topic in GCP Pub/Sub. Since the platform does not support the npm package @google-cloud/pubsub, we implemented it using the REST API reference for Pub/Sub. Upon load testing the app, I can see each message takes 50 seconds to post to the topic; ideally it should take less than 5 seconds. It takes around 30 seconds for the access_token API call and 20 seconds for the message-posting API call. Since each message posting is an independent event, we cannot maintain a session to store the access_token and reuse it, and the API_KEY authentication method is not available for GCP Pub/Sub. Is the API method for GCP Pub/Sub very slow compared to using the library @google-cloud/pubsub?
Can anyone suggest a solution to improve the performance of GCP Pub/Sub using the APIs?
The Pub/Sub client libraries are heavily optimized in several ways. The first is the use of the gRPC protocol instead of the REST API. Then there is message aggregation before publishing to Pub/Sub (500 ms of batching delay by default). Finally, there are various async mechanisms to parallelize the processing.
So, a huge amount of great work has been done by the client library teams, and it is hard (or expensive) to reproduce on your side. But you can; the sources are public, so have a look at the client libraries!
The 30 s for the access_token retrieval is far too long. Are you sure you don't have a network issue? In any case, this token is valid for one hour. If you can reuse it in your subsequent calls, you will save a lot of time!
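To illustrate both points, here is a minimal sketch in Node.js. It assumes getAccessToken() stands for whatever token call you already make (returning access_token and expires_in) and that a global fetch() is available (Node 18+); PROJECT_ID and TOPIC are placeholders:

// Cache the OAuth token for its lifetime instead of fetching it per message.
let cached = { token: null, expiresAt: 0 };

async function getToken() {
  // Reuse the cached token until shortly before it expires (~1 hour).
  if (Date.now() < cached.expiresAt - 60000) return cached.token;
  const res = await getAccessToken(); // your existing token call
  cached = {
    token: res.access_token,
    expiresAt: Date.now() + res.expires_in * 1000,
  };
  return cached.token;
}

async function publishBatch(payloads) {
  // One REST publish call can carry up to 1,000 messages, which amortizes
  // the HTTP round trip much like the client library's aggregation does.
  const token = await getToken();
  const url = 'https://pubsub.googleapis.com/v1/projects/PROJECT_ID/topics/TOPIC:publish';
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + token,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      messages: payloads.map(function (p) {
        return { data: Buffer.from(JSON.stringify(p)).toString('base64') };
      }),
    }),
  });
  if (!res.ok) throw new Error('publish failed: ' + res.status);
  return res.json(); // { messageIds: [...] }
}

With a cached token and a few hundred messages per publish call, the REST API gets much closer to the client library's throughput, even without gRPC.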
Google is deprecating Cloud IoT, so that is not an option.
https://cloud.google.com/iot/docs/release-notes
Cloud IoT Core will be retired on August 16, 2023. After August 15, 2023, the documentation for IoT Core will no longer be available.
I would like to use Firebase's Firestore for my backend. It takes all the hassle out of keeping a server up and running, scalability, etc.
I managed to send data after login and authentication from an ESP32-S3 using ESP-IDF in C (note: not Arduino, and not C++), and would like to know if I can instead use a WebSocket for the communication once the authentication is done, and if so, can you give me a code example or pointers?
With a WebSocket, I can send data to my own server hosted in Europe in less than 400 ms.
With Firestore, there is a large HTTP header that includes the API key and the auth token, a large amount of data, and quite a lot of handshaking over HTTPS before the data is eventually sent. This takes more than 1400 ms.
We are weighing items in a farming scenario and need to weigh very frequently, so 1400 ms even with fast internet is not acceptable.
So if I could still go with Firebase Authentication and Firestore for data, I could probably get it even faster than 400 ms if I could use a WebSocket client connection to the Firestore document store. I can use the refresh token if needed to renew the auth token every 3600 s, as required by Firebase, and thus keep the socket connection up. That renewal also takes quite long, but it is less of a hassle, since it happens only once every 55 minutes or so.
Any pointers or advice will be appreciated.
Firestore supports multiple SDKs and wire protocols, but none of them works over WebSockets. The closest you can get with Firestore is its REST API, which is documented here. It's not the easiest protocol to work with, though, so I recommend using the API Explorer that is built into the documentation to create examples for yourself.
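For illustration, here is a rough sketch of the REST write, shown in Node.js rather than C (the same HTTPS request can be issued from ESP-IDF's esp_http_client); PROJECT_ID and the weights collection are placeholders:

// A sketch of a Firestore REST write. PROJECT_ID and the "weights"
// collection are placeholders; idToken is the Firebase Auth ID token.
async function writeWeight(idToken, grams) {
  const url = 'https://firestore.googleapis.com/v1/projects/PROJECT_ID' +
              '/databases/(default)/documents/weights';
  const res = await fetch(url, {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer ' + idToken,
      'Content-Type': 'application/json',
    },
    // The REST API wraps every field in a typed value object.
    body: JSON.stringify({
      fields: {
        grams: { doubleValue: grams },
        at: { timestampValue: new Date().toISOString() },
      },
    }),
  });
  if (!res.ok) throw new Error('Firestore write failed: ' + res.status);
  return res.json(); // the created document, including its generated name
}

The ID token can be renewed with your refresh token against the securetoken.googleapis.com endpoint, as you described, so the hourly refresh does not have to interrupt the weighing loop.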
I'm making a request to an API that takes a very long time to execute (about 30 seconds to 4 minutes). Making the user wait is of course not a good idea, but I'm not sure which web technologies would allow the server to recontact the browser (a subscribe mechanism) automatically after the request has executed.
Any example code and pointers to the right technologies would be really appreciated. I'm using AWS APIs on the backend and Next.js / Redux on the frontend.
Thanks
I'm attempting to create a microservice on Google App Engine that is not intended to handle HTTP requests.
Instead, I was hoping to have a continuously running Python script that monitors a remote queue (RabbitMQ, to be precise) and sends an API call to another service as tasks are pushed to the queue.
I was wondering, firstly: is it possible to run a script upon deployment, one that did not originate with a user action/request?
Secondly, how would I accomplish this?
Thanks in advance for your time!
You can deploy your "script" as a manually scaled module (see https://cloud.google.com/appengine/docs/python/modules/) with exactly one instance. As the docs say, "When you start a manual scaling instance, App Engine immediately sends a /_ah/start request to each instance"; so just set that module's handler for /_ah/start to the handler you want to run, in the module's yaml file and in the WSGI app in the Python code, using whatever lightweight framework you like (webapp2, falcon, flask, bottle, or whatever else). The framework won't be doing much for you in this case save the one-off routing; see the sketch below.
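For illustration, the shape of the idea (shown in Node.js to match the other examples on this page, though the answer above targets the Python runtime; Express and consumeQueueForever() are stand-ins for your own framework and worker loop):

// When App Engine starts a manually scaled instance, it sends a
// GET /_ah/start request; that handler can kick off the long-running work.
// Express and consumeQueueForever() are assumptions for illustration.
const express = require('express');
const app = express();

app.get('/_ah/start', function (req, res) {
  res.status(200).send('started'); // acknowledge the start request
  consumeQueueForever();           // then begin the background loop
});

app.listen(process.env.PORT || 8080);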
Note that the number of free machine hours for manually scaled modules is limited to 8 per day (for the smaller B1 instance class; proportionally fewer for larger instance classes), so you may need to upgrade to paid-app status if you need to run for more than 8 hours a day.
Like @brant said, App Engine is designed to handle HTTP requests. It's not a perfect fit for background jobs unless you can wrap your logic into one HTTP request.
Further, App Engine will emit an error when the response times out; the timeout depends on your scaling settings. If you want to try it anyway, consider basic or manual scaling.
For this type of workload, I would suggest you use a VM.
I think there are a few problems with this design.
First, App Engine is designed to be an HTTP request processor, not a RabbitMQ message processor. GAE is intended for many small requests, not one long-running process.
Second, "RabbitMQ should not be exposed to the public internet, it wasn't created for such use case."
I would recommend that you keep the RabbitMQ clients on the same internal network as the RabbitMQ broker and have the clients send HTTP requests to App Engine; see the sketch below.
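A minimal sketch of that arrangement, assuming amqplib on a Node 18+ host inside the internal network; the queue name and the App Engine URL are placeholders:

// Consume from RabbitMQ on the internal network and forward each task to
// App Engine over HTTPS. Queue name and URL are placeholders.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost');
  const ch = await conn.createChannel();
  await ch.assertQueue('tasks');

  ch.consume('tasks', async function (msg) {
    if (msg === null) return; // consumer was cancelled
    try {
      const res = await fetch('https://your-app.appspot.com/process', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: msg.content.toString(),
      });
      if (!res.ok) throw new Error('HTTP ' + res.status);
      ch.ack(msg); // drop the task only once App Engine has accepted it
    } catch (err) {
      ch.nack(msg, false, true); // requeue on failure
    }
  });
}

main().catch(console.error);

This keeps the broker private, while App Engine only ever sees ordinary, short-lived HTTP requests, which is what it is built for.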
Say that in a Google App Engine application (Java) some requests take a very long time to complete; perhaps some even time out after 30 seconds. Does the GAE Console (Dashboard, Monitoring or similar) provide any way to list the URLs (or any other request properties, such as API method calls) associated with the long-running requests?
https://cloud.google.com/appengine/docs/python/tools/appstats
The Python SDK includes the Appstats library, used for profiling the RPC (Remote Procedure Call) performance of your application. An App Engine RPC is a roundtrip network call between your application and an App Engine Service API. For example, all of these API calls are RPC calls:
Datastore calls such as ndb.get_multi(), ndb.put_multi(), or ndb.gql().
Memcache calls such as memcache.get() or memcache.get_multi().
URL Fetch calls such as urlfetch.fetch().
Mail calls such as mail.send().
Actually, the old Dashboard (https://appengine.google.com/dashboard) provides the info I wanted in the Current Load box (bottom left), in the Avg Latency (last hr) column.
Suddenly my gapi client stopped sending request params to the endpoint.
This is how my code looks:
Load the gapi JS
https://apis.google.com/js/client.js?onload=initGoogleApis
In initGoogleApis:

function initGoogleApis() {
  var ROOT = HOST + "/_ah/api";
  gapi.client.load("userendpoint", "v1", function() {
    userendpoint = gapi.client.userendpoint;
  }, ROOT);
}
Now when I query userendpoint.<some function>, it is not passing the request params to the endpoint.
NOTE: it was working fine until this morning.
Is anyone else facing the same issue? (This might be due to some update in the gapi library.)
This issue has been resolved as of yesterday, 2014-09-23 08:00 (US Pacific Time).
Details about this issue can be found in the Google App Engine Downtime Notify Group
However, the 'Google APIs Client Library for JavaScript' is still in beta, and breaking changes have been rolled out more than once. Cloud Endpoints themselves are out of beta and can be used in production.
Now, to properly answer this question:
The simple advice here is: Don't use beta products for production applications.
To avoid problems with the Google APIs Client Library for JavaScript, just don't use it. You can write your own REST API client that will not be affected by changes to the JavaScript library from Google. I have done this for testing purposes a couple of times, and it is not hard, just a lot of work depending on how many endpoints you have and how complex they are.
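As a rough sketch, a hand-rolled call against the endpoint from the question above could look like this (the userendpoint/v1/user/{id} path and getUser name are hypothetical; use the paths from your own endpoint definitions):

// A hand-rolled Cloud Endpoints REST call; endpoints are served under
// HOST + "/_ah/api". The path and method name here are hypothetical.
var ROOT = HOST + "/_ah/api";

function getUser(id, callback) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", ROOT + "/userendpoint/v1/user/" + encodeURIComponent(id));
  xhr.onload = function () {
    if (xhr.status >= 200 && xhr.status < 300) {
      callback(null, JSON.parse(xhr.responseText));
    } else {
      callback(new Error("HTTP " + xhr.status));
    }
  };
  xhr.onerror = function () { callback(new Error("network error")); };
  xhr.send();
}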
We have the same problem on two projects.
I think that Google has deployed a new version of https://apis.google.com/js/client.js and it doesn't work as expected...
We need to open a ticket with Google support. If I have any news, I will report it to you.
Google reports (https://groups.google.com/forum/#!topic/google-appengine-downtime-notify/t9GElAJwj8U):
We are currently experiencing an issue with Google Cloud Endpoints where the GAPI Javascript client is unable to pass request parameters. For everyone who is affected, we apologize for any inconvenience you may be experiencing. We will provide an update by Tuesday, 2014-09-23 05:00 (all times are in US/Pacific) with current details, and if available an estimated time for resolution.
Update:
We have fixed the issue affecting the Google Cloud Endpoints JavaScript client and are gradually rolling out a fixed version. We estimate full resolution of the issue by 06:30 US/Pacific. We will provide an update by 06:00 AM.
Update:
Now it works for me.
Marco