Every minute or so my app creates some data and needs to send it out to more than 1000 remote servers via URL Fetch callbacks. The callback URL for each server is stored in a separate entity. The time lag between creating the data and sending it to the remote servers should be under roughly 5 seconds.
My initial thought is to use the Pipeline API to fan out URL Fetch requests to different task queues.
Unfortunately, task queues are not guaranteed to execute in a timely fashion: the delay between enqueuing a task and its actual execution can range from minutes to hours. In my experience this gap is regularly over a minute, so task queues are not necessarily appropriate.
Is there any way from within App Engine to achieve what I want? Maybe you know of an outside service that can do the fan out in a timely fashion?
Well, there's probably no good solution within GAE here.
You could keep a backend running, hammering the datastore/memcache every second for new data to send out, and then spawning dozens of async URL fetches.
But that's really inefficient...
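If you did go down that road, the polling loop might look roughly like this (a sketch only; the memcache key and the fan_out helper are hypothetical placeholders):

```python
# Sketch of a resident backend polling memcache for fresh payloads.
# 'pending_payload' and fan_out() are made-up names for illustration.
import time
from google.appengine.api import memcache

def backend_loop(callback_urls):
    while True:
        payload = memcache.get('pending_payload')
        if payload is not None:
            fan_out(callback_urls, payload)   # e.g. dozens of async URL fetches
            memcache.delete('pending_payload')
        time.sleep(1)  # poll roughly every second
```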
If you want a 3rd-party service, pubnub.com is capable of doing fan-out, though I don't know whether it would fit into your setup.
How about using the async API? You could then do a large number of simultaneous URL calls, all from a single location.
If performance is particularly sensitive, you could do them from a backend and use a B8 instance.
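A rough sketch of that async fan-out, using the URL Fetch RPC calls (the 5-second deadline mirrors the latency target in the question; error handling is only indicative):

```python
# Fan one payload out to many callback URLs with asynchronous URL Fetch.
from google.appengine.api import urlfetch

def fan_out(callback_urls, payload):
    rpcs = []
    for url in callback_urls:
        rpc = urlfetch.create_rpc(deadline=5)
        urlfetch.make_fetch_call(rpc, url, payload=payload,
                                 method=urlfetch.POST)
        rpcs.append(rpc)
    # All fetches are now in flight; collect the results.
    for rpc in rpcs:
        try:
            rpc.get_result()
        except urlfetch.Error:
            pass  # log and retry as needed in a real implementation
```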
Related
I'm sending data from my backend every 10 seconds and I want to display that data in ReactJS. I've searched around and socket.io came up for displaying real-time data. Is there a better way to do this?
If you're dead set on updating your data every 10 seconds, it would make more sense to make a request from the client to the server, as HTTP requests can only be opened from client to server. By using HTTP requests, you won't need to use socket.io, but socket.io is an easy alternative if you need much faster requests.
Depending on how you are generating the data being sent from your backend, specifically if you are using a database, there is most likely a way to subscribe to changes in the database. This would actually update the data in realtime, without a 10 second delay.
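For example, if the data happened to live in Firestore (just an assumed example here, not something stated in the question), a change subscription on the server might look roughly like this:

```python
# Illustrative sketch: assumes Firestore and a hypothetical 'metrics/latest'
# document; any store with change streams offers an equivalent mechanism.
from google.cloud import firestore

db = firestore.Client()

def on_change(doc_snapshot, changes, read_time):
    for doc in doc_snapshot:
        # Push the fresh document to connected clients here (e.g. over a
        # socket.io/WebSocket channel) instead of waiting for the next poll.
        print('Updated: {} -> {}'.format(doc.id, doc.to_dict()))

watch = db.collection('metrics').document('latest').on_snapshot(on_change)
```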
If you want a more detailed answer, you'll have to provide more detail regarding your question: what data are you sending? where is it coming from or how are you generating it?
I'm working on an autodialer feature: when I trigger the button from the frontend (built in React JS), an agent gets a call, and then all the leads in that agent's assigned portal automatically get back-to-back calls from the agent's number. Because this process is automatic, the agent won't know which lead has been called, so I want to establish a real-time connection so that I can show a popup on the frontend containing information about the lead who was called.
This is more like a question about the right approach:
We have a single-page web application in AngularJS that loads a view containing multiple diagrams. Each diagram fetches the data it needs to display through the REST service. Chrome limits the number of simultaneous connections per host to 6. As we have views with more than 10 diagrams, the data fetches end up queued until previous calls are resolved. To the user this appears as if the data fetch is slow.
Is there a way to execute all calls in parallel (same server, different REST endpoints)?
What would be a single-page solution that is not limited by the browser but provides faster throughput?
Caching in the frontend is only partially applicable, due to the user actively filtering the data.
One solution would be to combine multiple requests into a single one; that way the overhead of establishing multiple connections goes away.
You can build a proxy API which takes care of this.
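A minimal sketch of such a proxy endpoint (assuming Flask and the requests library; the endpoint names are made up for illustration):

```python
# Batch endpoint that fetches all diagram data in parallel on the server,
# so the browser only needs one connection.
import concurrent.futures
import requests
from flask import Flask, jsonify

app = Flask(__name__)

DIAGRAM_ENDPOINTS = {
    'sales': 'https://api.example.com/diagrams/sales',
    'traffic': 'https://api.example.com/diagrams/traffic',
    # ... one entry per diagram
}

@app.route('/api/diagrams/batch')
def batch_diagrams():
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        futures = {name: pool.submit(requests.get, url, timeout=10)
                   for name, url in DIAGRAM_ENDPOINTS.items()}
        results = {name: f.result().json() for name, f in futures.items()}
    return jsonify(results)
```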
The problem with combining endpoints is that if any one of them has a higher processing time, the combined response has to wait for it.
The best solution is to make the endpoints fast enough that 6 connections are sufficient.
I want to send a particular HTTP request (or otherwise communicate a message) to every (dynamic/autoscaled) instance which is currently running for a particular App Engine application.
My goal is to trigger each instance to discard some locally cached data (because I have just modified the underlying data and want them to reload it).
One possible solution is to store a value in Memcache, and have instances check this each time they handle a request to see if they should flush their cache. But this adds latency to every request.
Another possible solution would be to somehow stop all running instances. No fixed overhead, but some impact while instances are restarted.
An even less desirable solution would be to redeploy the application code in order to cause all instances to be stopped. This now adds additional delay on my end as a deployment takes some time.
You could use the management API to list instances for a given version, but I'd suggest using something like the PubSub API to create a subscription on each of your App Engine instances. Since each instance has its own subscription, any message published to the monitored topic will be received by all instances.
You can create the subscription at startup (the /_ah/start endpoint may be useful), and then delete it at shutdown (using the /_ah/stop endpoint).
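A rough sketch of that wiring (assuming the google-cloud-pubsub client library, an existing "cache-invalidation" topic, and a hypothetical flush_local_cache() helper):

```python
# Per-instance Pub/Sub subscription for cache-invalidation messages.
import os
import uuid
from google.cloud import pubsub_v1

PROJECT = os.environ.get('GOOGLE_CLOUD_PROJECT', 'my-project')
TOPIC_PATH = 'projects/{}/topics/cache-invalidation'.format(PROJECT)
# One subscription per instance; GAE_INSTANCE identifies this instance.
SUB_ID = 'cache-inval-{}'.format(os.environ.get('GAE_INSTANCE', uuid.uuid4().hex))

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(PROJECT, SUB_ID)

def on_start():
    # Wire this up to /_ah/start: create this instance's subscription and
    # begin listening for invalidation messages.
    subscriber.create_subscription(request={'name': sub_path, 'topic': TOPIC_PATH})

    def on_message(message):
        flush_local_cache()  # hypothetical: drop this instance's cached data
        message.ack()

    subscriber.subscribe(sub_path, callback=on_message)

def on_stop():
    # Wire this up to /_ah/stop: clean up this instance's subscription.
    subscriber.delete_subscription(request={'subscription': sub_path})
```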
How can I upload, parse, and download Excel files in Google App Engine when the processing takes more than 30 seconds? I use Java POI and backend tasks, but once the backend has done the job I cannot notify the client. I cannot download the Excel file that was created by the backend task... Any suggestions would be much appreciated.
The best approach here is not to fight HTTP and a web service architecture but rather to work with it.
Introduce the notion of a job ID. When your client uploads a file, immediately return a token that represents that job. Extra credit: include an estimated duration of the job. For starters, let's say it's 2 minutes.
The client is then responsible for querying the server for the state of that job id using the token. The server either returns the answer, or it returns the token back with an updated ETA.
For starters, you could just always tell the client to check back in 2 minutes (or whatever constant makes most sense for your workload). As your server processing becomes smarter, you could give more accurate estimates, and decrease the busy-waiting the client does.
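A bare-bones sketch of the job-token pattern (using Flask, an in-memory dict, and a hypothetical enqueue_background_work helper; none of these names come from the question):

```python
import uuid
from flask import Flask, request, jsonify

app = Flask(__name__)
jobs = {}  # job_id -> {'state': ..., 'eta_seconds': ..., 'result': ...}

@app.route('/jobs', methods=['POST'])
def create_job():
    job_id = uuid.uuid4().hex
    jobs[job_id] = {'state': 'pending', 'eta_seconds': 120, 'result': None}
    enqueue_background_work(job_id, request.files['file'])  # hypothetical helper
    # Return the token immediately; the client polls /jobs/<job_id> later.
    return jsonify({'job_id': job_id, 'eta_seconds': 120}), 202

@app.route('/jobs/<job_id>')
def job_status(job_id):
    job = jobs.get(job_id)
    if job is None:
        return jsonify({'error': 'unknown job'}), 404
    if job['state'] == 'done':
        return jsonify({'state': 'done', 'download_url': job['result']})
    return jsonify({'state': job['state'], 'eta_seconds': job['eta_seconds']})
```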
Can we start a dynamic backend programmatically? Meanwhile, while a backend is starting, how can I handle the request by falling back to the main application (I mean app.appspot.com)?
When I stop a backend manually in the admin console and send a request to it, it does not start "dynamically".
Dynamic backends come into existence when they receive a request, and are turned down when idle; they are ideal for work that is intermittent or driven by user activity.
Resident backends run continuously, allowing you to rely on the state of their memory over time and perform complex initialization.
http://code.google.com/appengine/docs/python/backends/overview.html
I recently started executing a long-running task on a dynamic backend and noticed a dramatic increase in the performance of the frontends. I assume this was because the long-running task had been competing for resources with normal user requests.
Backends are documented quite thoroughly here. Backends have to be started and stopped with appcfg or the admin console, as documented here. A stopped backend will not handle requests - if you want this, you should probably be using the Task Queue instead.
It appears that a dynamic backend need not be explicitly stopped. The overview (http://code.google.com/appengine/docs/python/backends/overview.html) states that the billing for a dynamic backend stops 15 minutes after the last request is processed. So, if your app has a cron job, for example, that requires 5 minutes to complete, and needs to run every hour, then you could configure a backend to do this. The cost you'll incur is 15+5 minutes every hour, or 8 hours for the whole day. I suppose the free quota allows you 9 backend hours. So, this type of scenario would be free for you. The backend will start when you send your first request to it through a queue, and will stop 15 minutes after the last request you send is processed completely.
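As a sketch of that scenario (assuming a dynamic backend named "worker" defined in backends.yaml and a hypothetical /tasks/hourly-report handler), the cron handler would simply enqueue a task targeted at the backend:

```python
# Enqueue the hourly work so it runs on the dynamic backend "worker";
# the backend spins up on the first task and billing stops 15 minutes
# after the last task finishes.
from google.appengine.api import taskqueue

def kick_off_hourly_job():
    taskqueue.add(url='/tasks/hourly-report',  # hypothetical task handler
                  target='worker',             # backend name from backends.yaml
                  method='POST')
```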