My Logic App has one trigger. When it fires, three instances of the Logic App run and all perform the same operations, leading to duplicate entries in the database.
Multiple concurrent instances in groups of three
This is the Logic App orchestration (screenshot of the designer):
Resolved by setting the Recurrence trigger to run as a single instance only.
I experienced the same issue: 15 instances started. In my case, I actually did have 15 messages to process but my email client didn't give a clue :)
But for the other cases, here's some background information on concurrency control, which I believe replaced the "single instance" option:
https://toonvanhoutte.wordpress.com/2017/08/29/logic-apps-concurrency-control/
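As a sketch of what that setting looks like in the workflow definition's code view (the trigger name and recurrence values here are illustrative), limiting the trigger to one run at a time uses the trigger's `runtimeConfiguration` block:

```json
{
  "triggers": {
    "Recurrence": {
      "type": "Recurrence",
      "recurrence": { "frequency": "Minute", "interval": 5 },
      "runtimeConfiguration": {
        "concurrency": { "runs": 1 }
      }
    }
  }
}
```

With `"runs": 1`, a new trigger occurrence waits until the previous run finishes instead of starting a concurrent instance.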
I'm building a server for a simple realtime multiplayer game on Google App Engine, using the Python SDK.
The requests are very simple and take at most 1 ms to process.
I hold all the game data in static variables of the instance.
I set 'Min Pending Latency' to 15 seconds to prevent a second instance from being spawned.
But sometimes a second instance gets created anyway.
How can I disable or kill the second instance if it has been spawned, and process all requests in a single instance only?
If you're fighting against the system, that's an indication that you're doing something wrong.
You should not try to manage all your requests inside a single instance. That defeats the whole purpose of using GAE. The problem of course is that you shouldn't be storing your data as static variables inside your instance. Even apart from the issues with other instances being started, every instance is stopped and restarted every so often: so your data would be lost.
You should keep your data in the places meant for that: in memcache, and in the datastore.
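To illustrate the pattern being recommended, here is a minimal sketch in plain Python. A dict-backed class stands in for memcache (on GAE you would use `google.appengine.api.memcache` or the datastore); the names `SharedStore` and `handle_move` are illustrative, not part of any API. The point is that game state lives in a store every instance can reach, not in a module-level variable that is lost when an instance restarts:

```python
class SharedStore:
    """Stand-in for memcache: every instance reading the store sees the same data."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

store = SharedStore()

def handle_move(player, position):
    # Instead of mutating a static variable (which another instance can't see,
    # and which disappears on instance restart), read-modify-write the shared store.
    state = store.get("game_state") or {}
    state[player] = position
    store.set("game_state", state)
    return state
```

With this shape, it no longer matters which instance serves a request: each handler reads the current state from the store rather than trusting its own memory.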
Besides, keep in mind that only 8 instance-hours are free.
You can do this with Modules + Manual Scaling:
https://developers.google.com/appengine/docs/java/modules/
https://developers.google.com/appengine/docs/python/modules/
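As a rough sketch, a module configured for manual scaling with a single resident instance might look like this (the application ID, module name, and script path are illustrative placeholders):

```yaml
# module.yaml -- illustrative sketch
application: your-app-id
module: game-server
version: 1
runtime: python27
api_version: 1

manual_scaling:
  instances: 1   # exactly one resident instance handles all requests

handlers:
- url: /.*
  script: main.app
```

Note that a manually scaled instance is not restarted by the scheduler on demand, so it also accrues instance-hours continuously.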
I am following the AM pool metrics for my ADF application hosted on WebLogic. I am running transactions using multiple user sessions. One application module's pool shows its max instance count increasing abruptly compared to other application modules that are used far more heavily. What could be causing my AM instances not to be reused, with new ones created instead? Any direction would be highly appreciated.
Thanks!
What is the AM pooling setting for the AM? Did you change the settings or keep the defaults?
Could it be that you are accessing this AM from code that doesn't release the AM back to the pool?
I'm using the Go language, and it seems good practice to communicate between different threads/goroutines via channels and locks instead of the datastore. However, this isn't possible between two instances once more than one instance is running. Is there a way to stop App Engine from opening a second instance, even under high traffic?
To answer the question in the title:
Go to the app dashboard; on the left you will find an Application Settings link. In the admin UI you will find two sliders: drag the first one to the far left, and the second (Min Pending Latency) to the maximum allowed value (right). Last but not least, optimize your request response time.
Even if you do the above there's no guarantee that GAE will not fire up a second instance.
You should use Backends if you want fine-grained control over the spawning and shutdown of instances.
I don't think this is the right approach at all. You have to think about scalability issues from the first day of your design. As Christopher said, I would go with memcache!
I am currently implementing a web application on Google App Engine. So far, re-designing the database and the application around GAE's requirements and best practices has taken me a huge amount of time.
My problem is this: how can I be sure that GAE is fault tolerant, and to what degree? I didn't find any GAE documentation on this, and it is an issue that could have drawbacks for me. For example, my app would read an entity from the datastore, compute on it in the application, and then put it back in the datastore. How can I be sure this is done correctly, and that I end up with the right data, if the machine doing the computation crashes?
Thank you for your help!
If a server crashes during a request, that request is going to fail, but any new requests would be routed to a different server. So one user might see an error, but the rest would not. The data in the datastore would be fine. If you have data that needs to be kept consistent, you would do your updates in a transaction, so that either the whole set of updates was applied or none.
Transactions operating on the same entity group are executed serially, but transactions operating on different entity groups run in parallel. So, unless there is a single entity which everything in your app wants to read and write, scalability will not suffer from transactions.
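To make the read-compute-write guarantee concrete, here is a plain-Python sketch (no GAE SDK; `Entity` and `transactional_update` are illustrative names, not datastore API). It mimics what a datastore transaction buys you: the write only lands if nothing changed underneath the computation, otherwise the whole step retries:

```python
class Entity:
    """Toy stand-in for a datastore entity with a version stamp."""
    def __init__(self, value=0):
        self.value = value
        self.version = 0

def transactional_update(entity, compute, retries=3):
    """Read-compute-write with optimistic concurrency: the write is applied
    only if the entity is unchanged since the read, else the step retries."""
    for _ in range(retries):
        seen_version = entity.version
        new_value = compute(entity.value)      # the "compute it in the application" step
        if entity.version == seen_version:     # nobody wrote in between
            entity.value = new_value
            entity.version += 1
            return entity.value
    raise RuntimeError("contention: too many retries")

counter = Entity()
transactional_update(counter, lambda v: v + 1)
```

If the machine crashes mid-computation, no partial write ever reaches the store: the update either commits as a whole or not at all, which is the property the answer above relies on.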
From what I gather, AppEngine fires up "Application Instances" (for a lack of better terminology that I know of) as a function of demand on the said application.
Now, let's say I define Scheduled Tasks for my Application, is it possible that the said tasks might end-up being run by multiple Application Instances?
The reason I am asking: if my application uses the datastore as a sort of "task repository" and I use Scheduled Tasks to pull work items from it, is it possible that one Application Instance might get the same work items as another (assuming I am not adding additional state to account for this possibility)?
The contract for the Task Queue API is such that it is possible for tasks to be executed more than once - though such occurrences are rare, and they wouldn't result in the same task being executed multiple times simultaneously. If re-execution does occur, it's entirely possible that they'll be executed on different instances.
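Given that contract, the usual defense is to make the task handler idempotent, so a rare re-execution is harmless. A minimal sketch (a set stands in for a datastore record of completed task names; `handle_task`, `completed`, and `results` are illustrative names, not App Engine API):

```python
completed = set()   # stand-in for a persistent "already processed" record
results = []        # stand-in for the side effects of doing the work

def handle_task(task_name, work_item):
    """Process a work item at most once, keyed by a unique task name."""
    if task_name in completed:   # a re-delivery of an already-finished task
        return False
    results.append(work_item)    # do the actual work
    completed.add(task_name)     # record completion *after* the work succeeds
    return True

handle_task("task-42", "send-email")
handle_task("task-42", "send-email")  # duplicate delivery is a no-op
```

On real App Engine the completion record would live in the datastore (ideally checked and written in a transaction with the work itself), since an in-memory set is not shared across instances.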