Application module instances in ampool not being reused - oracle-adf

I am watching the ampool metrics for my ADF application hosted on WebLogic, running transactions across multiple user sessions. One of my application modules' pools shows its maximum instance count increasing abruptly compared to other application modules that are used far more heavily. What could be causing my AM instances not to be reused, with new ones created instead? Any direction would be highly appreciated.
Thanks!

What is the AM pooling setting for the AM? Did you change the settings or keep the defaults?
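For reference, these are the pooling knobs in question; they live in the AM configuration (bc4j.xcfg) or can be overridden as -D JVM options. The values below are illustrative, not recommendations:

```properties
# Illustrative AM pool settings - tune for your load
jbo.ampool.doampooling=true
jbo.ampool.initpoolsize=5
jbo.ampool.minavailablesize=5
jbo.ampool.maxavailablesize=25
# Max idle time (ms) before an instance is eligible for removal
jbo.ampool.maxinactiveage=600000
# Max total lifetime (ms) of a pooled instance
jbo.ampool.timetolive=3600000
```

If `maxavailablesize` is left at a very low value while referenced sessions are high, the pool will keep creating and discarding instances instead of reusing them.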

Could it be that you are accessing this AM from code that doesn't release the AM back to the pool?
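A minimal sketch of the acquire/release pattern that comment refers to, for code that creates a root AM programmatically (the AM definition and configuration names are illustrative):

```java
import oracle.jbo.ApplicationModule;
import oracle.jbo.client.Configuration;

// "model.AppModule" / "AppModuleLocal" are placeholder names
ApplicationModule am =
    Configuration.createRootApplicationModule("model.AppModule", "AppModuleLocal");
try {
    // ... use the AM ...
} finally {
    // 'false' checks the instance back into the pool for reuse;
    // skipping this call strands instances and forces new ones to be created
    Configuration.releaseRootApplicationModule(am, false);
}
```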

Tomcat via Apache Server going down after too many connections

I have an Apache (2.4) server that serves content through the AJP connector from a Tomcat 7 server.
One of my clients manages to kill the Tomcat instance by running too many concurrent connections to a JSP JSON API service. (Apache still works, but Tomcat falls over; restarting Tomcat brings it back up.) There are no errors in Tomcat's logs.
I would like to protect the site from falling over like that, but I am not sure what configurations to change.
I do not want to limit the number of concurrent connections, as there are legitimate use cases for that.
My Tomcat memory settings are:
Initial memory pool: 1280MB
Maximum memory pool: 2560MB
which I assumed was plenty.
It might be worth mentioning that the API service relies on multiple, possibly heavy MySQL connections.
Any advice would be most appreciated.
Why don't you slowly switch your most used/important application features to a microservices architecture and dockerize your Tomcat servers, so you can manage multiple instances of your application? This should help your application handle many connections without impacting the overall performance of the servers.
If you are talking about scaling, you need horizontal scaling here, with multiple Tomcat servers.
If you cannot limit user connections and still want the app to run smoothly, then you need to scale. An architectural change to microservices is an option, but it isn't always feasible for a production solution.
The better thing to consider is running multiple Tomcats sharing the load. There are various ways to do this; with your tech stack, I feel the Apache 2 load-balancer module in combination with Tomcat will work best.
There is an example here.
Now, with respect to server capacity, DB connection capacity, etc., you might also need to think about vertical scaling.
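The load-balancing setup described above can be sketched in httpd.conf roughly as follows (hostnames and ports are illustrative; requires mod_proxy, mod_proxy_ajp, and mod_proxy_balancer):

```apache
# Two Tomcat backends sharing load over AJP
<Proxy "balancer://tomcats">
    BalancerMember "ajp://tomcat1.example.com:8009"
    BalancerMember "ajp://tomcat2.example.com:8009"
</Proxy>
ProxyPass        "/" "balancer://tomcats/"
ProxyPassReverse "/" "balancer://tomcats/"
```

With this in place, a single misbehaving backend falling over no longer takes the whole site down.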

Please explain App Engine Instances parameters

In the App Engine Dashboard, in the Summary combobox, I choose Instances; it shows these values: created, active.
I don't understand what created instances and active instances mean.
Are created instances idle instances?
Are active instances dynamic instances?
Why are there 3 created instances but only 1 active instance, after which my system fails?
Warning:
"While handling this request, the process that handled this request was found to be using too much memory and was terminated. This is likely to cause a new process to be used for the next request to your application. If you see this message frequently, you may have a memory leak in your application or may be using an instance with insufficient memory. Consider setting a larger instance class in app.yaml."
Thanks
Created instances are the ones your application has started in a given situation; they are not necessarily serving and can also be idle. Instances are created depending on the instance scaling type you specified in your app.yaml.
Active instances are those that are serving traffic, or have served traffic, within a given timeframe.
See How Instances are Managed in App Engine for a detailed explanation of GAE instances.
The warning you received usually means an instance exceeded the maximum memory for its configured instance_class. You might need to specify a higher instance class, or use max_concurrent_requests to optimize your instances and properly handle requests.
You could also configure maximum and minimum number of instances in your app.yaml depending on how much traffic you would like your application to handle.
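A hedged app.yaml sketch of the knobs mentioned above (standard environment, second-generation runtimes; all values are illustrative, not recommendations):

```yaml
# Larger instance class gives each instance more memory
instance_class: F4
automatic_scaling:
  min_instances: 1
  max_instances: 5
  # Requests one instance handles before another is spun up
  max_concurrent_requests: 40
```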

Why are there multiple instances of Logic Apps running?

My Logic App has one trigger. When it fires, 3 instances of the Logic App run and all perform the same operations, leading to duplication in the database.
[Screenshot: multiple concurrent instances in groups of three]
This is the Logic App orchestration:
[Screenshot: Logic App]
Resolved by making Recurrence single instance only
I experienced the same issue: 15 instances started. In my case, I actually did have 15 messages to process but my email client didn't give a clue :)
But for the other cases, here's some background information on concurrency control, which I think replaced the "single instance" option:
https://toonvanhoutte.wordpress.com/2017/08/29/logic-apps-concurrency-control/
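In the workflow definition language, that concurrency control is set on the trigger via runtimeConfiguration; a hedged JSON sketch (the trigger name and recurrence values are illustrative):

```json
{
  "triggers": {
    "Recurrence": {
      "type": "Recurrence",
      "recurrence": { "frequency": "Minute", "interval": 5 },
      "runtimeConfiguration": {
        "concurrency": { "runs": 1 }
      }
    }
  }
}
```

With "runs": 1, only one run executes at a time, which prevents the duplicate database writes described in the question.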

elastic4s automatic reconnections when connection is dropped

Is there a way (or best practice) for handling automatic reconnections in elastic4s?
I have the situation where the elastic cluster gets rebooted behind my application (security updates etc). [Obviously this is not ideal and would be better handled by a rolling restart but we're not quite there yet.]
However when this happens the connection is dropped and never recovers when the cluster comes back online. It keeps saying no nodes are available. If I restart the application it will reconnect without issues.
Is there a way to handle this nicely without having to create a new connection (i.e. a new TcpClient)? Currently I'd have to distribute the new TcpClient to the various parts of the application, or wrap the API in something that handles this situation. Neither appeals much.
Thanks
You could consider switching to the HttpClient, which will obviously work after a cluster restart because it doesn't maintain a connection. The elastic4s API is the same regardless of which underlying client you are using, so, in theory, it should be an easy change.
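A minimal sketch of what that switch might look like, assuming elastic4s 5.x (package and class names have moved between versions, so check your release):

```scala
import com.sksamuel.elastic4s.ElasticsearchClientUri
import com.sksamuel.elastic4s.http.HttpClient

// HTTP requests go over Elasticsearch's REST port (9200 by default),
// so a full cluster restart doesn't leave the client bound to dead nodes
val client = HttpClient(ElasticsearchClientUri("elasticsearch-host", 9200))
```

The rest of the elastic4s DSL calls stay the same, so existing query code should not need changes.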

How do I determine the ideal value for my database.yml pool?

Longtime listener, first-time caller here... my Postgres database says it allows the following:
Connections:
6/120
What should my corresponding "pool" setting be in this scenario? 6? 120? Something else entirely? Thanks in advance for any help here.
If it makes a difference I'm using Puma & Sidekiq to run a Rails 4 application on Heroku.
How many connections does your app use under typical load? Set the idle pool to that, and set the max pool to somewhere under the max allowed by the server.
But that server-side connections setting should also be tuned to your application and hardware. It's typically some function of your core count, RAM, work_mem setting, and the kind of disks you have, but it will also depend on what kind of queries your app typically runs.
(see here for some tips: https://wiki.postgresql.org/wiki/Number_Of_Database_Connections)
Postgres is actually pretty forgiving: opening connections (an undersized pool) is relatively cheap compared to many other databases, and idle open connections (an oversized pool) are also cheap (a few KB of shared buffers each, if memory serves).
It's really having more active connections than your resources allow that will cause problems, which is why the server-side configuration is more important.
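In Rails terms, the advice above maps to a database.yml along these lines (a sketch; pool is per process, and the env-var name follows the common Puma convention):

```yaml
production:
  adapter: postgresql
  # Cover the max threads per process: Puma reads RAILS_MAX_THREADS,
  # and Sidekiq reads this same file, so its pool should also cover
  # its concurrency setting.
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
```

The total across all processes (Puma workers x threads, plus Sidekiq concurrency) should stay comfortably under the server's limit of 120.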
