Terminate a long-running OSB request

I'm using OSB 12c. I have an OSB proxy that takes, on average, 15 minutes to complete each request.
Let's say I have five requests currently in a running state.
1. Is there a way to see these running requests, just as we can see BPEL requests in the EM console?
2. Is there a way to terminate one of the requests without any impact on the rest of the running requests?
3. Is it possible to kill all requests, in case point 2 is not possible?
Thanks!

I don't think so, not without changing things.
If you were open to changing the service, e.g. to decompose the request into separate internal JMS messages, you should be able to use JMX to interrogate the MDBs and discover what they're up to. Then again, if you were to switch to JMS you could probably get an idea of what the service is doing just from the number and content of the messages on the queue (see the sketch below).
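For illustration, assuming the internal queue were a plain JMS queue, a JMS QueueBrowser could count and peek at the pending messages without consuming them (the JNDI names below are hypothetical):

import java.util.Enumeration;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.naming.InitialContext;

// Sketch: browse the internal JMS queue to see how many requests are
// still pending and what they contain, without consuming them.
public class QueuePeek {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext(); // assumes JNDI is configured
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // hypothetical name
        Queue queue = (Queue) ctx.lookup("jms/InternalRequestQueue");                   // hypothetical name

        Connection conn = cf.createConnection();
        try {
            conn.start();
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueBrowser browser = session.createBrowser(queue);
            int count = 0;
            for (Enumeration<?> e = browser.getEnumeration(); e.hasMoreElements(); ) {
                Message m = (Message) e.nextElement();
                count++; // inspect m here to see what each request is up to
            }
            System.out.println(count + " requests still pending");
        } finally {
            conn.close();
        }
    }
}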
I'm not aware of the ability to cancel individual requests in OSB, sorry.

I don't think you can terminate OSB threads directly.
You can configure WebLogic to deal with stuck threads (threads that have been running for longer than a configured period of time).
You can configure a dispatch policy on your proxies using a Work Manager to handle stuck threads and minimize the impact on the server.
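As a sketch, a Work Manager with a stuck-thread shutdown trigger lives in the self-tuning section of the domain's config.xml (names and thresholds here are illustrative); its name is then what you reference as the dispatch policy on the proxy:

<self-tuning>
  <!-- Sketch: illustrative Work Manager; reference its name as the
       proxy's dispatch policy. -->
  <work-manager>
    <name>OsbProxyWorkManager</name>
    <target>osb_server1</target>
    <work-manager-shutdown-trigger>
      <!-- A thread stuck for 900 seconds counts toward the trigger. -->
      <max-stuck-thread-time>900</max-stuck-thread-time>
      <!-- Trigger once 5 threads are stuck. -->
      <stuck-thread-count>5</stuck-thread-count>
    </work-manager-shutdown-trigger>
  </work-manager>
</self-tuning>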

Related

Programmatically listing and sending requests to dynamic App Engine instances

I want to send a particular HTTP request (or otherwise communicate a message) to every dynamic (autoscaled) instance currently running for a particular App Engine application.
My goal is to trigger each instance to discard some locally cached data (because I have just modified the underlying data and want them to reload it).
One possible solution is to store a value in Memcache, and have instances check this each time they handle a request to see if they should flush their cache. But this adds latency to every request.
Another possible solution would be to somehow stop all running instances. No fixed overhead, but some impact while instances are restarted.
An even less desirable solution would be to redeploy the application code in order to stop all instances; this adds additional delay on my end, as a deployment takes some time.
You could use the management API to list the instances for a given version, but I'd suggest using something like the Pub/Sub API to create a subscription on each of your App Engine instances. Since each instance has its own subscription, any message published to the monitored topic will be received by all instances.
You can create the subscription at startup (the /_ah/start endpoint may be useful) and delete it at shutdown (using the /_ah/stop endpoint).
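A sketch with the google-cloud-pubsub Java client (exact method signatures vary by client version, and all names here are illustrative): each instance creates its own subscription on a shared topic at startup, receives flush messages through it, and deletes it at shutdown.

import com.google.cloud.pubsub.v1.SubscriptionAdminClient;
import com.google.pubsub.v1.ProjectSubscriptionName;
import com.google.pubsub.v1.PushConfig;
import com.google.pubsub.v1.TopicName;

// Sketch: per-instance subscription lifecycle. Call create() from the
// /_ah/start handler and delete() from the /_ah/stop handler; every
// instance then receives each message published to the shared topic.
public class CacheFlushSubscription {

    private static final String PROJECT = "my-project"; // hypothetical
    private static final String TOPIC = "cache-flush";  // hypothetical

    public static void create(String instanceId) throws Exception {
        try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
            admin.createSubscription(
                ProjectSubscriptionName.of(PROJECT, "flush-" + instanceId),
                TopicName.of(PROJECT, TOPIC),
                PushConfig.getDefaultInstance(), // default = pull subscription
                10);                             // ack deadline in seconds
        }
    }

    public static void delete(String instanceId) throws Exception {
        try (SubscriptionAdminClient admin = SubscriptionAdminClient.create()) {
            admin.deleteSubscription(
                ProjectSubscriptionName.of(PROJECT, "flush-" + instanceId));
        }
    }
}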

Throttling FTP polling consumers using Apache Camel

I have a requirement where, at one point in time, I need to connect to multiple FTP/SFTP endpoints (say 100 of them) to download files and process them.
I have a route like the one below. The SEDA queue further processes the messages by moving them into appropriate folders.
from("ftp://username@host/foldername?password=XXXXX&include=.*").to("seda:" + routeId)
Now, if I start all the FTP endpoints at the same time, I run into JVM memory issues. How can I throttle the starting of the FTP endpoints? Can I use a SEDA queue before the FTP endpoints to throttle them (and if so, how)? Are there any other EIPs or ideas I could use to throttle the triggering of the polling FTP consumers?
You can look into the Throttler DSL if you want to throttle the fetching of the messages:
http://camel.apache.org/throttler.html
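A minimal sketch of the Throttler in the Java DSL (endpoint names are illustrative):

import org.apache.camel.builder.RouteBuilder;

public class ThrottledRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Sketch: let at most 10 exchanges per 10 seconds flow downstream.
        from("seda:incoming")
            .throttle(10).timePeriodMillis(10000)
            .to("seda:processing");
    }
}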
For controlling the startup you can look into the SimpleScheduledRoutePolicy:
http://camel.apache.org/simplescheduledroutepolicy.html
It handles activating and deactivating routes. I haven't used it myself, but it looks like you could add a controlled delay to when routes start and stop, along the lines of the sketch below.
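A sketch of what that could look like (camel-quartz2; endpoint names are illustrative): each route is started a minute later than the previous one instead of all at once.

import java.util.Date;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.routepolicy.quartz2.SimpleScheduledRoutePolicy;

public class StaggeredStartRoutes extends RouteBuilder {

    // Hypothetical endpoint list; in practice this would come from configuration.
    private final String[] ftpUris = {
        "ftp://username@host1/foldername?password=XXXXX&include=.*",
        "ftp://username@host2/foldername?password=XXXXX&include=.*",
    };

    @Override
    public void configure() {
        for (int i = 0; i < ftpUris.length; i++) {
            SimpleScheduledRoutePolicy policy = new SimpleScheduledRoutePolicy();
            // Start this route i minutes from now, once.
            policy.setRouteStartDate(new Date(System.currentTimeMillis() + i * 60000L));
            policy.setRouteStartRepeatCount(0);

            from(ftpUris[i])
                .routeId("ftpRoute-" + i)
                .routePolicy(policy)
                .noAutoStartup() // the policy starts the route at its scheduled time
                .to("seda:ftpRoute-" + i);
        }
    }
}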
I have had this problem in the past and solved it using cron in the following way:
from("ftp://username@host/foldername?password=XXXXX&include=.*&scheduler=quartz2&scheduler.cron=0/2+*+*+*+*+?")
You can set up every FTP consumer to poll at a different time (say, with a one-minute difference between consumers).
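For example, a sketch that generates one route per endpoint, with each quartz2 cron offset by one minute (URIs are illustrative):

import org.apache.camel.builder.RouteBuilder;

public class StaggeredCronRoutes extends RouteBuilder {

    // Hypothetical endpoint list.
    private final String[] ftpUris = {
        "ftp://username@host1/foldername?password=XXXXX&include=.*",
        "ftp://username@host2/foldername?password=XXXXX&include=.*",
    };

    @Override
    public void configure() {
        for (int i = 0; i < ftpUris.length; i++) {
            // Poll once per hour, at second 0 of minute i (mod 60).
            String cron = "0+" + (i % 60) + "+*+*+*+?";
            from(ftpUris[i] + "&scheduler=quartz2&scheduler.cron=" + cron)
                .to("seda:route-" + i);
        }
    }
}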
If you decide to go down this path, you can use the following website to construct your crons easily:
http://www.cronmaker.com/
Hope this helps.
R.

NLog's WebService target: How to sequentially send the requests?

I use NLog's WebService target in Silverlight and run into a problem if the logging service is unavailable.
What happens is that all calls to the logging service hang for a long time until they time out.
This is, firstly, ugly and, secondly, problematic in the face of a request limit, which applies in my circumstances: once the request limit is reached due to several pending logging requests, the application also fails to make requests that are not logging related.
Ideally I'd like a WebService target that sends the requests sequentially, but I can't configure it to do that, can I?
Since I have full control about the logging server I could also move to a different target, but I'd rather have a purely configuration-based solution.
Some time back I implemented a logging target like that for Silverlight. We were using Common.Logging for .NET, which did not support Silverlight, so we ported part of Common.Logging to Silverlight and implemented a "logging service adapter" to send our logging messages to a logging service. I implemented the logging queue using the producer/consumer pattern; a sketch of the idea is below. Maybe you will find it useful.
In the end, the project I was working on when I implemented this didn't go anywhere, so this particular piece of code is not in use.
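A minimal sketch of that producer/consumer queue, shown in Java for illustration (the original was Silverlight/.NET): a single consumer thread drains the queue, so at most one logging request is in flight at a time and callers never wait on the logging service.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch: producers enqueue log messages; one consumer thread sends them
// to the logging service strictly one after another.
public class SequentialLogSender {

    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);

    public SequentialLogSender() {
        Thread consumer = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    String message = queue.take(); // blocks until a message arrives
                    sendToLoggingService(message); // at most one request in flight
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }, "log-sender");
        consumer.setDaemon(true);
        consumer.start();
    }

    // Called by application threads; never blocks on the logging service.
    public void log(String message) {
        queue.offer(message); // silently drops the message if the queue is full
    }

    private void sendToLoggingService(String message) {
        // Hypothetical: POST the message to the logging web service here.
    }
}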
Using a WCF service via an async interface from a worker thread, how do I ensure that events are sent from the client "in order"?

How to fan out URL Fetch requests in a timely fashion?

Every minute or so my app creates some data and needs to send it out to more than 1000 remote servers via URL Fetch callbacks. The callback URL for each server is stored in a separate entity. The time lag between creating the data and sending it to the remote servers should be roughly less than 5 seconds.
My initial thought is to use the Pipeline API to fan out URL Fetch requests to different task queues.
Unfortunately, task queues are not guaranteed to execute in a timely fashion, so the delay between enqueueing a task and its actual execution could range from minutes to hours. From previous experience this gap is regularly over a minute, so task queues aren't necessarily appropriate.
Is there any way from within App Engine to achieve what I want? Maybe you know of an outside service that can do the fan out in a timely fashion?
Well, there's probably no good solution for this on GAE.
You could keep a backend running, hammering the datastore/memcache every second for new data to send out, and then spawning dozens of async URL fetches.
But that's really inefficient...
If you want a third-party service, pubnub.com is capable of doing fan-out; however, I don't know whether it would fit your setup.
How about using the async URL Fetch API? You could then make a large number of simultaneous URL calls, all from a single location.
If performance is particularly sensitive, you could do them from a backend and use a B8 instance.
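A sketch with the App Engine Java URL Fetch API (fetchAsync is the real API; callbackUrls is assumed to have been loaded from the entities mentioned in the question):

import com.google.appengine.api.urlfetch.HTTPResponse;
import com.google.appengine.api.urlfetch.URLFetchService;
import com.google.appengine.api.urlfetch.URLFetchServiceFactory;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Future;

// Sketch: start all fetches concurrently, then wait for the responses.
public class FanOut {
    public static void send(List<String> callbackUrls) throws Exception {
        URLFetchService fetcher = URLFetchServiceFactory.getURLFetchService();
        List<Future<HTTPResponse>> pending = new ArrayList<>();
        for (String callback : callbackUrls) {
            pending.add(fetcher.fetchAsync(new URL(callback))); // returns immediately
        }
        for (Future<HTTPResponse> f : pending) {
            HTTPResponse response = f.get(); // block until this fetch completes
            // Hypothetical: check response.getResponseCode() and retry failures.
        }
    }
}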

How do dynamic backends start in Google App Engine

Can we start a dynamic backend programmatically? Meanwhile, while a backend is starting, how can I handle the request by falling back on the main application (I mean app.appspot.com)?
When I stop a backend manually in the admin console and send a request to it, it does not start "dynamically".
Dynamic backends come into existence when they receive a request, and are turned down when idle; they are ideal for work that is intermittent or driven by user activity. Resident backends run continuously, allowing you to rely on the state of their memory over time and perform complex initialization.
http://code.google.com/appengine/docs/python/backends/overview.html
I recently started executing a long-running task on a dynamic backend and noticed a dramatic increase in the performance of the frontends. I assume this was because the long-running task had been competing for resources with normal user requests.
Backends are documented quite thoroughly here. Backends have to be started and stopped with appcfg or the admin console, as documented here. A stopped backend will not handle requests; if you want that behavior, you should probably be using the Task Queue instead.
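A sketch with the App Engine Java Task Queue API (the handler URL is hypothetical): instead of sending a request to a stopped backend, enqueue a task and let App Engine deliver it when capacity is available.

import com.google.appengine.api.taskqueue.Queue;
import com.google.appengine.api.taskqueue.QueueFactory;
import com.google.appengine.api.taskqueue.TaskOptions;

// Sketch: enqueue the work rather than addressing a backend directly.
public class EnqueueWork {
    public static void enqueue(String jobId) {
        Queue queue = QueueFactory.getDefaultQueue();
        queue.add(TaskOptions.Builder.withUrl("/tasks/process") // hypothetical handler
            .param("jobId", jobId));
    }
}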
It appears that a dynamic backend need not be explicitly stopped. The overview (http://code.google.com/appengine/docs/python/backends/overview.html) states that billing for a dynamic backend stops 15 minutes after the last request is processed. So, if your app has a cron job, for example, that requires 5 minutes to complete and needs to run every hour, you could configure a backend to do this. The cost you'll incur is 15 + 5 minutes every hour, or 8 hours for the whole day. I suppose the free quota allows you 9 backend hours, so this type of scenario would be free for you. The backend will start when you send your first request to it through a queue, and will stop 15 minutes after the last request you send is processed completely.
