How to fetch parameter of remote node using RemoteGetParamReq in processRequest of an Agent - unetstack

I have written an agent and added it to node 1 to fetch physical parameters (propagation speed, node energy, etc.) of node 2 using RemoteGetParamReq, which works inside the agent's startup() method. How can I use RemoteGetParamReq inside the agent's processRequest() method? I want to get the parameter values of remote node 2 when node 1 triggers a DatagramReq, so that I can get the latest parameter values.
class MyRemoteParam extends UnetAgent {

  AgentID phy, rmt
  RemoteGetParamReq req
  Message rsp

  @Override
  protected void setup() {
    super.setup()
    register(Services.PHYSICAL);
    register(Services.DATAGRAM);
  }

  void startup() {
    phy = agentForService Services.PHYSICAL
    rmt = agentForService Services.REMOTE
    req = new RemoteGetParamReq();
    req.setRecipient(rmt);
    req.setRemoteAgentID(phy);
    req.setTo(2);
    req.get(PhysicalParam.propagationSpeed);
    req.get(PhysicalParam.timestampedTxDelay);
    req.get(MyEnergyParameters.init_energy);
    rsp = phy.request(req, 2000);
    System.out.println "Node 2 propagation speed: " + rsp.get(PhysicalParam.propagationSpeed)
    System.out.println "Node 2 energy: " + rsp.get(MyEnergyParameters.init_energy)
  }

  @Override
  Message processRequest(Message msg) {
    if (msg instanceof DatagramReq) {
      req = new RemoteGetParamReq();
      req.setRecipient(rmt);
      req.setRemoteAgentID(phy);
      req.setTo(2);
      req.get(PhysicalParam.propagationSpeed);
      req.get(PhysicalParam.timestampedTxDelay);
      req.get(MyEnergyParameters.init_energy);
      rsp = phy.request(req, 2000);
      System.out.println "Node 2 propagation speed: " + rsp.get(PhysicalParam.propagationSpeed)
      System.out.println "Node 2 energy: " + rsp.get(MyEnergyParameters.init_energy)
      return new Message(msg, Performative.AGREE)
    }
    return null
  } // end of processRequest
} // end of MyRemoteParam class

The processRequest() method should complete in order to respond to the requester, so it isn't a good idea to make your parameter request inside it and wait. You can, however, trigger a request for the parameters to happen asynchronously by adding a OneShotBehavior, something like this:
@Override
Message processRequest(Message msg) {
  if (msg instanceof DatagramReq) {
    add(new OneShotBehavior() {
      @Override
      public void action() {
        req = new RemoteGetParamReq();
        req.setRecipient(rmt);
        req.setRemoteAgentID(phy);
        req.setTo(2);
        req.get(PhysicalParam.propagationSpeed);
        req.get(PhysicalParam.timestampedTxDelay);
        req.get(MyEnergyParameters.init_energy);
        rsp = phy.request(req, 2000);
        System.out.println "Node 2 propagation speed: " + rsp.get(PhysicalParam.propagationSpeed)
        System.out.println "Node 2 energy: " + rsp.get(MyEnergyParameters.init_energy)
      } // action
    }) // one shot behavior
    return new Message(msg, Performative.AGREE)
  }
  return null
} // process request
Side note: DatagramReq is perhaps not the right request to trigger this on, since it asks your agent to send a datagram. You may wish to define your own appropriately named request for this purpose for good programming style.
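For illustration, a minimal sketch of such a trigger message, written in Java and assuming fjage's Message base class; the class name and the remoteNode field are hypothetical, not part of UnetStack:

import org.arl.fjage.Message;
import org.arl.fjage.Performative;

// Hypothetical request used only to ask MyRemoteParam to fetch parameters from a remote node.
public class FetchRemoteParamReq extends Message {
  private static final long serialVersionUID = 1L;
  private int remoteNode;   // address of the node to query, e.g. 2

  public FetchRemoteParamReq() {
    super(Performative.REQUEST);
  }

  public int getRemoteNode() {
    return remoteNode;
  }

  public void setRemoteNode(int remoteNode) {
    this.remoteNode = remoteNode;
  }
}

processRequest() could then match on FetchRemoteParamReq instead of DatagramReq and pass msg.getRemoteNode() to req.setTo(...).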

Related

Akka Streams - a Merge stage sometimes pushes downstream only once all upstream sources have pushed to it

I have been experimenting with writing a custom Source in Java. Specifically, I wrote a Source that takes elements from a BlockingQueue. I'm aware of Source.queue, however I don't know how to get the materialized value if I connect several of those to a Merge stage. Anyway, here's the implementation:
public class TestingSource extends GraphStage<SourceShape<String>> {
private static final ExecutorService executor = Executors.newCachedThreadPool();
public final Outlet<String> out = Outlet.create("TestingSource.out");
private final SourceShape<String> shape = SourceShape.of(out);
private final BlockingQueue<String> queue;
private final String identifier;
public TestingSource(BlockingQueue<String> queue, String identifier) {
this.queue = queue;
this.identifier = identifier;
}
@Override
public SourceShape<String> shape() {
return shape;
}
@Override
public GraphStageLogic createLogic(Attributes inheritedAttributes) {
return new GraphStageLogic(shape()) {
private AsyncCallback<BlockingQueue<String>> callBack;
{
setHandler(out, new AbstractOutHandler() {
@Override
public void onPull() throws Exception {
String string = queue.poll();
if (string == null) {
System.out.println("TestingSource " + identifier + " no records in queue, invoking callback");
executor.submit(() -> callBack.invoke(queue)); // necessary, otherwise blocks upstream
} else {
System.out.println("TestingSource " + identifier + " found record during pull, pushing");
push(out, string);
}
}
});
}
@Override
public void preStart() {
callBack = createAsyncCallback(queue -> {
String string = null;
while (string == null) {
Thread.sleep(100);
string = queue.poll();
}
push(out, string);
System.out.println("TestingSource " + identifier + " found record during callback, pushed");
});
}
};
}
}
This example works, so it seems that my Source is working properly:
private static void simpleStream() throws InterruptedException {
BlockingQueue<String> queue = new LinkedBlockingQueue<>();
Source.fromGraph(new TestingSource(queue, "source"))
.to(Sink.foreach(record -> System.out.println(record)))
.run(materializer);
Thread.sleep(2500);
queue.add("first");
Thread.sleep(2500);
queue.add("second");
}
I also wrote an example that connects two of the Sources to a Merge stage:
private static void simpleMerge() throws InterruptedException {
BlockingQueue<String> queue1 = new LinkedBlockingQueue<>();
BlockingQueue<String> queue2 = new LinkedBlockingQueue<>();
final RunnableGraph<?> result = RunnableGraph.fromGraph(GraphDSL.create(
Sink.foreach(record -> System.out.println(record)),
(builder, out) -> {
final UniformFanInShape<String, String> merge =
builder.add(Merge.create(2));
builder.from(builder.add(new TestingSource(queue1, "queue1")))
.toInlet(merge.in(0));
builder.from(builder.add(new TestingSource(queue2, "queue2")))
.toInlet(merge.in(1));
builder.from(merge.out())
.to(out);
return ClosedShape.getInstance();
}));
result.run(materializer);
Thread.sleep(2500);
System.out.println("seeding first queue");
queue1.add("first");
Thread.sleep(2500);
System.out.println("seeding second queue");
queue2.add("second");
}
Sometimes this example works as I expect: it prints "first" after 5 seconds, and then prints "second" after another 5 seconds.
However, sometimes (about 1 in 5 runs) it prints "second" after 10 seconds, and then immediately prints "first". In other words, the Merge stage pushes the strings downstream only when both Sources have pushed something.
The full output looks like this:
TestingSource queue1 no records in queue, invoking callback
TestingSource queue2 no records in queue, invoking callback
seeding first queue
seeding second queue
TestingSource queue2 found record during callback, pushed
second
TestingSource queue2 no records in queue, invoking callback
TestingSource queue1 found record during callback, pushed
first
TestingSource queue1 no records in queue, invoking callback
This phenomenon happens more frequently with MergePreferred and MergePrioritized.
My question is- is this the correct behavior of Merge? If not, what am I doing wrong?
At first glance, blocking the thread with a Thread.sleep in the middle of the stage seems to be at least one of the problems.
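If you do want to keep a custom stage, one non-blocking way to poll inside TestingSource is a stage timer instead of Thread.sleep. A rough sketch, assuming Akka's TimerGraphStageLogic with its scheduleOnce/onTimer hooks (the java.time.Duration overload exists only in newer Akka versions):

@Override
public GraphStageLogic createLogic(Attributes inheritedAttributes) {
  return new TimerGraphStageLogic(shape()) {
    {
      setHandler(out, new AbstractOutHandler() {
        @Override
        public void onPull() {
          pollOrSchedule();
        }
      });
    }

    @Override
    public void onTimer(Object timerKey) {
      pollOrSchedule();
    }

    // Push a queued element if one is available, otherwise re-check shortly without blocking.
    private void pollOrSchedule() {
      String s = queue.poll();
      if (s != null) {
        push(out, s);
      } else {
        scheduleOnce("poll", java.time.Duration.ofMillis(100));
      }
    }
  };
}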
Anyway, I think it would be way easier to use Source.queue, as you mention in the beginning of your question. If the issue is to extract the materialized value when using the GraphDSL, here's how you do it:
final Source<String, SourceQueueWithComplete<String>> source = Source.queue(100, OverflowStrategy.backpressure());
final Sink<Object, CompletionStage<akka.Done>> sink = Sink.ignore();
final RunnableGraph<Pair<SourceQueueWithComplete<String>, CompletionStage<akka.Done>>> g =
RunnableGraph.fromGraph(
GraphDSL.create(
source,
sink,
Keep.both(),
(b, src, snk) -> {
b.from(src).to(snk);
return ClosedShape.getInstance();
}
)
);
g.run(materializer); // this gives you back the queue
More info on this in the docs.

Tomcat executor with runnable while(true) loop only runs once. Why?

I am trying to implement a javax.mail.event.MessageCountListener in Tomcat. When I start the application the contextInitialized method seems to run and the mailbox is read. However, I see the log message "Idling" only once. I would expect that it would idle constantly and invoke the AnalyzerService() when an email is received or deleted.
Update: I found that the idle() method is not returning. It runs until the com.sun.mail.iap.ResponseInputStream.readResponse(ByteArray ba) method, where it gets into a while loop that it never gets out of.
Am I misusing the idle() method for something I should not do? Is this a bug in the com.sun.mail.iap package?
The AnalyzerContextListener.java:
import com.sun.mail.imap.IMAPStore;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.mail.Folder;
import javax.mail.MessagingException;
import javax.mail.Session;
import javax.mail.event.MessageCountListener;
import javax.servlet.ServletContext;
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
public class AnalyzerContextListener implements ServletContextListener {
private ExecutorService executorService;
private final String username = "myemail@gmail.com";
private final String password = "mypassword";
private final String mailhost = "imap.gmail.com";
private final String foldername = "INBOX";
@Override
public void contextInitialized(ServletContextEvent sce) {
final ServletContext servletContext = sce.getServletContext();
executorService = Executors.newFixedThreadPool(3);
Session session = Session.getInstance(new Properties());
try {
final IMAPStore store = (IMAPStore) session.getStore("imaps");
store.connect(mailhost, username, password);
final Folder folder = store.getFolder(foldername);
if (folder == null) {
servletContext.log("Folder in mailbox bestaat niet.");
return;
}
folder.open(Folder.READ_ONLY);
MessageCountListener countListener = new AnalyzerService();
folder.addMessageCountListener(countListener);
Runnable runnable = new Runnable() {
@Override
public void run() {
while (true) {
try {
servletContext.log("Aantal berichten in folder: " + folder.getMessageCount());
servletContext.log("Idling");
store.idle();
} catch (MessagingException ex) {
servletContext.log(ex.getMessage());
return;
}
}
}
};
executorService.execute(runnable);
servletContext.log("Executorservice gestart");
} catch (MessagingException ex) {
servletContext.log(ex.getMessage());
}
}
@Override
public void contextDestroyed(ServletContextEvent sce) {
sce.getServletContext().log("Context wordt vernietigd");
executorService.shutdown();
sce.getServletContext().log("Executorservice gestopt");
}
}
The AnalyzerService.java:
import javax.mail.Message;
import javax.mail.MessagingException;
import javax.mail.event.MessageCountEvent;
import javax.mail.event.MessageCountListener;
class AnalyzerService implements MessageCountListener {
public AnalyzerService() {
}
@Override
public void messagesAdded(MessageCountEvent event) {
Message[] addedMessages = event.getMessages();
for (Message message : addedMessages) {
try {
System.out.println(message.getSubject());
} catch (MessagingException ex) {
System.out.println(ex.getMessage());
}
}
}
@Override
public void messagesRemoved(MessageCountEvent event) {
Message[] removedMessages = event.getMessages();
for (Message message : removedMessages) {
try {
System.out.println(message.getSubject());
} catch (MessagingException ex) {
System.out.println(ex.getMessage());
}
}
}
}
while (true) {
try {
servletContext.log("Aantal berichten in folder: " + folder.getMessageCount());
servletContext.log("Idling");
store.idle();
} catch (MessagingException ex) {
servletContext.log(ex.getMessage());
return;
}
}
has exactly three possibilities to run only once. The loop actually ends either:
1. Through the explicit return in case of a MessagingException. Look at your logs: there is either a message or something strange like "null". Consider logging a proper stack trace (.log(String message, Throwable throwable)), since Exception#getMessage() is often empty or not telling you much (see the sketch after this list).
2. Through any unchecked exception. You should notice that in some log, though, since uncaught exceptions via executorService.execute() end up in the nearest uncaught-exception handler, which is generally bad. See "Choose between ExecutorService's submit and ExecutorService's execute".
Or the loop never ends, but stops making progress after it logs "Idling":
3. store.idle() never returns. (Every other line of code could theoretically do that as well, e.g. the folder.getMessageCount() call in a second iteration, but that's very unlikely.)
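For possibilities 1 and 2, a sketch of how the runnable could make failures visible with full stack traces; variable names are reused from the question and this is illustrative only:

Runnable runnable = new Runnable() {
  @Override
  public void run() {
    try {
      while (true) {
        servletContext.log("Aantal berichten in folder: " + folder.getMessageCount());
        servletContext.log("Idling");
        store.idle();
      }
    } catch (MessagingException ex) {
      // log(String, Throwable) keeps the stack trace, unlike getMessage()
      servletContext.log("Mail loop terminated", ex);
    } catch (RuntimeException ex) {
      // otherwise unchecked exceptions only surface through the executor's uncaught-exception handling
      servletContext.log("Unexpected error in mail loop", ex);
    }
  }
};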
Regarding possibility 3, the documentation says:
Use the IMAP IDLE command (see RFC 2177), if supported by the server, to enter idle mode so that the server can send unsolicited notifications without the need for the client to constantly poll the server. Use a ConnectionListener to be notified of events. When another thread (e.g., the listener thread) needs to issue an IMAP command for this Store, the idle mode will be terminated and this method will return. Typically the caller will invoke this method in a loop.
If the mail.imap.enableimapevents property is set, notifications received while the IDLE command is active will be delivered to ConnectionListeners as events with a type of IMAPStore.RESPONSE. The event's message will be the raw IMAP response string. Note that most IMAP servers will not deliver any events when using the IDLE command on a connection with no mailbox selected (i.e., this method). In most cases you'll want to use the idle method on IMAPFolder.
That sounds like this method is not designed to return any time soon; in your case, never, since you don't issue any commands towards the server after you enter idle mode. Besides that:
folder.idle() could be what you should actually do (the documentation above points to the idle method on IMAPFolder for most cases).
Maybe the documentation is wrong on that point; however, ConnectionListener and MessageCountListener are two different things.
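A rough sketch of what idling on the folder could look like, assuming the folder is a com.sun.mail.imap.IMAPFolder and stays open (illustrative only):

IMAPFolder imapFolder = (IMAPFolder) folder;   // cast needed for the folder-level idle()
while (true) {
  try {
    servletContext.log("Idling on " + imapFolder.getFullName());
    imapFolder.idle();   // returns when the server pushes a notification for this folder
  } catch (MessagingException ex) {
    servletContext.log("Idle loop terminated", ex);
    return;
  }
}

The MessageCountListener registered on the folder is then notified as messages arrive or are removed.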

ChannelFactory method call increases memory

I have a WinForms application which consumes a Windows service, and I use ChannelFactory to connect to the service. The problem is that when I call a service method using the channel, the memory usage increases, and after the method executes the memory does not go down (even after the form is closed). I call GC.Collect but there is no change.
The channel creation class:
public class Channel1
{
List<ChannelFactory> chanelList = new List<ChannelFactory>();
ISales salesObj;
public ISales Sales
{
get
{
if (salesObj == null)
{
ChannelFactory<ISales> saleschannel = new ChannelFactory<ISales>("SalesEndPoint");
chanelList.Add(saleschannel);
salesObj = saleschannel.CreateChannel();
}
return salesObj;
}
}
public void CloseAllChannels()
{
foreach (ChannelFactory chFac in chanelList)
{
chFac.Abort();
((IDisposable)chFac).Dispose();
}
salesObj = null;
}
}
base class
public class Base:Form
{
public Channel1 channelService = new Channel1();
public Channel1 CHANNEL
{
get
{
return channelService;
}
}
}
winform class
Form1:Base
private void btnView_Click(object sender, EventArgs e)
{
DataTable _dt = new DataTable();
try
{
gvAccounts.AutoGenerateColumns = false;
_dt = CHANNEL.Sales.GetDatatable();
gvAccounts.DataSource = _dt;
}
catch (Exception ex)
{
MessageBox.Show("Error Occurred while processing...\n" + ex.Message, "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
}
finally
{
CHANNEL.CloseAllChannels();
_dt.Dispose();
//GC.Collect();
}
}
You're on the right track in terms of using ChannelFactory<T>, but your implementation is a bit off.
ChannelFactory<T> creates a factory for generating channels of type T. This is a relatively expensive operation (as compared to just creating a channel from the existing factory), and is generally done once per life of the application (usually at start). You can then use that factory instance to create as many channels as your application needs.
Generally, once I've created the factory and cached it, when I need to make a call to the service I get a channel from the factory, make the call, and then close/abort the channel.
Using your posted code as a starting point, I would do something like this:
public class Channel1
{
ChannelFactory<ISales> salesChannel;
public ISales Sales
{
get
{
if (salesChannel == null)
{
salesChannel = new ChannelFactory<ISales>("SalesEndPoint");
}
return salesChannel.CreateChannel();
}
}
}
Note that I've replaced the salesObj with salesChannel (the factory). This will create the factory the first time it's called, and create a new channel from the factory every time.
Unless you have a particular requirement to do so, I wouldn't keep track of the different channels, especially if you follow the open/call/close approach.
In your form, it'd look something like this:
private void btnView_Click(object sender, EventArgs e)
{
    DataTable _dt = new DataTable();
    ISales client = null;
    try
    {
        gvAccounts.AutoGenerateColumns = false;
        client = CHANNEL.Sales;
        _dt = client.GetDatatable();
        gvAccounts.DataSource = _dt;
        ((ICommunicationObject)client).Close();
    }
    catch (Exception ex)
    {
        if (client != null)
            ((ICommunicationObject)client).Abort();
        MessageBox.Show("Error Occurred while processing...\n" + ex.Message, "Warning", MessageBoxButtons.OK, MessageBoxIcon.Warning);
    }
}
The code above gets a new ISales channel from the factory in CHANNEL, executes the call, and then closes the channel. If an exception happens, the channel is aborted in the catch block.
I would avoid using Dispose() out of the box on the channels, as the implementation in the framework is flawed and will throw an error if the channel is in a faulted state. If you really want to use Dispose() and force the garbage collection, you can - but you'll have to work around the WCF dispose issue. Google will give you a number of workarounds (google WCF Using for a start).

SilverLight WCF Response does not come back in time

This code is being used to validate whether an email exists in the database. The service returns the values fine, because it was tested with WCF Storm. In the code I am trying to call this method, which returns an object (validationResponse). If validationResponse has a true key, I want to throw the ValidationException. What I think is happening is that Silverlight is making the call async and then moving on to the next line of code. How can I call a WCF method, get its response, and act on it?
public string email
{
get
{
return _email;
}
set
{
vc.emailAddressCompleted += new EventHandler<emailAddressCompletedEventArgs>(vc_emailAddressCompleted);
vc.emailAddressAsync(value);
//Fails here with a null reference to vr (vr is declared further up)
if (vr.isValid == false)
{
throw new ValidationException(vr.validationErrors);
}
this._email = value;
}
}
void vc_emailAddressCompleted(object sender, emailAddressCompletedEventArgs e)
{
//this never gets executed
this.vr = e.Result;
}
In Silverlight all service calls are made asynchronously; in other words, you can't call the service synchronously and wait for the reply. So what is happening in your code is that vr is null and the exception is being thrown before the service call returns. You could change your code to something like this:
vc.emailAddressCompleted +=
new EventHandler<emailAddressCompletedEventArgs>(vc_emailAddressCompleted);
vc.emailAddressAsync(value);
//this while loop is not necessary unless you really want to wait
//until the service returns
while(vr==null)
{
//wait here or do something else until you get a return
Thread.Sleep(300);
}
//if you got here it means the service returned and no exception was thrown
void vc_emailAddressCompleted(object sender, emailAddressCompletedEventArgs e)
{
//should do some validation here
if (e.Error!=null) throw new Exception(e.Error.ToString());
vr = e.Result;
if (!vr.isValid)
{
throw new ValidationException(vr.validationErrors);
}
_email = value;
}

How to run batched WCF service calls in Silverlight BackgroundWorker

Is there any existing plumbing to run WCF calls in batches in a BackgroundWorker?
Obviously since all Silverlight WCF calls are async - if I run them all in a backgroundworker they will all return instantly.
I just don't want to implement a nasty hack if theres a nice way to run service calls and collect the results.
Doesn't matter what order they are done in
All operations are independent
I'd like to have no more than 5 items running at once
Edit: I've also noticed (when using Fiddler) that no more than about 7 calls are able to be sent at any one time. Even when running out-of-browser this limit applies. Is this due to my default browser settings, or is it configurable as well? Obviously it's a poor man's solution (and not suitable for what I want), but it's something I'll probably need to take account of to make sure the rest of my app remains responsive if I'm running this as a background task and don't want it using up all my connections.
I think your best bet would be to have your main thread put service request items into a Queue that is shared with a BackgroundWorker thread. The BackgroundWorker can then read from the Queue, and when it detects a new item, initiate the async WCF service request, and setup to handle the AsyncCompletion event. Don't forget to lock the Queue before you call Enqueue() or Dequeue() from different threads.
Here is some code that suggests the beginning of a solution:
using System;
using System.Collections.Generic;
using System.ComponentModel;
namespace MyApplication
{
public class RequestItem
{
public string RequestItemData { get; set; }
}
public class ServiceHelper
{
private BackgroundWorker _Worker = new BackgroundWorker();
private Queue<RequestItem> _Queue = new Queue<RequestItem>();
private List<RequestItem> _ActiveRequests = new List<RequestItem>();
private const int _MaxRequests = 3;
public ServiceHelper()
{
_Worker.DoWork += DoWork;
_Worker.RunWorkerAsync();
}
private void DoWork(object sender, DoWorkEventArgs e)
{
while (!_Worker.CancellationPending)
{
// TBD: Add a N millisecond timer here
// so we are not constantly checking the Queue
// Don't bother checking the queue
// if we already have MaxRequests in process
int _NumRequests = 0;
lock (_ActiveRequests)
{
_NumRequests = _ActiveRequests.Count;
}
if (_NumRequests >= _MaxRequests)
continue;
// Check the queue for new request items
RequestItem item = null;
lock (_Queue)
{
    if (_Queue.Count > 0)
        item = _Queue.Dequeue();
}
if (item == null)
continue;
// We found a new request item!
lock (_ActiveRequests)
{
_ActiveRequests.Add(item);
}
// TBD: Initiate an async service request,
// something like the following:
try
{
MyServiceRequestClient proxy = new MyServiceRequestClient();
proxy.RequestCompleted += OnRequestCompleted;
proxy.RequestAsync(item);
}
catch (Exception ex)
{
}
}
}
private void OnRequestCompleted(object sender, RequestCompletedEventArgs e)
{
try
{
if (e.Error != null || e.Cancelled)
return;
RequestItem item = e.Result;
lock (_ActiveRequests)
{
_ActiveRequests.Remove(item);
}
}
catch (Exception ex)
{
}
}
public void AddRequest(RequestItem item)
{
lock (_Queue)
{
_Queue.Enqueue(item);
}
}
}
}
Let me know if I can offer more help.
