I have been trying to implement a distributed system that stores data on the node that issues a get command. My idea was to use the KeyAffinityService to find a key associated with the local node, store such a key before each put command, and then use the grouping API so that this key makes the value land on the local node.
The code I have is the following:
SimpleCache.java:
import java.util.*;
import java.util.concurrent.*;
import org.infinispan.Cache;
import org.infinispan.affinity.*;
import org.infinispan.manager.*;
//Used to store the key for the local node
class locaddr{ static String nut; static String sim;}
public class SimpleCache {
public void start() throws Exception {
EmbeddedCacheManager manager = new DefaultCacheManager("democluster.xml");
Cache<String, String> cache = manager.getCache();
String command = "";
int ticketid = 1;
Scanner scan = new Scanner(System.in);
cache.start();
manager.start();
// Create the affinity service to find the Key for the manager
KeyAffinityService keyAffinityService = KeyAffinityServiceFactory.newLocalKeyAffinityService(
cache,
(KeyGenerator)new RndKeyGenerator(),
Executors.newSingleThreadExecutor(),
100);
//Find key associated with local node
locaddr.nut = Objects.toString(keyAffinityService.getKeyForAddress(manager.getAddress()));
log("Start of program.....");
log("Input one of following commands:");
log("book");
log("pay");
log("list");
log("locaddr");
log("quit");
while (true){
command = scan.nextLine();
if (command.equals("book")) {
log("Enter name ");
String name = scan.nextLine();
locaddr.sim = Objects.toString(keyAffinityService.getCollocatedKey(locaddr.nut));
cache.put(Integer.toString(ticketid)+manager.getAddress().toString(),name);
log("Booked ticket " + name);
ticketid++;
}
else if (command.equals("pay")) {
log("Enter ticket number ");
String id = scan.nextLine();
log("Display ticket:"+cache.get(id));
String ticket = cache.remove(id);
log("Checked out ticket " + ticket);
}
else if (command.equals("list")) {
Set<String> set = cache.keySet();
for (String ticket: set) {
log(ticket + " " + cache.get(ticket));
}
}
else if (command.equals("quit")) {
cache.clear();
cache.stop();
manager.stop();
keyAffinityService.stop();
log("Bye");
break;
}
else if (command.equals("locaddr")) {
log("local key for manager is: "+locaddr.nut);
log("manager address is: " + manager.getAddress());
}
else {
log("Unknown command " + command);
}
}
}
public static void main(String[] args) throws Exception{
new SimpleCache().start();
}
public static void log(String s){
System.out.println(s);
}
}
democluster.xml:
<infinispan xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="urn:infinispan:config:6.0
http://www.infinispan.org/schemas/infinispan-config-6.0.xsd"
xmlns="urn:infinispan:config:6.0">
<global>
<transport>
<properties>
<property name="configurationFile" value="jgroups-tcp.xml" />
</properties>
</transport>
</global>
<default>
<clustering mode="distributed" >
<sync/>
<hash numOwners="1" numSegments="100" capacityFactor="1">
<groups enabled="true">
<grouper class="KXGrouper"/>
</groups>
</hash>
</clustering>
</default>
</infinispan>
KXGrouper.java:
import org.infinispan.distribution.group.Grouper;
public class KXGrouper implements Grouper<String> {
public String computeGroup(String key, String group) {
String g = locaddr.sim;
return g;
}
public Class<String> getKeyType() {
return String.class;
}
}
My implementation is based on Infinispan's simple cache example. However, I am having two main issues:
1.
When I run this code in separate JVMs, it sometimes works, but sometimes when I issue a "book" command (which invokes the collocated-key lookup and the cache put), I get an error saying that another node is no longer part of the cluster. The error looks like this:
Exception in thread "main" java.lang.IllegalStateException: Address SRI-PC-4630 is no longer in the cluster
at org.infinispan.affinity.KeyAffinityServiceImpl.getKeyForAddress(KeyAffinityServiceImpl.java:107)
at org.infinispan.affinity.KeyAffinityServiceImpl.getCollocatedKey(KeyAffinityServiceImpl.java:91)
at SimpleCache.start(SimpleCache.java:77)
at SimpleCache.main(SimpleCache.java:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120)
Here "SRI-PC-4630" is the address of the manager running in another JVM. I have been looking online for a solution to this issue, but no one seems to have a similar problem.
2.
If I do get it running, issue a "book", and have a key/value stored on the local node, I cannot access it from any other node.
I have been trying to fix this but to no avail and any advice or recommendation would be greatly appreciated.
Related
As we know, in TestNG we can have a method run in parallel by multiple threads:
@Test(invocationCount = 5, threadPoolSize = 5)
public void testMethod()
{
///code to generate load
}
Now we want to do the same thing, but with 5 sets of parameters, one per thread invocation, in parallel.
You would need to use data providers in TestNG for doing this.
Here's a sample that shows this
//This method will provide data to any test method that declares that its Data Provider
//is named "test1"
@DataProvider(name = "test1")
public Object[][] createData1() {
return new Object[][] {
{ "Cedric", new Integer(36) },
{ "Anne", new Integer(37)},
};
}
//This test method declares that its data should be supplied by the Data Provider
//named "test1"
@Test(dataProvider = "test1")
public void verifyData1(String n1, Integer n2) {
System.out.println(n1 + " " + n2);
}
Now, in order to enable parallel execution, make sure you add the attribute data-provider-thread-count and set it to the desired value. The default value for this attribute is 10. It lets you control the thread-pool size for data providers in TestNG.
For e.g.,
<suite name="Unit-test-suite" verbose="2" data-provider-thread-count="15">
Take a look at the official TestNG documentation for the details.
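One detail that is easy to miss (worth double-checking against the TestNG docs for your version): the provider itself must opt in with @DataProvider(name = "test1", parallel = true) for its rows to be dispatched on that pool; data-provider-thread-count only sizes the pool. Conceptually, TestNG then just drains the Object[][] onto a fixed worker pool. A plain-Java sketch of that model (the names here are illustrative, not TestNG API):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelDataDriver {
    // Stand-in for the @Test method: formats and prints one row of parameters.
    static String verifyData1(String n1, Integer n2) {
        String line = n1 + " " + n2;
        System.out.println(line);
        return line;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for the @DataProvider: one Object[] per invocation.
        Object[][] rows = { { "Cedric", 36 }, { "Anne", 37 } };

        // Analogue of data-provider-thread-count: the worker-pool size.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        List<Future<String>> results = new ArrayList<>();
        for (Object[] row : rows) {
            results.add(pool.submit(() -> verifyData1((String) row[0], (Integer) row[1])));
        }
        for (Future<String> f : results) f.get(); // wait for every invocation
        pool.shutdown();
    }
}
```

With two rows and a pool of two threads, both invocations run concurrently; the print order is therefore not deterministic, just as with a parallel data provider.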
Yesterday I asked on this site about an exception I didn't know how to handle, and luckily found an answer very quickly. Well, here's another one. The class is the same as in the other question: when I download messages from the IMAP server and load them into an array, I get an IMAPAddress exception. This time I really don't know what to do. I don't want to use POP3, because I just want to view the emails stored on the server, not manage them. Thank you for your attention.
Here's the code:
ScaricaEmail(String host,String porta,String user,String pw)
{
this.host=host;
this.porta=porta;
nick=user;
this.pw=pw;
}
public static Object[][] checkMail(String cartella)
{
Object[][] tabella;
try
{
Properties propvals = new Properties();
propvals.put("mail.imaps.host", host);
propvals.put("mail.imaps.port", porta);
propvals.put("mail.imaps.starttls.enable", "true");
propvals.put("mail.imaps.ssl.trust", "*");
Session emailSessionObj = Session.getDefaultInstance(propvals);
//Create IMAP store object and connect with the server
Store storeObj = emailSessionObj.getStore("imaps");
storeObj.connect(host, nick, pw);
//Create folder object and open it in read-only mode
Folder emailFolderObj = storeObj.getFolder(cartella);
emailFolderObj.open(Folder.READ_ONLY);
//Fetch messages from the folder and print in a loop
Message[] messageobjs = emailFolderObj.getMessages();
tabella=new Object[messageobjs.length][6];
for(int i = 1; i <= messageobjs.length; i++)
{
Message m = messageobjs[i-1];
String mimeType = m.getContentType();
Object[] risultati=new String[6];
risultati[i-1]=m.getFrom()[i-1]; //Here's where I get the Exception
risultati[i-1]=m.getSubject();
risultati[i-1]=getTestoDaMessaggio(m);
risultati[i-1]=getContoAllegati(m);
risultati[i-1]=m.getSentDate();
risultati[i-1]=0;
tabella[i-1]=risultati;
}
emailFolderObj.close(false);
storeObj.close();
}
catch (Exception exp)
{
exp.printStackTrace();
tabella=null;
}
return tabella;
}
Here's the output:
java.lang.ArrayStoreException: com.sun.mail.imap.protocol.IMAPAddress
at clientemail.ScaricaEmail.checkMail(ScaricaEmail.java:57)
at clientemail.Home.initComponents(Home.java:240)
at clientemail.Email$4.actionPerformed(Email.java:167)
at javax.swing.AbstractButton.fireActionPerformed(AbstractButton.java:2022)
at javax.swing.AbstractButton$Handler.actionPerformed(AbstractButton.java:2346)
at javax.swing.DefaultButtonModel.fireActionPerformed(DefaultButtonModel.java:402)
at javax.swing.DefaultButtonModel.setPressed(DefaultButtonModel.java:259)
at javax.swing.plaf.basic.BasicButtonListener.mouseReleased(BasicButtonListener.java:252)
at java.awt.Component.processMouseEvent(Component.java:6525)
at javax.swing.JComponent.processMouseEvent(JComponent.java:3324)
at java.awt.Component.processEvent(Component.java:6290)
at java.awt.Container.processEvent(Container.java:2234)
at java.awt.Component.dispatchEventImpl(Component.java:4881)
at java.awt.Container.dispatchEventImpl(Container.java:2292)
at java.awt.Component.dispatchEvent(Component.java:4703)
at java.awt.LightweightDispatcher.retargetMouseEvent(Container.java:4898)
at java.awt.LightweightDispatcher.processMouseEvent(Container.java:4533)
at java.awt.LightweightDispatcher.dispatchEvent(Container.java:4462)
at java.awt.Container.dispatchEventImpl(Container.java:2278)
at java.awt.Window.dispatchEventImpl(Window.java:2750)
at java.awt.Component.dispatchEvent(Component.java:4703)
at java.awt.EventQueue.dispatchEventImpl(EventQueue.java:758)
at java.awt.EventQueue.access$500(EventQueue.java:97)
at java.awt.EventQueue$3.run(EventQueue.java:709)
Thank you.
Object[] risultati=new String[6];
risultati[i-1]=m.getFrom()[i-1]; //Here's where I get the Exception
Message.getFrom() returns an Address[], and you are reading an element of it, which is an Address. Address is not a java.lang.String, so it cannot be stored in a String[]. Note that even though the variable risultati has static type Object[], the array it points to was created as new String[6], so the runtime rejects the store with an ArrayStoreException.
To make it work you could do something like:
risultati[i-1]=String.valueOf(m.getFrom()[i-1]);
Or you can change the array type:
Object[] risultati=new Object[6];
In general, you should avoid copying values into an array at all and just use the Message object directly.
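The pitfall can be reproduced without JavaMail at all: Java arrays are covariant, so the compiler accepts the assignment, but the runtime checks every store against the element type of the actual array (here a String[]):

```java
public class ArrayStoreDemo {
    public static void main(String[] args) {
        // Runtime type is String[], even though the static type is Object[];
        // this mirrors the question's "Object[] risultati = new String[6]".
        Object[] values = new String[3];
        values[0] = "ok"; // fine: a String may be stored in a String[]
        try {
            values[1] = Integer.valueOf(42); // compiles, but the store fails at runtime
        } catch (ArrayStoreException e) {
            System.out.println("caught ArrayStoreException: " + e.getMessage());
        }
    }
}
```

Declaring the array as new Object[6] (or skipping the intermediate array entirely) removes the runtime restriction.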
Does Hazelcast always block when hazelcast.initial.min.cluster.size is not reached? If not, in which situations does it not?
Details:
I use the following code to initialize hazelcast:
Config cfg = new Config();
cfg.setProperty("hazelcast.initial.min.cluster.size",
Integer.toString(minimumInitialMembersInHazelCluster)); // 2 in this case
cfg.getGroupConfig().setName(clusterName);
NetworkConfig network = cfg.getNetworkConfig();
JoinConfig join = network.getJoin();
join.getMulticastConfig().setEnabled(false);
join.getTcpIpConfig().addMember("192.168.0.1").addMember("192.168.0.2").
addMember("192.168.0.3").addMember("192.168.0.4").
addMember("192.168.0.5").addMember("192.168.0.6").
addMember("192.168.0.7").setRequiredMember(null).setEnabled(true);
network.getInterfaces().setEnabled(true).addInterface("192.168.0.*");
join.getMulticastConfig().setMulticastTimeoutSeconds(MCSOCK_TIMEOUT/100);
hazelInst = Hazelcast.newHazelcastInstance(cfg);
distrDischargedTTGs = hazelInst.getList(clusterName);
and get log messages like
debug: starting Hazel pullExternal from Hazelcluster with 1 members.
Does that definitely mean another member had already joined and left? It does not look like that from the log files of the other instance. Hence I wonder whether there are situations where hazelInst = Hazelcast.newHazelcastInstance(cfg); does not block even though it is the only instance in the Hazelcast cluster.
newHazelcastInstance blocks until the cluster has the required number of members.
See the code below for how it is implemented:
private static void awaitMinimalClusterSize(HazelcastInstanceImpl hazelcastInstance, Node node, boolean firstMember)
throws InterruptedException {
final int initialMinClusterSize = node.groupProperties.INITIAL_MIN_CLUSTER_SIZE.getInteger();
while (node.getClusterService().getSize() < initialMinClusterSize) {
try {
hazelcastInstance.logger.info("HazelcastInstance waiting for cluster size of " + initialMinClusterSize);
//noinspection BusyWait
Thread.sleep(TimeUnit.SECONDS.toMillis(1));
} catch (InterruptedException ignored) {
}
}
if (initialMinClusterSize > 1) {
if (firstMember) {
node.partitionService.firstArrangement();
} else {
Thread.sleep(TimeUnit.SECONDS.toMillis(3));
}
hazelcastInstance.logger.info("HazelcastInstance starting after waiting for cluster size of "
+ initialMinClusterSize);
}
}
If you set logging to debug, you can perhaps see better what is happening. Member joins and leaves should already be visible at the info level.
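For intuition, the loop above is just a poll-until-size busy wait: the call cannot return while the supplier of the cluster size reports fewer members than the minimum. A minimal plain-Java sketch of the same pattern, with the cluster size abstracted behind a supplier (all names here are illustrative, not Hazelcast API):

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.IntSupplier;

public class MinSizeWait {
    // Simplified analogue of Hazelcast's awaitMinimalClusterSize loop:
    // poll the current cluster size until it reaches the minimum.
    static void awaitMinimumSize(IntSupplier clusterSize, int min) throws InterruptedException {
        while (clusterSize.getAsInt() < min) {
            Thread.sleep(10); // Hazelcast sleeps 1 second per iteration
        }
    }

    public static void main(String[] args) throws Exception {
        AtomicInteger size = new AtomicInteger(1); // this instance is alone so far
        Thread joiner = new Thread(() -> {
            try { Thread.sleep(50); } catch (InterruptedException ignored) { }
            size.set(2); // a second member "joins"
        });
        joiner.start();
        awaitMinimumSize(size::get, 2); // blocks until size >= 2
        System.out.println("cluster reached minimum size: " + size.get());
        joiner.join();
    }
}
```

So with initial.min.cluster.size set to 2, the only way the call returns with one visible member is if the cluster service briefly counted a member that later left; the loop itself offers no other exit.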
I'm writing a method that outputs to several output streams at once. The way I have it set up right now, there is a LogController, a LogFile and a LogConsole; the latter two are implementations of the Log interface.
What I'm trying to do right now is add a method to the LogController that attaches any implementation of the Log interface.
How I want to do this is as follows: in the LogController I have an associative array in which I store pointers to Log objects. When the writeOut method of the LogController is called, I want it to iterate over the elements of the array and call their writeOut methods too. The latter I can do, but the former is proving to be difficult.
Mage/Utility/LogController.d
module Mage.Utility.LogController;
import std.stdio;
interface Log {
public void writeOut(string s);
}
class LogController {
private Log*[string] m_Logs;
public this() {
}
public void attach(string name, ref Log l) {
foreach (string key; m_Logs.keys) {
if (name is key) return;
}
m_Logs[name] = &l;
}
public void writeOut(string s) {
foreach (Log* log; m_Logs) {
log.writeOut(s);
}
}
}
Mage/Utility/LogFile.d
module Mage.Utility.LogFile;
import std.stdio;
import std.datetime;
import Mage.Utility.LogController;
class LogFile : Log {
private File fp;
private string path;
public this(string path) {
this.fp = File(path, "a+");
this.path = path;
}
public void writeOut(string s) {
this.fp.writefln("[%s] %s", this.timestamp(), s);
}
private string timestamp() {
return Clock.currTime().toISOExtString();
}
}
I've already tried multiple things with the attach function, and none of them work. The build fails with the following error:
Mage\Root.d(0,0): Error: function Mage.Utility.LogController.LogController.attach (string name, ref Log l) is not callable using argument types (string, LogFile)
This is the incriminating function:
public void initialise(string logfile = DEFAULT_LOG_FILENAME) {
m_Log = new LogController();
LogFile lf = new LogFile(logfile);
m_Log.attach("Log File", lf);
}
Can anyone tell me where I'm going wrong here? I'm stumped and I haven't been able to find the answer anywhere. I've tried a multitude of different solutions and none of them work.
Classes and interfaces in D are reference types, so Log* is redundant - remove the *. Similarly, there is no need to use ref in ref Log l - that's like taking a pointer by reference in C++.
This is the cause of the error message you posted - variables passed by reference must match in type exactly. Removing the ref should solve the error.
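Applying both changes, the container and the attach signature would look roughly like this (an untested sketch of the fix, in the question's own terms):

```d
class LogController {
    private Log[string] m_Logs; // Log is already a reference type; no pointer needed

    public void attach(string name, Log l) { // no ref: the class reference is copied
        if (name in m_Logs) return;          // idiomatic associative-array membership test
        m_Logs[name] = l;
    }

    public void writeOut(string s) {
        foreach (log; m_Logs) {
            log.writeOut(s);
        }
    }
}
```

With the ref removed, the call m_Log.attach("Log File", lf) also works, because a LogFile implicitly converts to its Log interface.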
I am working with Mule's <cxf:proxy-service> and need to extract the web-service method name to attach to the message for later use.
We have a service proxy class implementing the Callable interface. Initially we tried to get the operation name like this:
public Object onCall(MuleEventContext eventContext) throws Exception {
try {
MuleMessage inboundMessage = eventContext.getMessage();
Set<String> props = inboundMessage.getInvocationPropertyNames();
System.out.println("CXF invocation properties ==> " + props);
System.out.println("CXF invocation property ==> " + inboundMessage.getInvocationProperty("cxf_operation"));
but the above code gives an incorrect operation name (we have 4 operations in the service, and it always returns the 2nd operation's name). Below is the Mule flow used for this:
<flow name="proxyService">
<http:inbound-endpoint address="${some.address}"
exchange-pattern="request-response">
<cxf:proxy-service wsdlLocation="classpath:abc.wsdl"
namespace="http://namespace"
service="MyService">
</cxf:proxy-service>
</http:inbound-endpoint>
<component class="com.services.MyServiceProxy" />
</flow>
So I resorted to writing an inbound CXF interceptor to extract the operation name. I wrote the interceptor below, which works fine with <cxf:jaxws-service> but not with <cxf:proxy-service>.
Here is my interceptor:
public class GetCXFOperation extends AbstractPhaseInterceptor<Message> {
public GetCXFOperation() {
super(Phase.PRE_INVOKE);
}
@Override
public void handleMessage(Message message) throws Fault {
Exchange exchange = message.getExchange();
Endpoint ep = exchange.get(Endpoint.class);
OperationInfo op = exchange.get(OperationInfo.class);
if(op != null){
System.out.println("Operation Name: " + op.getName().getLocalPart());
} else{
Object nameProperty = exchange.get("org.apache.cxf.resource.operation.name");
if(nameProperty != null)
System.out.println(nameProperty.toString());
}
}
}
How can I extract the operation name with <cxf:proxy-service>? Is there an easy Mule way of getting the correct answer? Or is there a different phase in which I should be invoking my interceptor? Which phases work with <cxf:proxy-service>?