Stateless Session Beans Identity with @EJB and @Inject - ejb-3.1

I have been looking into section 3.4.7.2 of the EJB 3.2 specification lately and have done some tests.
The specification:
@EJB Cart cart1;
@EJB Cart cart2;
… if (cart1.equals(cart1)) { // this test must return true ...}
… if (cart1.equals(cart2)) { // this test must also return true ...}
The equals method always returns true when used to compare references to the same business interface type of the same stateless session bean.
The specification explicitly cites the @EJB annotation, so I ran some tests and could confirm the identity assumption: if (cart1.equals(cart2)) always returns true.
Because @Inject is very often presented as working the same way as @EJB, I tried the same example as above but with @Inject. In that case if (cart1.equals(cart2)) always returns false.
I was wondering whether anyone has comments on that.
The code for test purposes:
public abstract class FormatOutputWithBeansIdentity extends HttpServlet {

    protected void formatOutput(final PrintWriter out, SLSBLocalView beanA, SLSBLocalView beanB) throws IllegalStateException {
        ...;
        out.println("<br>beanA and beanB are equal : " + checkIfEqual(beanA, beanB) + "<br>");
        out.println("<br>beanA and beanA are equal : " + checkIfEqual(beanA, beanA) + "<br>");
    }

    private Boolean checkIfEqual(SLSBLocalView beanA, SLSBLocalView beanB) {
        // The equals method always returns true when used to compare references
        // to the same business interface type of the same stateless session bean.
        return beanA.equals(beanB);
    }
}
@WebServlet(name = "ServletDemo1", urlPatterns = {"/ServletDemo1"})
public class ServletDemo1 extends FormatOutputWithBeansIdentity {

    @EJB
    SLSBLocalView beanA;

    @EJB
    SLSBLocalView beanB;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try (PrintWriter out = response.getWriter()) {
            ...
            out.println("<h1>Test Session Object Identity Using @EJB</h1>");
            formatOutput(out, beanA, beanB);
            ...
        }
    }
}
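For comparison, here is a minimal sketch of the @Inject variant described above. It assumes a CDI-enabled web module (a beans.xml on the classpath); the servlet name ServletDemo2 is illustrative, not taken from the original test.
@WebServlet(name = "ServletDemo2", urlPatterns = {"/ServletDemo2"})
public class ServletDemo2 extends FormatOutputWithBeansIdentity {

    // With @Inject the container hands out CDI client proxies; in the tests
    // described above, checkIfEqual(beanA, beanB) returned false here.
    @Inject
    SLSBLocalView beanA;

    @Inject
    SLSBLocalView beanB;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        try (PrintWriter out = response.getWriter()) {
            out.println("<h1>Test Session Object Identity Using @Inject</h1>");
            formatOutput(out, beanA, beanB);
        }
    }
}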

Related

Apache Camel Generic Router - pass exchange properties to static class methods

I am trying to create a generic router whose processor and other attributes are populated from a static class. Here is sample code.
public class GenericRouter extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("direct:generic-route")
            .process(Util.getProcessor("${exchangeProperty[processKey]}"))
            .toD(Util.getUrl("${exchangeProperty[urlKey]}"));
    }
}
public class Util {

    static Map<String, Object> routerResources;

    static {
        // load routerResources
    }

    public static Processor getProcessor(String processorKey) {
        return (Processor) routerResources.get(processorKey);
    }

    public static Processor getUrl(String urlKey) {
        return (String) routerResources.get(urlKey);
    }
}
The generic router is expected to make a REST call. The properties "urlKey" and "processorUrl" are already available on the exchange. I am finding it difficult to pass exchange properties to the static Util class methods.
If you want to access properties of an exchange in plain Java you can use .process or .exchange. If you need to access the body or headers you can use e.getMessage().getBody() and e.getMessage().getHeader().
from("direct:generic-route")
.process( e -> {
String processKey = e.getProperty("processKey", String.class);
Processor processor = Util.getProcessor(processKey);
processor.process(e);
})
.setProperty("targetURL").exchange( e -> {
String urlKey = e.getProperty("urlKey", String.class);
return Util.getUrl(urlKey);
})
.toD("${exchangeProperty.targetURL}");
Also make sure you fix the return type of this method:
public static Processor getUrl(String urlKey){
return (String)routerResources.get(urlKey);
}
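A corrected version would presumably look like this (assuming the map really does hold the URL as a plain String):
public static String getUrl(String urlKey) {
    // the declared return type now matches the String that is actually returned
    return (String) routerResources.get(urlKey);
}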
As a side note, you can also access a map stored in the body, a header or a property through the Simple language.
public class ExampleTest extends CamelTestSupport {

    @Test
    public void example() {
        template.sendBodyAndHeader("direct:example", null, "urlKey", "urlA");
    }

    @Override
    protected RoutesBuilder createRouteBuilder() throws Exception {
        return new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                Map<String, String> urlMap = new HashMap<>();
                urlMap.put("urlA", "direct:pointA");
                urlMap.put("urlB", "direct:pointB");

                from("direct:example")
                    .setProperty("urlMap").constant(urlMap)
                    .log("url: ${exchangeProperty.urlMap['${headers.urlKey}']}");
            }
        };
    }
}

Modify the message body, or otherwise modify some data in a Camel route

What is the best way to make a small modification to some data in a Camel route?
I'm pulling in a BSON document from Mongo. I need to use a timestamp from it in an http call, but I need to convert it from milliseconds to seconds.
I tried setting a header.
.setHeader("test").jsonpath("$.startTime")
Which lets me add the timestamp to the URL with a Simple expression.
.toD("https://test.com/api/markets?resolution=60&start_time=${headers.test}")
But I can't find a way to modify the value of the header.
I also tried using a processor:
.process(new Processor() {
    public void process(Exchange exchange) throws Exception {
        DocumentContext message = JsonPath.parse(exchange.getMessage().getBody());
        String time = message.read("$.startTime").toString();
        time = "111100000";
        // do something with the payload and/or exchange here
        //exchange.getIn().setBody("Changed body");
    }
})
But here the exchange isn't passed back out. I based this on how I used an enrich EIP with an aggregation strategy that returned an Exchange containing the changes I made; this processor doesn't seem to work that way.
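For illustration, a minimal sketch of the enrich-with-aggregation-strategy pattern being referred to (the direct:lookup endpoint and the copied header are placeholders, not the original code):
from("direct:original")
    // enrich calls direct:lookup and merges the reply via the aggregation strategy
    .enrich("direct:lookup", (original, resource) -> {
        // the strategy returns the exchange that continues down the route,
        // which is why changes made here were visible afterwards
        original.getMessage().setHeader("test", resource.getMessage().getBody(String.class));
        return original;
    })
    .log("${headers.test}");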
You can modify the body, a header or a property using a lambda, a processor or a bean. With a processor you need to use the Message.setHeader method to change the value of the header, at least for value types and Strings. Bean methods receive the body value by default, so if you want to pass in a header you'll need to specify it using the Simple language.
import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.RoutesBuilder;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.mock.MockEndpoint;
import org.apache.camel.test.junit4.CamelTestSupport;
import org.junit.Test;
public class SetHeaderTest extends CamelTestSupport {
@Test
public void testGreeting() throws Exception {
MockEndpoint resultMockEndpoint = getMockEndpoint("mock:result");
resultMockEndpoint.expectedMessageCount(3);
template.sendBodyAndHeader("direct:modifyGreetingLambda",
null, "greeting", "Hello");
template.sendBodyAndHeader("direct:modifyGreetingProcessor",
null, "greeting", "Hello");
template.sendBodyAndHeader("direct:modifyGreetingBean",
null, "greeting", "Hello");
resultMockEndpoint.assertIsSatisfied();
}
@Override
protected RoutesBuilder createRouteBuilder() throws Exception {
return new RouteBuilder(){
@Override
public void configure() throws Exception {
from("direct:modifyGreetingLambda")
.routeId("modifyGreetingLambda")
.setHeader("greeting").exchange(exchange -> {
String modifiedGreeting = (String)exchange.getMessage().getHeader("greeting");
modifiedGreeting += " world!";
return modifiedGreeting;
})
.log("${headers.greeting}")
.to("mock:result");
from("direct:modifyGreetingProcessor")
.routeId("modifyGreetingProcessor")
.process(new Processor(){
@Override
public void process(Exchange exchange) throws Exception {
String modifiedGreeting = (String)exchange.getMessage().getHeader("greeting");
modifiedGreeting += " world!";
exchange.getMessage().setHeader("greeting", modifiedGreeting);
}
})
.log("${headers.greeting}")
.to("mock:result");
from("direct:modifyGreetingBean")
.routeId("modifyGreetingBean")
.setHeader("greeting").method(new ModifyGreetingBean(),
"modifyGreeting('${headers.greeting}')")
.log("${headers.greeting}")
.to("mock:result");
}
};
}
public class ModifyGreetingBean {
public String modifyGreeting(String greeting) {
return greeting + " world!";
}
}
}
Aside from these, you can also use expression languages like Simple or Groovy.
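For example, the lambda route above could be reduced to a single Simple expression; this is a sketch using the same greeting header and mock endpoint:
from("direct:modifyGreetingSimple")
    .routeId("modifyGreetingSimple")
    // Simple builds the new header value from the existing one in a single expression
    .setHeader("greeting").simple("${header.greeting} world!")
    .log("${headers.greeting}")
    .to("mock:result");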
In the route you can set the header with the milliseconds value .setHeader("test").jsonpath("$.startTime").
Then in a processor you can retrieve this value:
String milliSecondsValue = (String) exchange.getIn().getHeader("test");
Then you transform the milliSecondsValue to the value you want and you set it back on the exchange:
exchange.getIn().setHeader("test", secondsValue);
After that, call .toD("https://test.com/api/markets?resolution=60&start_time=${header.test}") and it will use the seconds value.
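Putting those steps together, a minimal sketch of the conversion might look like this (it assumes the extracted startTime converts cleanly to a numeric string of milliseconds; the direct:markets endpoint is a placeholder):
from("direct:markets")
    .setHeader("test").jsonpath("$.startTime")
    .process(exchange -> {
        // the header holds milliseconds; convert it to seconds and set it back
        long millis = Long.parseLong(exchange.getIn().getHeader("test", String.class));
        exchange.getIn().setHeader("test", millis / 1000);
    })
    .toD("https://test.com/api/markets?resolution=60&start_time=${header.test}");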

BroadcastProcessFunction Processing Delay

I'm fairly new to Flink and would be grateful for any advice with this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to Kafka topics based on whatever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method, and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
I found this page in the documentation, which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the rules added to the broadcast state via processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem. My first approach using a flatMap would process the events almost instantly. Now that I have changed to a BroadcastProcessFunction, each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time with almost no variation. I made no changes to the rule matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate it if anyone more experienced with Flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
// set up the streaming execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
// read the input from Kafka
DataStream<KafkaEvent> documentStream = env.addSource(
createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");
// Configure the Rules data stream
DataStream<RulesEvent> ruleStream = env.addSource(
new RulesApiHttpSource(
getApiRulesSubdomain(),
getApiBearerToken(),
DataType.DataTypeName.LOGS,
getRulesApiCacheDuration()) // Currently set to 120000
);
MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
// broadcast the rules and create the broadcast state
BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
.broadcast(ruleStateDescriptor);
// extract the resources and attributes
documentStream
.connect(ruleBroadcastStream)
.process(new FanOutLogsRuleMapper()).name("FanOut Stream")
.addSink(createKafkaSink(getDestinationProperties()))
.name("FanOut Sink");
// run the job
env.execute(FanOutJob.class.getName());
}
The custom HTTP source which gets the rules
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);
private final long pollIntervalMillis;
private final String endpoint;
private final String bearerToken;
private final DataType.DataTypeName dataType;
private final RulesApiCaller caller;
private volatile boolean running = true;
public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
this.pollIntervalMillis = pollIntervalMillis;
this.endpoint = endpoint;
this.bearerToken = bearerToken;
this.dataType = dataType;
this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
}
@Override
public void open(Configuration configuration) throws Exception {
// do nothing
}
@Override
public void close() throws IOException {
// do nothing
}
@Override
public void run(SourceContext<RulesEvent> ctx) throws IOException {
while (running) {
if (pollIntervalMillis > 0) {
try {
RulesEvent event = new RulesEvent();
event.setRules(getCurrentRulesList());
event.setDataType(this.dataType);
event.setRetrievedAt(Instant.now());
ctx.collect(event);
Thread.sleep(pollIntervalMillis);
} catch (InterruptedException e) {
running = false;
}
} else if (pollIntervalMillis <= 0) {
cancel();
}
}
}
public List<Rule> getCurrentRulesList() throws IOException {
// call API and get rules
}
@Override
public void cancel() {
running = false;
}
}
The BroadcastProcessFunction
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(FanOutRuleMapper.class);
protected final String RULES_EVENT_NAME = "rulesEvent";
protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
@Override
public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
}
// omitted rules matching logic
}
public class FanOutLogsRuleMapper extends FanOutRuleMapper {
public FanOutLogsRuleMapper() {
super();
}
@Override
public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
for (Rule rule : rulesEvent.getRules()) {
boolean match = false;
// omitted rules matching logic
if (match) {
for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
}
}
}
}
}
Maybe you can give the complete code of the FanOutLogsRuleMapper class; currently the match variable is always false.

Camel return value from external Web Service

I need to invoke an external web service running on WildFly from Camel.
I managed to invoke it using the following route:
public class CamelRoute extends RouteBuilder {

    final String cxfUri =
        "cxf:http://localhost:8080/DemoWS/HelloWorld?" +
        "serviceClass=" + HelloWorld.class.getName();

    @Override
    public void configure() throws Exception {
        from("direct:start")
            .id("wsClient")
            .log("${body}")
            .to(cxfUri + "&defaultOperationName=greet");
    }
}
My question is how to get the return value from the web service invocation. The method used returns a String:
@WebService
public class HelloWorld implements Hello {

    @Override
    public String greet(String s) {
        // TODO Auto-generated method stub
        return "Hello " + s;
    }
}
If the service on WildFly returns a value, then to see it you can do the following:
public class CamelRoute extends RouteBuilder {

    final String cxfUri =
        "cxf:http://localhost:8080/DemoWS/HelloWorld?" +
        "serviceClass=" + HelloWorld.class.getName();

    @Override
    public void configure() throws Exception {
        from("direct:start")
            .id("wsClient")
            .log("${body}")
            .to(cxfUri + "&defaultOperationName=greet").log("${body}");
            // beyond this "to" endpoint you can add as many components as you need to manipulate the response data
    }
}
The second log will print the response that the web service returns. If you need to manipulate the response, or do some routing and transformation with it, then you should look at the type of the response and use an appropriate transformer accordingly.
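For example, here is a minimal sketch of pulling the returned String out of the reply. It assumes Camel 3.x and that the CXF producer in POJO mode wraps the return value in a MessageContentsList; neither detail comes from the original post:
from("direct:start")
    .id("wsClient")
    .to(cxfUri + "&defaultOperationName=greet")
    .process(exchange -> {
        // in POJO mode the reply body is typically a MessageContentsList
        // whose first element is the operation's return value
        Object body = exchange.getMessage().getBody();
        String greeting = (body instanceof org.apache.cxf.message.MessageContentsList)
            ? String.valueOf(((org.apache.cxf.message.MessageContentsList) body).get(0))
            : exchange.getMessage().getBody(String.class);
        exchange.getMessage().setBody(greeting);
    })
    .log("Web service returned: ${body}");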
Hope this helps.

Request Factory GWT editor change isn't persisting related JDO entities

I'm using (and new to) RequestFactory in GWT 2.5, with JDO entities that have a one-to-many relationship, on the App Engine datastore. I've just started using the GWT RequestFactoryEditorDriver to display/edit my objects.
The Driver traverses my objects fine, and displays them correctly. However, when I try to edit a value on the "related" objects, the change doesn't get persisted to the datastore.
When I change b.name in my UI and click "save", I notice that only A's persist() method is called; B's persist() is never called. How do I make the editorDriver fire on both the ARequest and BRequest request contexts? (What I want is for B's InstanceRequest<BProxy, Void> persist() to be called when my edits are to B objects only.)
Also, AFAICT, if I have an editor on BProxy, any object b shown by the editor (and following the Editor contract) should automatically be "context.edit(b)"ed by the Driver to make it mutable. However, in my case "context" is an ARequest, not a BRequest.
Do I have to make a ValueAwareEditor, as mentioned here: GWT Editor framework,
and create a fresh BRequest inside the flush() call and fire it, so that changes to B are persisted separately in a BRequest before the ARequest is fired?
editorDriver.getPaths() gives me:
"bs"
Also, the driver definitely sees the change to B's property, as editorDriver.isChanged() returns true before I fire() the context.
There are no errors on my client-side or server-side logs, and the Annotation Processor runs with no warnings.
Here's how I set up my driver:
editorDriver = GWT.create(Driver.class);
editorDriver.initialize(rf, view.getAEditor());
final ARequest aRequest = rf.ARequest();
final Request<List<AProxy>> findRequest = aRequest.findAByUser(loginInfo.getUserId());
String[] paths = editorDriver.getPaths();
findRequest.with(paths).fire(new Receiver<List<AProxy>>() {
    @Override
    public void onSuccess(List<AProxy> response) {
        AProxy a = response.get(0);
        ARequest aRequest2 = rf.ARequest();
        editorDriver.edit(a, aRequest2);
        aRequest2.persist().using(a);
    }
});
This is how my entities look:
public abstract class PersistentEntity {
public Void persist() {
PersistenceManager pm = getPersistenceManager();
try {
pm.makePersistent(this);
} finally {
pm.close();
}
return null;
}
public Void remove() {
PersistenceManager pm = getPersistenceManager();
try {
pm.deletePersistent(this);
} finally {
pm.close();
}
return null;
}
}
@PersistenceCapable(identityType = IdentityType.APPLICATION)
@Version(strategy=VersionStrategy.VERSION_NUMBER, column="VERSION",
extensions={@Extension(vendorName="datanucleus", key="field-name", value="version")})
public class A extends PersistentEntity {
... (Id, version omitted for brevity)
@Persistent
private String name;
@Persistent
private List<B> bs;
public String getName() {
return name;
}
...
public void setName(String name) {
this.name = name;
}
public List<B> getBs() {
return bs;
}
public void setBs(List<B> bs) {
this.bs = bs;
}
}
... (same annotations as above omitted for brevity)
public class B extends PersistentEntity {
... (Id, version omitted for brevity)
@Persistent
private String name;
public String getName() {
return name;
}
public void setName(String name) {
this.name = name;
}
}
Here are the proxies:
@ProxyFor(A.class)
public interface AProxy extends EntityProxy {
String getName();
List<BProxy> getBs();
void setName(String name);
void setBs(List<BProxy> bs);
}
@ProxyFor(B.class)
public interface BProxy extends EntityProxy {
String getName();
void setName(String name);
}
Here are my service stubs:
@Service(A.class)
public interface ARequest extends RequestContext {
Request<List<AProxy>> findAByUser(String userId);
InstanceRequest<AProxy, Void> persist();
InstanceRequest<AProxy, Void> remove();
}
@Service(B.class)
public interface BRequest extends RequestContext {
Request<List<BProxy>> findB(String key);
InstanceRequest<BProxy, Void> persist();
InstanceRequest<BProxy, Void> remove();
}
Edit:
I've now changed my ARequest interface and service implementation to support a "saveAndReturn" method, so that I can recursively "persist" "a" on the server side:
Request<UserSandboxProxy> saveAndReturn(AProxy aProxy);
I find now that when I "flush" my RequestFactoryEditorDriver, the client-side context object has my new "b.name" value. However, if I call "context.fire()" and inspect my "saveAndReturn" method on the server side, the resulting server-side object "a", just before I "persist" it, doesn't contain the change to "b.name" on any item of the List.
Why could this be happening? How do I debug why this client-side information doesn't go across the wire to the server?
Options I've considered, tried and ruled out:
1) Ensuring the APT has been run, and there are no warnings/errors on Proxy or Service interfaces
2) Ensuring that my proxies do have a valid setter in AProxy for the List
You have to use a session-per-request pattern for RequestFactory to work properly. More details here: https://code.google.com/p/google-web-toolkit/issues/detail?id=7827
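For reference, a minimal sketch of the session-per-request idea with JDO is below. The PMF helper, the thread-local holder and the filter name are assumptions for illustration, not code from the linked issue: the point is that one PersistenceManager is opened per HTTP request, every RequestFactory service call during that request reuses it, and it is closed only after the request completes.
import java.io.IOException;

import javax.jdo.PersistenceManager;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;

// Sketch only: PMF.get() stands in for your singleton PersistenceManagerFactory helper.
public class PersistenceManagerFilter implements Filter {

    private static final ThreadLocal<PersistenceManager> PER_REQUEST_PM = new ThreadLocal<>();

    // Entity/service code would call this instead of opening its own PersistenceManager.
    public static PersistenceManager currentPm() {
        return PER_REQUEST_PM.get();
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        PersistenceManager pm = PMF.get().getPersistenceManager();
        PER_REQUEST_PM.set(pm);
        try {
            // all RequestFactory invocations for this request share the same session
            chain.doFilter(request, response);
        } finally {
            PER_REQUEST_PM.remove();
            pm.close();
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
        // nothing to initialize
    }

    @Override
    public void destroy() {
        // nothing to clean up
    }
}
You would then map such a filter to the RequestFactory servlet path in web.xml so that the editor's flushed changes and persist() run inside the same PersistenceManager.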
