Given the following aspect definition,
@Component
@Aspect
public class DefAspect {

    Logger log = Logger.getLogger("DefAspect");

    @Around("execution(* com.jd.brick.service.OrderService.saveOrder(..))")
    public Object around(ProceedingJoinPoint proceedingJoinPoint) throws Throwable {
        long start = System.currentTimeMillis();
        Object value;
        try {
            value = proceedingJoinPoint.proceed();
        } catch (Throwable throwable) {
            throw throwable;
        } finally {
            long duration = System.currentTimeMillis() - start;
            String msg = String.format("%s.%s took %s ms",
                    proceedingJoinPoint.getSignature().getDeclaringType().getSimpleName(),
                    proceedingJoinPoint.getSignature().getName(),
                    duration);
            log.info(msg);
        }
        return value;
    }
}
Spring will automatically create a proxy object, which logs the duration of each method invocation.
But in some situations the generated proxy object needs to be removed.
In general, a Spring bean can be removed through the BeanDefinitionRegistry:
ConfigurableApplicationContext ctx = SpringApplication.run(ProxyApplication.class, args);
BeanDefinitionRegistry registry = (BeanDefinitionRegistry) ctx.getAutowireCapableBeanFactory();
for (String beanName : ctx.getBeanDefinitionNames()) {
    System.out.println(beanName);
    registry.removeBeanDefinition(beanName);
}
In the AOP case, however, the bean name of the generated proxy does not seem easy to determine.
Any help would be appreciated, thanks in advance.
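For what it's worth, proxied beans can at least be identified before touching the registry. A minimal sketch, assuming Spring's AopUtils and AopProxyUtils helpers:

for (String beanName : ctx.getBeanDefinitionNames()) {
    Object bean = ctx.getBean(beanName);
    if (AopUtils.isAopProxy(bean)) {
        // The proxy is registered under the original bean's name;
        // ultimateTargetClass reveals which class was advised.
        System.out.println(beanName + " -> proxy of " + AopProxyUtils.ultimateTargetClass(bean).getSimpleName());
    }
}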
Related
I'm fairly new to Flink and would be grateful for any advice on this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to Kafka topics based on whichever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
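For context, a minimal sketch of that first approach (fetchRulesFromApi and RuleMatchingFlatMap are illustrative names, not the actual code):

// Rules are fetched once in main() and captured by the function,
// so any rule change requires restarting the job.
List<Rule> rules = fetchRulesFromApi();
documentStream
        .flatMap(new RuleMatchingFlatMap(rules)) // hypothetical FlatMapFunction holding the rules as a field
        .addSink(createKafkaSink(getDestinationProperties()));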
I found this page in the documentation, which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the rules added to the broadcast state using processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem. My first approach using a flatMap would process the events almost instantly. Since switching to a BroadcastProcessFunction, each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time with almost no variation. I made no changes to the rule-matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate it if anyone more experienced in Flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
    // set up the streaming execution environment
    final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);

    // read the input from Kafka
    DataStream<KafkaEvent> documentStream = env.addSource(
            createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");

    // configure the Rules data stream
    DataStream<RulesEvent> ruleStream = env.addSource(
            new RulesApiHttpSource(
                    getApiRulesSubdomain(),
                    getApiBearerToken(),
                    DataType.DataTypeName.LOGS,
                    getRulesApiCacheDuration()) // currently set to 120000
    );

    MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
            "RulesBroadcastState",
            BasicTypeInfo.STRING_TYPE_INFO,
            TypeInformation.of(new TypeHint<RulesEvent>() {
            }));

    // broadcast the rules and create the broadcast state
    BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
            .broadcast(ruleStateDescriptor);

    // extract the resources and attributes
    documentStream
            .connect(ruleBroadcastStream)
            .process(new FanOutLogsRuleMapper()).name("FanOut Stream")
            .addSink(createKafkaSink(getDestinationProperties()))
            .name("FanOut Sink");

    // run the job
    env.execute(FanOutJob.class.getName());
}
The custom HTTP source which fetches the rules:
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {

    private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);

    private final long pollIntervalMillis;
    private final String endpoint;
    private final String bearerToken;
    private final DataType.DataTypeName dataType;
    private final RulesApiCaller caller;
    private volatile boolean running = true;

    public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
        this.pollIntervalMillis = pollIntervalMillis;
        this.endpoint = endpoint;
        this.bearerToken = bearerToken;
        this.dataType = dataType;
        this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
    }

    @Override
    public void open(Configuration configuration) throws Exception {
        // do nothing
    }

    @Override
    public void close() throws IOException {
        // do nothing
    }

    @Override
    public void run(SourceContext<RulesEvent> ctx) throws IOException {
        while (running) {
            if (pollIntervalMillis > 0) {
                try {
                    RulesEvent event = new RulesEvent();
                    event.setRules(getCurrentRulesList());
                    event.setDataType(this.dataType);
                    event.setRetrievedAt(Instant.now());
                    ctx.collect(event);
                    Thread.sleep(pollIntervalMillis);
                } catch (InterruptedException e) {
                    running = false;
                }
            } else {
                // a non-positive poll interval means there is nothing to poll, so stop
                cancel();
            }
        }
    }

    public List<Rule> getCurrentRulesList() throws IOException {
        // call the API and get the rules
    }

    @Override
    public void cancel() {
        running = false;
    }
}
The BroadcastProcessFunction:
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {

    // declaration omitted in the original post, but LOGGER is used below
    private static final Logger LOGGER = LoggerFactory.getLogger(FanOutRuleMapper.class);

    protected final String RULES_EVENT_NAME = "rulesEvent";

    protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
            "RulesBroadcastState",
            BasicTypeInfo.STRING_TYPE_INFO,
            TypeInformation.of(new TypeHint<RulesEvent>() {
            }));

    @Override
    public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
        ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
        LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
    }

    // omitted rules matching logic
}
public class FanOutLogsRuleMapper extends FanOutRuleMapper {

    public FanOutLogsRuleMapper() {
        super();
    }

    @Override
    public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
        RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
        ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
        for (Rule rule : rulesEvent.getRules()) {
            boolean match = false;
            // omitted rules matching logic
            if (match) {
                for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
                    out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
                }
            }
        }
    }
}
Maybe you can give the complete code of the FanOutLogsRuleMapper class; currently the match variable is always false.
It runs with processing time and uses a broadcast state.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.ProcessingTime);

BroadcastStream<List<TableOperations>> broadcastOperationsState = env
        .addSource(new LoadCassandraOperations(10000L, cassandraHost, cassandraPort)).broadcast(descriptor);

SingleOutputStreamOperator<InternalVariableValue> stream =
        env.addSource(new SourceMillisInternalVariableValue(5000L));

SingleOutputStreamOperator<InternalVariableOperation> streamProcessed =
        stream
                .keyBy(InternalVariableValue::getUuid)
                .connect(broadcastOperationsState)
                .process(new AddOperationInfo());

streamProcessed.print();
SourceMillisInternalVariableValue creates an event every 5 s. The events are stored in a static collection. The run method looks like this:
public class SourceMillisInternalVariableValue extends RichSourceFunction<InternalVariableValue> {

    private boolean running;
    long millis;

    public SourceMillisInternalVariableValue(long millis) {
        super();
        this.millis = millis;
    }

    @Override
    public void open(Configuration parameters) throws Exception {
        super.open(parameters);
        running = true;
    }

    @Override
    public void cancel() {
        running = false;
    }

    @Override
    public void run(SourceContext<InternalVariableValue> ctx) throws Exception {
        // initial wait
        Thread.sleep(1500);
        PojoVariableValues[] pojoData =
                new PojoVariableValues[]{
                        new PojoVariableValues("id1", "1"),
                        new PojoVariableValues("id2", "2"),
                        ....
                        ....
                        new PojoVariableValues("id21", "21")
                };
        int cont = 0;
        while (cont < pojoData.length) {
            System.out.println("Iteration " + cont + " " + pojoData.length);
            ctx.collect(generateVar(pojoData[0 + cont].getUUID(), pojoData[0 + cont].getValue()));
            ctx.collect(generateVar(pojoData[1 + cont].getUUID(), pojoData[1 + cont].getValue()));
            ctx.collect(generateVar(pojoData[2 + cont].getUUID(), pojoData[2 + cont].getValue()));
            cont = cont + 3;
            Thread.sleep(millis);
        }
    }

    private InternalVariableValue generateVar(String uuid, String value) {
        return InternalVariableValueMessage.InternalVariableValue.newBuilder()
                .setUuid(uuid)
                .setTimestamp(new Date().getTime()).setValue(value).setKeyspace("nest").build();
    }

    class PojoVariableValues {

        private String UUID;
        private String Value;

        public PojoVariableValues(String uUID, String value) {
            super();
            UUID = uUID;
            Value = value;
        }

        public String getUUID() {
            return UUID;
        }

        public void setUUID(String uUID) {
            UUID = uUID;
        }

        public String getValue() {
            return Value;
        }

        public void setValue(String value) {
            Value = value;
        }
    }
}
LoadCassandraOperations emits events every 10 seconds. It works fine.
When I run this code, SourceMillisInternalVariableValue stops in the first iteration, emitting only three events. If I comment out the process function, both sources work properly, but if I run the process, the source is cancelled.
I expect the source to emit all events (21 exactly), and all of them to be processed in the aggregate function. As it stands, the while loop in the source only completes one iteration.
Any idea?
Thank you, cheers.
EDIT:
Important: this code is for exploring the processing-time and broadcast features. I know that I'm not using best practices in the sources. Thanks.
EDIT 2:
The problem starts when I try to run the process function.
Solved!!
The problem was that I was trying to run it using a Testcontainer, where I couldn't watch any logs.
I ran it with a simple main method and could then see some code errors (like the ones pointed out in the comments, thanks!).
I tried to use EJB programmatic timers with IBM Liberty Profile 18.0.0.1. Here is my server.xml:
<feature>ejbLite-3.2</feature>
......
<ejbContainer>
<timerService nonPersistentMaxRetries="3" nonPersistentRetryInterval="10" />
</ejbContainer>
And here is my bare-bones code snippet:
@Stateless
public class BatchSubmissionTimer {

    private static final Logger LOGGER =
            Logger.getLogger(BatchSubmissionTimer.class.getName());

    @Resource
    TimerService timerService;

    private Date lastProgrammaticTimeout;

    public void setTimer(long intervalDuration) {
        LOGGER.info("Setting a programmatic timeout for "
                + intervalDuration + " milliseconds from now.");
        Timer timer = timerService.createTimer(intervalDuration,
                "Created new programmatic timer");
    }

    @Timeout
    public void programmaticTimeout(Timer timer) {
        this.setLastProgrammaticTimeout(new Date());
        LOGGER.info("Programmatic timeout occurred.");
    }

    public String getLastProgrammaticTimeout() {
        if (lastProgrammaticTimeout != null) {
            return lastProgrammaticTimeout.toString();
        } else {
            return "never";
        }
    }

    public void setLastProgrammaticTimeout(Date lastTimeout) {
        this.lastProgrammaticTimeout = lastTimeout;
    }
}
This is how my client invokes the timer:
BatchSubmissionTimer batchSubmissionTimer = new BatchSubmissionTimer();
batchSubmissionTimer.setTimer(5000);
However, I got a null pointer error on the injected TimerService; it wasn't injected successfully. Can anybody shed some light on this? Appreciate it!
In your example, you are instantiating your own instance of BatchSubmissionTimer rather than allowing the container to provide it as an EJB, so the container does not have a chance to inject a value for the annotated timerService field. There are several ways to access it as an EJB, including looking it up or injecting it, for example:
@EJB
BatchSubmissionTimer batchSubmissionTimer;
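Alternatively, a portable JNDI lookup along these lines should work (a sketch; the java:module namespace resolves within the same module, and the exact name depends on your deployment):

// Portable EJB 3.1 JNDI name for a bean in the same module.
BatchSubmissionTimer batchSubmissionTimer = (BatchSubmissionTimer)
        new InitialContext().lookup("java:module/BatchSubmissionTimer");
batchSubmissionTimer.setTimer(5000);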
I need to override the command timeout property specified in my application.properties file. Here is what I tried:
@Test
public void testTokenQueryTimeout() throws Exception {
    String propertyToSet = "hystrix.command.quickbaseTokenQueryCommand.execution.isolation.thread.timeoutInMilliseconds";
    String prop = "";
    try {
        prop = ConfigurationManager.getConfigInstance().getProperty(propertyToSet).toString();
        logger.info("\n\n\noriginal quickbaseTokenQueryCommand timeout =" + prop);
        System.setProperty(propertyToSet, "10");
        prop = ConfigurationManager.getConfigInstance().getProperty(propertyToSet).toString();
        logger.info("\n\n\nupdated quickbaseTokenQueryCommand timeout =" + prop);
        String response = accountValidation.isValidToken(token);
        logger.info(response);
        Assert.assertFalse(true);
    } catch (AccountValidationServiceException e) {
        Assert.assertTrue(Constants.ERRCODE_TOKEN_QUERY_TIMED_OUT.equals(e.getErrorCode()));
    } finally {
        ConfigurationManager.getConfigInstance().clearProperty(propertyToSet);
        System.clearProperty(propertyToSet);
        if (!GeneralUtil.isObjectEmpty(System.getProperty(propertyToSet))) {
            prop = System.getProperty(propertyToSet);
        }
        logger.info("Updated testTokenQueryTimeout timeout =" + prop);
    }
}
Notice the System.setProperty(propertyToSet, "10"). With this approach the test case passes, i.e. the property gets changed and the command times out, but another test case then fails due to this command timeout, even though I am clearing the property from System.
I also tried setting the property using ConfigurationManager.getConfigInstance().setProperty(propertyToSet, "10"); but in that case the change of property has no effect and the command does not time out.
Is there something I am missing here?
Please help.
Try using the ConcurrentCompositeConfiguration class
application.properties
hystrix.command.HelloWorldCommand.execution.isolation.thread.timeoutInMilliseconds=200
Command
public class HelloWorldCommand extends HystrixCommand<String> {

    public HelloWorldCommand() {
        super(HystrixCommandGroupKey.Factory.asKey("HelloWorldGroup"));
    }

    @Override
    protected String run() throws Exception {
        TimeUnit.MILLISECONDS.sleep(1100);
        return "Hello";
    }
}
Test
public class HelloWorldCommandTest {

    @Test
    public void commandConfigTest() {
        String propertyKey = "hystrix.command.HelloWorldCommand.execution.isolation.thread.timeoutInMilliseconds";
        ConcurrentCompositeConfiguration config = (ConcurrentCompositeConfiguration) ConfigurationManager.getConfigInstance();

        Integer originalTimeout = (Integer) config.getProperty(propertyKey);

        config.setOverrideProperty(propertyKey, 1200);
        String result = new HelloWorldCommand().execute();
        assertThat(result, is("Hello"));

        config.setOverrideProperty(propertyKey, originalTimeout);
        Integer timeoutValue = (Integer) config.getProperty(propertyKey);
        assertThat(timeoutValue, is(originalTimeout));
    }
}
I'm scratching my head over this:
Using an Interceptor to check a few SOAP headers, how can I abort the interceptor chain but still respond with an error to the user?
Throwing a Fault works regarding the output, but the request is still being processed and I'd rather not have all services check for some flag in the message context.
Aborting with "message.getInterceptorChain().abort();" really aborts all processing, but then there's also nothing returned to the client.
What's the right way to go?
public class HeadersInterceptor extends AbstractSoapInterceptor {

    public HeadersInterceptor() {
        super(Phase.PRE_LOGICAL);
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        Exchange exchange = message.getExchange();
        BindingOperationInfo bop = exchange.getBindingOperationInfo();
        Method action = ((MethodDispatcher) exchange.get(Service.class)
                .get(MethodDispatcher.class.getName())).getMethod(bop);
        if (action.isAnnotationPresent(NeedsHeaders.class)
                && !headersPresent(message)) {
            Fault fault = new Fault(new Exception("No headers Exception"));
            fault.setFaultCode(new QName("Client"));
            try {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().newDocument();
                Element detail = doc.createElementNS(Soap12.SOAP_NAMESPACE, "mynamespace");
                detail.setTextContent("Missing some headers...blah");
                fault.setDetail(detail);
            } catch (ParserConfigurationException e) {
            }
            // bad: message.getInterceptorChain().abort();
            throw fault;
        }
    }
}
Following the suggestion by Donal Fellows I'm adding an answer to my question.
CXF relies heavily on Spring's AOP, which can cause problems of all sorts; at least it did here. I'm providing the complete code for you. Using open source projects, I think it's only fair to provide my own few lines of code for anyone who might decide not to use WS-Security (I'm expecting my services to run on SSL only). I wrote most of it by browsing the CXF sources.
Please comment if you think there's a better approach.
/**
 * Checks the requested action for the AuthenticationRequired annotation and tries
 * to log in using the SOAP headers username/password.
 *
 * @author Alexander Hofbauer
 */
public class AuthInterceptor extends AbstractSoapInterceptor {

    public static final String KEY_USER = "UserAuth";

    @Resource
    UserService userService;

    public AuthInterceptor() {
        // process after unmarshalling, so that method and header info are there
        super(Phase.PRE_LOGICAL);
    }

    @Override
    public void handleMessage(SoapMessage message) throws Fault {
        Logger.getLogger(AuthInterceptor.class).trace("Intercepting service call");
        Exchange exchange = message.getExchange();
        BindingOperationInfo bop = exchange.getBindingOperationInfo();
        Method action = ((MethodDispatcher) exchange.get(Service.class)
                .get(MethodDispatcher.class.getName())).getMethod(bop);
        if (action.isAnnotationPresent(AuthenticationRequired.class)
                && !authenticate(message)) {
            Fault fault = new Fault(new Exception("Authentication failed"));
            fault.setFaultCode(new QName("Client"));
            try {
                Document doc = DocumentBuilderFactory.newInstance()
                        .newDocumentBuilder().newDocument();
                Element detail = doc.createElementNS(Soap12.SOAP_NAMESPACE, "test");
                detail.setTextContent("Failed to authenticate.\n" +
                        "Please make sure to send correct SOAP headers username and password");
                fault.setDetail(detail);
            } catch (ParserConfigurationException e) {
            }
            throw fault;
        }
    }

    private boolean authenticate(SoapMessage msg) {
        Element usernameNode = null;
        Element passwordNode = null;
        for (Header header : msg.getHeaders()) {
            if (header.getName().getLocalPart().equals("username")) {
                usernameNode = (Element) header.getObject();
            } else if (header.getName().getLocalPart().equals("password")) {
                passwordNode = (Element) header.getObject();
            }
        }
        if (usernameNode == null || passwordNode == null) {
            return false;
        }
        String username = usernameNode.getChildNodes().item(0).getNodeValue();
        String password = passwordNode.getChildNodes().item(0).getNodeValue();

        User user = null;
        try {
            user = userService.loginUser(username, password);
        } catch (BusinessException e) {
            return false;
        }
        if (user == null) {
            return false;
        }
        msg.put(KEY_USER, user);
        return true;
    }
}
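For reference, the interceptor still has to be attached to an endpoint. A minimal programmatic sketch, assuming a CXF-published JAX-WS endpoint (MyServiceImpl is illustrative):

// With CXF on the classpath, Endpoint.publish returns a CXF EndpointImpl,
// which implements InterceptorProvider and exposes the interceptor chains.
EndpointImpl endpoint = (EndpointImpl) Endpoint.publish("/myService", new MyServiceImpl());
endpoint.getInInterceptors().add(new AuthInterceptor());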
As mentioned above, here's the ExceptionHandler/-Logger. At first I wasn't able to use it in combination with JAX-RS (also via CXF; JAX-WS works fine now). I don't need JAX-RS anyway, so that problem is gone now.
@Aspect
public class ExceptionHandler {

    @Resource
    private Map<String, Boolean> registeredExceptions;

    /**
     * Everything in my project.
     */
    @Pointcut("within(org.myproject..*)")
    void inScope() {
    }

    /**
     * Every single method.
     */
    @Pointcut("execution(* *(..))")
    void anyOperation() {
    }

    /**
     * Log every Throwable.
     *
     * @param t
     */
    @AfterThrowing(pointcut = "inScope() && anyOperation()", throwing = "t")
    public void afterThrowing(Throwable t) {
        StackTraceElement[] trace = t.getStackTrace();
        Logger logger = Logger.getLogger(ExceptionHandler.class);
        String info;
        if (trace.length > 0) {
            info = trace[0].getClassName() + ":" + trace[0].getLineNumber()
                    + " threw " + t.getClass().getName();
        } else {
            info = "Caught throwable with empty stack trace";
        }
        logger.warn(info + "\n" + t.getMessage());
        logger.debug("Stacktrace", t);
    }

    /**
     * Handles all exceptions according to the config file.
     * Unknown exceptions are always thrown, registered exceptions only if they
     * are set to true in the config file.
     *
     * @param pjp
     * @throws Throwable
     */
    @Around("inScope() && anyOperation()")
    public Object handleThrowing(ProceedingJoinPoint pjp) throws Throwable {
        try {
            Object ret = pjp.proceed();
            return ret;
        } catch (Throwable t) {
            // We don't care about unchecked Exceptions
            if (!(t instanceof Exception)) {
                return null;
            }
            Boolean throwIt = registeredExceptions.get(t.getClass().getName());
            if (throwIt == null || throwIt) {
                throw t;
            }
        }
        return null;
    }
}
Short answer: the right way to abort in a client-side interceptor, before the request is sent, is to create the Fault with a wrapped exception:
throw new Fault(
        new ClientException( // or any non-Fault exception; otherwise it blocks in
                             // AbstractClient.checkClientException() (waiting for the missing response code)
                "Error before sending the request"), Fault.FAULT_CODE_CLIENT);
Thanks to the post contributors for helping figure it out.
CXF allows you to specify that your interceptor goes before or after certain interceptors. If your interceptor is processing on the inbound side (which, based on your description, is the case), there is an interceptor called CheckFaultInterceptor. You can configure your interceptor to go before it:
public HeadersInterceptor() {
    super(Phase.PRE_LOGICAL);
    getBefore().add(CheckFaultInterceptor.class.getName());
}
The CheckFaultInterceptor in theory checks whether a fault has occurred. If one has, it aborts the interceptor chain and invokes the fault handler chain.
I have not yet been able to test this (it is based entirely on the available documentation I've come across while trying to solve a related problem).