How to implement a REST client with TomEE and CXF using MultivaluedMap?

I'm not able to find the right dependency to implement a REST client with TomEE and CXF.
My module has this dependency:
<dependency>
    <groupId>org.apache.openejb</groupId>
    <artifactId>tomee-jaxrs</artifactId>
    <version>1.7.1</version>
    <scope>provided</scope>
</dependency>
The initial client implementation is simple. It has to perform a POST request and submit a MultivaluedMap.
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import org.apache.cxf.jaxrs.client.WebClient;
import org.apache.cxf.jaxrs.ext.form.Form;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RestClient<T> {

    private static final Logger LOG = LoggerFactory.getLogger(RestClient.class);

    private final WebClient client;
    private final Class<T> type;

    public RestClient(Class<T> aType, String aBaseUrl, String aPath) {
        this.client = WebClient.create(aBaseUrl);
        this.client.path(aPath);
        this.client.accept(MediaType.APPLICATION_JSON);
        this.type = aType;
    }

    public T post(MultivaluedMap<String, String> params) {
        LOG.debug("sending POST request to: {}", this.client.getCurrentURI());
        Form theForm = new Form(params);
        // Pass this.type, not this.type.getClass(): the latter yields a Class<Class>,
        // so the response would never be unmarshalled to T.
        return this.client.post(theForm, this.type);
    }
}
The problem is that I cannot find an implementation of javax.ws.rs.core.MultivaluedMap, so I cannot call my method. :(
I only see the interface. Isn't CXF completely provided by my pom.xml, and doesn't it have an implementation of this interface? Which dependency should I use so that CXF works properly with TomEE?
I did not find any example on the web.

With CXF, the MultivaluedMap implementation is org.apache.cxf.jaxrs.impl.MetadataMap.
In newer, JAX-RS 2.0-compliant versions there is javax.ws.rs.core.MultivaluedHashMap, but with JAX-RS 1.x the class implementing the interface is implementation-specific.
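For illustration, a minimal sketch of building the parameter map with MetadataMap (the keys and values here are made up):

```java
import javax.ws.rs.core.MultivaluedMap;
import org.apache.cxf.jaxrs.impl.MetadataMap;

public class MetadataMapDemo {
    public static void main(String[] args) {
        // MetadataMap is CXF's MultivaluedMap implementation for JAX-RS 1.x
        MultivaluedMap<String, String> params = new MetadataMap<String, String>();
        params.add("username", "alice");
        params.add("role", "admin");
        params.add("role", "editor"); // a key may hold several values

        System.out.println(params.getFirst("role")); // admin
        System.out.println(params.get("role"));      // [admin, editor]
    }
}
```

A map built this way can then be handed straight to the `post` method above, since it is typed against the MultivaluedMap interface.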

Related

Using Flink connector within Flink StateFun

I've managed to plug in the GCP PubSub dependency into the Flink Statefun JAR and then build the Docker image.
I've added the below to the pom.xml.
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-gcp-pubsub</artifactId>
    <version>1.16.0</version>
    <scope>test</scope>
</dependency>
It's not too clear how I now specify my PubSub ingress and egress in the module.yaml that we use with the StateFun image.
https://nightlies.apache.org/flink/flink-statefun-docs-master/docs/modules/overview/
For example, for Kafka you use:
kind: io.statefun.kafka.v1/egress
spec:
  id: com.example/my-egress
  address: kafka-broker:9092
  deliverySemantic:
    type: exactly-once
    transactionTimeout: 15min
I can see the official connectors have a Kind constant in the Java code that you use to reference them within your module.yaml, but I can't see in the docs how to reference the Flink connectors you plug in yourself to the StateFun image.
GCP PubSub is not officially supported as a standard StateFun I/O component; only Kafka and Kinesis are for now. However, you can come up with your own custom ingress/egress connector relatively easily. Unfortunately, you won't be able to provide a new yaml-based config item, as the module configurators for Kafka and Kinesis seem to be hard-coded in the runtime. You'll have to do your configuration in your code:
Looking at the source/ingress example:
public class ModuleWithSourceSpec implements StatefulFunctionModule {
    @Override
    public void configure(Map<String, String> globalConfiguration, Binder binder) {
        IngressIdentifier<TypedValue> id =
                new IngressIdentifier<>(TypedValue.class, "com.example", "custom-source");
        IngressSpec<TypedValue> spec = new SourceFunctionSpec<>(id, new FlinkSource<>());
        binder.bindIngress(spec);
        binder.bindIngressRouter(id, new CustomRouter());
    }
}
Your goal is going to be to provide the new FlinkSource<>(), which is an org.apache.flink.streaming.api.functions.source.SourceFunction.
You could declare it thus:
SourceFunction source =
        PubSubSource.newBuilder()
                .withDeserializationSchema(new IntegerSerializer())
                .withProjectName(projectName)
                .withSubscriptionName(subscriptionName)
                .withMessageRateLimit(1)
                .build();
You'll also have to come up with a new CustomRouter(), to determine which function instance should handle an event initially. You can take inspiration from here:
public static class GreetingsStateBootstrapDataRouter implements Router<Tuple2<String, Integer>> {
    @Override
    public void route(
            Tuple2<String, Integer> message, Downstream<Tuple2<String, Integer>> downstream) {
        downstream.forward(new Address(GREETER_FUNCTION_TYPE, message.f0), message);
    }
}
The same goes for the sink/egress, except there is no router to provide:
public class ModuleWithSinkSpec implements StatefulFunctionModule {
    @Override
    public void configure(Map<String, String> globalConfiguration, Binder binder) {
        EgressIdentifier<TypedValue> id =
                new EgressIdentifier<>("com.example", "custom-sink", TypedValue.class);
        EgressSpec<TypedValue> spec = new SinkFunctionSpec<>(id, new FlinkSink<>());
        binder.bindEgress(spec);
    }
}
With new FlinkSink<>() replaced by this sink:
SinkFunction sink =
        PubSubSink.newBuilder()
                .withSerializationSchema(new IntegerSerializer())
                .withProjectName(projectName)
                .withTopicName(outputTopicName)
                .build();
That you would use like so, in the egress case:
public class GreeterFn implements StatefulFunction {

    static final TypeName TYPE = TypeName.typeNameFromString("com.example.fns/greeter");
    static final TypeName CUSTOM_EGRESS = TypeName.typeNameFromString("com.example/custom-sink");
    static final ValueSpec<Integer> SEEN = ValueSpec.named("seen").withIntType();

    @Override
    public CompletableFuture<Void> apply(Context context, Message message) {
        if (!message.is(User.TYPE)) {
            throw new IllegalStateException("Unknown type");
        }
        User user = message.as(User.TYPE);
        String name = user.getName();
        var storage = context.storage();
        var seen = storage.get(SEEN).orElse(0);
        storage.set(SEEN, seen + 1);
        context.send(
                EgressMessageBuilder.forEgress(CUSTOM_EGRESS)
                        .withUtf8Value("Hello " + name + " for the " + seen + "th time!")
                        .build());
        return context.done();
    }
}
You'll also have to make your modules known to the runtime with a file in the META-INF/services directory of your jar (named after the module SPI interface) that lists them, like so:
com.example.your.path.ModuleWithSourceSpec
com.example.your.path.ModuleWithSinkSpec
Alternatively, if you prefer annotations, you can use Google AutoService.
I hope it helps!

Using Project Reactor with Apache Camel

I'd like to know if it is possible to use Project Reactor with Apache Camel, so applications can be fully reactive with non-blocking IO. I'd also like to know how the Project Reactor support works when integrating other Apache Camel components.
Can I, for example, read from S3 reactively (which would require the async S3 client behind the scenes)? Or will I block when reading from S3 and then just create a Flux out of what has been returned?
Where reactiveness is needed, you should use the relevant Spring and Reactor libraries. Below is pseudo-Camel code; you can also make DB calls in Camel beans or processors, etc.
@RestController
@RequestMapping(value = "/api/books")
@RequiredArgsConstructor
public class HomeController {

    private final BookRepository bookRepository;
    private final ProducerTemplate template;

    @GetMapping("")
    public Flux<Book> getHome() {
        List<Book> books = bookRepository.findAll();
        X ret = template.requestBody("direct:something", books, X.class);
        // build the response from ret as needed; here we just stream the books
        return Flux.fromIterable(books);
    }
}
@Component
public class SomeRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:something")
                .process(e -> {
                    List<Book> books = e.getIn().getBody(List.class);
                    // some logic
                    e.getIn().setBody(new X());
                });
    }
}
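Note that `requestBody` blocks the caller. To stay non-blocking end to end, one option (a sketch assuming Camel 3, where ProducerTemplate.asyncRequestBody returns a CompletableFuture) is to bridge the Camel call into a Mono; "direct:something" is the route from the snippet above:

```java
import java.util.concurrent.CompletableFuture;
import org.apache.camel.ProducerTemplate;
import reactor.core.publisher.Mono;

public class ReactiveCamelBridge {

    private final ProducerTemplate template;

    public ReactiveCamelBridge(ProducerTemplate template) {
        this.template = template;
    }

    // Wraps the asynchronous Camel request in a Mono, so subscribers are
    // notified on completion and the calling thread never blocks.
    public Mono<Object> send(Object body) {
        CompletableFuture<Object> future = template.asyncRequestBody("direct:something", body);
        return Mono.fromFuture(future);
    }
}
```

Camel also ships a camel-reactive-streams component that can expose routes as Reactive Streams publishers, which Reactor can subscribe to directly.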

Can I use mongock with Spring Data Repositories?

I tried to inject a repository into a changelog with the @Autowired annotation, and it doesn't get injected.
The config uses the Spring application context:
@Bean
public SpringBootMongock mongock(ApplicationContext springContext, MongoClient mongoClient) {
    return new SpringBootMongockBuilder(mongoClient, "yourDbName", "com.package.to.be.scanned.for.changesets")
            .setApplicationContext(springContext)
            .setLockQuickConfig()
            .build();
}
And the changelog
@ChangeLog(order = "001")
public class MyMigration {

    @Autowired
    private MyRepository repo;

    @ChangeSet(order = "001", id = "someChangeId", author = "testAuthor")
    public void importantWorkToDo(DB db) {
        repo.findAll(); // here: null pointer
    }
}
Firstly, notice that if you are using repositories in your changelogs, it's bad practice to use them for writes, as writes won't be covered by the lock mechanism (this feature is coming soon); use them only for reads.
To inject your repository (or any other dependency), you simply need to declare it in your changeSet method signature, like this:
@ChangeLog(order = "001")
public class MyMigration {

    @ChangeSet(order = "001", id = "someChangeId", author = "testAuthor")
    public void importantWorkToDo(MongoTemplate template, MyRepository repo) {
        repo.findAll(); // this should work
    }
}
Notice that you should use the latest version (at this moment 3.2.4); the DB class is not supported anymore. Please use MongoDatabase or MongoTemplate (preferred).
Documentation to Mongock
We have recently released version 4.0.7.alpha, which among other things allows you to use Spring repositories (and any other custom bean you wish) in your changeSets with no problem. You can insert, update, delete and read. It will be safely covered by the lock.
The only restriction is that it needs to be an interface, which should be the common case for Spring repositories.
Please take a look at this example.

JBoss 6 EAP - override HTTP response status code in a SOAP service that sends empty response back from 202 to 200

We have a SOAP web service that we are migrating from JBoss EAP 5.1 to 6.4.7, and one of the web services returns absolutely nothing but a 200 (in JBoss 5). After migrating to 6 it still works and returns nothing, but it returns a 202 instead, and that is going to break clients. We have no control over the clients. I tried a SOAPHandler at the close method, but it does nothing; it is not even called. My guess is that since there is no SOAP message going back, nothing triggers the handler.
I also tried to access the context directly in the web method and modify the status, but it did nothing:
MessageContext ctx = wsContext.getMessageContext();
HttpServletResponse response = (HttpServletResponse) ctx.get(MessageContext.SERVLET_RESPONSE);
response.setStatus(HttpServletResponse.SC_OK);
I couldn't find anything in the manual.
Any direction is very much appreciated.
Here is how the port and its implementation look:
@WebService(name = "ForecastServicePortType", targetNamespace = "http://www.company.com/forecastservice/wsdl")
@SOAPBinding(parameterStyle = SOAPBinding.ParameterStyle.BARE)
@XmlSeeAlso({
    ObjectFactory.class
})
public interface ForecastServicePortType {

    /**
     * @param parameters
     * @throws RemoteException
     */
    @WebMethod(action = "http://www.company.com/forecast/sendForecast")
    public void sendForecast(
        @WebParam(name = "SendForecast", targetNamespace = "http://www.company.com/forecastservice", partName = "parameters")
        SendForecastType parameters) throws RemoteException;
}
@WebService(name = "ForecastServicePortTypeImpl", serviceName = "ForecastServicePortType", endpointInterface = "com.company.forecastservice.wsdl.ForecastServicePortType", wsdlLocation = "/WEB-INF/wsdl/ForecastServicePortType.wsdl")
@HandlerChain(file = "/META-INF/handlers.xml")
public class ForecastServicePortTypeImpl implements ForecastServicePortType {
    ...
}
In case anybody finds this useful, here is the solution.
Apache CXF by default uses async requests, and even if the @Oneway annotation is missing it still behaves as if it were there.
So in order to disable that, an interceptor needs to be created like the one below:
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.cxf.binding.soap.SoapMessage;
import org.apache.cxf.binding.soap.interceptor.AbstractSoapInterceptor;
import org.apache.cxf.interceptor.Fault;
import org.apache.cxf.phase.Phase;
import java.util.Arrays;

public class DisableOneWayInterceptor extends AbstractSoapInterceptor {

    private static final Log LOG = LogFactory.getLog(DisableOneWayInterceptor.class);

    public DisableOneWayInterceptor() {
        super(Phase.PRE_LOGICAL);
        addBefore(Arrays.asList(org.apache.cxf.interceptor.OneWayProcessorInterceptor.class.getName()));
    }

    @Override
    public void handleMessage(SoapMessage soapMessage) throws Fault {
        if (LOG.isDebugEnabled())
            LOG.debug("OneWay behavior disabled");
        soapMessage.getExchange().setOneWay(false);
    }
}
And it is referenced on the web service class (annotated with @WebService) as below:
@org.apache.cxf.interceptor.InInterceptors(interceptors = {"com.mycompany.interceptors.DisableOneWayInterceptor"})
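Put together, applying that annotation to the endpoint implementation from the question might look as follows (a sketch; the interceptor package com.mycompany.interceptors is the hypothetical one from the snippet above):

```java
import javax.jws.HandlerChain;
import javax.jws.WebService;
import org.apache.cxf.interceptor.InInterceptors;

// CXF instantiates the interceptor from its fully qualified class name
@InInterceptors(interceptors = {"com.mycompany.interceptors.DisableOneWayInterceptor"})
@WebService(name = "ForecastServicePortTypeImpl",
        serviceName = "ForecastServicePortType",
        endpointInterface = "com.company.forecastservice.wsdl.ForecastServicePortType",
        wsdlLocation = "/WEB-INF/wsdl/ForecastServicePortType.wsdl")
@HandlerChain(file = "/META-INF/handlers.xml")
public class ForecastServicePortTypeImpl implements ForecastServicePortType {
    // ... web methods unchanged
}
```

This is wiring only; the interceptor itself does the actual work of clearing the one-way flag on the exchange.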

How do you set WriteConcern using spring mongodb to ACKNOWLEDGED?

I want to set the WriteConcern in Spring MongoDB to ACKNOWLEDGED. Also, I'm wondering if this is the default value? I am using spring.data.mongodb.uri in my application.properties, so I don't have a Mongo configuration class.
From the Spring Data docs:
9.4.3. WriteConcern
You can set the com.mongodb.WriteConcern property that the MongoTemplate will use for write operations if it has not yet been specified via the driver at a higher level such as com.mongodb.Mongo. If MongoTemplate’s WriteConcern property is not set it will default to the one set in the MongoDB driver’s DB or Collection setting.
9.4.4. WriteConcernResolver
For more advanced cases where you want to set different WriteConcern values on a per-operation basis (for remove, update, insert and save operations), a strategy interface called WriteConcernResolver can be configured on MongoTemplate. Since MongoTemplate is used to persist POJOs, the WriteConcernResolver lets you create a policy that can map a specific POJO class to a WriteConcern value. The WriteConcernResolver interface is shown below.
public interface WriteConcernResolver {
    WriteConcern resolve(MongoAction action);
}
Find a direct implementation here
You can do this with a Spring bean:
@Configuration
public class MongoConfiguration {

    @Bean
    public WriteConcernResolver writeConcernResolver() {
        return action -> {
            System.out.println("Using Write Concern of Acknowledged");
            return WriteConcern.ACKNOWLEDGED;
        };
    }
}
It is not sufficient to only provide a WriteConcernResolver via bean configuration; the MongoTemplate will not use it. To make this happen, you have to create a class like the following, with two options to set the WriteConcern:
import com.mongodb.WriteConcern;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.WriteResultChecking;
import org.springframework.data.mongodb.core.convert.MongoConverter;

@Configuration
public class MongoConfiguration {

    private final Logger logger = LoggerFactory.getLogger(MongoConfiguration.class);

    public MongoConfiguration() {
        logger.info("MongoConfiguration applied ...");
    }

    @Bean
    MongoTemplate mongoTemplate(MongoDatabaseFactory mongoDbFactory, MongoConverter converter) {
        MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory, converter);
        // Version 1: set statically
        logger.debug("Setting WriteConcern statically to ACKNOWLEDGED");
        mongoTemplate.setWriteConcern(WriteConcern.ACKNOWLEDGED);
        // Version 2: provide a WriteConcernResolver, which is called for _every_ MongoAction,
        // which might degrade performance slightly (not tested)
        // but is very flexible for determining the value
        mongoTemplate.setWriteConcernResolver(action -> {
            logger.debug("Action {} called on collection {} for document {} with WriteConcern.ACKNOWLEDGED. Default WriteConcern was {}",
                    action.getMongoActionOperation(), action.getCollectionName(), action.getDocument(), action.getDefaultWriteConcern());
            return WriteConcern.ACKNOWLEDGED;
        });
        mongoTemplate.setWriteResultChecking(WriteResultChecking.EXCEPTION);
        return mongoTemplate;
    }
}
You can also set the write concern in the XML configuration (if applicable); note that SAFE is an older name corresponding to ACKNOWLEDGED in the MongoDB Java driver:
<mongo:db-factory ... write-concern="SAFE"/>
