When I receive an error in onErrorResponse of an Android Volley request, I want to retry the request. How can I achieve this?
You can create a RetryPolicy to change the default retry behavior; you only need to specify the timeout in milliseconds, the retry count, and the backoff multiplier:
public class YourRequest extends StringRequest {
    public YourRequest(String url, Response.Listener<String> listener,
            Response.ErrorListener errorListener) {
        super(url, listener, errorListener);
        setRetryPolicy(new DefaultRetryPolicy(DefaultRetryPolicy.DEFAULT_TIMEOUT_MS,
                DefaultRetryPolicy.DEFAULT_MAX_RETRIES, DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
    }
}
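If the defaults don't fit, you can pass explicit values instead; a minimal sketch (the 10-second timeout, 3 retries and 1.0 backoff multiplier here are just illustrative):
// illustrative values: 10 s initial timeout, 3 retries, backoff multiplier of 1 (no growth)
setRetryPolicy(new DefaultRetryPolicy(10000, 3, 1.0f));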
Another way is to examine the VolleyError and re-execute the same request when it is a TimeoutError instance:
public static void executeRequest() {
    RequestQueue.add(new YourRequest("http://your.url.com/", new Response.Listener<String>() {
        @Override
        public void onResponse(String response) {
        }
    }, new Response.ErrorListener() {
        @Override
        public void onErrorResponse(VolleyError error) {
            if (error instanceof TimeoutError) {
                // note: may cause a recursive invocation if the request always times out.
                executeRequest();
            }
        }
    }));
}
You may be wondering at this point: "Does Volley offer any retry callback methods?" The answer is no. However, there is a project called Netroid, which is based on Volley and addresses exactly that. With it you get a retry callback, so you can see how many retries have occurred and how long the request has been running. The code looks like this:
final String REQUESTS_TAG = "Request-Demo";
String url = "http://facebook.com/";
JsonObjectRequest request = new JsonObjectRequest(url, null, new Listener<JSONObject>() {
    long startTimeMs;
    int retryCount;

    @Override
    public void onPreExecute() {
        startTimeMs = SystemClock.elapsedRealtime();
    }

    @Override
    public void onFinish() {
        RequestQueue.add(request);
        NetroidLog.e(REQUESTS_TAG);
    }

    @Override
    public void onRetry() {
        long executedTime = SystemClock.elapsedRealtime() - startTimeMs;
        if (++retryCount > 5 || executedTime > 30000) {
            NetroidLog.e("retryCount : " + retryCount + " executedTime : " + executedTime);
            mQueue.cancelAll(REQUESTS_TAG);
        } else {
            NetroidLog.e(REQUESTS_TAG);
        }
    }
});
request.setRetryPolicy(new DefaultRetryPolicy(5000, 20, DefaultRetryPolicy.DEFAULT_BACKOFF_MULT));
request.setTag(REQUESTS_TAG);
RequestQueue.add(request);
Netroid also has many other handy and powerful features; hope that helps :).
You can also use a counter variable to retry a specific number of times, instead of retrying recursively without bound:
static int count = 10; // so it will try ten times

public void userLogin(final View view)
{
    final RequestQueue requestQueue = Volley.newRequestQueue(getApplicationContext());
    String url = "http://192.168.43.107/mobodb/register.php";
    StringRequest stringRequest = new StringRequest(Request.Method.POST, url, new Response.Listener<String>()
    {
        @Override
        public void onResponse(String response) {
            Toast.makeText(getApplicationContext(), "Updated", Toast.LENGTH_LONG).show();
        }
    }, new Response.ErrorListener()
    {
        @Override
        public void onErrorResponse(VolleyError error) {
            count = count - 1;
            Toast.makeText(getApplicationContext(), "Retries left: " + count, Toast.LENGTH_LONG).show();
            if (count > 0) {
                // note: may cause a recursive invocation if the request always fails.
                userLogin(view);
            }
            else
            {
                Toast.makeText(getApplicationContext(), "Request failed, please check the network connection. Error: " + error.getMessage(), Toast.LENGTH_LONG).show();
            }
        }
    })
    {
        @Override
        protected Map<String, String> getParams() throws AuthFailureError {
            Map<String, String> parameter = new HashMap<String, String>();
            parameter.put("name", login_name);
            parameter.put("user_pass", login_pass);
            return parameter;
        }
    };
    // set the retry policy before adding the request to the queue
    stringRequest.setRetryPolicy(new DefaultRetryPolicy(20 * 1000, 10, 1.0f));
    requestQueue.add(stringRequest);
}
You can also check the response returned from PHP and handle it in your Java class:
@Override
public void onResponse(String response) {
    if (response.contains("no record found for"))
        Toast.makeText(getApplicationContext(), response, Toast.LENGTH_LONG).show();
    else
    {
        Toast.makeText(getApplicationContext(), "Updated, number of rows: " + response, Toast.LENGTH_LONG).show();
    }
}
Your PHP code would be:
if ($res) {
    $resp = mysql_affected_rows();
    if ($resp == 0)
    {
        $resp = "no record found for" . $_POST['name'];
    }
    if ($resp == 1 or $resp > 1)
    {
        $resp = mysql_affected_rows();
    }
    else $resp = "error is " . mysql_error();
    echo $resp; // send the result back to the app (assumed; the original snippet ends before this line)
}
I'm fairly new to Flink and would be grateful for any advice with this issue.
I wrote a job that receives some input events and compares them with some rules before forwarding them on to Kafka topics based on whatever rules match. I implemented this using a flatMap and found it worked well, with one downside: I was loading the rules just once, during application startup, by calling an API from my main() method, and passing the result of this API call into the flatMap function. This worked, but it means that if there are any changes to the rules I have to restart the application, so I wanted to improve it.
I found this page in the documentation, which seems to be an appropriate solution to the problem. I wrote a custom source to poll my Rules API every few minutes, and then used a BroadcastProcessFunction, with the rules added to the broadcast state in processBroadcastElement and the events processed by processElement.
The solution is working, but with one problem. My first approach using a flatMap would process the events almost instantly. Now that I have changed to a BroadcastProcessFunction, each event takes 60 seconds to process, and it seems to be more or less exactly 60 seconds every time with almost no variation. I made no changes to the rule matching logic itself.
I've had a look through the documentation and I can't seem to find a reason for this, so I'd appreciate it if anyone more experienced in Flink could offer a suggestion as to what might cause this delay.
The job:
public static void main(String[] args) throws Exception {
// set up the streaming execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setStreamTimeCharacteristic(TimeCharacteristic.IngestionTime);
// read the input from Kafka
DataStream<KafkaEvent> documentStream = env.addSource(
createKafkaSource(getSourceTopic(), getSourceProperties())).name("Kafka[" + getSourceTopic() + "]");
// Configure the Rules data stream
DataStream<RulesEvent> ruleStream = env.addSource(
new RulesApiHttpSource(
getApiRulesSubdomain(),
getApiBearerToken(),
DataType.DataTypeName.LOGS,
getRulesApiCacheDuration()) // Currently set to 120000
);
MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
// broadcast the rules and create the broadcast state
BroadcastStream<RulesEvent> ruleBroadcastStream = ruleStream
.broadcast(ruleStateDescriptor);
// extract the resources and attributes
documentStream
.connect(ruleBroadcastStream)
.process(new FanOutLogsRuleMapper()).name("FanOut Stream")
.addSink(createKafkaSink(getDestinationProperties()))
.name("FanOut Sink");
// run the job
env.execute(FanOutJob.class.getName());
}
The custom HTTP source which gets the rules:
public class RulesApiHttpSource extends RichSourceFunction<RulesEvent> {
private static final Logger LOGGER = LoggerFactory.getLogger(RulesApiHttpSource.class);
private final long pollIntervalMillis;
private final String endpoint;
private final String bearerToken;
private final DataType.DataTypeName dataType;
private final RulesApiCaller caller;
private volatile boolean running = true;
public RulesApiHttpSource(String endpoint, String bearerToken, DataType.DataTypeName dataType, long pollIntervalMillis) {
this.pollIntervalMillis = pollIntervalMillis;
this.endpoint = endpoint;
this.bearerToken = bearerToken;
this.dataType = dataType;
this.caller = new RulesApiCaller(this.endpoint, this.bearerToken);
}
@Override
public void open(Configuration configuration) throws Exception {
// do nothing
}
@Override
public void close() throws IOException {
// do nothing
}
@Override
public void run(SourceContext<RulesEvent> ctx) throws IOException {
while (running) {
if (pollIntervalMillis > 0) {
try {
RulesEvent event = new RulesEvent();
event.setRules(getCurrentRulesList());
event.setDataType(this.dataType);
event.setRetrievedAt(Instant.now());
ctx.collect(event);
Thread.sleep(pollIntervalMillis);
} catch (InterruptedException e) {
running = false;
}
} else if (pollIntervalMillis <= 0) {
cancel();
}
}
}
public List<Rule> getCurrentRulesList() throws IOException {
// call the API and get the rules
}
@Override
public void cancel() {
running = false;
}
}
The BroadcastProcessFunction
public abstract class FanOutRuleMapper extends BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent> {
protected final String RULES_EVENT_NAME = "rulesEvent";
protected final MapStateDescriptor<String, RulesEvent> ruleStateDescriptor = new MapStateDescriptor<>(
"RulesBroadcastState",
BasicTypeInfo.STRING_TYPE_INFO,
TypeInformation.of(new TypeHint<RulesEvent>() {
}));
@Override
public void processBroadcastElement(RulesEvent rulesEvent, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.Context ctx, Collector<KafkaEvent> out) throws Exception {
ctx.getBroadcastState(ruleStateDescriptor).put(RULES_EVENT_NAME, rulesEvent);
LOGGER.debug("Added to broadcast state {}", rulesEvent.toString());
}
// omitted rules matching logic
}
public class FanOutLogsRuleMapper extends FanOutRuleMapper {
public FanOutLogsRuleMapper() {
super();
}
@Override
public void processElement(KafkaEvent in, BroadcastProcessFunction<KafkaEvent, RulesEvent, KafkaEvent>.ReadOnlyContext ctx, Collector<KafkaEvent> out) throws Exception {
RulesEvent rulesEvent = ctx.getBroadcastState(ruleStateDescriptor).get(RULES_EVENT_NAME);
ExportLogsServiceRequest otlpLog = extractOtlpMessageFromJsonPayload(in);
for (Rule rule : rulesEvent.getRules()) {
boolean match = false;
// omitted rules matching logic
if (match) {
for (RuleDestination ruleDestination : rule.getRulesDestinations()) {
out.collect(fillInTheEvent(in, rule, ruleDestination, otlpLog));
}
}
}
}
}
Maybe you can post the complete code of the FanOutLogsRuleMapper class; currently the match variable is always false.
When I need to work with I/O (querying a DB, calling a third-party API, ...), I can use RichAsyncFunction. But I need to interact with Google Sheets via the Google Sheets API: https://developers.google.com/sheets/api/quickstart/java. This API is synchronous. I wrote the code snippet below:
public class SendGGSheetFunction extends RichAsyncFunction<Obj, String> {
@Override
public void asyncInvoke(Obj message, final ResultFuture<String> resultFuture) {
CompletableFuture.supplyAsync(() -> {
syncSendToGGSheet(message);
return "";
}).thenAccept((String result) -> {
resultFuture.complete(Collections.singleton(result));
});
}
}
But I found that messages are sent to Google Sheets very slowly; they seem to be sent synchronously.
Most of the code executed by users in AsyncIO is synchronous originally. You just need to ensure it's actually executed in a separate thread, most commonly with a (statically shared) ExecutorService.
private class SendGGSheetFunction extends RichAsyncFunction<Obj, String> {
private transient ExecutorService executorService;
@Override
public void open(Configuration parameters) throws Exception {
super.open(parameters);
executorService = Executors.newFixedThreadPool(30);
}
@Override
public void close() throws Exception {
super.close();
executorService.shutdownNow();
}
@Override
public void asyncInvoke(final Obj message, final ResultFuture<String> resultFuture) {
executorService.submit(() -> {
try {
resultFuture.complete(syncSendToGGSheet(message));
} catch (SQLException e) {
resultFuture.completeExceptionally(e);
}
});
}
}
Here are some considerations on how to tune AsyncIO to increase throughput: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/Flink-Async-IO-operator-tuning-micro-benchmarks-td35858.html
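For completeness, a minimal sketch of how such a function is wired into the job with AsyncDataStream (the stream variable messages, the 30-second timeout and the capacity of 100 are illustrative assumptions; capacity is one of the main tuning knobs discussed in that thread):
// sketch only: assumes messages is a DataStream<Obj>; timeout and capacity values are illustrative
DataStream<String> results = AsyncDataStream.unorderedWait(
        messages,
        new SendGGSheetFunction(),
        30, TimeUnit.SECONDS, // timeout per element before the async request is considered failed
        100);                 // capacity: maximum number of concurrent in-flight requests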
I've tested the Codename One Geofence example, which sets a local notification. When the app is closed (destroyed), it still shows the notification. But I want to get the location through GPS and run a ConnectionRequest to save it on the server. I replaced the LocalNotification code with ConnectionRequest code in the snippet below, but it does not work. What should I do to run the ConnectionRequest when the app is closed (not just minimized, but destroyed), so that once the user installs and closes (destroys) the app, it keeps sending his/her location data to the server until the app is uninstalled?
Geofence gf = new Geofence("test", loc, 100, 100000);
LocationManager.getLocationManager().addGeoFencing(GeofenceListenerImpl.class, gf);
Geofence with localNotification:
public class GeofenceListenerImpl implements GeofenceListener {
@Override
public void onExit(String id) {
}
@Override
public void onEntered(String id) {
if(Display.getInstance().isMinimized()) {
Display.getInstance().callSerially(() -> {
Dialog.show("Welcome", "Thanks for arriving", "OK", null);
});
} else {
LocalNotification ln = new LocalNotification();
ln.setId("LnMessage");
ln.setAlertTitle("Welcome");
ln.setAlertBody("Thanks for arriving!");
Display.getInstance().scheduleLocalNotification(ln, 10, LocalNotification.REPEAT_NONE);
}
}
}
Why does the following not work? (It only works when the app is running or minimized, but not when it is destroyed.)
public class GeofenceListenerImpl implements GeofenceListener {
@Override
public void onExit(String id) {
System.out.println("geofence onExit");
}
@Override
public void onEntered(String id) {
if(Display.getInstance().isMinimized()) {
Display.getInstance().callSerially(() -> {
System.out.println("geofence isMinimized");
});
} else {
System.out.println("geofence when app is closed");
//I want to run connectionRequest here but is not working
}
}
}
PS. I've used background fetch but it only works when the app is minimized.
Update 1: Demo of how I used the ConnectionRequest outside of the isMinimized() branch...
public class GeofenceListenerImpl implements GeofenceListener {
@Override
public void onExit(String id) {
System.out.println("geofence onExit");
}
@Override
public void onEntered(String id) {
if(Display.getInstance().isMinimized()) {
Display.getInstance().callSerially(() -> {
});
} else {
System.out.println("geofence when app is closed");
Connection c = new Connection();
c.liveTrackConnectionMethod("22" , "23");
}
}
}
Connection class
public class Connection {
ArrayList<Map<String, Object>> response;
public void liveTrackConnectionMethod(String lat, String lon) {
ConnectionRequest cr = new ConnectionRequest() {
@Override
protected void readResponse(InputStream input) throws IOException {
JSONParser jSONParser = new JSONParser();
Map parser = jSONParser.parseJSON(new InputStreamReader(input));
response = null;
}
};
cr.setPost(true);
cr.setUrl("http://url.com");
cr.addArgument("userid", Preferences.get(AllUrls.userIdPreference, null));
cr.addArgument("lat", lat + "");
cr.addArgument("long", lon + "");
cr.addRequestHeader("Accept", "application/json");
NetworkManager.getInstance().addToQueueAndWait(cr);
}
}
I think an app will always return false for isMinimized() when the app is closed or minimized (i.e. not currently running in the foreground); I may be wrong about this.
Try calling your ConnectionRequest code outside the isMinimized() check. After all, you will want to keep track of the user's location whether they are using the app or not.
Your first solution with LocalNotification will show users a notification (the else branch) rather than the Dialog even when they're using the app, because isMinimized() will be false.
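A minimal sketch of that suggestion, reusing the Connection class from the question (the hard-coded "22"/"23" coordinates are just the placeholders from the original demo):
public class GeofenceListenerImpl implements GeofenceListener {
    @Override
    public void onExit(String id) {
    }

    @Override
    public void onEntered(String id) {
        // send the location regardless of whether the app is in the foreground or minimized;
        // in a real app you would pass the actual GPS coordinates here
        new Connection().liveTrackConnectionMethod("22", "23");
    }
}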
I'm configuring WS-Security for my web service with Apache CXF. I've added a WSS4JInInterceptor and a callback and it's working. The problem is that some methods don't need to be protected by WS-Security, while others do.
How can I do that? Are there any annotations or input map keys for WSS4JInInterceptor?
I can see this in the WSS4JInInterceptor code:
public void handleMessage(SoapMessage msg) throws Fault {
if (msg.containsKey(SECURITY_PROCESSED) || isGET(msg)) {
return;
}
So it seems I can add my own custom interceptor and set SECURITY_PROCESSED for unprotected methods, but it also seems there should be a better approach.
For now I had to reinvent the wheel with my own interceptor implementation:
// adds skip flag for methods that should not be checked
public static class CheckMethodsInterceptor implements PhaseInterceptor<SoapMessage> {
private List<String> checkedMethods;
public CheckMethodsInterceptor(List<String> checkedMethods) {
this.checkedMethods = checkedMethods;
}
protected void allowConnection(SoapMessage message) {
// skip checking by WSS4JInInterceptor
message.put(WSS4JInInterceptor.SECURITY_PROCESSED, "true");
}
@Override
public void handleMessage(SoapMessage message) throws Fault {
String action = (String)message.get("SOAPAction");
if (action == null || !checkedMethods.contains(action.substring(action.lastIndexOf("/") + 1))) {
allowConnection(message);
}
}
@Override
public Set<String> getAfter() {
return Collections.emptySet();
}
@Override
public Set<String> getBefore() {
return Collections.emptySet();
}
@Override
public String getId() {
return CheckMethodsInterceptor.class.getName();
}
@Override
public String getPhase() {
return Phase.PRE_PROTOCOL;
}
@Override
public Collection<PhaseInterceptor<? extends Message>> getAdditionalInterceptors() {
return null;
}
@Override
public void handleFault(SoapMessage message) {
}
}
and use it like this:
// checked methods only (before WSS4JInInterceptor !)
SecurityService.CheckMethodsInterceptor checkMethodsInterceptor =
new SecurityService.CheckMethodsInterceptor(Arrays.asList(
"CreateUsers",
"GetUsers"
));
ep.getServer().getEndpoint().getInInterceptors().add(checkMethodsInterceptor);
WSS4JInInterceptor inSecurityInterceptor = new WSS4JInInterceptor(inSecurityProperties);
Feel free to suggest a better solution.
In Python I can consume a web service so easily:
from suds.client import Client
client = Client('http://www.example.org/MyService/wsdl/myservice.wsdl') #create client
result = client.service.myWSMethod("Bubi", 15) #invoke method
print result #print the result returned by the WS method
I'd like to reach such a simple usage with Java.
With Axis or CXF you have to create a web service client, i.e. a package that reproduces all the web service methods so that we can invoke them as if they were normal methods. Let's call these proxy classes; usually they are generated by the wsdl2java tool.
Useful and user-friendly. But any time I add or modify a web service method and want to use it in a client program, I need to regenerate the proxy classes.
So I found CXF's DynamicClientFactory; this technique avoids the use of proxy classes:
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.endpoint.dynamic.DynamicClientFactory;
//...
//create client
DynamicClientFactory dcf = DynamicClientFactory.newInstance();
Client client = dcf.createClient("http://www.example.org/MyService/wsdl/myservice.wsdl");
//invoke method
Object[] res = client.invoke("myWSMethod", "Bubi");
//print the result
System.out.println("Response:\n" + res[0]);
But unfortunately it creates and compiles proxy classes at runtime, and hence requires a JDK on the production machine. I have to avoid this, or at least I can't rely on it.
My question:
Is there another way to dynamically invoke any method of a web service in Java, without having a JDK at runtime and without generating "static" proxy classes? Maybe with a different library? Thanks!
I know this is a really old question, but if you are still interested you could use the soap-ws GitHub project: https://github.com/reficio/soap-ws
Here is a really simple usage sample:
Wsdl wsdl = Wsdl.parse("http://www.webservicex.net/CurrencyConvertor.asmx?WSDL");
SoapBuilder builder = wsdl.binding()
.localPart("CurrencyConvertorSoap")
.find();
SoapOperation operation = builder.operation()
.soapAction("http://www.webserviceX.NET/ConversionRate")
.find();
Request request = builder.buildInputMessage(operation);
SoapClient client = SoapClient.builder()
.endpointUrl("http://www.webservicex.net/CurrencyConvertor.asmx")
.build();
String response = client.post(request);
As you can see it is really simple.
With CXF 3.x this is possible with a StaxDataBinding. Follow the steps below to get the basics; of course, this can be enhanced to fit your needs.
Create a StaxDataBinding, something like the code below. Note that this code can be refined further.
class StaxDataBinding extends AbstractInterceptorProvidingDataBinding {
private XMLStreamDataReader xsrReader;
private XMLStreamDataWriter xswWriter;
public StaxDataBinding() {
super();
this.xsrReader = new XMLStreamDataReader();
this.xswWriter = new XMLStreamDataWriter();
inInterceptors.add(new StaxInEndingInterceptor(Phase.POST_INVOKE));
inFaultInterceptors.add(new StaxInEndingInterceptor(Phase.POST_INVOKE));
inInterceptors.add(RemoveStaxInEndingInterceptor.INSTANCE);
inFaultInterceptors.add(RemoveStaxInEndingInterceptor.INSTANCE);
}
static class RemoveStaxInEndingInterceptor
extends AbstractPhaseInterceptor<Message> {
static final RemoveStaxInEndingInterceptor INSTANCE = new RemoveStaxInEndingInterceptor();
public RemoveStaxInEndingInterceptor() {
super(Phase.PRE_INVOKE);
addBefore(StaxInEndingInterceptor.class.getName());
}
public void handleMessage(Message message) throws Fault {
message.getInterceptorChain().remove(StaxInEndingInterceptor.INSTANCE);
}
}
public void initialize(Service service) {
for (ServiceInfo serviceInfo : service.getServiceInfos()) {
SchemaCollection schemaCollection = serviceInfo.getXmlSchemaCollection();
if (schemaCollection.getXmlSchemas().length > 1) {
// Schemas are already populated.
continue;
}
new ServiceModelVisitor(serviceInfo) {
public void begin(MessagePartInfo part) {
if (part.getTypeQName() != null
|| part.getElementQName() != null) {
return;
}
part.setTypeQName(Constants.XSD_ANYTYPE);
}
}.walk();
}
}
@SuppressWarnings("unchecked")
public <T> DataReader<T> createReader(Class<T> cls) {
if (cls == XMLStreamReader.class) {
return (DataReader<T>) xsrReader;
}
else {
throw new UnsupportedOperationException(
"The type " + cls.getName() + " is not supported.");
}
}
public Class<?>[] getSupportedReaderFormats() {
return new Class[] { XMLStreamReader.class };
}
@SuppressWarnings("unchecked")
public <T> DataWriter<T> createWriter(Class<T> cls) {
if (cls == XMLStreamWriter.class) {
return (DataWriter<T>) xswWriter;
}
else {
throw new UnsupportedOperationException(
"The type " + cls.getName() + " is not supported.");
}
}
public Class<?>[] getSupportedWriterFormats() {
return new Class[] { XMLStreamWriter.class, Node.class };
}
public static class XMLStreamDataReader implements DataReader<XMLStreamReader> {
public Object read(MessagePartInfo part, XMLStreamReader input) {
return read(null, input, part.getTypeClass());
}
public Object read(QName name, XMLStreamReader input, Class<?> type) {
return input;
}
public Object read(XMLStreamReader reader) {
return reader;
}
public void setSchema(Schema s) {
}
public void setAttachments(Collection<Attachment> attachments) {
}
public void setProperty(String prop, Object value) {
}
}
public static class XMLStreamDataWriter implements DataWriter<XMLStreamWriter> {
private static final Logger LOG = LogUtils
.getL7dLogger(XMLStreamDataWriter.class);
public void write(Object obj, MessagePartInfo part, XMLStreamWriter writer) {
try {
if (!doWrite(obj, writer)) {
// write your own logic for how you want to handle the input data;
// the code below just calls the toString() method
if (part.isElement()) {
QName element = part.getElementQName();
writer.writeStartElement(element.getNamespaceURI(),
element.getLocalPart());
if (obj != null) {
writer.writeCharacters(obj.toString());
}
writer.writeEndElement();
}
}
}
catch (XMLStreamException e) {
throw new Fault("COULD_NOT_READ_XML_STREAM", LOG, e);
}
}
public void write(Object obj, XMLStreamWriter writer) {
try {
if (!doWrite(obj, writer)) {
throw new UnsupportedOperationException("Data types of "
+ obj.getClass() + " are not supported.");
}
}
catch (XMLStreamException e) {
throw new Fault("COULD_NOT_READ_XML_STREAM", LOG, e);
}
}
private boolean doWrite(Object obj, XMLStreamWriter writer)
throws XMLStreamException {
if (obj instanceof XMLStreamReader) {
XMLStreamReader xmlStreamReader = (XMLStreamReader) obj;
StaxUtils.copy(xmlStreamReader, writer);
xmlStreamReader.close();
return true;
}
else if (obj instanceof XMLStreamWriterCallback) {
((XMLStreamWriterCallback) obj).write(writer);
return true;
}
return false;
}
public void setSchema(Schema s) {
}
public void setAttachments(Collection<Attachment> attachments) {
}
public void setProperty(String key, Object value) {
}
}
}
Prepare your input to match the expected input, something like below:
private Object[] prepareInput(BindingOperationInfo operInfo, String[] paramNames,
String[] paramValues) {
List<Object> inputs = new ArrayList<Object>();
List<MessagePartInfo> parts = operInfo.getInput().getMessageParts();
if (parts != null && parts.size() > 0) {
for (MessagePartInfo partInfo : parts) {
QName element = partInfo.getElementQName();
String localPart = element.getLocalPart();
// whatever your input data you need to match data value for given element
// below code assumes names are paramNames variable and value in paramValues
for (int i = 0; i < paramNames.length; i++) {
if (paramNames[i].equals(localPart)) {
inputs.add(findParamValue(paramNames, paramValues, localPart));
}
}
}
}
return inputs.toArray();
}
Now set the proper data binding and pass the data
Bus bus = CXFBusFactory.getThreadDefaultBus();
WSDLServiceFactory sf = new WSDLServiceFactory(bus, wsdl);
sf.setAllowElementRefs(false);
Service svc = sf.create();
Client client = new ClientImpl(bus, svc, null,
SimpleEndpointImplFactory.getSingleton());
StaxDataBinding databinding = new StaxDataBinding();
svc.setDataBinding(databinding);
bus.getFeatures().add(new StaxDataBindingFeature());
BindingOperationInfo operInfo = ...//find the operation you need (see below)
Object[] inputs = prepareInput(operInfo, paramNames, paramValues);
client.invoke("operationname", inputs);
If needed, you can match the operation name with something like below:
private BindingOperationInfo findBindingOperation(Service service,
String operationName) {
for (ServiceInfo serviceInfo : service.getServiceInfos()) {
Collection<BindingInfo> bindingInfos = serviceInfo.getBindings();
for (BindingInfo bindingInfo : bindingInfos) {
Collection<BindingOperationInfo> operInfos = bindingInfo.getOperations();
for (BindingOperationInfo operInfo : operInfos) {
if (operInfo.getName().getLocalPart().equals(operationName)) {
if (operInfo.isUnwrappedCapable()) {
return operInfo.getUnwrappedOperation();
}
return operInfo;
}
}
}
}
return null;
}
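To tie it together, a rough sketch of how the lookup and invocation could be combined (illustrative only: the operation name "myWSMethod" comes from the question, paramNames/paramValues are assumed to be populated already, and findParamValue from the prepareInput snippet is assumed to exist):
// illustrative glue code, not part of the original answer
BindingOperationInfo operInfo = findBindingOperation(svc, "myWSMethod");
Object[] inputs = prepareInput(operInfo, paramNames, paramValues);
Object[] result = client.invoke(operInfo, inputs);
System.out.println("Response:\n" + result[0]);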