Issue with web service result schema in Web Experience Factory - websphere-portal

I'm using IBM Web Experience Factory 8.0 with the fix pack. In my project I'm using the Web Service Multiple Operations builder, which has several operations, each with a different result schema. When I try to assign the result of an operation to a Data Page, however, it shows the same result schema for every operation.
The return values of these operations should differ, but the builder shows the same schema for each.
How can I solve this issue?

Related

What are the options for getting data into JS/Angular from an Impala query within a Zeppelin note?

I'm currently getting data from an Impala query into JavaScript/Angular within a Zeppelin note by using a %angular paragraph that makes an AJAX call to the Zeppelin Notebook REST API ("run a paragraph synchronously", doc: https://zeppelin.apache.org/docs/0.8.2/usage/rest_api/notebook.html#run-a-paragraph-synchronously). This runs another paragraph in the same note, which is set to %impala and contains the query. Going via the API means I get the data back into JavaScript as the REST response to the API call. (That endpoint seems to be the only call in the whole API that will run a paragraph and return data from it.)
However, it's been proving a bit unreliable in our big corporate setting for various reasons (network- and policy-related, etc.). My question is: are there ANY other ways, within Zeppelin, to get data from a query (on a huge database) into JavaScript? The data is often a few tens of KB but could potentially be multi-MB.
For example, I know I can run pyspark .sql("my query here") queries in a %pyspark paragraph, but how do I then get the data over to the JS running in the %angular paragraph? Is the only option the same API call? I know of variable binding using the z (ZeppelinContext) object, but I'm not sure it can cope with larger data sizes. A sketch of the binding approach I mean is below.
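For illustration, this is roughly the variable-binding route I have in mind (a sketch only; the table and column names are placeholders, and it assumes z.angularBind is available from %pyspark the way it is documented for the Scala Spark interpreter):

%pyspark
# Run the query through Spark SQL instead of %impala
# (assumes the table is reachable from Spark).
rows = spark.sql("SELECT col_a, col_b FROM my_table LIMIT 1000").collect()

# Serialize to plain dicts and bind the result into the Angular scope.
data = [{"col_a": r["col_a"], "col_b": r["col_b"]} for r in rows]
z.angularBind("queryResult", data)

The bound variable should then be visible in the %angular paragraph, e.g.:

%angular
<div ng-repeat="row in queryResult">{{row.col_a}}: {{row.col_b}}</div>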
Any suggestions very welcome! Thanks

Data retrieval using Dapper with ASP.NET Core C#

I am facing a problem while fetching SQL Server data using ASP.NET Core with Dapper.
I have a view with three varbinary(max) columns and 800 records, which executes in under 2 seconds in SSMS.
But in .NET Core it takes around a minute to return the result.
Please see the Dapper data-access methods I tried below.
Method 1:
var data = await connection.QueryMultipleAsync(command);
Method 2:
var data = await connection.QueryAsync<ModelName>(command);
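For context, here's roughly how each call is wired up (a sketch only; the connection string, SQL text, and ModelName shape are placeholders, not the real project values):

using System.Data.SqlClient;
using System.Threading.Tasks;
using Dapper;

public class ModelName
{
    public byte[] Col1 { get; set; }
    public byte[] Col2 { get; set; }
    public byte[] Col3 { get; set; }
}

public class DataAccess
{
    public async Task LoadAsync(string connectionString)
    {
        using var connection = new SqlConnection(connectionString);
        var command = "SELECT Col1, Col2, Col3 FROM dbo.MyView";

        // Method 1: QueryMultipleAsync returns a GridReader;
        // the single result set is then read from it.
        using (var grid = await connection.QueryMultipleAsync(command))
        {
            var rows1 = await grid.ReadAsync<ModelName>();
        }

        // Method 2: maps the result set straight onto the model type.
        var rows2 = await connection.QueryAsync<ModelName>(command);
    }
}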
Can anyone help me solve this issue?
Which Dapper method is the fastest at fetching varbinary(max) data in the same time as SSMS?
The short answer is that they should not differ in performance, and since they apparently do, something else must be going on; it's not possible to say what, though, as you haven't provided a repro of the problem.
If you are new to Dapper, my recommendation is to take a look at this tutorial I wrote; you'll get your answers there: https://medium.com/dapper-net/multiple-executions-56c410e9f8dd
The full tutorial is here: https://medium.com/dapper-net/dapper-net-tutorial-summary-79125c8ecdb2

How to gather a complete object schema with the MuleSoft Salesforce connector (Mule 4)

I'm using the Mule Salesforce connector (for Mule Runtime 4.4.2) in Anypoint Studio (7.4.2).
The Salesforce query language does not allow the * operator to gather all keys from an object, so I'm looking for another means to retrieve a sample object and create a model record that I could use for updates and creation.
Using the Task object (documented here: https://developer.salesforce.com/docs/atlas.en-us.object_reference.meta/object_reference/sforce_api_objects_task.htm) as an example, I find that the describeLayout() and retrieve() methods look promising.
However, when I try to invoke retrieve(), I'm required to submit a list of fields to retrieve.
I don't see the describeLayout() method exposed in the connector, so I haven't seen what it returns.
Have I missed a general purpose approach to allow me to gather every field from a sample object?
[edited for clarity]
See if there's describe support. describeLayout is primarily used if you need to recreate a Salesforce page in a mobile app, for example; it won't tell you much about field types, and it lists only the fields the end user can see, while there can be more hidden in the background.
You could have some luck with the REST API describe (a sketch of the call is at the end of this answer): https://developer.salesforce.com/docs/atlas.en-us.api_rest.meta/api_rest/dome_sobject_describe.htm
Or the Metadata API: https://developer.salesforce.com/docs/atlas.en-us.api.meta/api/sforce_api_calls_describesobjects_describesobjectresult.htm
I don't know what's available to you. I'd expect the Mule connector to do this for you as part of some connection wizard: pull info about all queryable tables and, after you pick one, about all the fields you can see in it. Or maybe you're overcomplicating something and you need a truly dynamic SELECT * equivalent, one that keeps working when an admin adds new fields, without having to refresh the connection?
Metadata can also be queried; it's stored in tables just like actual data. See if https://stackoverflow.com/a/60067076/313628 helps.
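As promised, the REST describe call is just an authenticated GET (a sketch; the instance URL, API version, and token are placeholders):

curl https://yourInstance.salesforce.com/services/data/v57.0/sobjects/Task/describe \
  -H "Authorization: Bearer <access_token>"

The response is a JSON document whose fields array lists every field on the object, with a name and type for each.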
So it turns out that the Mule 4 Salesforce connector does support describing an SObject.
To the Anypoint Studio developer, it shows up as its own operation in the connector's operation list; the XML definition offers little further insight.
Update: after further investigation, it turns out that an additional DataWeave operation is needed to get a simple list of fields. Once you have the SObject structure in the payload, apply:
payload.fields.*name
This yields an array of the field names.
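As a fuller sketch, a Transform Message step along these lines (assuming the describe result is the current payload):

%dw 2.0
output application/json
---
// Pull just the "name" key out of every entry in the fields array.
payload.fields.*name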

What is the best way to copy and compare data received from Web API request to SQL Server database?

There is a requirement in my project to design a system that collects data through a Web API, compares it with what we have, and copies the received data into an existing SQL Server DB. I want to know if anyone has already worked on such a requirement and, if so, what the best way to design it is. I am currently thinking of the two options below. Please let me know which one is better and whether there is any other option.
My algorithm will be: fetch the data through the Web API -> compare the data -> save mismatched data to a particular table -> copy new data to the existing tables.
The two options I am currently thinking of are:
1) A Windows service that runs once a day and executes the above algorithm.
2) An SSIS package that runs once a day and executes the above algorithm.
If anyone has used either of these solutions, please point me to an article or blog that could be helpful.
I had a similar project requirement before; what I built was in SSIS.
Brief steps:
Use a C# Script Task to fetch the returned data (http://json2csharp.com/ is an easy way to generate C# classes from your JSON components).
Install the third-party Newtonsoft.Json DLL to deserialize the JSON.
Assign the results in the C# script to predefined SSIS variables (be careful with the data types).
Compare the result with the existing table in a Data Flow Task.
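A minimal sketch of the Script Task part (the URL, class shape, and SSIS variable name are placeholders):

// Inside the SSIS Script Task's ScriptMain, with a reference
// added to Newtonsoft.Json.
using System.Collections.Generic;
using System.Net;
using Newtonsoft.Json;

public class ApiRecord   // shape generated via json2csharp from the JSON
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public void Main()
{
    using (var client = new WebClient())
    {
        string json = client.DownloadString("https://example.com/api/records");
        var records = JsonConvert.DeserializeObject<List<ApiRecord>>(json);

        // Hand the result to a predefined SSIS variable for the data flow.
        Dts.Variables["User::RecordCount"].Value = records.Count;
    }
    Dts.TaskResult = (int)ScriptResults.Success;
}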
Let me know if you have any questions

How to set up an ExtJS 4 store for CRUD against CouchDB?

How do I configure an ExtJS 4 store for simple CRUD from/to CouchDB?
There is a demo project that was put together for our last Austin Sencha meetup that shows connecting Ext 4 to both CouchDB and MongoDB:
https://github.com/coreybutler/JSAppStack
Specifically this class will probably help you get started.
I have developed a library called SenchaCouch to make it easy to use CouchDB as the sole server for hosting both application code and data. Check it out at https://github.com/srfarley/sencha-couch.
I'd like to point out that fully implementing CRUD capabilities with the demo requires some modification. CouchDB requires you to append revisions for any update/delete operation, which can also cause some issues with the field attributes in the Ext REST proxy. There is a project called mvcCouch that would be worth taking a look at; it references a plugin that should help with full CRUD operations against CouchDB.
You'll find a number of subtleties in ExtJS 4's REST proxy that can slow you down. This brief post summarises the major ones:
In your Model class, you have to either define a hardcoded 'id' property or use 'idProperty' to specify one column as 'id'.
Your server-side code needs to return the entire updated record(s) to the browser. CouchDB normally returns only an _id and _rev, so you'll have to find a way to fetch and return the entire document yourself.
Be aware that the record in the "data" property must be JSON-formatted.
Be sure to implement at least one validator in your Model class because, in the ExtJS source file AbstractStore.js, you can find the following code, which may never return true for a newly created record in the RowEditing plugin when the store is set to autoSync = true.
filterNew: function(item) {
    // only want phantom records that are valid
    return item.phantom === true && item.isValid();
},
This last item is, in my opinion, a design bug. The isValid() function should by rights return true by default, and rely on the developer to throw an error if problems occur.
The end result is that unless you have a validator for every field, the updates never get sent to CouchDB. You won't see any error thrown; it will just do nothing.
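To make the first and last points concrete, here's a minimal sketch of such a Model (the field names and proxy URL are placeholders; the validations entry is what keeps isValid() returning true):

Ext.define('MyApp.model.Document', {
    extend: 'Ext.data.Model',
    idProperty: '_id',                      // map CouchDB's _id as the record id
    fields: ['_id', '_rev', 'name'],
    validations: [
        { type: 'presence', field: 'name' } // at least one validator, per the note above
    ],
    proxy: {
        type: 'rest',
        url: '/mydb/documents',
        reader: { type: 'json', root: 'data' } // records live under "data"
    }
});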
I just released two updated CouchDB libs, for ExtJS and Sencha Touch respectively (based on S. Farley's previous work). They support nested data writing and basic CRUD.
https://github.com/rwilliams/sencha-couchdb-extjs
https://github.com/rwilliams/sencha-couchdb-touch
