Azure Logic App - check if stored procedure results are empty

In an Azure Logic App, I am running an Execute stored procedure (V2) action, and I want to check whether it returned any data. The result body when no results come back looks like the empty example shown below. How can I check for this in the Logic App?
I have tried a previously suggested approach, but it is still not working.
Update: I created a string variable and appended the SQL results to it, then counted its length. If the length is 2, the result set is blank ("[]"); if it is not 2, there is data.
Is this an efficient way, or is there something easier?

You could add a Condition that checks whether "@{body('Execute_stored_procedure_(V2)')?['resultsets']?['Table1']}" is equal to "[]". If true (meaning no records), do the things you want. You can set this up in Code view mode.
Example: if the SP returns some rows, the output body will look like this:
{
    "ResultSets": {
        "Table1": [
            {
                "s_no": 1
            },
            {
                "s_no": 2
            }
        ]
    },
    "ReturnCode": 0,
    "OutputParameters": {}
}
If the SP returns no rows, the output body will look like this:
{
    "ResultSets": {
        "Table1": []
    },
    "ReturnCode": 0,
    "OutputParameters": {}
}
(Screenshots of the sample stored procedure for both cases, just for easier understanding.)
In the Logic App's code view, the Condition expression will look like this:
"expression": {
"and": [
{
"equals": [
"#{body('Execute_stored_procedure_(V2)')?['resultsets']?['Table1']}",
"[]"
]
}
]
},

Related

MongoDB update array of documents and replace by an array of replacement documents

Is it possible to bulk update (upsert) an array of documents with MongoDB by an array of replacement fields (documents)?
Basically, to get rid of the for loop in this pseudocode example:
for user in users {
    db.users.replaceOne(
        { "name": user.name },
        user,
        { "upsert": true }
    )
}
The updateMany documentation only covers the case where all documents are updated in the same fashion:
db.collection.updateMany(
    <query>,
    { $set: { status: "D" }, $inc: { quantity: 2 } },
    ...
)
I am trying to update (upsert) an array of documents where each document has its own set of replacement fields:
updateOptions := options.UpdateOptions{}
updateOptions.SetUpsert(true)
updateOptions.SetBypassDocumentValidation(false)
_, err := collection.Col.UpdateMany(ctx, bson.M{"name": bson.M{"$in": names}}, bson.M{"$set": users}, &updateOptions)
Where users is an array of documents:
[
{ "name": "A", ...further fields},
{ "name": "B", ...further fields},
...
]
Apparently, $set cannot be used for this case since I receive the following error: Error while bulk writing *v1.UserCollection (FailedToParse) Modifiers operate on fields but we found type array instead.
Any help is highly appreciated!
You may use Collection.BulkWrite().
Since you want to update each document differently, you have to prepare a different mongo.WriteModel for each document update.
You may use mongo.ReplaceOneModel for individual document replaces. You may construct them like this:
wm := make([]mongo.WriteModel, len(users))
for i, user := range users {
    wm[i] = mongo.NewReplaceOneModel().
        SetUpsert(true).
        SetFilter(bson.M{"name": user.name}).
        SetReplacement(user)
}
And you may execute all the replaces with one call like this:
res, err := coll.BulkWrite(ctx, wm)
Yes, here we have a loop too, but it only prepares the write tasks we want to carry out. All of them are sent to the database with a single call, and the database is "free" to carry them out in parallel if possible. This is likely to be significantly faster than calling Collection.ReplaceOne() for each document individually.
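For reference, a minimal self-contained sketch of the whole flow (the User struct, its fields, the database/collection names, and the connection URI are assumptions for illustration, not taken from the question):

package main

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// User is a hypothetical document type for this sketch.
type User struct {
    Name string `bson:"name"`
    Age  int    `bson:"age"`
}

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // Hypothetical connection details.
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
    if err != nil {
        log.Fatal(err)
    }
    defer client.Disconnect(ctx)
    coll := client.Database("test").Collection("users")

    users := []User{{Name: "A", Age: 30}, {Name: "B", Age: 25}}

    // One ReplaceOneModel per document: filter by name, upsert the replacement.
    wm := make([]mongo.WriteModel, len(users))
    for i, user := range users {
        wm[i] = mongo.NewReplaceOneModel().
            SetUpsert(true).
            SetFilter(bson.M{"name": user.Name}).
            SetReplacement(user)
    }

    // All replaces are sent to the server in a single round trip.
    res, err := coll.BulkWrite(ctx, wm)
    if err != nil {
        log.Fatal(err)
    }
    log.Printf("matched=%d upserted=%d", res.MatchedCount, res.UpsertedCount)
}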

Full Text Search in OrientDB JSON Data

I have the following data in OrientDB 3.0.27, where some of the values are JSON arrays and some are strings:
{
    "@type": "d",
    "@rid": "#57:0",
    "@version": 2,
    "@class": "abc_class",
    "user_name": [
        "7/1 LIBOR Product"
    ],
    "user_Accountability": [],
    "user_Rollout_32_date": [],
    "user_Brands": [
        "AppNet"
    ],
    "user_lastModificationTime": [
        "2019-11-27 06:40:35"
    ],
    "user_columnPercentage": [
        "0.00"
    ],
    "user_systemId": [
        "06114a87-a099-0c30c60b49c4"
    ],
    "user_lastModificationUser": [
        "system"
    ],
    "system_type": "Product",
    "user_createDate": [
        "2017-10-27 09:58:42"
    ],
    "system_modelId": "bian_model",
    "user_parent": [
        "a12a41bd-af6f-0ca028af480d"
    ],
    "user_Strategic_32_value": [],
    "system_oeId": "06114a87-a099-0c30c60b49c4",
    "user_description": [],
    "@fieldTypes": "user_name=e,user_Accountability=e,user_Rollout_32_date=e,user_Brands=e,user_lastModificationTime=e,user_columnPercentage=e,user_systemId=e,user_lastModificationUser=e,user_createDate=e,user_parent=e,user_Strategic_32_value=e,user_description=e"
}
I have tried the following queries:
select * from `abc_class` where any() = ["AppNet"] limit 2;
select * from `abc_class` where any() like '%a099%' limit 2;
Both of the above queries work, since they respect the data type of the field.
I want to run a contains query that searches ANY field of ANY data type (string, number, JSON array, etc.), more like a full-text search:
select * from `abc_class` where any() like '%AppNet%' limit 2;
The above query doesn't work, since the real value is inside a JSON array. I have tried almost everything from the filtering section of the documentation.
How can I achieve full-text-search-like functionality with the existing data?
EDIT #1
After doing more research, I am now able to at least convert the array value into a string and then run the like operator on it, like below:
select * from `abc_class` where user_name.asString() like '%LIBOR%'
However, using any().asString() doesn't return any results:
select * from `abc_class` where any().asString() like '%LIBOR%'
If the above query can be enhanced somehow to query any column as a string, the problem would be solved.
If all the column values need to be searched, we can create a JSON object of the full row data and convert it into a string.
Then query the string with the like keyword, as follows:
select * from `abc_class` where @this.toJSON().asString() like '%LIBOR%'
If we convert with @this.asString() directly, we get the count of array elements instead of the real data inside the array elements, like below:
abc_class#57:4{system_modelId:model,system_oeId:14f4b593-a57d-4d37ad070a10,system_type:Product,user_lastModificationUser:[1],user_name:[1],user_description:[0],user_Accountability:[0],user_lastModificationTime:[1],user_Rollout_32_date:[0],user_Strategic_32_value:[0],user_createDate:[1],user_Brands:[0],user_parent:[1],user_systemId:[1],user_columnCompletenessPercentage:[1]} v2
Therefore, we need to first convert into JSON and then into a string, querying the full record with @this.toJSON().asString().
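As a further option not covered above, OrientDB also ships a Lucene full-text engine; a hedged sketch (assuming the class and property names from the sample record, and that reindexing is acceptable) would be to index a property explicitly and query it with SEARCH_CLASS:

-- Hypothetical: create a Lucene full-text index on one known property,
-- then search all Lucene-indexed properties of the class.
CREATE INDEX abc_class.user_name ON abc_class (user_name) FULLTEXT ENGINE LUCENE;
SELECT * FROM abc_class WHERE SEARCH_CLASS("LIBOR") = true;

This trades the catch-all any() behaviour for proper full-text matching on the indexed properties.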
References:
https://orientdb.com/docs/last/sql/SQL-Methods.html
https://orientdb.com/docs/last/sql/SQL-Where.html
https://orientdb.com/docs/last/sql/SQL-Syntax.html

Mongodb: upsert multiple documents at the same time (with key like $in)

I stumbled upon a funny behavior in MongoDB:
When I run:
db.getCollection("words").update({ word: { $in: ["nico11"] } }, { $inc: { nbHits: 1 } }, { multi: 1, upsert: 1 })
it will create "nico11" if it doesn't exist, and increase nbHits by 1 (as expected).
However, when I run:
db.getCollection("words").update({ word: { $in: ["nico10", "nico11", "nico12"] } }, { $inc: { nbHits: 1 } }, { multi: 1, upsert: 1 })
it will correctly update the keys that are already in the DB, but not insert the missing ones.
Is that the expected behavior, and is there any way I can provide an array to mongoDB, for it to update the existing elements, and create the ones that need to be created?
That is expected behaviour according to the documentation:
The update creates a base document from the equality clauses in the <query> parameter, and then applies the update expressions from the <update> parameter. Comparison operations from the <query> will not be included in the new document.
And, no, there is no way to achieve what you are attempting to do here using a simple upsert. The reason for that is probably that the expected outcome would be impossible to define. In your specific case it might be possible to argue along the lines of: "oh well, it is kind of obvious what we should be doing here". But imagine a more complex query like this:
db.getCollection("words").update(
    {
        a: { $in: ["b", "c"] },
        x: { $in: ["y", "z"] }
    },
    { $inc: { nbHits: 1 } },
    { multi: 1, upsert: 1 }
)
What should MongoDB do in this case?
There is, however, the concept of bulk write operations in MongoDB, where you would define three separate updateOne operations and package them up in a single request to the server, as sketched below.
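A sketch of that approach in the shell, using the words and field names from the question:

db.getCollection("words").bulkWrite([
    { updateOne: { filter: { word: "nico10" }, update: { $inc: { nbHits: 1 } }, upsert: true } },
    { updateOne: { filter: { word: "nico11" }, update: { $inc: { nbHits: 1 } }, upsert: true } },
    { updateOne: { filter: { word: "nico12" }, update: { $inc: { nbHits: 1 } }, upsert: true } }
])

Each updateOne carries its own equality filter, so every missing word is upserted individually while existing ones are simply incremented.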

ElasticSearch: how to perform a search like MSSQL "LIKE 'word%'" with a tokenized string?

Currently we are performing full-text search in MSSQL with the query:
select * from contract where number like 'word%'
The problem is that a contract number may look like:
АА-1641471
TST-100069
П-5112-90-00230
001-1000017
1617/292/000001
and ES splits all of these into tokens.
How can I configure ES not to split these contract numbers into tokens, and perform the same search as the SQL query above?
The closest solution I've found is to perform a query like this:
{
    "size": 10,
    "query": {
        "regexp": {
            "contractNumber": {
                "value": ".*п-11.*"
            }
        }
    }
}
This solution works the same as MSSQL LIKE 'word%' with values like 1111, 2568, etc., but it fails with п-11.
One option could be to use the wildcard query, which can perform any type of wildcard combination, i.e. the equivalents of %val%, %val or val%:
{
    "query": {
        "wildcard": { "contractNumber": "*11" }
    }
}
NOTE: it's not recommended to start a search term with a wildcard, as it can be extremely slow.
To make this work with string values and prevent them from being tokenized, you need to update your index and tell the analyser to stay away. One way of doing that is to define the property as type keyword instead of text:
PUT /_template/template_1
{
    "index_patterns": ["your_index*"],
    "order": 0,
    "settings": {
        "number_of_shards": 1
    },
    "mappings": {
        "your_document_type": {
            "properties": {
                "contractNumber": {
                    "type": "keyword"
                }
            }
        }
    }
}
NOTE: replace your_index with your index name and your_document_type with your document type.
Once the mapping is added, delete the current index and recreate it. It will then use the template for its properties, and contractNumber will be indexed as a keyword.
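With the keyword mapping in place, a prefix search equivalent to LIKE 'П-5112%' might then look like this (a sketch; your_index is a placeholder as above):

GET /your_index/_search
{
    "query": {
        "wildcard": { "contractNumber": "П-5112*" }
    }
}

Because the field is no longer analysed, the whole contract number is stored as a single token, so the leading characters (including the Cyrillic ones) behave just like the SQL prefix search.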

findBy query not returning correct page info

I have a Person collection that is made up of the following structure
{
    "_id" : ObjectId("54ddd6795218e7964fa9086c"),
    "_class" : "uk.gov.gsi.hmpo.belt.domain.person.Person",
    "imagesMatch" : true,
    "matchResult" : {
        "_id" : null,
        "score" : 1234,
        "matchStatus" : "matched",
        "confirmedMatchStatus" : "notChecked"
    },
    "earlierImage" : DBRef("image", ObjectId("54ddd6795218e7964fa9086b")),
    "laterImage" : DBRef("image", ObjectId("54ddd67a5218e7964fa908a9")),
    "tag" : DBRef("tag", ObjectId("54ddd6795218e7964fa90842"))
}
Notice that the "tag" is a DBRef.
I've got a Spring Data finder that looks like the following:
Page<Person> findByMatchResultNotNullAndTagId(@Param("tagId") String tagId, Pageable page);
When this code is executed the find query looks like the following:
{ matchResult: { $ne: null }, tag: { $ref: "tag", $id: ObjectId('54ddd6795218e7964fa90842') } } sort: {} projection: {} skip: 0 limit: 1
Which is fine, I get a collection of 1 person back (limit=1). However the page details are not correct. I have 31 persons in the collection so I should have 31 pages. What I get is the following:
"page" : {
"size" : 1,
"totalElements" : 0,
"totalPages" : 0,
"number" : 0
}
The count query looks like the following:
{ count: "person", query: { matchResult: { $ne: null }, tag.id: "54ddd6795218e7964fa90842" } }
That tag.id doesn't look correct to me compared with the equivalent find query above.
I've found that if I add a new method to org.springframework.data.mongodb.core.MongoOperations:
public interface MongoOperations {
    public long count(Query query, Class<?> entityClass, String collectionName);
}
If I then re-jig AbstractMongoQuery.execute(Query query) to use that method instead of the similar method without the entityClass parameter, I get the correct paging results.
Question: Am I doing something wrong or is this a bug in Spring Data Mongo?
Edit
Taking inspiration from Christoph, I have added the following test code on GitHub: https://github.com/tedp/Spring-Data-Test
The information contained in the Page returned depends on the query executed. Assuming a total of 31 elements in your collection, only a few of them, or even just one, might match the given criteria by referencing the tag with id 54ddd6795218e7964fa90842. Therefore you only get the total number of elements that match the query, not the total number of elements in your collection.
This bug was actually fixed in DATAMONGO-1120, as pointed out by Christoph. I needed to override the Spring Data version to 1.6.2.RELEASE until the next iteration of Spring Boot, which will presumably lift Spring Data to at least 1.6.2.RELEASE. A sketch of such an override follows.
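For illustration only, the override might look like this in a Maven pom.xml that inherits from the Spring Boot parent (the property name is an assumption; check the spring-boot-dependencies POM of your Boot version for the exact name):

<!-- Hypothetical: pin the managed Spring Data MongoDB version.
     The property name must match the one defined by your
     spring-boot-dependencies POM. -->
<properties>
    <spring-data-mongodb.version>1.6.2.RELEASE</spring-data-mongodb.version>
</properties>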
