How to change the order of the Compose connector output in Azure Logic Apps? By default it is in ascending order - azure-logic-apps

I get the data in JSON format, containing table details (it has n rows and columns). I want to send a mail and include the row details in the mail body. Right now I am able to send the mail and include the row details; I am using a Compose connector to define the required rows, but the output is always in ascending order. (I want to display the Message column first and then the Count column, but I always get the Count column first and then the Message column.) I want to customize the output.
I tried using the Initialize Variable connector, but since I am using a for-each loop I cannot use it (Initialize Variable has to be declared at the top, outside the loop).
(Screenshots: the logic app before and after triggering, the current output, and the desired output with the Message column first and then the Count column.)

This is an expected behaviour that shouldn't have side effects if you stick to the JSON format. It's because JavaScript JSON libraries default to alphabetical ordering of properties.
However, if you insist on a custom order, you could create a new JSON object with a Compose connector: build a string with the properties in the order you want, then compose it as-is or parse it back to JSON.
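For example, a minimal sketch of the Compose input inside the for-each, assuming the parsed item exposes Message and Count properties (the items('For_each') path is an assumption about your loop's name):

concat('{"Message":"', items('For_each')?['Message'], '","Count":', items('For_each')?['Count'], '}')

This keeps the columns in the order you wrote them, because the result is a plain string; wrap it in json(...) only if a downstream action needs an actual JSON object.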

Related

Way to calculate difference between Tab Filter and Overall in Google Data Studio?

I have a calculated value in Google Data Studio that a client wants to see as follows:
Result = Value (No filters on page applied) - Value (Filters on page applied dynamically)
I don't know how to calculate or accomplish this in Google Data Studio, or whether it is possible at all. Can someone show me if this is possible?
There are two workarounds:
Parameters
Instead of using filters, create a parameter (say parameter1) and a calculated field with the formula
Value -
CASE WHEN parameter1 = some_field THEN Value
ELSE 0 END
Blend
Duplicate the data source and hide all fields that shall not be filtered. Blend this data source with the original one, using the unfiltered dimensions as join keys. Use the field Value from both sources as a metric and rename the two copies filtered / not filtered.
Add a chart, and in this chart add a metric field with a formula subtracting these two Value fields, as sketched below.
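For instance, assuming the renamed metrics are Value_not_filtered and Value_filtered (hypothetical names), the chart-level calculated field could be:

SUM(Value_not_filtered) - SUM(Value_filtered)

Re-aggregating with SUM is needed here because metrics coming out of a blend lose their default aggregation.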

Convert CSV elements into a single Array using Azure Logic Apps

I have a csv file which has the following sample values
ReportId,ReportName
1,Poy
2,Boy
3,Soy
I want this to be converted into a single array like
[ReportId,ReportName,1,Poy,2,Boy,3,Soy]
using logic apps.
Is this possible?
You could refer to my flow below. Initialize the CSV data in a Compose action, then split it with the expression split(outputs('Compose'),'\n'). You need to go to code view to edit the expression; otherwise the designer escapes it to split(outputs('Compose'),'\\n').
Then loop over the result to get the individual values. The input of the inner for-each (For each 2) is split(item(),','), and each iteration appends the current item to an array variable.
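A minimal sketch of the actions in code view, assuming an array variable named ReportArray and the action names shown (both are assumptions; the expressions are the ones above):

"Compose": {
    "type": "Compose",
    "inputs": "ReportId,ReportName\n1,Poy\n2,Boy\n3,Soy"
},
"Initialize_variable": {
    "type": "InitializeVariable",
    "inputs": { "variables": [ { "name": "ReportArray", "type": "Array", "value": [] } ] },
    "runAfter": { "Compose": [ "Succeeded" ] }
},
"For_each": {
    "type": "Foreach",
    "foreach": "@split(outputs('Compose'),'\n')",
    "actions": {
        "For_each_2": {
            "type": "Foreach",
            "foreach": "@split(item(),',')",
            "actions": {
                "Append_to_array_variable": {
                    "type": "AppendToArrayVariable",
                    "inputs": { "name": "ReportArray", "value": "@item()" }
                }
            }
        }
    },
    "runAfter": { "Initialize_variable": [ "Succeeded" ] }
}

Note that for-each iterations run in parallel by default, so set the loops' concurrency to 1 (Sequential) if the array must keep the original CSV order.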
(Screenshot: the resulting single array.)

SSIS Script Component - get raw row data in data flow

I am processing a flat file in SSIS and one of the requirements is that if a given row contains an incorrect number of delimiters, fail the row but continue processing the file.
My plan is to load the rows into a single column in SQL server, but during the load, I’d like to test each row during the data flow to see if it has the right number of delimiters, and add a derived column value to store the result of that comparison.
I’m thinking I could do that with a Script Component, but I’m wondering if anyone has done this before and what the best method would be. If a Script Component is the way to go, how do I access the raw row, with its delimiters, inside the script?
SOLUTION:
I ended up going with a modified version of Holder's answer, as I found that TOKENCOUNT() will not count empty values (per this SO answer). When two delimiters are not separated by a value, it produces an incorrect count, at least for my purposes: for example, TOKENCOUNT("1||Poy","|") returns 2 rather than 3, because the empty middle field is skipped.
I used the following expression instead:
LEN(EntireRow) - LEN(REPLACE(EntireRow, "|", ""))
This results in the correct count of delimiters in the row, regardless of whether there's a value in a given field or not.
My suggestion is to use a Derived Column to do your test,
and then add a Conditional Split to decide whether or not to insert the row.
Use the TOKENCOUNT function in the Derived Column expression to get the number of columns, like this: TOKENCOUNT(EntireRow,"|")
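Putting the two pieces together, a sketch of the expressions (the five-column layout, and hence four delimiters, is a hypothetical assumption):

Derived Column:                  DelimiterCount = LEN(EntireRow) - LEN(REPLACE(EntireRow,"|",""))
Conditional Split (good rows):   DelimiterCount == 4

Rows that fail the condition can be redirected to an error output or staging table instead of failing the whole load.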

Vlookup data greater than 0 only from two columns

On my first tab, I need to return the data from the second tab, columns A, B, and C, but only when the value in column A is greater than 0. I have pics to send but don't see a way to upload them. I am currently using this formula, but have been unable to expand on it to get what I need.
=VLOOKUP('MSA Units'!A4, 'MSA Units'!A4:C882,1,FALSE)
I have also tried various forms of INDEX and MATCH arguments with no good results.
Here is the revised link to the images: http://imgur.com/a/20J0v
Filter the MSA Units sheet and, for column A, select everything other than (Blanks). Copy columns A:C of what remains visible into the Daily Report Coversheet.

How to add images and videos in Cassandra (queries to include an image)?

In Cassandra, how do I add images to a column family for a row? Below is a sample table with the columns KEY, Name, Age, and Cover_Image.
We are able to add Name and Age by entering queries like this:
create column family users with comparator=UTF8Type
and column_metadata=[
{column_name:Name,validation_class:UTF8Type},
{column_name:Age,validation_class:LongType,index_type:KEYS}];
set users[babu][Name]='Babu Droid';
set users[babu][Age]=23;
Like the queries above, what is the query to add an image (both the create query, including its validation_class, and the set query)?
For an image you would want to use BytesType as the column validator (for example, by adding {column_name:Cover_Image,validation_class:BytesType} to the column_metadata above) and simply insert the raw bytes of the image into the column. There won't be a good way to do that from the command-line interface, though; you would need to write some custom code using a client library.
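For example, a minimal sketch using the pycassa Python client (the keyspace name and image path are assumptions; 'users' and 'babu' come from the question):

import pycassa

# Connect to the keyspace that contains the users column family
pool = pycassa.ConnectionPool('MyKeyspace')    # hypothetical keyspace name
users = pycassa.ColumnFamily(pool, 'users')

# Read the raw bytes of the image file
with open('cover.jpg', 'rb') as f:             # hypothetical file path
    image_bytes = f.read()

# Store the raw bytes under the Cover_Image column for row key 'babu'
users.insert('babu', {'Cover_Image': image_bytes})

Reading it back is the same call in reverse: users.get('babu', columns=['Cover_Image']) returns the bytes, which you can write out to a file.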
