Keep a rownum variable across multiple loop iterations - loops

I am producing a CSV file in OIC using the append option inside a loop, so each iteration of the loop writes to the same file. The file has a rownum column that holds the sequential record number. This is how I do it in my XSLT:
<xsl:variable name="Counter">
<xsl:number level="any"/>
</xsl:variable>
<ns33:RowNum>
<xsl:value-of select="$Counter"/>
</ns33:RowNum>
or
<ns35:RowNum xml:id="id_286">
<xsl:value-of select="position()"/>
</ns35:RowNum>
What happens, with an example of 10 records per loop iteration: in the first iteration rownum takes the values 1 .. 10, and in the second iteration 1 .. 10 again, so the rownum column in the csv file holds 1 .. 10 1 .. 10. I want the values to be 1 .. 20.
I am having a hard time figuring out how to do that. Any ideas?
Thanks

In OIC, a variable defined inside an XSLT is maintained only within the scope of that XSLT. Since you want a number that stays consistent across iterations of the XSLT transformation, you need a variable outside the XSLT. This variable needs to be passed into the XSLT as a parameter.
Declare one variable at the OIC level (outside the loop) and initialize it to 0.
As said before, it is also passed as a parameter to the XSLT.
Say the variable is named $externalCounter. Populate the value of RowNum as below:
<ns33:RowNum>
<xsl:value-of select="$externalCounter + $Counter"/>
</ns33:RowNum>
After your XSLT transformation, still inside the loop, update the value of $externalCounter to the maximum value of RowNum in the XSLT output. In your case, the value of the last nodeset should be sufficient:
$xsltOutput/..../RowNum[last()]
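A minimal sketch of this accumulation logic, written in plain Python rather than OIC/XSLT (all names are illustrative), just to show how the external counter carries the numbering across iterations:

```python
# Simulate the OIC loop: each iteration numbers its rows starting
# from an external counter that survives between iterations.
def run_iterations(batches):
    external_counter = 0          # OIC-level variable, initialized to 0
    all_rownums = []
    for batch in batches:
        # Inside the XSLT: RowNum = $externalCounter + $Counter
        rownums = [external_counter + i for i in range(1, len(batch) + 1)]
        all_rownums.extend(rownums)
        # After the transformation, update the external variable
        # to the last RowNum of this iteration's output.
        external_counter = rownums[-1]
    return all_rownums

# Two iterations of 10 records -> rownums 1..20
print(run_iterations([list(range(10)), list(range(10))]))
```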

Related

Power Automate - Filter Array with parameter variable

Power Automate - Filter Array
Filter array will look for a numeric date value from List Rows Present, then send the results in a table via email.
This query works …
#and(equals(items()?['Date'], '44825'))
But replacing the hard-coded '44825' with a variable does not:
#and(equals(items()?['Date'], variables('DateNumber')))
DateNumber is an int variable that contains 44825. The flow shows the value is in the variable as expected, but the filter is not doing what is expected.
I have not used much of this before, so I am thinking my use of the variables function is not correct.

assign a reference to an indicator dynamically in a for loop

I have a large number of internal path references in a LabVIEW project. Each path is entered manually into a bundle function along with a reference to a numeric indicator on the block diagram. Because I have a lot of paths and therefore a lot of numeric indicators, the block diagram is a big mess.
I want to streamline this with a CSV file holding an n x 2 array. In column 1 I want the path of the internal reference itself. In column 2 I want the name of the numeric indicator (already placed on the block diagram and front panel) that corresponds to the path in column 1. Using a for loop I want to iterate over each row of the CSV file and, using a bundle function, bundle the path (on index 0) with a reference to the numeric indicator itself. Here is the actual problem: I don't know how to dynamically assign the name of the numeric indicator (on index 1) to a control reference as the loop executes. See the state of my current VI for reference. Please help me find a way to dynamically create a reference to each numeric indicator as the loop runs.
Right now, the closest I have come is getting the name of the numeric indicator (index 1 in the CSV) assigned to a string reference, but my numeric indicators are still unreferenced and not connected to the bundle function.
Note that column 2 in the CSV has the same names as the numeric indicators: "numeric", "numeric 1", "numeric 2", "numeric 3", "numeric 4".
Read this https://forums.ni.com/t5/LabVIEW/How-to-get-control-reference-from-control-indicator-label-name/td-p/3884075 to learn how to obtain a control/indicator reference by its label name. That should solve your problem. Use the first CSV column as the file path and the second column to obtain the indicator reference, then bundle the two of them and that's it!

Pentaho data integration loop count variable

I want a simple loop that counts the number of iterations, like this in Java:
int count = 0;
for (int i = 0; i < 3; i++) {
    count = count + 1;
}
System.out.println(count); // prints 3
I am doing it with Pentaho Data Integration, so I have one job containing 3 transformations: the first transformation sets the number of loops (3 in the example above); the second has "Execute every input row" checked for looping and sets a variable inside the transformation using JavaScript with the getVariable() and setVariable() functions; the last transformation just gets the variable and writes a log line to show the count.
The problem is that every loop iteration in transformation 2 gets the variable as 0, so it ends up with a result of 1; what I expect is 3.
added the project files here: file
We'll need more details to help you, don't you have a simple sample of what you are trying to accomplish?
You can pass variables to a transformation from the job, so I don't think you'll need the getVariable() and setVariable() methods; you can just use the configuration properties of the transformation you execute.
I prefer using parameters (the next tab) over arguments/variables, but that's my preference.
The problem is that, in the t2 transformation, you are getting the variable and setting a new value for the same variable at the same time, which does not work within a single transformation. When you close the Set variables step you get a warning about exactly this.
To avoid it you need two variables: one you set before executing the loop, and another you set each time the loop executes (or after the loop, with the last value).
I have modified your job to make it work. In the t1 transformation, I added a new field (rownum_seq) created with the Add sequence step, to know how much to add to the counter in each execution of the loop. I could have used your id field, but in case you don't have a similar field in your real-world job, that's the step you need to achieve something similar. I have also renamed the variables to make clearer what I'm doing: in t1 I set the value of the variable var_cnt_before.
In the t2 transformation, I read var_cnt_before and set the value of var_cnt_after as the sum of var_cnt_before + rownum_seq; this means I'm changing the value of var_cnt_after each time t2 is executed.
In the t3 transformation, I read var_cnt_after, which has the value from the last execution of t2.
You could also calculate var_cnt_after in t1 and not modify it in t2, using the Group by step to get the max value of rownum_seq, so you don't need to modify that variable on every execution of t2. Depending on what you need to achieve, you might need to change it in t2, or you might only need the final value, in which case you can calculate it in t1.
This is the link to the modified job and transformations.
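The two-variable pattern above can be sketched in plain Python (a rough model of the job, with hypothetical names matching the answer, not actual PDI code):

```python
# Simulate the Pentaho job: t1 sets var_cnt_before before the loop,
# t2 runs once per input row, t3 reads the final value.
rows = [1, 2, 3]                      # rownum_seq values added in t1

variables = {"var_cnt_before": 0}     # set in t1, before the loop

for rownum_seq in rows:               # "Execute every input row" on t2
    # t2 reads one variable and writes a DIFFERENT one; reading and
    # setting the same variable inside one transformation does not work.
    variables["var_cnt_after"] = variables["var_cnt_before"] + rownum_seq

# t3 reads var_cnt_after, which holds the value of the last execution.
print(variables["var_cnt_after"])     # prints 3
```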

Why is this Loop not working in qlik sense script?

I am having a problem trying to do a FOR loop, as it produces no values.
I am doing 3 steps but am not sure what the problem is; I have attached the APP.
/// 1. These are the FOR values to PASS for the variables below.
for i= -1 to -7 ;
for j=-8 to -15;
for z= -16 to -21
//// 2.These are variable FUNCTIONS with same structure
LET
V_result1=(sum(Peek(Result_1,$i))*0.45+sum(Peek(Result_1,$j))*0.35+sum(Peek(Result_1,$z))*0.2)*1/5;
V_result2=(sum(Peek(Result_2,$i))*0.45+sum(Peek(Result_2,$j))*0.35+sum(Peek(Result_2,$z))*0.2)*1/5;
//// 3. The table where to apply those VARIABLES from a RESIDENT table.
DATE_PRODUCTION_4;
LOAD
"Date",
Sum($(V_result1)) as Forecast1,
sum($(V_result2)) as Forecast2
Resident [DATE_PRODUCTION_3]
GROUP BY "Date";
APP TEST
A couple of things are going wrong here:
If we look here, we see this:
Argument: field_name
Description: Name of the field for which the return value is required. Input value must be given as a string (for example, quoted literals).
"Input value must be given as a string (for example, quoted literals)."
So instead of:
Peek(Result_1,$i)
You should use:
Peek('Result_1', $i)
If we look here, we see this:
When using a variable for text replacement in the script or in an
expression, the following syntax is used:
$(variablename)
So building on step one, instead of this
Peek('Result_1', $i)
You should use:
Peek('Result_1', $(i))
Your for loop starts at -1 and goes to -7, but in the app you added, -7 will always return NULL, since your data only consists of 4 rows. So change your for loop to a smaller range, and first start off with one loop, then nest another for loop inside it, and then another. That way you can solve it step by step.
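To see why -7 returns NULL on a 4-row table, Peek's row indexing can be approximated in Python (a rough model of the behavior, not Qlik itself):

```python
# Rough model of Qlik's Peek(field, row): a non-negative row counts
# from the first loaded record, a negative row counts back from the
# last one, and an out-of-range lookup yields NULL (None here).
def peek(values, row):
    try:
        return values[row]
    except IndexError:
        return None

result_1 = [10, 20, 30, 40]   # only 4 rows loaded

print(peek(result_1, -1))     # 40, the last loaded row
print(peek(result_1, -7))     # None: only 4 rows exist
```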

SSIS - Export table data to flat file in chunks

I have a requirement to export table data into flat files (.csv format), with each file containing only 5000 records. If the table has 10000 records it has to create 2 files with 5000 records each. The number of records in the table increases daily, so basically I am looking for a dynamic solution that exports any number of records into as many files as needed, 5000 records per file.
A simple visualization, assuming the table has 10230 records. What I need is:
File_1.csv - records 1 to 5000
File_2.csv - records 5001 to 10000
File_3.csv - records 10001 to 10230
I have tried the BCP command for the logic mentioned above. Can this be done using a Data Flow Task?
No, that is not something SSIS is going to support well natively.
A Script Task, or Script Component acting as a destination, could accomplish this but you'd be re-inventing a significant portion of the wheel with all the file handling required.
The first step would be to add a row number to all the rows coming from the source in a repeatable fashion. That could be as simple as SELECT *, ROW_NUMBER() OVER (ORDER BY MyTablesKey) AS RN FROM dbo.MyTable
Now that you have a monotonically increasing value associated with each row, you could use the referenced answer to pull the data in a given range if you take the ForEach approach.
If you could put a reasonable upper bound on how many buckets/files of data you'd ever have, then you could use some of the analytic functions to specify the size of your groupings. Then all of the data is fed into the data flow, and you have a conditional split with that upper bound's worth of output buffers heading to flat file destinations.
An alternative approach would be to export the file as is and then use something like PowerShell to split it into smaller units. Unix is nice here, as it has split as a native utility for just this sort of thing.
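The grouping the analytic-function approach relies on is just integer division of the row number. Sketched in Python (the bucket size and function name are illustrative):

```python
# Assign each row a bucket id so a conditional split can route
# rows 1..5000 to File_1, 5001..10000 to File_2, and so on.
BUCKET_SIZE = 5000

def bucket_id(row_number):
    # Equivalent in spirit to (ROW_NUMBER() - 1) / 5000 with
    # integer division in SQL.
    return (row_number - 1) // BUCKET_SIZE

print([bucket_id(rn) for rn in (1, 5000, 5001, 10000, 10230)])
# prints [0, 0, 1, 1, 2]
```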
Well, it can be done with standard SSIS components and SQL Server 2012+. The idea is the following: use SELECT ... ORDER BY ... OFFSET <Row offset> ROWS FETCH NEXT <Row count> ROWS ONLY as the bucket source, together with a For Loop container and a Flat File Destination with expressions.
More details:
Create a package with an int Iterator variable initialized to 0, and a Flat File Destination whose connection string is defined as an expression: "\Filename_" + (DT_WSTR,20)[User::Iterator] + ".csv". Also define a Bucket_Size variable or parameter as int.
Create a For Loop container. Leave its parameters empty for now; the next steps go inside the For Loop.
At the loop container level (or package level, up to you), create an SQL_rowcount string variable with the expression "SELECT count(*) FROM ... ORDER BY ... OFFSET " + (DT_WSTR,20)([User::Iterator]*[User::Bucket_Size]) + " ROWS". This command gives you the number of rows remaining from the current bucket onward.
Create an Execute SQL Task with the command from the SQL_rowcount variable, storing the single result in a variable Bucket_Rowcount.
Create a string variable SQL_bucket with the expression "SELECT ... FROM ... ORDER BY ... OFFSET " + (DT_WSTR,20)([User::Iterator]*[User::Bucket_Size]) + " ROWS FETCH NEXT " + (DT_WSTR,20)[User::Bucket_Size] + " ROWS ONLY".
Create a simple Data Flow Task: an OLE DB Source with the command from the SQL_bucket variable, and the Flat File Destination from step 1.
Now a little trick: we have to define the loop conditions. We base them on the current bucket's row count; the last bucket has no more than Bucket_Size rows. The continuation condition (checked before entering each iteration) is that the previous iteration had more than Bucket_Size rows, i.e. at least 1 row is left for the next iteration.
So, define the following properties on the For Loop container:
InitExpression - @Bucket_Rowcount = @Bucket_Size + 1
EvalExpression - @Bucket_Rowcount > @Bucket_Size
AssignExpression - @Iterator = @Iterator + 1
This is it.
You can optimize it if the source table is not modified during the export: first (before the For Loop) fetch the number of rows and figure out the number of buckets, and run exactly that number of iterations. That way you avoid the repeated SELECT COUNT(*) statements in the loop.
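The For Loop logic above can be simulated in a few lines of Python (file writing is replaced by collecting chunks in memory; the names are illustrative, not SSIS code):

```python
# Simulate the SSIS For Loop: iteration i exports rows
# [i*BUCKET_SIZE, (i+1)*BUCKET_SIZE) and stops after a short
# (final) bucket, mirroring the EvalExpression above.
BUCKET_SIZE = 5000

def export_in_buckets(rows):
    chunks = []
    iterator = 0
    while True:
        offset = iterator * BUCKET_SIZE             # OFFSET ... ROWS
        bucket = rows[offset:offset + BUCKET_SIZE]  # FETCH NEXT ... ROWS ONLY
        if not bucket:
            break
        chunks.append(bucket)                       # Filename_<iterator>.csv
        if len(bucket) < BUCKET_SIZE:               # last, short bucket
            break
        iterator += 1
    return chunks

chunks = export_in_buckets(list(range(10230)))
print([len(c) for c in chunks])                     # prints [5000, 5000, 230]
```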
