Let us assume my psake default.ps1 looks like this:
properties { $dllsToMerge = @("x.dll", "y.dll") }
task ilmerge {
    exec { & ilmerge -dll $dllsToMerge -out single.dll }
}
Now, when running this task, I would sometimes like to change which assemblies get merged. For example:
invoke-psake ilmerge -properties @{"dllsToMerge"="@("x.dll","y.dll","z.dll")"}
For some reason the above command does not work, as psake is not able to parse the string as an array. Is there any way to do this without manual parsing and string-to-array conversion?
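One thing worth trying (a sketch only, not verified against every psake version): pass the hashtable value as a real PowerShell array rather than a quoted string, so there is nothing left for psake to parse:

# the value is an actual array object, not the string "@(...)"
invoke-psake ilmerge -properties @{ dllsToMerge = @("x.dll", "y.dll", "z.dll") }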
From a previous response, I extracted a list of tuples with the following regex check:
.check(regex(""""idSc":(.{1,8}),"pasTemps":."codePasTemps":(.),"""").ofType[(String,String)].findAll.saveAs ("OBJECTS1"))
So I get my object:
OBJECTS1 -> List((1657751,2), (1658105,2), (4557378,2), (1657750,1), (916,1), (917,2), (1658068,1), (1658069,2), (4557379,2), (1658082,1), (4557367,1), (4557368,1), (1660865,2), (1660866,2), (1658122,1), (921,1), (922,2), (923,2), (1660875,1), (1660876,2), (1660877,2), (1658300,1), (1658301,1), (1658302,1), (1658309,1), (1658310,1), (2996562,1), (4638455,1))
After that I do a foreach and need to extract each couple to use it in the next requests, so we tried:
.foreach("${OBJECTS1}", "couple") {
  exec(http("request_foreach47")
    .get("/ctr/web/api/seriegraph/bydates/${couple(0)}/${couple(1)}/1552863600000/1554191743799")
    .headers(headers_27))
}
But I get the message: named 'couple' does not support index access
I also thought that using two regexes on the couple to extract both parts could work, but I haven't found any way to use a regex on a session variable. (Even if it's not needed for this case, I'm really interested to learn how, as it could be useful.)
I would be really thankful if you could provide me some help. (I'm using Gatling 2 but can't use a more recent version, as this is for work and other scripts have been developed with Gatling 2.)
each "couple" is a scala tuple which can't be indexed into like a collection. Fortunately the gatling EL has a function that handles tuples.
so instead of
.get("/ctr/web/api/seriegraph/bydates/${couple(0)}/${couple(1)}/1552863600000/1554191743799")
you can use
.get("/ctr/web/api/seriegraph/bydates/${couple._1}/${couple._2}/1552863600000/1554191743799")
I have the following values in my Hiera YAML file:
test::config_php::php_modules :
-'soap'
-'mcrypt'
-'pdo'
-'mbstring'
-'php-process'
-'pecl-memcache'
-'devel'
-'php-gd'
-'pear'
-'mysql'
-'xml'
And the following is my test class:
class test::config_php (
  $php_version,
  $php_modules = hiera_hash('php_modules', {}),
  $module_name,
) {
  class { 'php':
    version => $php_version,
  }
  $php_modules.each |String $php_module| {
    php::module { $php_module: }
  }
}
While running my puppet manifests I get the following error:
Error: Evaluation Error: Error while evaluating a Function Call, create_resources(): second argument must be a hash at /tmp/vagrant-puppet/modules-f38a037289f9864906c44863800dbacf/ssh/manifests/init.pp:46:3 on node testdays-1a.vagrant.loc.vag
I am quite confused about what exactly I am doing wrong. My Puppet version is 3.6.2 and I also have parser = future set.
I would really appreciate any help here.
Looks like your YAML was slightly off.
You don't really need quotes in YAML.
Your indentation was two instead of one.
The colon on the first line had a space before it; this will throw a syntax error.
It should look more like this:
test::config_php::php_modules:
- soap
- mcrypt
- pdo
- mbstring
- php-process
- pecl-memcache
- devel
- php-gd
- pear
- mysql
- xml
In the future, try validating your YAML with an online YAML parser first.
The problem was with my Puppet version; somehow version 3.6 acts weird while creating resources. For instance, it was failing on the following line:
create_resources('::ssh::client::config::user', $fin_users_client_options)
The code snippet above is part of the ssh module from Puppet Labs, which I assume is thoroughly tested and shouldn't be the reason for the exception.
Further analysis showed that the exception was thrown when the parameter parser = future was set in the config file.
I cannot iterate using each without setting future as the parser, therefore I decided to change my source as follows:
I created a new class:
define test::install_modules {
  php::module { $name: }
}
and then I changed the class config_php to:
class test::config_php (
  $php_version,
  $php_modules = [],
) {
  class { 'php':
    version => $php_version,
  }
  install_modules { $php_modules: }
}
Everything seems to be much better now.
Camel 2.13.0
I am attempting to consume a json string containing multiple records and produce an output file for each record.
public void configure() {
    from("file:data/input")
        // my bean splits the input string and returns a json string via Gson
        .bean(new com.camel.Tokenizer(), "getSentences")
        .split(new JsonPathExpression("$[*]"))
        .convertBodyTo(String.class)
        .to("file:data/output");
}
Sample input:
"This is string #1. This is string #2."
If I run the code above as-is I get a single output file containing "This is string #2.". If I remove the .split() call I get a single output file containing json as expected:
[
{
"token": "This is string #1."
},
{
"token": "This is string #2."
}
]
How can I achieve two output files representing both lines of data?
It occurred to me that perhaps the split was working correctly and the second output file was overwriting the first. According to the documentation, the default behavior if CamelFileName is not set is to create a unique generated ID but I do not experience this. In my case the output file name always matches the input file name.
How can I get unique file name within each folder?
Thanks!
Finally stumbled upon the proper search terms and came across the following helpful post: Camel: Splitting a collection and writing to files
The trick is to use a bit of logic in the .to() to achieve unique output file names:
.to("file:data/sentence_q?fileName=${header.CamelSplitIndex}.txt");
The JsonPathExpression works like a charm, and there is no need for a Processor or unmarshal(), as I'd tried previously.
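For reference, here is the question's route with that change folded in (a sketch only; it keeps the original bean and applies the fileName option to the question's data/output directory rather than the answer's sentence_q endpoint):

public void configure() {
    from("file:data/input")
        .bean(new com.camel.Tokenizer(), "getSentences")
        .split(new JsonPathExpression("$[*]"))
        .convertBodyTo(String.class)
        // CamelSplitIndex is set by the splitter, so each part gets its own file: 0.txt, 1.txt, ...
        .to("file:data/output?fileName=${header.CamelSplitIndex}.txt");
}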
I'm trying to put data in my graph DB using Neo4j. I'm new to the field and I don't find it easy to use the batch import tool that Michael Hunger wrote.
My goal is to generate at least 10000 nodes with just one property set. So I wrote a python script that generates 10000 lines of Cypher queries like "CREATE (:label{ number : '3796142470'})".
I put them in the console and execute them but I get this exception:
StackTrace:
scala.collection.immutable.List.take(List.scala:84)
org.neo4j.cypher.internal.compiler.v2_0.ast.SingleQuery.checkOrder(Query.scala:33)
Am I doing something wrong? In case the only way to generate those nodes is to use a batch/REST API, could you suggest an easier way to do it?
Change:
CREATE (:label{ number : '3796142470'})
to look like:
CREATE (n1:Label { number : '3796142470'})
So you are following convention:
CREATE (n:Person { name : 'Andres', title : 'Developer' })
Put them into a file (say import.txt) and then:
bin/neo4j-shell -file import.txt
See http://blog.neo4j.org/2014/01/importing-data-to-neo4j-spreadsheet-way.html for more details.
I've done a few searches here, and while some issues are similar, they don't seem to be exactly what I need.
What I'm trying to do is import an Excel file into a SQL table via SSIS, but the problem is that I will never know the exact filename. We get files at no steady interval, and the file usually has a date/month in the name. For instance, our current file is "Census Data - May 2013.xls". We will only ever load ONE file at a time, so I don't need to loop through a directory for multiple Excel files.
My concept is that I can take this file, copy it to a "Loading" directory, and load it from there. At the start of the package, I will first clear out the loading directory, then scan the original directory for an Excel file, copy it to the loading directory and then load it into SQL. I suppose I may have to store the file names somewhere so I don't copy the same file into the loading directory in subsequent months, but I'm not really sure of the best way to handle that.
I've pretty much got everything down except the part that scans the directory for the Excel file and copies it to the loading directory. I've taken the majority of my info from this page, which (again) is close to what I want to do but not quite exactly the solution I need.
Can anyone get me over the finish line? I can't seem to get the Excel Connection Manager right (this is my first time using variables), and I can't figure out how to get the file into the Loading directory.
Problem statement
How do I dynamically identify a file name?
You will require some mechanism to inspect the contents of a folder and see what exists. Specifically, you are looking for an Excel file in your "Loading" directory. You know the file extension and that is it.
Resolution A
Use a ForEach File Enumerator.
Configure the Enumerator with an Expression on FileSpec of *.xls or *.xlsx depending on which flavor of Excel you're dealing with.
Add another Expression on Directory to be your Loading directory.
I typically create SSIS Variables named FolderInput and FileMask and assign those in the Enumerator.
Now when you run your package, the Enumerator is going to look in Directory and find all the files that match the FileSpec.
Something needs to be done with what is found. You need to use the file name that the Enumerator returns. That's done through the Variable Mappings tab. I created a third Variable called CurrentFileName and assigned it the results of the Enumerator.
If you put a Script Task inside the ForEach Enumerator, you should be able to see that the value in the "Locals" window for @[User::CurrentFileName] has updated from the design-time value of whatever to the "real" file name.
Resolution B
Use a Script Task.
You will still need to create a Variable to hold the current file name and it probably won't hurt to also have the FolderInput and FileMask Variables available. Set the former as ReadWrite and the latter as ReadOnly variables.
Choose the .NET language of your choice; I'm using C#. The method System.IO.Directory.EnumerateFiles does the heavy lifting here.
using System;
using System.Data;
using System.IO;
using Microsoft.SqlServer.Dts.Runtime;
using System.Windows.Forms;

namespace ST_fe2ea536a97842b1a760b271f190721e
{
    [Microsoft.SqlServer.Dts.Tasks.ScriptTask.SSISScriptTaskEntryPointAttribute]
    public partial class ScriptMain : Microsoft.SqlServer.Dts.Tasks.ScriptTask.VSTARTScriptObjectModelBase
    {
        public void Main()
        {
            string folderInput = Dts.Variables["User::FolderInput"].Value.ToString();
            string fileMask = Dts.Variables["User::FileMask"].Value.ToString();
            try
            {
                var files = Directory.EnumerateFiles(folderInput, fileMask, SearchOption.AllDirectories);
                foreach (string currentFile in files)
                {
                    Dts.Variables["User::CurrentFileName"].Value = currentFile;
                    break;
                }
            }
            catch (Exception e)
            {
                Dts.Events.FireError(0, "Script overkill", e.ToString(), string.Empty, 0);
            }
            Dts.TaskResult = (int)ScriptResults.Success;
        }

        enum ScriptResults
        {
            Success = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Success,
            Failure = Microsoft.SqlServer.Dts.Runtime.DTSExecResult.Failure
        };
    }
}
Decision tree
Given the two resolutions to the above problem, how do you choose? Normally, people say "it depends", but the only time it would really depend here is if the process should stop or error out when more than one file exists in the Loading folder. That's a case where the ForEach Enumerator would be more cumbersome than a Script Task. Otherwise, as I stated in my original response, it adds cost to your project for development, testing and maintenance for no appreciable gain.
Bits and bobs
Further addressing nuances in the question: configuring Excel - you'll need to be more specific about what isn't working. Both Siva's SO answer and the linked blogspot article show how to use the value of the Variable I call CurrentFileName to ensure the Excel file is pointing to the "right" file.
You will need to set DelayValidation to True for both the Connection Manager and the Data Flow, as the design-time value for the Variable will not be valid when the package begins execution. See this answer for a longer explanation, but again, Siva called that out in their SO answer.