Is there a way to implement a post-build manual trigger like in https://wiki.jenkins-ci.org/display/JENKINS/Delivery+Pipeline+Plugin using Pipeline?
Using "input" unfortunately is not appropriate in this case.
Something like
stage "Testing stuff"
node("slave") {
    git credentialsId: "blahblahblah", url: "git@github.com:blah/blah.git"
    sh """testing something"""
}
stage "Building stuff"
node("slave") {
    git credentialsId: "blahblahblah", url: "git@github.com:blah/blah.git"
    sh """building stuff"""
}
input message: "Deploy?"
stage "Deploying stuff"
node("slave") {
    sh """deploying stuff"""
}
When I try and create a scratch org as follows...
sfdx force:org:create -s -f config/project-scratch-def.json -a myscratchorg
I am getting the following error:
The request to create a scratch org failed with error code: C-1033
Sample Scratch Org Definition
{
"orgName": "<Org name here>",
"edition": "Enterprise",
"features": []
}
I tried rerunning the builds.
This was addressed HERE; the solution for you would be to add the below to your config file. This will put you on the upcoming release, assuming that your Dev Hub is already upgraded to that release.
{
...
"release": "preview"
...
}
If your Dev Hub is not yet on the upcoming release, replace "preview" with "previous".
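Putting it together, the scratch org definition with the release override would look like this (the org name is still a placeholder, as in the sample above):

```json
{
  "orgName": "<Org name here>",
  "edition": "Enterprise",
  "features": [],
  "release": "preview"
}
```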
I am trying to install Jenkins X version 2.0.785 via Ansible 2.9.9.
How do I handle prompts like "Please enter the name you wish to use with git:" that I get while installing JX? There are multiple prompts to handle when I execute the jx install command.
I get the above-mentioned prompt even though "--git-username=automation" is already passed in the jx install command. I tried both the expect and the shell module in Ansible.
Kindly suggest a solution with which I can handle these prompts via Ansible.
What I have tried:
- name: Handling multiple prompts
  expect:
    command: jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins=true --provider=openshift
    responses:
      Question:
        - "Please enter the name you wish to use with git": automation
    timeout: 60

- name: Handling multiple prompts
  expect:
    command: jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins=true --provider=openshift
    responses:
      "Please enter the name you wish to use with git": "automation"

- name: Handling multiple prompts
  become: yes
  shell: |
    automation '' | jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix Testproject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins true --provider openshift
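For reference, the Ansible expect module treats each key under responses as a Python regular expression matched against the command's output, and the error below ("command exceeded timeout") suggests the 180-second default is too short for jx install. A rough sketch of such a task follows; the prompt patterns, the placeholder email, and the 600-second timeout are my assumptions, not a verified fix:

```yaml
- name: Handle jx install prompts
  expect:
    # Same jx install command as in the attempts above
    command: jx install ...
    responses:
      # Keys are regexes matched against output, so escape metacharacters like ( )
      'Please enter the name you wish to use with git': automation
      'Please enter the email address you wish to use with git': automation@example.com
      'Do you wish to use .* as the local Git user .* \(Y/n\)': 'y'
      'Do you wish to use .* as the pipelines Git server .* \(Y/n\)': 'y'
    # jx install can take several minutes, so allow more than the 180s default
    timeout: 600
```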
These do not give any errors in the stderr section of the Ansible logs; the only thing is that I receive the logs attached below in red, and the installation does not proceed further.
Output:
fatal: [master]: FAILED! => {
"changed": true,
"cmd": "jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip 167.254.204.90 --verbose --static-jenkins=true --provider=openshift --domain=jenkinsx.io",
"delta": "0:03:00.190343",
"end": "2020-06-17 06:44:03.620694",
"invocation": {
"module_args": {
"chdir": null,
"command": "jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip 167.254.204.90 --verbose --static-jenkins=true --provider=openshift --domain=jenkinsx.io",
"creates": null,
"echo": false,
"removes": null,
"responses": {
"Question": [
{
"Please enter the name you wish to use with git": "automation"
},
{
"Please enter the email address you wish to use with git": "automation#fujitsu.com"
},
{
"\\? Do you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server": "y"
},
{
"\\? Do you wish to use http://rtx-swtl-git.fnc.net.local as the pipelines Git server": "y"
}
]
},
"timeout": 180
}
},
"msg": "command exceeded timeout",
"rc": null,
"start": "2020-06-17 06:41:03.430351",
"stdout": "\u001b[1m\u001b[32m?\u001b[0m\u001b[0m \u001b[1mConfigured Jenkins installation type\u001b[0m: \u001b[36mStatic Jenkins Server and Jenkinsfiles\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: checking installation flags\r\n\u001b[36mDEBUG\u001b[0m: flags after checking - &{ConfigFile: InstallOnly:false Domain: ExposeControllerURLTemplate: ExposeControllerPathMode: AzureRegistrySubscription: DockerRegistry:docker-registry.default.svc:5000 DockerRegistryOrg: Provider:openshift VersionsRepository:https://github.com/jenkins-x/jenkins-x-versions.git VersionsGitRef: Version: LocalHelmRepoName:releases Namespace:jx CloudEnvRepository:https://github.com/jenkins-x/cloud-environments NoDefaultEnvironments:false RemoteEnvironments:false DefaultEnvironmentPrefix:TestProject LocalCloudEnvironment:false EnvironmentGitOwner: Timeout:6000 HelmTLS:false RegisterLocalHelmRepo:false CleanupTempFiles:true Prow:false DisableSetKubeContext:false Dir: Vault:false RecreateVaultBucket:true Tekton:false KnativeBuild:false BuildPackName: Kaniko:false GitOpsMode:false NoGitOpsEnvApply:false NoGitOpsEnvRepo:false NoGitOpsEnvSetup:false NoGitOpsVault:false NextGeneration:false StaticJenkins:true LongTermStorage:false LongTermStorageBucketName: CloudBeesDomain: CloudBeesAuth:}\r\n\u001b[36mDEBUG\u001b[0m: Setting the dev namespace to: \u001b[32mjx\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mnone\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\nContext \"jx/master-167-254-204-90-nip-io:8443/waruser\" modified.\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at 
\u001b[32m/usr/bin/kubectl\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Storing the kubernetes provider openshift in the TeamSettings\r\n\u001b[36mDEBUG\u001b[0m: Enabling helm template mode in the TeamSettings\r\nGit configured for user: \u001b[32mautomation\u001b[0m and email \u001b[32mautomation#fujitsu.com\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using \u001b[32mhelm2\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Skipping \u001b[32mtiller\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mtemplate-mode\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Initialising Helm '\u001b[32minit --client-only\u001b[0m'\r\nhelm installed and configured\r\nNot installing ingress as using OpenShift which uses Route and its own mechanism of ingress\r\nEnabling anyuid for the Jenkins service account in namespace jx\r\nscc \"anyuid\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"hostaccess\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"privileged\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"anyuid\" added to: [\"system:serviceaccount:jx:default\"]\r\n\u001b[36mDEBUG\u001b[0m: Long Term Storage not supported by provider 'openshift', disabling this option\r\nSet up a Git username and API token to be able to perform CI/CD\r\n\u001b[36mDEBUG\u001b[0m: merging pipeline secrets with local secrets\r\n\u001b[0G\u001b[2K\u001b[1;92m? \u001b[0m\u001b[1;99mDo you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server: \u001b[0m\u001b[37m(Y/n) \u001b[0m\u001b[?25l\u001b7\u001b[999;999f\u001b[6n",
"stdout_lines": [
"\u001b[1m\u001b[32m?\u001b[0m\u001b[0m \u001b[1mConfigured Jenkins installation type\u001b[0m: \u001b[36mStatic Jenkins Server and Jenkinsfiles\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: checking installation flags",
"\u001b[36mDEBUG\u001b[0m: flags after checking - &{ConfigFile: InstallOnly:false Domain: ExposeControllerURLTemplate: ExposeControllerPathMode: AzureRegistrySubscription: DockerRegistry:docker-registry.default.svc:5000 DockerRegistryOrg: Provider:openshift VersionsRepository:https://github.com/jenkins-x/jenkins-x-versions.git VersionsGitRef: Version: LocalHelmRepoName:releases Namespace:jx CloudEnvRepository:https://github.com/jenkins-x/cloud-environments NoDefaultEnvironments:false RemoteEnvironments:false DefaultEnvironmentPrefix:TestProject LocalCloudEnvironment:false EnvironmentGitOwner: Timeout:6000 HelmTLS:false RegisterLocalHelmRepo:false CleanupTempFiles:true Prow:false DisableSetKubeContext:false Dir: Vault:false RecreateVaultBucket:true Tekton:false KnativeBuild:false BuildPackName: Kaniko:false GitOpsMode:false NoGitOpsEnvApply:false NoGitOpsEnvRepo:false NoGitOpsEnvSetup:false NoGitOpsVault:false NextGeneration:false StaticJenkins:true LongTermStorage:false LongTermStorageBucketName: CloudBeesDomain: CloudBeesAuth:}",
"\u001b[36mDEBUG\u001b[0m: Setting the dev namespace to: \u001b[32mjx\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mnone\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"Context \"jx/master-167-254-204-90-nip-io:8443/waruser\" modified.",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Storing the kubernetes provider openshift in the TeamSettings",
"\u001b[36mDEBUG\u001b[0m: Enabling helm template mode in the TeamSettings",
"Git configured for user: \u001b[32mautomation\u001b[0m and email \u001b[32mautomation#fujitsu.com\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using \u001b[32mhelm2\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Skipping \u001b[32mtiller\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mtemplate-mode\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Initialising Helm '\u001b[32minit --client-only\u001b[0m'",
"helm installed and configured",
"Not installing ingress as using OpenShift which uses Route and its own mechanism of ingress",
"Enabling anyuid for the Jenkins service account in namespace jx",
"scc \"anyuid\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"hostaccess\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"privileged\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"anyuid\" added to: [\"system:serviceaccount:jx:default\"]",
"\u001b[36mDEBUG\u001b[0m: Long Term Storage not supported by provider 'openshift', disabling this option",
"Set up a Git username and API token to be able to perform CI/CD",
"\u001b[36mDEBUG\u001b[0m: merging pipeline secrets with local secrets",
"\u001b[0G\u001b[2K\u001b[1;92m? \u001b[0m\u001b[1;99mDo you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server: \u001b[0m\u001b[37m(Y/n) \u001b[0m\u001b[?25l\u001b7\u001b[999;999f\u001b[6n"
]
}
PLAY RECAP *************************************************************************************************************************************************************
master : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Helm, JX, Git, and Ansible versions:
I am trying to use the scikit_bring_your_own example's container/decision_trees/train script. Running it via the AWS CLI, I had no issues. Trying to replicate it by creating a SageMaker training job, I am facing an issue loading data from S3 into the Docker image path.
In the CLI command we used to specify docker run -v $(pwd)/test_dir:/opt/ml --rm ${image} train, from which the input needs to be referred.
In the training job, I specified the S3 bucket location and the output path for model artifacts.
The error raised in the exception in container/decision_trees/train:
raise ValueError(('There are no files in {}.\n' +
'This usually indicates that the channel ({}) was incorrectly specified,\n' +
'the data specification in S3 was incorrectly specified or the role specified\n' +
'does not have permission to access the data.').format(training_path, channel_name))
Traceback (most recent call last):
File "/opt/program/train", line 55, in train
'does not have permission to access the data.').format(training_path, channel_name))
So I do not understand whether any tweaking is required or whether some access is missing. Kindly help.
If you set the InputDataConfig in the CreateTrainingJob API like this:
"InputDataConfig": [
{
"ChannelName": "train",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "s3://<bucket>/a.csv"
}
},
"InputMode": "File"
},
{
"ChannelName": "eval",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "s3://<bucket>/b.csv"
}
},
"InputMode": "File"
}
]
then SageMaker downloads the data specified above from S3 to the /opt/ml/input/data/<channel_name> directory in the Docker container. In this case, the algorithm container should be able to find the input data under:
/opt/ml/input/data/train/a.csv
/opt/ml/input/data/eval/b.csv
You can find more details at https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html
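The check that raises the error in the question can be reproduced in a few lines: the script simply lists the channel directory that SageMaker populated from S3. A minimal sketch (the helper name is mine; the paths follow the convention above):

```python
import os

def find_channel_files(channel_name, base_dir="/opt/ml/input/data"):
    """List the files SageMaker copied from S3 for one input channel."""
    training_path = os.path.join(base_dir, channel_name)
    input_files = [os.path.join(training_path, f) for f in os.listdir(training_path)]
    if len(input_files) == 0:
        # Same condition the sample's train script guards against
        raise ValueError(
            "There are no files in {}. This usually indicates that the "
            "channel ({}) was incorrectly specified.".format(training_path, channel_name)
        )
    return input_files
```

With the InputDataConfig above, find_channel_files("train") should return ["/opt/ml/input/data/train/a.csv"] inside the container.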
I have a ServiceBuswithQueue ARM template with the following output section:
"outputs": {
"serviceBusNamespaceName": {
"type": "string",
"value": "[parameters('serviceBusNamespaceName')]"
},
"namespaceConnectionString": {
"type": "string",
"value": "[listkeys(variables('authRuleResourceId'), variables('sbVersion')).primaryConnectionString]"
},
"sharedAccessPolicyPrimaryKey": {
"type": "string",
"value": "[listkeys(variables('authRuleResourceId'), variables('sbVersion')).primaryKey]"
},
"serviceBusQueueName": {
"type": "string",
"value": "[parameters('serviceBusQueueName')]"
}
}
For that I created Continuous Integration (CI) and Continuous Deployment (CD) pipelines in VSTS. In CD I used the PowerShell task to deploy the above ARM template. But I want to pass an output of this ARM template, such as "$(serviceBusQueueName)", as an input parameter to the next ARM template in Continuous Deployment.
I know the above scenario can be achieved using the ARM Outputs task between two ARM tasks in Continuous Deployment, but I don't want that because I am currently using the PowerShell task to deploy the ARM template.
Before posting this question, I researched and found the following links, but they were not helpful in resolving my issue.
Azure ARM templates - using the output of other deployments
How do I use ARM 'outputs' values in another release task?
Can anyone please suggest how to resolve the above issue?
You can capture the deployment outputs in your script and expose each one as a release variable with the ##vso[task.setvariable] logging command; subsequent tasks can then reference them, e.g. as $(serviceBusQueueName), to override the parameters of the next template.
Set the variables in the deployment script:
# Start the deployment
Write-Host "Starting deployment...";
$outputs = New-AzureRmResourceGroupDeployment -ResourceGroupName $resourceGroupName -Mode Incremental -TemplateFile $templateFilePath -TemplateParameterFile $parametersFilePath;
foreach ($key in $outputs.Outputs.Keys) {
    $type  = $outputs.Outputs.Item($key).Type
    $value = $outputs.Outputs.Item($key).Value
    # Expose the deployment output as a pipeline variable for subsequent tasks
    Write-Host "##vso[task.setvariable variable=$key;]$value"
}
You can display all the environment variables in a subsequent script:
Write-Host "Environment variables:"
gci env:* | sort-object name
Somebody recommended a baseline in my stream. How do I find out who recommended it? I can only see who created it, but I did not get any info about who recommended it. Is there any specific command to see the history of the baseline/stream/view, etc.?
I don't think that metadata is recorded.
You can check for the policies attached to the UCM project though:
POLICY_CHSTREAM_UNRESTRICTED
If it is not set, that means only the owner of the UCM project can change the recommended baseline on a stream.
Otherwise, as suggested below, you would need to catch and record that event yourself through a trigger.
In the era of ClearCase 7.x (this could have changed with CC8), this was done with a pre-op trigger on chstream, but it had to deal with two kinds of interaction:
CLI (cleartool chstream -recommended)
GUI (through the "recommend baseline" contextual menu)
See for instance this thread:
a recommend_bls trigger, with an early-out check if the chstream was not for a baseline recommend:
if ($ENV{CLEARCASE_CMDLINE}) {
# chstream run from the command line, check for a "-recommended" option
if ($ENV{CLEARCASE_CMDLINE} =~ /-recommend /) {
$msg->D( "this is a chstream to recommend a baseline",
"CLEARCASE_CMDLINE is: <$ENV{CLEARCASE_CMDLINE}>",
"trigger proceeding...",
);
}
else {
$msg->D( "EARLY OUT - this chstream command does not include a
baseline recommend:",
"CLEARCASE_CMDLINE is: <$ENV{CLEARCASE_CMDLINE}>",
);
exit 1;
}
}
else {
# chstream was run from the gui, must look at event records to
# determine if the command was a baseline recommend or
# some other change to the stream
my $lshist_rc = qx($CT lshist -minor -last 1 -me stream:$ENV{CLEARCASE_STREAM});
if ($?) {
# error in the lshist command, report trigger error
my @args = ("$CP proceed -type error -default abort -mask abort -newline -prompt \"***RECOMMEND_BL Trigger Version: $VERSION***\n\n<lshist> cmd failed on
stream:$ENV{CLEARCASE_STREAM}.\nResults:\n$lshist_rc\nPlease send a
screen print of this error to your ClearCase admin.\" -prefer_gui");
system(@args);
$msg->D( "Processing aborted - lshist command failed!",
"$lshist_rc"
);
exit 11;
}
chomp($lshist_rc);
# check latest stream event record to see if the chstream was
# a baseline recommend or some other change to the stream.
# a baseline recommend will have an event record of the form:
# "Set activity process variable named "UCM_STREAM_RECBLS".
if ($lshist_rc =~ /UCM_STREAM_RECBLS/) {
$msg->D( "this is a chstream to recommend a baseline",
"latest event record on stream is:",
"$lshist_rc",
"trigger proceeding...",
);
}
else {
$msg->D( "EARLY OUT - this chstream command did not include a baseline recommend:",
"latest event record on stream is:",
"$lshist_rc",
);
exit 1;
}
}