Failed to retrieve Second Generation Package Version info of ancestor - package

I created 3 versions (0.1.0.1, 0.1.0.2, 0.1.0.3) of a second-generation managed package.
I promoted 0.1.0.3 to released.
I created versions 0.1.0.4 - 0.1.0.6 with --skipancestorcheck.
I then made the following updates in the sfdx-project file:
I specified ancestorId in the sfdx-project file as the Id (04t......) of the released version 0.1.0.3.
I changed "versionName" from "ver 0.1" to "ver 0.2".
I changed "versionNumber" from "0.1.0.NEXT" to "0.2.0.NEXT".
I ran the command sfdx force:package:beta:version:create --package "Test App" --installationkey "XXX" --definitionfile config/project-scratch-def.json --wait 10 -c
and got this error:
"ERROR running force:package:version:create: An unexpected error occurred. Please include this ErrorId if you contact support: 636537585-177978 (-693775129)"
Salesforce Support found that this ErrorId corresponds to "java.lang.RuntimeException: Failed to retrieve Second Generation Package Version info of ancestor 05i3z000000fyk0AAA."
05i3z000000fyk0AAA is the Id of the Package2Version record for version 0.1.0.3 in my Dev Hub org.
I expected a new beta version of the package to be created, but instead I got the error above.
Has anyone seen something similar?
I am now stuck: I cannot delete a released version of a second-generation package, and I cannot create a new one. Do you have any idea how to get out of this?
I tried "ancestorVersion": "HIGHEST" and every other combination of ancestorId / ancestorVersion values I could think of, but nothing helped.
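For reference, the ancestorVersion variant I tried looked roughly like this (just a sketch; as I understand it, ancestorId and ancestorVersion should not both be set on the same package directory entry):
{
  "path": "force-app",
  "default": true,
  "package": "Test App Core",
  "versionName": "ver 0.2",
  "versionNumber": "0.2.0.NEXT",
  "ancestorVersion": "HIGHEST"
}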
My latest sfdx-project file (using ancestorId) is:
{
  "packageDirectories": [
    {
      "path": "force-app",
      "default": true,
      "package": "Test App Core",
      "versionName": "ver 0.2",
      "versionNumber": "0.2.0.NEXT",
      "ancestorId": "04t..........YAAQ"
    }
  ],
  "name": "test-salesforce-core",
  "namespace": "zenoo_app",
  "sfdcLoginUrl": "https://login.salesforce.com",
  "sourceApiVersion": "55.0",
  "packageAliases": {
    "Test App Core": "0Ho3........CAA",
    "Test App Core#0.1.0-1": "04t..............OAAQ",
    "Test App Core#0.1.0-2": "04t..............TAAQ",
    "Test App Core#0.1.0-3": "04t..............YAAQ"
  }
}
I am using sfdx-cli version 7.176

Related

Getting "The request to create a scratch org failed with error code: C-1033" when trying to create a scratch org?

When I try to create a scratch org as follows:
sfdx force:org:create -s -f config/project-scratch-def.json -a myscratchorg
I get the following error:
The request to create a scratch org failed with error code: C-1033
Sample scratch org definition:
{
  "orgName": "<Org name here>",
  "edition": "Enterprise",
  "features": []
}
I tried rerunning the builds.
This was addressed HERE, and the solution is to add the snippet below to your scratch org definition file. This will put you on the upcoming release, assuming that your Dev Hub is already upgraded to that release.
{
  ...
  "release": "preview"
  ...
}
If your Dev Hub is not on that release, replace "preview" with "previous".
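Putting that together with the sample definition above, a complete scratch org definition file would look roughly like this (a sketch; the org name is just the placeholder from the sample):
{
  "orgName": "<Org name here>",
  "edition": "Enterprise",
  "release": "preview",
  "features": []
}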

Uploading appx from electron builder to Windows Store giving Invalid package identity name ... (expected: XXXAppName)

I'm trying to upload an appx file generated by electron-builder to the Windows Store.
Unfortunately I'm now receiving the following errors:
Invalid package identity name: Teselagen.OpenVectorEditor (expected: 56560Teselagen.OpenVectorEditor)
Invalid package family name: Teselagen.OpenVectorEditor_6fpmqnhnq2nc4 (expected: 56560Teselagen.OpenVectorEditor_6fpmqnhnq2nc4)
I'm not sure where those numbers are coming from or why they would be expected. Here's what my electron-builder settings look like:
"build": {
  "appx": {
    "identityName": "Teselagen.OpenVectorEditor",
    "publisher": "CN=D373F92F-3481-433F-9DC5-0BE55DE5500D",
    "publisherDisplayName": "Teselagen",
    "applicationId": "OpenVectorEditor",
    "displayName": "OpenVectorEditor"
  },
  "win": {
    "target": "appx"
  }
}
Does anyone know how to get around this, or why those numbers would be expected? Thanks so much!
OK, after troubleshooting for quite a long time, the following finally worked for me:
"build": {
  "appx": {
    "identityName": "56560Teselagen.OpenVectorEditor", // changed to include the identity name the Store generated for me
    "publisher": "CN=D373F92F-3481-433F-9DC5-0BE55DE5500D",
    "publisherDisplayName": "Teselagen",
    "applicationId": "OpenVectorEditor", // must still be set explicitly, otherwise it defaults to identityName, which breaks because applicationId isn't allowed to start with numbers
    "displayName": "OpenVectorEditor"
  },
  "win": {
    "target": "appx"
  }
}
Originally I didn't realize that an identityName had been generated for me when I created a submission on the Windows developer page. You can find your identityName in your app's submission details there.

Automating the JX installation process via Ansible 2.9

I am trying to install Jenkins X version 2.0.785 via Ansible 2.9.9.
How do I handle prompts like "Please enter the name you wish to use with git:" that appear while installing JX? There are multiple prompts to handle when I execute the jx install command.
I get the above-mentioned prompt even though "--git-username=automation" is already passed in the jx install command. I tried both the expect and shell modules in Ansible.
Kindly suggest a way to handle these prompts via Ansible.
What I tried:
- name: Handling multiple prompts
  expect:
    command: jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins=true --provider=openshift
    responses:
      Question:
        - Please enter the name you wish to use with git: automation
    timeout: 60
- name: Handling multiple prompts
  expect:
    command: jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins=true --provider=openshift
    responses:
      Please enter the name you wish to use with git: "automation"
- name: Handling multiple prompts
  become: yes
  shell: |
    automation '' | jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix Testproject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins true --provider openshift
None of these gives any errors in the stderr section of the Ansible logs; the only thing is that I receive the logs below in red, and the installation does not proceed any further.
Output:
fatal: [master]: FAILED! => {
"changed": true,
"cmd": "jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip 167.254.204.90 --verbose --static-jenkins=true --provider=openshift --domain=jenkinsx.io",
"delta": "0:03:00.190343",
"end": "2020-06-17 06:44:03.620694",
"invocation": {
"module_args": {
"chdir": null,
"command": "jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip 167.254.204.90 --verbose --static-jenkins=true --provider=openshift --domain=jenkinsx.io",
"creates": null,
"echo": false,
"removes": null,
"responses": {
"Question": [
{
"Please enter the name you wish to use with git": "automation"
},
{
"Please enter the email address you wish to use with git": "automation#fujitsu.com"
},
{
"\\? Do you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server": "y"
},
{
"\\? Do you wish to use http://rtx-swtl-git.fnc.net.local as the pipelines Git server": "y"
}
]
},
"timeout": 180
}
},
"msg": "command exceeded timeout",
"rc": null,
"start": "2020-06-17 06:41:03.430351",
"stdout": "\u001b[1m\u001b[32m?\u001b[0m\u001b[0m \u001b[1mConfigured Jenkins installation type\u001b[0m: \u001b[36mStatic Jenkins Server and Jenkinsfiles\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: checking installation flags\r\n\u001b[36mDEBUG\u001b[0m: flags after checking - &{ConfigFile: InstallOnly:false Domain: ExposeControllerURLTemplate: ExposeControllerPathMode: AzureRegistrySubscription: DockerRegistry:docker-registry.default.svc:5000 DockerRegistryOrg: Provider:openshift VersionsRepository:https://github.com/jenkins-x/jenkins-x-versions.git VersionsGitRef: Version: LocalHelmRepoName:releases Namespace:jx CloudEnvRepository:https://github.com/jenkins-x/cloud-environments NoDefaultEnvironments:false RemoteEnvironments:false DefaultEnvironmentPrefix:TestProject LocalCloudEnvironment:false EnvironmentGitOwner: Timeout:6000 HelmTLS:false RegisterLocalHelmRepo:false CleanupTempFiles:true Prow:false DisableSetKubeContext:false Dir: Vault:false RecreateVaultBucket:true Tekton:false KnativeBuild:false BuildPackName: Kaniko:false GitOpsMode:false NoGitOpsEnvApply:false NoGitOpsEnvRepo:false NoGitOpsEnvSetup:false NoGitOpsVault:false NextGeneration:false StaticJenkins:true LongTermStorage:false LongTermStorageBucketName: CloudBeesDomain: CloudBeesAuth:}\r\n\u001b[36mDEBUG\u001b[0m: Setting the dev namespace to: \u001b[32mjx\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mnone\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\nContext \"jx/master-167-254-204-90-nip-io:8443/waruser\" modified.\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Storing the kubernetes provider openshift in the TeamSettings\r\n\u001b[36mDEBUG\u001b[0m: Enabling helm template mode in the TeamSettings\r\nGit configured for user: \u001b[32mautomation\u001b[0m and email \u001b[32mautomation#fujitsu.com\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using \u001b[32mhelm2\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Skipping \u001b[32mtiller\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mtemplate-mode\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Initialising Helm '\u001b[32minit --client-only\u001b[0m'\r\nhelm installed and configured\r\nNot installing ingress as using OpenShift which uses Route and its own mechanism of ingress\r\nEnabling anyuid for the Jenkins service account in namespace jx\r\nscc \"anyuid\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"hostaccess\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"privileged\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"anyuid\" added to: [\"system:serviceaccount:jx:default\"]\r\n\u001b[36mDEBUG\u001b[0m: Long Term Storage not supported by provider 'openshift', disabling this option\r\nSet up a Git username and API token to be able to perform CI/CD\r\n\u001b[36mDEBUG\u001b[0m: merging pipeline secrets with local secrets\r\n\u001b[0G\u001b[2K\u001b[1;92m? 
\u001b[0m\u001b[1;99mDo you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server: \u001b[0m\u001b[37m(Y/n) \u001b[0m\u001b[?25l\u001b7\u001b[999;999f\u001b[6n",
"stdout_lines": [
"\u001b[1m\u001b[32m?\u001b[0m\u001b[0m \u001b[1mConfigured Jenkins installation type\u001b[0m: \u001b[36mStatic Jenkins Server and Jenkinsfiles\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: checking installation flags",
"\u001b[36mDEBUG\u001b[0m: flags after checking - &{ConfigFile: InstallOnly:false Domain: ExposeControllerURLTemplate: ExposeControllerPathMode: AzureRegistrySubscription: DockerRegistry:docker-registry.default.svc:5000 DockerRegistryOrg: Provider:openshift VersionsRepository:https://github.com/jenkins-x/jenkins-x-versions.git VersionsGitRef: Version: LocalHelmRepoName:releases Namespace:jx CloudEnvRepository:https://github.com/jenkins-x/cloud-environments NoDefaultEnvironments:false RemoteEnvironments:false DefaultEnvironmentPrefix:TestProject LocalCloudEnvironment:false EnvironmentGitOwner: Timeout:6000 HelmTLS:false RegisterLocalHelmRepo:false CleanupTempFiles:true Prow:false DisableSetKubeContext:false Dir: Vault:false RecreateVaultBucket:true Tekton:false KnativeBuild:false BuildPackName: Kaniko:false GitOpsMode:false NoGitOpsEnvApply:false NoGitOpsEnvRepo:false NoGitOpsEnvSetup:false NoGitOpsVault:false NextGeneration:false StaticJenkins:true LongTermStorage:false LongTermStorageBucketName: CloudBeesDomain: CloudBeesAuth:}",
"\u001b[36mDEBUG\u001b[0m: Setting the dev namespace to: \u001b[32mjx\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mnone\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"Context \"jx/master-167-254-204-90-nip-io:8443/waruser\" modified.",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Storing the kubernetes provider openshift in the TeamSettings",
"\u001b[36mDEBUG\u001b[0m: Enabling helm template mode in the TeamSettings",
"Git configured for user: \u001b[32mautomation\u001b[0m and email \u001b[32mautomation#fujitsu.com\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using \u001b[32mhelm2\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Skipping \u001b[32mtiller\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mtemplate-mode\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Initialising Helm '\u001b[32minit --client-only\u001b[0m'",
"helm installed and configured",
"Not installing ingress as using OpenShift which uses Route and its own mechanism of ingress",
"Enabling anyuid for the Jenkins service account in namespace jx",
"scc \"anyuid\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"hostaccess\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"privileged\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"anyuid\" added to: [\"system:serviceaccount:jx:default\"]",
"\u001b[36mDEBUG\u001b[0m: Long Term Storage not supported by provider 'openshift', disabling this option",
"Set up a Git username and API token to be able to perform CI/CD",
"\u001b[36mDEBUG\u001b[0m: merging pipeline secrets with local secrets",
"\u001b[0G\u001b[2K\u001b[1;92m? \u001b[0m\u001b[1;99mDo you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server: \u001b[0m\u001b[37m(Y/n) \u001b[0m\u001b[?25l\u001b7\u001b[999;999f\u001b[6n"
]
}
PLAY RECAP *************************************************************************************************************************************************************
master : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Helm, JX, Git, and Ansible versions:

Training Job in Sagemaker gives error in locating file in S3 to docker image path

I am trying to use scikit_bring_your_own/container/decision_trees/train. Running it via the AWS CLI, I had no issues. When trying to replicate this by creating a SageMaker training job, I am facing an issue loading data from S3 into the Docker image path.
On the CLI we specified docker run -v $(pwd)/test_dir:/opt/ml --rm ${image} train, which determines where the input is read from.
In the training job, I specified the S3 bucket location and the output path for model artifacts.
The error raised in the exception comes from the train script (container/decision_trees/train):
raise ValueError(('There are no files in {}.\n' +
                  'This usually indicates that the channel ({}) was incorrectly specified,\n' +
                  'the data specification in S3 was incorrectly specified or the role specified\n' +
                  'does not have permission to access the data.').format(training_path, channel_name))
Traceback (most recent call last):
  File "/opt/program/train", line 55, in train
    'does not have permission to access the data.').format(training_path, channel_name))
So I am not sure whether any tweaking is required or some access is missing.
Kindly help.
If you set the InputDataConfig in the CreateTrainingJob API like this:
"InputDataConfig": [
  {
    "ChannelName": "train",
    "DataSource": {
      "S3DataSource": {
        "S3DataDistributionType": "FullyReplicated",
        "S3DataType": "S3Prefix",
        "S3Uri": "s3://<bucket>/a.csv"
      }
    },
    "InputMode": "File"
  },
  {
    "ChannelName": "eval",
    "DataSource": {
      "S3DataSource": {
        "S3DataDistributionType": "FullyReplicated",
        "S3DataType": "S3Prefix",
        "S3Uri": "s3://<bucket>/b.csv"
      }
    },
    "InputMode": "File"
  }
]
SageMaker downloads the data specified above from S3 into the /opt/ml/input/data/<channel_name> directory in the Docker container. In this case, the algorithm container should be able to find the input data under:
/opt/ml/input/data/train/a.csv
/opt/ml/input/data/eval/b.csv
You can find more details in https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms-training-algo.html
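For context, a minimal CreateTrainingJob request wrapping that InputDataConfig might look roughly like the sketch below; the job name, image URI, role ARN, bucket, and instance settings are placeholders rather than values from the question:
{
  "TrainingJobName": "decision-trees-example",
  "AlgorithmSpecification": {
    "TrainingImage": "<account>.dkr.ecr.<region>.amazonaws.com/<image>:latest",
    "TrainingInputMode": "File"
  },
  "RoleArn": "arn:aws:iam::<account>:role/<sagemaker-execution-role>",
  "InputDataConfig": [
    {
      "ChannelName": "train",
      "DataSource": {
        "S3DataSource": {
          "S3DataDistributionType": "FullyReplicated",
          "S3DataType": "S3Prefix",
          "S3Uri": "s3://<bucket>/a.csv"
        }
      },
      "InputMode": "File"
    }
  ],
  "OutputDataConfig": {
    "S3OutputPath": "s3://<bucket>/output"
  },
  "ResourceConfig": {
    "InstanceType": "ml.m5.large",
    "InstanceCount": 1,
    "VolumeSizeInGB": 10
  },
  "StoppingCondition": {
    "MaxRuntimeInSeconds": 3600
  }
}
With this configuration, data for the "train" channel is placed under /opt/ml/input/data/train inside the container; the ChannelName must match whatever channel_name the train script expects.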

Adding local dependency in Zeppelin Helium

I am creating a Zeppelin Helium visualization and need to add one local dependency. I am working with the Zeppelin 0.8 snapshot version.
I am not able to do it; I have tried adding it in the following manner. I tried using "*" for my module, and I also tried providing a relative path, without success.
My module has to be added locally.
{
  "name": "zeppelin_helium_xxx",
  "description": "xxx",
  "version": "1.0.0",
  "main": "heliumxxx",
  "author": "",
  "license": "Apache-2.0",
  "dependencies": {
    "mymodule": "*",
    "zeppelin-tabledata": "*",
    "zeppelin-vis": "*"
  }
}
Currently, Zeppelin doesn't support relative paths in the Helium JSON. You need to provide an absolute path in the artifact field.
Here is one example from https://github.com/1ambda/zeppelin-highcharts-columnrange/blob/master/zeppelin-highcharts-columnrange.json
{
  "type": "VISUALIZATION",
  "name": "zeppelin-highcharts-columnrange",
  "version": "local",
  "description": "Column range chart using highcharts library",
  "artifact": "/Users/lambda/github/1ambda/zeppelin-highcharts-columnrange",
  "icon": "<i class=\"fa fa-align-center\"></i>"
}
Additionally, there is a JIRA ticket for this issue.
https://issues.apache.org/jira/browse/ZEPPELIN-2097
And you might see a misleading error message when you load local Helium packages:
ERROR [2017-03-05 12:54:14,308] ({qtp1121647253-68} HeliumBundleFactory.java[buildBundle]:131) - Can't get module name and version of package zeppelin-markdown-spell
If you see that, check the artifact value again; it is probably invalid.
https://issues.apache.org/jira/browse/ZEPPELIN-2212
