How can I automate supplying a version in appxmanifest? - wpf

I am using the desktop bridge for a WPF desktop application and am looking to automate the creation of my msix packages during build. I do not want to store any version information in source control.
The WPF project in the solution uses the gitversion msbuild task to automatically infer the version for my executable every time a build is done.
Unfortunately, I'm not sure whether any similar mechanism exists for .appxmanifest.
My thinking is that it would be nice to have this integrated with the build process, similar to gitversion, but I haven't been able to find any documentation about what my options are during build or the Create App Packages process.
Perhaps there's some transform step during build that I'm not aware of that can be done to the .appxmanifest? Or maybe there's a way to have the version always reflect the version of the executable being bundled?
(MSDN forums question)

You should modify the .appxmanifest file in your build pipeline before you create the package. After all, it's just a text-based XML file.
If you are using Azure Pipelines, you could accomplish this using a PowerShell task and a counter variable that gets incremented for each build:
pool:
  vmImage: vs2017-win2016

variables:
  buildPlatform: 'x86'
  buildConfiguration: 'release'
  major: 1
  minor: 0
  build: 0
  revision: $[counter('rev', 0)]

steps:
- powershell: |
    [Reflection.Assembly]::LoadWithPartialName("System.Xml.Linq")
    $path = "Msix/Package.appxmanifest"
    $doc = [System.Xml.Linq.XDocument]::Load($path)
    $xName = [System.Xml.Linq.XName]"{http://schemas.microsoft.com/appx/manifest/foundation/windows10}Identity"
    $doc.Root.Element($xName).Attribute("Version").Value = "$(major).$(minor).$(build).$(revision)"
    $doc.Save($path)
  displayName: 'Version Package Manifest'
Then add your Build, Package and Sign steps.
Please refer to this MSDN Magazine article for more information and a complete example of how to set up continuous integration (CI), continuous deployment (CD) and automatic updates of sideloaded MSIX packaged WPF applications using Azure Pipelines.

You must update the manifest before packaging. Check out this sample, which includes a PowerShell script that pokes the XML with the version provided by GitVersion: https://github.com/microsoft/devops-for-windows-apps/blob/master/azure-pipelines.yml#L72
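That script boils down to something like the following sketch (the manifest path and version value are placeholders; in the linked sample the version comes from GitVersion's output variables):

# Sketch only: stamp a GitVersion-style version into the package manifest.
# ManifestPath and Version are placeholders - wire them to your pipeline.
param(
    [string]$ManifestPath = "Msix/Package.appxmanifest",
    [string]$Version = "1.2.3.0"   # e.g. GitVersion's MajorMinorPatch plus ".0"
)
[xml]$manifest = Get-Content $ManifestPath
# Package/Identity/@Version is the attribute MSIX reads.
$manifest.Package.Identity.Version = $Version
$manifest.Save((Resolve-Path $ManifestPath).Path)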

Related

How to deploy SQL Server Express on Docker Desktop Kubernetes

I've been studying "Kubernetes Up and Running" by Hightower et al (first edition) Chapter 13 where they discussed creating a Reliable MySQL Singleton (Since I just discovered that there is a second edition, I guess I'll be buying it soon).
Using their MySQL reliable singleton example as a model, I've been looking for some sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.
Apparently I need YAML files to deploy
Persistent Volume
Volume claim (should this be NFS?)
SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton).
I've tried this example but I'm confused because it does not contain a persistent volume & claim and it does not work. I get the error
Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"
Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).
Wed Jul 15 2020 Update
I left out the "-n namespace" for the helm install command (possibly because I'm using Helm v3 and you are using Helm v2?).
That install command still did not work. Then I did a
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now this command works:
helm install todo-app-database stable/mssql-linux
Progress!
When I do a "k get pods" I see that my todo-app-mssql-linux database is in the pending state. So I did a
kubectl get events
and I see
Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
I've been google searching for "Kubernetes insufficient memory" and can find no match.
I suspect this is a problem specific to "Docker Desktop Kubernetes".
When I look at the output for
helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command).
So I do
kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
I edit the yaml file to change 2Gi to 1Gi.
kubectl apply -f todo-app-database-mssql-linux.yaml
I get this error:
error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
Hmm... that did not work. I try delete:
kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
I get this error:
error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
So I try apply:
kubectl apply -f todo-app-database-mssql-linux.yaml
Same error!
Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?
Thank you
Siegfried
Short answer
https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml
Detailed Answer
Docker Desktop already comes with a default StorageClass.
This storage class is responsible for auto-provisioning a PV whenever you create a PVC.
If you have a YAML definition of a PVC (persistent volume claim), just leave storageClassName empty so that the default is used.
k get storageclass
NAME                 PROVISIONER          AGE
hostpath (default)   docker.io/hostpath   11d
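For illustration, a minimal PVC that relies on that default class could look like this (the claim name and size are placeholders):

# Minimal PVC sketch: storageClassName is omitted on purpose,
# so the default class (hostpath on Docker Desktop) provisions the volume.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data        # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi        # placeholder size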
This is fair enough, as the Docker Desktop cluster is a single-node cluster. So if your DB crashes and the cluster brings it up again, it will not move to another node, simply because you only have one node :)
Now, should you write the PVC YAML from scratch?
No, you don't need to, because Helm should be your best friend.
(I explain below why you should use Helm; it doesn't require a steep learning curve.)
Fortunately, the community provides a chart called stable/mssql-linux.
Let's run it together:
helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
If you want to check the YAML (namely the PVC) that Helm computed, you can run template instead of install:
helm -n <your-namespace> template todo-app-database stable/mssql-linux
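Incidentally, the "Insufficient memory" scheduling failure from the update above can likely be fixed without hand-editing any YAML: most charts let you override resource requests at install time. (As an aside, kubectl describe produces human-readable text rather than valid YAML, which is why re-applying that file failed; kubectl get deployment ... -o yaml is the round-trippable form.) A sketch, assuming stable/mssql-linux exposes the usual resources values (confirm the exact keys with helm show values stable/mssql-linux):

helm -n <your-namespace> install todo-app-database stable/mssql-linux --set resources.requests.memory=1Gi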
Why did I answer with Helm?
Writing YAML from scratch means reinventing a wheel that others have already built.
The most efficient way is to reuse what the community has prepared for you.
However, you may ask: how can I reuse what others have done?
That's where Helm comes in.
Helm is your installer for any application on top of Kubernetes, regardless of how much YAML your app requires.
Install it now and hit the ground running: choco install kubernetes-helm

Artifactory Generic Download - VSTS Task Failing

I have set up a very basic task in a VSTS build definition with the following simple steps and objective:
Setup and successfully test an endpoint to our Artifactory repository.
Implement a VSTS "Artifactory Generic Download" Task to retrieve a single jar file from the Artifactory repository.
Drop the jar file into the staging directory of the build agent.
The file spec source, based on an example from the JFrog website (www.jfrog.com) and set up as a Task Configuration, is very basic and is depicted below:
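(The original screenshot is not preserved; a generic download file spec of that shape looks roughly like this, with the repository name, path pattern, and target as placeholders:)

{
  "files": [
    {
      "pattern": "list/com/example/my-library-*.jar",
      "target": "$(Build.ArtifactStagingDirectory)/"
    }
  ]
}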
Unfortunately, triggering this build job fails horribly with the below error and I simply can't figure out why it is failing. Would appreciate some help on this.
It seems that no artifacts were found and the task fails due to the configured "Fail task if no dependencies were downloaded" flag. If you wish to change this behavior, you can uncheck the flag in your task configuration.
As for the artifacts not being downloaded, make sure a repository called "list" exists and that a jar file matching the provided pattern exists in it.
More information about file-specs can be found here.

Add build number to package version with `dotnet pack` in VSTS Build process

With a .NET Framework library you could specify a version with a wildcard, and the NuGet pack command would append the build date and version automatically when running a NuGet build task in VSTS.
[assembly: AssemblyVersion("1.0.*")]
NuGet pack would generate a .nupkg file with a version like 1.0.6604.1234, appending a date-derived number and a build ID.
.NET Standard issues
In .NET Core and .NET Standard, the new .csproj format does not support this wildcard format.
We can't package with NuGet.exe (reason: this issue), but we can use dotnet pack, except I need to auto-increment the build numbers. The dotnet build task in VSTS allows me to wholly replace the version number, but I want to retain the version in the csproj file and just append a build number (as I used to).
I found that using <VersionPrefix>x.y</VersionPrefix> in the csproj file would work with nuget pack and I could then add the additional parameter VersionSuffix=$(Build.BuildNumber) to the pack task.
All looked good until the first dev updated the project version in the project properties dialog: Visual Studio ignored the VersionPrefix and set the <Version> tag, and the build number fix is ignored because a Version tag exists.
Is there a way to read the Version from the csproj? If so I could set the build property to Version=$(ProjectVersion).$(Build.BuildNumber) ?
Or are there alternative ways to handle auto-incrementing the build version when packaging?
First, you can select "Use an environment variable" for automatic package versioning and use a variable you defined, such as temp ($(Build.BuildNumber)), as the environment variable.
More details take a look at this link: Dotnet pack automatic package versioning build number clarification
Another way is to use the "arguments" field in the dotnet CLI task to pass additional arguments to the dotnet CLI.
Using --version-suffix $(Build.BuildNumber) will pass the build number as the version suffix. Make sure you don't have a <Version> element set in your csproj, but rather a <VersionPrefix> element. The built version will look like versionprefix-versionsuffix; for example, if you have <VersionPrefix>1.2.3</VersionPrefix> and build number 201805002, the built version will be 1.2.3-201805002. In this case, do not select automatic package versioning.
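A quick sketch of that setup (version numbers and package id are placeholders):

<!-- .csproj excerpt: VersionPrefix instead of Version, so a suffix can be appended -->
<PropertyGroup>
  <VersionPrefix>1.2.3</VersionPrefix>
</PropertyGroup>

dotnet pack --version-suffix 201805002
# produces MyLib.1.2.3-201805002.nupkg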
Thanks to @patricklu-msft for his suggestions.
It appears there is no built-in way to emulate the wildcard behaviour we previously had with NuGet pack when using dotnet pack, nor is there a way to get the <Version> tag out of the project file.
So I've created a new VSTS build task that does this: VersionTaskReader in the Marketplace.
This extension can be pointed to a .csproj or .vbproj file and will set an environment variable VERSION, plus VERSION_BUILD, which has the BUILDID appended. You can optionally add a prefix to make each instance different if needed.
For example, if your project contains <Version>1.1</Version>, then VERSION_BUILD would be something like 1.1.8680.
Then the dotnet pack task can use the environment variable VERSION_BUILD in the versioning options screen, so that the build number automatically increments.
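In YAML form, the pack step then looks something like this (input names follow the DotNetCoreCLI@2 task; verify them against your task version):

- task: DotNetCoreCLI@2
  displayName: 'dotnet pack (version from VERSION_BUILD)'
  inputs:
    command: pack
    packagesToPack: '**/*.csproj'
    versioningScheme: byEnvVar
    versionEnvVar: VERSION_BUILD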

KNIME Command Line Execution - ClassNotFoundException

I'd like to schedule a KNIME workflow. The workflow does its job very well as long as I start it from the KNIME GUI application. When I execute the same workflow via the command line, Java complains that com.microsoft.sqlserver.jdbc.SQLServerDriver could not be found (ClassNotFoundException).
I invoke it via:
"D:\Progamme\KNIME\knime.exe" -nosplash -application -consoleLog org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow"
Since the error message signals missing content in the Java CLASSPATH, I also tried to add the parameters
-vmargs -classpath .;"absolutepathto/sqljdbc42.jar"
But Java still slaps me with the same error...
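(A side note that may be relevant: KNIME is Eclipse-based, and the Eclipse launcher expects -vmargs to be the last argument on the command line; everything after it is passed to the JVM and replaces the -vmargs section of knime.ini. Even then, OSGi plug-ins generally do not load classes from the system classpath, which would explain why -classpath has no effect here. For completeness, an invocation with VM arguments would be shaped like this, where -Dexample.option is a placeholder:)

"D:\Progamme\KNIME\knime.exe" -nosplash -consoleLog -application org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow" -vmargs -Dexample.option=value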
I also tried to run the command from within knime.exe's directory, and I also tried to add the JAR file to Preferences -> Java -> Build Path -> Classpath Variable / User Libraries (referenced via the -preferences argument). But that had no effect.
Did anybody face the same problems? Maybe with other third party JARs?
It is all about a Database Connector node that is configured like this (configuration screenshot not preserved):
Does the integrated security option maybe cause a misleading error?
System spec: KNIME 3.2.2 on Windows Server 2008 R2
Update - extract from preferences file
/configuration/org.eclipse.core.net/org.eclipse.core.net.hasMigrated=true
/configuration/org.eclipse.ui.ide/MAX_RECENT_WORKSPACES=10
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES=<list of some workspaces>
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES_PROTOCOL=3
/configuration/org.eclipse.ui.ide/SHOW_RECENT_WORKSPACES=false
/configuration/org.eclipse.ui.ide/SHOW_WORKSPACE_SELECTION_DIALOG=true
Is there maybe a problem due to the fact that it is a shared KNIME instance among several users and the command line execution does not know which workspace has to be chosen? Is the workspace somehow needed and why?
Partial Solution:
I finally managed it, but I don't know exactly why it works now. What I did was load a fresh portable version of KNIME and run the same commands, only changing the executable path to the new portable version. Before that, I started the portable version once to set the workspace directory and to register the database driver in the preferences dialog and the .ini file, nothing else; the configuration is otherwise the same as the shared KNIME instance. What I am really wondering about is that, from now on, the commands also work with the shared KNIME instance. I really don't know what caused the change that lets KNIME find the driver class.
Info
Because I encountered a few more problems with the shared environment in KNIME command-line mode that led to nondeterministic execution results, I wrote a little .NET library. It gives me more flexibility and control over the workflow execution (which return codes and error messages occurred, and so on). You can find it here if you're interested: KnimeNet
I took a very minimal approach:
cd "C:\Program Files\KNIME"
.\knime -nosplash -noexit -consoleLog -reset -application org.knime.product.KNIME_BATCH_APPLICATION -workflowFile="D:\Work\Knime Workflows\Output\CMD_Test.knwf" -preferences="D:\Work\Knime Workflows\Output\CMD_Test.epf"

Supplying build info as qx.core.Environment entries

I have my qooxdoo project built and deployed by a CI server. Upon build, the server generates build info (version, VCS revision, CI build number, timestamp) that I would like to be passed to my qooxdoo app as qx.core.Environment keys.
At the moment, I have CI server generate a build.json file which is packaged together with the application, loaded at startup and converted to environment keys (by application code). This costs us an extra XHR.
On the other hand, I know that environment entries can be supplied during build, via config.json. Of course our build system could preprocess config.json to fill in environment entries, but I'm a bit skeptical about the idea of the CI server fiddling with config.json. Is there any better solution? Is it possible to make the generator script read environment entries from some auxiliary source?
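(For reference, the config.json route mentioned above is a job-level "environment" map; the key names and placeholder values here are illustrative:)

"jobs" : {
  "build" : {
    "environment" : {
      "myapp.version"  : "#VERSION#",
      "myapp.revision" : "#REVISION#"
    }
  }
}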
I would write a #VERSION# tag into my script and at the end of the build process just search and replace this string in the compiled js file.
perl -i -p -e 's/#VERSION#/0.3.0/g' build/script/hello.js
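(On the application side, the entry is then read as usual; "myapp.version" is a placeholder key:)

// Returns whatever the build supplied, e.g. "0.3.0" after the replacement above.
var version = qx.core.Environment.get("myapp.version");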
