I am new to Azure Pipelines, so I apologize if this seems rudimentary.
I created a build pipeline in Azure DevOps that runs successfully, but when I click into the artifact it produced, all of the files that were in the bin/Release folder are present except the .exe. Does anyone know why this may be? Is my YAML structured correctly for my WPF application?
trigger:
- main

pool:
  vmImage: 'windows-latest'

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'

steps:
- task: NuGetToolInstaller@1

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

- task: VSBuild@1
  inputs:
    solution: '$(solution)'
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: VSTest@2
  inputs:
    platform: '$(buildPlatform)'
    configuration: '$(buildConfiguration)'

- task: CopyFiles@2
  displayName: 'Copy setup to artifact'
  inputs:
    SourceFolder: 'bin\Release'
    Contents: '**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: drop'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'Install Package'
    publishLocation: 'Container'
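For what it's worth, one common cause of a missing .exe is that the CopyFiles step's SourceFolder ('bin\Release') is resolved relative to the agent's default working directory, which may not be where MSBuild actually wrote the output. A hedged sketch of a variant that matches the output by wildcard instead, so the project's location doesn't matter (the paths here are assumptions, not taken from the question):

```yaml
# Sketch: copy everything under any bin/Release folder in the sources,
# regardless of which subdirectory the project sits in.
- task: CopyFiles@2
  displayName: 'Copy build output to artifact staging'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**/bin/Release/**'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'
```

If the artifact then contains the .exe, the original relative path was the problem.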
On a related note, if I wanted this installer to be available as a link on, say, a SharePoint site, how would I go about setting up the CD pipeline?
I am trying to download various artefacts for a Confluent version using the get_url module. I am behind a proxy, and below is my playbook.
I have to supply proxy information for one of the downloads, but not for the other. I am trying to work out how to determine which tasks need proxy details defined and which should not have them. I got a certificate-verification error when I added proxy information to the second task.
Is there a way to avoid setting that information in the task for the first download as well?
tasks:
  - name: Download Confluent enterprise version
    get_url:
      url: https://packages.confluent.io/archive/7.0/confluent-7.0.7.tar.gz
      dest: /export/home/svcuser/tmp
      use_proxy: yes
    register: showconfluentdlstatus
    environment:
      http_proxy: http://myuserid:mypassword@proxy.prudential.com:8080/
      https_proxy: https://myuserid:mypassword@proxy.prudential.com:8080/

  - name: show confluent enterprise download status
    debug: var=showconfluentdlstatus

  - name: uncompress confluent enterprise
    unarchive:
      src: /export/home/svcuser/tmp/confluent-7.0.7.tar.gz
      dest: /export/home/svcuser/tmp/confluent_7.0.7/
    register: unarchiveconfluentstatus

  - name: show unarchive confluent status
    debug: var=unarchiveconfluentstatus

  - name: Download Confluent playbook for same version as enterprise confluent version
    # Proxy doesn't seem to be needed for this
    get_url:
      url: https://github.com/confluentinc/cp-ansible/archive/refs/heads/7.0.7-post.zip
      dest: /export/home/svcuser/tmp
    register: showconfluentplaybookdlstatus

  - name: show confluent playbook for same version as enterprise confluent version download status
    debug: var=showconfluentplaybookdlstatus

  - name: uncompress playbook for same version as enterprise confluent version
    unarchive:
      src: /export/home/svcuser/tmp/cp-ansible-7.0.7-post.zip
      dest: /export/home/svcuser/tmp/confluent_7.0.7/
    register: unarchiveconfluentplaybookstatus

  - name: show unarchive confluent playbook for same version as enterprise confluent version status
    debug: var=unarchiveconfluentplaybookstatus
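One way to avoid repeating proxy settings per task is to set the environment once at play level; get_url honors http_proxy/https_proxy from the environment, and a no_proxy entry exempts hosts that are reachable directly. A minimal sketch under those assumptions (the hosts value and the choice of github.com for no_proxy are illustrative, not taken from the question):

```yaml
# Sketch: play-level proxy environment so individual tasks need no proxy config.
- hosts: all
  environment:
    http_proxy: http://myuserid:mypassword@proxy.prudential.com:8080/
    https_proxy: https://myuserid:mypassword@proxy.prudential.com:8080/
    no_proxy: github.com   # hosts that must be reached directly, bypassing the proxy
  tasks:
    - name: Download Confluent enterprise version
      get_url:
        url: https://packages.confluent.io/archive/7.0/confluent-7.0.7.tar.gz
        dest: /export/home/svcuser/tmp
      register: showconfluentdlstatus
```

Tasks that must not use the proxy can also set use_proxy: no individually instead of relying on no_proxy.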
I am new to Azure. I made a setup on the Azure portal with GitHub and a .yml workflow. The pipeline runs successfully in the build phase but fails in the deployment phase.
Here is my workflow file.
on:
  push:
    branches: ["master"]
  workflow_dispatch:

env:
  AZURE_WEBAPP_NAME: wep-app-name # set this to your application's name
  AZURE_WEBAPP_PACKAGE_PATH: "." # set this to the path to your web app project, defaults to the repository root
  NODE_VERSION: "16.15.0" # set this to the node version to use
  NODE_OPTIONS: "--max-old-space-size=8192"

permissions:
  contents: read

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Set up Node.js
        uses: actions/setup-node@v3
        with:
          node-version: ${{ env.NODE_VERSION }}
          cache: "npm"

      #- name: yarn install, build, and test
      #  run: |
      #    node --max_old_space_size=8192
      #    yarn
      #    yarn run build

      # zip artifact
      - name: Zip artifact for deployment
        run: zip release.zip ./build/* -r # compress the files and folders in the build folder
                                          # into release.zip with the Linux zip command

      - name: Upload artifact for deployment job
        uses: actions/upload-artifact@v3
        with:
          name: node-app
          path: build

  deploy:
    permissions:
      contents: none
    runs-on: ubuntu-latest
    needs: build
    environment:
      name: "Development"
      url: ${{ steps.deploy-to-webapp.outputs.webapp-url }}
    steps:
      - name: Download artifact from build job
        uses: actions/download-artifact@v3
        with:
          name: node-app
          path: .

      - name: "Deploy to Azure WebApp"
        uses: azure/webapps-deploy@v2
        id: deploy-to-webapp
        with:
          app-name: "app-name"
          slot-name: 'Production'
          publish-profile: ${{ secrets.AZUREAPPSERVICE_PUBLISHPROFILE_B2318A4382DB625 }}
          package: .
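One thing that stands out in the workflow: the zip step creates release.zip, but the upload step publishes the build folder, so the deploy job never receives the archive that was just created. Also note the commented-out build step means no build folder exists at zip time. A sketch that keeps the three steps consistent (assuming a standard npm build that emits a build folder):

```yaml
# Sketch: build, zip, and upload must agree on paths.
- name: npm install and build
  run: |
    npm install
    npm run build

- name: Zip artifact for deployment
  run: zip -r release.zip ./build   # archive the build output

- name: Upload artifact for deployment job
  uses: actions/upload-artifact@v3
  with:
    name: node-app
    path: release.zip               # upload the zip, not the raw folder
```

The deploy job's download-artifact step would then receive release.zip, matching package: . on azure/webapps-deploy.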
I just deployed an Eclipse Che environment on my microk8s server and it works great with the sample devfiles. Now I want to use my own repo with a custom devfile.
But every time I try to start the environment, I get the following error message: Container tools has state CrashLoopBackOff.
This only happens with the custom devfile.yaml, not with the default one. The problem is that I need a more recent version of Golang, so I need a different file.
This is the devfile.yaml
schemaVersion: 2.1.0
metadata:
  name: kubernetes-image-version-checker
components:
  - name: tools
    container:
      image: quay.io/devfile/golang:latest
      env:
        - name: GOPATH
          value: /projects:/home/user/go
        - name: GOCACHE
          value: /tmp/.cache
      memoryLimit: 2Gi
      mountSources: true
      command: ['/checode/entrypoint-volume.sh']
projects:
  - name: kubernetes-image-version-checker
    git:
      remotes:
        origin: "https://gitlab.imanuel.dev/DerKnerd/kubernetes-image-version-checker.git"
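A CrashLoopBackOff usually means the container's command exits (or fails to start) immediately. Here the likely suspect is the explicit command: ['/checode/entrypoint-volume.sh'] override: that script is injected by Che itself, and hard-coding it in a custom component can fail if it isn't present when the container starts. A minimal sketch of the same devfile with the override removed, assuming Che injects its own editor entrypoint as it does for the sample devfiles:

```yaml
# Sketch: same tools component, but let Che supply the container entrypoint.
schemaVersion: 2.1.0
metadata:
  name: kubernetes-image-version-checker
components:
  - name: tools
    container:
      image: quay.io/devfile/golang:latest
      env:
        - name: GOPATH
          value: /projects:/home/user/go
        - name: GOCACHE
          value: /tmp/.cache
      memoryLimit: 2Gi
      mountSources: true
projects:
  - name: kubernetes-image-version-checker
    git:
      remotes:
        origin: "https://gitlab.imanuel.dev/DerKnerd/kubernetes-image-version-checker.git"
```

Comparing against the default Golang sample devfile (which sets no command on the tools container) is a quick way to confirm whether the override is the difference that matters.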
I am experimenting in a small lab created with AutomatedLab that contains Windows Server 2022 machines running Active Directory and SQL Server, along with CentOS 8.5 machines running a Kubernetes cluster. My test application is a .NET 6 console application that simply connects to a SQL Server database running in the lab over a trusted connection. It is containerized based on the official aspnet:6.0 image. The Kubernetes pod contains an init container that executes kinit to generate a Kerberos token placed in a shared volume. I have made two versions of the test application: one uses an OdbcConnection to connect to the database, and the other uses a SqlConnection. The version with the OdbcConnection successfully connects to the database, but the one using the SqlConnection crashes when opening the connection.
Here is the code of the application using the OdbcConnection:
using (var connection =
    new OdbcConnection(
        "Driver={ODBC Driver 17 for SQL Server};Server=sql1.contoso.com,1433;Database=KubeDemo;Trusted_Connection=Yes;"))
{
    Log.Information("connection created");
    var command = new OdbcCommand
        ("select * from KubeDemo.dbo.Test", connection);
    connection.Open();
    Log.Information("Connection opened");
    using (var reader = command.ExecuteReader())
    {
        Log.Information("Read");
        while (reader.Read())
        {
            Console.WriteLine($"{reader[0]}");
        }
    }
}
The logs of the container show that it can connect to the database and read its content
[16:24:35 INF] Starting the application
[16:24:35 INF] connection created
[16:24:35 INF] Connection opened
[16:24:35 INF] Read
1
Here is the code of the application using the SqlConnection:
using (var connection =
    new SqlConnection(
        "Server=sql1.contoso.com,1433;Initial Catalog=KubeDemo;Integrated Security=True;"))
{
    Log.Information("connection created");
    var command = new SqlCommand
        ("select * from KubeDemo.dbo.Test", connection);
    connection.Open();
    Log.Information("Connection opened");
    using (var reader = command.ExecuteReader())
    {
        Log.Information("Read");
        while (reader.Read())
        {
            Console.WriteLine($"{reader[0]}");
        }
    }
}
The container crashes, based on the log when the connection is being opened:
[16:29:58 INF] Starting the application
[16:29:58 INF] connection created
I have deployed the Kubernetes pod with a command "tail -f /dev/null" so that I could execute the application manually and I get an extra line:
[16:29:58 INF] Starting the application
[16:29:58 INF] connection created
Segmentation fault (core dumped)
According to Google, this is a C/C++ error message indicating an attempt to access memory the process is not allowed to touch. Unfortunately, I have no idea how to work around it. Does anyone have an idea how to get this to work?
To be complete, here is the Dockerfile for the containerized application
FROM mcr.microsoft.com/dotnet/aspnet:6.0
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get install curl gnupg2 -y
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/debian/11/prod.list > /etc/apt/sources.list.d/mssql-release.list
RUN apt-get update
RUN ACCEPT_EULA=Y apt-get install --assume-yes --no-install-recommends --allow-unauthenticated unixodbc msodbcsql17 mssql-tools
RUN apt-get remove curl gnupg2 -y
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bash_profile
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
WORKDIR /app
EXPOSE 80
COPY ./ .
ENTRYPOINT ["dotnet", "DbTest.dll"]
And the POD Helm template:
apiVersion: v1
kind: Pod
metadata:
name: dbtest
labels:
app: test
spec:
restartPolicy: Never
volumes:
- name: kbr5-cache
emptyDir:
medium: Memory
- name: keytab-dir
secret:
secretName: back01-keytab
defaultMode: 0444
- name: krb5-conf
configMap:
name: krb5-conf
defaultMode: 0444
initContainers:
- name: kerberos-init
image: gambyseb/private:kerberos-init-0.2.0
imagePullPolicy: {{ .Values.image.pullPolicy }}
securityContext:
allowPrivilegeEscalation: false
privileged: false
readOnlyRootFilesystem: true
env:
- name: KRB5_CONFIG
value: /krb5
volumeMounts:
- name: kbr5-cache
mountPath: /dev/shm
- name: keytab-dir
mountPath: /keytab
- name: krb5-conf
mountPath: /krb5
containers:
- name: dbtest
image: {{ .Values.image.repository }}:DbTest-{{ .Chart.AppVersion }}
imagePullPolicy: {{ .Values.image.pullPolicy }}
env:
- name: ASPNETCORE_ENVIRONMENT
value: "{{ .Values.environment.ASPNETCORE }}"
- name: KRB5_CONFIG
value: /krb5
{{/* command:*/}}
{{/* - "tail"*/}}
{{/* - "-f"*/}}
{{/* - "/dev/null"*/}}
securityContext:
allowPrivilegeEscalation: true
privileged: true
ports:
- containerPort: 80
volumeMounts:
- name: kbr5-cache
mountPath: /dev/shm
- name: krb5-conf
mountPath: /krb5
- name: keytab-dir
mountPath: /keytab
{{/* - name: kerberos-refresh*/}}
{{/* image: gambyseb/private:kerberos-refresh-0.1.0*/}}
{{/* imagePullPolicy: {{ .Values.image.pullPolicy }}*/}}
{{/* env:*/}}
{{/* - name: KRB5_CONFIG*/}}
{{/* value: /krb5*/}}
{{/* volumeMounts:*/}}
{{/* - name: kbr5-cache*/}}
{{/* mountPath: /dev/shm*/}}
{{/* - name: keytab-dir*/}}
{{/* mountPath: /keytab*/}}
{{/* - name: krb5-conf*/}}
{{/* mountPath: /krb5*/}}
imagePullSecrets:
- name: {{ .Values.image.pullSecret }}
This may not be auth related.
If you are deploying to a Linux container, you need to make sure you don't deploy System.Data.SqlClient, as it is a Windows-only library. It will just blow up your container (as you are experiencing) when it first loads the library.
I found that even though I had added Microsoft.Data.SqlClient, it wasn't what shipped; I think I was leaving Dapper or EF to pull in the dependency, and it went into the release as System.Data.SqlClient. As the container blew up in AWS, I had very little feedback as to the cause!
See https://devblogs.microsoft.com/dotnet/introducing-the-new-microsoftdatasqlclient/
I'm trying to deploy a CRA + Craco React application via Azure DevOps. This is my YML file:
# Node.js React Web App to Linux on Azure
# Build a Node.js React app and deploy it to Azure as a Linux web app.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
- master

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '{REDACTED FOR SO}'
  # Web app name
  webAppName: 'frontend'
  # Environment name
  environmentName: 'public'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureRmWebAppDeployment@4
            displayName: 'Azure App Service Deploy: '
            inputs:
              ConnectionType: 'AzureRM'
              azureSubscription: 'My Subscription'
              appType: 'webAppLinux'
              WebAppName: 'frontend'
              packageForLinux: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
              RuntimeStack: 'NODE|10.10'
              StartupCommand: 'npm run start'
              ScriptType: 'Inline Script'
              InlineScript: |
                npm install
                npm run build --if-present
The build tasks succeeds. However, the deployment fails after running for ~20 minutes, with the following error:
Starting: Azure App Service Deploy:
==============================================================================
Task : Azure App Service deploy
Description : Deploy to Azure App Service a web, mobile, or API app using Docker, Java, .NET, .NET Core, Node.js, PHP, Python, or Ruby
Version : 4.198.0
Author : Microsoft Corporation
Help : https://aka.ms/azureappservicetroubleshooting
==============================================================================
Got service connection details for Azure App Service:'frontend'
Package deployment using ZIP Deploy initiated.
Deploy logs can be viewed at https://{MYAPPSERVICENAME}.scm.azurewebsites.net/api/deployments/62cf55c3f1434309b71a8334b2696fc9/log
Successfully deployed web package to App Service.
Trying to update App Service Application settings. Data: {"SCM_COMMAND_IDLE_TIMEOUT":"1800"}
App Service Application settings are already present.
Executing given script on Kudu service.
##[error]Error: Unable to run the script on Kudu Service. Error: Error: Unable to fetch script status due to timeout. You can increase the timeout limit by setting 'appservicedeploy.retrytimeout' variable to number of minutes required.
Successfully updated deployment History at https://{MYAPPSERVICENAME}.scm.azurewebsites.net/api/deployments/3641645137779498
App Service Application URL: http://{MYAPPSERVICENAME}.azurewebsites.net
Finishing: Azure App Service Deploy:
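The error text itself points at one knob: the task polls Kudu for the inline script's status and gives up after a default timeout, which a pipeline variable can raise. A sketch of that variable, taken from the task's own hint (the value of 40 minutes is an assumption, not a recommendation):

```yaml
# Sketch: extend how long AzureRmWebAppDeployment waits for the Kudu script.
variables:
  appservicedeploy.retrytimeout: 40  # minutes, per the task's error message hint
```

That said, running npm install on the App Service via the inline script is slow by nature; the accepted fix below avoids it entirely by building before deployment.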
This YML solved my issues, along with upgrading the Azure App Service Node version to 14.*.
What I did was move npm install and npm run build into the build stage and remove them from the deployment inline script.
So the package is ready before deployment; after a successful unzip, the app is started with pm2 serve /home/site/wwwroot/build --no-daemon --spa, since it runs on a Linux App Service. (This works if your build directory is within wwwroot; if not, please update the path accordingly.)
# Node.js React Web App to Linux on Azure
# Build a Node.js React app and deploy it to Azure as a Linux web app.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
- develop

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: 'YOUR-SUBSCRIPTION'
  # Web app name
  webAppName: 'AZURE_APP_NAME'
  # Environment name
  environmentName: 'APP_ENV_NAME'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '14.x'
      displayName: 'Install Node.js'
    - script: |
        npm install
      displayName: 'npm install'
    - script: |
        npm run build
      displayName: 'npm build'
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureRmWebAppDeployment@4
            displayName: 'Azure App Service Deploy: poultry-web-test'
            inputs:
              azureSubscription: $(azureSubscription)
              appType: webAppLinux
              WebAppName: $(webAppName)
              packageForLinux: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
              RuntimeStack: 'NODE|14-lts'
              StartupCommand: 'pm2 serve /home/site/wwwroot/build --no-daemon --spa'