Drone CI: publish generated LaTeX PDF (continuous-deployment)

Currently I am using Travis, but I want to switch to Drone.
For all TeX documents I'm using a small Makefile with a container to generate my PDF file and deploy it to my repository.
But since I'm using Gitea, I want to set up my integration pipeline with Drone, and I don't know how to configure the .drone.yml to deploy my PDF file as a release on every tag.
I'm currently using the following .drone.yml, and I am happy to say that the build process works fine at the moment.
clone:
  git:
    image: plugins/git
    tags: true

pipeline:
  pdf:
    image: volkerraschek/docker-latex:latest
    pull: true
    commands:
      - make
and this is my Makefile
# Docker Image
IMAGE := volkerraschek/docker-latex:latest

# Input tex-file and output pdf-file
FILE     := index
TEX_NAME := ${FILE}.tex
PDF_NAME := ${FILE}.pdf

latexmk:
	latexmk \
		-shell-escape \
		-synctex=1 \
		-interaction=nonstopmode \
		-file-line-error \
		-pdf ${TEX_NAME}

docker-latexmk:
	docker run \
		--rm \
		--user="$(shell id -u):$(shell id -g)" \
		--net="none" \
		--volume="${PWD}:/data" ${IMAGE} \
		make latexmk
Which tags and conditions are missing in my .drone.yml to deploy my index.pdf as a release in Gitea when I push a new git tag?
Volker

I have this setup on my Gitea/Drone pair. This is an MWE of my .drone.yml:
pipeline:
  build:
    image: tianon/latex
    commands:
      - pdflatex <filename.tex>
  gitea_release:
    image: plugins/gitea-release
    base_url: <gitea domain>
    secrets: [gitea_token]
    files: <filename.pdf>
    when:
      event: tag
So rather than running Docker from the Makefile, we add a pipeline step that uses a Docker image with LaTeX to compile the PDF, and a second step to publish the release.
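Adapted to the setup in the question, the two pieces combine to something like the following sketch (untested; <gitea domain> is still a placeholder for your Gitea URL):

pipeline:
  pdf:
    image: volkerraschek/docker-latex:latest
    pull: true
    commands:
      - make latexmk
  gitea_release:
    image: plugins/gitea-release
    base_url: <gitea domain>
    secrets: [gitea_token]
    files: index.pdf
    when:
      event: tag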
You'll also have to set your Drone repo to trigger builds on tags and set a Gitea API token to use. To set the API token, you can use the command line interface:
$ drone secret add <org/repo> --name gitea_token --value <token value> --image plugins/gitea-release
You can set up the drone repo to trigger builds in the repository settings in the web UI.
Note that you'll also likely have to allow *.pdf attachments in your Gitea settings, as they are disallowed by default. In your Gitea app.ini, add this to the attachment section:
[attachment]
ENABLED = true
PATH = /data/gitea/attachments
MAX_SIZE = 10
ALLOWED_TYPES = */*
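If you would rather not open up every attachment type, a narrower setting should also work (my assumption, not part of the original answer): allow only the PDF MIME type.

[attachment]
ENABLED = true
ALLOWED_TYPES = application/pdf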

In addition to Gabe's answer, if you are using an NGINX reverse proxy, you might also have to allow larger file uploads in your nginx.conf. (This applies to all file types, not just .pdf)
server {
    [ ... ]
    location / {
        client_max_body_size 10M; # add this line
        proxy_pass http://gitea:3000;
    }
}
This fixed the problem for me.

Related

Setup Xdebug for Shopware docker failed

I'm trying to set up Xdebug for shopware-docker, without success.
VHOST_[FOLDER_NAME_UPPER_CASE]_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php74-xdebug
After replacing your folder name and running swdc up, Xdebug should be activated.
Which folder name should I use?
Using myname, the same name as in /var/www/html/myname, returns an error on swdc up myname:
swdc up myname
[+] Running 2/0
⠿ Network shopware-docker_default Created 0.0s
⠿ Container shopware-docker-mysql-1 Created 0.0s
[+] Running 1/1
⠿ Container shopware-docker-mysql-1 Started 0.3s
.database ready!
[+] Running 0/1
⠿ app_myname Error 1.7s
Error response from daemon: manifest unknown
EDIT #1
With this setup, VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug (a versioned Xdebug image), the app started:
// $HOME/.config/swdc/env
...
VHOST_MYNAME_IMAGE=ghcr.io/shyim/shopware-docker/6/nginx:php81-xdebug
But when I set a debug breakpoint (e.g. in index.php), nothing happens.
EDIT #2
As @Alex recommended, I placed xdebug_break() inside my code and it works.
When stopping on the breakpoint, the debugger log answers with hints/warnings like those described in the manual:
...
Cannot find a local copy of the file on server /var/www/html/%my_path%
Local path is //var/www/html/%my_path%
...
Clicking on "Click to set up path mapping" opens the modal.
Inside the modal, I select the input "Use path mapping (...)".
The input field "File path in project" responds with "undefined".
But I have already set up the mapping as described in the manual under File | Settings | PHP | Servers:
Why does my mapping not work? Where did my setup fail?
The path mapping needs to be between your local project path on your workstation and the path inside the Docker containers. Without it, Xdebug has a hard time mapping the breakpoints from PhpStorm to the actual code inside the container.
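As a concrete illustration (the paths here are hypothetical, not from the original post): if your docker-compose.yml mounts the project like below, the server mapping in File | Settings | PHP | Servers must pair exactly those two directories.

# hypothetical volume mount from a docker-compose.yml
services:
  app:
    volumes:
      - /home/me/projects/shopware:/var/www/html
# PhpStorm mapping (File | Settings | PHP | Servers):
#   local path /home/me/projects/shopware  ->  server path /var/www/html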
If mapping the path correctly does not work, and if it's a possibility for you, I can highly recommend switching to http://devenv.sh for your development environment. Shopware itself promotes this new environment in its documentation (https://developer.shopware.com/docs/guides/installation/devenv) and provides an example of how to enable Xdebug:
# devenv.local.nix file
{ pkgs, config, lib, ... }:
{
  languages.php.package = pkgs.php.buildEnv {
    extensions = { all, enabled }: with all; enabled ++ [ amqp redis blackfire grpc xdebug ];
    extraConfig = ''
      # Copy the config from devenv.nix and append the XDebug config
      # [...]
      xdebug.mode=debug
      xdebug.discover_client_host=1
      xdebug.client_host=127.0.0.1
    '';
  };
}
A correct path mapping should not be needed here, as your local file location is the same for XDebug and your PHPStorm.

Flutter pod file issues

It seems that my Podfile is broken and I can't install the app on my iPhone.
I get this error from pod install:
Running pod install
Exited (sigterm)
Exception: Error running pod install
I have already tried removing the old Podfile, flutter clean, flutter pub get, and building for iOS again, and I got the same error. You can find my Podfile and pubspec.yaml below; there are no errors in pubspec.yaml after a build. Any suggestions on how I can fix this error?
Podfile:
# Uncomment this line to define a global platform for your project
# platform :ios, '9.0'

# CocoaPods analytics sends network stats synchronously affecting flutter build latency.
ENV['COCOAPODS_DISABLE_STATS'] = 'true'

project 'Runner', {
  'Debug' => :debug,
  'Profile' => :release,
  'Release' => :release,
}

def flutter_root
  generated_xcode_build_settings_path = File.expand_path(File.join('..', 'Flutter', 'Generated.xcconfig'), __FILE__)
  unless File.exist?(generated_xcode_build_settings_path)
    raise "#{generated_xcode_build_settings_path} must exist. If you're running pod install manually, make sure flutter pub get is executed first"
  end

  File.foreach(generated_xcode_build_settings_path) do |line|
    matches = line.match(/FLUTTER_ROOT\=(.*)/)
    return matches[1].strip if matches
  end
  raise "FLUTTER_ROOT not found in #{generated_xcode_build_settings_path}. Try deleting Generated.xcconfig, then run flutter pub get"
end

require File.expand_path(File.join('packages', 'flutter_tools', 'bin', 'podhelper'), flutter_root)

flutter_ios_podfile_setup

target 'Runner' do
  use_frameworks!
  use_modular_headers!

  flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__))
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
  end
end
Pubspec:
environment:
  sdk: ">=2.7.0 <3.0.0"

dependencies:
  flutter:
    sdk: flutter
  sqflite: any
  intl: ^0.16.1
  percent_indicator: "^1.0.13"
  scoped_model: ^1.0.1
  assets_audio_player: ^1.0.1
  screen: ^0.0.5
  flutter_launcher_icons: "^0.7.0"
  shared_preferences: ^0.5.3+1
  persistent_bottom_nav_bar: any
  custom_navigator: ^0.3.0
  flutter_svg: ^0.18.0
  http: ^0.12.1
  async: ^2.4.1
  stacked: ^1.6.0
  stacked_services: ^0.4.3
  provider: ^4.3.1
  get_it: ^4.0.2
  firebase_core: ^0.4.0+9
  firebase_analytics: ^5.0.2
  firebase_auth: ^0.16.1
  firebase_storage: ^3.1.6
  cloud_firestore: ^0.13.6
  google_maps_flutter: ^0.5.28+1
  map_view: "^0.0.14"
  google_maps_webservice: ^0.0.6
  geolocator: ^5.3.1
  flutter_polyline_points: ^0.2.1
  image_picker: ^0.6.7+2
  cached_network_image: ^2.2.0+1
  # The following adds the Cupertino Icons font to your application.
  # Use with the CupertinoIcons class for iOS style icons.
  cupertino_icons: ^0.1.2

flutter_icons:
  android: "launcher_icon"
  ios: true
  image_path: "assets/icons/tomato.png"
dev_dependencies:
  flutter_test:
    sdk: flutter

# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec

# The following section is specific to Flutter.
flutter:
  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true

  # To add assets to your application, add an assets section, like this:
  assets:
    - assets/images/
    - assets/icons/
  #  - images/a_dot_ham.jpeg

  # An image asset can refer to one or more resolution-specific "variants", see
  # https://flutter.dev/assets-and-images/#resolution-aware.
  # For details regarding adding assets from package dependencies, see
  # https://flutter.dev/assets-and-images/#from-packages

  # To add custom fonts to your application, add a fonts section here,
  # in this "flutter" section. Each entry in this list should have a
  # "family" key with the font family name, and a "fonts" key with a
  # list giving the asset and other descriptors for the font. For
  # example:
  fonts:
    - family: Oxygen
      fonts:
        - asset: fonts/Oxygen-Regular.ttf
        - asset: fonts/Oxygen-Bold.ttf
          weight: 700
        - asset: fonts/Oxygen-Light.ttf
          weight: 300
  # For details regarding fonts from package dependencies,
  # see https://flutter.dev/custom-fonts/#from-packages
Solved it with:
flutter upgrade
flutter clean
flutter pub get
rm -Rf ios/Pods
rm -Rf ios/.symlinks
pod cache clean --all
rm -Rf ios/Flutter/Flutter.framework
flutter build ios

Automating the JX installation process via Ansible 2.9

I am trying to install Jenkins X version 2.0.785 via Ansible 2.9.9.
How do I handle prompts like "Please enter the name you wish to use with git:" that I get while installing JX? There are multiple prompts to be handled when I execute the jx install command.
I get the above-mentioned prompt even though --git-username=automation is already passed in the jx install command. I have tried both the expect and shell modules in Ansible.
Kindly suggest a solution for handling these prompts via Ansible.
What I tried:
- name: Handling multiple prompts
  expect:
    command: jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins=true --provider=openshift
    responses:
      Question:
        - Please enter the name you wish to use with git: automation
    timeout: 60

- name: Handling multiple prompts
  expect:
    command: jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins=true --provider=openshift
    responses:
      Please enter the name you wish to use with git: "automation"

- name: Handling multiple prompts
  become: yes
  shell: |
    automation '' | jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix Testproject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins true --provider openshift
None of these gives any errors in the stderr section of the Ansible logs; the only thing is that I receive the logs below in red, and the installation does not proceed further.
Output:
fatal: [master]: FAILED! => {
"changed": true,
"cmd": "jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip 167.254.204.90 --verbose --static-jenkins=true --provider=openshift --domain=jenkinsx.io",
"delta": "0:03:00.190343",
"end": "2020-06-17 06:44:03.620694",
"invocation": {
"module_args": {
"chdir": null,
"command": "jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip 167.254.204.90 --verbose --static-jenkins=true --provider=openshift --domain=jenkinsx.io",
"creates": null,
"echo": false,
"removes": null,
"responses": {
"Question": [
{
"Please enter the name you wish to use with git": "automation"
},
{
"Please enter the email address you wish to use with git": "automation#fujitsu.com"
},
{
"\\? Do you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server": "y"
},
{
"\\? Do you wish to use http://rtx-swtl-git.fnc.net.local as the pipelines Git server": "y"
}
]
},
"timeout": 180
}
},
"msg": "command exceeded timeout",
"rc": null,
"start": "2020-06-17 06:41:03.430351",
"stdout": "\u001b[1m\u001b[32m?\u001b[0m\u001b[0m \u001b[1mConfigured Jenkins installation type\u001b[0m: \u001b[36mStatic Jenkins Server and Jenkinsfiles\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: checking installation flags\r\n\u001b[36mDEBUG\u001b[0m: flags after checking - &{ConfigFile: InstallOnly:false Domain: ExposeControllerURLTemplate: ExposeControllerPathMode: AzureRegistrySubscription: DockerRegistry:docker-registry.default.svc:5000 DockerRegistryOrg: Provider:openshift VersionsRepository:https://github.com/jenkins-x/jenkins-x-versions.git VersionsGitRef: Version: LocalHelmRepoName:releases Namespace:jx CloudEnvRepository:https://github.com/jenkins-x/cloud-environments NoDefaultEnvironments:false RemoteEnvironments:false DefaultEnvironmentPrefix:TestProject LocalCloudEnvironment:false EnvironmentGitOwner: Timeout:6000 HelmTLS:false RegisterLocalHelmRepo:false CleanupTempFiles:true Prow:false DisableSetKubeContext:false Dir: Vault:false RecreateVaultBucket:true Tekton:false KnativeBuild:false BuildPackName: Kaniko:false GitOpsMode:false NoGitOpsEnvApply:false NoGitOpsEnvRepo:false NoGitOpsEnvSetup:false NoGitOpsVault:false NextGeneration:false StaticJenkins:true LongTermStorage:false LongTermStorageBucketName: CloudBeesDomain: CloudBeesAuth:}\r\n\u001b[36mDEBUG\u001b[0m: Setting the dev namespace to: \u001b[32mjx\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mnone\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\nContext \"jx/master-167-254-204-90-nip-io:8443/waruser\" modified.\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Storing the kubernetes provider openshift in the TeamSettings\r\n\u001b[36mDEBUG\u001b[0m: Enabling helm template mode in the TeamSettings\r\nGit configured for user: \u001b[32mautomation\u001b[0m and email \u001b[32mautomation#fujitsu.com\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using \u001b[32mhelm2\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Skipping \u001b[32mtiller\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mtemplate-mode\u001b[0m\r\n\u001b[36mDEBUG\u001b[0m: Initialising Helm '\u001b[32minit --client-only\u001b[0m'\r\nhelm installed and configured\r\nNot installing ingress as using OpenShift which uses Route and its own mechanism of ingress\r\nEnabling anyuid for the Jenkins service account in namespace jx\r\nscc \"anyuid\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"hostaccess\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"privileged\" added to: [\"system:serviceaccount:jx:jenkins\"]\r\nscc \"anyuid\" added to: [\"system:serviceaccount:jx:default\"]\r\n\u001b[36mDEBUG\u001b[0m: Long Term Storage not supported by provider 'openshift', disabling this option\r\nSet up a Git username and API token to be able to perform CI/CD\r\n\u001b[36mDEBUG\u001b[0m: merging pipeline secrets with local secrets\r\n\u001b[0G\u001b[2K\u001b[1;92m? 
\u001b[0m\u001b[1;99mDo you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server: \u001b[0m\u001b[37m(Y/n) \u001b[0m\u001b[?25l\u001b7\u001b[999;999f\u001b[6n",
"stdout_lines": [
"\u001b[1m\u001b[32m?\u001b[0m\u001b[0m \u001b[1mConfigured Jenkins installation type\u001b[0m: \u001b[36mStatic Jenkins Server and Jenkinsfiles\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: checking installation flags",
"\u001b[36mDEBUG\u001b[0m: flags after checking - &{ConfigFile: InstallOnly:false Domain: ExposeControllerURLTemplate: ExposeControllerPathMode: AzureRegistrySubscription: DockerRegistry:docker-registry.default.svc:5000 DockerRegistryOrg: Provider:openshift VersionsRepository:https://github.com/jenkins-x/jenkins-x-versions.git VersionsGitRef: Version: LocalHelmRepoName:releases Namespace:jx CloudEnvRepository:https://github.com/jenkins-x/cloud-environments NoDefaultEnvironments:false RemoteEnvironments:false DefaultEnvironmentPrefix:TestProject LocalCloudEnvironment:false EnvironmentGitOwner: Timeout:6000 HelmTLS:false RegisterLocalHelmRepo:false CleanupTempFiles:true Prow:false DisableSetKubeContext:false Dir: Vault:false RecreateVaultBucket:true Tekton:false KnativeBuild:false BuildPackName: Kaniko:false GitOpsMode:false NoGitOpsEnvApply:false NoGitOpsEnvRepo:false NoGitOpsEnvSetup:false NoGitOpsVault:false NextGeneration:false StaticJenkins:true LongTermStorage:false LongTermStorageBucketName: CloudBeesDomain: CloudBeesAuth:}",
"\u001b[36mDEBUG\u001b[0m: Setting the dev namespace to: \u001b[32mjx\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mnone\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"Context \"jx/master-167-254-204-90-nip-io:8443/waruser\" modified.",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mkubectl\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/kubectl\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: \u001b[32mhelm\u001b[0m is already available on your PATH at \u001b[32m/usr/bin/helm\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Storing the kubernetes provider openshift in the TeamSettings",
"\u001b[36mDEBUG\u001b[0m: Enabling helm template mode in the TeamSettings",
"Git configured for user: \u001b[32mautomation\u001b[0m and email \u001b[32mautomation#fujitsu.com\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using \u001b[32mhelm2\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Skipping \u001b[32mtiller\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Using helmBinary \u001b[32mhelm\u001b[0m with feature flag: \u001b[32mtemplate-mode\u001b[0m",
"\u001b[36mDEBUG\u001b[0m: Initialising Helm '\u001b[32minit --client-only\u001b[0m'",
"helm installed and configured",
"Not installing ingress as using OpenShift which uses Route and its own mechanism of ingress",
"Enabling anyuid for the Jenkins service account in namespace jx",
"scc \"anyuid\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"hostaccess\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"privileged\" added to: [\"system:serviceaccount:jx:jenkins\"]",
"scc \"anyuid\" added to: [\"system:serviceaccount:jx:default\"]",
"\u001b[36mDEBUG\u001b[0m: Long Term Storage not supported by provider 'openshift', disabling this option",
"Set up a Git username and API token to be able to perform CI/CD",
"\u001b[36mDEBUG\u001b[0m: merging pipeline secrets with local secrets",
"\u001b[0G\u001b[2K\u001b[1;92m? \u001b[0m\u001b[1;99mDo you wish to use automation as the local Git user for http://rtx-swtl-git.fnc.net.local server: \u001b[0m\u001b[37m(Y/n) \u001b[0m\u001b[?25l\u001b7\u001b[999;999f\u001b[6n"
]
}
PLAY RECAP *************************************************************************************************************************************************************
master : ok=3 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Helm, JX, Git, Ansible versions:
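Looking at the captured stdout, the prompt that never gets answered is wrapped in ANSI escape sequences, which can stop an anchored pattern such as "\\? Do you wish to use ..." from matching. A variant worth trying (my sketch, untested; it uses only documented expect module parameters, looser regexes, and a longer timeout; the email address is reconstructed from the mangled log output):

- name: Run jx install and answer its prompts
  expect:
    command: jx install --git-provider-kind bitbucketserver --git-provider-url http://rtx-swtl-git.fnc.net.local --git-username automation --default-environment-prefix TestProject --git-api-token MzI1ODg1NjA1NTk4OqjiP9N3lr4iHt9L5rofdaWMqsgW --on-premise --external-ip {{ hostvars[groups["kubemaster"][0]]["ip"] }} --verbose --static-jenkins=true --provider=openshift
    responses:
      # unanchored patterns so surrounding escape codes don't break the match
      "enter the name you wish to use with git": automation
      "enter the email address you wish to use with git": automation@fujitsu.com
      "Do you wish to use (.*) as the local Git user": "y"
      "Do you wish to use (.*) as the pipelines Git server": "y"
    echo: yes      # log the dialogue for debugging
    timeout: 600   # jx install can take several minutes

If your jx version supports it, jx install --batch-mode is also worth testing, since it is meant to suppress interactive prompts entirely.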

Any way to have a common netlify.toml file for a single repository and multiple sites?

I'm looking for a way to define two site builds on Netlify, sourced from the same repo, using a single common netlify.toml. Is it possible to do so?
I have a GitHub repository named hugo-dream-plus for which I've configured two website builds on Netlify, namely dream-plus-posts and dream-plus-cards. Both builds share the same environment variables and almost all of their configuration, except for the build commands:
hugo --config cards.toml #For dream-plus-cards
hugo --config posts.toml #For dream-plus-posts
I was wondering if there is a way to create a common netlify.toml file for both sites, since the repo is the same for both builds.
I've already used the web UI to configure each build separately, but it's quite bothersome to modify each of them, which is why I would prefer the scenario above.
What I plan to do is have all configuration shared between the two builds except for the build command, which would be defined separately as shown above.
As of the date of this answer, Netlify does not support changing values in the netlify.toml at build time, because the file is read in before your build runs; the exceptions are headers and redirects, which can be changed during the build.
Using environment variables directly as values ($VARIABLENAME) in your netlify.toml file is not supported.
However
You could run a script as the build command and have it choose the real build based on the domain or an environment variable. There are a few setups that would work.
Here is how I might accomplish what you want based on the domain name.
netlify.toml
[build]
  command = "node ./scripts/custom.js"
  publish = "public"
scripts/custom.js
const exec = require('child_process').exec;

const site = process.env.URL || "https://example.com";
const domain = site.split('/')[site.split('/').length - 1];

let buildCommand;
switch (domain) {
  case "dream-plus-posts.netlify.com":
    buildCommand = 'hugo --config posts.toml';
    break;
  case "dream-plus-cards.netlify.com":
    buildCommand = 'hugo --config cards.toml';
    break;
  default:
    throw `Domain ${domain} is invalid`;
}

async function execute(command) {
  return await exec(command, function(error, stdout, stderr) {
    if (error) {
      throw error;
    }
    console.log(`site: ${site}`);
    console.log(`domain: ${domain}`);
    console.log(stdout);
  });
}

execute(buildCommand);
Things to note:
- I did not test the stdout logging with Hugo using this method. The child process captures the output and returns it in stdout.
- We don't want to catch errors, because we want the build to fail on errors; an uncaught error causes an exit code other than 0.
- You can inline other commands with this solution (i.e. "node ./scripts/custom.js && some other command before deploy").
- You could also just check an environment variable you set rather than the domain name.
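To illustrate that last point: the switch on the domain collapses to a few lines if each site sets its own variable in the Netlify UI (BUILD_TARGET is a hypothetical name; any variable you define works):

// sketch: replace the switch block above with a custom env var check.
// BUILD_TARGET is hypothetical; set it to "posts" or "cards" per site.
const target = process.env.BUILD_TARGET;
if (!["posts", "cards"].includes(target)) {
  throw `BUILD_TARGET ${target} is invalid`;
}
const buildCommand = `hugo --config ${target}.toml`;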

RM + DSC to node in untrusted domain

I mention the untrusted-domain aspect because I went through all the hoops around credential delegation, trusted-hosts lists, etc., to allow me to successfully push a DSC configuration from my RM server to a target node (not using RM, just native DSC). I get that bit and it works, great.
Now when I use those same scripts in RM (with some minor edits for the format expected by RM), RM reports a successful deploy, but all that has happened is that the component's bits have been copied to the target node to the default location for $applicationPathRoot (C:\Windows\DtlDownloads); there is no real evidence of an attempt to apply a .mof file.
My RM server and target nodes are in different domains with no trust. Both servers are W2K8 R2 (+ WMF4, of course). I'm running Update 4 of the RM server and client.
Here are the DSC scripts I'm running in RM:
CopyDSCResources.ps1
Configuration CopyDSCResource
{
    param (
        [Parameter(Mandatory=$false)]
        [ValidateNotNullOrEmpty()]
        [String] $ModulePath = "$env:ProgramFiles\WindowsPowershell\Modules")

    #[PSCredential] $credential = get-credential

    Node VCTSCFDSMWEB01
    {
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "C:\test.txt"
            DestinationPath = "D:\temp"
            Force = $true
            Type = "File"
        }
    }
}

CopyDSCResource -ConfigurationData $configData -Verbose

# test outside of RM
#CopyDSCResource -ConfigurationData CopyDSCResource.ConfigData.psd1
#Start-DscConfiguration -Path .\CopyDSCResource -Credential $credential -Verbose -Wait
CopyDSCResource.ConfigData.psd1
$configData = @{
    AllNodes = @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
        },
        @{
            NodeName = "VCTSCFDSWEB01.rlg.test"
            Role = "WebServer"
        }
    )
}
I'm afraid I can't upload screenshots from my current location, but in terms of RM I have a vNext environment with a single server linked, a vNext release path with a single 'Dev' stage, and a vNext release template with a single 'Deploy PS/DSC' action. The configuration of the action is:
ServerName - VCTSCFDSMWEB01
ComponentName - COpyDSCResource vNext
PSScriptPath - copydscresources.ps1
PSConfigurationPath - copydscresource.configdata.psd1
UseCredSSP - true
When I run a new release, the deploy stage reports success, and when I view the deployment log files I get the following:
Upload components - Successfully uploaded to the normalized store.
Deploy Using PS/DSC - Copying recursively from \vcxxxxtfs03\Drops\CorrespondenceCI\CorrespondenceCI20150114.1\Scripts to C:\Windows\DtlDownloads\CopyDSCResource vNext succeeded.
Finally the DSC event log has the following:
Job {CD3BE350-4072-4C8B-835F-4B4D1C46D65D} :
Configuration is sent from computer NULL by user sid S-1-5-18.
This compares markedly to the same event log entry when run outside of RM:
Job {34F78498-CF18-4F2A-9874-EB54FDA2D990} :
Configuration is sent from computer VCXXXXTFS01 by user sid S-1-5-21-1034805355-1149422947-1317505720-10867.
Any pointers appreciated
It would be good if I could see evidence of a .mof file being created on the RM server, for example; does anybody know where I can find this?
Turns out the crucial element was that my DSC script had to use an environment variable for naming the node. So:
Node $env:COMPUTERNAME
No idea why but it works!
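For completeness, here is the configuration from the question with that single change applied (a sketch; everything else exactly as posted above):

Configuration CopyDSCResource
{
    # name the node after the machine the configuration runs on
    Node $env:COMPUTERNAME
    {
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "C:\test.txt"
            DestinationPath = "D:\temp"
            Force = $true
            Type = "File"
        }
    }
}

CopyDSCResource -ConfigurationData $configData -Verbose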
