Configure a local build plan with Harbormaster and Drydock - continuous-deployment

I'm trying to create a simple build plan using Harbormaster and Drydock:
Whenever a commit is made, the build plan Deployment should be triggered. This can easily be done with Herald.
The build plan Deployment has some build steps which run a command.
I know Drydock and Harbormaster are prototypes, so it seems there is not much documentation.
So first I created a build plan and added two build steps for testing:
Lease Host build step with localhost as name and linux as platform
Run Command build step with php /var/www/ci/test.php as command and localhost as host
But the error message after a manual start was:
exception 'Exception' with message 'Lease has been broken!' in /var/www/phabricator/src/applications/drydock/storage/DrydockLease.php:172
Stack trace:
#0 /var/www/phabricator/src/applications/drydock/storage/DrydockLease.php(198): DrydockLease::waitForLeases(Array)
#1 /var/www/phabricator/src/applications/harbormaster/step/HarbormasterLeaseHostBuildStepImplementation.php(32): DrydockLease->waitUntilActive()
#2 /var/www/phabricator/src/applications/harbormaster/worker/HarbormasterTargetWorker.php(52): HarbormasterLeaseHostBuildStepImplementation->execute(Object(HarbormasterBuild), Object(HarbormasterBuildTarget))
#3 /var/www/phabricator/src/infrastructure/daemon/workers/PhabricatorWorker.php(91): HarbormasterTargetWorker->doWork()
#4 /var/www/phabricator/src/infrastructure/daemon/workers/storage/PhabricatorWorkerActiveTask.php(162): PhabricatorWorker->executeTask()
#5 /var/www/phabricator/src/infrastructure/daemon/workers/PhabricatorTaskmasterDaemon.php(22): PhabricatorWorkerActiveTask->executeTask()
#6 /var/www/libphutil/src/daemon/PhutilDaemon.php(183): PhabricatorTaskmasterDaemon->run()
#7 /var/www/libphutil/scripts/daemon/exec/exec_daemon.php(125): PhutilDaemon->execute()
#8 {main}
Could anybody give me some hints on how to run commands on localhost with Harbormaster and Drydock?

The problem was that I had not created any resource through Drydock. Here is how you can execute a command with Harbormaster and Drydock:
1. Create a Drydock Blueprint (e.g. Blueprint 4711).
2. Create a Passphrase SSH private key for Drydock that can be used to access your local host through SSH (e.g. K123).
3. Create a Drydock resource for your local host through the CLI:
./bin/drydock create-resource --blueprint 4711 --name localhost --attributes host=localhost,platform=linux,remote=true,port=22,path=/var/drydock,credential=123
4. Create a Harbormaster Build Plan.
5. Add a Build Step (Lease Host) to your Build Plan, with your Drydock Blueprint as Artifact and linux as Platform.
6. Add a second Build Step (Run Command) to your Build Plan, with the command you want and your Drydock Blueprint as Host.
Using the server itself for CI/CD is probably only an option for small installations.
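For reference, the command in the Run Command step can be anything executable on the leased host. A trivial script for the /var/www/ci/test.php used above (hypothetical contents, just to prove the plumbing works) could be:
<?php
// Print a marker and exit 0 so Harbormaster records the build target as passed.
echo "CI test script executed\n";
exit(0);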

If you want to understand how Almanac, Drydock and Harbormaster interact with each other, you may be interested in this guide I published on Wikibooks, which covers most aspects:
https://en.wikibooks.org/wiki/Phabricator_Administrator%27s_Handbook/Continuous_integration
The guide includes screenshots and helpful diagrams; it expands on the official documentation and, moreover, has a troubleshooting section (which covers your exact problem!).
Have a good read!

Related

Starting SFDX: Deploy Source to Org - Slow, Really Slow, In fact it never finished

I was doing a deploy to Salesforce (my first) via right-click -> "Deploy to Org" on a single file. There's no useful output to say what's going on.
14:35:11.774 Starting SFDX: Deploy Source to Org
I've read elsewhere that Salesforce can be exceptionally slow when it comes to deployment, but ten minutes (and counting) to deploy a single file seems very slow indeed. Is there a way to debug into what's happening, or is it just a black box?
Production or sandbox? "Normal" or source-tracked (like a scratch org, for example)? Do you have the "Output" view/tab at the bottom of your VSCode, where you should normally see command results (slightly different from "Terminal")?
If you log in to the target org and go to Setup -> Monitor Deployments, do you even see your attempt? If you deploy to prod, by default it'll run all tests, which in complex orgs may be expensive; an hour, for example.
You may get results faster if you run just one test related to the class you're deploying, but for that you need to whip out some script-fu:
sfdx force:source:deploy -u mytargetuser -p "force-app/main/default/classes/AccountTriggerHandler.cls" -l RunSpecifiedTests -r "AccountTriggerHandlerTest" --verbose --loglevel fatal -c
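If the deploy seems to hang, you can also poll its progress from the CLI; without -i/--jobid this reports on the most recent deployment (a sketch, reusing mytargetuser from above):
sfdx force:source:deploy:report -u mytargetuser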

Setting up a Flink cluster with Podman for a beampipeline with flinkrunner

My goal is to create a streaming pipeline that reads data from Apache Kafka, processes it, and writes back to it.
For security reasons, I want to avoid Docker and use Podman.
I have set up a minimal cluster via a docker-compose.yml with a jobmanager, a taskmanager and a Python SDK harness worker. The SDK harness worker seems to get stuck when I try to execute a pipeline.
When I run the pipeline (reading a multi-line .txt file and writing it back to a file), it gets transferred to the jobmanager and taskmanager correctly, but then goes idle. When I look in the Python SDK container, the logs repeatedly show the following message:
2022/12/04 16:13:02 Starting worker pool 1: python -m
apache_beam.runners.worker.worker_pool_main --service_port=50000
--container_executable=/opt/apache/beam/boot
Starting worker with command ['/opt/apache/beam/boot', '--id=1-1',
'--logging_endpoint=localhost:45087',
'--artifact_endpoint=localhost:35323',
'--provision_endpoint=localhost:36435',
'--control_endpoint=localhost:33237']
2022/12/04 16:16:31 Failed to obtain provisioning information: failed to
dial server at localhost:36435
caused by:
context deadline exceeded
Here is a link to a test pipeline that was created:
Example on GitHub
Environment:
Debian 11
Podman
Python 3.9.2
apache-beam==2.38.0
podman-compose
The setup of the cluster is defined in docker-compose.yml:
1x flink-jobmanager (Flink version 1.14)
1x flink-taskmanager
1x Python SDK harness
I chose to create the SDK container manually because I don't have Docker installed, and Flink fails when it tries to create a container via Docker.
I suspect that I have made a mistake in the network setup, or that some configuration is missing for the harness worker, but I could not figure out the problem. Any thoughts?
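One thing worth checking, following that suspicion: the endpoints in the log (localhost:36435 and friends) are opened inside the taskmanager container, so a worker pool running in a separate container can never reach them via localhost. A common workaround is to let the SDK harness share the taskmanager's network namespace; a minimal compose sketch (the image tag, service names and omissions are assumptions, not verified against this exact setup):
services:
  taskmanager:
    image: flink:1.14
    command: taskmanager
    # jobmanager and Kafka services omitted for brevity
  python-sdk-harness:
    image: apache/beam_python3.9_sdk:2.38.0
    # share the taskmanager's network namespace so the localhost
    # endpoints handed to the worker pool are actually reachable
    network_mode: "service:taskmanager"
    command: ["--worker_pool"]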
Crossposted to the user mailing list of beam.apache.org.

How to deploy SQL Server Express on Docker Desktop Kubernetes

I've been studying "Kubernetes Up and Running" by Hightower et al. (first edition), Chapter 13, where they discuss creating a reliable MySQL singleton. (Since I just discovered that there is a second edition, I guess I'll be buying it soon.)
Using their reliable MySQL singleton example as a model, I've been looking for some sample YAML files to make a similar deployment with Microsoft SQL Server (Express) on Docker Desktop for Kubernetes.
Apparently I need YAML files to deploy:
a persistent volume
a volume claim (should this be NFS?)
a SQL Server (Express edition) replica set (in spite of the fact that this is just a singleton)
I've tried this example, but I'm confused because it does not contain a persistent volume and claim, and it does not work. I get the error:
Error: unable to recognize "sqlserver.yml": no matches for kind "Deployment" in version "apps/v1beta1"
Can someone please point me to some sample YAML files that are not Azure specific that will work on Docker Desktop Kubernetes for Windows 10? After debugging my application, I'll want to deploy this to Azure (AKS).
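(For reference: the "no matches for kind Deployment in version apps/v1beta1" error means the manifest targets an API group that Kubernetes 1.16 removed. On current clusters a Deployment manifest looks like the sketch below; the sqlserver names and placeholder password are illustrative assumptions.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sqlserver
spec:
  replicas: 1
  selector:            # apps/v1 requires an explicit selector
    matchLabels:
      app: sqlserver
  template:
    metadata:
      labels:
        app: sqlserver
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2017-latest
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_PID
          value: "Express"              # run the Express edition
        - name: SA_PASSWORD
          value: "<YourStrong!Passw0rd>"   # placeholder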
Wed Jul 15 2020 Update
I left out the "-n namespace" for the helm install command (possibly because I'm using Helm v3 and you are using Helm v2?).
That install command still did not work. Then I did a
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
Now this command works:
helm install todo-app-database stable/mssql-linux
Progress!
When I do a "k get pods" I see that my todo-app-mssql-linux database pod is in the Pending state. So I did a
kubectl get events
and I see
Warning FailedScheduling pod/todo-app-database-mssql-linux-8668d9b88c-lsh5l 0/1 nodes are available: 1 Insufficient memory.
I've been google searching for "Kubernetes insufficient memory" and can find no match.
I suspect this is a problem specific to "Docker Desktop Kubernetes".
When I look at the output for
helm -n ns-todolistdemo template todo-app-database stable/mssql-linux
I see the deployment is asking for 2Gi. (Interesting: when I use the template command, the "-n ns-todolistdemo" does not cause an error like it does with the install command).
So I do
kubectl describe deployment todo-app-database-mssql-linux >todo-app-database-mssql-linux.yaml
I edit the yaml file to change 2Gi to 1Gi.
kubectl apply -f todo-app-database-mssql-linux.yaml
I get this error:
error: error parsing todo-app-database-mssql-linux.yaml: error converting YAML to JSON: yaml: line 9: mapping values are not allowed in this context
Hmm... that did not work. I try delete:
kubectl delete deployment todo-app-database-mssql-linux
kubectl create -f todo-app-database-mssql-linux.yaml
I get this error:
error: error validating "todo-app-database-mssql-linux.yaml": error validating data: invalid object to validate; if you choose to ignore these errors, turn validation off with --validate=false
So I try apply:
kubectl apply -f todo-app-database-mssql-linux.yaml
Same error!
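(A likely cause of the two errors above: kubectl describe prints human-readable text, not a YAML manifest, so its output cannot be fed back into kubectl apply or kubectl create. To get an editable manifest, export the live object as YAML instead.)
kubectl get deployment todo-app-database-mssql-linux -o yaml > todo-app-database-mssql-linux.yaml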
Shucks.... Is there a way to adjust the memory allocation for Docker Desktop?
Thank you
Siegfried
Short answer
https://github.com/helm/charts/blob/master/stable/mssql-linux/templates/pvc-master.yaml
Detailed Answer
Docker Desktop already comes with a default StorageClass:
This storage class is responsible for auto-provisioning a PV whenever you create a PVC.
If you have a YAML definition of a PVC (persistent volume claim), you can simply omit storageClassName, so it will use the default.
k get storageclass
NAME PROVISIONER AGE
hostpath (default) docker.io/hostpath 11d
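To illustrate, a minimal PVC relying on the default class could look like this (the mssql-data name is hypothetical); omitting storageClassName makes the hostpath provisioner create the PV automatically:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi   # requested disk size, adjust as needed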
This is fair enough, as the Docker Desktop cluster is a one-node cluster. So if your DB crashes and the cluster starts it again, it will not move to another node, simply because you have a single node :)
Now, should you write the PVC YAML from scratch?
No, you don't need to, because Helm should be your best friend.
(I explain below why you should use Helm; it doesn't even have a steep learning curve.)
Fortunately, the community provides a chart called stable/mssql-linux.
Let's run it together:
helm -n <your-namespace> install todo-app-database stable/mssql-linux
# helm -n <namespace> install <release-name> <chart-name-from-community>
If you want to check the YAML (namely the PVC) that Helm computed, you can run template instead of install:
helm -n <your-namespace> template todo-app-database stable/mssql-linux
Why did I answer with Helm?
Writing YAML from scratch means reinventing a wheel that others have already built.
The most efficient way is to reuse what the community has prepared for you.
However, you may ask: how can I reuse what others have done?
That's where Helm comes in.
Helm is your installer for any application on top of Kubernetes, regardless of how much YAML your app requires.
Install it now and hit the ground running: choco install kubernetes-helm
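Side note on the Insufficient memory event from the update: rather than hand-editing rendered YAML, chart defaults can usually be overridden at install time with --set. The exact value name depends on the chart, so inspect its values first (resources.requests.memory below is an assumption about this chart's layout):
helm show values stable/mssql-linux
helm -n <your-namespace> install todo-app-database stable/mssql-linux --set resources.requests.memory=1Gi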

SSH Agent Plugin v1.17 with Jenkins Declarative Pipeline not working with Windows

I have been having issues getting my multibranch pipeline to perform git commands with an SSH key via the SSH Agent plugin on Windows.
I am able to successfully perform a git clone over SSH from Git Bash on the Windows server that is running Jenkins.
In my pipeline log I am getting the following error when trying to use the sshagent plugin:
[ssh-agent] Looking for ssh-agent implementation... Could not find
ssh-agent: IOException: Cannot run program "ssh-agent": CreateProcess
error=2, The system cannot find the file specified Check if ssh-agent
is installed and in PATH [ssh-agent] FATAL: Could not find a suitable
ssh-agent provider
I have seen that installing Apache Tomcat Native libraries has helped some people, but the steps for doing so are not very descriptive.
Any help is appreciated. Thanks!
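For reference, a declarative pipeline typically invokes the plugin like this (a minimal sketch; the my-ssh-key credential ID and the push command are illustrative). The error above means Jenkins cannot find an ssh-agent executable at all, so whatever provides one (for example Git for Windows' usr\bin directory) must be on the PATH of the Jenkins process:
pipeline {
    agent any
    stages {
        stage('Push') {
            steps {
                // requires an ssh-agent executable on the node's PATH
                sshagent(credentials: ['my-ssh-key']) {
                    bat 'git push origin HEAD:master'
                }
            }
        }
    }
}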

KNIME Command Line Execution - ClassNotFoundException

I'd like to schedule a KNIME workflow. The workflow does its job very well as long as I start it from the KNIME GUI application. When I execute the same workflow via the command line, Java complains that com.microsoft.sqlserver.jdbc.SQLServerDriver could not be found (ClassNotFoundException).
I invoke it via:
"D:\Progamme\KNIME\knime.exe" -nosplash -application -consoleLog org.knime.product.KNIME_BATCH_APPLICATION -preferences="absolutepathto\preferences.epf" -workflowDir="absolutepathto\workflow"
Since the error message signals missing content on the Java CLASSPATH, I also tried adding the parameters
-vmargs -classpath .;"absolutepathto/sqljdbc42.jar"
But Java still slaps me with the same error, pointing to the same missing class...
I also tried to run the command from within knime.exe's directory, and I also tried to add the JAR file to Preferences -> Java -> Build Path -> Classpath Variable / User Libraries (referenced via the -preferences argument). But that had no effect.
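As a quick sanity check (plain JDK tooling, nothing KNIME-specific), you can at least confirm the driver class really is inside the JAR being referenced:
jar tf "absolutepathto/sqljdbc42.jar" | findstr SQLServerDriver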
Did anybody face the same problems? Maybe with other third-party JARs?
It is all about a Database Connector node configured for this SQL Server instance (screenshot omitted). Does the integrated security setting maybe cause a misleading error?
System spec: KNIME 3.2.2 on Windows Server 2008 R2
Update - extract from preferences file
/configuration/org.eclipse.core.net/org.eclipse.core.net.hasMigrated=true
/configuration/org.eclipse.ui.ide/MAX_RECENT_WORKSPACES=10
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES=<list of some workspaces>
/configuration/org.eclipse.ui.ide/RECENT_WORKSPACES_PROTOCOL=3
/configuration/org.eclipse.ui.ide/SHOW_RECENT_WORKSPACES=false
/configuration/org.eclipse.ui.ide/SHOW_WORKSPACE_SELECTION_DIALOG=true
Is there maybe a problem due to the fact that this is a KNIME instance shared among several users, and the command-line execution does not know which workspace to choose? Is the workspace somehow needed, and why?
Partial Solution:
I finally managed it, but I don't know exactly why it works now. What I did was to load a fresh portable version of KNIME and run the same commands, only changing the executable path to the new portable version. Before that, I started the portable version once to set the workspace directory and register the database driver in the preferences dialog and .ini file, nothing else; the same configuration as the shared KNIME instance so far. What I am really wondering about is that from now on the commands also work with the shared KNIME instance. I really don't know what caused the change that lets KNIME find the driver class.
Info
Because I encountered a few more problems within the shared environment in KNIME command-line mode that led to nondeterministic execution results, I wrote a little .NET library. It gives me more flexibility/control over the workflow execution (which return codes and error messages occurred, and so on). You can find it here if you're interested: KnimeNet
I took a very minimal approach:
cd "C:\Program Files\KNIME"
.\knime -nosplash -noexit -consoleLog -reset -application org.knime.product.KNIME_BATCH_APPLICATION -workflowFile="D:\Work\Knime Workflows\Output\CMD_Test.knwf" -preferences="D:\Work\Knime Workflows\Output\CMD_Test.epf"
