Ansible get_url behind Proxy requirements - ansible-2.x

I am trying to download various artefacts for a Confluent install using the get_url module. I am behind a proxy; my playbook is below.
I have to supply proxy information for one of the downloads but not for the other, and I'm trying to work out how to determine which tasks need proxy details defined and which don't. I got a certificate-verification error when I added the proxy information to the second task.
Is there also a way to avoid setting that information per-task for the first download?
tasks:
  - name: Download Confluent enterprise version
    get_url:
      url: https://packages.confluent.io/archive/7.0/confluent-7.0.7.tar.gz
      dest: /export/home/svcuser/tmp
      use_proxy: yes
    register: showconfluentdlstatus
    environment:
      http_proxy: http://myuserid:mypassword@proxy.prudential.com:8080/
      https_proxy: https://myuserid:mypassword@proxy.prudential.com:8080/

  - name: show confluent enterprise download status
    debug: var=showconfluentdlstatus

  - name: uncompress confluent enterprise
    unarchive:
      src: /export/home/svcuser/tmp/confluent-7.0.7.tar.gz
      dest: /export/home/svcuser/tmp/confluent_7.0.7/
    register: unarchiveconfluentstatus

  - name: show unarchive confluent status
    debug: var=unarchiveconfluentstatus

  - name: Download Confluent playbook for same version as enterprise confluent version
    # Proxy doesn't seem to be needed for this
    get_url:
      url: https://github.com/confluentinc/cp-ansible/archive/refs/heads/7.0.7-post.zip
      dest: /export/home/svcuser/tmp
    register: showconfluentplaybookdlstatus

  - name: show confluent playbook download status
    debug: var=showconfluentplaybookdlstatus

  - name: uncompress confluent playbook
    unarchive:
      src: /export/home/svcuser/tmp/cp-ansible-7.0.7-post.zip
      dest: /export/home/svcuser/tmp/confluent_7.0.7/
    register: unarchiveconfluentplaybookstatus

  - name: show unarchive confluent playbook status
    debug: var=unarchiveconfluentplaybookstatus
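One way to avoid repeating the proxy settings per task (a sketch, assuming the same corporate proxy applies to everything in the play; the host and credentials below are placeholders) is to set `environment` at the play level, so every task, including each `get_url`, inherits it:

```yaml
- hosts: all
  # Play-level environment: inherited by every task, so get_url picks up
  # the proxy without per-task environment blocks. Placeholder values.
  environment:
    http_proxy: http://myuserid:mypassword@proxy.example.com:8080/
    https_proxy: http://myuserid:mypassword@proxy.example.com:8080/
  tasks:
    - name: Download Confluent enterprise version
      get_url:
        url: https://packages.confluent.io/archive/7.0/confluent-7.0.7.tar.gz
        dest: /export/home/svcuser/tmp
        use_proxy: yes
      register: showconfluentdlstatus
```

Whether a given URL needs the proxy generally depends on the network path (e.g. whether the host is reachable directly or only via the proxy), not on Ansible itself, so play-level settings plus per-host exceptions via `no_proxy` are a common pattern.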


Custom docker image as workspace

I just deployed an Eclipse Che environment on my MicroK8s server and it works great with the sample devfiles. Now I want to use my own repo with a custom devfile.
But every time I try to start the environment, I get the following error message: Container tools has state CrashLoopBackOff.
This only happens with my custom devfile.yaml, not with the default one. The problem is that I need a more recent version of Golang, so I need a different file.
This is the devfile.yaml
schemaVersion: 2.1.0
metadata:
  name: kubernetes-image-version-checker
components:
  - name: tools
    container:
      image: quay.io/devfile/golang:latest
      env:
        - name: GOPATH
          value: /projects:/home/user/go
        - name: GOCACHE
          value: /tmp/.cache
      memoryLimit: 2Gi
      mountSources: true
      command: ['/checode/entrypoint-volume.sh']
projects:
  - name: kubernetes-image-version-checker
    git:
      remotes:
        origin: "https://gitlab.imanuel.dev/DerKnerd/kubernetes-image-version-checker.git"

What are the correct /etc/exports settings for Kubernetes NFS Storage?

I have a simple NFS server (followed instructions here) connected to a Kubernetes (v1.24.2) cluster as a storage class. When a new PVC is created, it creates a PV as expected with a new directory on the NFS server.
The NFS provider was deployed as instructed here.
My issue is that containers don't seem to be able to perform all the functions they expect to when interacting with the NFS server. For example:
A PVC and PV are created with the following yml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mssql-data
spec:
  storageClassName: nfs-client
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
This creates a directory on the NFS server as expected.
Then this deployment is created to use the PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      hostname: mssqlinst
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: "Developer"
            - name: ACCEPT_EULA
              value: "Y"
            - name: SA_PASSWORD
              value: "Password123"
          volumeMounts:
            - name: mssqldb
              mountPath: /var/opt/mssql
      volumes:
        - name: mssqldb
          persistentVolumeClaim:
            claimName: mssql-data
The server comes up and responds to requests but does so with the error:
[S0002][823] com.microsoft.sqlserver.jdbc.SQLServerException: The operating system returned error 1117(The request could not be performed because of an I/O device error.) to SQL Server during a read at offset 0x0000000009a000 in file '/var/opt/mssql/data/master.mdf'. Additional messages in the SQL Server error log and operating system error log may provide more detail. This is a severe system-level error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
My /etc/exports file has the following contents:
/srv *(rw,no_subtree_check,no_root_squash)
When the SQL container starts, it doesn't undergo any container restarts but the SQL service within the container appears to get into some sort of restart loop until a connection is attempted and then it throws the error and appears to stop.
Is there something I'm missing in the /etc/exports file? I tried variations with sync, async, and insecure but can't seem to get past the SQL error.
I gather from the error that this has something to do with the container's ability to read/write from/to the disk. Am I in the right ballpark?
The config that ended up working was:
/srv *(rw,no_root_squash,insecure,sync,no_subtree_check)
This was after a reinstall of the cluster; nothing else changed significantly, so there may have been more to the issue than this one config.
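For reference, the options in that working line break down as follows (standard exports(5) semantics; the comments are explanatory, not part of the syntax):

```
# /etc/exports: one export per line — <directory> <client>(<options>)
/srv *(rw,no_root_squash,insecure,sync,no_subtree_check)
# rw               allow both reads and writes
# no_root_squash   do not map client root to the anonymous user (relevant when
#                  a container runs as root or chowns files at startup)
# insecure         accept requests originating from ports above 1024
# sync             reply only after changes are committed to stable storage
# no_subtree_check disable subtree checking, avoiding issues with renamed files
```

After editing /etc/exports, the exports have to be reapplied (e.g. with exportfs -ra) before clients see the change.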

Azure DevOps React Deployment: Unable to run the script on Kudu Service. Error: Error: Unable to fetch script status due to timeout

I'm trying to deploy a CRA + Craco React application via Azure Devops. This is my YML file:
# Node.js React Web App to Linux on Azure
# Build a Node.js React app and deploy it to Azure as a Linux web app.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
- master

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: '{REDACTED FOR SO}'
  # Web app name
  webAppName: 'frontend'
  # Environment name
  environmentName: 'public'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureRmWebAppDeployment@4
            displayName: 'Azure App Service Deploy: '
            inputs:
              ConnectionType: 'AzureRM'
              azureSubscription: 'My Subscription'
              appType: 'webAppLinux'
              WebAppName: 'frontend'
              packageForLinux: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
              RuntimeStack: 'NODE|10.10'
              StartupCommand: 'npm run start'
              ScriptType: 'Inline Script'
              InlineScript: |
                npm install
                npm run build --if-present
The build stage succeeds. However, the deployment fails after running for ~20 minutes, with the following error:
Starting: Azure App Service Deploy:
==============================================================================
Task : Azure App Service deploy
Description : Deploy to Azure App Service a web, mobile, or API app using Docker, Java, .NET, .NET Core, Node.js, PHP, Python, or Ruby
Version : 4.198.0
Author : Microsoft Corporation
Help : https://aka.ms/azureappservicetroubleshooting
==============================================================================
Got service connection details for Azure App Service:'frontend'
Package deployment using ZIP Deploy initiated.
Deploy logs can be viewed at https://{MYAPPSERVICENAME}.scm.azurewebsites.net/api/deployments/62cf55c3f1434309b71a8334b2696fc9/log
Successfully deployed web package to App Service.
Trying to update App Service Application settings. Data: {"SCM_COMMAND_IDLE_TIMEOUT":"1800"}
App Service Application settings are already present.
Executing given script on Kudu service.
##[error]Error: Unable to run the script on Kudu Service. Error: Error: Unable to fetch script status due to timeout. You can increase the timeout limit by setting 'appservicedeploy.retrytimeout' variable to number of minutes required.
Successfully updated deployment History at https://{MYAPPSERVICENAME}.scm.azurewebsites.net/api/deployments/3641645137779498
App Service Application URL: http://{MYAPPSERVICENAME}.azurewebsites.net
Finishing: Azure App Service Deploy:
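As the error message itself suggests, the timeout can also be raised by defining a pipeline variable (a stopgap for slow Kudu scripts, not a fix; the value below is an example in minutes):

```yaml
variables:
  # Variable name is taken from the task's error message; value is minutes.
  appservicedeploy.retrytimeout: '40'
```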
This YAML solved my issues, along with upgrading the Azure App Service Node version to 14.*.
What I did: I moved npm install and npm run build into the build stage and removed them from the deployment's inline script.
So the package is ready before deployment; after a successful unzip it starts the app with pm2 serve /home/site/wwwroot/build --no-daemon --spa, since it's running in a Linux App Service. (This works if your build directory is within wwwroot; if not, update the path accordingly.)
# Node.js React Web App to Linux on Azure
# Build a Node.js React app and deploy it to Azure as a Linux web app.
# Add steps that analyze code, save build artifacts, deploy, and more:
# https://docs.microsoft.com/azure/devops/pipelines/languages/javascript
trigger:
- develop

variables:
  # Azure Resource Manager connection created during pipeline creation
  azureSubscription: 'YOUR-SUBSCRIPTION'
  # Web app name
  webAppName: 'AZURE_APP_NAME'
  # Environment name
  environmentName: 'APP_ENV_NAME'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: NodeTool@0
      inputs:
        versionSpec: '14.x'
      displayName: 'Install Node.js'
    - script: |
        npm install
      displayName: 'npm install'
    - script: |
        npm run build
      displayName: 'npm build'
    - task: ArchiveFiles@2
      displayName: 'Archive files'
      inputs:
        rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
        includeRootFolder: false
        archiveType: zip
        archiveFile: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
        replaceExistingArchive: true
    - upload: $(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip
      artifact: drop

- stage: Deploy
  displayName: Deploy stage
  dependsOn: Build
  condition: succeeded()
  jobs:
  - deployment: Deploy
    displayName: Deploy
    environment: $(environmentName)
    pool:
      vmImage: $(vmImageName)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureRmWebAppDeployment@4
            displayName: 'Azure App Service Deploy: poultry-web-test'
            inputs:
              azureSubscription: $(azureSubscription)
              appType: webAppLinux
              WebAppName: $(webAppName)
              packageForLinux: '$(Pipeline.Workspace)/drop/$(Build.BuildId).zip'
              RuntimeStack: 'NODE|14-lts'
              StartupCommand: 'pm2 serve /home/site/wwwroot/build --no-daemon --spa'

How to use Google Cloud Debugger on Cloud Run with Django

I'm trying to use Google Cloud Debugger on Cloud Run with Django. I read this document:
https://cloud.google.com/debugger/docs/setup/python
What I did:
I turned on Debugger in Google Cloud.
I installed google-python-cloud-debugger.
I created source-context.json in the same directory as models.py.
I added this code in manage.py:
try:
    import googleclouddebugger
    googleclouddebugger.enable()
except ImportError:
    pass
I updated the container on Google Cloud Run. However, I can't find any application in Debugger.
I imported my source code from GitHub and I can see my code in Debugger, but I couldn't set a breakpoint on the Debugger page.
How do I debug Django on Cloud Run? Please help me.
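One thing that may help the service show up as a distinct debuggee (a sketch based on the agent's documented keyword arguments, not something the linked page states for Cloud Run specifically; the module/version values are hypothetical) is passing an explicit module and version when enabling the agent, while keeping the app bootable when the agent isn't installed:

```python
# Guarded enable: the agent package typically only exists inside the
# deployed container, so the app still starts when it is missing locally.
debugger_enabled = False
try:
    import googleclouddebugger
    # module/version identify the debuggee in the Debugger UI;
    # 'my_app' and 'cloud-run-1' are example values.
    googleclouddebugger.enable(module='my_app', version='cloud-run-1')
    debugger_enabled = True
except ImportError:
    pass

print("debugger enabled:", debugger_enabled)
```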
Update
I did these two steps:
I added the Cloud Debugger Agent role to the service account in IAM.
I connected the GitHub repository to Google Cloud Source Repositories.
Cloud Debugger works in my local environment, but it doesn't work in Cloud Run.
The picture shows only the local application; I can't find the Cloud Run application.
This is my yaml file (I'm using Cloud Run in fully managed mode):
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my_app
  namespace: '135253772466'
  selfLink: /apis/serving.knative.dev/v1/namespaces/135253772466/services/my_app
  uid: 61b4ac55-4aab-4d33-801d-d21b0d116ea4
  resourceVersion: AAWmjubgiTg
  generation: 176
  creationTimestamp: '2020-04-14T12:38:39.484473Z'
  labels:
    cloud.googleapis.com/location: asia-northeast1
  annotations:
    run.googleapis.com/client-name: gcloud
    serving.knative.dev/creator: 135253772466@cloudbuild.gserviceaccount.com
    serving.knative.dev/lastModifier: 135253772466@cloudbuild.gserviceaccount.com
    client.knative.dev/user-image: gcr.io/my_project/my_app
    run.googleapis.com/client-version: 291.0.0
spec:
  traffic:
  - percent: 100
    latestRevision: true
  template:
    metadata:
      name: my_app-00176-wud
      annotations:
        run.googleapis.com/client-name: gcloud
        client.knative.dev/user-image: gcr.io/my_project/my_app
        run.googleapis.com/client-version: 291.0.0
        autoscaling.knative.dev/maxScale: '1000'
    spec:
      timeoutSeconds: 900
      serviceAccountName: 135253772466-compute@developer.gserviceaccount.com
      containerConcurrency: 80
      containers:
      - image: gcr.io/my_project/my_app
        ports:
        - containerPort: 8080
        env:
        - name: CLOUD_RUN_HOST
          value: my_app-u3ljntrlma-an.a.run.app
        resources:
          limits:
            cpu: 1000m
            memory: 2048Mi
status:
  conditions:
  - type: Ready
    status: 'True'
    lastTransitionTime: '2020-05-26T15:39:32.595Z'
  - type: ConfigurationsReady
    status: 'True'
    lastTransitionTime: '2020-05-26T15:39:25.640Z'
  - type: RoutesReady
    status: 'True'
    lastTransitionTime: '2020-05-26T15:39:32.595Z'
  observedGeneration: 176
  traffic:
  - revisionName: my_app-00176-wud
    percent: 100
    latestRevision: true
  latestReadyRevisionName: my_app-00176-wud
  latestCreatedRevisionName: my_app-00176-wud
  address:
    url: https://my_app-u3ljntrlma-an.a.run.app
  url: https://my_app-u3ljntrlma-an.a.run.app

Where is the cassandra.yaml file located on a Mac?

I am working in a Docker environment and executed docker exec -it mycassandra cqlsh. When I insert data, it raises the following error:
WriteTimeout - Error from server: code=1100
From this, I gather that I need to find the cassandra.yaml file and increase the write timeout, but I can't find it on my Mac.
Could you tell me how to find it and how to amend the file?
Thanks.
For those who installed it with brew install cassandra, the yaml file is located in /usr/local/etc/cassandra.
If you are running the official cassandra image then cassandra.yaml may be found at /etc/cassandra/cassandra.yaml in the container. If you want to create a custom cassandra.yaml file then you may try to overwrite it in your Dockerfile or docker-compose.yml file. For example, in my docker-compose.yml file I have something like:
services:
  cassandra:
    image: cassandra:3.11.4
    volumes:
      - ./cassandra.yaml:/etc/cassandra/cassandra.yaml
which causes the cassandra.yaml file in the container to be overwritten by my local cassandra.yaml.
I hope this helps.
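To address the original WriteTimeout, the relevant knob in that mounted cassandra.yaml is the write request timeout (the value below is just an example; the stock default is 2000 ms):

```yaml
# cassandra.yaml fragment: how long the coordinator waits for replicas to
# acknowledge a write before the query fails with a WriteTimeout.
write_request_timeout_in_ms: 10000
```

Raising the timeout masks slow writes rather than fixing them, so it is worth checking disk and container resource limits as well.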
From the provided example, it seems the database runs inside a container, so the cassandra.yaml you're looking for is created on the fly when the container starts, based on the configuration you provided.
We have set up Cassandra containers with Kubernetes (executed in Docker) based on the instructions found here, and were able to modify settings from the cassandra.yaml file in the StatefulSet configuration by updating the variables in the env section of the container spec.
For example, to modify the seeds list, the cluster name, the DC, and the rack of the C* cluster named c-test-qa:
apiVersion: apps/v1
kind: StatefulSet
...
spec:
  serviceName: c-test-qa
  replicas: 1
  selector:
    matchLabels:
      app: c-test-qa
  template:
    metadata:
      labels:
        app: c-test-qa
    spec:
      containers:
      - name: c-test-qa
        image: cassandra:3.11
        imagePullPolicy: IfNotPresent
        ...
        env:
        - name: CASSANDRA_SEEDS
          value: c-test-qa-0.c-test-qa.qa.svc.cluster.local
        - name: CASSANDRA_CLUSTER_NAME
          value: "testqa"
        - name: CASSANDRA_DC
          value: "DC1"
        - name: CASSANDRA_RACK
          value: "CustomRack1"
        ...
On macOS, it will be found in one of the following locations:
Cassandra package installations: /etc/cassandra
Cassandra tarball installations: install_location/conf
DataStax Enterprise package installations: /etc/dse/cassandra
DataStax Enterprise tarball installations: install_location/resources/cassandra/conf
