Azure: Logic App, Container, and mounting an Azure File Share

I am trying to run a container instance from a Logic App, and I need to mount an Azure file share into the container. This article is very clear on how to do this from the command line: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-volume-azure-files
This works exactly as expected:
az container create `
--resource-group pass `
--name $CONTAINER `
--image "$REGISTRY/$CONTAINER::latest" `
--restart-policy Never `
--registry-username $USERNAME `
--registry-password $PASSWORD `
--os-type Linux `
--cpu 0.2 `
--memory 0.5 `
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME `
--azure-file-volume-account-key $STORAGE_KEY `
--azure-file-volume-mount-path $ACI_MOUNT_PATH `
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME `
--environment-variables ID=XXXX
az container logs --resource-group pass --name $CONTAINER
How do you create the Logic App task for the container? I found the following properties:
Container Volume Mount Path - 1
Container Volume Mount Name - 1
But I cannot see the equivalent of
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME
--azure-file-volume-account-key $STORAGE_KEY
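For context: whichever action creates the container group, the ACI REST payload it ultimately submits nests the share name, storage account name, and storage account key under a volumes entry, and the mount itself only references that volume by name. A sketch of the relevant JSON fragment (all values are placeholders):
{
  "properties": {
    "containers": [{
      "name": "mycontainer",
      "properties": {
        "image": "myregistry.azurecr.io/mycontainer:latest",
        "volumeMounts": [
          { "name": "filesharevolume", "mountPath": "/aci/data" }
        ]
      }
    }],
    "volumes": [{
      "name": "filesharevolume",
      "azureFile": {
        "shareName": "myshare",
        "storageAccountName": "mystorageaccount",
        "storageAccountKey": "<storage key>"
      }
    }]
  }
}
So the "Container Volume Mount Name" property has to match the name of a volume entry that carries the account name and key.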
Also I need to:
delete the container after the run
delete a folder in the file share after the run (I can't find a delete-folder task, only delete file)

Make sure the az CLI is up to date and use the following:
az container create \
--resource-group $ACI_PERS_RESOURCE_GROUP \
--name hellofiles \
--image mcr.microsoft.com/azuredocs/aci-hellofiles \
--dns-name-label aci-demo \
--ports 80 \
--azure-file-volume-account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
--azure-file-volume-account-key $STORAGE_KEY \
--azure-file-volume-share-name $ACI_PERS_SHARE_NAME \
--azure-file-volume-mount-path /aci/logs/
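For the two cleanup steps in the question, the CLI side can be scripted after the run. A sketch reusing the question's variable names ('myfolder' is a placeholder; note that az storage directory delete only removes empty directories, so delete the files inside first):
az container delete --resource-group pass --name $CONTAINER --yes
# 'myfolder' is a placeholder for the folder created by the run
az storage directory delete \
  --account-name $ACI_PERS_STORAGE_ACCOUNT_NAME \
  --account-key $STORAGE_KEY \
  --share-name $ACI_PERS_SHARE_NAME \
  --name myfolder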

Related

Zeppelin authentication with JDBC realm

I have been trying to set up Zeppelin with authentication via a Shiro JDBC realm. Despite all my attempts, I have not been able to get it working. Basic authentication works, but with the JDBC realm it fails.
The Zeppelin server was created following the doc: http://zeppelin.apache.org/docs/0.9.0/quickstart/kubernetes.html
The pod is working.
I enabled Shiro by extending the Docker image. My Dockerfile:
ARG ZEPPELIN_IMAGE=apache/zeppelin:0.9.0
FROM ${ZEPPELIN_IMAGE}
#https://hub.docker.com/r/apache/zeppelin/dockerfile
WORKDIR ${Z_HOME}
ADD /zeppelin/shiro.ini ${Z_HOME}/conf/
ADD https://repo1.maven.org/maven2/mysql/mysql-connector-java/6.0.4/mysql-connector-java-6.0.4.jar ${Z_HOME}/lib/
ENV CLASSPATH=${Z_HOME}/lib/mysql-connector-java-6.0.4.jar:${CLASSPATH}
ENTRYPOINT [ "/usr/bin/tini", "--" ]
WORKDIR ${Z_HOME}
CMD ["bin/zeppelin.sh"]
My shiro.ini was taken from https://gist.github.com/adamjshook/6c42b03fdb09b60cd519174d0aec1af5
[main]
ds = com.mysql.jdbc.jdbc2.optional.MysqlDataSource
ds.serverName = localhost
ds.databaseName = zeppelin
ds.user = zeppelin
ds.password = zeppelin
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealmCredentialsMatcher = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
jdbcRealm.credentialsMatcher = $jdbcRealmCredentialsMatcher
ps = org.apache.shiro.authc.credential.DefaultPasswordService
pm = org.apache.shiro.authc.credential.PasswordMatcher
pm.passwordService = $ps
jdbcRealm.dataSource = $ds
jdbcRealm.credentialsMatcher = $pm
shiro.loginUrl = /api/login
[urls]
/** = authc
Now, when I deploy the zeppelin server, I get:
org.apache.shiro.config.ConfigurationException: Unable to instantiate class [com.mysql.jdbc.jdbc2.optional.MysqlDataSource] for object named 'ds'. Please ensure you've specified the fully qualified class name correctly.
at org.apache.shiro.config.ReflectionBuilder.createNewInstance(ReflectionBuilder.java:327)
at org.apache.shiro.config.ReflectionBuilder$InstantiationStatement.doExecute(ReflectionBuilder.java:961)
at org.apache.shiro.config.ReflectionBuilder$Statement.execute(ReflectionBuilder.java:921)
at org.apache.shiro.config.ReflectionBuilder$BeanConfigurationProcessor.execute(ReflectionBuilder.java:799)
at org.apache.shiro.config.ReflectionBuilder.buildObjects(ReflectionBuilder.java:278)
at org.apache.shiro.config.IniSecurityManagerFactory.buildInstances(IniSecurityManagerFactory.java:181)
at org.apache.shiro.config.IniSecurityManagerFactory.createSecurityManager(IniSecurityManagerFactory.java:139)
at org.apache.shiro.config.IniSecurityManagerFactory.createSecurityManager(IniSecurityManagerFactory.java:107)
at org.apache.shiro.config.IniSecurityManagerFactory.createInstance(IniSecurityManagerFactory.java:98)
at org.apache.shiro.config.IniSecurityManagerFactory.createInstance(IniSecurityManagerFactory.java:47)
at org.apache.shiro.config.IniFactorySupport.createInstance(IniFactorySupport.java:150)
at org.apache.shiro.util.AbstractFactory.getInstance(AbstractFactory.java:47)
Caused by: org.apache.shiro.util.UnknownClassException: Unable to load class named [com.mysql.jdbc.jdbc2.optional.MysqlDataSource] from the thread context, current, or system/application ClassLoaders. All heuristics have been exhausted. Class could not be found.
at org.apache.shiro.util.ClassUtils.forName(ClassUtils.java:152)
at org.apache.shiro.util.ClassUtils.newInstance(ClassUtils.java:168)
at org.apache.shiro.config.ReflectionBuilder.createNewInstance(ReflectionBuilder.java:320)
... 40 more
Not sure why it is failing even though I have defined the jar file on the classpath.
The issue was that the jar did not have the right permissions. I got it fixed with the Dockerfile below:
ARG ZEPPELIN_IMAGE=apache/zeppelin:0.9.0
FROM ${ZEPPELIN_IMAGE}
#https://hub.docker.com/r/apache/zeppelin/dockerfile
WORKDIR ${Z_HOME}
USER root
ADD /zeppelin/shiro.ini ${Z_HOME}/conf/
ADD https://repo1.maven.org/maven2/mysql/mysql-connector-java/6.0.4/mysql-connector-java-6.0.4.jar ${Z_HOME}/lib/
ENV CLASSPATH=${Z_HOME}/lib/mysql-connector-java-6.0.4.jar:${CLASSPATH}
RUN chmod 777 ${Z_HOME}/lib/mysql-connector-java-6.0.4.jar
USER 1000
ENTRYPOINT [ "/usr/bin/tini", "--" ]
WORKDIR ${Z_HOME}
CMD ["bin/zeppelin.sh"]

Exporting an Azure database in PowerShell with New-AzSqlDatabaseExport does not always return the OperationStatusLink, resulting in an exception

I am writing a PowerShell script to export an Azure database to a bacpac file using the New-AzSqlDatabaseExport cmdlet (following the documentation here).
When I run the script, the results are inconsistent. When I open a new PowerShell window and run the export, everything works as expected and I get back an OperationStatusLink, so I can check the progress of the export as it runs. However, once the export completes, if I run the script a second time in the same window, the export does not return an OperationStatusLink. This causes Get-AzSqlDatabaseImportExportStatus to fail with the following exception: Cannot bind argument to parameter 'OperationStatusLink' because it is null.
Below are the steps to reproduce, as well as a snippet of the PowerShell script. Any suggestions as to what I could try to ensure that New-AzSqlDatabaseExport always returns an OperationStatusLink would be greatly appreciated.
Steps to Reproduce:
Open a PowerShell window
Log in to Azure
Run script to export database to bacpac
Expected Result: Export is successful and OperationStatusLink is provided
Actual Result: Export is successful and OperationStatusLink is provided
Run script to export database to bacpac
Expected Result: Export is successful and OperationStatusLink is provided
Actual Result: Export is successful and OperationStatusLink is not provided
Powershell script:
Connect-AzAccount
Select-AzSubscription -SubscriptionName 'subscription name'
.\BackupAzureDatabase.ps1 `
    -DatabaseName "testDB" `
    -ResourceGroupName "group1" `
    -ServerName "testserver" `
    -ServerAdmin "admin" `
    -ServerPassword "********"
BackupAzureDatabase.ps1:
Param(
    [string][Parameter(Mandatory=$true)] $DatabaseName,
    [string][Parameter(Mandatory=$true)] $ResourceGroupName,
    [string][Parameter(Mandatory=$true)] $ServerName,
    [string][Parameter(Mandatory=$true)] $ServerAdmin,
    [string][Parameter(Mandatory=$true)] $ServerPassword
)
Process {
    # some code to get the storage info and credentials
    $ExportRequest = New-AzSqlDatabaseExport `
        -ResourceGroupName $ResourceGroupName `
        -ServerName $ServerName `
        -DatabaseName $DatabaseName `
        -StorageKeytype $StorageKeytype `
        -StorageKey $PrimaryKey `
        -StorageUri $BacpacUri `
        -AdministratorLogin $Creds.UserName `
        -AdministratorLoginPassword $Creds.Password
    $ExportStatus = Get-AzSqlDatabaseImportExportStatus `
        -OperationStatusLink $ExportRequest.OperationStatusLink
    # Get-AzSqlDatabaseImportExportStatus throws an exception, since
    # OperationStatusLink is empty/null most of the time
}
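Until the underlying module issue is resolved (see the answer below), a defensive check avoids the null-binding exception. A sketch using only values already present in the script:
if ($ExportRequest -and $ExportRequest.OperationStatusLink) {
    $ExportStatus = Get-AzSqlDatabaseImportExportStatus `
        -OperationStatusLink $ExportRequest.OperationStatusLink
} else {
    Write-Warning "New-AzSqlDatabaseExport returned no OperationStatusLink; cannot poll export progress."
}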
This seems to be a regression in the Az.Sql module introduced in 2.10.0, and it is still active in the current version (2.11.0).
Symptoms:
When initiating an export operation, the following exception is raised: New-AzSqlDatabaseExport: Missing the required 'networkIsolation' parameters for ImportExport operation.
The issue:
This should be an optional parameter, and the parameter name is incorrect; it should be -UseNetworkIsolation instead.
Workaround:
Target your script at an older version of the module; 2.9.1 seems to be OK.
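For example, in a fresh session you can pin the known-good version (a sketch; assumes Az.Sql 2.9.1 is available from the PowerShell Gallery):
# one-time install of the older module version
Install-Module Az.Sql -RequiredVersion 2.9.1 -Scope CurrentUser -Force
# load that specific version before calling New-AzSqlDatabaseExport
Import-Module Az.Sql -RequiredVersion 2.9.1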
Long term solution:
The fix has already been committed; it should be available in an upcoming release of the module.
Source of information:
https://github.com/Azure/azure-powershell/issues/13097
Update 2020-11-04
The recent version of the module (2.11.1) already contains the fix.
https://www.powershellgallery.com/packages/Az/5.0.0

Drone CI: publish a generated LaTeX PDF

Currently I am using Travis, but I want to switch to Drone.
For all TeX documents I'm using a small Makefile with a container to generate my PDF file and deploy it to my repository.
But since I'm using Gitea, I want to set up my integration pipeline with Drone; however, I don't know how to configure the .drone.yml to publish my PDF file as a release on every tag.
Currently I'm using the following .drone.yml, and I am happy to say that the build process works fine at the moment.
clone:
  git:
    image: plugins/git
    tags: true

pipeline:
  pdf:
    image: volkerraschek/docker-latex:latest
    pull: true
    commands:
      - make
and this is my Makefile
# Docker Image
IMAGE := volkerraschek/docker-latex:latest

# Input tex-file and output pdf-file
FILE := index
TEX_NAME := ${FILE}.tex
PDF_NAME := ${FILE}.pdf

latexmk:
	latexmk \
		-shell-escape \
		-synctex=1 \
		-interaction=nonstopmode \
		-file-line-error \
		-pdf ${TEX_NAME}

docker-latexmk:
	docker run \
		--rm \
		--user="$(shell id -u):$(shell id -g)" \
		--net="none" \
		--volume="${PWD}:/data" ${IMAGE} \
		make latexmk
Which tags and conditions are missing in my .drone.yml to deploy my index.pdf as a release in Gitea when I push a new git tag?
Volker
I have this setup on my Gitea/Drone pair. This is an MWE of my .drone.yml:
pipeline:
  build:
    image: tianon/latex
    commands:
      - pdflatex <filename.tex>
  gitea_release:
    image: plugins/gitea-release
    base_url: <gitea domain>
    secrets: [gitea_token]
    files: <filename.pdf>
    when:
      event: tag
So rather than setting up the Docker build in the Makefile, we add a step using a Docker image with LaTeX, compile the PDF, and use a pipeline step to release.
You'll also have to set your Drone repo to trigger builds on tags and set a Gitea API token to use. To set the API token, you can use the command line interface:
$ drone secret add <org/repo> --name gitea_token --value <token value> --image plugins/gitea-release
You can set up the drone repo to trigger builds in the repository settings in the web UI.
Note that you'll also likely have to allow *.pdf attachments in your Gitea settings, as they are disallowed by default. In your Gitea app.ini, add this to the attachment section (MAX_SIZE is in MB):
[attachment]
ENABLED = true
PATH = /data/gitea/attachments
MAX_SIZE = 10
ALLOWED_TYPES = */*
In addition to Gabe's answer, if you are using an NGINX reverse proxy, you might also have to allow larger file uploads in your nginx.conf. (This applies to all file types, not just .pdf)
server {
    [ ... ]
    location / {
        client_max_body_size 10M; # add this line
        proxy_pass http://gitea:3000;
    }
}
This fixed the problem for me.

Wildfly CLI XA-Datasource missing property

I have searched the official WildFly 10 documentation and searched the net, but strangely I didn't find a solution to my problem. When I run the CLI and try to configure an XA datasource, I can't configure the property xa-datasource-property.
These are the commands I have tried:
/subsystem=datasources/xa-data-source=TestDataSource/:add(driver-name=XA-Oracle,jndi-name=java:jboss/datasources/testDS,background-validation=false,enlistment-trace=false,flush-strategy=FailingConnectionOnly,max-pool-size=20,min-pool-size=10,no-recovery=false,password=TEST,pool-prefill=true,query-timeout=10,same-rm-override=false,statistics-enabled=true,track-statements=NOWARN,url-property=jdbc:oracle:thin:@TEST:orcl,user-name=USERNAME,validate-on-match=false,enabled=true,allow-multiple-users=false,xa-datasource-properties={"URL"=>{"value"=>"jdbc:oracle:thin"}})
/subsystem=datasources/xa-data-source=TestDataSource/:add(driver-name=XA-Oracle,jndi-name=java:jboss/datasources/testDS,background-validation=false,enlistment-trace=false,flush-strategy=FailingConnectionOnly,max-pool-size=20,min-pool-size=10,no-recovery=false,password=TEST,pool-prefill=true,query-timeout=10,same-rm-override=false,statistics-enabled=true,track-statements=NOWARN,url-property=jdbc:oracle:thin:@TEST:orcl,user-name=USERNAME,validate-on-match=false,enabled=true,allow-multiple-users=false,xa-datasource-properties={"name"=>"URL","value"=>"jdbc:oracle:thin"})
/subsystem=datasources/xa-data-source=TestDataSource/:add(driver-name=XA-Oracle,jndi-name=java:jboss/datasources/testDS,background-validation=false,enlistment-trace=false,flush-strategy=FailingConnectionOnly,max-pool-size=20,min-pool-size=10,no-recovery=false,password=TEST,pool-prefill=true,query-timeout=10,same-rm-override=false,statistics-enabled=true,track-statements=NOWARN,url-property=jdbc:oracle:thin:@TEST:orcl,user-name=USERNAME,validate-on-match=false,enabled=true,allow-multiple-users=false,xa-datasource-property={"name"=>"URL","value"=>"jdbc:oracle:thin"})
/subsystem=datasources/xa-data-source=TestDataSource/:add(driver-name=XA-Oracle,jndi-name=java:jboss/datasources/testDS,background-validation=false,enlistment-trace=false,flush-strategy=FailingConnectionOnly,max-pool-size=20,min-pool-size=10,no-recovery=false,password=TEST,pool-prefill=true,query-timeout=10,same-rm-override=false,statistics-enabled=true,track-statements=NOWARN,url-property=jdbc:oracle:thin:@TEST:orcl,user-name=USERNAME,validate-on-match=false,enabled=true,allow-multiple-users=false,xa-datasource-property={"URL"=>{"value"=>"jdbc:oracle:thin"}})
No matter which variant of the configuration I try, it tells me the property xa-datasource-properties or xa-datasource-property is unknown. When using TAB to get code completion, it offers me a lot of properties, but the required one is not among them.
Additionally, if I leave it out, it says:
{
"outcome" => "failed",
"failure-description" => "WFLYJCA0069: At least one xa-datasource-property is required for an xa-datasource",
"rolled-back" => true
}
What am I missing?
For some weird reason it was only possible using a different syntax which looks like this:
xa-data-source add --name=Test --allow-multiple-users=false --connectable=true --driver-name=XA-Oracle --enabled=true --interleaving=false --jndi-name=java:jboss/datasources/test --max-pool-size=20 --min-pool-size=10 --no-tx-separate-pool=false --pad-xid=false --password=PASSWORD --pool-prefill=true --use-ccm=true --use-java-context=true --user-name=USERNAME --wrap-xa-resource=true --xa-datasource-properties=URL=jdbc:oracle:thin
I don't understand why the preferred CLI syntax was not working, but with this method it is configurable.
If somebody knows a way to make it work with the other syntax, I would really appreciate it.
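A variant of the resource-path syntax that does work (and that the answers below also use) is to add the datasource first, disabled, and then add each xa-datasource-property as a child resource, then enable it afterwards (e.g. with the xa-data-source enable command shown further down). A sketch using the names from the question:
/subsystem=datasources/xa-data-source=TestDataSource:add(driver-name=XA-Oracle,jndi-name=java:jboss/datasources/testDS,user-name=USERNAME,password=TEST,enabled=false)
/subsystem=datasources/xa-data-source=TestDataSource/xa-datasource-properties=URL:add(value="jdbc:oracle:thin:@TEST:orcl")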
On Wildfly 10 this works:
/subsystem=datasources/xa-data-source=TestDataSource/:add(driver-name=XA-Oracle,jndi-name=java:jboss/datasources/testDS,background-validation=false,enlistment-trace=false,flush-strategy=FailingConnectionOnly,max-pool-size=20,min-pool-size=10,no-recovery=false,password=TEST,pool-prefill=true,query-timeout=10,same-rm-override=false,statistics-enabled=true,track-statements=NOWARN,url-property=jdbc:oracle:thin:@TEST:orcl,user-name=USERNAME,validate-on-match=false,enabled=true,allow-multiple-users=false,xa-datasource-class=oracle.jdbc.xa.client.OracleXADataSource)
Note the last property, which is required otherwise testing the connection may fail.
Also, there was apparently a bug in WildFly 13, where it fails to interpret the @ symbol and reports an error that
at least one xa-datasource-property is required.
A fix was supposed to go into WildFly 14. However, I am seeing that the problem persists.
Even WildFly 18 has this problem. I used a sample command like the one below
to create the xa-datasource with xa-datasource-properties:
xa-data-source add --jndi-name=java:/jdbc/TesteOracle --name=TesteOracle --driver-name=oracle --password=usuario --user-name=senha --xa-datasource-properties=URL=jdbc:oracle:thin:@localhost:1521:HE
It seems that the datasource MUST be disabled at creation time.
It worked for me as follows:
xa-data-source add \
--name=talentia-xxxxx \
--driver-name=sqlserver \
--jndi-name=java:jboss/datasources/xxxxx \
--user-name=xxxxx \
--password=xxxx \
--min-pool-size=10 \
--max-pool-size=20 \
--enabled=false \
--use-java-context=true
/subsystem=datasources/xa-data-source=xxxxxx/xa-datasource-properties=URL:add( \
value=jdbc:sqlserver://xxxxxxxx \
)
/subsystem=datasources/xa-data-source=xxxxxxx:write-attribute( \
name=valid-connection-checker-class-name, \
value=org.jboss.jca.adapters.jdbc.extensions.mssql.MSSQLValidConnectionChecker \
)
/subsystem=datasources/xa-data-source=xxxxx:write-attribute( \
name=background-validation, \
value=true \
)
/subsystem=datasources/xa-data-source=xxxxx:write-attribute( \
name=same-rm-override, \
value=true \
)
xa-data-source enable --name=xxxxx
It can also be done interactively via the CLI by declaring all properties together with the datasource in a batch, for example:
[standalone@localhost:9990 /] batch
[standalone@localhost:9990 / #] /subsystem=datasources/xa-data-source=MariaDBXADS:add(driver-name=mariadb-xa, jndi-name=java:jboss/datasources/MariaDBXADS, user-name=jdv_user, password=jdv_pass, use-java-context=true)
[standalone@localhost:9990 / #] /subsystem=datasources/xa-data-source=MariaDBXADS/xa-datasource-properties=test:add(value=test-value)
[standalone@localhost:9990 / #] run-batch
The batch executed successfully

RM + DSC to node in untrusted domain

So I mention the untrusted domain aspect because I went through all the hoops around credential delegation, trusted-hosts lists, etc. to allow me to successfully push a DSC configuration from my RM server to a target node (not using RM, just native DSC). I get that bit and it works, great.
Now when I use those same scripts in RM (with some minor edits for the format expected by RM), RM reports a successful deploy, but all that has happened is that the component's bits have been copied to the target node into the default location for $applicationPathRoot (C:\Windows\DtlDownloads); there is no real evidence of an attempt to apply a MOF file.
My RM server and target nodes are in different domains with no trust. Both servers are W2K8R2 (+ WMF4, of course). I'm running Update 4 of the RM server and client.
Here are the DSC scripts I'm running in RM:
CopyDSCResources.ps1
Configuration CopyDSCResource
{
    param (
        [Parameter(Mandatory=$false)]
        [ValidateNotNullOrEmpty()]
        [String] $ModulePath = "$env:ProgramFiles\WindowsPowershell\Modules")

    #[PSCredential] $credential = get-credential

    Node VCTSCFDSMWEB01
    {
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "C:\test.txt"
            DestinationPath = "D:\temp"
            Force = $true
            Type = "File"
        }
    }
}

CopyDSCResource -ConfigurationData $configData -Verbose

# test outside of RM
#CopyDSCResource -ConfigurationData CopyDSCResource.ConfigData.psd1
#Start-DscConfiguration -Path .\CopyDSCResource -Credential $credential -Verbose -Wait
CopyDSCResource.ConfigData.psd1
$configData = @{
    AllNodes = @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
        },
        @{
            NodeName = "VCTSCFDSWEB01.rlg.test"
            Role = "WebServer"
        }
    )
}
I'm afraid I can't upload screenshots from my current location, but in terms of RM I have a vNext environment with a single server linked, a vNext release path with a single 'Dev' stage, and a vNext release template with a single 'Deploy PS/DSC' action. The configuration of the action is:
ServerName - VCTSCFDSMWEB01
ComponentName - CopyDSCResource vNext
PSScriptPath - copydscresources.ps1
PSConfigurationPath - copydscresource.configdata.psd1
UseCredSSP - true
When I run a new release, the deploy stage reports success and when I view the Deployment log files I get the following:
Upload components - Successfully uploaded to the normalized store.
Deploy Using PS/DSC - Copying recursively from \\vcxxxxtfs03\Drops\CorrespondenceCI\CorrespondenceCI20150114.1\Scripts to C:\Windows\DtlDownloads\CopyDSCResource vNext succeeded.
Finally the DSC event log has the following:
Job {CD3BE350-4072-4C8B-835F-4B4D1C46D65D} :
Configuration is sent from computer NULL by user sid S-1-5-18.
This compares markedly to the same event log entry when run outside of RM:
Job {34F78498-CF18-4F2A-9874-EB54FDA2D990} :
Configuration is sent from computer VCXXXXTFS01 by user sid S-1-5-21-1034805355-1149422947-1317505720-10867.
Any pointers appreciated.
It would be good if I could see evidence of a MOF file being created on the RM server, for example; does anybody know where I can find this?
Turns out the crucial element was that my DSC script had to use an environment variable for naming the node. So:
Node $env:COMPUTERNAME
No idea why but it works!
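For clarity, this is the configuration from the question with only that change applied (a sketch, no other edits):
Configuration CopyDSCResource
{
    Node $env:COMPUTERNAME
    {
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "C:\test.txt"
            DestinationPath = "D:\temp"
            Force = $true
            Type = "File"
        }
    }
}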
