Run Keycloak 19 with SQL Server in Azure WebApp - sql-server

I already have a v16 running in Azure. Now I'm trying to run Keycloak 19 in an Azure WebApp (with Azure SQL Server), but the container always stops with a timeout.
My Dockerfile:
FROM quay.io/keycloak/keycloak:latest as builder
ENV KC_HEALTH_ENABLED=true
ENV KC_METRICS_ENABLED=true
ENV KC_FEATURES=token-exchange
RUN curl -sL https://github.com/aerogear/keycloak-metrics-spi/releases/download/2.5.3/keycloak-metrics-spi-2.5.3.jar -o /opt/keycloak/providers/keycloak-metrics-spi-2.5.3.jar
RUN /opt/keycloak/bin/kc.sh \
build \
--db=mssql \
--transaction-xa-enabled=false
FROM quay.io/keycloak/keycloak:latest
COPY --from=builder /opt/keycloak/ /opt/keycloak/
WORKDIR /opt/keycloak
RUN keytool -genkeypair -storepass password -storetype PKCS12 -keyalg RSA -keysize 2048 -dname "CN=server" -alias server -ext "SAN:c=DNS:localhost,IP:127.0.0.1" -keystore conf/server.keystore
ENV KC_DB=mssql
ENV KC_DB_URL=jdbc:sqlserver://<SERVER>:1433;databaseName=keycloak
ENV KC_DB_USERNAME=<USER>
ENV KC_DB_PASSWORD=<PASS>
ENV KC_HOSTNAME=localhost
EXPOSE 8443
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start", "--optimized"]
It runs fine locally; the problem only occurs in Azure.
The container log:
2022-09-06T01:35:02.819Z INFO - Pulling image: marcem/keycloak:19.0.1
2022-09-06T01:35:04.669Z INFO - 19.0.1 Pulling from marcem/keycloak
2022-09-06T01:35:04.670Z INFO - Digest: sha256:41fe4fe72ecc4625032ef08b91fc3c64739b53482dd83a15d77c9e2b4f0f12e0
2022-09-06T01:35:04.671Z INFO - Status: Image is up to date for marcem/keycloak:19.0.1
2022-09-06T01:35:04.674Z INFO - Pull Image successful, Time taken: 0 Minutes and 1 Seconds
2022-09-06T01:35:04.686Z INFO - Starting container for site
2022-09-06T01:35:04.687Z INFO - docker run -d --expose=8443 --name idteste19_0_b2d18046 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=8443 -e WEBSITE_SITE_NAME=idteste19 -e WEBSITE_AUTH_ENABLED=False -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=idteste19.azurewebsites.net -e WEBSITE_INSTANCE_ID=d666afc5e23f437c473fe3731926e159eed3db588814c4ad67c48018d825c3c4 -e WEBSITE_USE_DIAGNOSTIC_SERVER=False marcem/keycloak:19.0.1
2022-09-06T01:35:04.687Z INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2022-09-06T01:35:06.990Z INFO - Initiating warmup request to container idteste19_0_b2d18046 for site idteste19
2022-09-06T01:35:22.306Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 15.3159746 sec
2022-09-06T01:35:38.239Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 31.2483851 sec
2022-09-06T01:35:54.129Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 47.1388503 sec
2022-09-06T01:36:09.300Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 62.3097502 sec
2022-09-06T01:36:24.480Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 77.4895726 sec
2022-09-06T01:36:40.237Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 93.2471132 sec
2022-09-06T01:36:55.426Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 108.4360961 sec
2022-09-06T01:37:10.588Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 123.5979024 sec
2022-09-06T01:37:25.747Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 138.7566758 sec
2022-09-06T01:37:40.925Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 153.9341915 sec
2022-09-06T01:37:56.075Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 169.0848266 sec
2022-09-06T01:38:12.088Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 185.097369 sec
2022-09-06T01:38:27.253Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 200.2621661 sec
2022-09-06T01:38:42.393Z INFO - Waiting for response to warmup request for container idteste19_0_b2d18046. Elapsed time = 215.4024129 sec
2022-09-06T01:38:57.060Z ERROR - Container idteste19_0_b2d18046 for site idteste19 did not start within expected time limit. Elapsed time = 230.0696036 sec
2022-09-06T01:38:57.086Z ERROR - Container idteste19_0_b2d18046 didn't respond to HTTP pings on port: 8443, failing site start. See container logs for debugging.
2022-09-06T01:38:57.093Z INFO - Stopping site idteste19 because it failed during startup.
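The warmup phase in the log above is essentially an HTTP poll against `WEBSITES_PORT` until the container answers or the ~230 s limit expires. A rough stdlib sketch of that kind of check (a hypothetical helper, not Azure's actual implementation):

```python
import time
import urllib.error
import urllib.request

def wait_for_http(url: str, timeout_s: float = 230.0, interval_s: float = 1.0) -> bool:
    """Poll `url` until any HTTP response arrives or `timeout_s` elapses."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True   # any 2xx/3xx response means the container is up
        except urllib.error.HTTPError:
            return True       # 4xx/5xx still proves something answered HTTP
        except (urllib.error.URLError, OSError):
            time.sleep(interval_s)  # nothing listening yet; retry
    return False
```

If the container only serves HTTPS on 8443 while the ping is plain HTTP, a check like this never succeeds, which is consistent with the timeout above.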
The Keycloak log:
2022-09-06T01:35:18.552235082Z 2022-09-06 01:35:13,613 INFO [org.keycloak.common.Profile] (main) Preview feature enabled: token_exchange
2022-09-06T01:35:18.554454187Z 2022-09-06 01:35:13,647 INFO [org.keycloak.quarkus.runtime.hostname.DefaultHostnameProvider] (main) Hostname settings: FrontEnd: localhost, Strict HTTPS: true, Path: <request>, Strict BackChannel: false, Admin: <request>, Port: -1, Proxied: false
2022-09-06T01:35:18.620453452Z 2022-09-06 01:35:16,830 INFO [org.keycloak.common.crypto.CryptoIntegration] (main) Detected crypto provider: org.keycloak.crypto.def.DefaultCryptoProvider
2022-09-06T01:35:20.874562080Z 2022-09-06 01:35:20,872 WARN [org.infinispan.CONFIG] (keycloak-cache-init) ISPN000569: Unable to persist Infinispan internal caches as no global state enabled
2022-09-06T01:35:20.913982278Z 2022-09-06 01:35:20,913 WARN [org.infinispan.PERSISTENCE] (keycloak-cache-init) ISPN000554: jboss-marshalling is deprecated and planned for removal
2022-09-06T01:35:21.007026510Z 2022-09-06 01:35:21,006 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000556: Starting user marshaller 'org.infinispan.jboss.marshalling.core.JBossUserMarshaller'
2022-09-06T01:35:22.045335289Z 2022-09-06 01:35:22,038 INFO [org.infinispan.CONTAINER] (keycloak-cache-init) ISPN000128: Infinispan version: Infinispan 'Triskaidekaphobia' 13.0.9.Final
2022-09-06T01:35:22.439656965Z 2022-09-06 01:35:22,439 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000078: Starting JGroups channel `ISPN`
2022-09-06T01:35:22.442769173Z 2022-09-06 01:35:22,442 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000088: Unable to use any JGroups configuration mechanisms provided in properties {}. Using default JGroups configuration!
2022-09-06T01:35:22.762236477Z 2022-09-06 01:35:22,761 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-09-06T01:35:22.773617706Z 2022-09-06 01:35:22,773 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 20.00MB, but the OS only allocated 212.99KB
2022-09-06T01:35:22.778530319Z 2022-09-06 01:35:22,777 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the send buffer of socket MulticastSocket was set to 1.00MB, but the OS only allocated 212.99KB
2022-09-06T01:35:22.783215231Z 2022-09-06 01:35:22,782 WARN [org.jgroups.protocols.UDP] (keycloak-cache-init) JGRP000015: the receive buffer of socket MulticastSocket was set to 25.00MB, but the OS only allocated 212.99KB
2022-09-06T01:35:24.868971104Z 2022-09-06 01:35:24,868 INFO [org.jgroups.protocols.pbcast.GMS] (keycloak-cache-init) 3ede773e307d-43775: no members discovered after 2018 ms: creating cluster as coordinator
2022-09-06T01:35:24.891014759Z 2022-09-06 01:35:24,890 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000094: Received new cluster view for channel ISPN: [3ede773e307d-43775|0] (1) [3ede773e307d-43775]
2022-09-06T01:35:24.901995387Z 2022-09-06 01:35:24,900 INFO [org.infinispan.CLUSTER] (keycloak-cache-init) ISPN000079: Channel `ISPN` local address is `3ede773e307d-43775`, physical addresses are `[169.254.129.3:52868]`
2022-09-06T01:35:26.153975737Z 2022-09-06 01:35:26,153 INFO [org.keycloak.connections.infinispan.DefaultInfinispanConnectionProviderFactory] (main) Node name: 3ede773e307d-43775, Site name: null
2022-09-06T01:35:28.435231448Z 2022-09-06 01:35:28,434 INFO [io.quarkus] (main) Keycloak 19.0.1 on JVM (powered by Quarkus 2.7.6.Final) started in 21.561s. Listening on: https://0.0.0.0:8443
2022-09-06T01:35:28.436131450Z 2022-09-06 01:35:28,435 INFO [io.quarkus] (main) Profile prod activated.
2022-09-06T01:35:28.436823052Z 2022-09-06 01:35:28,436 INFO [io.quarkus] (main) Installed features: [agroal, cdi, hibernate-orm, jdbc-h2, jdbc-mariadb, jdbc-mssql, jdbc-mysql, jdbc-oracle, jdbc-postgresql, keycloak, logging-gelf, narayana-jta, reactive-routes, resteasy, resteasy-jackson, smallrye-context-propagation, smallrye-health, smallrye-metrics, vault, vertx]
I tried changing KC_HOSTNAME (from localhost to xxx.azurewebsites.net), KC_HOSTNAME_PORT (to 443 and 8443), PROXY_ADDRESS_FORWARDING (to false and true) and WEBSITES_PORT (to 80, 8080 and 8443), but without success.
Any ideas?
Thanks a lot

I managed to get the Keycloak UI to show and the login to work. Now the only problem is with the SSL cert.
To get it started, follow these steps.
In the Dockerfile I added
CMD ["start", "--hostname-strict=false", "--hostname-strict-https=false"]
after the ENTRYPOINT.
Then, in the Azure Configuration, you need to add these variables:
PORT: 8080
and
WEBSITES_PORT: 8080
and
KC_DB_URL="jdbc:sqlserver://mydatabaseserver.database.windows.net;database=mydatabase"
You can also check the logs by enabling App Service Logs, then logging in to the Advanced Tools and checking the log files. The file ending with _docker.log is the one to check whether it starts up correctly.
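The KC_DB_URL shown above can also be assembled programmatically. A small sketch (hypothetical helper; `encrypt` and `trustServerCertificate` are Microsoft JDBC driver connection properties — verify them against the driver documentation for your version):

```python
def azure_sql_jdbc_url(server: str, database: str) -> str:
    """Build a KC_DB_URL value in the shape used above for Azure SQL."""
    # Azure SQL requires encrypted connections; these are standard
    # Microsoft JDBC driver connection properties.
    return (
        f"jdbc:sqlserver://{server}.database.windows.net:1433;"
        f"database={database};"
        "encrypt=true;trustServerCertificate=false;loginTimeout=30"
    )

print(azure_sql_jdbc_url("mydatabaseserver", "mydatabase"))
```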

I was also able to solve the problem by setting environment variables:
ENV KC_HOSTNAME_STRICT=false
ENV KC_HOSTNAME_STRICT_HTTPS=false
ENV KC_HTTP_PORT=8080
ENV KC_HTTP_ENABLED=true
and using ENTRYPOINT
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start"]

Related

discord bot errors (500) every 14 min being hosted on Google App Engine

I have a discord bot that is being hosted on Google App Engine. It will work and run, and then roughly every 14 minutes the bot goes offline and I see these errors:
Upon further review of the error logs, this is the output:
0: {
logMessage: "This request caused a new process to be started for your application, and thus caused your application code to be loaded for the first time. This request may thus take longer and use more CPU than a typical request for your application."
severity: "INFO"
time: "2021-10-03T16:29:18.831860Z"
}
1: {
logMessage: "The warmup request failed. Please check your warmup handler implementation and make sure it's working correctly."
severity: "INFO"
time: "2021-10-03T16:29:18.831862Z"
}
2: {
logMessage: "Process terminated because it failed to respond to the start request with an HTTP status code of 200-299 or 404."
severity: "ERROR"
time: "2021-10-03T16:29:18.831863Z"
}
My app.yaml file is as follows:
runtime: python38
instance_class: B1
manual_scaling:
  instances: 1
entrypoint: python3 bot.py
I'm quite new to GCP and hosting web services, so I am quite lost. Any help here is deeply appreciated.
You need to provide a URL handler for /_ah/start (and might as well also provide /_ah/stop and /_ah/warmup). Those are calls GAE makes to start and stop your app, and they should return an HTTP response of 200. Here is an example in Flask:
from flask import Flask

app = Flask(__name__)

@app.route('/_ah/start')
@app.route('/_ah/stop')
@app.route('/_ah/warmup')
def warmup():
    # Handle your warmup logic here, e.g. set up a database connection pool
    return '', 200, {}
EDIT: Valid responses are 200–299 or 404
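For illustration, the same lifecycle handlers can be sketched with only the standard library (the `/_ah/*` paths are GAE's; everything else here is just a plain `http.server` handler, not GAE-specific code):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

LIFECYCLE_PATHS = {"/_ah/start", "/_ah/stop", "/_ah/warmup"}

class LifecycleHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # GAE treats 200-299 (or 404) as a valid lifecycle response
        status = 200 if self.path in LIFECYCLE_PATHS else 404
        self.send_response(status)
        self.send_header("Content-Length", "0")
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # silence per-request logging

def make_server(port: int = 8080) -> HTTPServer:
    """Bind the handler; call .serve_forever() on the result to run it."""
    return HTTPServer(("127.0.0.1", port), LifecycleHandler)
```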

TFX/Apache Beam -> Flink jobs hang when running on more than one task manager

When I try to run a TFX pipeline / Apache Beam job on a Flink runner, it works fine when using 1 task manager (on one node) with parallelism 2 (2 task slots per task manager), but it hangs when I try higher parallelism across more than one task manager, with this message constantly repeating on both task managers:
INFO org.apache.beam.runners.fnexecution.environment.ExternalEnvironmentFactory [] - Still waiting for startup of environment from a65a0c5f8f962428897aac40763e57b0-1334930809.eu-central-1.elb.amazonaws.com:50000 for worker id 1-1
The Flink cluster runs on a native Kubernetes deployment on an AWS EKS Kubernetes Cluster.
I use the following parameters:
"--runner=FlinkRunner",
"--parallelism=4",
f"--flink_master={flink_url}:8081",
"--environment_type=EXTERNAL",
f"--environment_config={beam_sdk_url}:50000",
"--flink_submit_uber_jar",
"--worker_harness_container_image=none",
EDIT: Adding additional info about the configuration.
I have configured the Beam workers to run as side-cars (at least this is my understanding of how it should work), by setting the Flink parameter:
kubernetes.pod-template-file.taskmanager
which points to a template file with these contents:
kind: Pod
metadata:
  name: taskmanager-pod-template
spec:
  #hostNetwork: true
  containers:
    - name: flink-main-container
      #image: apache/flink:scala_2.12
      env:
        - name: AWS_REGION
          value: "eu-central-1"
        - name: S3_VERIFY_SSL
          value: "0"
        - name: PYTHONPATH
          value: "/data/flink/src"
      args: ["taskmanager"]
      ports:
        - containerPort: 6122 #22
          name: rpc
        - containerPort: 6125
          name: query-state
      livenessProbe:
        tcpSocket:
          port: 6122 #22
        initialDelaySeconds: 30
        periodSeconds: 60
    - name: beam-worker-pool
      env:
        - name: PYTHONPATH
          value: "/data/flink/src"
        - name: AWS_REGION
          value: "eu-central-1"
        - name: S3_VERIFY_SSL
          value: "0"
      image: 848221505146.dkr.ecr.eu-central-1.amazonaws.com/flink-workers
      imagePullPolicy: Always
      args: ["--worker_pool"]
      ports:
        - containerPort: 50000
          name: pool
      livenessProbe:
        tcpSocket:
          port: 50000
        initialDelaySeconds: 30
        periodSeconds: 60
I have also created a Kubernetes load balancer for the task managers, so clients can connect on port 50000, and I use that address when configuring:
f"--environment_config={beam_sdk_url}:50000",
EDIT 2: It looks like the Beam SDK harness on one task manager wants to connect to the endpoint running on the other task manager, but looks for it on localhost:
Log from beam-worker-pool on TM 2:
2021/08/11 09:43:16 Failed to obtain provisioning information: failed to dial server at localhost:33705
caused by:
context deadline exceeded
The provision endpoint on TM 1 is the one actually listening on port 33705, while this looks for it on localhost, so it cannot connect.
EDIT 3: Showing how I test this:
...............
TM 1:
========
$ kubectl logs my-first-flink-cluster-taskmanager-1-1 -c beam-worker-pool
2021/08/12 09:10:34 Starting worker pool 1: python -m apache_beam.runners.worker.worker_pool_main --service_port=50000 --container_executable=/opt/apache/beam/boot
Starting worker with command ['/opt/apache/beam/boot', '--id=1-1', '--logging_endpoint=localhost:33383', '--artifact_endpoint=localhost:43477', '--provision_endpoint=localhost:40983', '--control_endpoint=localhost:34793']
2021/08/12 09:13:05 Failed to obtain provisioning information: failed to dial server at localhost:40983
caused by:
context deadline exceeded
TM 2:
=========
$ kubectl logs my-first-flink-cluster-taskmanager-1-2 -c beam-worker-pool
2021/08/12 09:10:33 Starting worker pool 1: python -m apache_beam.runners.worker.worker_pool_main --service_port=50000 --container_executable=/opt/apache/beam/boot
Starting worker with command ['/opt/apache/beam/boot', '--id=1-1', '--logging_endpoint=localhost:40497', '--artifact_endpoint=localhost:36245', '--provision_endpoint=localhost:32907', '--control_endpoint=localhost:46083']
2021/08/12 09:13:09 Failed to obtain provisioning information: failed to dial server at localhost:32907
caused by:
context deadline exceeded
Testing:
.........................
TM 1:
============
$ kubectl exec -it my-first-flink-cluster-taskmanager-1-1 -c beam-worker-pool -- bash
root@my-first-flink-cluster-taskmanager-1-1:/# curl localhost:40983
curl: (7) Failed to connect to localhost port 40983: Connection refused
root@my-first-flink-cluster-taskmanager-1-1:/# curl localhost:32907
Warning: Binary output can mess up your terminal. Use "--output -" to ...
TM 2:
=============
root@my-first-flink-cluster-taskmanager-1-2:/# curl localhost:32907
curl: (7) Failed to connect to localhost port 32907: Connection refused
root@my-first-flink-cluster-taskmanager-1-2:/# curl localhost:40983
Warning: Binary output can mess up your terminal. Use "--output -" to tell
Warning: curl to output it to your terminal anyway, or consider "--output
Not sure how to fix this.
Thanks,
Gorjan
It's not recommended to connect to the same environment from different task managers. Usually we recommend setting up the Beam workers as sidecars to the task managers, so there's a 1:1 correspondence, then connecting via localhost. See the example configs at https://github.com/GoogleCloudPlatform/flink-on-k8s-operator/blob/master/examples/beam/without_job_server/beam_flink_cluster.yaml and https://github.com/GoogleCloudPlatform/flink-on-k8s-operator/blob/master/examples/beam/without_job_server/beam_wordcount_py.yaml
I was able to fix this by setting the Beam SDK address to localhost instead of using a load balancer. So the config I use now is:
"--runner=FlinkRunner",
"--parallelism=4",
f"--flink_master={flink_url}:8081",
"--environment_type=EXTERNAL",
"--environment_config=localhost:50000", # <--- Changed the address to localhost
"--flink_submit_uber_jar",
"--worker_harness_container_image=none",

Flink 1.10.0 - The heartbeat of ResourceManager with id xxxx timed out

I am running a Flink standalone HA cluster in Kubernetes. The same setup runs perfectly when using Flink 1.9, but I get the error below continuously when using Flink 1.10.
INFO org.apache.flink.runtime.taskexecutor.TaskExecutor - The heartbeat of ResourceManager with id 783439e4ead380c60498e32a8e1c0ce3 timed out.
DEBUG org.apache.flink.runtime.taskexecutor.TaskExecutor - Close ResourceManager connection 783439e4ead380c60498e32a8e1c0ce3.
org.apache.flink.runtime.taskexecutor.exceptions.TaskManagerException: The heartbeat of ResourceManager with id 783439e4ead380c60498e32a8e1c0ce3 timed out.
at org.apache.flink.runtime.taskexecutor.TaskExecutor$ResourceManagerHeartbeatListener.notifyHeartbeatTimeout(TaskExecutor.java:1842)
at org.apache.flink.runtime.heartbeat.HeartbeatMonitorImpl.run(HeartbeatMonitorImpl.java:109)
flink-conf.yaml :
jobmanager.rpc.address: xx.xxx.xx.xxx
jobmanager.rpc.port: 6123
jobmanager.heap.size: 1500m
taskmanager.memory.process.size: 4000m
taskmanager.numberOfTaskSlots: 1
parallelism.default: 1
jobmanager.execution.failover-strategy: region
state.backend: filesystem
state.checkpoints.dir: file:///checkpoints
state.savepoints.dir: file:///savepoints
high-availability: zookeeper
high-availability.jobmanager.port: 50010
high-availability.zookeeper.quorum: xx.xx.xx.xx:xxxx
high-availability.zookeeper.path.root: /flink
high-availability.cluster-id: /ABCD
high-availability.storageDir: file:///recovery
heartbeat.interval: 60000
heartbeat.timeout: 60000
taskmanager.debug.memory.log: true
taskmanager.debug.memory.log-interval: 10000
taskmanager.memory.managed.fraction: 0.1
blob.server.port: 6124
query.server.port: 6125
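One detail worth flagging in the config above: heartbeat.interval and heartbeat.timeout are both 60000, so a single delayed heartbeat exhausts the timeout (Flink's defaults keep the interval at 10000, well below the 50000 timeout). A sanity-check sketch (hypothetical helper; it only parses flat `key: value` lines like the file above):

```python
def parse_flat_conf(text: str) -> dict:
    """Parse flat `key: value` lines as found in flink-conf.yaml."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and ":" in line:
            key, _, value = line.partition(":")
            conf[key.strip()] = value.strip()
    return conf

def heartbeat_has_slack(conf: dict) -> bool:
    """True if the heartbeat interval is strictly below the timeout."""
    interval = int(conf.get("heartbeat.interval", "10000"))  # Flink default
    timeout = int(conf.get("heartbeat.timeout", "50000"))    # Flink default
    return interval < timeout
```

For the values above, `heartbeat_has_slack(parse_flat_conf("heartbeat.interval: 60000\nheartbeat.timeout: 60000"))` returns False.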

Flink TaskManager livenessProbe doesn't work

I'm following this doc to configure probes for the JobManager and TaskManager on Kubernetes.
The JobManager works perfectly, but the TaskManager doesn't. I noticed in the pod log that the liveness probe failed:
Normal Killing 3m36s kubelet, gke-dagang-test-default-pool-494df2ba-vhs5 Killing container with id docker://taskmanager:Container failed liveness probe.. Container will be killed and recreated.
Warning Unhealthy 37s (x8 over 7m37s) kubelet, gke-dagang-test-default-pool-494df2ba-vhs5 Liveness probe failed: dial tcp 10.20.1.54:6122: connect: connection refused
I'm wondering: does the TM actually listen on 6122?
Flink version: 1.9.0
It turns out it was because I didn't add taskmanager.rpc.port: 6122 in flink-conf.yaml; now it works perfectly.
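A kubelet `tcpSocket` liveness probe boils down to a single TCP connect attempt, which is why it keeps failing until something actually listens on 6122. A stdlib sketch of the same kind of check (illustrative only, not kubelet code):

```python
import socket

def tcp_probe(host: str, port: int, timeout_s: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds, else False."""
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return True
    except OSError:  # refused, timed out, unreachable, ...
        return False
```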

SonarQube docker image does not run successfully under App Service ACI

I'm trying to implement SonarQube continuous inspection in Azure DevOps with the help of a Windows container instance. After creating the Azure SonarQube instance (Docker sonarqube latest image) and an Azure SQL database, I try to bind the SonarQube Windows instance to the Azure SQL server with the help of the Azure CLI commands below:
az webapp config connection-string set --resource-group $RESOURCE_GROUP_NAME --name $WEBAPP_NAME -t SQLAzure --settings SONARQUBE_JDBC_URL=$DB_CONNECTION_STRING --connection-string-type SQLAzure
az webapp config container set --name $WEBAPP_NAME --resource-group $RESOURCE_GROUP_NAME --docker-custom-image-name $CONTAINER_REGISTRY_FQDN/$CONTAINER_IMAGE_NAME:$CONTAINER_IMAGE_TAG --docker-registry-server-url https://$CONTAINER_REGISTRY_FQDN --docker-registry-server-user $REG_ADMIN_USER --docker-registry-server-password $REG_ADMIN_PASSWORD
For this I used the serverless SonarQube setup approach described in the article below:
https://github.com/Hupka/sonarqube-azure-setup
But I am getting the container logs below while running the SonarQube container instance:
2019-06-13 14:28:34.362 INFO - Logging is not enabled for this container.
Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here.
2019-06-13 14:28:38.819 INFO - Initiating warmup request to container SonarQubewebappName for site sonarqube-docker
2019-06-13 14:28:54.260 INFO - Waiting for response to warmup request for container SonarQubewebappName. Elapsed time = 15.4410269 sec
2019-06-13 14:29:12.285 INFO - Waiting for response to warmup request for container SonarQubewebappName. Elapsed time = 33.4654201 sec
2019-06-13 14:29:28.296 INFO - Waiting for response to warmup request for container SonarQubewebappName. Elapsed time = 49.4772459 sec
2019-06-13 14:29:44.637 INFO - Waiting for response to warmup request for container SonarQubewebappName. Elapsed time = 65.8173845 sec
2019-06-13 14:29:56.670 ERROR - Container SonarQubewebappName for site SonarQubewebappName has exited, failing site start
2019-06-13 14:29:56.693 ERROR - Container SonarQubewebappName didn't respond to HTTP pings on port: 9000, failing site start.
I get a ":( Application Error" page while accessing SonarQube.
However, the same configuration works for a SonarQube Docker instance created locally and linked to the same Azure SQL database.
Can you please help me out with this error?
