Unable to connect to SQL Server inside a Docker container - sql-server

I'm creating a container that has a .NET Core application, based on the official Microsoft image.
I'm getting the following error when the application tries to connect to SQL Server, and it only happens inside the container: A connection was successfully established with the server, but then an error occurred during the pre-login handshake. (provider: SSL Provider, error: 31 - Encryption(ssl/tls) handshake failed)
My application uses the following packages to communicate with SQL Server:
System.Data.SqlClient 4.8.2
Dapper 2.0.78
My connection string looks like this:
Data Source=xxx.xxx.xxx.xxx;Initial Catalog=XXXXXXXX;User ID=XXXXX;Password=XXXX;MultipleActiveResultSets=True;Connect Timeout=60;TrustServerCertificate=true
My Docker version is the latest available at the time ("Docker version 20.10.2, build 2291f61"), running on Windows, and this is how I'm creating the Docker image:
FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build-env
WORKDIR /prj
COPY . .
RUN dotnet publish myapp.csproj -o /app
FROM mcr.microsoft.com/dotnet/aspnet:3.1
WORKDIR /app
RUN update-ca-certificates --fresh
COPY --from=build-env /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
I've been googling for some time now and I haven't found a solution that works for me. Can someone help?

This error typically occurs in client environments (Docker containers, Unix clients, or Windows clients) where TLS 1.2 is the minimum TLS protocol the client will accept.
Configure the TLS/SSL settings in the Docker image/client environment so it can also connect with TLS 1.0:
MinProtocol = TLSv1
CipherString = DEFAULT@SECLEVEL=1
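These two settings belong in the OpenSSL configuration of the container. On a Debian 10 ("buster") based image such as mcr.microsoft.com/dotnet/aspnet:3.1, where the defaults live in /etc/ssl/openssl.cnf, a minimal Dockerfile sketch (assuming the stock Debian config values) would be:
FROM mcr.microsoft.com/dotnet/aspnet:3.1
# Relax the system-wide OpenSSL defaults so the driver can negotiate TLS 1.0/1.1
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/' /etc/ssl/openssl.cnf \
 && sed -i 's/CipherString = DEFAULT@SECLEVEL=2/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
Keep in mind this weakens TLS for the whole container; enabling TLS 1.2 on the SQL Server side is the cleaner long-term fix.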

Related

Keycloak 19 Docker can't find suitable driver for SQL Server

I deployed Keycloak 19 in Azure App Service using the quay.io/keycloak/keycloak:19.0.1 image. It works fine when using the dev file database, but I'm having trouble connecting to my SQL Server database. I followed the instructions here, but I'm getting the error below:
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: No suitable driver found for jdbc:sqlserver://MySDBServer:1433;databaseName=MyDatabaseName
Here's my setup in the App Settings configuration:
KC_DB:mssql
KC_DB_URL_HOST:[MyDatabaseServer]
KC_DB_PASSWORD:[MyDatabasePassword]
KC_DB_USERNAME:[MyDatatabaseUsername]
KC_DB_URL_DATABASE:[MydatabaseName]
KC_PROXY:edge
Passing the --optimized parameter to the start command should fix it.
bin/kc.[sh|bat] start --optimized
In the case of a Dockerfile, your ENTRYPOINT will look like this:
ENTRYPOINT ["/opt/keycloak/bin/kc.sh", "start", "--optimized"]
You can read about the --optimized parameter at the links given below.
Optimize the Keycloak startup
Changes to the server configuration and startup

How to add mssql-jdbc_auth to a Docker image?

I'm new to Docker and I'm trying to create a Docker image that connects to an external MS SQL database, which is not inside a Docker container.
The problem is that I can't run the image as a container, because the following error occurs:
Caused by: com.microsoft.sqlserver.jdbc.SQLServerException: This driver is not configured for integrated authentication.
java.lang.UnsatisfiedLinkError: no mssql-jdbc_auth-9.2.1.x64 in java.library.path: /usr/java/packages/lib:/usr/lib64:/lib64:/lib:/usr/lib
I don't know how to copy the file mssql-jdbc_auth-9.2.1.x64 from my local drive into the Docker image; locally the auth file is present and the connection works.
FROM sapmachine
COPY target/*.jar Test-0.0.1-SNAPSHOT.jar
ENV JAVA_OPTS=""
EXPOSE 8082
ENTRYPOINT ["sh","-c","java $JAVA_OPTS -Djava.security-egd=file:/dev/./urandom -jar /Test-0.0.1-SNAPSHOT.jar"]
Thank you for your help!
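A minimal sketch of how a native library is usually copied into an image and exposed via java.library.path (the libs/ folder and /opt/sqljdbc_auth path are hypothetical; also note that mssql-jdbc_auth-9.2.1.x64 is a Windows DLL, so on a Linux base image such as sapmachine a Linux-compatible library or Kerberos-based authentication would be needed instead):
FROM sapmachine
# Hypothetical: the native auth library is placed in a libs/ folder next to the Dockerfile
COPY libs/ /opt/sqljdbc_auth/
COPY target/*.jar Test-0.0.1-SNAPSHOT.jar
ENV JAVA_OPTS=""
EXPOSE 8082
# -Djava.library.path tells the JVM where to look for native libraries
ENTRYPOINT ["sh","-c","java $JAVA_OPTS -Djava.library.path=/opt/sqljdbc_auth -Djava.security.egd=file:/dev/./urandom -jar /Test-0.0.1-SNAPSHOT.jar"]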

How can I connect to a SQL Server on the internet from a .NET Core Linux Docker container (using Docker for Windows)?

We're trying to access a SQL Server database that's on a remote network from within a Docker container. However, every attempt at connecting just hangs the application (no exception, no timeout, just endless nothing).
It's easy to reproduce by creating a new .NET Core 3.1 console application and adding this code (with a package reference to Microsoft.Data.SqlClient):
using System;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

namespace ConsoleApp1
{
    class Program
    {
        static async Task Main(string[] args)
        {
            var connStr = "Data Source=<public server ip>,1433;Integrated Security=False;Initial Catalog=TestDatabase;User ID=user;Password=password";
            await using var connection = new SqlConnection(connStr);
            await connection.OpenAsync();
            Console.WriteLine("Hello World!");
        }
    }
}
Then add Docker support via Visual Studio, which generates a Dockerfile:
#See https://aka.ms/containerfastmode to understand how Visual Studio uses this Dockerfile to build your images for faster debugging.
FROM mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["ConsoleApp1/ConsoleApp1.csproj", "ConsoleApp1/"]
RUN dotnet restore "ConsoleApp1/ConsoleApp1.csproj"
COPY . .
WORKDIR "/src/ConsoleApp1"
RUN dotnet build "ConsoleApp1.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "ConsoleApp1.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "ConsoleApp1.dll"]
Now, when we start the project with the ConsoleApp1 profile, the connection is made as expected. When starting the project using the Docker profile, the application just hangs on the await connection.OpenAsync(); call.
There's no exception or anything. It just doesn't connect and sits there indefinitely...
I realise running a Linux container on a Windows machine involves all kinds of magic, but making a web request to an external page does work, which gives me the impression that it has something to do with the port number or the nature of a connection to a SQL Server.
Any help would be greatly appreciated.
As it turns out, the newer version of the mcr.microsoft.com/dotnet/core/runtime:3.1-buster-slim base image is much stricter about TLS/cipher usage.
This, combined with a bug in SqlClient, caused this behaviour. The issue and various resolutions/workarounds are discussed very extensively in the SqlClient repository:
https://github.com/dotnet/SqlClient/issues/201

Jenkins Artifactory plug-in: Error occurred while requesting version information: Connection refused

I get the error "Error occurred while requesting version information: Connection refused" when I test the connection in the Jenkins configuration for the Artifactory plug-in. I have tried it with Anonymous access enabled in Artifactory and with it disabled, and I have tried all three options (Supported, Unsupported, Required) for Password Encryption in Artifactory. I have Default Deployer Credentials in my Jenkins Artifactory configuration, and I have tested the connection both with 'Use Different Resolver Credentials' and without. I consistently get this error.
Any help/ideas would be greatly appreciated.
I also ran into a similar problem yesterday.
Problem:
I was running Jenkins and Artifactory in two different Docker containers on my local machine. I had exposed port 8086 for Artifactory and could access it at http://localhost:8086/artifactory in my browser, but giving the same URL for Artifactory in Jenkins produced the error reported in the question.
Solution:
For some unknown reason, the Jenkins Artifactory plugin couldn't resolve http://localhost:8086/artifactory even though the Docker port mappings were correct and it was possible to connect to the Artifactory web console with the same URL.
Replacing "localhost" with the Docker container's IP did the trick.
The name of the container in which Artifactory was running was docker-plgr_artifactory_1:
Admins-MacBook-Pro-2:~ prakash.tiwari$ docker exec -it docker-plgr_artifactory_1 cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.2 08038bc9449b
The IP of the container was 172.18.0.2, so I replaced http://localhost:8086/artifactory with http://172.18.0.2:8081/artifactory and Jenkins was then able to connect to Artifactory. (8081 is the port inside the Docker container on which Artifactory was running; you would have given it at the time of running the container. Alternatively, you can find it by running docker ps and checking the value under the PORTS field.)
Credit: https://www.arvinep.com/2016/04/jenkins-docker-container-problem.html
Note: I know this solution doesn't explain the cause and why it works, but I hope it at least helps some people and saves their time.
I see that you asked this question a while ago; I just had to deal with a very similar situation. I had loaded the root and intermediate certificates into the cacerts files found under the four Java versions on the build server. The problem was that Jenkins uses its own cacerts file, found in the Jenkins install folder. Once I loaded the certs there, I was able to test the connection to Artifactory and upload the build artifacts. I hope this helps.

Error connecting to Redis Server from Node.js on Amazon AWS EC2 server

I am trying to run a Node.js server and a Redis server on an Amazon AWS EC2 micro instance.
I have installed the Redis server, and the redis-server command runs fine.
I use 'forever' to keep the Redis server running, and it works fine.
But when I start my Node server, it fails to connect to the Redis server.
It gives the following error:
Error Error: Redis connection to 127.0.0.1:6379 failed - connect ECONNREFUSED
Running 'forever list' shows that the Redis server is running fine:
info: Forever processes running
data: uid command script forever pid logfile uptime
data: [0] _pXw node app.js 26670 26671 /home/ubuntu/.forever/_pXw.log 0:0:0:13.463
data: [1] ylT1 node redis-server 25013 26681
I have verified that when redis-server starts, it listens on port 6379.
Can anyone explain why this error is happening and how I can fix it?
I use the following code to connect to Redis. I have the Redis client libraries installed:
var redis = require("redis"),
client = redis.createClient();
Everything runs fine when I run the code on my localhost.
If you are going to access Redis from outside AWS, you can try the following steps, which helped me connect to a Redis server running on AWS from my local Node.js application:
1) On AWS: back up the config first: sudo cp /etc/redis/redis.conf /etc/redis/redis.conf.backup. A backup saves you a lot of energy figuring out what's wrong :)
2) On AWS: stop redis-server: sudo /etc/init.d/redis-server stop
3) On AWS: open /etc/redis/redis.conf and find the line bind 127.0.0.1. Add a new line below it: bind 0.0.0.0. You can have several lines with the bind parameter. By the way, the connection port can be changed in redis.conf as well.
4) On AWS: start redis-server: sudo /etc/init.d/redis-server start
5) On AWS: run redis-cli ping; you should see a PONG reply if redis-server started OK.
6) On AWS: now open the Security Group for your running instance and add a new rule with Type "Custom TCP Rule" and Port Range 6379.
7) In your local Node.js application:
var redis = require("redis");
var redisClient = redis.createClient(redis_port, redis_host);
Have you checked the Redis client-server connection on AWS using the ping-pong routine? Next, maybe you should try running it without forever, as root.
