LDAPS and memberOf attribute in Ansible AWX - active-directory

I want to connect my AWX instance via LDAPS to our MS AD, but where and how do I install the trusted CA root certificate?
Furthermore, I want to allow logins only for users of a certain group (memberOf), and I do not know where to configure this attribute.

If you're running your AWX instance in Docker:
Install the certificates on the machine where Docker is running. During the installation, provide the path to the root certs in the inventory file in the installer directory:
ca_trust_dir=/etc/pki/ca-trust/source/anchors
If you already have AWX installed and don't want to re-deploy, install the certificates into the awx_web and awx_task containers.
Copy the cert into the container and open a shell in it, e.g.
docker cp cert.crt awx_task:/etc/pki/ca-trust/source/anchors/your_org.crt
docker exec -it awx_task /bin/bash
Finally install the cert:
update-ca-trust enable
update-ca-trust extract
Repeat for the second container (awx_web)
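For example, a minimal sketch that repeats those steps for both containers in one go (assuming the container names awx_task and awx_web and a cert file named cert.crt, as above):
# hedged sketch: install the CA cert into both AWX containers (names/paths as used above)
for c in awx_task awx_web; do
  docker cp cert.crt "$c":/etc/pki/ca-trust/source/anchors/your_org.crt
  docker exec "$c" update-ca-trust enable
  docker exec "$c" update-ca-trust extract
done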

Related

Ubuntu integration to windows domain

Hello
I am joining an Ubuntu Bionic 18.04 Linux server to a Windows domain.
I followed the steps below:
1- Update packages first.
2- Install the required packages:
sudo apt -y install realmd sssd sssd-tools sssd-ad libnss-sss libpam-sss adcli samba-common-bin oddjob oddjob-mkhomedir packagekit
sudo apt-get install -y krb5-user sssd-krb5
pam ????
3- Server network config
Create the file /etc/netplan/99_config.yaml
and configure the IP, DNS server and domain.
Change the server hostname to a fully qualified domain name:
sudo hostnamectl set-hostname serverName.mydomain
Edit /etc/hosts
and add or update the line 127.0.0.1 serverName.mydomain
Apply the changes: sudo netplan apply
4- Discover the domain
realm discover mydomain (works fine)
5- Kerberos config (a sketch of the corresponding /etc/krb5.conf follows after these steps)
realm (IN UPPERCASE) = MYDOMAIN
kdc = my domain's Active Directory server IP
admin_server = my domain's Active Directory server name
6- Join the Ubuntu server to the domain
realm join MyNameServerIP mamadi.fofana (works fine)
7- Modify pam to automatically create a home directory for AD users
pam-auth-update
Check “activate mkhomedir”.
8- Test to see if the integration is working correctly
id myuserName@myDomain
getent passwd myuserName@myDomain
groups myuserName@myDomain
All those 3 above commands work fine
9- Admin config
Update sudoers file to include your domain administrators security group with full sudo access:
sudo nano /etc/sudoers.d/admins
Add the necessary lines to it. For example:
user ALL=(ALL) ALL
%Domain\ Admins ALL=(ALL) ALL
To avoid adding the domain name to the username every time, configure this.
sudo nano /etc/sssd/sssd.conf
Change the ‘use_fully_qualified_names’ value to False.
Restart and check:
sudo systemctl restart sssd
Allow access for specific AD users or groups:
sudo realm permit myUserName@myDomain someUserName@myDomain
sudo realm permit -g 'Domain Admins'
Login using SSH via another terminal:
ssh -l myuserName@myDomain MyUbuntuServerIP
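For reference, here is a hedged sketch of what step 5 might look like written out as /etc/krb5.conf (the realm and server names below are placeholders, not the real values):
# hedged sketch: Kerberos config from step 5, with placeholder realm/server names
sudo tee /etc/krb5.conf > /dev/null <<'EOF'
[libdefaults]
    default_realm = MYDOMAIN.LOCAL
[realms]
    MYDOMAIN.LOCAL = {
        kdc = 10.0.0.10
        admin_server = ad-server.mydomain.local
    }
[domain_realm]
    .mydomain.local = MYDOMAIN.LOCAL
    mydomain.local = MYDOMAIN.LOCAL
EOF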
At first it worked; several domain users managed to connect via SSH, FileZilla and directly on the server
with their domain credentials.
The only concern was that name resolution didn't work for the Ubuntu server name; we used the IP address.
To fix the name resolution problem, I had to install and configure Samba and nmbd.
Then, after a few days, I suddenly couldn't connect to the server with the domain accounts.
With SSH I get the message
Connection closed by ServerIP port 22
Directly on the server
I get the message
Sorry, that didn't work, please try again
I am sure of the password, however, and other users have also failed to connect.
Do you have an idea of the origin of the problem, or a way to debug it and identify its source?
The join worked at first, then the server stopped recognizing domain user passwords.
Note that although domain users cannot connect,
the following commands still work and show correct output:
realm discover mydomain (works fine)
id myuserName@myDomain
getent passwd myuserName@myDomain
groups myuserName@myDomain
Please assist
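A hedged sketch of commands that are commonly used to dig into this kind of failure (paths assume a default Ubuntu/sssd setup):
# hedged sketch: gather more detail on the failing domain logins (default Ubuntu/sssd paths assumed)
sudo journalctl -u ssh -n 50                 # recent sshd messages around the failed login
sudo tail -n 50 /var/log/auth.log            # PAM / sssd authentication errors
sudo tail -n 50 /var/log/sssd/sssd_pam.log   # sssd PAM responder log (raise debug_level in sssd.conf for more)
sudo sss_cache -E                            # flush the sssd cache, then retry the login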

Trouble connecting from docker container with ASP.NET core to SQL Server container

I have a container running an ASP.NET Core front end that is trying to connect to the backend SQL Server database container. I am running Windows 10 with Docker Desktop v19.03.13.
The website container is built on
FROM mcr.microsoft.com/dotnet/core/sdk:3.1 AS build-env
WORKDIR /app
EXPOSE 80
# Copy csproj and restore as distinct layers
COPY ./FOOBAR/*.csproj ./
RUN dotnet restore
# Copy everything else and build
COPY . ./
RUN dotnet publish -c Release -o out
# Build runtime image
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1
WORKDIR /app
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "FOOBAR.dll"]
The database is built on
FROM mcr.microsoft.com/mssql/server:2017-latest
USER root
COPY setup.sql setup.sql
COPY import-data.sh import-data.sh
COPY entrypoint.sh entrypoint.sh
RUN chmod +x entrypoint.sh
CMD /bin/bash ./entrypoint.sh
Everything works marvelously when running outside Docker: .NET, Python, SQL Server Management Studio.
In .NET, my connection string is:
Server=localhost;Database=FOOBAR;Integrated Security=True
In Python:
DRIVER={ODBC Driver 17 for SQL Server};server=localhost;database=FOOBAR;Trusted_Connection=yes;
I need to deploy this to a network that does not have a domain controller, so I have to handle all database authentication myself.
When I build my containers, I change my .NET connection string to
dbConnection="Server=host.docker.internal;Database=FOOBAR;User Id=sa;Password=Password1!;"
I spawn my containers with
docker run -e ACCEPT_EULA=Y -e SA_PASSWORD=Password1! -p 1433:1433 -v c:\temp\:/var/opt/mssql/data --name foobar_db -d foobar_db:1.0
docker run -p 8080:80 --name foobar --link foobar_db:foobar_db -d foobar:1.0
My containers spin up and my database is deployed just fine. From the host, I can use SQL Server Management Studio and Python to connect to my database container using the credentials above and perform reads/writes perfectly.
When I connect from the .NET container using
Server=host.docker.internal;Database=FOOBAR;User Id=sa;Password=Password1!;
I can see my SQL Server container complain about an invalid login,
Login failed for user '6794cfd81d48\Guest'
where I can confirm that 6794cfd81d48 is the container ID of my SQL Server container, foobar_db.
IIS serves up webpages just fine, the problem is connecting to the database. Even though I am providing the correct username and password, I am unable to connect from another container to the SQL Server container because it thinks that I am a guest to that container. Depending on the deployment environment, normally I would create SQL Server logins, either for a machine or for a user, but not in this case.
An offending IT security software application was identified as the cause. IT set up a passthrough so the Docker applications could bypass the security software, and everything worked as designed.
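For what it's worth, a hedged sketch of an alternative wiring that avoids host.docker.internal: put both containers on a user-defined bridge network and point the connection string at the SQL container by name (the network name is an assumption; container names are the ones used above):
# hedged sketch: user-defined network so the web container can reach SQL Server by container name
docker network create foobar_net
docker run -e ACCEPT_EULA=Y -e SA_PASSWORD=Password1! -p 1433:1433 -v c:\temp\:/var/opt/mssql/data --network foobar_net --name foobar_db -d foobar_db:1.0
docker run -p 8080:80 --network foobar_net --name foobar -d foobar:1.0
# the connection string inside the web container would then be something like:
# Server=foobar_db;Database=FOOBAR;User Id=sa;Password=Password1!;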

How to use AWS Lightsail for my React build

I'm trying to use Lightsail to host a website.
It almost works, but I have to type example.com:5000 and I don't know what to do to remove this :5000.
I used npm run build to create a production build, and I use pm2 to serve it automatically on this port.
Since you're using PM2 to serve the React application, you can serve it directly on port 80 by doing the following:
Connect to your server (Note: only root can bind ports below 1024, which is why we're going to use authbind, which allows this port binding for non-root users)
Bind port 80 using authbind by executing the following commands:
sudo apt-get install authbind Install the authbind package
sudo touch /etc/authbind/byport/80 Create a "binding file" to bind port 80
sudo chown YOUR-USER /etc/authbind/byport/80 Make your user the owner of this file (make sure to replace YOUR-USER with your username)
chmod 755 /etc/authbind/byport/80 Set the access right for this file
Start the app by using authbind --deep pm2
You can view more information about these steps via the official PM2 documentation: https://pm2.keymetrics.io/docs/usage/specifics/
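Putting it together, a hedged sketch of those steps plus serving the production build (assumes the build output is in ./build and your shell user is named ubuntu):
# hedged sketch: serve the React build on port 80 via authbind + pm2 (user name and build path are assumptions)
sudo apt-get install -y authbind
sudo touch /etc/authbind/byport/80
sudo chown ubuntu /etc/authbind/byport/80
chmod 755 /etc/authbind/byport/80
authbind --deep pm2 serve build 80 --spa --name react-app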
Also, if you're just serving a React application, you can use S3 to host it, since it's pretty cheap and gives you advantages such as CDN support and other features. If you do that, just make sure to enable CORS on your S3 bucket.
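If you go the S3 route instead, a hedged sketch with the AWS CLI (the bucket name is a placeholder, and the bucket still needs a public-read policy for website hosting):
# hedged sketch: host the React build as an S3 static website (bucket name is a placeholder)
aws s3 mb s3://my-react-site
aws s3 website s3://my-react-site --index-document index.html --error-document index.html
aws s3 sync build/ s3://my-react-site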

How to connect Docker with Azure Data Studio?

I installed Docker on a Mac (OS X) and pulled the Microsoft SQL Server 2017 image. I tried to connect to the container with Azure Data Studio but couldn't connect. Can I connect Azure Data Studio to the Docker container, and how do I configure it? Please help me, thanks a lot.
Use 127.0.0.1,1433 instead of 127.0.0.1:1433
This syntax is what my ASP.NET Core app uses, so I figured MS likes that format for connection strings and such.
This worked for me. Hope it helps.
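As a quick sanity check of the same comma syntax from a terminal (a hedged sketch; it assumes sqlcmd is installed on the host and the container publishes port 1433):
# hedged sketch: the host,port (comma) syntax also works with sqlcmd
sqlcmd -S 127.0.0.1,1433 -U sa -P '<YourPassword>' -Q "SELECT @@VERSION"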
I was able to run SQL Server on a Mac using Docker, alongside Azure Data Studio.
In order to connect to a server, you need to go to your Docker preferences and increase the memory allocation from the default of 2 GB to at least 4 GB (as SQL Server needs a minimum of 3.25 GB). Save and restart Docker.
Once restarted, all you need to do is pull the SQL Server Docker image. This can be done with the commands below in your terminal; FYI, I am using bash:
Command 1:
sudo docker pull mcr.microsoft.com/mssql/server:2017-latest
This will pull and download the latest version of the Docker image. Once done, you need to set up SQL authentication on the server for your database. Use the commands below:
Command 2:
sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=<SetYourPasswordHere>' \
-p 1433:1433 --name sql1 \
-d mcr.microsoft.com/mssql/server:2017-latest
This sets your password and uses port 1433 for SQL Server (the default port). To confirm that the container has been created and SQL Server is running in Docker, execute the command below to check:
Command 3:
docker ps
To list all containers in your Docker history, including stopped ones (i.e. if you already had containers before attempting this SQL connection), run the command below and it will show all the containers you have created:
Command 4:
docker ps -a
or
docker ps --all
Once you have completed the above steps and can see that Docker has created the SQL Server instance, go to Azure Data Studio and use the credentials below to access the server you just created with Docker.
Server: localhost
Authentication Type: SQL Authentication
Username: sa
Password: <Check Command 2 to see what you entered in the password where it says SetYourPasswordHere>
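Optionally, before opening Azure Data Studio you can sanity-check the instance from inside the container with sqlcmd (a hedged sketch; the sql1 name and mssql-tools path follow the 2017 image used above):
# hedged sketch: verify SQL Server answers before connecting from Azure Data Studio
docker exec -it sql1 /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P '<SetYourPasswordHere>' -Q "SELECT name FROM sys.databases"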
Hope this helps with running SQL Server on your Mac. All the best!
You certainly can connect to a SQL Server image running in a Docker container through Azure Data Studio.
Based on the details mentioned in the question, I'm assuming that you have followed the steps in the Microsoft docs for configuring SQL Server with Docker.
The following command is needed to configure and run the SQL Server Docker container:
sudo docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=your-strong-password' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2017-latest
To quickly verify,
check that the container is running:
docker ps -a
and check that the STATUS column (for the correct container name) shows 'Up'.
Then launch Azure Data Studio and fill in the connection details.
If you have followed all the default settings when setting up the image, this should work for you.
Hope this helps.
First, make sure you have installed sql-cli (you need Node.js installed on your system).
Then connect to MSSQL with the command: mssql -u <username> -p <password>
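For example, a hedged sketch of installing sql-cli and connecting to the local container (server, port and credentials are assumptions based on the default setup above):
# hedged sketch: install sql-cli via npm and connect to the local SQL Server container
npm install -g sql-cli
mssql -s 127.0.0.1 -o 1433 -u sa -p '<YourPassword>'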
Try to connect/create a database with Docker first, then connect from Azure Data Studio.

Connect to Docker SQL Server via SSH

I've created a Docker container that contains an MSSQL database. On the command line, ip a gives an IP address for the container; however, trying to ssh into it with username@docker_ip_address yields ssh: connect to host ip_address port 22: Connection refused. So I'm wondering if I am even able to ssh into the container, so I don't always have to use docker exec ..., and if so, how would I go about doing that?
To ssh into a container you should fulfill the following:
An SSH server (OpenSSH) should be installed within the container and the ssh service should be running.
Port 22 should be published from the container (when you run the container). More info here > Publish ports on Docker
The docker ps command should display the mapped port 22.
Hope the above information helps you to understand the situation...
If your container contains a database server, the normal way to interact with it will be through a SQL client that connects to it; Google suggests SQL Server Management Studio, and connector libraries exist for popular languages. I'm not clear what you would do given a shell in the container, and my main recommendation here would be to focus on working with the server in the normal way.
Docker containers normally run a single process, and that's normally the main server process. In this case, the container runs only SQL Server. As some other answers here suggest, you'd need to significantly rearchitect the container to even have it be possible to run an ssh daemon, at which point you need to worry about a bunch of other things like ssh host keys and user accounts and passwords that a typical Docker image doesn't think about at all.
Also note that the Docker-internal IP address (what you got from ip addr; what docker inspect might tell you) is essentially useless. There are always better ways to reach a container (using inter-container DNS to communicate between containers; using the host's IP address or DNS name to reach published ports from the same or other hosts).
Basically, alter your Dockerfile to something like the following - it will install openssh-server, replace the prohibitive default config and start the service:
# FROM a-image-with-mssql
RUN echo "root:toor" | chpasswd
RUN apt-get update
RUN apt-get install -y openssh-server
COPY entrypoint.sh .
RUN cd /;wget https://gist.githubusercontent.com/spekulant/e04521d6c6e1ccffbd3455c673518c5b/raw/1e4f6f2cb32caf3a4a9f73b02efdcbd5dde4ba7a/sshd_config
RUN rm /etc/ssh/sshd_config; cp sshd_config /etc/ssh/
ENTRYPOINT ["./entrypoint.sh"]
# further commands
Now you've got yourself an image with an SSH server inside. All you have to do is start the service. You can't do RUN service ssh start because it won't work - Docker specifics, refer to the documentation. You have to use an entrypoint like the following:
#!/bin/bash
set -e
sh -c 'service ssh start'
exec "$@"
Put it in a file entrypoint.sh next to your Dockerfile - remember to chmod 755 entrypoint.sh. One thing to mention here: you still wouldn't be able to ssh into the container - the default SSH server configuration doesn't allow logging into the root account with a password. So either change the config yourself and provide it to the image, or trust me and use the file I created - inspect it with the link from the Dockerfile - nothing malicious there, only a change from prohibit-password to yes.
Fortunately for us - MSSQL official images start from Ubuntu so all the commands above fit perfectly into the environment.
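Once the image builds, a hedged sketch of running it and testing the login (the image tag, host ports and SA password are assumptions):
# hedged sketch: run the image with port 22 published and test the ssh login
docker build -t mssql-ssh .
docker run -d -p 2222:22 -p 1433:1433 -e ACCEPT_EULA=Y -e SA_PASSWORD='Password1!' --name mssql-ssh mssql-ssh
ssh root@localhost -p 2222   # password is "toor", set by the chpasswd line in the Dockerfile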
Edit
Be sure to ask if something is unclear or I'm jumping too fast.
