Ubuntu 18.04 sssd not creating keytab file but works fine in 20.04

I'm trying to bind an Ubuntu 18.04 machine to Active Directory (I need this specific version because of compatibility issues with another app). I use this script:
#!/bin/bash
apt install -y realmd sssd oddjob oddjob-mkhomedir adcli samba-common
realm leave
realm discover xxxx.local
realm join -U xxxx xxxx.local
echo -e "[sssd]
domains = xxxx.local
config_file_version = 2
services = nss, pam, pac
[domain/xxxx.local]
ad_domain = xxxx.local
krb5_realm = xxxx.LOCAL
realmd_tags = manages-system joined-with-samba
cache_credentials = True
id_provider = ad
auth_provider = ad
chpass_provider = ad
access_provider = ad
krb5_store_password_if_offline = True
default_shell = /bin/bash
ldap_id_mapping = False
use_fully_qualified_names = False
override_homedir = /home/xxxx/%u
enumerate = true
ad_enable_dns_sites = False
ad_enabled_domains = xxxx.local
ad_gpo_ignore_unreadable = True
" > /etc/sssd/sssd.conf
chown root:root /etc/sssd/sssd.conf
chmod 700 /etc/sssd/sssd.conf
rm -rf /var/lib/sss/db/*
service sssd restart
When doing the realm join it gets stuck on:
Using GSS-SPNEGO for SASL bind
and just hangs there. The other thing I noticed is that no xxxx.keytab seems to be created...
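For reference, two quick checks make the symptom concrete (a sketch, assuming the default keytab path used by realmd/adcli; adjust to your setup):
# List the host keytab a successful join should have created
sudo klist -k /etc/krb5.keytab
# Re-run the join verbosely to see exactly where the SASL bind hangs
sudo realm join -v -U xxxx xxxx.local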
On Ubuntu 20.04 the same script runs perfectly!?
Any help appreciated

Related

How to deploy a Snowflake DB with sqitch?

I'm new to sqitch.
I'm trying to create a Jenkins pipeline to deploy Snowflake using sqitch.
So I installed sqitch and snowsql on my Ubuntu 18.04 LTS machine in Azure.
I just:
clone the existing sqitch repository, which looks like:
.git
.gitignore
sqitch/
    snowflake/
        deploy/
        revert/
        sqitch
        sqitch.conf
        sqitch.plan
modify the following config: $WORKSPACE/.snowsql/config
[connections]
accountname = acc_name
region = east-us-2.azure
username = user_name
private_key_path = "/path/rsa_key.p8"
authenticator = SNOWFLAKE_JWT
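Before involving sqitch at all, it may help to confirm the key-pair connection works on its own (a minimal check using the default connection defined above):
# Sanity-check the snowsql connection from $WORKSPACE/.snowsql/config
snowsql -q "select current_user();"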
create sqitch config: $WORKSPACE/sqitch/snowflake/sqitch.conf
[core]
engine = snowflake
[engine "snowflake"]
target = dev
client = snowsql
[target "dev"]
uri = "db:snowflake://client.east-us-2.azure/DEV_DB?Driver=SnowflakeDSIIDriver;warehouse=DEV_WH;authenticator=SNOWFLAKE_JWT;UID=DEV;PRIV_KEY_FILE=/path/rsa_key.p8;PRIV_KEY_FILE_PWD=password;"
Then I try to run "sqitch verify" and get the following output:
Trace begun at /usr/share/perl5/App/Sqitch/Engine.pm line 116
App::Sqitch::Engine::load('App::Sqitch::Engine', 'HASH(0x55ba14ad8ce0)') called at /usr/share/perl5/App/Sqitch/Target.pm line 55
App::Sqitch::Target::__ANON__('App::Sqitch::Target=HASH(0x55ba14ad8470)') called at (eval 278) line 22
App::Sqitch::Target::engine('App::Sqitch::Target=HASH(0x55ba14ad8470)') called at /usr/share/perl5/App/Sqitch/Command/status.pm line 113
App::Sqitch::Command::status::execute(undef) called at /usr/share/perl5/App/Sqitch.pm line 205
App::Sqitch::try {...} at /usr/share/perl5/Try/Tiny.pm line 100
eval {...} at /usr/share/perl5/Try/Tiny.pm line 93
Try::Tiny::try('CODE(0x55ba14acfb70)', 'Try::Tiny::Catch=REF(0x55ba15e7f2b0)') called at /usr/share/perl5/App/Sqitch.pm line 225
App::Sqitch::go('App::Sqitch') called at /usr/bin/sqitch line 14
What am I doing wrong?
To me it looks like the system is missing some packages.
Sqitch was installed as follows:
sudo apt-get install sqitch libdbd-pg-perl libdbd-odbc-perl
Looks like I needed to do the following:
sudo apt install cpanminus && cpanm App::Sqitch
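This is likely because the sqitch packaged for Ubuntu 18.04 predates Snowflake support, while the CPAN release includes it. As a quick confirmation after the cpanm install (a sketch), the Snowflake engine module should now load cleanly instead of producing the trace above:
# The snowflake engine class should load without errors
perl -MApp::Sqitch::Engine::snowflake -e 'print "snowflake engine OK\n"'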

Zeppelin authentication with JDBC realm

I have been trying to set up Zeppelin authentication with a Shiro JDBC realm. After all my attempts, I have not been able to get it working. Basic authentication works, but with the JDBC realm it fails.
The zeppelin server was created following the doc: http://zeppelin.apache.org/docs/0.9.0/quickstart/kubernetes.html
The Pod is working.
I enabled Shiro by extending the Docker image. My Dockerfile:
ARG ZEPPELIN_IMAGE=apache/zeppelin:0.9.0
FROM ${ZEPPELIN_IMAGE}
#https://hub.docker.com/r/apache/zeppelin/dockerfile
WORKDIR ${Z_HOME}
ADD /zeppelin/shiro.ini ${Z_HOME}/conf/
ADD https://repo1.maven.org/maven2/mysql/mysql-connector-java/6.0.4/mysql-connector-java-6.0.4.jar ${Z_HOME}/lib/
ENV CLASSPATH=${Z_HOME}/lib/mysql-connector-java-6.0.4.jar:${CLASSPATH}
ENTRYPOINT [ "/usr/bin/tini", "--" ]
WORKDIR ${Z_HOME}
CMD ["bin/zeppelin.sh"]
My shiro.ini, taken from https://gist.github.com/adamjshook/6c42b03fdb09b60cd519174d0aec1af5:
[main]
ds = com.mysql.jdbc.jdbc2.optional.MysqlDataSource
ds.serverName = localhost
ds.databaseName = zeppelin
ds.user = zeppelin
ds.password = zeppelin
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealmCredentialsMatcher = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
jdbcRealm.credentialsMatcher = $jdbcRealmCredentialsMatcher
ps = org.apache.shiro.authc.credential.DefaultPasswordService
pm = org.apache.shiro.authc.credential.PasswordMatcher
pm.passwordService = $ps
jdbcRealm.dataSource = $ds
jdbcRealm.credentialsMatcher = $pm
shiro.loginUrl = /api/login
[urls]
/** = authc
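For context, with no custom queries configured, Shiro's JdbcRealm authenticates against its default schema, so the zeppelin database needs tables shaped roughly like this (a sketch of JdbcRealm's documented defaults, not taken from the question):
-- JdbcRealm's default authentication query:
--   select password from users where username = ?
CREATE TABLE users (
  username VARCHAR(64) PRIMARY KEY,
  password VARCHAR(128) NOT NULL
);
-- JdbcRealm's default roles query:
--   select role_name from user_roles where username = ?
CREATE TABLE user_roles (
  username  VARCHAR(64) NOT NULL,
  role_name VARCHAR(64) NOT NULL
);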
Now, when I deploy the zeppelin server, I get:
org.apache.shiro.config.ConfigurationException: Unable to instantiate class [com.mysql.jdbc.jdbc2.optional.MysqlDataSource] for object named 'ds'. Please ensure you've specified the fully qualified class name correctly.
at org.apache.shiro.config.ReflectionBuilder.createNewInstance(ReflectionBuilder.java:327)
at org.apache.shiro.config.ReflectionBuilder$InstantiationStatement.doExecute(ReflectionBuilder.java:961)
at org.apache.shiro.config.ReflectionBuilder$Statement.execute(ReflectionBuilder.java:921)
at org.apache.shiro.config.ReflectionBuilder$BeanConfigurationProcessor.execute(ReflectionBuilder.java:799)
at org.apache.shiro.config.ReflectionBuilder.buildObjects(ReflectionBuilder.java:278)
at org.apache.shiro.config.IniSecurityManagerFactory.buildInstances(IniSecurityManagerFactory.java:181)
at org.apache.shiro.config.IniSecurityManagerFactory.createSecurityManager(IniSecurityManagerFactory.java:139)
at org.apache.shiro.config.IniSecurityManagerFactory.createSecurityManager(IniSecurityManagerFactory.java:107)
at org.apache.shiro.config.IniSecurityManagerFactory.createInstance(IniSecurityManagerFactory.java:98)
at org.apache.shiro.config.IniSecurityManagerFactory.createInstance(IniSecurityManagerFactory.java:47)
at org.apache.shiro.config.IniFactorySupport.createInstance(IniFactorySupport.java:150)
at org.apache.shiro.util.AbstractFactory.getInstance(AbstractFactory.java:47)
Caused by: org.apache.shiro.util.UnknownClassException: Unable to load class named [com.mysql.jdbc.jdbc2.optional.MysqlDataSource] from the thread context, current, or system/application ClassLoaders. All heuristics have been exhausted. Class could not be found.
at org.apache.shiro.util.ClassUtils.forName(ClassUtils.java:152)
at org.apache.shiro.util.ClassUtils.newInstance(ClassUtils.java:168)
at org.apache.shiro.config.ReflectionBuilder.createNewInstance(ReflectionBuilder.java:320)
... 40 more
Not sure why it is failing even though I have defined the jar file on the classpath.
The issue was that the jar did not have the right permissions. I fixed it with the Dockerfile below:
ARG ZEPPELIN_IMAGE=apache/zeppelin:0.9.0
FROM ${ZEPPELIN_IMAGE}
#https://hub.docker.com/r/apache/zeppelin/dockerfile
WORKDIR ${Z_HOME}
USER root
ADD /zeppelin/shiro.ini ${Z_HOME}/conf/
ADD https://repo1.maven.org/maven2/mysql/mysql-connector-java/6.0.4/mysql-connector-java-6.0.4.jar ${Z_HOME}/lib/
ENV CLASSPATH=${Z_HOME}/lib/mysql-connector-java-6.0.4.jar:${CLASSPATH}
RUN chmod 777 ${Z_HOME}/lib/mysql-connector-java-6.0.4.jar
USER 1000
ENTRYPOINT [ "/usr/bin/tini", "--" ]
WORKDIR ${Z_HOME}
CMD ["bin/zeppelin.sh"]

Atom + Xdebug setup

I'm trying to set up Xdebug because I'm tired of using echoes and var_dumps.
I'm using Atom as my IDE with the php-debug plugin.
I'm using the Laravel Homestead VM as a server, with port 9000 forwarded to 9999.
Atom has the plugin installed with the following settings:
I set a breakpoint in Atom, I browse to my page but the breakpoint doesn't trigger.
I have a hunch that it has something to do with the IDE key but I don't know how to proceed.
Does anyone know the IDE key for atom?
Or is there something else I'm missing?
edit: I've added the following to xdebug.ini:
xdebug.auto_trace = 0
xdebug.collect_includes = 1
xdebug.collect_params = 1
xdebug.collect_return = 0
xdebug.collect_vars = "Off"
xdebug.default_enable = "On"
xdebug.dump.COOKIE = ""
xdebug.dump.FILES = ""
xdebug.dump.GET = ""
xdebug.dump.POST = ""
xdebug.dump.REQUEST = ""
xdebug.dump.SERVER = ""
xdebug.dump.SESSION = ""
xdebug.dump_globals = 1
xdebug.dump_once = 1
xdebug.dump_undefined = 0
xdebug.extended_info = 1
xdebug.file_link_format = ""
xdebug.idekey = "VVVDEBUG"
xdebug.manual_url = "http://www.php.net"
xdebug.max_nesting_level = 100
xdebug.overload_var_dump = 1
xdebug.profiler_append = 0
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "/tmp"
xdebug.profiler_output_name = "cachegrind.out.%t-%s"
xdebug.remote_autostart = 1
xdebug.remote_enable = 1
xdebug.remote_handler = "dbgp"
xdebug.remote_host = "192.168.50.1"
xdebug.remote_log = /srv/log/xdebug-remote.log
xdebug.remote_mode = "req"
xdebug.remote_port = 9000
xdebug.show_exception_trace = 0
xdebug.show_local_vars = 0
xdebug.show_mem_delta = 0
xdebug.trace_format = 0
xdebug.trace_options = 0
xdebug.trace_output_dir = "/tmp"
xdebug.trace_output_name = "trace.%c"
xdebug.var_display_max_children = -1
xdebug.var_display_max_data = -1
xdebug.var_display_max_depth = -1
Like @sparkos72 says, the Atom IDE key xdebug.atom works for me on Ubuntu 16.04 and Debian 7 :-). I'll try to extend their answer.
I used the php-debug Atom extension with this config in xdebug.ini (path: /etc/php5/apache2/conf.d/xdebug.ini):
xdebug.remote_enable=1
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_host=172.17.0.1
xdebug.remote_port=9000
xdebug.idekey=xdebug.atom
xdebug.remote_autostart=true
I have a Docker container with Debian 7 + Apache + PHP 5; that's the reason I use IP 172.17.0.1 instead of 127.0.0.1. My "real" machine is an Ubuntu.
Hope it helps.
The IDE key is xdebug.atom for me (Mac/Apache). In the Atom php-debug config, set the server port to 9000.
None of the answers worked for me, so I'm leaving one of my own.
Setup
Make sure xdebug is enabled; sudo phpenmod xdebug or equivalent
Make sure xdebug is correctly configured:
xdebug.remote_enable=1
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_host=172.17.0.1 (as per your setup)
xdebug.remote_port=9000 (as per your setup)
I didn't set xdebug.remote_autostart because it starts the debug machinery even for normal requests. So even when you are not in debug mode in the Chrome/Firefox extension, an attempt to convey debugging data will be made, which might slow things down for no reason.
Install the Xdebug Chrome/Firefox extension and set the IDE key to xdebug-atom (I found xdebug.atom to also be working); a CLI alternative is sketched below.
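As a side check that does not involve the browser extension at all, you can force a one-off debug session from the CLI by passing the IDE key explicitly (a sketch for Xdebug 2.x, which these settings target; the host IP comes from the answers above and script.php is a hypothetical file):
# One-off CLI debug session against the listener in Atom
php -d xdebug.remote_enable=1 -d xdebug.remote_host=172.17.0.1 \
    -d xdebug.remote_autostart=1 -d xdebug.idekey=xdebug-atom script.php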
Debugging
When you actually want to debug something,
Add a breakpoint where you want in your code; Alt + F9 or equivalent.
Open the PHP Debug panel in Atom; Ctrl + Alt + D or equivalent. On certain versions of Ubuntu you might want to change the shortcut, since Ctrl + Alt + D may act as Show Desktop; in that case, click the PHP Debug button in the IDE to toggle the debugger.
The debugger should say something like Listening to port 9000 or the port you setup in the PHP Debug settings.
Open the page you want to debug in your browser (if not already open) and click on the debugging extension icon to enable Debug mode. This actually sets a cookie in your document like XDEBUG_SESSION=xdebug-atom, etc.
Once the debug extension is enabled, refresh the page and you should be able to use the debugger (if everything went right).
Hope this helps. Took a while to get it working. Now I don't have to buy PHP Storm!
I know it is late, but again: your .ini file has this:
xdebug.idekey = "VVVDEBUG"
which means you need to set the value VVVDEBUG (without quotes) in "The Easiest Xdebug" extension settings, where you currently have "Atom".
The IDE key is xdebug-atom for me on PHP3.2+ / Atom / XAMPP / Xdebug Helper. I strongly recommend using Xdebug Helper; you can set the IDE key there.
I was trying the same thing but never found a real solution. Then I noticed that Xdebug starts debugging when it hits an error or exception, so now I deliberately put an error in my code at the point I need to debug, and it works well. For example, put this code on the line you want to debug:
#xdebug_start:
and watch the magic.
I made a video on installing the plugin:
https://www.youtube.com/watch?v=jD0TIzYMFzQ

Kerberos Join Active Directory Domain Failure (Ubuntu)

I'm trying to join Active Directory with Samba 4 on Ubuntu 12.04.5.
When I run host -t SRV _kerberos._udp.test.sg I get the error:
Host _kerberos._udp.test.sg not found: 3(NXDOMAIN)
meanwhile
$# host -t SRV _ldap._tcp.test.sg
_ldap._tcp.test.sg has SRV record 0 0 389 4ecapsvsg6.test.sg.
$# host -t A 4ECAPSVSG6.test.sg
4ECAPSVSG6.test.sg has address 10.153.64.5
My /etc/samba/smb.conf:
# Global parameters
[global]
workgroup = TEST
realm = TEST.SG
netbios name = 4ECAPSVSG6
server role = active directory domain controller
dns forwarder = 10.153.64.5
security = ads
use kerberos keytab = true
password server = 4ecapsvsg6.test.sg
allow dns updates = nonsecure and secure
bind interfaces only = no
server services = +smb -s3fs
dcerpc endpoint servers = +winreg +srvsvc
passdb backend = samba4
server services = smb, rpc, nbt, wrepl, ldap, cldap, kdc, drepl, winbind, ntp_signd, kcc, dnsupdate, dns
My /etc/krb5.conf:
[libdefaults]
default_realm = TEST.SG
krb4_config = /etc/krb.conf
krb4_realms = /etc/krb.realms
kdc_timesync = 1
ccache_type = 4
forwardable = true
proxiable = true
[realms]
TEST.SG = {
kdc = 4ecapsvsg6.test.sg:88
admin_server = 4ecapsvsg6.test.sg:749
default_domain = test.sg
}
[domain_realm]
.test.sg = TEST.SG
test.sg = TEST.SG
[login]
krb4_convert = true
krb4_get_tickets = false
My /etc/hosts:
127.0.0.1 localhost
127.0.1.1 4ecapsvsg6
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.153.64.5 4ecapsvsg6.test.sg 4ecapsvsg6
What is the solution? Without it I cannot join the domain with the command:
sudo net ads join
which fails with an error like:
Failed to join domain: failed to lookup DC info for domain 'TEST' over rpc: Logon failure
I did kinit administrator and klist, result:
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: administrator@TEST.SG
Valid starting Expires Service principal
26/03/2015 14:29:04 27/03/2015 00:29:04 krbtgt/TEST.SG@TEST.SG
renew until 27/03/2015 14:29:00
Meanwhile, here is my /etc/resolv.conf:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
# DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 10.153.64.5
search test.sg
domain test.sg
After googling for the past week, I luckily found this site: http://edoceo.com/howto/samba4
It turns out I needed to edit my dnsmasq config (/etc/dnsmasq.conf)
and add these lines:
srv-host=_kerberos._tcp.test.sg,4ecapsvsg6.test.sg,88
srv-host=_kerberos._tcp.dc._msdcs.test.sg,4ecapsvsg6.test.sg,88
srv-host=_kerberos._udp.test.sg,4ecapsvsg6.test.sg,88
srv-host=_kpasswd._tcp.test.sg,4ecapsvsg6.test.sg,464
srv-host=_kpasswd._udp.test.sg,4ecapsvsg6.test.sg,464
and disable BIND9 (which is installed along with Samba 4 by default).
Now the problem is gone :) (a quick sanity check is sketched below)
Only one problem remains: how to connect to AD (I'll open another thread for that).
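For the sanity check mentioned above (a sketch; the service names mirror the srv-host lines added to dnsmasq):
# After restarting dnsmasq, the SRV records that were missing should resolve
sudo service dnsmasq restart
host -t SRV _kerberos._udp.test.sg
host -t SRV _kpasswd._tcp.test.sg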

RM + DSC to node in untrusted domain

I mention the untrusted-domain aspect because I already went through all the hoops around credential delegation, trusted-hosts lists, etc., and I can successfully push a DSC configuration from my RM server to a target node (not using RM, just native DSC). I get that bit and it works, great.
Now when I use those same scripts in RM (with some minor edits for the format expected by RM), RM reports a successful deploy, but all that has happened is that the component's bits have been copied to the target node at the default location for $applicationPathRoot (C:\Windows\DtlDownloads); there is no real evidence of an attempt to apply a .mof file.
My RM server and target nodes are in different domains with no trust. Both servers are W2k8R2 (+ WMF4, of course). I'm running Update 4 of the RM server and client.
Here are the DSC scripts I'm running in RM:
CopyDSCResources.ps1
Configuration CopyDSCResource
{
    param (
        [Parameter(Mandatory=$false)]
        [ValidateNotNullOrEmpty()]
        [String] $ModulePath = "$env:ProgramFiles\WindowsPowershell\Modules"
    )
    #[PSCredential] $credential = get-credential
    Node VCTSCFDSMWEB01
    {
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "C:\test.txt"
            DestinationPath = "D:\temp"
            Force = $true
            Type = "File"
        }
    }
}
CopyDSCResource -ConfigurationData $configData -Verbose
# test outside of RM
#CopyDSCResource -ConfigurationData CopyDSCResource.ConfigData.psd1
#Start-DscConfiguration -Path .\CopyDSCResource -Credential $credential -Verbose -Wait
CopyDSCResource.ConfigData.psd1
#@{
$configData = @{
    AllNodes = @(
        @{
            NodeName = "*"
            PSDscAllowPlainTextPassword = $true
        },
        @{
            NodeName = "VCTSCFDSWEB01.rlg.test"
            Role = "WebServer"
        }
    )
}
I'm afraid I can't seem to upload screenshots from my current location, but in terms of RM, I have a vNext environment with a single server linked, a vNext release path with a single 'Dev' stage, and a vNext release template with a single 'Deploy PS/DSC' action. The configuration of the action is:
ServerName - VCTSCFDSMWEB01
ComponentName - COpyDSCResource vNext
PSScriptPath - copydscresources.ps1
PSConfigurationPath - copydscresource.configdata.psd1
UseCredSSP - true
When I run a new release, the deploy stage reports success and when I view the Deployment log files I get the following:
Upload components - Successfully uploaded to the normalized store.
Deploy Using PS/DSC - Copying recursively from \\vcxxxxtfs03\Drops\CorrespondenceCI\CorrespondenceCI20150114.1\Scripts to C:\Windows\DtlDownloads\CopyDSCResource vNext succeeded.
Finally the DSC event log has the following:
Job {CD3BE350-4072-4C8B-835F-4B4D1C46D65D} :
Configuration is sent from computer NULL by user sid S-1-5-18.
This compares markedly to the same event log entry when run outside of RM:
Job {34F78498-CF18-4F2A-9874-EB54FDA2D990} :
Configuration is sent from computer VCXXXXTFS01 by user sid S-1-5-21-1034805355-1149422947-1317505720-10867.
Any pointers appreciated.
It would be good if I could see evidence of a .mof file being created on the RM server, for example; does anybody know where I can find this?
It turns out the crucial element was that my DSC script had to use an environment variable for the node name, i.e.:
Node $env:COMPUTERNAME
No idea why, but it works!
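For completeness, a sketch of the accepted fix applied to the original configuration (only the node name changes; RM's vNext deployment evidently compiles and applies the configuration on the target node itself, which would be why the local computer name resolves to the right node):
Configuration CopyDSCResource
{
    # $env:COMPUTERNAME is evaluated on the machine where the MOF is
    # compiled; under RM vNext that is the target node, so the MOF gets
    # the right node name.
    Node $env:COMPUTERNAME
    {
        File DeployWebDeployResource
        {
            Ensure = "Present"
            SourcePath = "C:\test.txt"
            DestinationPath = "D:\temp"
            Force = $true
            Type = "File"
        }
    }
}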
