Gitlab Active Directory issues - gitlab-7.7.1_omnibus - active-directory

I am having issues with Active Directory authentication via LDAP on GitLab Omnibus. I have tested the credentials and bind DN using ldapsearch and received a response with no issues, but for some reason I am not seeing any connection attempts when I log in as an AD user on the GitLab frontend. I receive the error "Could not authorize you from Ldapmain because "Invalid credentials"." regardless of whether the credentials I enter are valid.
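For reference, the ldapsearch bind test I ran looked roughly like this (host, bind DN, and base are the same values as in the configuration below; the account name is a placeholder):
ldapsearch -h myadserver.my.domain.net -p 389 \
  -D 'CN=Gitlab,OU=Service Accounts,OU=Washington\, D.C.,OU=United States,OU=NA,DC=my,DC=domain,DC=net' -w 'mypasswrd' \
  -b 'OU=Washington\, D.C.,OU=United States,OU=NA,DC=my,DC=domain,DC=net' '(sAMAccountName=someuser)'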
I also receive the following from sudo gitlab-rake gitlab:ldap:check (note that no users are listed):
** Invoke gitlab:ldap:check (first_time)
** Invoke environment
** Execute gitlab:ldap:check
Checking LDAP ...
LDAP users with access to your GitLab server (only showing the first 100 results)
Server: ldapmain
Checking LDAP ... Finished
Please let me know if my explanation is not clear, or if you think that additional information would be helpful. I tried searching around and am not finding my exact issue.
My configuration is as follows:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS # remember to close this block with 'EOS' below
main: # 'main' is the GitLab 'provider ID' of this LDAP server
## label
#
# A human-friendly name for your LDAP server. It is OK to change the label later,
# for instance if you find out it is too large to fit on the web page.
#
# Example: 'Paris' or 'Acme, Ltd.'
label: 'LDAP'
host: 'myadserver.my.domain.net'
port: 389
uid: 'sAMAccountName'
method: 'plain' # "tls" or "ssl" or "plain"
bind_dn: 'CN=Gitlab,OU=Service Accounts,OU=Washington\, D.C.,OU=United States,OU=NA,DC=my,DC=domain,DC=net'
password: 'mypasswrd'
# This setting specifies if LDAP server is Active Directory LDAP server.
# For non AD servers it skips the AD specific queries.
# If your LDAP server is not AD, set this to false.
active_directory: true
# If allow_username_or_email_login is enabled, GitLab will ignore everything
# after the first '@' in the LDAP username submitted by the user on login.
#
# Example:
# - the user enters 'jane.doe@example.com' and 'p@ssw0rd' as LDAP credentials;
# - GitLab queries the LDAP server with 'jane.doe' and 'p@ssw0rd'.
#
# If you are using "uid: 'userPrincipalName'" on ActiveDirectory you need to
# disable this setting, because the userPrincipalName contains an '@'.
allow_username_or_email_login: true
# Base where we can search for users
#
# Ex. ou=People,dc=gitlab,dc=example
#
base: 'OU=Washington\, D.C.,OU=United States,OU=NA,DC=my,DC=domain,DC=net'
# Filter LDAP users
#
# Format: RFC 4515 http://tools.ietf.org/search/rfc4515
# Ex. (employeeType=developer)
#
# Note: GitLab does not support omniauth-ldap's custom filter syntax.
#
#user_filter: ''
EOS

This was, of course, a whitespace issue: the keys inside the YAML block need to be indented under main:. See the corrected lines below:
gitlab_rails['ldap_enabled'] = true
gitlab_rails['ldap_servers'] = YAML.load <<-EOS # remember to close this block with 'EOS' below
  main: # 'main' is the GitLab 'provider ID' of this LDAP server
    ## label
    #
    # A human-friendly name for your LDAP server. It is OK to change the label later,
    # for instance if you find out it is too large to fit on the web page.
    #
    # Example: 'Paris' or 'Acme, Ltd.'
    label: 'LDAP'
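After fixing the whitespace, reconfigure and re-run the LDAP check; it should now list your directory users:
sudo gitlab-ctl reconfigure
sudo gitlab-rake gitlab:ldap:check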

Related

freeradius + ldap + google-authenticator

I want to implement login to my VPN service with password + Google OTP, using FreeRADIUS as the auth server and LDAP as the backend database.
I have completed the following work:
enable the pam authentication module in /etc/raddb/sites-enabled/default
add the line "DEFAULT Auth-Type := PAM" to /etc/raddb/users
enable the ldap module and add the ldap site to freeradius; I have confirmed that FreeRADIUS can query the LDAP database correctly.
Overwrite the contents of /etc/pam.d/radiusd with:
auth requisite pam_google_authenticator.so secret=/tmp/.google_authenticator user=root forward_pass
auth required pam_unix.so use_first_pass
run the test command (testpa is my password, 271082 is the OTP):
radtest perlingzhao testpa271082 localhost 1812 testing123
radius log:
(0) [pap] = noop
(0) } # authorize = updated
(0) Found Auth-Type = pam
(0) # Executing group from file /etc/raddb/sites-enabled/default
(0) authenticate {
(0) pam: Using pamauth string "radiusd" for pam.conf lookup
(0) pam: ERROR: pam_authenticate failed: User not known to the underlying authentication module
(0) [pam] = reject
(0) } # authenticate = reject
(0) Failed to authenticate the user
(0) Using Post-Auth-Type Reject
log in /var/log/secure:
radiusd(pam_google_authenticator)[11728]: Accepted google_authenticator for perlingzhao
pam_unix(radiusd:auth): check pass; user unknown
pam_unix(radiusd:auth): authentication failure; logname=root uid=0 euid=0 tty= ruser= rhost=
I know this is because there is no local user; the user info is in LDAP.
Can anyone tell me how to configure this to solve the problem? Thanks.
I can suggest using a PHP script for OTP validation instead of the PAM modules; it does not require real local users, but only verifies the TOTP itself. PHP has LDAP functions as well.
authorize {
    update control {
        Auth-Type := `/usr/bin/php -f /etc/raddb/yourscript.php '%{User-Name}' '%{User-Password}' '%{Client-IP-Address}'`
    }
}
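You can exercise the same exec call by hand to see what FreeRADIUS will receive (the script path is the placeholder from the snippet above; whatever the script prints to stdout becomes the Auth-Type value):
/usr/bin/php -f /etc/raddb/yourscript.php 'perlingzhao' 'testpa271082' '127.0.0.1'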
There is a commercial product that appears to fully meet your requirements.
P.S. I am affiliated with #1

I need to find disabled users in LDAP

I am trying to find out whether a user is disabled in LDAP using the ldapsearch utility, but I have been unsuccessful so far. This is what I have so far:
ldapsearch -h hostname -D 'Service Account' -b 'basedn' sAMAccountName='disabled user' -w 'password'
# extended LDIF
#
# LDAPv3
# base <basedn> with scope subtree
# filter: sAMAccountName=disabled user
# requesting: ALL
#
# search result
search: 2
result: 0 Success
# numResponses: 1
I have even tried with -LLL nsaccountlock and it gives me nothing. It's the same with a random string for the user as well.
I need to find out whether the user I am specifying is an active user, a disabled user, or not a user at all. Am I doing something wrong? Is there another utility I can use to determine if the user is disabled?
You can use this filter:
(&(objectCategory=person)(objectClass=user)(userAccountControl:1.2.840.113556.1.4.803:=2))
It finds all users whose userAccountControl attribute has the ACCOUNTDISABLE bit (0x00000002) set; :1.2.840.113556.1.4.803: is Active Directory's bitwise-AND matching rule.
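To check one specific account, you can combine that clause with the sAMAccountName filter from your original command (host, bind DN, base, and user name are placeholders):
ldapsearch -h hostname -D 'Service Account' -w 'password' -b 'basedn' '(&(objectCategory=person)(objectClass=user)(sAMAccountName=someuser)(userAccountControl:1.2.840.113556.1.4.803:=2))' sAMAccountName userAccountControl
An entry comes back only if the account exists and is disabled; drop the userAccountControl clause to check whether the account exists at all.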

JBoss Fuse JMX not working

I tried to connect to the JMX RMI URL of a JBoss Fuse container to monitor the queues.
The URL does not connect in jconsole:
service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi/camel
I also want to implement this in my own bundle. How do I connect to the MBean server in JBoss Fuse?
Thanks in advance.
IMHO the URL is simply wrong.
You can see the current settings of your server in etc/org.apache.karaf.management.cfg.
For example:
#
# Port number for RMI registry connection
#
rmiRegistryPort = 1099
#
# Host for RMI registry
#
rmiRegistryHost = 0.0.0.0
#
# Port number for RMI server connection
#
rmiServerPort = 44444
#
# Host for RMI server
#
rmiServerHost = 0.0.0.0
#
# Name of the JAAS realm used for authentication
#
jmxRealm = karaf
#
# The service URL for the JMXConnectorServer
#
serviceUrl = service:jmx:rmi://${rmiServerHost}:${rmiServerPort}/jndi/rmi://${rmiRegistryHost}:${rmiRegistryPort}/karaf-${karaf.name}
#
# Whether any threads started for the JMXConnectorServer should be started as daemon threads
#
daemon = true
#
# Whether the JMXConnectorServer should be started in a separate thread
#
threaded = true
#
# The ObjectName used to register the JMXConnectorServer
#
objectName = connector:name=rmi
In my case the URL looks like service:jmx:rmi://0.0.0.0:44444/jndi/rmi://0.0.0.0:1099/karaf-root
P.S. Don't forget to specify a user name and password.
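If you just want to verify the URL before writing any code, you can also hand it to jconsole; if the connection requires credentials, enter the karaf user name and password in jconsole's connection dialog:
jconsole service:jmx:rmi://0.0.0.0:44444/jndi/rmi://0.0.0.0:1099/karaf-root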
Finally solved the issue with the karaf username and password.
Check the username and password in the etc/users.properties file, then use
service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root
and it should work.
import java.util.HashMap;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Connect to the Karaf MBean server with the karaf realm credentials
JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root");
String username = "admin";
String password = "admin";
HashMap<String, String[]> environment = new HashMap<String, String[]>();
environment.put("jmx.remote.credentials", new String[] { username, password });
JMXConnector connector = JMXConnectorFactory.connect(url, environment);
MBeanServerConnection mbeanServer = connector.getMBeanServerConnection();

Drupal 7 not working with drupalvm

I was trying to create a Drupal VM instance running Drupal 7 by changing the "core" and "version" values as suggested in the README, and then running vagrant up, but after doing so it keeps installing Drupal 8 (the default).
Below are the drupal.make.yml and config.yml files that I edited before building the machine.
drupal.make.yml
---
api: 2
# Basic Drush Make file for Drupal. Be sure to update the drupal_major_version
# variable inside config.yml if you change the major version in this file.
# Drupal core (major version, e.g. 6.x, 7.x, 8.x).
core: "7.x"
projects:
  # Core.
  drupal:
    type: "core"
    download:
      # Drupal core branch (e.g. "6.x", "7.x", "8.0.x").
      branch: "7.0.x"
    working-copy: true
  # Other modules.
  devel: "1.x-dev"
config.yml
---
# `vagrant_box` can also be set to geerlingguy/centos6, geerlingguy/centos7,
# geerlingguy/ubuntu1204, parallels/ubuntu-14.04, etc.
vagrant_box: geerlingguy/ubuntu1404
vagrant_user: vagrant
vagrant_synced_folder_default_type: nfs
# If you need to run multiple instances of Drupal VM, set a unique hostname,
# machine name, and IP address for each instance.
vagrant_hostname: drupalvm.dev
vagrant_machine_name: drupalvm
vagrant_ip: 192.168.88.88
# Allow Drupal VM to be accessed via a public network interface on your host.
# Vagrant boxes are insecure by default, so be careful. You've been warned!
# See: https://docs.vagrantup.com/v2/networking/public_network.html
vagrant_public_ip: ""
# A list of synced folders, with the keys 'local_path', 'destination', and
# a 'type' of [nfs|rsync|smb] (leave empty for slow native shares). See
# http://docs.drupalvm.com/en/latest/extras/syncing-folders/ for more info.
vagrant_synced_folders:
  # The first synced folder will be used for the default Drupal installation, if
  # build_makefile: is 'true'.
  - local_path: ~/Documents/projectohri/drupalvm
    destination: /var/www/drupalvm
    type: nfs
    create: true
# Memory and CPU to use for this VM.
vagrant_memory: 1024
vagrant_cpus: 2
# The web server software to use. Can be either 'apache' or 'nginx'.
drupalvm_webserver: apache
# Set this to false if you are using a different site deployment strategy and
# would like to configure 'vagrant_synced_folders' and 'apache_vhosts' manually.
build_makefile: true
drush_makefile_path: /vagrant/drupal.make.yml
# Set this to false if you don't need to install drupal (using the drupal_*
# settings below), but instead copy down a database (e.g. using drush sql-sync).
install_site: true
# Settings for building a Drupal site from a makefile (if 'build_makefile:'
# is 'true').
drupal_major_version: 7
drupal_core_path: "/var/www/drupalvm/drupal"
drupal_domain: "drupalvm.dev"
drupal_site_name: "Drupal"
drupal_install_profile: standard
drupal_enable_modules: [ 'devel' ]
drupal_account_name: admin
drupal_account_pass: admin
drupal_mysql_user: drupal
drupal_mysql_password: drupal
drupal_mysql_database: drupal
# Additional arguments or options to pass to `drush site-install`.
drupal_site_install_extra_args: []
# Cron jobs are added to the root user's crontab. Keys include name (required),
# minute, hour, day, weekday, month, job (required), and state.
drupalvm_cron_jobs: []
# - {
# name: "Drupal Cron",
# minute: "*/30",
# job: "drush -r {{ drupal_core_path }} core-cron"
# }
# Drupal VM automatically creates a drush alias file in your ~/.drush folder if
# this variable is 'true'.
configure_local_drush_aliases: true
# Apache VirtualHosts. Add one for each site you are running inside the VM. For
# multisite deployments, you can point multiple servernames at one documentroot.
# View the geerlingguy.apache Ansible Role README for more options.
apache_vhosts:
  - servername: "{{ drupal_domain }}"
    documentroot: "{{ drupal_core_path }}"
    extra_parameters: |
      ProxyPassMatch ^/(.*\.php(/.*)?)$ "fcgi://127.0.0.1:9000{{ drupal_core_path }}"
  - servername: "adminer.drupalvm.dev"
    documentroot: "/opt/adminer"
  - servername: "xhprof.drupalvm.dev"
    documentroot: "/usr/share/php/xhprof_html"
  - servername: "pimpmylog.drupalvm.dev"
    documentroot: "/usr/share/php/pimpmylog"
apache_remove_default_vhost: true
apache_mods_enabled:
- expires.load
- ssl.load
- rewrite.load
# Nginx hosts. Each site will get a server entry using the configuration defined
# here. Set the 'is_php' property for document roots that contain PHP apps like
# Drupal.
nginx_hosts:
  - server_name: "{{ drupal_domain }}"
    root: "{{ drupal_core_path }}"
    is_php: true
  - server_name: "adminer.drupalvm.dev"
    root: "/opt/adminer"
    is_php: true
  - server_name: "xhprof.drupalvm.dev"
    root: "/usr/share/php/xhprof_html"
    is_php: true
  - server_name: "pimpmylog.drupalvm.dev"
    root: "/usr/share/php/pimpmylog"
    is_php: true
nginx_remove_default_vhost: true
# MySQL Databases and users. If build_makefile: is true, first database will
# be used for the makefile-built site.
mysql_databases:
  - name: "{{ drupal_mysql_database }}"
    encoding: utf8
    collation: utf8_general_ci
mysql_users:
  - name: "{{ drupal_mysql_user }}"
    host: "%"
    password: "{{ drupal_mysql_password }}"
    priv: "{{ drupal_mysql_database }}.*:ALL"
# Comment out any extra utilities you don't want to install. If you don't want
# to install *any* extras, set this value to an empty set, e.g. `[]`.
installed_extras:
- adminer
- drupalconsole
- mailhog
- memcached
# - nodejs
- pimpmylog
# - redis
# - ruby
# - selenium
# - solr
- varnish
- xdebug
- xhprof
# Add any extra apt or yum packages you would like installed.
extra_packages:
- unzip
# `nodejs` must be in installed_extras for this to work.
nodejs_version: "0.12"
nodejs_npm_global_packages: []
# `ruby` must be in installed_extras for this to work.
ruby_install_gems_user: "{{ vagrant_user }}"
ruby_install_gems: []
# You can configure almost anything else on the server in the rest of this file.
extra_security_enabled: false
drush_version: master
drush_keep_updated: true
drush_composer_cli_options: "--prefer-dist --no-interaction"
firewall_allowed_tcp_ports:
- "22"
- "25"
- "80"
- "81"
- "443"
- "4444"
- "8025"
- "8080"
- "8443"
- "8983"
firewall_log_dropped_packets: false
# PHP Configuration. Currently-supported versions: 5.5, 5.6, 7.0.
php_version: "5.6"
php_memory_limit: "192M"
php_display_errors: "On"
php_display_startup_errors: "On"
php_enable_php_fpm: true
php_realpath_cache_size: "1024K"
php_sendmail_path: "/usr/sbin/ssmtp -t"
php_opcache_enabled_in_ini: true
php_opcache_memory_consumption: "192"
php_opcache_max_accelerated_files: 4096
php_max_input_vars: "4000"
composer_path: /usr/bin/composer
composer_home_path: '/home/vagrant/.composer'
# composer_global_packages:
# - { name: phpunit/phpunit, release: '@stable' }
# Run specified scripts after VM is provisioned. Path is relative to the
# `provisioning/playbook.yml` file.
post_provision_scripts: []
# - "../examples/scripts/configure-solr.sh"
# MySQL Configuration.
mysql_root_password: root
mysql_slow_query_log_enabled: true
mysql_slow_query_time: 2
mysql_wait_timeout: 300
adminer_install_filename: index.php
# Varnish Configuration.
varnish_listen_port: "81"
varnish_default_vcl_template_path: templates/drupalvm.vcl.j2
varnish_default_backend_host: "127.0.0.1"
varnish_default_backend_port: "80"
# Pimp my Log settings.
pimpmylog_install_dir: /usr/share/php/pimpmylog
pimpmylog_grant_all_privs: true
# XDebug configuration. XDebug is disabled by default for better performance.
php_xdebug_default_enable: 0
php_xdebug_coverage_enable: 0
php_xdebug_cli_enable: 1
php_xdebug_remote_enable: 1
php_xdebug_remote_connect_back: 1
# Use PHPSTORM for PHPStorm, sublime.xdebug for Sublime Text.
php_xdebug_idekey: PHPSTORM
php_xdebug_max_nesting_level: 256
# Solr Configuration (if enabled above).
solr_version: "4.10.4"
solr_xms: "64M"
solr_xmx: "128M"
# Selenium configuration.
selenium_version: 2.46.0
# Other configuration.
known_hosts_path: ~/.ssh/known_hosts
"7.0.x" is not a valid Drupal core branch. Re-read the comment above that line in drupal.make.yml and change it to "7.x".
Also, be sure to run vagrant destroy to remove all traces of the old instance. It could be that it isn't downloading a new copy, just reusing the Drupal 8 codebase it already downloaded.
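A minimal rebuild sequence (run from the Drupal VM directory) would be something like:
vagrant destroy -f   # remove the old instance and its Drupal 8 checkout
vagrant up           # re-provision using the corrected drupal.make.yml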

Geonetwork database with LDAP connection error

I'm trying to connect my LDAP with the GeoNetwork database, but every time I log in it doesn't show the administrator button. When I check the database, it is empty. I am using geOrchestra 13.09 in a localhost environment; geoserver and mapfishapp are running well and log in without a problem.
My config-security.properties is:
Core security properties
logout.success.url=/index.html
passwordSalt=secret-hash-salt=
# LDAP Connection Settings
ldap.base.provider.url=ldap://localhost:389
ldap.base.dn=dc=geobolivia,dc=gob,dc=bo
ldap.security.principal=cn=admin,dc=geobolivia,dc=gob,dc=bo
ldap.security.credentials=geobolivia
ldap.base.search.base=ou=users
ldap.base.dn.pattern=uid={0},${ldap.base.search.base}
#ldap.base.dn.pattern=mail={0},${ldap.base.search.base}
# Define if groups and profile information are imported from LDAP. If not, local database is used.
# When a new user connect first, the default profile is assigned. A user administrator can update
# privilege information.
ldap.privilege.import=true
ldap.privilege.export=true
ldap.privilege.create.nonexisting.groups=false
# Define the way to extract profiles and privileges from the LDAP
# 1. Define one attribute for the profile and one for groups in config-security-overrides.properties
# 2. Define one attribute for the privilege and define a custom pattern (use LDAPUserDetailsContextMapperWithPa$
ldap.privilege.pattern=
#ldap.privilege.pattern=CAT_(.*)_(.*)
ldap.privilege.pattern.idx.group=1
ldap.privilege.pattern.idx.profil=2
# 3. Define custom location for extracting group and role (no support for group/role combination) (use LDAPUser$
#ldap.privilege.search.group.attribute=cn
#ldap.privilege.search.group.object=ou=groups
#ldap.privilege.search.group.query=(&(objectClass=posixGroup)(memberUid={0})(cn=EL_*))
#ldap.privilege.search.group.pattern=EL_(.*)
#ldap.privilege.search.privilege.attribute=cn
#ldap.privilege.search.privilege.object=ou=groups
#ldap.privilege.search.privilege.query=(&(objectClass=posixGroup)(memberUid={0})(cn=SV_*))
#ldap.privilege.search.privilege.pattern=SV_(.*)
ldap.privilege.search.group.attribute=cn
ldap.privilege.search.group.object=ou=groups
ldap.privilege.search.group.query=(&(objectClass=posixGroup)(memberUid={1})(cn=EL_*))
ldap.privilege.search.group.pattern=EL_(.*)
ldap.privilege.search.privilege.attribute=cn
ldap.privilege.search.privilege.object=ou=groups
ldap.privilege.search.privilege.query=(&(objectClass=posixGroup)(memberUid={1})(cn=SV_ADMIN))
ldap.privilege.search.privilege.pattern=SV_(.*)
# Run LDAP sync every day at 23:30
# Run LDAP sync every day at 23:30
#ldap.sync.cron=0 30 23 * * ?
ldap.sync.cron=0 * * * * ?
#ldap.sync.cron=0 0/1 * 1/1 * ? *
ldap.sync.startDelay=60000
ldap.sync.user.search.base=${ldap.base.search.base}
ldap.sync.user.search.filter=(&(objectClass=*)(mail=*@*)(givenName=*))
ldap.sync.user.search.attribute=uid
ldap.sync.group.search.base=ou=groups
ldap.sync.group.search.filter=(&(objectClass=posixGroup)(cn=EL_*))
ldap.sync.group.search.attribute=cn
ldap.sync.group.search.pattern=EL_(.*)
# CAS properties
cas.baseURL=https://localhost:8443/cas
cas.ticket.validator.url=${cas.baseURL}
cas.login.url=${cas.baseURL}/login
cas.logout.url=${cas.baseURL}/logout?url=${geonetwork.https.url}/
<import resource="config-security-cas.xml"/>
<import resource="config-security-cas-ldap.xml"/>
# either the hardcoded url to the server
# or if has the form it will be replaced with
# the server details from the server configuration
geonetwork.https.url=https://localhost/geonetwork-private/
#geonetwork.https.url=https://geobolivia.gob.bo:443
#geonetwork.https.url=https://localhost:443
The geonetwork.log shows these results:
2014-03-11 13:41:00,004 DEBUG [geonetwork.ldap] - LDAPSynchronizerJob starting ...
2014-03-11 13:41:00,006 DEBUG [org.springframework.ldap.core.support.AbstractContextSource] - Got Ldap context on server 'ldap://localhost:389/dc=geobolivia,dc=gob,dc=bo'
2014-03-11 13:41:00,008 DEBUG [org.springframework.beans.factory.support.DefaultListableBeanFactory] - Returning cached instance of singleton bean 'resourceManager'
2014-03-11 13:41:00,026 DEBUG [geonetwork.ldap] - LDAPSynchronizerJob done.
2014-03-11 13:41:26,429 INFO [geonetwork.lucene] - Done running PurgeExpiredSearchersTask. 0 versions still cached.
2014-03-11 13:41:56,430 INFO [geonetwork.lucene] - Done running PurgeExpiredSearchersTask. 0 versions still cached.
and the error that appears in the geonetwork.log is:
2014-03-11 13:44:06,426 INFO [jeeves.service] - Dispatching : xml.search.keywords
2014-03-11 13:44:06,427 ERROR [jeeves.service] - Exception when executing service
2014-03-11 13:44:06,427 ERROR [jeeves.service] - (C) Exc : java.lang.IllegalArgumentException: The thesaurus external.theme.inspire-service-taxonomy does not exist, there for the query cannot be excuted: 'Query [query=SELECT DISTINCT id,uppc,lowc,broader,spa_prefLabel,spa_note FROM {id} rdf:type {skos:Concept},[{id} gml:BoundedBy {} gml:upperCorner {uppc}],[{id} gml:BoundedBy {} gml:lowerCorner {lowc}],[{id} skos:broader {broader}],[{id} skos:prefLabel {spa_prefLabel} WHERE lang(spa_prefLabel) LIKE "es" IGNORE CASE],[{id} skos:scopeNote {spa_note} WHERE lang(spa_note) LIKE "es" IGNORE CASE] WHERE (spa_prefLabel LIKE "***" IGNORE CASE OR id LIKE "*") LIMIT 35 USING NAMESPACE skos=<http://www.w3.org/2004/02/skos/core#>,gml=<http://www.opengis.net/gml#>, interpreter=KeywordResultInterpreter]'
The version of GeoNetwork currently used in geOrchestra does not show the "administration" button on its first page. You have to fire a search, then from the "other actions" menu at the top right you should be able to get to the administration interface. We know that it is not very intuitive, but it should change in the coming months (we recently planned an upgrade of GeoNetwork before the end of the year).
Did you solve it? In your config-security.properties, look at this line:
ldap.base.dn.pattern=uid={0},${ldap.base.search.base}
{0} is replaced at login time with the username typed in the GeoNetwork sign-in screen, so the resulting DN has to match how your user entries are stored under ou=users.
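To rule out problems on the LDAP side, you can test the same bind and user lookup that GeoNetwork performs, using the values from config-security.properties (the uid is a placeholder):
ldapsearch -x -H ldap://localhost:389 -D 'cn=admin,dc=geobolivia,dc=gob,dc=bo' -w 'geobolivia' -b 'ou=users,dc=geobolivia,dc=gob,dc=bo' '(uid=someuser)'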
