LDAP error code 32 - Active Directory

I'm trying to synchronize OpenLDAP and Active Directory. To do so I'm using a program called LSC-Project, which is designed for exactly this sort of thing.
I have configured the program as best I can, but I can't find a way to shake off the following error:
javax.naming.NameNotFoundException: [LDAP: error code 32 - 0000208D: NameErr: DSID-031001CD,
problem 2001 (NO_OBJECT), data 0, best match of:
'DC=domname,DC=com'
]; remaining name 'uid=user1,ou=Users'
May 09 15:19:25 - ERROR - Error while synchronizing ID uid=user1,ou=Users: java.lang.Exception: Technical problem while applying modifications to directory
dn: uid=user1,ou=Users,dc=domname,dc=com
changetype: add
userPassword: 3+kU2th/WMo/v553A24a3SBw2kU=
objectClass: uid
This is the configuration file that the program runs on:
##############################
# Destination LDAP directory #
##############################
dst.java.naming.provider.url = ldap://192.168.1.3:389/dc=Windows,dc=com
dst.java.naming.security.authentication = simple
dst.java.naming.security.principal = cn=Administrator,cn=Users,dc=Windows,dc=com
dst.java.naming.security.credentials = 11111
dst.java.naming.referral = ignore
dst.java.naming.ldap.derefAliases = never
dst.java.naming.factory.initial = com.sun.jndi.ldap.LdapCtxFactory
dst.java.naming.ldap.version = 3
dst.java.naming.ldap.pageSize = 1000
#########################
# Source LDAP directory #
#########################
src.java.naming.provider.url = ldap://192.168.1.2:389/dc=Linux,dc=com
src.java.naming.security.authentication = simple
src.java.naming.security.principal = uid=root,ou=users,dc=Linux,dc=com
src.java.naming.security.credentials = 11111
src.java.naming.referral = ignore
src.java.naming.ldap.derefAliases = never
src.java.naming.factory.initial = com.sun.jndi.ldap.LdapCtxFactory
src.java.naming.ldap.version = 3
#######################
# Tasks configuration #
#######################
lsc.tasks = Administrator
lsc.tasks.Administrator.srcService = org.lsc.jndi.SimpleJndiSrcService
lsc.tasks.Administrator.srcService.baseDn = ou=users
lsc.tasks.Administrator.srcService.filterAll = (&(objectClass=person))
lsc.tasks.Administrator.srcService.pivotAttrs = uid
lsc.tasks.Administrator.srcService.filterId = (&(objectClass=person)(uid={uid}))
lsc.tasks.Administrator.srcService.attrs = description uid userPassword
lsc.tasks.Administrator.dstService = org.lsc.jndi.SimpleJndiDstService
lsc.tasks.Administrator.dstService.baseDn = cn=Users
lsc.tasks.Administrator.dstService.filterAll = (&(cn=*)(objectClass=organizationalPerson))
lsc.tasks.Administrator.dstService.pivotAttrs = cn, top, person, user, organizationalPerson
lsc.tasks.Administrator.dstService.filterId = (&(objectClass=user) (sAMAccountName={cn}))
lsc.tasks.Administrator.dstService.attrs = description cn userPassword objectClass
lsc.tasks.Administrator.bean = org.lsc.beans.SimpleBean
lsc.tasks.Administrator.dn = "uid=" + srcBean.getAttributeValueById("uid") + ",ou=Users"
dn.real_root = dc=Domname,dc=com
#############################
# Syncoptions configuration #
#############################
lsc.syncoptions.Administrator = org.lsc.beans.syncoptions.PropertiesBasedSyncOptions
lsc.syncoptions.Administrator.default.action = M
lsc.syncoptions.Administrator.objectClass.action = M
lsc.syncoptions.Administrator.objectClass.force_value = srcBean.getAttributeValueById("cn").toUpperCase()
lsc.syncoptions.Administrator.userPassword.default_value = SecurityUtils.hash(SecurityUtils.HASH_SHA1, "defaultPassword")
lsc.syncoptions.Administrator.default.delimiter=;
lsc.syncoptions.Administrator.objectClass.force_value = "top";"user";"person";"organizationalPerson"
lsc.syncoptions.Administrator.userPrincipalName.force_value = srcBean.getAttributeValueById("uid") + "@Domname.com"
lsc.syncoptions.Administrator.userAccountControl.create_value = AD.userAccountControlSet ( "0", [AD.UAC_SET_NORMAL_ACCOUNT])
I suspect it has something to do with the baseDn of the task configuration on the source side.
The OSes are Ubuntu 10.04 and Windows Server 2003.
Someone suggested making a manual sync between them, but I have not found any guide for doing so, and this program is pretty much the only one that claims to do this kind of job free of charge.

The baseDn should be the distinguished name of the base object of the search, for example, ou=users,dc=domname,dc=com.
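Applied to the posted configuration, that would look something like the lines below. This is a sketch: it assumes the suffixes in the provider URLs really are the naming contexts, and the question mixes dc=Windows,dc=com with dc=domname,dc=com, so substitute whichever suffix is real.
lsc.tasks.Administrator.srcService.baseDn = ou=users,dc=Linux,dc=com
lsc.tasks.Administrator.dstService.baseDn = cn=Users,dc=domname,dc=com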
See also:
LDAP: Mastering Search Filters
LDAP: Search best practices
LDAP: Programming practices

The main reason for NameNotFoundException is that the object you're searching for doesn't exist, or the container in which you are searching is not correct.
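A quick way to confirm which container actually exists is a base-scope search against the directory. This is a sketch; substitute your own host and bind credentials:
ldapsearch -x -H ldap://192.168.1.3:389 \
  -D "cn=Administrator,cn=Users,dc=domname,dc=com" -W \
  -b "cn=Users,dc=domname,dc=com" -s base dn
If this returns an entry while the same search with -b "ou=Users,dc=domname,dc=com" fails with "No such object (32)", then the DN you are writing to simply does not exist.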

In the case of Spring LDAP, we used to get this error when we specified the baseDn both in the context file (the LdapContextSource bean) and again in the createUser code that builds the user DN. There is no need to specify the dc components again in buildUserDn():
protected Name buildUserDn(String userName) {
    DistinguishedName dn = new DistinguishedName();
    // only cn is required, as the base DN is already specified in the context file
    dn.add("cn", userName);
    return dn;
}

In Active Directory the Users catalog is a container class, not an organizationalUnit, so you should use: cn=Users,dc=domname,dc=com
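Applied to the posted lsc.properties, the task would then build its target DN under that container instead of the non-existent ou=Users. A sketch (note that AD user entries are conventionally named with cn rather than uid):
lsc.tasks.Administrator.dn = "cn=" + srcBean.getAttributeValueById("uid") + ",cn=Users"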

Terraform - Azure - Using "azurerm_windows_virtual_machine" and "azurerm_mssql_virtual_machine" together - but SQL Storage isn't getting configured

This is regarding Terraform on Azure. In my previous project I used the legacy "azurerm_virtual_machine" resource + an ARM template to provision the "Microsoft.SqlVirtualMachine/SqlVirtualMachines" resource and have the data disks and LUNs configured.
That works pretty well.
In my current project, we are using the newer resources "azurerm_windows_virtual_machine" + "azurerm_mssql_virtual_machine" together to spin up the SQL VMs. However, it's been a dud so far.
The Terraform docs example uses the legacy resource "azurerm_virtual_machine".
Problems
I didn't find a way to describe the data disk and LUN ID in "azurerm_windows_virtual_machine". As a result:
When I don't mention the storage_configuration block in "azurerm_mssql_virtual_machine", the Azure portal shows "Drive is not found in the volumes list." under the SQL virtual machine resource (not the virtual machine resource) > Configuration section. I have attached a screenshot.
If I try to mention the data disk and LUN in the storage_configuration block of "azurerm_mssql_virtual_machine", the provisioning fails with the error below:
creating Sql Virtual Machine (Sql Virtual Machine Name "ASQLVM" / Resource Group "a-resource-group"):
sqlvirtualmachine.SQLVirtualMachinesClient#CreateOrUpdate: Failure sending request: StatusCode=0 --
Original Error: Code="InvalidDefaultFilePath" Message="Invalid Default File Path"
Is there a good way to provision SQL virtual machines using the new "azurerm_windows_virtual_machine" + "azurerm_mssql_virtual_machine" together?
Check out the following code. I hope it answers your questions if you haven't already succeeded.
resource "azurerm_windows_virtual_machine" "vm" {
count = length(var.instances)
name = upper(element(var.instances, count.index))
location = azurerm_resource_group.resourcegroup[count.index].location
resource_group_name = azurerm_resource_group.resourcegroup[count.index].name
network_interface_ids = [azurerm_network_interface.nic[count.index].id]
size = var.instancesize
zone = var.instancezone
admin_username = var.vmadmin
admin_password = data.azurerm_key_vault_secret.vmadminpwd.value
enable_automatic_updates = "false"
patch_mode = "Manual"
provision_vm_agent = "true"
tags = var.tags
source_image_reference {
publisher = "MicrosoftSQLServer"
offer = "sql2019-ws2019"
sku = "enterprise"
version = "latest"
}
os_disk {
name = "${element(var.instances, count.index)}-osdisk"
caching = "ReadWrite"
storage_account_type = "StandardSSD_LRS"
disk_size_gb = 250
}
}
# add a data disk - we were going to iterate through a collection, but this is easier for now
resource "azurerm_managed_disk" "datadisk" {
count = length(var.instances)
name = "${azurerm_windows_virtual_machine.vm[count.index].name}-data-disk01"
location = azurerm_resource_group.resourcegroup[count.index].location
resource_group_name = azurerm_resource_group.resourcegroup[count.index].name
storage_account_type = "Premium_LRS"
zones = [var.instancezone]
create_option = "Empty"
disk_size_gb = 1000
tags = var.tags
}
resource "azurerm_virtual_machine_data_disk_attachment" "datadisk_attach" {
count = length(var.instances)
managed_disk_id = azurerm_managed_disk.datadisk[count.index].id
virtual_machine_id = azurerm_windows_virtual_machine.vm[count.index].id
lun = 1
caching = "ReadWrite"
}
# add a log disk - we were going to iterate through a collection, but this is easier for now
resource "azurerm_managed_disk" "logdisk" {
count = length(var.instances)
name = "${azurerm_windows_virtual_machine.vm[count.index].name}-log-disk01"
location = azurerm_resource_group.resourcegroup[count.index].location
resource_group_name = azurerm_resource_group.resourcegroup[count.index].name
storage_account_type = "Premium_LRS"
zones = [var.instancezone]
create_option = "Empty"
disk_size_gb = 500
tags = var.tags
}
resource "azurerm_virtual_machine_data_disk_attachment" "logdisk_attach" {
count = length(var.instances)
managed_disk_id = azurerm_managed_disk.logdisk[count.index].id
virtual_machine_id = azurerm_windows_virtual_machine.vm[count.index].id
lun = 2
caching = "ReadWrite"
}
# configure the SQL side of the deployment
resource "azurerm_mssql_virtual_machine" "sqlvm" {
count = length(var.instances)
virtual_machine_id = azurerm_windows_virtual_machine.vm[count.index].id
sql_license_type = "PAYG"
r_services_enabled = true
sql_connectivity_port = 1433
sql_connectivity_type = "PRIVATE"
sql_connectivity_update_username = var.sqladmin
sql_connectivity_update_password = data.azurerm_key_vault_secret.sqladminpwd.value
#The storage_configuration block supports the following:
storage_configuration {
disk_type = "NEW" # (Required) The type of disk configuration to apply to the SQL Server. Valid values include NEW, EXTEND, or ADD.
storage_workload_type = "OLTP" # (Required) The type of storage workload. Valid values include GENERAL, OLTP, or DW.
# The storage_settings block supports the following:
data_settings {
default_file_path = var.sqldatafilepath # (Required) The SQL Server default path
luns = [azurerm_virtual_machine_data_disk_attachment.datadisk_attach[count.index].lun]
}
log_settings {
default_file_path = var.sqllogfilepath # (Required) The SQL Server default path
luns = [azurerm_virtual_machine_data_disk_attachment.logdisk_attach[count.index].lun] # (Required) A list of Logical Unit Numbers for the disks.
}
# temp_db_settings {
# default_file_path = var.sqltempdbfilepath #- (Required) The SQL Server default path
# luns = [3] #- (Required) A list of Logical Unit Numbers for the disks.
# }
}
}
Contrary to the answer above, the proposed code does not solve the problem.
The problem is:
There's no clear format for how to pass the default path to the Terraform SQL resource.
resource "azurerm_mssql_virtual_machine" "sqlserver" {
virtual_machine_id = azurerm_windows_virtual_machine.win.id
sql_license_type = "PAYG"
r_services_enabled = true
auto_patching {
day_of_week = "Sunday"
maintenance_window_duration_in_minutes = 60
maintenance_window_starting_hour = 2
}
storage_configuration {
disk_type = "NEW"
storage_workload_type = "OLTP"
data_settings {
default_file_path = "D:\\Data"
luns = [0]
}
log_settings {
default_file_path = "E:\\log"
luns = [1]
}
temp_db_settings {
default_file_path = "F:\\bin"
luns = [2]
}
}
As you can see here, I'm trying to define the correct path setting, but it's not working.
The goal is to attach the disks and format them so that the SQL resource can take control of the path/disk.
You need to use the letter of the temporary drive (D:) for TempDB and omit E: because it's the DVD drive.
You need to do it like this:
storage_configuration {
  disk_type             = "NEW"
  storage_workload_type = "OLTP"
  data_settings {
    default_file_path = "F:\\Data"
    luns              = [0]
  }
  log_settings {
    default_file_path = "G:\\Log"
    luns              = [1]
  }
  temp_db_settings {
    default_file_path = "D:\\TempDb"
    luns              = []
  }
}

Snowflake Python Connector: Copy Command Status and Error Handling

According to the Snowflake docs, when a user executes a copy command it will return 1 of 3 status values:
loaded
load failed
partially loaded
My question is: if I use the Python Snowflake Connector (see the example code below) to execute a copy command, is an exception raised if the status returned is load failed or partially loaded?
Thank you!
copy_dml = 'copy into database.schema.table ' \
           'from @fully_qualified_stage pattern = \'.*' + table_name + '.*[.]json\' ' \
           'file_format = (format_name = fully_qualified_json_format) ' \
           'force = true;'

try:
    import snowflake.connector
    # ------------------------------------------------------------------
    # snowflake variables
    snowflake_warehouse = credentials.iloc[0]['snowflake_warehouse']
    snowflake_account = credentials.iloc[0]['snowflake_account']
    snowflake_role = credentials.iloc[0]['snowflake_role']
    snowflake_username = credentials.iloc[0]['Username']
    snowflake_password = credentials.iloc[0]['Password']
    snowflake_connection = ''
    cs = ''  # snowflake connection cursor
    exec_copy_dml = ''
    copy_result_field_metadata = ''
    copy_result = ''
    snowflake_copy_result_df = ''
    # ------------------------------------------------------------------
    # load JSON file(s) into Snowflake
    snowflake_connection = snowflake.connector.connect(
        user = snowflake_username,
        password = snowflake_password,
        account = snowflake_account,
        warehouse = snowflake_warehouse,
        role = snowflake_role)
    cs = snowflake_connection.cursor()
    exec_copy_dml = cs.execute(copy_dml)
    copy_result = exec_copy_dml.fetchall()
    copy_result_field_metadata = cs.description
    snowflake_copy_result_df = snowflake_results_df(copy_result_field_metadata, copy_result)
except snowflake.connector.errors.ProgrammingError as copy_error:
    copy_exception_message = "There was a problem loading JSON files to Snowflake, " + \
                             "a snowflake.connector.errors.ProgrammingError exception was raised."
    print(copy_exception_message)
    raise
except Exception as error_message:
    raise
finally:
    snowflake_connection.close()
I believe it won't raise an exception based on the load status; you have to check the status yourself and take the necessary action if required.
After you issue your COPY INTO statement, you can run the following query:
SELECT * FROM TABLE(VALIDATE(TABLE_NAME, job_id => '_last'))
This will give you details on the files that you were trying to load. It will normally return empty results, unless you encountered issues during the upload.
You can save these results in an object and make the necessary control adjustments.
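As a sketch of both checks, building on the cursor and copy_result variables from the question (the 'status' column name and values like LOADED follow the documented COPY INTO output; adjust the table name to your own):
# Map column names from the cursor metadata, then flag files that did not load.
colnames = [col[0].lower() for col in copy_result_field_metadata]
status_idx = colnames.index('status')
failed = [row for row in copy_result if str(row[status_idx]).upper() != 'LOADED']
if failed:
    raise RuntimeError("COPY completed but %d file(s) were not fully loaded: %s"
                       % (len(failed), failed))

# Then ask Snowflake for the rejected rows of the last COPY job, if any.
cs.execute("SELECT * FROM TABLE(VALIDATE(database.schema.table, job_id => '_last'))")
rejected_rows = cs.fetchall()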

Player command failed: Premium required. However, I have Premium

I am using a family account (Premium) and this code returns a 'Premium required' error. My code is as follows:
device_id = '0d1841b0976bae2a3a310dd74c0f3df354899bc8'

def playSpotify():
    client_credentials_manager = SpotifyClientCredentials(client_id='<REDACTED>', client_secret='<REDACTED>')
    sp = spotipy.Spotify(client_credentials_manager=client_credentials_manager)
    playlists = sp.user_playlists('gh8gflxedxmp4tv2he2gp92ev')
    #while playlists:
    #    for i, playlist in enumerate(playlists['items']):
    #        print("%4d %s %s" % (i + 1 + playlists['offset'], playlist['uri'], playlist['name']))
    #    if playlists['next']:
    #        playlists = sp.next(playlists)
    #    else:
    #        playlists = None
    #sp.shuffle(true, device_id=device_id)
    #sp.repeat(true, device_id=device_id)
    sp.start_playback(device_id=device_id, context_uri='spotify:playlist:4ndG2qFEFt1YYcHYt3krjv')
When using SpotifyClientCredentials, the token that is generated doesn't belong to any user but to an app, hence the error message.
What you need to do is use SpotifyOAuth instead. So to initialize Spotipy, just do:
sp = spotipy.Spotify(auth_manager=spotipy.SpotifyOAuth())
This will open a browser tab and require you to sign in to your account.
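For playback control specifically, the Spotify Web API also requires the user-modify-playback-state scope, so it is worth requesting it explicitly. A minimal sketch (Spotipy reads the client ID, secret, and redirect URI from the SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET, and SPOTIPY_REDIRECT_URI environment variables if you don't pass them):
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# Request the scope needed by start_playback; the browser sign-in happens on first use.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="user-modify-playback-state"))
sp.start_playback(device_id=device_id, context_uri='spotify:playlist:4ndG2qFEFt1YYcHYt3krjv')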

tac_plus Active Directory config

I seem to be having an issue with the pro-bono tac_plus configuration.
My switch is giving me the following log message:
May 4 20:58:52 sv5-c1-r104-ae02 Aaa: %AAA-4-EXEC_AUTHZ_FAILED: User jdambly failed authorization to start a shell
If I look at the tac_plus logs, it looks like my group mapping is not configured correctly. Here is the log:
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: Start authorization request
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: cfg_get: checking user/group jdambly, tag (NULL)
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: cfg_get: checking user/group jdambly, tag (NULL)
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: user 'jdambly' found
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: cfg_get: checking user/group jdambly, tag (NULL)
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: jdambly@192.168.0.19: not found: svcname=shell@world protocol=
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: jdambly@192.168.0.19: not found: svcname=shell protocol=
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: jdambly@192.168.0.19: svcname=shell protocol= not found, default is <unknown>
May 4 14:04:22 neteng tac_plus[14476]: 1/9a920270: Writing AUTHOR/FAIL size=18
Here is my config:
id = tac_plus {
    debug = PACKET AUTHEN AUTHOR MAVIS
    access log = /var/log/tac_plus/access.log
    accounting log = /var/log/tac_plus/acct.log
    authorization log = /var/log/tac_plus/auth.log
    mavis module = external {
        setenv LDAP_SERVER_TYPE = "microsoft"
        #setenv LDAP_HOSTS = "ldaps://xxxxxx:3268"
        setenv LDAP_HOSTS = "xxxxxx:3268"
        setenv LDAP_SCOPE = sub
        setenv LDAP_BASE = "dc=nskope,dc=net"
        setenv LDAP_FILTER = "(&(objectclass=user)(sAMAccountName=%s))"
        setenv LDAP_USER = "xxxx@nskope.net"
        setenv LDAP_PASSWD = "xxxxxxxx"
        #setenv AD_GROUP_PREFIX = devops
        #setenv REQUIRE_AD_GROUP_PREFIX = 1
        #setenv USE_TLS = 0
        exec = /usr/local/lib/mavis/mavis_tacplus_ldap.pl
    }
    user backend = mavis
    login backend = mavis
    pap backend = mavis
    skip missing groups = yes
    host = world {
        address = 0.0.0.0/0
        prompt = "Welcome\n"
        key = cisco
    }
    group = devops {
        default service = permit
        service = shell {
            default command = permit
            default attribute = permit
            set priv-lvl = 15
        }
    }
}
I'm trying to map the AD group devops to the group in the config, but I think that's failing, and I don't get why.
So, long story short, I got this working using the following config:
#!../../../sbin/tac_plus
id = spawnd {
    listen = { port = 49 }
    spawn = {
        instances min = 1
        instances max = 10
    }
    background = no
}
id = tac_plus {
    debug = PACKET AUTHEN AUTHOR MAVIS
    access log = /var/log/tac_plus/access.log
    accounting log = /var/log/tac_plus/acct.log
    authorization log = /var/log/tac_plus/auth.log
    mavis module = external {
        setenv LDAP_SERVER_TYPE = "microsoft"
        #setenv LDAP_HOSTS = "ldaps://xxxxxxxxx:3268"
        setenv LDAP_HOSTS = "xxxxxxxxx:3268"
        #setenv LDAP_SCOPE = sub
        setenv LDAP_BASE = "cn=Users,dc=nskope,dc=net"
        setenv LDAP_FILTER = "(&(objectclass=user)(sAMAccountName=%s))"
        setenv LDAP_USER = "xxxxxxxx"
        setenv LDAP_PASSWD = "xxxxxxxx"
        #setenv FLAG_FALLTHROUGH = 1
        setenv UNLIMIT_AD_GROUP_MEMBERSHIP = "1"
        #setenv EXPAND_AD_GROUP_MEMBERSHIP = 1
        #setenv FLAG_USE_MEMBEROF = 1
        setenv AD_GROUP_PREFIX = ""
        #setenv REQUIRE_AD_GROUP_PREFIX = 1
        #setenv USE_TLS = 0
        exec = /usr/local/lib/mavis/mavis_tacplus_ldap.pl
    }
    user backend = mavis
    login backend = mavis
    pap backend = mavis
    skip missing groups = yes
    host = world {
        address = 0.0.0.0/0
        #prompt = "Welcome\n"
        key = cisco
    }
    group = devops {
        default service = permit
        service = shell {
            default command = permit
            default attribute = permit
            set priv-lvl = 15
        }
    }
}
What really did the trick was adding:
setenv UNLIMIT_AD_GROUP_MEMBERSHIP = "1"
setenv AD_GROUP_PREFIX = ""
With these settings it's not looking for a prefix on all the AD groups, which allows a direct mapping of the AD group to the group configured in this file; in my case the group is called devops. Also note that I had to use quotes around the 1: without the quotes it does not set the UNLIMIT_AD_GROUP_MEMBERSHIP variable to one, so watch out for that. Hopefully this can help someone else so they do not have to go through all the pain I did ;)

JULI Tomcat 7 logging.properties

I have a jar that contains a CustomLoginModule to perform JAAS authorization. This jar is located in ${CATALINA_BASE}/lib. I have an org.apache.juli.logging.Log object that performs logging in the module at different levels. I would like to have a log file, e.g. jaas.log, where the module's logs are written, instead of having them in catalina.out or catalina.log. Here is my logging.properties file. With this configuration I am able to create jaas.log, but it stays empty and all logging goes to catalina.out and catalina.log. Can anybody help with this? Thank you very much.
logging.properties file:
handlers = 1catalina.org.apache.juli.FileHandler, 2localhost.org.apache.juli.FileHandler, 3manager.org.apache.juli.FileHandler, 4host-manager.org.apache.juli.FileHandler, 5jaasauth.org.apache.juli.FileHandler
#, java.util.logging.ConsoleHandler
.handlers = 1catalina.org.apache.juli.FileHandler, 5jaasauth.org.apache.juli.FileHandler
#.handlers = 1catalina.org.apache.juli.FileHandler, java.util.logging.ConsoleHandler, 5jaasauth.org.apache.juli.FileHandler
############################################################
# Handler specific properties.
# Describes specific configuration info for Handlers.
############################################################
1catalina.org.apache.juli.FileHandler.level = FINE
1catalina.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
1catalina.org.apache.juli.FileHandler.prefix = catalina.
2localhost.org.apache.juli.FileHandler.level = FINE
2localhost.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
2localhost.org.apache.juli.FileHandler.prefix = localhost.
3manager.org.apache.juli.FileHandler.level = FINE
3manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
3manager.org.apache.juli.FileHandler.prefix = manager.
4host-manager.org.apache.juli.FileHandler.level = FINE
4host-manager.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
4host-manager.org.apache.juli.FileHandler.prefix = host-manager.
5jaasauth.org.apache.juli.FileHandler.level = ALL
5jaasauth.org.apache.juli.FileHandler.directory = ${catalina.base}/logs
5jaasauth.org.apache.juli.FileHandler.prefix = jaas-auth.
############################################################
# Facility specific properties.
# Provides extra control for each logger.
############################################################
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].handlers = 2localhost.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/manager].handlers = 3manager.org.apache.juli.FileHandler
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].level = INFO
org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/host-manager].handlers = 4host-manager.org.apache.juli.FileHandler
com.mymodule.apps.orchestration.CustomLdapLoginModule.level = ALL
com.mymodule.apps.CustomLdapLoginModule.handlers = 5jaasauth.org.apache.juli.FileHandler
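One detail worth checking in the file above (an observation, not a verified fix): the facility section sets the level on com.mymodule.apps.orchestration.CustomLdapLoginModule but attaches the handler to com.mymodule.apps.CustomLdapLoginModule. java.util.logging treats these as two different loggers, so the handler never sees the module's records. Both properties presumably need the same fully qualified class name of the login module, e.g.:
com.mymodule.apps.orchestration.CustomLdapLoginModule.level = ALL
com.mymodule.apps.orchestration.CustomLdapLoginModule.handlers = 5jaasauth.org.apache.juli.FileHandler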
