Error enabling AD authentication in GPFS filesystem

I've created a filesystem on a two-node Linux GPFS cluster (both nodes running RHEL 7). I am trying to enable AD authentication but am receiving an error and have been unable to find a fix. Here is the process I am following on the manager node:
./spectrumscale file auth ad
I answer Yes to edit the template.
I fill in the template with the following info:
[file_ad]
servers = bdtestdc01 <--- my test AD server
netbios_name = gpfscluser <--- the name I gave the cluster during setup. Is this field looking for another name?
idmap_role = master
bind_username = administrator
bind_password = the domain password of the administrator account
unixmap_domains = bdtest.subdomain.company.com
I save the template and set the password. I then run:
./spectrumscale deploy
It fails at the 'Installing Authentication' step. The log file says:
Error executing action run on resource 'execute[Configure file authentication]
2015-12-21 10:45:31,440 [ TRACE ] bdgpfs01.subdomain.company.com Chef Client failed. 1 resources updated in 3.641691552 seconds
2015-12-21 10:45:31,456 [ TRACE ] bdgpfs01.subdomain.company.com [2015-12-21T10:45:31-08:00] ERROR: execute[Configure file authentication] (auth::auth_file_configure line 22) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
2015-12-21 10:45:31,456 [ TRACE ] bdgpfs01.subdomain.company.com ---- Begin output of /usr/lpp/mmfs/bin/mmuserauth service create --data-access-method file --type ad --servers 'bdtestbluedc01' --netbios-name 'gpfscluster' --idmap-role 'master' --user-name 'administrator' --password XXXXXX --unixmap-domains 'bdtest.subdomain.company.com' --idmap-range '10000000-299999999' --idmap-range-size '1000000' --enable-nfs-kerberos ----
2015-12-21 10:45:31,456 [ TRACE ] bdgpfs01.subdomain.company.com STDOUT:
2015-12-21 10:45:31,457 [ TRACE ] bdgpfs01.subdomain.company.com STDERR: mmuserauth service create: Syntax error. The correct syntax is:
2015-12-21 10:45:31,457 [ TRACE ] bdgpfs01.subdomain.company.com --unixmap-domains domain(lower value-higher value)
2015-12-21 10:45:31,457 [ TRACE ] bdgpfs01.subdomain.company.com mmuserauth service create: Command failed. Examine previous error messages to determine cause.

It turned out the unixmap_domains field expects a domain name with no whitespace plus an ID map range, as the syntax message above shows.
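Based on that syntax message, the template value presumably needs the domain's short name with the range appended, something like this (the BDTEST short name and the range here are illustrative, not confirmed values):
unixmap_domains = BDTEST(10000000-299999999)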
Instead of using the configuration template, I ran the following command, which omits --unixmap-domains entirely and enabled authentication successfully:
mmuserauth service create --type ad --data-access-method file --netbios-name bdtestnode --user-name administrator --idmap-role master --servers myADserver --password Passwr0rd --idmap-range-size 1000000 --idmap-range 10000000-299999999
I then ran the following command to test:
id "testdomain\administrator"
It returned the proper groups and IDs.

Related

Rundeck YAML file formatting issue and issue logging into Rundeck with an AD user

I'm trying to allow domain users to log into my Rundeck instance by following the guide https://vinusumi.wordpress.com/2017/12/28/setup-active-directory-authentication-for-rundeck/. However, I'm running into two issues.
For some reason, I'm unable to log into Rundeck with a user that's added to the "rundeck_admins" group. I confirmed that the credentials are correct, and I believe the info I added to jaas-activedirectory.conf is syntactically correct and accurate for my AD settings. /var/log/rundeck/service.log says the following:
2018-12-13 20:13:29.689 DEBUG --- [tp1465511423-25] ailsUsernamePasswordAuthenticationFilter : Updated SecurityContextHolder to contain null Authentication
2018-12-13 20:13:29.689 DEBUG --- [tp1465511423-25] ailsUsernamePasswordAuthenticationFilter : Delegating to authentication failure handler grails.plugin.springsecurity.web.authentication.AjaxAwareAuthenticationFailureHandler#51aaa9d4
I'm also having trouble figuring out the proper syntax for the YAML file used for my "rundeck_users" group:
description: "Ops Engineers can launch jobs but not edit them"
context:
  project: *
for:
  resource:
    - equals:
        kind: 'node'
      allow: [read,update,refresh]
    - equals:
        kind: 'job'
      allow: [read,run,kill]
    - equals:
        kind: 'adhoc'
      allow: [read,run,kill]
    - equals:
        kind: 'event'
      allow: [read,create]
  job:
    - match:
        name: '.*'
      allow: [read,run,kill]
  adhoc:
    - match:
        name: '.*'
      allow: [read,run,kill]
  node:
    - match:
        nodename: '.*'
      allow: [read,run,refresh]
by:
  group:
    - rundeck_users
---
context:
  application: rundeck
description: "Ops Engineers can launch jobs but not edit them"
for:
  project:
    - match:
        name: '*'
      allow: [read]
  system:
    - match:
        name: '.*'
      allow: [read]
by:
  group:
    - rundeck_users
1. Make sure the JAAS authentication is being read when Rundeck is starting:
<..>
2018-12-14 01:52:57.186 INFO --- [ main] rundeckapp.BootStrap : RSS feeds disabled
2018-12-14 01:52:57.187 INFO --- [ main] rundeckapp.BootStrap : Using jaas authentication <<<<<<<<<
<..>
2. Verify that the YAML content is correct, for example using http://www.yamllint.com/.
3. Take an existing/working aclpolicy, substitute your group for testing purposes, and check whether the ACL policy is causing the issue.
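One concrete formatting problem in the policy above: in YAML, a bare * begins an alias, so project: * will not parse. Quote the glob:
context:
  project: '*'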
Hope it helps
AD authentication: these steps are relevant for Rundeck version 3 and up on CentOS 7.
For older Rundeck versions, see the procedure I posted here:
https://groups.google.com/forum/#!msg/rundeck-discuss/P2qQHNpDct4/aP0ot7V2BAAJ
Create an AD config file (e.g. jaas-AD.conf) with the following content:
AD {
    com.dtolabs.rundeck.jetty.jaas.JettyCachingLdapLoginModule required
    debug="true"
    contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
    providerUrl="ldap://<ip>:389 ldap://<ip>:389"
    bindDn="CN=authUser,CN=Users,DC=your,DC=domain,DC=com"
    bindPassword="<authUserPassword>"
    authenticationMethod="simple"
    forceBindingLogin="true"
    userBaseDn="CN=Users,DC=your,DC=domain,DC=com"
    userRdnAttribute="sAMAccountName"
    userIdAttribute="sAMAccountName"
    userPasswordAttribute="unicodePwd"
    userObjectClass="person"
    roleBaseDn="CN=Users,DC=your,DC=domain,DC=com"
    roleNameAttribute="sAMAccountName"
    roleMemberAttribute="member"
    roleObjectClass="group"
    cacheDurationMillis="300000"
    reportStatistics="true";
};
Create the file /etc/sysconfig/rundeckd with the following lines.
Note that the LOGIN_MODULE value must match the name you used at the top of your JAAS file (AD in the example above).
export JAAS_CONF=/path/to/file/jaas-AD.conf
export LOGIN_MODULE=AD
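After both files are in place, restart the service so the JVM picks up the JAAS settings (assuming the RPM install, where the service is named rundeckd):
systemctl restart rundeckd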
I recommend creating a full-control YAML policy at first so you can verify that AD authentication works, and then removing permissions as necessary; a sketch follows below. Note that the group names in the YAML must match the group names in your AD.
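A minimal full-control policy for that first test, modeled on the admin.aclpolicy that ships with Rundeck (the group name rundeck_admins is an assumption; substitute your actual AD group):
description: Full control for initial AD testing.
context:
  project: '.*' # all projects
for:
  resource:
    - allow: '*' # every resource kind
  adhoc:
    - allow: '*'
  job:
    - allow: '*'
  node:
    - allow: '*'
by:
  group: rundeck_admins
---
description: Full control at the application level.
context:
  application: 'rundeck'
for:
  resource:
    - allow: '*'
  project:
    - allow: '*'
  storage:
    - allow: '*'
by:
  group: rundeck_admins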

Can't start clickhouse service, too many files in ../data/default/<TableName>

I have a strange problem with my standalone clickhouse-server installation. The server had been running for some time with a nearly default config, except that the data and tmp directories were moved to a separate disk:
cat /etc/clickhouse-server/config.d/my_config.xml
<?xml version="1.0"?>
<yandex>
    <path>/data/clickhouse/</path>
    <tmp_path>/data/clickhouse/tmp/</tmp_path>
</yandex>
Today the server stopped responding (connection refused). After a reboot, the service couldn't finish starting:
2018.05.28 13:15:44.248373 [ 2 ] <Information> DatabaseOrdinary (default): 42.86%
2018.05.28 13:15:44.259860 [ 2 ] <Debug> default.event_4648 (Data): Loading data parts
2018.05.28 13:16:02.531851 [ 2 ] <Debug> default.event_4648 (Data): Loaded data parts (2168 items)
2018.05.28 13:16:02.532130 [ 2 ] <Information> DatabaseOrdinary (default): 57.14%
2018.05.28 13:16:02.534622 [ 2 ] <Debug> default.event_5156 (Data): Loading data parts
2018.05.28 13:34:01.731053 [ 3 ] <Information> Application: Received termination signal (Terminated)
In fact, I stopped the process at 57% because startup was taking too long (maybe it would have finished in an hour or two; I didn't wait).
The log level is "trace" by default, but it showed no reason for this behavior.
I think the problem is the number of files in /data/clickhouse/data/default/event_5156.
There are now 626023 directories in it, and ls -la does not work properly in that directory; I have to use find to count the files:
# time find . -maxdepth 1 | wc -l
626023
real 5m0.302s
user 0m3.114s
sys 0m24.848s
I have two questions:
1) Why did clickhouse-server generate so many files and directories with an almost default config?
2) How can I start the service without data loss in a reasonable time?
The issue was in the data update method. I was using a script with the JDBC connector and sending one row per INSERT request; in ClickHouse each INSERT creates a new data part on disk, which is where all those directories came from. After switching the scheme to batch updates, the issue was solved.
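A minimal sketch of the change, assuming the ClickHouse JDBC driver and an illustrative events table (the table and column names are made up): addBatch/executeBatch buffers rows client-side and sends them as a single INSERT, so ClickHouse writes one data part per batch instead of one per row.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:clickhouse://localhost:8123/default");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO events (ts, msg) VALUES (?, ?)")) {
            for (int i = 0; i < 10000; i++) {
                ps.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
                ps.setString(2, "event-" + i);
                ps.addBatch();     // buffer the row locally
            }
            ps.executeBatch();     // one request, one data part
        }
    }
}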

Cognos BI and Active Directory Global Catalog

I've got an AD forest with 2 domains; users and groups are separated into different domains (users in the first, groups and other resources in the second). If you want to use AD this way, you have to connect via port 3268 (the global catalog). But I ran into a problem: Cognos BI does not work in this configuration. Is there a way to make these products work together?
[ ERROR ] CAM-AAA-0146 The namespace 'nsname' is not available.
[ ERROR ] CAM-AAA-0064 The function 'Configure' failed.
[ ERROR ] CAM-AAA-0064 The function 'ActiveDirectoryAuthConfigure' failed.
[ ERROR ] CCL_ASSERT(supportedDomain);
[ ERROR ] src/ActiveDirectoryProviderHelpers.cpp(2552): CCLAssertError: CCL_RETHROW:
[ ERROR ] src/ActiveDirectoryProviderHelpers.cpp(4099): CCLAssertError: CCL_THROW:
I found a solution without the global catalog: it works over the standard port 389 with the following settings (advanced properties on the AD namespace):
chaseReferrals: True
multiDomainTrees: True

ATG catalog export error in startSQLRepository

I want to export the catalog data from ATG production. I followed the steps below.
Create a FakeXADatasource.properties file in C:\ATG\ATG10.1.1\home\localconfig\atg\dynamo\service\jdbc. (There is a MySQL user named atguser with password atg123$.)
$class=atg.service.jdbc.FakeXADataSource
URL=jdbc:mysql://localhost:3306/prod_lo
user=atguser
password=atg123$
driver=com.mysql.jdbc.Driver
Change JTDataSource.properties as below:
$class=atg.service.jdbc.MonitoredDataSource
dataSource=/atg/dynamo/service/jdbc/FakeXADataSource
transactionManager=/atg/dynamo/transaction/TransactionManager
loggingSQLInfo=false
min=10
maxFree=-1
loggingSQLError=false
blocking=true
loggingSQLWarning=false
max=10
loggingSQLDebug=false
then run the "
startSQLRepository.bat -m Store.Storefront -export all
catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog"
command.
But while processing, it gives the error below. Does anyone know the reason, or how to do a complete catalog export? (I have removed the last part of the error log because it exceeds the maximum length of 30000 characters.)
./startSQLRepository -m Store.Storefront -export all catalogExport.xml -repository /atg/commerce/catalog/ProductCatalog
Error:
Error /atg/dynamo/service/jdbc/JTDataSource an exception was
encountered while trying to populate the pool with the starting number
of resources: atg.service.resourcepool.ResourcePoolException:
java.sql.SQLException: Access denied for user 'root'@'localhost'
(using password: NO)
Error /atg/dynamo/service/jdbc/JTDataSource The connection pool failed to initialize propertly, i.e. the starting number of
connections could not be created; check your database accessibility
and JDBC driver configuration
Error /atg/dynamo/service/IdGenerator CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:CONTAINER:atg.service.idgen.IdGeneratorException;
SOURCE:java.sql.SQLException:
atg.service.resourcepool.ResourcePoolException: java.sql.SQLException:
Access denied for user 'root'@'localhost' (using password: NO)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.PersistentIdGenerator.initialize(PersistentIdGenerator.java:389)
Error /atg/dynamo/service/IdGenerator at atg.service.idgen.AbstractSequentialIdGenerator.doStartService(AbstractSequentialIdGenerator.java:643)
Try setting the min and max pool sizes to 1 and 5; a sketch of that change follows below.
Also make sure your DB is up and running and can be connected to.
-DC21
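For reference, that suggestion maps to these lines in JTDataSource.properties (reading "1 and 5" as min=1 and max=5, which is an interpretation):
min=1
max=5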
The configuration you are giving startSQLRepository is not being picked up at runtime, because the error still says 'using password: NO'; the second error is with your connection pool. My suggestion is to change only the FakeXADatasource.properties file, with the correct username and password. I tried with the same configuration and was able to export.

How to load data from the online GAE datastore into the local development server?

I have previously used the approach described in the GAE docs to download backups of my entities from the live datastore.
Currently I have a CSV file per entity kind, which I got by writing bulkloader.yaml and using this command:
appcfg.py download_data --config_file=bulkloader.yaml --filename=users.csv --kind=Permission --url=http://your_app_id.appspot.com/_ah/remote_api
I also have a sql3 dump file that I got using the command:
appcfg.py download_data --kind=<kind> --url=http://your_app_id.appspot.com/_ah/remote_api --filename=<data-filename>
Now if I try this command:
appcfg.py upload_data --url=http://your_app_id.appspot.com/_ah/remote_api --kind=<kind> --filename=<data-filename>
Replacing the URL with localhost:8080, it asks me for a username/password. Even if I provide a mock username (test@example.com) at http://localhost:8080/_ah/remote_api and check the "admin" checkbox, it always gives me an authentication error.
The other alternative mentioned in the docs is using this:
appcfg.py upload_data --config_file=album_loader.py --filename=album_data.csv --kind=Album --url=http://localhost:8080/_ah/remote_api <app-directory>
I wrote a loader and tried it out; it also asks for a username and password, but it accepts anything here. The output is as follows:
/usr/local/google_appengine/google/appengine/api/search/search.py:232: UserWarning: DocumentOperationResult._code is deprecated. Use OperationResult._code instead.
'Use OperationResult.%s instead.' % (name, name))
/usr/local/google_appengine/google/appengine/api/search/search.py:232: UserWarning: DocumentOperationResult._CODES is deprecated. Use OperationResult._CODES instead.
'Use OperationResult.%s instead.' % (name, name))
Application: knowledgetestgame
Uploading data records.
[INFO ] Logging to bulkloader-log-20121113.210613
[INFO ] Throttling transfers:
[INFO ] Bandwidth: 250000 bytes/second
[INFO ] HTTP connections: 8/second
[INFO ] Entities inserted/fetched/modified: 20/second
[INFO ] Batch Size: 10
[INFO ] Opening database: bulkloader-progress-20121113.210613.sql3
Please enter login credentials for localhost
Email: test@example.com
Password for test@example.com:
[INFO ] Connecting to localhost:8080/_ah/remote_api
[INFO ] Starting import; maximum 10 entities per post
[ERROR ] [WorkerThread-4] WorkerThread:
Traceback (most recent call last):
File "/usr/local/google_appengine/google/appengine/tools/adaptive_thread_pool.py", line 176, in WorkOnItems
status, instruction = item.PerformWork(self.__thread_pool)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 764, in PerformWork
transfer_time = self._TransferItem(thread_pool)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 933, in _TransferItem
self.content = self.request_manager.EncodeContent(self.rows)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 1394, in EncodeContent
entity = loader.create_entity(values, key_name=key, parent=parent)
File "/usr/local/google_appengine/google/appengine/tools/bulkloader.py", line 2728, in create_entity
(len(self.__properties), len(values)))
AssertionError: Expected 17 columns, found 18.
[INFO ] [WorkerThread-5] Backing off due to errors: 1.0 seconds
[INFO ] Unexpected thread death: WorkerThread-4
[INFO ] An error occurred. Shutting down...
[ERROR ] Error in WorkerThread-4: Expected 17 columns, found 18.
[INFO ] 980 entities total, 0 previously transferred
[INFO ] 0 entities (278 bytes) transferred in 5.9 seconds
[INFO ] Some entities not successfully transferred
I have ~4000 entities in total; it says here that 980 were transferred, but when I check the local datastore I find none of them.
Below is the loader I use (I used NDB for the Guess entity):
import datetime
from google.appengine.ext import db
from google.appengine.tools import bulkloader
from google.appengine.ext.ndb import key

class Guess(db.Model):
    pass

class GuessLoader(bulkloader.Loader):
    def __init__(self):
        bulkloader.Loader.__init__(self, 'Guess',
            [('selectedAssociation', lambda x: x.decode('utf-8')),
             ('suggestionsList', lambda x: x.decode('utf-8')),
             ('associationIndexInList', int),
             ('timeEntered',
              lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date()),
             ('rank', int),
             ('topicName', lambda x: x.decode('utf-8')),
             ('topic', int),
             ('player', int),
             ('game', int),
             ('guessString', lambda x: x.decode('utf-8')),
             ('guessTime',
              lambda x: datetime.datetime.strptime(x, '%m/%d/%Y').date()),
             ('accountType', lambda x: x.decode('utf-8')),
             ('nthGuess', int),
             ('score', float),
             ('cutByRoundEnd', bool),
             ('suggestionsListDelay', int),
             ('occurrences', float)
            ])

loaders = [GuessLoader]
Edit: I just noticed this part of the error message: [ERROR] Error in WorkerThread-0: Expected 17 columns, found 18. But I just went through the whole CSV file and made sure that every line has 18 columns. I checked the loader and found that I was missing the key column; I gave it type int, but this doesn't work.
If you have problems with the authentication, put the following in your appengine_config.py:
import os

if os.environ.get('SERVER_SOFTWARE', '').startswith('Development'):
    remoteapi_CUSTOM_ENVIRONMENT_AUTHENTICATION = (
        'REMOTE_ADDR', ['127.0.0.1'])
then run
appcfg.py download_data --url=http://APPNAME.appspot.com/_ah/remote_api --filename=dump --kind=EntityName
appcfg.py upload_data --url=http://localhost:8080/_ah/remote_api --filename=dump --application=dev~APPNAME
Try just pressing Enter (no username/password). This seemed to do the trick for me. My command (wrapped in a bash script to prevent import errors that I occasionally received) is:
#!/bin/bash
# Modify path
export PYTHONPATH=$PYTHONPATH:.
# Load data
python /path/to/app/config/appcfg.py upload_data \
--config_file=<my_loader.py> \
--filename=<output.csv> \
--kind=<kind> \
--application=dev~<application_id> \
--url=http://localhost:8088/_ah/remote_api ./
When prompted for the Email, I hit Enter and everything is uploaded to the dev server. I am not using NDB in this case, although I do not believe that should make a difference.
