Apache Zeppelin: how to have interpreter configuration saved per user - apache-zeppelin

I am using Zeppelin, and my exploration is mostly focused on the JDBC interpreter.
We want to provide a web interface for accessing the DB.
The intent is that each user logs in to Zeppelin and creates their own credentials, which should be passed to the JDBC interpreter.
So the interpreter should be a shared one, but the DB connection should be based on each individual's credentials.
Is this possible, considering my user authentication is a jdbc-realm?
Referring document: https://zeppelin.apache.org/docs/0.9.0/setup/security/datasource_authorization.html
My shiro.ini:
[main]
dataSource = org.postgresql.ds.PGPoolingDataSource
dataSource.serverName = localhost
dataSource.databaseName = test
dataSource.user = user_a
dataSource.password = pass_a
ps = org.apache.shiro.authc.credential.DefaultPasswordService
pm = org.apache.shiro.authc.credential.PasswordMatcher
pm.passwordService = $ps
jdbcRealm = org.apache.shiro.realm.jdbc.JdbcRealm
jdbcRealmCredentialsMatcher = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
jdbcRealm.dataSource = $dataSource
jdbcRealm.authenticationQuery = select password from zeppelin.zeppelin_users where username = ?
jdbcRealm.userRolesQuery = select role_name from zeppelin.zeppelin_user_roles where username = ?
jdbcRealm.credentialsMatcher = $pm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### If caching of user is required then uncomment below lines
#cacheManager = org.apache.shiro.cache.MemoryConstrainedCacheManager
#securityManager.cacheManager = $cacheManager
### Enables 'HttpOnly' flag in Zeppelin cookies
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = JSESSIONID
cookie.httpOnly = true
### Uncomment the below line only when Zeppelin is running over HTTPS
#cookie.secure = true
sessionManager.sessionIdCookie = $cookie
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
role1 = *
role2 = *
role3 = *
admin = *
[urls]
/api/version = anon
/api/cluster/address = anon
# Allow all authenticated users to restart interpreters on a notebook page.
# Comment out the following line if you would like to authorize only admin users to restart interpreters.
/api/interpreter/setting/restart/** = authc
/api/interpreter/** = authc, roles[admin]
/api/notebook-repositories/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/admin/** = authc, roles[admin]
#/** = anon
/** = authc
I have created the credentials, and have also removed the default username and password from the interpreter configuration.
Exception: org.postgresql.util.PSQLException: The server requested password-based authentication, but no password was provided.
Version: 0.9.0-preview2
UPDATE: The same thing works in 0.8.2, so it seems to be an issue with the 0.9.0 build.

As per ZEPPELIN-5184 and PR-4008, in 0.9.0 we need to define just the interpreter name in the credentials. Check ZEPPELIN-5189 for more details.
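Based on that, the credential entry on the Credentials page would look roughly like the following (a sketch, assuming the interpreter setting is named jdbc; username and password are placeholders):
<code>
# 0.9.0: entity is just the interpreter name, not a prefixed key
Entity:   jdbc
Username: my_db_user
Password: my_db_pass
</code>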

Related

Zeppelin: How to use groups in Active Directory (AD) to manage notebook permissions?

Hi, I am trying to use Active Directory (AD) groups to manage notebook permissions. But I found that I can only grant notebook permissions to individual users, and am not able to give permissions to a group of users via AD groups.
For example, there are users A, B and C, and an AD group named "Team" which contains them. When I grant write permission to A, B or C separately, everything works as expected. But when the group "Team" is used, none of the permission settings work.
So I am wondering whether I can use the original AD groups to manage notebook permissions.
Here is my shiro.ini file content:
[users]
# Sample LDAP configuration, for user Authentication, currently tested for single Realm
[main]
sha256Matcher = org.apache.shiro.authc.credential.Sha256CredentialsMatcher
sha256Matcher.storedCredentialsHexEncoded = false
iniRealm.credentialsMatcher = $sha256Matcher
### A sample for configuring LDAP Directory Realm
ldapRealm = org.apache.zeppelin.realm.LdapRealm
ldapRealm.contextFactory.url = ldap://ldap.company.com
ldapRealm.userDnTemplate = {0}@company.com
ldapRealm.searchBase= DC=corp,DC=company,DC=com
ldapRealm.userSearchBase = DC=corp,DC=company,DC=com
ldapRealm.groupSearchBase = OU=Groups,DC=corp,DC=company,DC=com
ldapRealm.groupObjectClass=group
ldapRealm.userSearchAttributeName = userPrincipalName
ldapRealm.memberAttribute=member
ldapRealm.groupIdAttribute=cn
ldapRealm.contextFactory.systemUsername = CN=ADAccess,OU=UserBase,DC=corp,DC=company,DC=com
ldapRealm.contextFactory.systemPassword = pssswordAD
ldapRealm.contextFactory.authenticationMechanism = simple
ldapRealm.authorizationEnabled=true
securityManager.realms = $ldapRealm
sessionManager = org.apache.shiro.web.session.mgt.DefaultWebSessionManager
### Enables 'HttpOnly' flag in Zeppelin cookies
cookie = org.apache.shiro.web.servlet.SimpleCookie
cookie.name = JSESSIONID
cookie.httpOnly = true
### Uncomment the below line only when Zeppelin is running over HTTPS
#cookie.secure = true
sessionManager.sessionIdCookie = $cookie
securityManager.sessionManager = $sessionManager
# 86,400,000 milliseconds = 24 hours
securityManager.sessionManager.globalSessionTimeout = 86400000
shiro.loginUrl = /api/login
[roles]
admin = *
[urls]
/api/version = anon
/api/cluster/address = anon
# Allow all authenticated users to restart interpreters on a notebook page.
# Comment out the following line if you would like to authorize only admin users to restart interpreters.
/api/interpreter/setting/restart/** = authc
/api/interpreter/** = authc, roles[admin]
/api/notebook-repositories/** = authc, roles[admin]
/api/configurations/** = authc, roles[admin]
/api/credential/** = authc, roles[admin]
/api/admin/** = authc, roles[admin]
#/** = anon
/** = authc
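One option worth trying (a sketch, not verified against this exact AD layout): org.apache.zeppelin.realm.LdapRealm documents a rolesByGroup property that maps LDAP group names to Shiro roles, and the mapped role name can then be used when granting notebook permissions. Assuming the group's cn really is Team, and team_role is an arbitrary role name:
<code>
# map the AD group "Team" to a Shiro role; grant notebook permissions
# to "team_role" instead of the raw AD group name
ldapRealm.rolesByGroup = Team: team_role
# the realm also documents this flag for nested AD group membership:
# ldapRealm.groupSearchEnableMatchingRuleInChain = true

[roles]
team_role = *
</code>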

E-mail alert notification in Grafana

Hello friends, I have a problem configuring the email alert notification in Grafana: I always get
Failed to send alert notifications
Saving the notification channel works, but clicking "Send Test" does not deliver the mail properly. Here is my custom.ini config file below; please help me solve this error, friends.
[smtp]
enabled = true
host = smtp.office365.com:587
user = sample12@domain.com
# If the password contains # or ; you have to wrap it with triple quotes. Ex """#password;"""
password = xxxxxxxxx
;cert_file =
;key_file =
skip_verify = true
from_address = sample12@domain.com
from_name = Grafana
# EHLO identity in SMTP dialog (defaults to instance_name)
;ehlo_identity = dashboard.example.com
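If the error persists, it may help to first confirm that the server can reach Office 365 over StartTLS at all; a quick check (assuming the openssl CLI is installed) could be:
<code>
# verify StartTLS SMTP connectivity to the configured host/port
openssl s_client -starttls smtp -connect smtp.office365.com:587
</code>
If the handshake succeeds, the next things to check are the account credentials and whether from_address matches the authenticated mailbox.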

Username ldapsearch on Active Directory where the AD domain is different from the email domain

I am using LDAP on Debian 11 to authenticate users for Postfix against MS Active Directory with the domain mandala.com. The request uses the user's email edmond@example.com to search with this script:
<code>
server_host = 192.168.2.3
search_base = dc=example,dc=com
version = 3
bind = yes
start_tls = no
bind_dn = vmail
bind_pw = mypass
scope = sub
query_filter = (&(objectclass=person)(userPrincipalName=%s))
result_attribute= userPrincipalName
result_format = %d/%u/Maildir/
debuglevel = 1
</code>
The problem is that Postfix uses the user's email edmond@example.com, while in AD the user is edmond@mandala.com, hence the recipient cannot be found.
If I run an ldapsearch on the command line using dc=mandala,dc=com, the user is found.
I solved it by passing the mail attribute:
<code>
query_filter = (&(objectclass=person)(mail=%s))
result_attribute= mail
</code>
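For reference, the working query can be reproduced from the command line with something like this (bind credentials and base are hypothetical; -x requests a simple bind, -b sets the search base):
<code>
ldapsearch -x -H ldap://192.168.2.3 -D 'vmail@mandala.com' -w mypass \
  -b 'dc=mandala,dc=com' '(&(objectclass=person)(mail=edmond@example.com))' mail
</code>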

Not able to run sssd + AD domain controllers with different FQDNs

I have the below sssd + AD setup for SSH management.
AD Domain - ad.example.net
AD DC 1 hostname - dc1.example.net
AD DC 2 hostname - dc2.example.net
Linux (CentOS) server hostname - server.int.example.com -> this I cannot change, per org policy
I don't want to add the AD DNS into my /etc/resolv.conf; we want to use the cloud-provided DNS resolver,
which resolves our domain controllers' hostnames as *.example.net.
When I add them as
ad_server = dc1.example.net,dc2.example.net
sssd fails with -
[dp_req_reply_gen_error] (0x0080): DP Request [Initgroups #1066]: Finished. Backend is currently offline.
Here are my sssd.conf and krb5.conf:
sssd.conf -
[sssd]
domains = ad.example.net
reconnection_retries = 3
config_file_version = 2
services = nss, pam, ssh
override_space = _
sbus_timeout = 30
[nss]
reconnection_retries = 3
entry_negative_timeout = 30
entry_cache_nowait_percentage = 7
debug_level = 9
[pam]
reconnection_retries = 3
[domain/default]
cache_credentials = True
entry_cache_timeout = 3600
[domain/ad.example.net]
id_provider = ad
access_provider = ad
ldap_id_mapping = True
auto_private_groups = True
default_shell = /bin/bash
fallback_homedir = /home/%u
use_fully_qualified_names = False
krb5_store_password_if_offline = True
realmd_tags = manages-system joined-with-adcli
ad_domain = ad.example.net
ad_server = dc1.example.net,dc2.example.net
ad_hostname = dev1210utl1.ad.example.net
krb5_realm = AD.EXAMPLE.NET
ldap_user_ssh_public_key = altSecurityIdentities
ldap_user_extra_attrs = altSecurityIdentities:altSecurityIdentities
debug_level = 9
dns_resolver_timeout = 20
krb5_lifetime = 24h
krb5_renewable_lifetime = 7d
krb5_renew_interval = 60s
dyndns_update = false
krb5.conf -
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
default_realm = AD.EXAMPLE.NET
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
rdns = false
pkinit_anchors = FILE:/etc/pki/tls/certs/ca-bundle.crt
spake_preauth_groups = edwards25519
default_ccache_name = KEYRING:persistent:%{uid}
ignore_acceptor_hostname = true
[realms]
AD.EXAMPLE.NET = {
kdc = dc1.example.net
admin_server = dc1.example.net
kdc = dc2.example.net
admin_server = dc2.example.net
}
[domain_realm]
.ad.example.net = AD.EXAMPLE.NET
ad.example.net = AD.EXAMPLE.NET
I know there are different DNS FQDNs in my setup, but I can't avoid them.
sssd works if I set
ad_server = dc1.ad.example.net,dc2.ad.example.net
but then I have to add the AD DNS as my resolver, or put the names in /etc/hosts, which I want to avoid.
Any help related to this would be appreciated.
If the DC FQDN really is dc1.ad.example.net, why on earth are you attempting to use an alias? The vendor has to fix this configuration, end of story.
You MUST use the correct DC names when you bind - if it thinks you're trying to reach a completely different host/realm, then of course it won't let you connect. You might as well use "dc1.mickeymouse.com" as the alias - it's equally invalid if the DC FQDN is really "dc1.ad.example.net".
Also, does your cloud provider's DNS resolve SRV records like _ldap._tcp.ad.example.net and _kerberos._udp.ad.example.net correctly? This is critical stuff.
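A quick way to check is to query the resolver directly (assuming dig is available; the record names follow the AD defaults), as in this sketch:
<code>
# check AD SRV records through the cloud-provided resolver
dig +short SRV _ldap._tcp.ad.example.net
dig +short SRV _kerberos._udp.ad.example.net
</code>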
It doesn't matter if UPNs available in the domain include "user@example.net" - right now you're dealing with basic LDAP and Kerberos network protocols. If you want to bind/auth with a host, you need to provide the correct name.
The vendor doesn't need to delete the existing records - although I have to wonder what they're intended for - but they do need to let the correct DC names resolve, whether that's by creating some records or by doing what should probably have been done in the first place: delegating "ad.example.net" to the DCs as authoritative for that subzone (since I assume that hasn't been done, given the names don't resolve).
It doesn't matter what your machine hostname is, as long as the DCs can resolve it in the other direction.
If this provider's DNS is public, there should also be some security about external hosts resolving the internal "ad.example.net" namespace (i.e. they shouldn't).
TLDR: if the domain DNS is "ad.example.net" and that subzone is used for nothing but the AD domain, then it should be delegated from the upstream "example.net" name servers to the domain controllers as an authoritative zone.
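As an illustration, the delegation on the upstream example.net zone would look roughly like this (a sketch; NS names and glue addresses are hypothetical):
<code>
; delegate the AD subzone to the domain controllers
ad.example.net.     IN NS dc1.ad.example.net.
ad.example.net.     IN NS dc2.ad.example.net.
; glue records for the delegated name servers
dc1.ad.example.net. IN A  192.0.2.10
dc2.ad.example.net. IN A  192.0.2.11
</code>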

Unable to make Flask-SQLAlchemy connect to my Google App Engine DB

I am trying to set up a web application based on Flask using Google App Engine (I'm new to both).
The web application receives data from the client, which should be processed and saved in a database.
I've tried to use Flask-SQLAlchemy but I'm unable to set it up with Google Cloud SQL. I've used this guide to create a MySQL DB in the same project,
and then I'm trying to use it in my main Python code:
# standard imports and app setup assumed by the rest of the snippet
from datetime import datetime
from flask import Flask, flash, redirect, render_template, request, session, url_for
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)

app.config('SQLALCHEMY_DATABASE_URI') = 'mysql+mysqldb://root@/Results?unix_socket=/cloudsql/crafty-circlet-164415:psy01'
app.config['SECRET_KEY'] = 'NglfxE8FOP9pgV8fxpyj'
db = SQLAlchemy(app)

class Result(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.Text)
    profession = db.Column(db.Text)
    year = db.Column(db.Text)
    pressure_level = db.Column(db.Integer)
    reported_suc_count = db.Column(db.Integer)
    marked_suc_count = db.Column(db.Integer)
    real_suc_count = db.Column(db.Integer)
    insertion_time = db.Column(db.DateTime)

    def __init__(self, name, profession, year, pressure_level, reported_suc_count, marked_suc_count, real_suc_count):
        self.name = name
        self.profession = profession
        self.year = year
        self.pressure_level = pressure_level
        self.reported_suc_count = reported_suc_count
        self.marked_suc_count = marked_suc_count
        self.real_suc_count = real_suc_count
        self.insertion_time = datetime.utcnow()

@app.route('/resultform', methods=['POST', 'GET'])
def resultform():
    if request.method == 'POST':
        if not request.form['successmatrices']:
            flash('please fill all the fields', 'error')
        else:
            if 'name' in request.form:
                name = request.form['name']
            else:
                name = None
            if 'profession' in request.form:
                profession = request.form['profession']
            else:
                profession = None
            if 'year' in request.form:
                year = request.form['year']
            else:
                year = None
            if 'pressure_level' in request.form:
                pressure_level = int(request.form['pressure_level'])
            else:
                pressure_level = None
            if 'successmatrices' in request.form:
                successmatrices = int(request.form['successmatrices'])
            else:
                successmatrices = 0
            new_result = Result(name=name, profession=profession, year=year, pressure_level=pressure_level,
                                reported_suc_count=successmatrices, marked_suc_count=len(session['marked']),
                                real_suc_count=len(session['correct']))
            db.session.add(new_result)
            db.session.commit()
            return redirect(url_for('showresults'))
    return render_template("resultform.html")

@app.route('/showresults')
def showresults():
    return render_template("showresults.html", results=Result.query.all())

if __name__ == '__main__':
    db.create_all()
    app.run(debug=True)
When I try to run it in my local development environment (PyCharm), I receive the following error in the background:
ERROR 2017-04-16 09:20:19,802 wsgi.py:263]
Traceback (most recent call last):
File "C:\Users\<>\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "C:\Users\<>\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "C:\Users\<>\AppData\Local\Google\Cloud SDK\google-cloud-sdk\platform\google_appengine\google\appengine\runtime\wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "C:\Users\<>\PycharmProjects\crafty-circlet-164415\main.py", line 7
app.config('SQLALCHEMY_DATABASE_URI') = 'mysql+mysqldb://root@/Results?unix_socket=/cloudsql/crafty-circlet-164415:psy01'
SyntaxError: can't assign to function call
And after deployment to GAE the following error appears:
Error: Server Error
The server encountered an error and could not complete your request.
Please try again in 30 seconds.
Any idea how to solve this?
app.config is a dictionary, so to add a config value you use [ ] instead of ( ), just as you already do for app.config['SECRET_KEY'].
So it should be:
app.config['SQLALCHEMY_DATABASE_URI'] = SQLALCHEMY_DATABASE_URI
Some other pointers for a successful connection: you'll need to format your connection details properly:
USER = 'root'
PASSWORD = 'your-cloudsql-password'
DATABASE = 'your-cloudsql-database-name'
# connection_name is of the format `project:region:your-cloudsql-instance`
CONNECTION_NAME = 'your-cloudsql-connection-name'

SQLALCHEMY_DATABASE_URI = (
    'mysql+pymysql://{user}:{password}@localhost/{database}'
    '?unix_socket=/cloudsql/{connection_name}').format(
        user=USER, password=PASSWORD,
        database=DATABASE, connection_name=CONNECTION_NAME)

app.config['SQLALCHEMY_DATABASE_URI'] = SQLALCHEMY_DATABASE_URI
I'd probably separate my secrets and other sensitive info into a config file that isn't checked into source control, or use environment variables, etc.
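For example (a minimal sketch; the SQLALCHEMY_DATABASE_URI environment variable name is an arbitrary choice):
<code>
import os

# read the connection string from the environment instead of hard-coding it
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['SQLALCHEMY_DATABASE_URI']
</code>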
If you want to test your application locally against your Cloud SQL instance, you'll need to install the Cloud SQL Proxy, add the connection name as an environment variable, and add the MySQLdb library to your app.yaml:
> cloud_sql_proxy -instances=your-connection-name=tcp:3306
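The app.yaml additions would look roughly like this (a sketch for the Python 2.7 standard environment; the env_variables key name is an arbitrary choice):
<code>
# app.yaml
env_variables:
  CLOUDSQL_CONNECTION_NAME: your-cloudsql-connection-name

libraries:
- name: MySQLdb
  version: "latest"
</code>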
Otherwise, you can use a local MySQL instance for testing and switch to Cloud SQL when running on App Engine.
More information on setting up Cloud SQL with App Engine can be found here.
