Has anyone used Superset as their customer analytics solution?

Interested to know if anyone has successfully implemented Superset as their customer analytics platform?
We are currently evaluating it; however, we are struggling with restricting access to subsets of a dataset/source, or with limiting access to specific rows of data.
For example, companyB should only see data relevant to companyB. This is, of course, a mandatory requirement.
There are a lot of similar questions out there, so I would be keen to know if someone has successfully accomplished this.

We are in the process of customizing Superset to do something similar (without forking), though it doesn't involve row-level granularity. I was able to accomplish view-level access with a combination of FAB permissions and a custom security manager. Basically:
User logs into Superset through OAuth2 with our API.
Security manager makes request to our API for a list of things the user can access.
Superset builds datasources only for those items.
For this to work, we created a custom role that forbids users from doing things like creating new databases/tables, and we add each user to that role once they log in. At this point, you may want to write a connector for your own purposes so users can refresh datasources when they want or need to. Superset exposes a config setting for custom connectors, so you don't have to modify the source to load it. The Druid connector is a good example of this.
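For anyone curious, here is a rough sketch of the security-manager piece. CUSTOM_SECURITY_MANAGER is a real Superset config hook, but the internal API endpoint, its response shape, and the exact OAuth client calls below are assumptions that depend on your provider and library versions:

```python
# Rough sketch only. The internal API URL, its JSON shape, and the
# "userinfo" endpoint are assumptions for illustration.
import requests
from superset.security import SupersetSecurityManager


class ApiDrivenSecurityManager(SupersetSecurityManager):

    def oauth_user_info(self, provider, response=None):
        # Ask the OAuth provider who just logged in (endpoint name assumed;
        # the exact call depends on your FAB/authlib versions).
        me = self.appbuilder.sm.oauth_remotes[provider].get("userinfo").json()
        return {"username": me["sub"], "email": me.get("email", "")}

    def datasources_for(self, username):
        # Ask our own API which datasources this user may see (hypothetical
        # endpoint); the sync step that builds Superset datasources from this
        # list is the custom connector work described above.
        resp = requests.get(
            f"https://internal-api.example.com/users/{username}/datasources",
            timeout=5,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. ["sales_companyB", "usage_companyB"]


# In superset_config.py:
# CUSTOM_SECURITY_MANAGER = ApiDrivenSecurityManager
```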
So, to answer your question: sort of. Table-level/view-level access control is definitely doable, though it will be a bit of effort. Row-level access control? Probably not, unless your database engine of choice supports row-level access control.

Related

What is the best way to implement permission levels in the MEAN stack?

I've been following the examples in the book "MEAN Machine", and I've implemented a simple token-based authentication system that makes the contents of a certain model only available to authenticated users.
I'd like to take this to a more complex level: I need three different user types.
I am building an app where some users (let's say, vendors) can upload certain data that should only be accessible to certain authenticated users (let's say, consumers), but vendors also need to be able to see, but not edit, data uploaded by other vendors. Then there would be a third type of user, the admin, who would be able to see and edit everything, including the details of other, lower-level users.
How should I proceed in constructing this?
Thanks in advance for your help.
As you mentioned, the authentication system is already working, so what you need now is to implement an Access Control List (ACL). The final ACL implementation depends a lot on your database model and requirements. There are also Node modules that support more advanced models, such as the acl module (https://www.npmjs.com/package/acl), which also supports MongoDB.
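One language-agnostic way to structure the three user types is a role/permission matrix that your API middleware checks before any handler runs. A minimal sketch follows (written in Python purely for illustration; the role and resource names are assumptions, and the same structure maps onto an Express middleware or the acl module mentioned above):

```python
# Minimal, language-agnostic sketch of a role/permission matrix.
# Role names and resource names are assumptions for illustration.
PERMISSIONS = {
    "consumer": {"vendor_data": {"read"}},
    "vendor":   {"vendor_data": {"read"}, "own_data": {"read", "write"}},
    "admin":    {"vendor_data": {"read", "write"},
                 "own_data": {"read", "write"},
                 "users": {"read", "write"}},
}


def is_allowed(role: str, resource: str, action: str) -> bool:
    """Return True if the given role may perform the action on the resource."""
    return action in PERMISSIONS.get(role, {}).get(resource, set())


# Example: a vendor may read other vendors' data but not edit it.
assert is_allowed("vendor", "vendor_data", "read")
assert not is_allowed("vendor", "vendor_data", "write")
```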

Do I have to trust the client when using DreamFactory for my backend?

I'm just looking into DreamFactory, and it seems to me that all permission logic is to be done by the client.
I want to develop an app with documents, and not all users are allowed to edit all documents. If the logged-in user has permissions on the database, how can I prevent that user from making false API calls that delete or modify someone else's documents?
While your observation is correct that the app must control access, we do offer a couple of features that will solve your problem.
First off, the roles system is very flexible. You could create different roles (reader, writer, admin, etc.) and assign them to the corresponding users. This can be done today.
Secondly, we will be releasing an update in the next few weeks that has several new features, one of which will also solve your dilemma. I'm not 100% sure what it is going to be named, but it will allow you to have the system automatically inject runtime data (i.e. the user ID) into REST service calls to the DSP. Very flexible and powerful.
More information and docs will be available with the release, so hold on a tad and we'll get you there!
The upcoming release at the end of the month lets you do a server-side filter, or use server-side scripting to lock a user to modifying only their own records, or those you see fit.

OpenAM with database

I have a requirement that OpenAM should access users and groups from a MySQL database. In the OpenAM GUI, under New Data Store -> Database Repository (Early Access), I can see some configurations related to this. But I am not sure how to map fields from two or three MySQL tables (users and groups) to the corresponding OpenAM attributes. Also, what are the mandatory or optional fields for keeping user and group information? Could somebody please point me to good documentation on this?
I also have a couple of basic queries.
Is it possible to keep policy information in a database?
Is it possible to create users and groups and assign policy information from a separately deployed web application (through JSP/servlets)? Do the OpenSSO APIs allow this?
Thanks.
The Database data store is very primitive (hence it's Early Access), and it is very likely that it won't fit your very specific needs. In that case you can always implement your own Data Store implementation (which can be tricky, I know..), here's a guide for it:
http://docs.forgerock.org/en/openam/10.0.0/dev-guide/index.html#chap-identity-repo-spi
Per your questions:
1) The policies are stored in the configuration store, and I'm not aware of any way to store them in a database.
2) Yes, there are some APIs which let you make changes to the identities stored in the data store, but OpenSSO/OpenAM is not an identity management tool, so it might not be a perfect fit for this.
This is pretty simple but sparse documentation makes it a bit tough to do. Check out: http://rahul-ghose.blogspot.com/2014/05/openam-database-connectivity-with-mysql.html
Basically, you need to create a MySQL database and two tables for this. Include all the attributes you see in the User Configuration dropdown in the MySQL user table; it is okay to remove a few of them.
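Purely for illustration, a minimal sketch of what those two tables might look like, created via mysql-connector-python. Every table and column name here is an assumption; the real column set has to mirror the attribute names you configure in the OpenAM data store:

```python
# Illustrative only: two MySQL tables for an OpenAM database data store.
# Table/column names are assumptions and must match your data store config.
import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="openam", password="secret", database="openam_users"
)
cur = conn.cursor()

cur.execute("""
    CREATE TABLE IF NOT EXISTS users (
        uid            VARCHAR(64) PRIMARY KEY,   -- login name
        userpassword   VARCHAR(128) NOT NULL,
        givenname      VARCHAR(64),
        sn             VARCHAR(64),               -- surname
        mail           VARCHAR(128),
        inetuserstatus VARCHAR(16) DEFAULT 'Active'
    )
""")
cur.execute("""
    CREATE TABLE IF NOT EXISTS groups (
        groupname VARCHAR(64),
        uid       VARCHAR(64),                    -- member of the group
        PRIMARY KEY (groupname, uid)
    )
""")
conn.commit()
cur.close()
conn.close()
```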

Is there a way to prevent users from doing bulk entries in a PostgreSQL database

I have 4 new data entry users who are using a particular GUI to create/update/delete entries in our main database. The "GUI" client allows them to see database records on a map and make modifications there, which is fine and the preferred way of doing it.
But lately a lot of them have been accessing the local database directly using pgAdmin and running bulk queries (update, insert, delete, etc.), which introduces a lot of problems, like people updating many records without realizing it or making mistakes while setting values. It also affects our logging procedures, as we calculate averages and timestamps for reporting purposes, which are quite crucial to us.
So is there a way to prevent users from using pgAdmin (please remember that a lot of these people are working from home and we do not have access to their machines) and running SQL queries directly against the database?
We still have to give them access to certain tables and allow them to execute SQL as long as it comes through a certain client, but deny access to the same user when he/she tries to execute a query directly against the database.
The only sane way to control access to your database is to convert your DB access to a 3-tier structure. You should build a middleware layer (maybe a REST API or something similar) and use that API from your app. The database should be hidden behind this middleware, so no direct access is possible. From the DB's point of view, there is no way to tell whether a database connection comes from your app or from some other tool (pgAdmin, plain psql, or a custom-built client). Your database should be accessible only from trusted hosts, and clients should not have access to those hosts.
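A minimal sketch of what such a middleware could look like, assuming a small Flask API in front of PostgreSQL that exposes only whitelisted, single-record updates (the endpoint, table, and column names are assumptions):

```python
# Minimal middleware sketch (Flask + psycopg2). Clients never get raw SQL
# access; only whitelisted columns of a single record can be changed.
# Endpoint, table, and column names are assumptions for illustration.
from flask import Flask, request, jsonify, abort
import psycopg2

app = Flask(__name__)


def get_conn():
    # Connection details are placeholders; the database should only accept
    # connections from the host running this API.
    return psycopg2.connect("dbname=mapdb user=api_user password=secret host=db-host")


@app.route("/records/<int:record_id>", methods=["PATCH"])
def update_record(record_id):
    payload = request.get_json(force=True)
    # Allow only a fixed set of columns to be changed, one record at a time.
    allowed = {"status", "notes"}
    updates = {k: v for k, v in payload.items() if k in allowed}
    if not updates:
        abort(400, "No updatable fields supplied")
    sets = ", ".join(f"{col} = %s" for col in updates)  # columns are whitelisted
    with get_conn() as conn, conn.cursor() as cur:
        cur.execute(
            f"UPDATE map_records SET {sets} WHERE id = %s",
            [*updates.values(), record_id],
        )
    return jsonify({"updated": record_id})


if __name__ == "__main__":
    app.run()
```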
This is only possible if you use a trick (which might get exploited, too, but maybe your users are not smart enough).
In your client app, set some harmless parameter like geqo_pool_size=1001 (if it is 1000 normally).
Now write a trigger that checks whether this parameter is set the way your app sets it, and raises "No access through pgAdmin" if it is not (and the username is not your admin username).
Alternative: create a temporary table and check for its existence.
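A hedged sketch of that trigger trick, set up through psycopg2. The table name, the marker value, and the admin username are assumptions, and this is an obstacle rather than real security:

```python
# Sketch of the "session marker" trick: the GUI client runs
# "SET geqo_pool_size = 1001" right after connecting, and the trigger rejects
# writes from sessions that did not. Requires PostgreSQL 11+ for
# EXECUTE FUNCTION (use EXECUTE PROCEDURE on older versions).
import psycopg2

DDL = """
CREATE OR REPLACE FUNCTION enforce_client_marker() RETURNS trigger AS $$
BEGIN
    IF current_setting('geqo_pool_size') <> '1001'
       AND current_user <> 'admin_user' THEN
        RAISE EXCEPTION 'No access through pgAdmin';
    END IF;
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

DROP TRIGGER IF EXISTS check_client_marker ON map_records;
CREATE TRIGGER check_client_marker
    BEFORE INSERT OR UPDATE OR DELETE ON map_records
    FOR EACH ROW EXECUTE FUNCTION enforce_client_marker();
"""

with psycopg2.connect("dbname=mapdb user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(DDL)
```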
I believe you should block direct access to the database and set up an application to which your clients (human and software ones) will be able to connect.
Let this application filter and pass through only allowed commands.
Great care should be taken with the filtering; I would think carefully about whether raw SQL should be allowed at all. Personally, I would design some simplified API, which would reassure me that a hypothetical client-attacker (in God we trust, all others we monitor) would not find a way to sneak in some dangerous modification.
I suppose that, from a security standpoint, your current approach is very unsafe.
You should study advanced pg_hba.conf settings.
This file is the key point for client authentication. The basic settings only cover simple authentication methods like passwords and lists of IP addresses, but more advanced options are available:
GSSAPI
Kerberos
SSPI
RADIUS server
any PAM method
So your official client can use a more advanced method, for example something involving a third-tier API or a more complex authentication mechanism. Then, without using the application, it at least becomes difficult to perform these tasks directly, for example if the Kerberos key is embedded in your client.
What you want to do is REVOKE your users' write access, then create a new role with write access, and then, as this role, CREATE FUNCTION defined as SECURITY DEFINER, which updates the table in a way you allow, with integrity checks; then GRANT EXECUTE on this function to your users.
There is an answer on this topic on ServerFault which references a blog entry with a detailed description.
I believe that using middleware, as other answers suggest, is unnecessary overkill in your situation. The above solution does not require the users to change the way they access the database; it just restricts them to modifying the data only through the predefined server-side methods.
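A hedged sketch of that approach, run once as a superuser via psycopg2; the table, role, and column names are assumptions:

```python
# Sketch of the REVOKE / SECURITY DEFINER pattern described above.
# Table, role, and column names are assumptions for illustration.
import psycopg2

SQL = """
-- Data-entry users lose direct write access to the table.
REVOKE INSERT, UPDATE, DELETE ON map_records FROM data_entry_role;

-- A dedicated role that owns the controlled write path.
CREATE ROLE writer_role NOLOGIN;
GRANT INSERT, UPDATE, DELETE ON map_records TO writer_role;

-- The function runs with its owner's rights (SECURITY DEFINER) and applies
-- whatever integrity checks you need before touching the table.
CREATE OR REPLACE FUNCTION update_record_status(p_id integer, p_status text)
RETURNS void AS $$
BEGIN
    IF p_status NOT IN ('open', 'closed', 'in_review') THEN
        RAISE EXCEPTION 'Invalid status: %', p_status;
    END IF;
    UPDATE map_records SET status = p_status WHERE id = p_id;
END;
$$ LANGUAGE plpgsql SECURITY DEFINER;

ALTER FUNCTION update_record_status(integer, text) OWNER TO writer_role;
GRANT EXECUTE ON FUNCTION update_record_status(integer, text) TO data_entry_role;
"""

with psycopg2.connect("dbname=mapdb user=postgres") as conn:
    with conn.cursor() as cur:
        cur.execute(SQL)
```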

Is there an easy way to make Solr reference different indexes based on a set of credentials submitted with the request?

I'd like to have a single instance of Solr, protected by some sort of authentication, that operates against different indexes based on the credentials used for that authentication. The type of authentication is flexible, although I'd prefer to work with open standards (existing or emerging), if possible.
The core problem I'm attempting to solve is that different users of the application (potentially) have access to different data stored in it, and a user should not be able to search over inaccessible data. Building an index for each user seems the easiest way to guarantee that one user doesn't see forbidden data. Is there, perhaps, an easier way? One that would obviate the need for Solr to have a way to map users to indexes?
Thanks.
The Solr guys have a pretty exhaustive overview of what is possible; see http://wiki.apache.org/solr/MultipleIndexes
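If you do end up with one index (or core) per user, the mapping can live entirely in your application. A minimal sketch, assuming a hypothetical user-to-core lookup and the standard /select handler (the core names and Solr URL are assumptions):

```python
# Sketch: the application authenticates the user, looks up their core, and
# only ever queries that core. Core names, the lookup table, and the Solr
# URL are assumptions for illustration.
import requests

SOLR_BASE = "http://localhost:8983/solr"

# In practice this mapping would come from your user store.
USER_CORES = {
    "alice@companyA.com": "docs_companyA",
    "bob@companyB.com": "docs_companyB",
}


def search_for_user(username: str, query: str) -> dict:
    core = USER_CORES.get(username)
    if core is None:
        raise PermissionError(f"No index configured for {username}")
    resp = requests.get(
        f"{SOLR_BASE}/{core}/select",
        params={"q": query, "wt": "json"},
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json()


# Example: bob only ever searches companyB's core.
# results = search_for_user("bob@companyB.com", "invoice")
```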
