I am new to OIDC/OAuth2 and am looking to set up IS4 as a single sign-on server. I have an idea for a design but am not sure whether it is a correct use of claims.
We have multiple apps used by different companies. Normally, a given identity would only have access to the resources across these apps for a single company; however, some identities might have access to those of multiple companies (for example, an accountant who did the books for multiple clients).
I was thinking of providing these company IDs as claims in the JWT. Would this be appropriate, or is there a more commonly accepted way to achieve this?
I'd personally steer clear of this and keep the authorisation side of things inside the client app or API. I.e. company 1's and company 2's databases say that user xyz can access them, and the apps do their checks locally.
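A minimal sketch of what that local check could look like, assuming a Flask-style API that has already validated the user's token and keeps its own user-to-company mapping (all names here are hypothetical):

```python
# Sketch: per-company authorization enforced inside the API itself.
# Assumes token validation happened earlier; all names are hypothetical.
from functools import wraps
from flask import Flask, abort, g

app = Flask(__name__)

@app.before_request
def resolve_user():
    # Stand-in for real token-validation middleware.
    g.user_id = "user-xyz"

def get_user_companies(user_id):
    # Hypothetical lookup in this app's own database,
    # e.g. SELECT company_id FROM user_companies WHERE user_id = %s
    return {"company-1", "company-2"}

def require_company_access(view):
    @wraps(view)
    def wrapper(company_id, *args, **kwargs):
        if company_id not in get_user_companies(g.user_id):
            abort(403)  # identity is fine, but this company is off-limits
        return view(company_id, *args, **kwargs)
    return wrapper

@app.route("/companies/<company_id>/invoices")
@require_company_access
def list_invoices(company_id):
    return {"company": company_id, "invoices": []}
```

This also keeps the token small and lets a user's company access change without re-issuing tokens.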
I have spent three days researching this problem and cannot find a solution or a similar use case showing how to solve it, so any pointers would be greatly appreciated.
I am creating a web app that uses Google Cloud Storage and BigQuery. A user registers on the web app and can then upload data to Cloud Storage and BigQuery. Two users could be from the same company and should therefore be able to view the same data - i.e. Jack and Jill work for company A, and if Jack uploads a massive dataset via this app, Jill should also be able to view it later.
In another scenario, I have two completely separate clients with users on this web app. If users from Company A upload data, users from Company B should not be able to view Company A's data, and vice versa. But users from the same company should be able to view the data within their company.
Currently, I have an app that works for a single company. It has a React front-end that uses Firebase for authentication. Once the user is logged in, they can use the app, which sends API calls to a Flask back-end that does some error and authentication checking and then fires off an API call to GCP. This uses a service account whose key is loaded as an environment variable in the environment where the Flask app runs.
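In rough outline, the back-end does something like this (a trimmed-down sketch; the bucket name and route are made up):

```python
# Sketch of the current single-tenant setup: one service account for the app.
# GOOGLE_APPLICATION_CREDENTIALS points at the service-account key file.
import os
from flask import Flask, request
from google.cloud import storage

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload():
    # Every company's data lands in the same project/bucket - the core problem.
    client = storage.Client()  # credentials come from the environment
    bucket = client.bucket(os.environ["UPLOAD_BUCKET"])  # hypothetical config
    blob = bucket.blob(request.files["file"].filename)
    blob.upload_from_file(request.files["file"])
    return {"status": "ok"}
```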
However, if Company B wants to use the app now, both Company A and Company B will be able to see each other's data and visualize it through the app. In addition, they will be sharing a project (I would like to change this so that each client has their own project, which makes billing easier to allocate).
I ultimately want to get this app onto Kubernetes and ensure that each company is independent of the others; however, I do not want separate URLs for every company using the app. I also want to abstract GCP away from the client. I would prefer to authenticate a user based on their login credentials and then give them access to their GCP project (via my front-end) accordingly.
I thought about having separate service keys for each client, storing the service key info in Firebase and using the respective key for each API call, but I'm not sure this is best practice. It is, however, the only strategy I can think of.
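Roughly, the idea I have in mind would look like this (a sketch only; where the key JSON lives and how it's secured is exactly the part I'm unsure about):

```python
# Sketch of the per-tenant key idea: each company gets its own service account
# (and project), and the backend picks the right credentials per request.
from google.cloud import storage
from google.oauth2 import service_account

def load_key_info(company_id):
    # Hypothetical: fetch the stored service-account JSON for this tenant
    # (Firebase? Secret Manager?) - this is the part I'm unsure about.
    raise NotImplementedError

def client_for_company(company_id):
    key_info = load_key_info(company_id)
    creds = service_account.Credentials.from_service_account_info(key_info)
    # Each key points at that company's own project, isolating data and billing.
    return storage.Client(project=key_info["project_id"], credentials=creds)
```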
If anyone could provide some help or guidance it would be very much appreciated. This is my first GCP project and I have not been able to find any answers on GCP, SO, Google Groups, Slack or Medium.
Thanks,
TJ
First of all, welcome to GCP! It's an awesome platform, very powerful and flexible. But it's not magic.
Indeed, the use case that you describe is specific to your business logic. GCP provides tools for securing access for users and VMs (through service accounts), but not for your customers. Here you have to implement your own custom authorization logic, with a database (I don't recommend BigQuery for a website; the latency is too high) that lists the users, the companies they work for, the blobs of each company, and so on.
Nothing is magic, and your use case is specific to you.
If you want to discuss more about which components to use and how to start, no problem. Leave a comment.
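To make that concrete, here is a minimal sketch of such logic, assuming your own database maps users to companies and objects are stored under a per-company prefix (all names are illustrative):

```python
# Sketch: tenant isolation lives in your backend, not in GCP IAM.
# Objects are stored under a per-company prefix, e.g. "company-a/dataset.csv".
from google.cloud import storage

def get_company_for_user(user_id):
    # Hypothetical lookup in your own low-latency store (Cloud SQL, Firestore...)
    return "company-a"

def list_company_blobs(user_id, bucket_name):
    company_id = get_company_for_user(user_id)
    client = storage.Client()
    # Users can only ever see objects under their own company's prefix.
    return list(client.list_blobs(bucket_name, prefix=f"{company_id}/"))
```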
We are a SaaS product and we would like to have per-user data exports that can be used with various analytical (BI) tools like Tableau or Power BI. Instead of managing all those exports manually, we thought of using a cloud database such as AWS Redshift (which would be part of our service). But then it is not clear how a user would access those databases naturally, unless we do some kind of SSO integration with AWS.
So - what is the best practice for exporting data for analytics use in SaaS products?
In this case you can build your security into your backend API layer.
First, set up processes to load your data into Redshift, then make sure that only your backend API server/cluster has access to Redshift (e.g. by placing it in a VPC with no external IP access).
Now that you have your data, you can validate your user as usual through your backend service. When a user requests a download through the backend API, the backend can create a query that extracts from Redshift only the correct data, based on the user's security role. To make this possible you may need to build some kind of security column into your Redshift data model.
I am assuming that getting data into Redshift is not a problem.
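A minimal sketch of that extraction step, assuming a tenant_id security column and the psycopg2 driver (table and column names are made up):

```python
# Sketch: the backend stamps the caller's tenant onto every Redshift query,
# so a user can only extract rows that belong to their own organisation.
import psycopg2

def export_for_user(tenant_id):
    conn = psycopg2.connect(host="redshift.internal", port=5439,  # via the VPC
                            dbname="analytics", user="api_service",
                            password="...")
    with conn.cursor() as cur:
        # Parameterised query: the tenant filter cannot be bypassed by input.
        cur.execute("SELECT * FROM sales_facts WHERE tenant_id = %s",
                    (tenant_id,))
        return cur.fetchall()
```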
What you are looking for, if I understand correctly, is an OEM solution.
The problem is how to mimic the security model you have in place for your SaaS offering.
That depends on how complex your security model is.
If it is as simple as authenticating the user, who then has access to all tenant data, or if the data can easily be filtered per user, things are simple for you. Trusted authentication will let you authenticate the user, and user filtering will let you show them everything they have access to.
But here is the kicker: if your security model is really complex, it can become very difficult to mimic within these products.
For integrating Tableau, this link will help:
https://tableau.github.io/embedding-playbook/#
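For example, Tableau's trusted authentication is, at its core, a small HTTP handshake like this (a sketch; the server URL and view path are placeholders, and your backend's IP must be registered as a trusted host on Tableau Server):

```python
# Sketch of Tableau trusted authentication: the backend (registered as a
# trusted host on Tableau Server) swaps a username for a one-time ticket.
import requests

TABLEAU_SERVER = "https://tableau.example.com"  # placeholder

def get_trusted_view_url(username, view_path):
    resp = requests.post(f"{TABLEAU_SERVER}/trusted", data={"username": username})
    ticket = resp.text
    if ticket == "-1":
        raise RuntimeError("Tableau refused to issue a trusted ticket")
    # The ticket is redeemed once, when the browser loads the embedded view.
    return f"{TABLEAU_SERVER}/trusted/{ticket}/{view_path}"

# e.g. get_trusted_view_url("jill", "views/SalesWorkbook/Overview")
```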
As for Power BI, I am not a fan of this product. I tried to embed a view in one of my applications, and data refresh was a big issue.
It's almost like they want you to be an Azure shop for real-time reporting. (I like GCP more.)
If you create the APIs and populate datasets, they have crazy restrictions like 1 MB/sec, etc.
In other cases, datasets can only be refreshed 8 times.
I gave up on them.
Very recently I got a call from Sisense, and they seemed promising from an OEM perspective as well. You might want to try them.
So I'm looking into the possibility of having one single API Store that can showcase APIs across two different domains. We are using WSO2 APIM and have all the components up and running on one domain. First, would this be possible? We know that there is a firewall between the two domains, so we would have to open some ACLs to allow this. Also, would we be able to share a single Registry DB, or would we need a Registry DB in both domains? I'm hoping someone can provide me with a high-level architecture view of how this can be achieved.
Thank you!
So, to answer my own question: it looks like we are going to host the APIM Store on one domain and then use ADFS to allow the users hosted on the other domain's AD to gain access to it.
I've been thinking about this for quite a while and it's been bugging me. Let's say we have a website, a mobile app and a database.
Usually when we develop our websites we tend to store our database credentials in a configuration file and connect the website directly to the database, without using a multi-tier architecture. But when it comes to a mobile application, such as Android or iOS, these applications can be reverse engineered, meaning there's a risk of exposing your database credentials.
So I started thinking about this multi-tier architecture and about how Facebook and other social networks do their job: they usually build an API and use a lot of HTTP requests.
Usually social network APIs have an app_id and a secret_key; the secret key is meant to increase the safety of the application. But I'm wondering how I could store these keys inside my application, since that brings me back to the beginning of my discussion. If I were to use Java, I could use the Java Preferences class, but that isn't safe either, as I saw in this question. Plus, I would need to make sure my HTTP requests are CSRF-safe.
So, how could I store these keys inside my app? What's the best way to do it, since hard-coding is out of the question?
You should always require users to log in - never store credentials or private keys in an app you'll be distributing. At the very least, don't store them unless they're specific to the user who has chosen to store them after being validated.
The basic idea is that the user should be authenticated in some manner, and how you do that is really too broad to cover in an SO answer. The basic structure should be:
User asks to authenticate at your service and is presented with a challenge.
User responds to that challenge (by giving a password or an authentication token from a trusted identity provider).
Service has credentials to access the database, and only allows authenticated users to do so.
There are entire services out there built around providing this kind of thing, particularly for mobile apps.
You might store the user's own credentials on the device, and if so they should be encrypted (but you're right, a malicious app could potentially pick them up).
Bottom line: never distribute hard coded access to a database directly.
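To illustrate the shape of it, here is a bare-bones sketch using Flask and PyJWT; the endpoint names, the secret and the stub functions are all hypothetical, and real services add refresh, revocation, rate limiting, etc.:

```python
# Sketch: the app ships with no database credentials at all. It trades a
# login for a short-lived token; only the server ever talks to the database.
import datetime
import jwt  # PyJWT
from flask import Flask, request, abort

app = Flask(__name__)
SIGNING_KEY = "server-side-secret"  # lives only on the server, never in the app

def check_password(username, password):
    # Hypothetical user-store lookup (e.g. bcrypt hash comparison).
    return True

def fetch_rows_for(username):
    # Hypothetical query made with the server's own database credentials.
    return []

@app.route("/login", methods=["POST"])
def login():
    body = request.get_json()
    if not check_password(body["username"], body["password"]):
        abort(401)
    token = jwt.encode(
        {"sub": body["username"],
         "exp": datetime.datetime.utcnow() + datetime.timedelta(hours=1)},
        SIGNING_KEY, algorithm="HS256")
    return {"token": token}

@app.route("/data")
def data():
    auth = request.headers.get("Authorization", "")
    try:
        claims = jwt.decode(auth.removeprefix("Bearer "),
                            SIGNING_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        abort(401)
    # Only now does the server, with its own credentials, hit the database.
    return {"rows": fetch_rows_for(claims["sub"])}
```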
We are beginning to implement Sitecore for our website at my company. We are in the midst of the discovery phase and are evaluating the Active Directory module. We have 40-50 users who will be using Sitecore and over 100 users who will be using some customized applications on top of Sitecore.
The consultancy we hired is asking us not to go with Active Directory, since only 40-50 users will be using it. I, on the other hand, think that using the Active Directory module would be useful in the long run.
Do you guys have any input? What is the recommended practice?
Thanks
It really comes down to how you want to govern your CMS users. The AD module bubbles those users up into the CMS as users and thus exposes them for login. You can even do the same with groups/units. The advantage here is that if a new person joins your org and you add them to the OU or assign them to a group that has Sitecore access, they gain access to Sitecore.
On the flip side, if you want Sitecore to be its own entity with its own user profiles and logins, it can do that in a silo without the AD connection.
To the CMS there is no difference where the users are actually authenticated, because the provider you select sits at a low level. So the ultimate choice is more of a governance / IT / process decision, as there's really no functional difference.
My recommendation is to come up with scenarios or use cases and think through each in both setups. E.g. you hire 10 people who need author access: with the AD module you just assign them to the OU or group that inherits the author roles in Sitecore and you're done.
I have implemented the Active Directory module a few times now, and it works really well when you want users to be able to SSO into the authoring interface and you manage your security access within Active Directory. It also works well for end-user SSO if you are building something like an intranet application on Sitecore.
From a security management perspective, it becomes easier for the organization and also allows you to not worry about having to duplicate users between different environments (Dev, Test, Prod).
That being said, there is a performance overhead with using the Active Directory module that is not present if you use only the native Sitecore security provider. With your number of users, you probably won't see any difference, but with extremely large AD directories with complex group memberships you may run into performance issues if you are using indirect membership (i.e. groups within groups).
An example scenario:
Content item in Sitecore is secured to the role MyDomain\SuperAuthor
User A is directly a member of MyDomain\SuperAuthor
User B is a member of MyDomain\SuperUser
MyDomain\SuperUser group is a member of MyDomain\SuperAuthor
If you use the Sitecore security provider, resolving User B's access is very efficient. Sitecore is able to check the indirect membership quickly using the roles within the system.
If you use the Active Directory module, indirect membership is disabled by default, so only User A would have access. If you change the configuration setting to enable indirect membership, the module will then allow User B to have access as well, but you will begin to see slower performance in that scenario.
As I mentioned, though, if what is being pulled into Sitecore from Active Directory is not very complex, you should be fine and probably won't notice these performance impacts.
I don't think the number of users should be the sole reason to decide whether or not to integrate AD, nor should it be whether you may or may not need it in the long run. I would say integrate with AD because of its most obvious benefits:
Single user name and password
Better security
Ease of maintenance
That said, the number of users does become an important deciding factor when you need to create several thousand users and set up authorization for them.
The most common case for manually creating and maintaining users in Sitecore is when you only need a handful of author and approver accounts, mostly for the marketing team. But if you foresee implementing membership, or need to provide access and authorization based on an existing user and group policy, then go for AD integration.