I am using Active Directory as an identity and group repository for my web application. I connect to it remotely over the LDAP-over-SSL interface and perform operations such as creating users (each of whom is a member of one or more groups), creating groups, and authenticating users.
I could potentially have tens of thousands of small groups (maybe 50,000) for, say, 100,000 users, in a flat structure. I am wondering whether this is going to be a performance issue when calling Active Directory through LDAP. Active Directory enforces uniqueness on usernames and group names. Should I be concerned that operations like creating and updating users might become too slow as the number of groups grows?
No, as long as you have your indexes set up correctly. Once the relevant attributes are indexed, it is no longer searched as a flat structure.
I have a local Snowflake user account that is used for scheduled processes, such as Tableau connections. Regular Snowflake users have their own credentials that tie back to SSO/MFA systems (note: the SSO/MFA is NOT Snowflake's native SSO/MFA functionality). I use the local user account to take advantage of scheduling and automation of SQL statements in external systems (e.g. Tableau), to avoid needing to complete MFA every time a connection is made or a query is executed.
I would like to enhance the security measures around this local account, since it is not safeguarded by the SSO/MFA architecture that protects individual Snowflake users. Currently the only option I have found is to create a Snowflake network policy assigned to the local user account, restricting logins to the IP ranges that the external systems call from.
What other options are there to secure these local user accounts?
The article below covers the security measures available in Snowflake, along with best practices:
https://community.snowflake.com/s/article/Snowflake-Security-Overview-and-Best-Practices
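Beyond network policies, Snowflake also supports key-pair authentication, which fits service accounts like this well. A minimal sketch, assuming a hypothetical service user tableau_svc and placeholder values:

-- Restrict logins to the known egress IPs (placeholder range).
CREATE NETWORK POLICY tableau_svc_policy
  ALLOWED_IP_LIST = ('203.0.113.0/24');

ALTER USER tableau_svc SET NETWORK_POLICY = 'tableau_svc_policy';

-- Replace the password with key-pair authentication: the private key
-- stays on the scheduling host, and RSA_PUBLIC_KEY_2 can be used to
-- rotate keys without downtime.
ALTER USER tableau_svc SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';
ALTER USER tableau_svc UNSET PASSWORD;

With the password unset, password leakage is no longer a risk for this account, and connections must present the private key.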
I have around 2,000 inactive accounts on our AD server.
For various reasons, we cannot remove these accounts from the system.
I don't see any impact from leaving this many inactive accounts on our AD server, except slow loading and not being able to show all users when we open the UserOU folder.
So what I'm wondering is: is there any later impact on the performance of the AD server, or any risk, in leaving this many disabled accounts?
It should be no problem, but you might still consider deleting them, as described at https://serverfault.com/questions/64175/active-directory-delete-vs-disable-departed-employees
Note: keeping accounts might impact licensing (quote from the above link):
"...that accounts in your AD require a "per-seat" license (if you are swinging that way), whether or not they are a real person and whether or not the person is still present. So there is an argument to be made for deletion!"
I'm working on a JSF project where one web application has many users, each with an account and a separate password; on the back end the application is connected to a PostgreSQL database. I want the tightest security possible, and would therefore like to build a system where, if the JSF web application gets hacked or compromised, there will still be no access to most of the user accounts in the database (I expect the database to run on a separate server, or separate servers if necessary, hosted on a hardened OpenBSD OS).
Normally, I think, a web application connects to a database as a single role with a password, and the application then has direct access to all of that application's users in the database. To make it more secure, I am thinking about giving each client user of the web application a separate role in the database (each with a separate password, of course). But that could mean millions of roles in the database, so I would like to ask whether anyone else has experience with this, and what performance penalties might arise from such a design.
And what will happen if the system gets very popular (like Facebook) and the number of roles needed grows to several hundred million (just to set up a best-case scenario)? Will such a design become a dead end?
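To make this concrete, the per-user setup I have in mind would look roughly like the following (table, column, and role names are just placeholders; the row filtering assumes PostgreSQL's row-level security, available from 9.5):

-- One NOLOGIN group role holds the table privileges.
CREATE ROLE app_clients NOLOGIN;
GRANT SELECT, UPDATE ON accounts TO app_clients;

-- One login role per application user, created at signup.
CREATE ROLE client_1234 LOGIN PASSWORD 'per-user-secret' IN ROLE app_clients;

-- Each role can see only its own rows.
ALTER TABLE accounts ENABLE ROW LEVEL SECURITY;
CREATE POLICY own_rows ON accounts
    USING (owner_role = current_user);

The web application would open each session as the logged-in user's own role, so compromising the application exposes only the roles whose sessions are active.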
Thanks.
I've been contemplating this back and forth. Right now I have an app with many users, and each record in the database is tied to the authenticated user's username (via the CreatedBy column). I know this is a common practice for differentiating user records. But, thinking in terms of keeping the records as secure as possible (so that no other user can access them through SQL injection, XSS, etc.), would it be a good idea to create a separate database for each user? That would obviously mean that with 150 users I'd have 150 databases. Is this generally bad practice? Is keeping each user's records in the same database acceptable, security-wise, as long as I prevent SQL injection and XSS?
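For reference, the shared-table pattern I'm using now is essentially this (illustrative T-SQL; names are placeholders):

-- Shared table: every row carries its owner.
CREATE TABLE dbo.Records (
    RecordID  int IDENTITY PRIMARY KEY,
    CreatedBy nvarchar(128) NOT NULL,
    Payload   nvarchar(max) NOT NULL
);
GO

-- All access goes through parameterized statements, so the username
-- is bound as @UserName and never concatenated into the SQL text.
CREATE PROCEDURE dbo.GetRecordsForUser
    @UserName nvarchar(128)
AS
    SELECT RecordID, Payload
    FROM dbo.Records
    WHERE CreatedBy = @UserName;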
I'm looking at an implementation of multi-tenancy in SQL Server. I'm considering the shared-database, shared-schema approach with the tenant view filter described here. The only drawback is a fragmented connection pool...
Per http://msdn.microsoft.com/en-au/architecture/aa479086, the Tenant View Filter pattern is described as follows:
"SQL views can be used to grant individual tenants access to some of the rows in a given table, while preventing them from accessing other rows.
In SQL, a view is a virtual table defined by the results of a SELECT query. The resulting view can then be queried and used in stored procedures as if it were an actual database table. For example, the following SQL statement creates a view of a table called Employees, which has been filtered so that only the rows belonging to a single tenant are visible:
CREATE VIEW TenantEmployees AS
SELECT * FROM Employees WHERE TenantID = SUSER_SID()
This statement obtains the security identifier (SID) of the user account accessing the database (which, you'll recall, is an account belonging to the tenant, not the end user) and uses it to determine which rows should be included in the view"
Thinking this through: if we have one database storing, say, 5,000 different tenants, then the connection pool is completely fragmented. Every time a request is sent to the database, ADO.NET needs to establish a new connection and authenticate (remember, connection pooling works per unique connection string), and this approach means you have 5,000 connection strings…
How worried should I be about this? Can someone give me some real-world examples of how significant an impact the connection pool has on a busy multi-tenant database server (say, servicing 100 requests per second)? Can I just throw more hardware at the problem and make it go away?
Thoughts?
My suggestion would be to develop a solid API over your database; scalability, modularity, extensibility, and accounting are the main reasons. A few years down the line you may find yourself swearing at your past self for playing with SUSER_SID(). For instance, consider multiple tenants managed by one account, or situations like white labels...
Have a data access API that takes care of authentication. You can still do authorisation at the DB level, but that is a whole different topic. Have users, and perhaps groups, and grant them permissions to tenants.
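A rough sketch of that shape, with hypothetical names (SQL Server 2012+ syntax): one database role per tenant, with application users or groups as its members, so authorisation stays at the role level rather than per end user.

-- One role per tenant; grant it access to that tenant's data.
CREATE ROLE Tenant42Access;
GRANT SELECT, INSERT, UPDATE ON dbo.TenantEmployees TO Tenant42Access;

-- An account managing several tenants is simply a member of several roles.
ALTER ROLE Tenant42Access ADD MEMBER SomeAppUser;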
Nevertheless, for huge projects you'll still find it better to have a single DB per big player.
I realise I did not answer your main question about fragmented connection pool performance, but I'm convinced there are many valid arguments not to go down that path anyway.
See http://msdn.microsoft.com/en-us/library/bb669058.aspx for a hybrid solution.
See also Row level security in SQL Server 2012.
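For completeness: native row-level security shipped later, in SQL Server 2016 (earlier versions emulate it with views, as above). A minimal sketch with hypothetical names, filtering on a tenant ID that the application puts into SESSION_CONTEXT:

-- Inline predicate function: a row is visible only when its TenantID
-- matches the value the application set beforehand with
--   EXEC sp_set_session_context N'TenantID', 42;
CREATE FUNCTION dbo.fn_TenantPredicate (@TenantID int)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN
    SELECT 1 AS allowed
    WHERE @TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS int);
GO

-- Bind the predicate to the table; every query is filtered automatically.
CREATE SECURITY POLICY dbo.TenantFilter
    ADD FILTER PREDICATE dbo.fn_TenantPredicate(TenantID) ON dbo.Employees
    WITH (STATE = ON);

Because the tenant ID travels in session state rather than in the login, all tenants can share a single connection string and the connection pool stays intact.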