What is the limit on account creation in an organization? - snowflake-cloud-data-platform

In Snowflake, how many accounts can one create in an organization? I read that there is a soft limit of 25; how far can this count go?
Also, is it good practice to create a separate account per client rather than creating a different database within one account?
Thanks

By default, the maximum number of accounts in an organization cannot exceed 25.
This can be increased by contacting Snowflake Support.
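For reference, account creation itself is done in SQL by a user with the ORGADMIN role. A minimal sketch (the account name, credentials, and edition below are illustrative placeholders; each account created this way counts against the organization's limit):

USE ROLE ORGADMIN;

-- Create a new account in the organization (values are placeholders)
CREATE ACCOUNT client_a_account
  ADMIN_NAME = admin_user
  ADMIN_PASSWORD = 'ChangeMe-Str0ng!'
  EMAIL = 'admin@example.com'
  EDITION = ENTERPRISE;

-- List all accounts in the organization, to see how many exist so far
SHOW ORGANIZATION ACCOUNTS;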

Related

Azure SQL Database Pricing

I am unable to locate the cost per transaction for an Azure SQL Database.
https://learn.microsoft.com/en-us/azure/sql-database/sql-database-single-databases-manage
I know the SQL Server database is about $5 per month, but how much for the transactions?
If I go to the Azure Pricing Calculator (https://azure.microsoft.com/en-us/pricing/calculator/) they do not seem to have the info. They list the price for a single database as $187.77, so that is not the same service as the one you create if you use the link above.
TL;DR:
Azure SQL pricing is "flat": first you choose a performance level for your database which has a fixed cost (e.g. S6 for $580/mo or S1 for $30/mo), and this is billed by the second. Azure does not bill your account for actual IO/CPU usage.
The rest:
There is no single "cost per transaction" because a "transaction" is not a single uniform amount of work for a database server (e.g. a single SELECT over a small table with indexes is significantly less IO and CPU intensive compared to a MERGE over millions of rows).
There are several different types of Azure SQL deployment, each with its own formula for determining monthly cost:
Single database (DTU)
Single database (vCore)
Elastic pool
Managed Instance
I assume you're interested in the "single database" deployment types, as "Managed instance" is a rather niche application and "Elastic pool" is to save money if you have lots (think: hundreds or thousands) of smaller databases. If you have a small number (e.g. under 100) of larger databases (in terms of disk space) then "Single database" is probably right for you. I won't go into detail on the other deployment types.
If you go with DTU-based Single Database deployment (which most users do), then the pricing follows this general formula:
Monthly-price = ( Instances * Performance-level )
Where Performance-level is the selected SKU for the minimum level of performance you need. You can change this level up or down at will at any point in time, as you're billed by the second and not per month (though per-second pricing is difficult to work into a monthly price estimate).
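A quick worked example, using the approximate S-tier prices quoted above:

Monthly-price = 1 instance * S1 ($30/mo) = $30/mo
Run for only ten days = (10 / 30) * $30 ≈ $10, since billing is per second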
A "DTU" (Database Throughput Unit) is a unit of measure that represents the actual cost to Microsoft of running your database, which is then passed on to you somewhat transparently (disregarding whatever profit-margin Microsoft has per-DTU, of course).
When determining what performance level to get for your database, select the level that offers the minimum number of DTUs your application actually needs. You determine this through profiling and estimating, usually by starting off with a high-performance database for a few hours (which won't cost more than a few dollars) and running your application code. If the actual DTU usage numbers are low (e.g. you provision an "S6" 400 DTU (~$580/mo) database and see that you only use 20 DTUs under normal load), then you can safely drop down to the "S1" 20 DTU (~$30/mo) performance level.
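If you want to measure rather than guess, one simple way to profile (assuming Azure SQL Database, where the sys.dm_db_resource_stats view records one row roughly every 15 seconds and retains about an hour of history) is:

-- Recent resource usage for this database, expressed as a percentage
-- of the current performance level's limits
SELECT TOP (20)
       end_time,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.dm_db_resource_stats
ORDER BY end_time DESC;

If all three percentages stay low at your chosen tier under a realistic load, a cheaper tier will likely do.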
The question about what a DTU actually is has been asked before, so to avoid creating a duplicate answer please read that QA here: Azure SQL Database "DTU percentage" metric
It depends on your requirements. I am using a single-instance Azure SQL Database, where CPU, transaction throughput, and storage are bundled into a single measure called a 'DTU'; which level you need is entirely driven by your workload.
If it runs in a VM (virtual machine) instead, you pay the VM cost plus the SQL Server licence cost (if you do not already have a SQL Server licence).
Cost calculator: https://azure.microsoft.com/en-us/pricing/calculator/

SQL Server Instance Public Website Security

Is it best practice to separate databases used by public website applications into their own database instance, away from databases that hold PII and company IP, for security reasons? I have around 30 databases that I am migrating to a new environment and I am finding this to be the hardest decision to make. Does anyone have any advice?
Best practice would be to have these databases on completely separate servers, although virtual servers could be a good option if the network interface is separated by the hypervisor (i.e. it is not possible for one VM to sniff the traffic of other VMs using the same card).
The reason is that if one database is breached, the others are not breached too. Yes, you can set up different users with permissions only to their own database, but defense in depth is recommended: if there are any misconfigurations in SQL Server, this adds a layer of protection.
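As a sketch of that least-privilege layer (all names are illustrative; this is not a complete hardening guide), the public website's login exists as a user only in the website's own database, with read access and nothing else:

-- Login for the public website application
CREATE LOGIN WebSiteLogin WITH PASSWORD = 'use-a-strong-generated-password';
GO
USE PublicWebsiteDB;
GO
-- Map the login into this one database only, read access and nothing else
CREATE USER WebSiteUser FOR LOGIN WebSiteLogin;
ALTER ROLE db_datareader ADD MEMBER WebSiteUser;

A breach of the website then exposes only PublicWebsiteDB at the SQL permission layer; separate servers are the additional layer beneath that.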
Is it worth it?
The risk calculation you have to make is whether any breaches will cost the company more than the cost of implementing this.
Single loss expectancy (SLE) = value of asset * exposure factor
Annualised loss expectancy (ALE) = SLE * annual rate of occurrence (ARO)
Value of asset is everything involved in setting up the databases, including acquiring the data, the value of it to owners and users, and the value of the asset to competitors or attackers.
Exposure factor is the percentage of loss a realised threat would have.
ARO is the number of times a threat takes place per year (1 for once a year, 0.5 for once every two years, 2 for twice a year).
So if your ALE is less than the yearly cost to implement and maintain a system with a separate database server for each database, then it isn't worth it. However, a middle ground could be found: you could separate the data onto a few servers until the numbers stack up.
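A worked example (all figures are illustrative): suppose the data is valued at $500,000, a breach would expose 40% of that value, and you expect a breach once every five years (ARO = 0.2):

SLE = $500,000 * 0.40 = $200,000
ALE = $200,000 * 0.2 = $40,000 per year

Separate servers are then worth it only if they cost less than about $40,000 per year to implement and maintain.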
Different instances are a step up in security over the same instance. However, a vulnerability that allows an attacker to gain control of the whole server means that all of your databases are compromised at once, such as this one in an earlier version of SQL Server. There is no guarantee that vulnerabilities like it will not be discovered in the future.

Is it a good idea to have a single database per user in a Forms Authentication environment, instead of a single multi-tenant database?

I've been contemplating this back and forth. Right now, I have an app with many users, and each record in the database is tied to the user's authenticated username (by the CreatedBy column). I know this is a common practice for differentiating user records. But, thinking in terms of keeping the records as secure as possible (so that no other user can access them via SQL injection, XSS, etc.), would it be a good idea to create a separate database for each user? This would obviously mean that if I have 150 users, I'll have 150 databases. Is this generally a bad practice? Is containing each user's records in the same database acceptable, security-wise, as long as I prevent SQL injection and XSS?
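For what it's worth, the shared-database pattern described here is generally considered acceptable provided every query is scoped by the authenticated identity through a parameter, never by string concatenation. A minimal sketch (the table and column names are illustrative):

-- The application always passes the authenticated username as a
-- parameter, so injected input cannot widen the WHERE clause
SELECT Id, Title, CreatedOn
FROM Records
WHERE CreatedBy = @AuthenticatedUserName;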

SQL Server fragmented connection pool performance in multi-tenant applications

I'm looking at an implementation of multi-tenancy in SQL Server. I'm considering the shared database, shared schema and tenant view filter approach described here. The only drawback is a fragmented connection pool...
Per http://msdn.microsoft.com/en-au/architecture/aa479086, Tenant View Filter is described as follows:
"SQL views can be used to grant individual tenants access to some of the rows in a given table, while preventing them from accessing other rows.
In SQL, a view is a virtual table defined by the results of a SELECT query. The resulting view can then be queried and used in stored procedures as if it were an actual database table. For example, the following SQL statement creates a view of a table called Employees, which has been filtered so that only the rows belonging to a single tenant are visible:
CREATE VIEW TenantEmployees AS
SELECT * FROM Employees WHERE TenantID = SUSER_SID()
This statement obtains the security identifier (SID) of the user account accessing the database (which, you'll recall, is an account belonging to the tenant, not the end user) and uses it to determine which rows should be included in the view"
Thinking this through, if we have one database storing, say, 5,000 different tenants, then the connection pool is completely fragmented: every time a request is sent to the database, ADO.NET needs to establish a new connection and authenticate (remember, connection pooling works per unique connection string), and this approach means you have 5,000 connection strings…
How worried should I be about this? Can someone give me some real world examples of how significant an impact the connection pool has on a busy multi-tenant database server (say servicing 100 requests per second)? Can I just throw more hardware at the problem and it goes away?
Thoughts?
My suggestion would be to develop a solid API over your database. Scalability, modularity, extensibility, and accounting are the main reasons. A few years down the line you may find yourself swearing for having played with SUSER_SID(). For instance, consider multiple tenants managed by one account, or situations like white labels...
Have a data access API which takes care of authentication. You can still do authorisation at the DB level, but that's a whole different topic. Have users, and perhaps groups, and grant them permissions to tenants.
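One sketch of how such an API can keep a single shared connection string while still filtering per tenant (this assumes SQL Server 2016 or later, which added SESSION_CONTEXT; names are illustrative, and this is an alternative to the article's SUSER_SID() approach, not part of it):

-- The view filters on a per-session tenant value instead of the login SID
CREATE VIEW TenantEmployees AS
SELECT * FROM Employees
WHERE TenantID = CAST(SESSION_CONTEXT(N'TenantID') AS int);
GO
-- The data access layer sets this once per request, immediately after
-- taking a connection from the one shared pool
EXEC sp_set_session_context @key = N'TenantID', @value = 42;

Because every tenant now uses the same connection string, the pool is no longer fragmented.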
Nevertheless, for huge projects you'll still find it better to have a single DB per big player.
I see I did not answer your main question about fragmented connection pool performance, but I'm convinced there are many valid arguments not to go that path nevertheless.
See http://msdn.microsoft.com/en-us/library/bb669058.aspx for hybrid solution.
See Row level security in SQL Server 2012

Too many Active Directory groups a performance issue?

I am using Active Directory as an identity and group repository for my web application. I connect to it remotely through the LDAP SSL interface and perform operations such as creating users (who have memberships in one or more groups), creating groups, and authenticating users.
I could potentially have tens of thousands of small groups (maybe 50,000) for, say, 100,000 users, in a flat structure. I am wondering if this is going to be a performance issue when calling Active Directory through LDAP. Active Directory enforces uniqueness on usernames and group names. Should I be concerned that operations like creating and updating users might become too slow as the number of groups grows?
No. As long as you have your indexes set up correctly. It's not a flat structure once it's indexed.