Snowflake Cloud data warehouse - Database best practice limitations

Is there a limit on how many schemas and security objects you can have in one Snowflake database, and will there be performance degradation with thousands of these objects?
Will splitting the data into multiple databases help performance?

There is no hard limit to my knowledge. Our experience is that there are dire performance problems once you have many thousands of security objects (users, roles, schemas). We have a couple of support cases open about this. So far we've been able to work out some points which affect the performance:
The performance problems are worse when the security objects are changing a lot. There is some sort of cache which helps a lot if the security objects do not change often.
The performance problems are worse when the security graph is complicated. The sheer number of users/roles is less important than their relations (e.g. user A is granted 1,000 roles, each of which is granted 100 other roles; see the sketch after these points).
Using multiple databases has no effect (apart from the previous point).
Scaling the warehouse has limited effect: you can offset the "lag" in DCL/DDL queries by executing the DQL/DML queries faster.
The performance of the cloud security functions varies a lot - you can average 200 ms on one query on one database for a couple of days, and then out of the blue it rises to an average of 500 ms and stays there for a couple of days.
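To make the fan-out in the second point concrete, here is a minimal sketch that generates such a security graph as Snowflake DDL/DCL statements. The role names, counts, and user name are hypothetical; you would feed the output to your usual execution path (e.g. SnowSQL or the Python connector).

```python
# Generate the hypothetical fan-out described above: one user granted
# 1,000 roles, each of which is granted 100 further roles.
TOP_ROLES = 1000     # roles granted directly to the user
CHILD_ROLES = 100    # roles granted to each top-level role

statements = []
for i in range(TOP_ROLES):
    top = f"ROLE_A_{i}"
    statements.append(f"CREATE ROLE IF NOT EXISTS {top};")
    statements.append(f"GRANT ROLE {top} TO USER user_a;")
    for j in range(CHILD_ROLES):
        child = f"{top}_CHILD_{j}"
        statements.append(f"CREATE ROLE IF NOT EXISTS {child};")
        statements.append(f"GRANT ROLE {child} TO ROLE {top};")

print("\n".join(statements))   # ~101,000 roles and as many grants
```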
Interestingly, we're constantly told by support that we're the only customer experiencing this kind of problem.

Related

Multi-tenancy: What benefit does one-db-per-tenant provide?

From what I've read, it seems like each site on the SE network lives in a single app but has its own database. This seems costly, and I'm not sure what the benefit is. My novice theory is that since each DB has fewer rows in each table, reads and writes are a little faster across the board.
This benefit comes at the seemingly huge cost of applying any migrations or db updates across databases, which is more time consuming and introduces higher costs for redundancy, DevOps, etc.
What benefit does this have over single-db + single-app architecture?
Keeping each tenant in a separate database makes it very easy to move a highly demanding tenant to their own server, place their data/log files on faster I/O, etc. If you put everyone in the same database, you're eventually going to hit a wall on your current hardware, and then you're going to have to move everyone to bigger hardware anyway.
The maintenance part is not really a big deal, and it actually makes some things easier. For deployments to identical schemas, you don't really need any additional DevOps resources; you just need a loop (or tools like SQL Farm Combine or Red-Gate Multi Script) - see the sketch below. Backups are a case where things get easier: while it sounds like more administrative overhead, and you'll still have about the same amount of overall data to back up, separate databases actually allow you much greater control - you can put different backups on different drives, run them on different schedules, they should be smaller and faster, and you can even keep different tenants on different recovery models. One additional bonus: restoring to a point in time due to a problem with one tenant only affects that tenant.
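A minimal sketch of that deployment loop, assuming one SQL Server database per tenant; the server name, tenant database names, and migration statement are hypothetical placeholders.

```python
# Apply one migration script to every tenant database in turn.
# Server, database names, and the DDL below are hypothetical.
import pyodbc

TENANT_DBS = ["tenant_acme", "tenant_globex", "tenant_initech"]
MIGRATION = "ALTER TABLE dbo.Orders ADD Notes NVARCHAR(500) NULL;"

for db in TENANT_DBS:
    conn = pyodbc.connect(
        f"DRIVER={{ODBC Driver 17 for SQL Server}};"
        f"SERVER=myserver;DATABASE={db};Trusted_Connection=yes;"
    )
    try:
        conn.execute(MIGRATION)
        conn.commit()
        print(f"migrated {db}")
    finally:
        conn.close()
```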
Another benefit is keeping each tenant's data separate - which sometimes satisfies a legal requirement, but also makes it very easy to delete a tenant or move them to a different server without affecting any other tenant.
There is some more discussion in these answers over on dba.SE:
https://dba.stackexchange.com/a/16767/1186
https://dba.stackexchange.com/a/33556/1186
This is also a large facet in the design of cloud-based solutions like Windows Azure SQL Database.
It's similar to any scale up vs. scale out decision.
If you scale out, you can distribute the workload across more hardware at lower cost (generally). Reads and writes are also easier to optimize on smaller databases, as you mention. Basically, it's just easier to maintain and control performance on smaller databases: indexes are smaller, you don't have to worry about advanced design issues like partitioning, restores are faster in case you have to recover a single tenant's data, you don't have to worry about code mistakes exposing other tenants' data improperly, and so on.
As with everything, there are plenty of trade-offs that have to be considered, as you mentioned. But there are very clear benefits to avoiding a multi-tenant situation.

The benefits of a multi-database web site?

What are the benefits of creating a multi-database site?
Does the size split increase speed?
In what cases would we want more than one connected to a site, and do we really need it?
The only real benefit I can think of is having multiple tables with the same name. That way, you could run a debugging database and a development database with the same tables: the debugging database has testing information in it, and you can switch over to the development database when it's ready just by changing a single string. You could also use that to perform seamless backups, among other things. Speed might possibly be improved a bit if you had tons of tables with tons of fields, but probably not by much.
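A minimal sketch of that single-string switch, using SQLite so it runs anywhere; the database file names are hypothetical.

```python
# Point the application at one of two identical-schema databases
# just by changing a single string.
import sqlite3

DB_NAME = "debug.db"   # flip to "development.db" when it's ready

conn = sqlite3.connect(DB_NAME)
conn.execute("CREATE TABLE IF NOT EXISTS posts (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("INSERT INTO posts (body) VALUES (?)", ("test data",))
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0])
conn.close()
```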
Edit: Actually, if the multiple databases span multiple servers, you could store information that needs fast access (for example, in a forum, the latest threads and the user information) on the local server, and store, say, archived threads on a high-capacity but slower-to-access server, which would definitely increase speed and decrease the cost of space on the main server.
I'm not sure what you mean by multi-database. If you're thinking about replication of one dataset across multiple database servers, you'll gain redundancy, and will also have the option of load balancing queries.
Edit:
You're mentioning "split of size", which may suggest you're thinking about fragmenting or sharding your dataset. That will reduce the amount of data that is unavailable or lost if a server crashes. It will also spread queries across multiple servers, which will let you handle higher load.
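A minimal sketch of such sharding: hash the key to decide which server holds a given user's rows. The shard host names are hypothetical.

```python
# Route each user's rows to one of several shard servers by hashing the key,
# so both data and query load are spread across machines.
import hashlib

SHARDS = ["db-shard-0.example.com",
          "db-shard-1.example.com",
          "db-shard-2.example.com"]

def shard_for(user_id: str) -> str:
    digest = hashlib.md5(user_id.encode()).hexdigest()
    return SHARDS[int(digest, 16) % len(SHARDS)]

print(shard_for("orion"))   # the same user always maps to the same shard
```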

SQL Server architecture guidance

We are designing a new version of our existing product on a new schema.
It's an internal web application with possibly 100 concurrent users (max). This will run on a SQL Server 2008 database.
One of the recent discussion items is whether we should have a single database or split the database across 2 separate databases for performance reasons.
The database could grow anywhere from 50-100GB over 5 years.
We are developers, not DBAs, so it would be nice to get some general guidance.
[I know the answer is not simple as it depends on the schema, archiving policy, amount of data etc. ]
Option 1: Single Main Database
[This is my preferred option].
The plan would be to have all the tables in a single database, possibly using filegroups and partitioning to separate the data across multiple disks if required. [Use schemas if appropriate.] This should deal with the performance concerns.
One of the comments regarding this was that a single server instance would still be processing this data, so there would still be a processing bottleneck.
For reporting we could have a separate reporting DB but this is still being discussed.
Option 2: Split the database into 2 separate databases
DB1 - Customers, Accounts, Customer resources etc
DB2 - This would contain the bulk of the data [i.e. Vehicle tracking data, financial transaction tables etc].
These tables would typically contain a lot of data. [It could reside on a separate server if required]
This plan would involve keeping the main data in a smaller database [DB1] and retaining the [mainly] read-only, transaction-type data in a separate DB [DB2]. The UI would mainly read from DB1 and thus be more responsive.
[I'm aware that this option makes it harder for Referential Integrity to be enforced.]
Points for consideration
As we are at the design stage, we can at least make proper use of indexes to deal with performance issues, so that's why Option 1 is attractive to me; it's also the more standard approach.
For both options we are considering implementing an archiving database.
Apologies for the long question. In summary, the question is: 1 DB or 2?
Thanks in advance,
Liam
Option 1 in my opinion is the way to go.
CPU is very unlikely to be your bottleneck with 100 concurrent users providing your workload. You could acquire a single multi-socket server with additional CPU capacity available via hot-swap technology to give you room to grow should you wish. Depending on your availability requirements, you could also consider a clustering solution that lets you swap in more CPU resources by forcing a failover to another node.
The performance of your disk subsystem is going to be your biggest concern. Your design decisions will be influenced by the storage solution you use, which I assume will be SAN technology.
As a minimum you will want to place your LOG (RAID 1) and DATA files (RAID 10 or 5, depending on workload) on separate LUNs.
Depending on your table access patterns, you may wish to consider placing different filegroups on separate LUNs. Partitioning your table data could prove advantageous, but only for large tables; a sketch of the DDL involved follows.
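A minimal sketch of the kind of T-SQL that sets this up, generated from Python so the pattern is easy to review; the database name, filegroups, boundary dates, and table are all hypothetical (each filegroup would also need a data file added before use).

```python
# Emit T-SQL that adds per-year filegroups and partitions a large
# table across them. All names and dates below are hypothetical.
YEARS = [2009, 2010, 2011]

ddl = [f"ALTER DATABASE TrackingDB ADD FILEGROUP FG_{y};" for y in YEARS]
ddl.append(
    "CREATE PARTITION FUNCTION pf_by_year (datetime)\n"
    "AS RANGE RIGHT FOR VALUES ('2010-01-01', '2011-01-01');"
)
ddl.append(
    "CREATE PARTITION SCHEME ps_by_year\n"
    "AS PARTITION pf_by_year TO (FG_2009, FG_2010, FG_2011);"
)
ddl.append(
    "CREATE TABLE dbo.VehiclePositions (\n"
    "    RecordedAt datetime NOT NULL,\n"
    "    VehicleId  int NOT NULL,\n"
    "    Lat float, Lon float\n"
    ") ON ps_by_year (RecordedAt);"
)
print("\n\n".join(ddl))
```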
50 to 100GB and 100 users is a pretty small database by most standards today. Don't over-engineer your solution by trying to solve problems that you haven't even seen yet. Splitting it into two databases, especially on two different servers, will create a mountain of headaches that you're better off without. Concentrate your efforts on creating a useful product instead.
I agree with the other comments stating that 50 to 100GB is small these days. I'd also agree that you shouldn't over-engineer.
But if there is an obvious (or not-so-obvious) logical separation between the entities you store (like you say, one part being read-write and the other mainly read-only), I'd still split them into different DBs. At least I would design it in a way that let me easily factor one piece out. Security would be one reason; management/backup/restore another; easier serviceability (because the design will inherently be better factored, with parts better isolated from each other); and, in SQL Server, the ability to scale out (or the lack thereof with a single database). Separating login and content databases, for example, often makes sense for bigger web applications.
And if you really want a sound design, separate your entities within a single DB using different schemas and put proper permissions on objects - you end up with almost the same effort, in my eyes.
Microsoft products like SharePoint, TFS and BizTalk all use several different databases (Though I do not pretend to be aware of the reasons / probably just the outcome of the way they organize their teams).
Especially given that you cannot scale out a single database instance on SQL Server (clustering needs multiple instances), I'd be tempted to split it.
#John: I would never use RAID 5. It serves no purpose other than to hurt performance. I agree with the RAID 10 approach.
Putting data in another database is not going to make the slightest difference to performance. Performance is a factor of other things entirely.
A reason to create a new database is for maintenance and administration reasons. For example if one set of data needs a different backup and recovery policy or has higher availability requirements.

ensuring database performance as data volume increases

I am currently in the middle of a performance tuning exercise. The application is DB-intensive with very little processing logic. The performance tuning is around the way DB calls are made and around the DB itself.
We did the query tuning, added the missing indexes, and reduced or eliminated DB calls where possible. The application is performing very well, and all is fine.
With smaller data volumes (say, up to 100,000 records), the performance is fantastic. My question is: what needs to be done to ensure such good performance at higher data volumes?
The data volumes are expected to reach 10 million records.
I can think of table and index partitioning, suggesting filesystems optimized for DB storage and periodic archiving to keep the number of rows in check. I would like to know what else could be done. Any tips/strategies/patterns would be very helpful.
Monitoring. Use some tools to monitor performance, and saturation of CPU, memory, and I/O. Make trend lines so you know where your next bottleneck will be before you get there.
Testing. Create mock data so you have 10 million rows on a testing server today. Benchmark the queries you have in your application and see how well they perform as the volume of data increases (see the sketch after these points). You might be surprised at what breaks down first, or it may go exactly as predicted. The point is that you can find out.
Maintenance. Make sure your application and infrastructure support some downtime, because that's always necessary. You might have to defrag and rebuild your indexes. You might have to refactor some of the table structure. You might have to upgrade the server software or apply patches. To do this without interrupting continuous operation, you'll need some redundancy built in to the design.
Research. Find the best journals and blogs for the database brand you're using, and read them (e.g. http://www.mysqlperformanceblog.com if you use MySQL). You can ask good questions like the one you ask here, but also read what other people are asking, and what they're being advised to do about it. You can learn solutions to problems that you don't even have yet, so that when you have them, you'll have some strategies to employ.
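A minimal sketch of the testing point above, using SQLite so it runs anywhere; the table shape and row count are hypothetical stand-ins for your own schema.

```python
# Load mock rows, then time a representative query before and
# after adding an index. Scale ROWS up toward your 10 million target.
import sqlite3, time, random

ROWS = 1_000_000
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    ((random.randrange(100_000), random.random() * 100) for _ in range(ROWS)),
)

def bench(label):
    start = time.perf_counter()
    conn.execute("SELECT SUM(total) FROM orders WHERE customer_id = 42").fetchone()
    print(f"{label}: {time.perf_counter() - start:.4f}s")

bench("full scan")
conn.execute("CREATE INDEX idx_customer ON orders (customer_id)")
bench("indexed")
```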
Different databases need to be tuned in different ways. What RDBMS are you using?
Also, how do you know whether or not what you have done so far will result in poor performance with larger data sets? Have you tested your current optimisations with a large amount of test data?
When you did this, how did the performance change? If you are able to tune the database so that it performs with the data it has now, there's no reason to think that your methods won't work with a larger data set.
Depending on the RDBMS, the next type of solution is simple: get bigger, beefier hardware. More RAM, more disks, more CPUs.
You are on the right track:
1) Proper indexes
2) DBMS options tuning (memory caches, buffers, internal threads control and so on)
3) Queries tuning (especially log slow queries and then tune/rewrite them)
4) To tune your queries and indexes you may need to examine your queries' execution plans (see the sketch after this list)
5) A powerful dedicated server
6) Think about queries which your client applications send to the database. Are they always necessary? Do you need all the data you ask for? Is it possible to cache some data?
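A minimal sketch of point 4, using SQLite's EXPLAIN QUERY PLAN (MySQL and PostgreSQL have their own EXPLAIN equivalents); the table and query are hypothetical.

```python
# Inspect an execution plan to confirm whether an index is used.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)")

query = "EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?"
for row in conn.execute(query, ("a@example.com",)):
    print(row)   # expect a full table scan here

conn.execute("CREATE INDEX idx_email ON users (email)")
for row in conn.execute(query, ("a@example.com",)):
    print(row)   # now expect "SEARCH users USING INDEX idx_email"
```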
10 million records is probably too small to bother with partitioning. Typically, partitioning only becomes interesting when your data volumes are an order of magnitude or so bigger than that.
Index tuning for a database with 100,000 rows will probably get you 99% of what you need with 10 million rows. Keep an eye out for table scans or index range scans on the large tables in the system. On smaller tables they are fine and in some cases even optimal.
Archiving old data may help but this is probably overkill for 10 million rows.
One possible optimisation is to move reporting off onto a separate server. This will reduce the burden on the server - reports are often quite anti-social when run on operational systems as the schema tends not to be well optimised for it.
You can use database replication to do this, or build a data mart for reporting. Replication is easier to implement, but the reports will be no more efficient than they were on the production system. Building a star-schema data mart will be more efficient for reporting, but it incurs additional development work.
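A minimal sketch of what such a star-schema data mart looks like, in SQLite for portability; the fact and dimension tables are hypothetical.

```python
# A tiny star schema: one fact table keyed to two dimension tables,
# so reports become simple joins out to the dimensions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, day TEXT, month TEXT, year INT);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, region TEXT);
CREATE TABLE fact_sales (
    date_key     INT REFERENCES dim_date (date_key),
    customer_key INT REFERENCES dim_customer (customer_key),
    amount       REAL
);
""")
report = conn.execute("""
    SELECT d.year, c.region, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d     ON d.date_key = f.date_key
    JOIN dim_customer c ON c.customer_key = f.customer_key
    GROUP BY d.year, c.region
""")
print(report.fetchall())
```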

Pros of databases like BigTable, SimpleDB

New-school datastore paradigms like Google BigTable and Amazon SimpleDB are specifically designed for scalability, among other things. Basically, disallowing joins and denormalizing data are the ways this is being accomplished.
In this topic, however, the consensus seems to be that joins on large tables don't necessarily have to be too expensive, and that denormalization is "overrated" to some extent.
Why, then, do these systems disallow joins and force everything together into a single table to achieve scalability? Is it the sheer volume of data that needs to be stored in these systems (many terabytes)?
Do the general rules for databases simply not apply to these scales?
Is it because these database types are tailored specifically towards storing many similar objects?
Or am I missing some bigger picture?
Distributed databases aren't quite as naive as Orion implies; there has been quite a bit of work done on optimizing fully relational queries over distributed datasets. You may want to look at what companies like Teradata, Netezza, Greenplum, Vertica, and AsterData are doing. (Oracle finally got in the game as well, with their recent announcement; Microsoft bought their solution by acquiring the company that used to be called DataAllegro.)
That being said, when the data scales up into terabytes, these issues become very non-trivial. If you don't need the strict transactionality and consistency guarantees you can get from RDBMSs, it is often far easier to denormalize and avoid joins - especially if you don't need to cross-reference much, and especially if you are not doing ad-hoc analysis but require programmatic access with arbitrary transformations.
Denormalization is overrated. Just because that's what happens when you are dealing with 100 terabytes doesn't mean this fact should be used by every developer who never bothered to learn about databases and has trouble querying a million or two rows due to poor schema planning and query optimization.
But if you are in the 100-terabyte range, by all means...
Oh, the other reason these technologies are getting the buzz -- folks are discovering that some things never belonged in the database in the first place, and are realizing that they aren't dealing with relations in their particular fields, but with basic key-value pairs. For things that shouldn't have been in a DB, it's entirely possible that the Map-Reduce framework, or some persistent, eventually-consistent storage system, is just the thing.
On a less global scale, I highly recommend BerkeleyDB for those sorts of problems.
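A minimal sketch of that key-value style of access, using the standard library's shelve module as a stand-in for stores like BerkeleyDB; the file name and keys are hypothetical.

```python
# Persistent key-value storage: no schema, no joins, lookup by key only.
import shelve

with shelve.open("sessions.db") as db:   # file name is hypothetical
    db["user:42"] = {"name": "orion", "last_seen": "2009-01-01"}

with shelve.open("sessions.db") as db:
    print(db["user:42"])
```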
I'm not too familiar with them (I've only read the same blogs/news/examples as everyone else), but my take on it is that they chose to sacrifice a lot of the normal relational DB features in the name of scalability - I'll try to explain.
Imagine you have 200 rows in your data-table.
In Google's datacenter, 50 of these rows are stored on server A, 50 on B, and 100 on server C. Additionally, server D contains redundant copies of the data from servers A and B, and server E contains a redundant copy of the data on server C.
(In real life I have no idea how many servers would be used, but it's set up to deal with many millions of rows, so I imagine quite a few).
To "select * where name = 'orion'", the infrastructure can fire that query to all the servers, and aggregate the results that come back. This allows them to scale pretty much linearly across as many servers as they like (FYI this is pretty much what mapreduce is)
This, however, means you need to accept some tradeoffs.
If you needed to do a relational join on some data spread across, say, 5 servers, each of those servers would need to pull data from each other for each row. Try doing that when you have 2 million rows spread across 10 servers.
This leads to tradeoff #1 - No joins.
Also, depending on network latency, server load, etc., some of your data may get saved instantly, but some may take a second or two. Again, when you have dozens of servers, this takes longer and longer, and the normal approach of 'everyone just waits until the slowest guy has finished' is no longer acceptable.
This leads to tradeoff #2 - Your data may not always be immediately visible after it's written.
I'm not sure what other tradeoffs there are, but off the top of my head those are the main 2.
So what I'm getting is that the whole "denormalize, no joins" philosophy exists not because joins themselves don't scale in large systems, but because they're practically impossible to implement in distributed databases.
This seems pretty reasonable when you're storing largely invariant data of a single type (Like Google does). Am I on the right track here?
If you are talking about data that is virtually read-only, the rules change. Denormalisation is hardest in situations where data changes because the work required is increased and there are more problems with locking. If the data barely changes then denormalisation is not so much of a problem.
Nowadays you need to find a more interoperable environment for your databases. Frequently you need not only relational DBs like MySQL or MS SQL, but also big-data farms such as Hadoop, or non-relational DBs like MongoDB. In some cases all of those DBs will be used in one solution, so their performance must be as balanced as possible at the macro scale. That means, for instance, that you cannot pair Azure SQL as your relational DB with a single 2-core, 3GB-RAM VM for MongoDB. You must scale up your solution and use DB-as-a-Service where possible (and where that is not possible, build your own cluster in the cloud).
