SQL Server performance - schema vs multiple databases

This is purely from a performance standpoint and only for SQL Server. I am using SQL Server 2012. I am migrating from a different database server (Ctree). The databases range from less than 100 MB to about 2-3 GB, five in total. There are a lot of tables - over 400 in all.
In terms of performance only, would it be better to use a single database with multiple schemas, or multiple databases as is? The existing setup has multiple databases, and there are 7 different applications (C# - ADO.NET) that use them.
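
For reference, if the single-database route is chosen, a natural mapping is one schema per existing database. Below is a minimal sketch, using hypothetical schema and table names, of creating a schema and moving an existing table into it:

```sql
-- Hypothetical names, for illustration only: one schema per legacy database.
CREATE SCHEMA app1;
GO

-- Move an existing table from the default dbo schema into the app1 schema.
ALTER SCHEMA app1 TRANSFER dbo.Orders;
```

Object references in the applications (or each login's default schema) would need to be adjusted to match.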

I highly doubt you'll see any significant performance benefit from splitting this up into multiple databases if they all end up running on the same physical server anyway.
However: if you do split it up into several separate databases, you won't be able to enforce referential integrity with foreign key constraints across database boundaries - so that's a drawback to consider.
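
To illustrate the referential-integrity point: a foreign key can cross schema boundaries within one database, but not database boundaries. A minimal sketch with hypothetical schema and table names:

```sql
-- Hypothetical schemas and tables, for illustration only.
CREATE SCHEMA sales;
GO
CREATE SCHEMA billing;
GO

CREATE TABLE sales.Customer
(
    CustomerId INT NOT NULL PRIMARY KEY
);

-- A foreign key may reference a table in another schema of the same database...
CREATE TABLE billing.Invoice
(
    InvoiceId  INT NOT NULL PRIMARY KEY,
    CustomerId INT NOT NULL
        REFERENCES sales.Customer (CustomerId)
);

-- ...but not a table in another database: a column declared as
-- REFERENCES OtherDb.sales.Customer (CustomerId) is rejected, because
-- cross-database foreign key references are not supported.
```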

Related

Speeding up SQL query with multiple joins

I have a .NET e-commerce solution running off a mid-sized SQL Server Express database. The system queries the order data with many joins (potentially 20 tables), which is quite slow, particularly during periods of heavy use, and I think I have exhausted the options for indexing the tables and optimising the queries.
I now believe the best option going forward is denormalization - see https://msdn.microsoft.com/en-us/library/cc505841.aspx
What I would like to know is:
Would SQL Server columnstore indexes be a better option?
I am considering using In-Memory OLTP on the denormalized tables, because having the data in memory will undoubtedly make queries faster, but it doesn't seem like the intended use case - so should I?
Should I use something like ElasticSearch instead, and what would be the benefit over SQL Server in-memory OLTP?
Should I use SQL Server OLAP instead? Seems like overkill...
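
Regarding the columnstore option: here is a minimal sketch, assuming a hypothetical denormalized reporting table dbo.OrderFlat and a version/edition that supports columnstore (nonclustered columnstore indexes are updatable from SQL Server 2016, and available in Express from 2016 SP1):

```sql
-- Hypothetical denormalized reporting table, for illustration only.
CREATE TABLE dbo.OrderFlat
(
    OrderId      INT            NOT NULL,
    OrderDate    DATE           NOT NULL,
    CustomerName NVARCHAR(200)  NOT NULL,
    ProductName  NVARCHAR(200)  NOT NULL,
    Quantity     INT            NOT NULL,
    LineTotal    DECIMAL(18, 2) NOT NULL
);

-- A nonclustered columnstore index speeds up scan/aggregate style reporting
-- queries over the denormalized rows while the table remains writable.
CREATE NONCLUSTERED COLUMNSTORE INDEX IX_OrderFlat_ColumnStore
    ON dbo.OrderFlat (OrderDate, CustomerName, ProductName, Quantity, LineTotal);
```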

Advice on SQL Server database architecture

We have a medium-sized web application (multiple instances), querying against a single SQL Server 2014 database.
Not the most robust architecture - no clustering/failover - and we have been getting a few deadlocks recently.
I'm looking at how I can improve the performance and availability of the database, reduce these deadlocks, and have a better backup/failover strategy.
I'm not a DBA, so I'm looking for some advice here.
We currently have the following application architecture:
Multiple web servers reading and writing to a single SQL Server DB
Multiple background services reading and writing to the same single SQL Server DB
I'm contemplating making the following changes:
Split the single DB into two DBs, one read-only and the other read-write. The read-write DB replicates the data to the read-only DB using SQL Server replication.
Web servers connect to one DB or the other depending on the operation.
Background services connect to the read-write DB (most of the writes happen there).
Most of the DB queries on the web servers are reads (and a lot of the writes can be offloaded to the background services), so that's the reason for my thoughts here.
I could then also potentially add clustering to the read-only databases.
Is this a good SQL Server database architecture? Or would the DBAs out there simply suggest a clustering approach?
Goals: performance, scalability, reliability
Without more specific details about your server, it's tough to give you specific advice (for example: what's a medium-sized web application? What are the specs on your database server? What's your I/O latency like? CPU contention? Memory utilization?)
At a high level of abstraction, deadlocks usually occur for one of two reasons:
Your reads are too slow, and
Your writes are too slow.
There are lots of ways to address both of those issues, but in general:
You can cover a lot of coding sins with good hardware, and
Don't re-architect a solution until you've pursued performance tuning options (including indexing strategies and/or procedure rewrites).
Clustering is generally used as a strategy for high availability/disaster recovery, not performance augmentation (there are always exceptions).
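
Before re-architecting, one cheap way to see whether slow reads or slow writes dominate is to check the server's aggregate wait statistics. A minimal sketch (the list of excluded idle waits is illustrative, not exhaustive):

```sql
-- Top waits since the last restart: high PAGEIOLATCH_* waits suggest slow
-- reads from disk, WRITELOG suggests slow log writes, LCK_M_* suggests blocking.
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (N'SLEEP_TASK', N'LAZYWRITER_SLEEP',
                        N'BROKER_TASK_STOP', N'XE_TIMER_EVENT',
                        N'SQLTRACE_INCREMENTAL_FLUSH_SLEEP')
ORDER BY wait_time_ms DESC;
```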

Performance issue when separating a small number of tables into different schemas

Assume that we have a database on MS SQL Server 2008 with 20-30 tables at the core of our distributed system. Permissions to read and write these tables can vary for each layer of our system.
For example, we have three types of clients that can connect to our database directly or via some intermediate layer. To eliminate the possibility of incorrect operations, we have to set the permissions correctly for each type of client.
The obvious solution is to separate our tables into different SQL Server schemas and set permissions on each schema as a whole. Now we must decide how justified this solution is for a relatively small number of tables, and how it will impact performance (it seems that we will very often have to join tables from different schemas).
Joining tables from different schemas will not affect performance.
But, in practice, it is better to grant permissions on stored procedures, not on tables.
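
A minimal sketch of both approaches, with hypothetical schema, role, and procedure names:

```sql
-- Hypothetical names, for illustration only.
CREATE SCHEMA ops;
GO
CREATE ROLE client_reporting;

-- Grant on the schema as a whole: covers every object in it, now and later.
GRANT SELECT ON SCHEMA::ops TO client_reporting;

-- Or, preferably, expose stored procedures and grant EXECUTE only, so clients
-- never touch the base tables directly
-- (assumes a procedure ops.usp_GetOpenOrders exists).
GRANT EXECUTE ON OBJECT::ops.usp_GetOpenOrders TO client_reporting;
```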

Can we mirror SQL Server tables from one DB into another with acceptable performance?

We have two servers, A and B
On server A we have DB OPS_001 with a number of tables.
On server B we have DB XYZ with a number of tables.
We are considering a project to integrate both systems and start pointing the resulting system to the tables on server B (for foreign keys, etc.). We face some technical difficulties physically moving all the tables from A.OPS_001 into B.XYZ, due to legacy applications whose connections would need to be rewritten and recompiled.
Is there a way to mirror the A.OPS_001 tables into B.XYZ in such a way that performance is still acceptable (e.g. not taking 1.2 seconds for a select on a PK)? I know 'acceptable' is a very generic term, but take into consideration that around 150 users rely on those two databases from 9am to 5pm.
I've tested linked server views but it's very slow.
Just so you know, A is SQL Server 2000 and B is SQL Server 2008.
EDIT:
The source DB has 220 tables and the data file itself is around 14 GB.
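
For context, the linked-server views that were tested presumably use four-part names along these lines (the linked server, database, and table names here are hypothetical); every query against such a view pays a round trip to the remote server, which is usually where the latency comes from:

```sql
-- Hypothetical sketch; assumes a linked server named SERVER_A has already been
-- configured on server B and points at the SQL Server 2000 instance.
CREATE VIEW dbo.vw_OpsCustomers
AS
SELECT CustomerId, CustomerName
FROM [SERVER_A].OPS_001.dbo.Customers;
```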
The solution was to migrate the database to the same server; since the databases were extremely tightly coupled, they were merged into a single database with relational integrity.

Can we span SQL Server across multiple machines?

When the database becomes huge, how do we divide it and span it across multiple servers?
How huge? Single-instance SQL Server deployments are capable of handling petabyte-scale databases.
For scale-out, one option to look at is peer-to-peer transactional replication, which can do an in-place scale-out of an application that was not explicitly designed for it.
Applications that are designed for scale-out ahead of time have more options; for instance, consider how MySpace spans over 1,000 individual databases by using a message bus.
For more specific answers, you have to provide more specific details about your real case.
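
To give a rough idea of what peer-to-peer replication involves, here is a heavily trimmed sketch of creating a publication with the peer-to-peer option enabled; the database and publication names are hypothetical, and this omits enabling the database for publishing, adding articles, and configuring the other peers and their subscriptions:

```sql
-- Illustrative only; not a complete peer-to-peer setup.
USE AppDB;
EXEC sp_addpublication
     @publication                  = N'P2P_AppDB',
     @enabled_for_p2p              = N'true',
     @allow_initialize_from_backup = N'true',
     @status                       = N'active';
```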
