Database Table Synchronization Without Table Dropping?

My company's workflow relies on two MSSQL databases: one for web content data and the other for the ERP. I've been doing a proof of concept on some tools that would serve as an intermediary building a relationship between the datasets, and thus far it's proving to be monumentally faster than querying both systems directly.
Instead of reading out to both datasets, I'd much rather house a database on the local Linux box that represents the data I'm working with. That way, it's less pressure on the system as a whole.
What I don't understand is whether there is a way to update this new database without completely dropping the table each time or running through a punishing line-by-line check. If the records had timestamps, this would be easy... but they don't.
Does anyone have any tips? Am I missing some crucial feature I don't know about, or am I SOL?
Finally, is there one preferred database stack out there anyone thinks might work better than another? I'm not committed to any technology at this point.
Thanks!

Have you read about the MERGE statement in SQL? It allows updates, inserts, and deletes against an existing table in a single statement.
I assume your tables have primary keys even though you say there is no timestamp.
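As a minimal sketch of what that might look like, assuming a hypothetical LocalProducts copy keyed on ProductID (the real table and column names aren't given in the question):

    MERGE dbo.LocalProducts AS target
    USING erp.dbo.Products AS source
        ON target.ProductID = source.ProductID        -- match on the primary key
    WHEN MATCHED AND (target.Name <> source.Name
                   OR target.Price <> source.Price) THEN
        UPDATE SET target.Name  = source.Name,
                   target.Price = source.Price         -- refresh rows that changed
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (ProductID, Name, Price)
        VALUES (source.ProductID, source.Name, source.Price)   -- add new rows
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;                                        -- drop rows removed upstream

Because it matches on the key rather than a timestamp, it avoids both dropping the table and a hand-written row-by-row comparison.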

Related

Split table rows of data into multiple tables according to columns, obeying constraints

I have a source flat file with about 20 columns of data and roughly 11K records. Each record (row) contains info such as
PatientID, PatientSSN, PatientDOB, PatientSex, PatientName, PatientAddress, PatientPhone, PatientWorkPhone, PatientProvider, PatientReferrer, PatientPrimaryInsurance, PatientInsurancePolicyID.
My goal is to move this data to a SQL database.
I have created a database with the below data model.
I now want to do a bulk insert to move all the records; however, I am unsure how to do that because, as you can see, there are (and have to be) constraints in order to ensure referential integrity. What should my approach be? Am I going about this all wrong? Thus far I have used SSIS to import the data into a single staging table, and now I must figure out how to write the 11K-plus records to the individual tables to which they belong... so record 1 of the staging table will create one record across almost all of the tables, minus perhaps the ones where there are one-to-many relationships, like "Provider" and "Referrer", as one provider will be linked to many patients but one patient can only have one provider.
I hope I have explained this well enough. Please help!
As the question is generic, I'll approach the answer in a generic way as well - in an attempt to at least get you asking the right questions.
Your goal is to get flat-file data into a relational database. This is a very common operation and is at least a subset of the ETL process. So you might want to start your search by reading more on ETL.
Your fundamental problem, as I see it, is two-fold. First, you have a large amount of data to insert. Second, you are inserting into a relational database.
Starting with the second problem first: not all of your data can be inserted each time. For example, you have a provider table that holds a 1:many relationship with patients. That means you will have to ask, for each patient row in your flat table, whether the provider already exists or needs creating. Also, you have seeded IDs, meaning that in some instances you have to maintain your order of creation so that you can reference the ID of a created entry in the next created entry. What this means for you is that your effort will be more complex than a simple set of SQL inserts. You need logic associated with the effort. There are several ways to approach this.
Pure SQL/T-SQL: it can be accomplished, but it would be a lot of work and hard to debug/troubleshoot
Write a program: This gives you a lot of flexibility, but means you will have to know how to program and use programming tools for working with a database (such as an ORM)
Use an automated ETL tool
Use SQL Server's flat-file import abilities
Use an IDE with import capabilities - such as Toad, Datagrip, DBeaver, etc.
Each of these approaches will take some research and learning on your part -- this forum cannot teach you how to use them. And the decision as to which one you want to use will somewhat depend on how automated the process should be.
Concerning your first issue -- large data inserts. SQL Server has a facility for bulk inserts (see the BULK INSERT docs), but you will have to condition your data first.
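As a rough sketch of the pure-SQL route from the staging table you already have (table and column names here are guesses based on the flat-file description, not your actual model), you split the rows out in dependency order so parent rows exist before the child rows that reference them:

    -- 1. Create any providers that don't exist yet (the "one" side first).
    INSERT INTO dbo.Provider (ProviderName)
    SELECT DISTINCT s.PatientProvider
    FROM   dbo.Staging AS s
    WHERE  NOT EXISTS (SELECT 1 FROM dbo.Provider AS p
                       WHERE p.ProviderName = s.PatientProvider);

    -- 2. Insert patients, looking up the seeded ProviderID for the foreign key.
    INSERT INTO dbo.Patient (SSN, DOB, Sex, PatientName, Address, Phone, ProviderID)
    SELECT s.PatientSSN, s.PatientDOB, s.PatientSex, s.PatientName,
           s.PatientAddress, s.PatientPhone, p.ProviderID
    FROM   dbo.Staging AS s
    JOIN   dbo.Provider AS p ON p.ProviderName = s.PatientProvider;

The same pattern repeats for Referrer, Insurance, and so on: always insert the "one" side of each one-to-many relationship before the "many" side that references it.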
Personally (as per my comments), I am a .Net developer. But given this task, I would still script it up in Python. The learning curve is very kind in Python, and it has lots of great tools for working with files and databases. .Net and EF carry with them a lot of overhead, with respect to what you need to know to get started, that Python doesn't -- but that is just me.
Hope this helps get you started.
Steve you are a boss, thank you. Ed thanks to you as well!
I have taken everyone's guidance into consideration and concluded that I will not be able to get away with a simple solution for this.
There are bigger implications so it makes sense to accomplish this ground work task in such a way that allows me to exploit my efforts for future projects. I will be proceeding with a simple .net web app using EF to take care of the data model and write a simple import procedure to pull the data in.
I have a notion of how I will accomplish this but with the help of this board I'm sure success is to follow! Thanks all-Joey
For the record, the tools I plan on using (I agree with the complexity and learning-curve opinions but have an affinity for MS products):
Azure SQL Database (data store)
Visual Studio 2017 CE (ide)
C# (Lang)
.net MVC (project type)
EF 6 (orm)
Grace (cause I'm only human :-)

When is a flat DB design acceptable?

When is it OK to use a flat DB table design nowadays? Ever? What I mean is: when is it OK to abandon the wisdom of relational database design and revert to a flat table structure that incorporates no links, adding extra columns to hold more data, when we should be creating a key to another table to store multiple rows?
I'm working on some ideas to discuss with a product management team. When I initially asked the question "Why are all these tables flat in nature" I was told that
"Read centric databases display better performance with a flat table structure."
I struggle with this explanation because a flat design presents so many barriers to progress down the road.
Thoughts?
"Read centric databases display better performance with a flat table structure." This statement says table won't/rarely be used to insert/update/delete operations. In that case table must be properly indexed to get good performance. Since there won't be any kind of joins so table would be using lot of filters in where clause hence indexing is really important to be used appropriately.
This kind of scenario is usually used in data warehouses. When we designs warehouses, we usually eliminates primary/foreign keys and uses business primary keys. This is because of huge database in wareshouse.
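For example (illustrative table and column names only), a covering index keyed on the filter columns and INCLUDE-ing the returned columns lets a read-mostly flat table answer such queries without scanning the whole thing:

    CREATE NONCLUSTERED INDEX IX_FlatSales_Region_Status
    ON dbo.FlatSales (Region, SaleStatus)          -- columns used in the WHERE clause
    INCLUDE (CustomerName, SaleDate, SaleTotal);   -- columns the query returns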
Never.
Whatever problem you think you are going to solve by ignoring relational database theory, you will only create many more intractable problems. Furthermore, the original problem that you attempt to avoid by ignoring relational theory will invariably be based on a misconception anyway.
Short answer: Almost always!
Your website almost never needs conventional database!
After 20 years of working as an IT admin on big and small projects, I can say with confidence that over 90% of today's websites do not need a database AT ALL.
It's just another layer of obfuscation that most companies and people can do without.
Face the facts, people. Most websites out there don't get a single hit in a day, so talking about database performance is quite silly when it comes to the HUGE majority of websites today (2019).
That means that over 90% of these sites could and should switch to some flat-file CMS/CRM like PageKit, Grav or Bludit (it's my personal favorite because of its minimalistic approach: it forgoes even a flat DB and uses ordinary folders to hold articles as HTML files).
I never did figure out why CMS leaders like WordPress and Joomla insist on complicating their default setup by forcing their users into a database connection and configuration that's often the reason the site malfunctions. If, and only if, a site actually needs some type of DB, for instance because it has many user accounts, then a DB is warranted. Still, most websites have only a handful of user accounts.
Many times we see a site go down because the database engine is down or can't handle so many simultaneous connections while the Apache or NginX web server is still up and running.
Don't just follow others blindly. Time to be brave and lead.

Parallel bulk loading using partition switching of indexed table in SQL Server 2008

This is a follow up to a previous question of mine after definitely deciding on partition switching as the best way to quickly get data into a heavily indexed fact type table that needs to remain available to readers.
While it seems to be the best way, it is not quite good enough to really satisfy the requirement to allow several (< 5) users to bulk insert at the same time, have the new data indexed, and have it appear in the indexed views (not necessarily real indexed views, just selects that rely on indices).
The idea of partitioning was that each partition and the index subtree rooted at the partition could, in parallel, be locked as read-only, copied into a working table, new data inserted/updated and the indexes rebuilt then switched back into the main table so readers aren't affected.
The problem is the single working table. Each parallel bulk insert needs its own copy, with the same constraints as the main table to allow switching.
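For context, the single-working-table cycle looks roughly like this (names and the partition number are placeholders; the working table must match the fact table's schema, indexes, and filegroup, and carry a check constraint bounding its rows to the partition's range, for the switch back in to be allowed):

    -- Switch the target partition out into the empty working table.
    ALTER TABLE dbo.Fact SWITCH PARTITION 5 TO dbo.FactWork;

    -- Bulk load the new rows into the working table.
    INSERT INTO dbo.FactWork WITH (TABLOCK)
    SELECT * FROM dbo.NewRows;

    -- Rebuild the indexes on the (much smaller) working table.
    ALTER INDEX ALL ON dbo.FactWork REBUILD;

    -- Switch the refreshed data back into the fact table.
    ALTER TABLE dbo.FactWork SWITCH TO dbo.Fact PARTITION 5;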
So far I've hit several walls trying to get around this bottleneck:
I tried partitioning the working table using the same partition function. This doesn't work because you can't disable the indexes on a partition basis to insert into one while rebuilding the index on another.
Creating a temporary table as the working table. This doesn't work because, while you can use the same index names, you can't easily dynamically create the constraints and can't switch that in anyway.
Have a fixed set of named working tables? How can I select one and work with it under an alias so I have just one stored proc?
Dynamic SQL? I've tried very hard to avoid going that route. It's complicated as it is.
Big challenge, but has anyone got any ideas before I accept the bottleneck? Would SQL Server 2012 help? How do proper data warehouses cope with this?
How do proper data warehouses cope with this? Compromise and set realistic goals for the EDW. The data warehouse can't be everything to everyone. Make sure that what you're implementing is the best solution for the business (not just the techies/analysts). Are your goals realistic if you cannot find solutions from experienced peers and experts?
Associate a cost with all of the hoops you jump through. Does the data really need to be up to the minute? What if I told you that we needed to spend another $200,000 on storage because we're constantly duplicating partitions and rebuilding indexes and the current solution can't keep up with the IOPS demand? At some point, they're going to figure out that it's not free. While you don't need to just say no, you do need to be realistic and up-front about the cost associated. Additionally, your storage admin will thank you.
As for 2012, there is a new columnstore index which can reduce or replace all of the current nonclustered indexes you're using to cover all your analysts' search requests. It's highly compressed, covers a very wide variety of search arguments, and utilizes the new batch execution mode. It performs best on low-selectivity queries like the ones frequently performed on fact tables. The one catch is that you can't directly do updates. You'll have to switch the partition out to a staging table, drop the columnstore on the staging table, update the staging table, add the columnstore back, then switch the partition back into the fact table. It sounds like a lot, but it could be significantly faster and require less IO than maintaining all of those nonclustered indexes.
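A sketch of that 2012-era cycle, with placeholder names throughout (PendingUpdates stands in for wherever your changes come from):

    -- Move the affected partition into a staging table with the same structure.
    ALTER TABLE dbo.Fact SWITCH PARTITION 3 TO dbo.FactStage;

    -- The nonclustered columnstore makes the table read-only, so drop it first.
    DROP INDEX NCCI_FactStage ON dbo.FactStage;

    -- Apply the updates to the staging copy.
    UPDATE fs
    SET    fs.Amount = u.Amount
    FROM   dbo.FactStage AS fs
    JOIN   dbo.PendingUpdates AS u ON u.FactID = fs.FactID;

    -- Recreate the columnstore and switch the partition back in.
    CREATE NONCLUSTERED COLUMNSTORE INDEX NCCI_FactStage
        ON dbo.FactStage (DateKey, ProductKey, Amount);

    ALTER TABLE dbo.FactStage SWITCH TO dbo.Fact PARTITION 3;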
My question has always been "Is it really a fact table if it is constantly changing?". This is not OLTP is it? Try offsetting transactions or at least push all updates to a scheduled off-peak time. Updating fact tables is becoming a thing of the past. All of the big boys are moving toward the "Update frowned upon" column oriented architecture for data warehousing. PowerPivot and the Analysis Services Tabular Model are built on the columnstore technology.
Finally, review Kimball's DW Toolkit books. He has several that lay out best practices and cover edge-case scenarios. What I learned from them was that data warehouse development is not just database development on steroids. It also involves politics and focusing resources on what's best for the business.

Replication - synchronizing most of the data some of the time

I have some data that isn't properly "partitioned" (for lack of a better word).
All inserts, processing and reporting happen on the same table. The bulk of the processing happens not long after the insert and not long after that it becomes immutable (we're talking days).
I could do all inserts and processing on a new table that I replicate to the old table. When I detect that the data has become immutable I would delete the data from the new table, but I would edit the delete replication stored procedure so that the delete did not replicate.
How bad an idea is this? (Edit 1: that is, editing the replication stored procedure.)
It seems attractive at the moment (I haven't slept on it yet) because it might mitigate a performance problem with only very small changes to the application. It also seems like it might be a good way to shoot myself in the foot.
Edit1:
I like the idea of inserting into two tables because I can avoid the view and the maintenance window described in Jono's answer. No offense, Jono, I actually use this technique elsewhere.
I might want to use replication because one table might be in another database (I know, I didn't mention this) and that way I don't have to worry about committing to two tables, I just let replication handle that.
My actual concern (that I didn't make clear) is that editing the replication stored procedure could end up being a deployment/maintenance headache.
I wouldn't advocate replication to solve a performance issue (unless it's a problem of physical data distribution); if anything it's going to slow your system down as the changes are propagated to their destination. If you're using a single server, I'd suggest adding a second table with the same schema as the first, but with your indexes optimised for the kind of work you do in your processing phase. Then create a view that selects from both tables, and use that view in any query where you want the union of both tables. You could then throw more hardware at the second table (I'm thinking of a separate file group over more spindles) and then migrate the data on a weekly delay into the first table, during an available maintenance window.
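A minimal sketch of that layout, with illustrative names (a hot table tuned for inserts/processing, an archive table for the immutable history, and a view unioning the two):

    CREATE VIEW dbo.AllOrders
    AS
    SELECT OrderID, CustomerID, Amount, CreatedAt
    FROM   dbo.Orders_Hot        -- indexes tuned for the processing phase
    UNION ALL
    SELECT OrderID, CustomerID, Amount, CreatedAt
    FROM   dbo.Orders_Archive;   -- immutable rows migrated on a weekly delay

Readers query dbo.AllOrders and never need to know which physical table a row currently lives in.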

What are the advantages of using a single database for EACH client?

In a database-centric application that is designed for multiple clients, I've always thought it was "better" to use a single database for ALL clients - associating records with proper indexes and keys. In listening to the Stack Overflow podcast, I heard Joel mention that FogBugz uses one database per client (so if there were 1000 clients, there would be 1000 databases). What are the advantages of using this architecture?
I understand that for some projects, clients need direct access to all of their data - in such an application, it's obvious that each client needs their own database. However, for projects where a client does not need to access the database directly, are there any advantages to using one database per client? It seems that in terms of flexibility, it's much simpler to use a single database with a single copy of the tables. It's easier to add new features, it's easier to create reports, and it's just easier to manage.
I was pretty confident in the "one database for all clients" method until I heard Joel (an experienced developer) mention that his software uses a different approach -- and I'm a little confused with his decision...
I've heard people cite that databases slow down with a large number of records, but any relational database with some merit isn't going to have that problem - especially if proper indexes and keys are used.
Any input is greatly appreciated!
Assume there's no scaling penalty for storing all the clients in one database; for most people, and well configured databases/queries, this will be fairly true these days. If you're not one of these people, well, then the benefit of a single database is obvious.
In this situation, benefits come from the encapsulation of each client. From the code perspective, each client exists in isolation - there is no possible situation in which a database update might overwrite, corrupt, retrieve or alter data belonging to another client. This also simplifies the model, as you don't need to ever consider the fact that records might belong to another client.
You also get the benefit of separability - it's trivial to pull out the data associated with a given client and move it to a different server, or to restore a backup of that client when they call up to say "We've deleted some key data!", using the built-in database mechanisms.
You get easy and free server mobility - if you outscale one database server, you can just host new clients on another server. If they were all in one database, you'd need to either get beefier hardware, or run the database over multiple machines.
You get easy versioning - if one client wants to stay on software version 1.0, and another wants 2.0, where 1.0 and 2.0 use different database schemas, there's no problem - you can migrate one without having to pull them out of one database.
I can think of a few dozen more, I guess. But all in all, the key concept is "simplicity". The product manages one client, and thus one database. There is never any complexity from the "but the database also contains other clients" issue. It fits the mental model of the user, where they exist alone. Advantages like being able to do easy reporting on all clients at once are minimal - how often do you want a report on the whole world, rather than just one client?
Here's one approach that I've seen before:
Each customer has a unique connection string stored in a master customer database.
The database is designed so that everything is segmented by CustomerID, even if there is a single customer on a database.
Scripts are created to migrate all customer data to a new database if needed, and then only that customer's connection string needs to be updated to point to the new location.
This allows for using a single database at first, and then easily segmenting later on once you've got a large number of clients, or more commonly when you have a couple of customers that overuse the system.
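A minimal sketch of that master lookup (names are illustrative); because the application resolves the connection string per customer, moving a customer to another database is just an update to one row:

    CREATE TABLE dbo.Customer
    (
        CustomerID       int            NOT NULL PRIMARY KEY,
        CustomerName     nvarchar(200)  NOT NULL,
        ConnectionString nvarchar(1000) NOT NULL  -- wherever this customer's data lives
    );

    -- Every tenant table still carries CustomerID, so a customer can be
    -- copied out of a shared database with a simple filtered query.
    SELECT * FROM dbo.Orders WHERE CustomerID = 42;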
I've found that restoring specific customer data is really tough when all the data is in the same database, but managing upgrades is much simpler.
When using a single database per customer, you run into a huge problem of keeping all customers running at the same schema version, and that doesn't even consider backup jobs on a whole bunch of customer-specific databases. Naturally restoring data is easier, but if you make sure not to permanently delete records (just mark with a deleted flag or move to an archive table), then you have less need for database restore in the first place.
To keep it simple. You can be sure that your client is only seeing their data. The client with fewer records doesn't have to pay the penalty of competing with the hundreds of thousands of records that may be in the database but aren't theirs. I don't care how well everything is indexed and optimized; there will be queries that determine that they have to scan every record.
Well, what if one of your clients tells you to restore to an earlier version of their data due to some botched import job or similar? Imagine how your clients would feel if you told them "you can't do that, since your data is shared between all our clients" or "Sorry, but your changes were lost because client X demanded a restore of the database".
As for the pain of upgrading 1000 database servers at once, some fairly simple automation should take care of that. As long as each database maintains an identical schema, then it won't really be an issue. We also use the database per client approach, and it works well for us.
Here is an article on this exact topic (yes, it is MSDN, but it is a technology independent article): http://msdn.microsoft.com/en-us/library/aa479086.aspx.
Another discussion of multi-tenancy as it relates to your data model here: http://www.ayende.com/Blog/archive/2008/08/07/Multi-Tenancy--The-Physical-Data-Model.aspx
Scalability. Security. Our company uses the one-DB-per-customer approach as well. It also makes the code a bit easier to maintain.
In regulated industries such as health care it may be a requirement of one database per customer, possibly even a separate database server.
The simple answer to updating multiple databases when you upgrade is to do the upgrade as a transaction, and take a snapshot before upgrading if necessary. If you are running your operations well then you should be able to apply the upgrade to any number of databases.
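As a rough sketch of what that automation can look like (the database naming pattern and the upgrade step itself are placeholders), you can walk sys.databases and run the same script against each customer database inside its own transaction:

    DECLARE @db sysname, @sql nvarchar(max);

    DECLARE dbs CURSOR LOCAL FAST_FORWARD FOR
        SELECT name FROM sys.databases
        WHERE  name LIKE N'Customer[_]%';       -- per-customer databases

    OPEN dbs;
    FETCH NEXT FROM dbs INTO @db;
    WHILE @@FETCH_STATUS = 0
    BEGIN
        SET @sql = N'USE ' + QUOTENAME(@db) + N';
            BEGIN TRAN;
            ALTER TABLE dbo.Orders ADD Notes nvarchar(max) NULL;  -- the upgrade step
            COMMIT;';
        EXEC (@sql);
        FETCH NEXT FROM dbs INTO @db;
    END;
    CLOSE dbs;
    DEALLOCATE dbs;

Because every database shares an identical schema, the same script applies cleanly everywhere, and a failed upgrade rolls back only that one customer.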
Clustering is not really a solution to the problem of indices and full table scans. If you move to a cluster, very little changes. If you have many smaller databases to distribute over multiple machines, you can do this more cheaply without a cluster. Reliability and availability are considerations but can be dealt with in other ways (some people will still need a cluster, but the majority probably don't).
I'd be interested in hearing a little more context from you on this, because clustering is not a simple topic and is expensive to implement in the RDBMS world. There is a lot of talk/bravado about clustering in the non-relational world (Google Bigtable, etc.), but those systems are solving a different set of problems and lose some of the useful features of an RDBMS.
There are a couple of meanings of "database"
the hardware box
the running software (e.g. "the oracle")
the particular set of data files
the particular login or schema
It's likely Joel means one of the lower layers. In this case, it's just a matter of software configuration management... you don't have to patch 1000 software servers to fix a security bug, for example.
I think it's a good idea, so that a software bug doesn't leak information across clients. Imagine the case of an errant WHERE clause that showed me your customer data as well as my own.
