MS Access database with no relations

Can anyone recommend a tool, or suggest an approach, for dealing with an MS Access database that has no relationships between tables?
As part of a data migration project I am creating data mapping definition rules, but it is becoming more and more difficult and time-consuming to correctly identify the source tables/fields for extraction.
I have many tables with the same data appearing in different places. Furthermore, as there were no validation rules when the data was entered, many entries contain spelling errors or generally do not match the expected data type. Most of the tables, however, already have their keys (primary & foreign) created.
I am looking for a quick way to rebuild the database (*.mdb), ideally using some software which could identify all potential data issues, suggest corrections, allow for adjustments, and finally leave me with a fully relational database where the data can easily be identified and is not scattered all over the place.
I have some general knowledge of databases and SQL but haven't used Access much before, so I'm trying to save myself some time. And - if it matters - I don't care about database performance at all... only the data itself. I will be extracting it to *.csv files later anyway...
Comments, suggestions and/or other considerations will be appreciated.
Thanks in advance
J.

I don't believe there is any software that will analyze an Access database and use some kind of artificial intelligence to generate a new database with good data and strong relationships.
My recommendation, though, is to export all the data into SQL Server (or even MySQL) and then work with it there. It's much easier to manipulate the data with a real query language than to try to scrub data in Access.
You can do mass updates, comparisons, joins, etc. with SQL Server. You can query the schema easily (write queries to see whether a field appears in a table), change schemas/table definitions with code, and so on.
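For example, once the data is in SQL Server you can find every table containing a given field by querying the catalog views. A minimal sketch (the column name 'CustomerName' is just a placeholder):

-- Find every table that has a column with a given name
SELECT TABLE_SCHEMA, TABLE_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'CustomerName'
ORDER BY TABLE_SCHEMA, TABLE_NAME;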
Then once you're done you can use jobs (SSIS) to export the data to CSV.
(You can download SQL Express if you don't have/can't afford SQL Server.)

Related

Most Efficient Way to Migrate Un-Normalized Data in an Access Database to a Normalized Form in a SQL Server Database

I've been doing some research on this topic for a while now and can't seem to find a similar instance to my issue. I will try and explain everything as best I can, as simply as I can.
The problem is in the title: I am trying to migrate data from an Access database to SQL Server. Typically this isn't really a hard problem, as there are several import/export tools within SQL Server, but I am looking for the best solution - that, or some advice/tips, as I am somewhat new to database migration. I will now explain my situation.
So I am currently working on migrating data that exists in an Access “database” (database in quotes because I don’t think it is actually a database, you’ll know why in a minute) in an un-normalized form. What I mean by un-normalized is that all of the data is in one table. This table has about 150+ columns and the rows number in the thousands. Yikes, I know; this is what I’ve walked into lol. Anyways, sitting down and sorting through everything, I’ve designed relationships for the data that normalize it nicely in its new home, SQL Server. Enter my predicament (or at least part of it). I have the normalized database set up to hold the data but I’m not sure how to import it, massage/cut it up, and place it in the respective tables I’ve set up.
Thus far I've done a bunch of research into what can be done, and for starters I found out about the SQL Server Migration Assistant. I've begun messing with it and was able to import the data from Access into SQL Server, but not in the way I wanted. All I got was a straight copy & paste of the data into my SQL Server database, exactly as it was in the Access database. I then learned about the typical practice of setting up a global table/staging area for this type of migration, but I am somewhat of a novice when it comes to using T-SQL. The heart of my question comes down to this: is there some feature in SQL Server (either its import/export tool or the SSMA) that will allow me to send the data to the right tables that already exist in my normalized SQL Server database? Or do I import to the staging area and write the script(s) to dissect and extract the data into the respective normalized tables? If it is the latter, can someone please show me some tips/examples of what the T-SQL would look like to do this sort of thing? Obviously I couldn't expect exact scripts from anyone without sharing the data (which I don't have the liberty to do, as it is customer data), so some cookie-cutter examples will work.
Additionally, future data is going to come into the new database from various sources (like Excel, for example), so that is something to keep in mind. I would hate to create a new issue where every time someone wants to add data to the database, a new import, sort, and store script has to be written.
Hopefully this hasn't been too convoluted and someone will be willing (and able) to help me out. I would greatly appreciate any advice/tips. I believe this would help other people besides me, because I found a lot of other people searching for similar things. Additionally, it may lead to T-SQL experts showing examples of such data migration scripts and/or explaining how to use the existing tools in ways others haven't tried, or covering functions/capabilities not adequately explained in the documentation.
Thank you,
L
First this:
Additionally, future data is going to come into the new database from
various sources (like maybe excel for example)...?
That's what SSIS is for. Setting up SSIS is not a trivial task, but it's not rocket science either. SQL Server Management Studio has an Import/Export Wizard which is an easy-to-use SSIS package creator; that will get you started. There are many alternatives, such as PowerShell, but SSIS is the quickest and easiest solution IMO, especially when dealing with data from multiple sources.
SSIS works nicely with Microsoft products as data sources (such as Excel and SharePoint).
For some things, too, you can create an MS Access front-end that interfaces with SQL Server via stored procedures. It just depends on the target audience. This is easy to set up; a quick Google search will return many simple examples. It's actually how I learned SQL Server 20+ years ago.
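As a rough illustration (not a recommendation of a specific design, and the object names are made up), the SQL Server side of such a front-end is often just a set of simple stored procedures that the Access forms call:

-- Hypothetical procedure an Access front-end could call via a pass-through query or ADO
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate, TotalAmount
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
END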
Is there some feature in SQL Server that will allow me to send the
data to the right tables that already exist in my normalized SQL
Server database?
Yes and don't. For what you're describing it will be frustrating.
Or do I import to the staging area and write the script(s) to dissect
and extract the data to the respective normalized table?
This.
If it is the latter, can someone please show me some tips/examples of
what the TSQL would look like to do this sort of thing.
When dealing with denormalized data a good splitter is important. Here are my two favorites:
DelimitedSplit8K
PatternSplitCM
In SQL Server 2016 you also have STRING_SPLIT, which is faster (but has its own issues); there's a quick usage sketch below.
Another must-have is a good NGrams function. The link I posted has the function attached at the bottom of the article. I have some string-cleaning functions here.
The links I posted have some good examples.
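To give a feel for how a splitter gets used, here is a minimal sketch with STRING_SPLIT (the staging table and column names are made up; DelimitedSplit8K is used the same way but also returns an item number):

-- Break a delimited column in the staging table into one row per value (SQL Server 2016+)
SELECT s.StagingID, LTRIM(RTRIM(v.value)) AS Item
FROM SourceStagingTable AS s
CROSS APPLY STRING_SPLIT(s.DelimitedColumn, ';') AS v;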
I agree with all the approaches mentioned: Load the data into one staging table (possibly using SSIS) then shred it with T-SQL (probably wrapped up in stored procedures).
This is a custom piece of work that needs hand-built scripts. There's no automated tool for this, because both your source and target schemas are custom schemas, so you'd need to define all that mapping and all those rules somehow... and no, SSIS does not magically do this!
It sounds like you have a target schema and the mappings between source and target schema already worked out.
As an example your first step is to load 'lookup' tables with this kind of query:
INSERT INTO TargetLookupTable1 (Field1,Field2,Field3)
SELECT DISTINCT Field1,Field2,Field3
FROM SourceStagingTable
TargetLookupTable1 should already have an identity primary key defined (it is not mentioned in the above query because it is auto-generated).
This is where you will find your first problem. You'll almost certainly find that your DISTINCT query just gives you a whole lot of duplicated, misspelt, rubbish data. So before you even load your lookup table you need to do some data cleansing.
I suggest you clean the data directly in your source system, but it depends how comfortable you are with that.
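Whichever side you clean it on, a typical first pass looks something like this (Field1 follows the example above; the specific corrections are made-up illustrations, since the real rules have to come from your own data):

-- Trim whitespace and normalise case before loading the lookup
UPDATE SourceStagingTable
SET Field1 = UPPER(LTRIM(RTRIM(Field1)));

-- Then fix known misspellings one rule at a time
UPDATE SourceStagingTable
SET Field1 = 'LONDON'
WHERE Field1 IN ('LONDN', 'LODON');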
The next step, assuming your data is all clean and you've loaded a dozen lookup tables in this way:
Now you need to load the transactions, but you don't know the lookup keys that you just generated!
The trick is to pre-include an empty column for this in your staging table to record it.
Once you've loaded your lookup table you can write the key back into the staging table. This query matches back on the fields you used to load the lookup and writes the key back into the staging table:
UPDATE TGT
SET MyNewLookupKey = NewLookupTable.MyKey
FROM SourceStagingTable TGT
INNER JOIN
NewLookupTable
ON TGT.Field1 = NewLookupTable.Field1
AND TGT.Field2 = NewLookupTable.Field2
AND TGT.Field3 = NewLookupTable.Field3
Now you have a column called MyNewLookupKey in your staging table which holds the correct lookup key to load into your transaction table.
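The transaction load itself is then a straight INSERT...SELECT from staging (TargetTransactionTable and the non-key columns here are hypothetical names, following the pattern of the earlier examples):

INSERT INTO TargetTransactionTable (LookupKey, TransactionDate, Amount)
SELECT MyNewLookupKey, TransactionDate, Amount
FROM SourceStagingTable;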
Ongoing uploads of data are a separate issue, but you might want to investigate an MS Access Data Project (although they are apparently being phased out, they are very handy as a front end to SQL Server).
The thing to remember is: if there is anything ambiguous about your data, for example, "these rows say my car is black but these rows say my car is white", then you (a human) need to come up with a rule for "disambiguating" it. It can't be done automatically.
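Once you have decided the rule, though, it is usually easy to express in T-SQL. For instance, if the rule were "take the colour from the most recently updated row" (column names are hypothetical), a sketch would be:

-- Keep one colour per car, preferring the most recently updated row
SELECT CarID, Colour
FROM (
    SELECT CarID, Colour,
           ROW_NUMBER() OVER (PARTITION BY CarID ORDER BY LastUpdated DESC) AS rn
    FROM SourceStagingTable
) AS ranked
WHERE rn = 1;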
So there are quite a number of ways to skin this cat. I don't know much about the "Migration Assistant", but I somehow doubt it's going to make your life easier given what you're trying to do.
I'd just dump the whole denormalized mess into a single big staging table then shred it where you need it using SQL. I know you asked for help with the TSQL, but without having some idea of what the denormalized data is and how you want to re-shape it, all I can do really is suggest you read up on SQL in general (select, from, where, group by, etc).
You could also do the work in SSIS, but ultimately the solution you use is largely going to depend on the nature of how you need to normalize the big denormalized data set. IMHO doing this in SQL is usually the easiest way, but then again when you're a hammer, everything looks like a nail.
As far as future-proofing the process, how you import the Access data probably has little bearing on how you'd import Excel data. If you have a lot of different data sources which you'll need to incorporate on a recurring basis, SSIS might be a good choice to invest some time and effort into for the long run. No matter what, incorporating data from a new data source takes time and effort, and you'll have to do some extra work regardless. I would weigh how frequently you think you'll have to integrate a given data source against how much effort is involved in massaging it into the format you want.
I have a completely different opinion, because I do both database development and Microsoft Power BI. On the PBI side we come across a lot of non-normalized data, because much of it comes in from Excel.
My guess is that what is now in Access was an import of something that originally began in Excel.
Excel Power Query and PBI offer transforms to pivot and unpivot a layout. I would use those tools for that task, then import the results into SQL.
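If you would rather reshape after loading into SQL Server instead, T-SQL's UNPIVOT does the same job as Power Query's unpivot; a minimal sketch with made-up month columns:

-- Turn one-column-per-month into one-row-per-month
SELECT CustomerID, SalesMonth, SalesAmount
FROM WideStagingTable
UNPIVOT (SalesAmount FOR SalesMonth IN (Jan, Feb, Mar)) AS u;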

How to divide the database up into smaller databases

I have a big SQL Server 2012 database.
I want to split it into preferences and data.
However I find that SQL Server does not seem to support the idea of dividing your data up into object oriented databases. It seems to rely on everything being in the same database.
For example foreign keys are not supported in database. Also cross database joins are a real pain to do.
How would someone typically go about doing this? Is it just a limitation of SQL Server that I should use the same DB for everything?
SQL Server provides a partitioning feature. As per Wikipedia:
A partition is a division of a logical database or its constituting elements into distinct independent parts. Database partitioning is normally done for manageability, performance or availability reasons.
1. Horizontal partitioning
2. Vertical partitioning
Each has its own filegroup, and this can be configured.
Visit these links; they should help:
MSDN
SQLAuthority
I am sure there are plenty of tutorials out there.
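For horizontal (table) partitioning specifically, the moving parts in SQL Server are a partition function, a partition scheme, and a table created on that scheme. A minimal sketch (boundary dates and object names are made up, and everything is mapped to PRIMARY for simplicity):

-- Split rows into ranges by date
CREATE PARTITION FUNCTION pfByYear (date)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2013-01-01');

-- Map every range to a filegroup (all to PRIMARY here)
CREATE PARTITION SCHEME psByYear
    AS PARTITION pfByYear ALL TO ([PRIMARY]);

-- Create the table on the partition scheme
CREATE TABLE dbo.Orders
(
    OrderID   int  NOT NULL,
    OrderDate date NOT NULL
) ON psByYear (OrderDate);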
SQL Server is a relational database, so there really shouldn't be an expectation that it would support a fundamentally different architecture implied by an object or object-oriented database.
I don't understand your comment that "foreign keys are not supported in database." Foreign keys are all part of the integrity constraints in SQL Server, and a detailed description of how to create them is available here.
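Within a single database they are straightforward to declare (table and column names here are hypothetical):

-- Foreign key from Orders to Customers in the same database
ALTER TABLE dbo.Orders
    ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES dbo.Customers (CustomerID);

Note that SQL Server does not let a foreign key constraint reference a table in a different database, which may be what the question was getting at; cross-database integrity has to be enforced with triggers or application logic instead.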
I think you might want to be more specific about the type of data you're trying to split up, and why you want to put them in physically separate databases. A refinement of your problem might help us provide better answers.

Data migration between different DBMSs

As I couldn't get any satisfying answer to my question, it seems we have to write our own program for this. We are in the design phase, and we are thinking about which format we should use to back up the data.
The program will be written in Delphi.
What is needed is exporting/importing data between Oracle/Informix/MS SQL Server; performance is a very important issue here, as this program will run against 1-2 GB databases. Besides the normal data there are BLOBs in the database which have to be backed up.
We thought of XML data or comma-separated data, as both are transparent (which is nice to have), but BLOBs must be considered here. The Paradox format is not optimal in this case.
Can anybody recommend some performant formats?
Any other ideas to achieve the same goal are welcome.
Thanks in advance.
I use an excellent program called OmegaSync for my backups, but it will only handle Informix via ODBC and not directly. If you find you can use OmegaSync, you'll find its performance to be excellent, because it compares the databases first and then syncs only the differences. You might want to borrow this idea if you decide to do the programming yourself and efficiency is your number one goal.
But programming database conversion is very complex, as other answers to your question have said. So why not just develop the SQL you need and do the conversion that way? For example see: Convert Informix Schema to Oracle Schema Or Any Other RDBMS. For moving the data, check out sources like: Moving non-Informix data between computers and dbspaces.
You can optimize the SQL to what I'm sure will be an adequate speed if you dump and load your data smartly.
DbUnit is a popular tool which can extract and load data in XML format, see
http://www.dbunit.org/faq.html#extract
// 'connection' is an org.dbunit.database.IDatabaseConnection wrapping a live JDBC connection
// partial database export
QueryDataSet partialDataSet = new QueryDataSet(connection);
partialDataSet.addTable("FOO", "SELECT * FROM TABLE WHERE COL='VALUE'");
partialDataSet.addTable("BAR");
FlatXmlDataSet.write(partialDataSet, new FileOutputStream("partial.xml"));
// full database export
IDataSet fullDataSet = connection.createDataSet();
FlatXmlDataSet.write(fullDataSet, new FileOutputStream("full.xml"));
Did you check ODI (Oracle Data Integrator)? It has support for lots of source databases. It is able to capture changes from the source databases and integrate them into the target database. It is performant, but has a price tag.
Ronald.
The new dbExpress framework gives you the possibility of exporting/importing data between many databases. You can check out the CodeRage session Deep Dive into dbExpress by John Kaster.
You should use your own binary format, integrated with XML for text and streams for BLOBs.
If you have to export metadata too, and not only data, it could become very complex. There are many subtle (and not so subtle) differences among the databases you're going to use, so such a format would have to be general enough, and the exporting/importing code would have to be able to translate and map metadata across databases; and because an external application can't write directly to a database's internal structures, it would have to generate the proper DDL to create the data structures.
As long as this is a proprietary format, IMHO its design is the least of your issues; if size and performance are important and the file is read sequentially, it would not be difficult to design a binary format.
Anyway, import/export and backups are two different tasks. If you have to back up a database, use its own facilities; they usually allow far more control, e.g. point-in-time recovery. If you have to move data across databases, that's another issue: I would write just the code to move the data, not the metadata, pre-creating the required structures in the target database.
You could give Toad (Quest Software) a try.
It supports all the platforms you mentioned and can do things like 'Export table data to INSERT statements' on your source platform, which can then be run on the target platform.
IIRC there is even a Toad-internal backup format which might be cross-platform.
Toad Communities:
Toad for ORACLE
Toad for SQL SERVER
Toad for OTHER RDBMS (including Informix)
Some videos about exporting, importing:
YouTube: Toad for Data Analysts v2.7 Export Enhancements
YouTube: Toad for Data Analysts v2.7 Import Enhancements

Database Backup

Scenario
I want to take a backup from 7 client databases to 1 server database.
I don't know the structure of the DBs (either the server or the client DBs).
Both databases already contain old data. Now I have to build a tool that takes the backup for that.
It should also be possible to back up the old data (if any updates have been done on the old data).
Please help me find a solution for this.
1. How can I proceed with the problem?
2. The database is not specified; it may be MS Access or SQL Server 2005.
3. What should I implement this in? (I am thinking of doing it in C#.)
Please help me find a solution.
I'm not sure why you would want to go about it this way - if you are merely trying to copy the client databases (which I interpret as being "file based") then why not simply take copies of their files as part of the wider backup strategy?
If you want to write the backup code to place all the data in a server-based RDBMS, then you are also going to have to think about how you restore that information later on - which presumably means even more coding for you.
So - I don't think this is a good idea, but if you are determined, I would start off by writing a class (which will be almost abstract) dedicated to the purpose of reading the structure of the client database (tables, fields, views etc). I'd then inherit from that to get a specific class for doing this for each individual type of client DB. Once you have that, you can use ADO.Net to read values from the tables in the Client DB, populate datatables with the information, and then write that information back out to the Server based DB.
I really can't stress enough though that I don't like this idea - it seems overly complicated, and also won't deal with functions etc.
Good luck anyway,
Martin
Advisability of doing this aside, one simple answer for a particular subset of the problem would be to create a DSN for a target SQL Server (or any server database) and in Access export table by table to the DSN. You can do this through the Access UI and it can be automated within Access with DoCmd.TransferDatabase. It can be a little fiddly figuring out the proper connect string, and you'd also need to do something about renaming the exported tables so there are no collisions between databases, but that can be handled quite easily in a bit of VBA code.
I post this only because many people overlook the Access capability to export to an ODBC DSN, which requires no writing of DDL and so forth. It may or may not make correct choices about target data types, though, so you'd have to see in any particular situation if it's good enough or not.

Are heterogeneous database systems used in practice?

I was probing around a bit in the realm of databases and hit upon the notion of heterogeneous databases. I googled and found this - link text
My question is: what kind of scenario would put this into practice, and is it really useful? Is it just another thing which was thought about but never implemented, or, where it was implemented, did it end up restricted to a very niche area?
cheers
I would say yes, very much so. One implementation I am familiar with is integrating MAS90 with an LOB production system. The data is duplicated in both but accessed and used in different ways.
I've worked on a heterogeneous system before. It's a commercial system to manage study-abroad programs for large universities, and they had installations on Oracle, MySQL, and SQL Server. I was an outside consultant handling a very specific conversion project, though, so I didn't get to see many of the issues involved in making it work well everywhere.
I do remember that the single biggest hurdle I had to deal with was Oracle's lack of a simple autoincrement-style column and having to set up separate sequences instead. There were a number of datatype mismatches as well, but there was a pretty good system in place to just map those.
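For context, the Oracle-side workaround looks roughly like this (a sketch only, not the actual schema from that system):

-- Oracle: emulate an autoincrement column with a sequence
CREATE SEQUENCE student_id_seq START WITH 1 INCREMENT BY 1;

INSERT INTO students (student_id, family_name)
VALUES (student_id_seq.NEXTVAL, 'Smith');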
Note that even here, each customer only had one kind of database. We didn't have to worry about replicating data itself between db types (aside from a few common lookup tables). Just structure.
Different departments in your company might use different databases. I pull data in from, and push data to, the following:
SQL Server
Oracle
Sybase IQ
Access
MySQL
FoxPro
Flat files
Excel files
The SQL Server database is the repository of all the data, but it pulls from many different databases to populate the data, and then data is pushed back out to different databases for departmental use.
