Sample Employee SQL Server database with data - sql-server

Hi, I need a sample SQL Server Employee database with data such as id, surname, name, age, address, etc. It must be quite big. I searched with Google but couldn't find any good sample.
Can anybody help?

The only tool I can think of is Red Gate Data Generator.
Otherwise, you'd be looking at someone's actual data or expecting someone to provide such a tool free of charge.

You may find it a challenge to find such a database; privacy concerns prevent most publication of personal data such as you require. You may find one using pseudo-data, that is, data that looks like what you want but is not about real people. But you will probably find it easier to generate your own pseudo-data. If you take this approach, you can be sure that the data you generate meets your requirements too.
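For example, here is a minimal T-SQL sketch of that approach; the table, name lists, and value ranges are all invented for illustration (CHOOSE requires SQL Server 2012+):

-- Hypothetical example: generate 10,000 fake employee rows.
CREATE TABLE dbo.Employee (
    id      INT IDENTITY(1,1) PRIMARY KEY,
    surname NVARCHAR(50),
    name    NVARCHAR(50),
    age     INT,
    address NVARCHAR(100)
);

INSERT INTO dbo.Employee (surname, name, age, address)
SELECT TOP (10000)
    CHOOSE(ABS(CHECKSUM(NEWID())) % 5 + 1, N'Smith', N'Jones', N'Brown', N'Taylor', N'Wilson'),
    CHOOSE(ABS(CHECKSUM(NEWID())) % 4 + 1, N'Anna', N'Ben', N'Carla', N'David'),
    18 + ABS(CHECKSUM(NEWID())) % 50,                         -- ages 18-67
    CONCAT(ABS(CHECKSUM(NEWID())) % 999 + 1, N' Main Street')
FROM sys.all_objects a CROSS JOIN sys.all_objects b;          -- cheap row source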

If you have access to Microsoft Visual Studio for Database Professionals, it has a built-in data generator which you can use.
Also, the AdventureWorks database has an Employee table, I think.

This might not be Employee-based, but it is definitely worth a look:
AdventureWorks

The Northwind database contains some sample employee data in a .mdf file which can be attached to SQL Server.
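A sketch of attaching it; the file name and path below are placeholders for wherever your copy of the data file lives (FOR ATTACH_REBUILD_LOG recreates the log file if the .ldf is missing):

-- Attach a copy of the Northwind data file; the path is a placeholder.
CREATE DATABASE Northwind
ON (FILENAME = N'C:\Data\northwnd.mdf')
FOR ATTACH_REBUILD_LOG;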

The old Northwind database has an Employees table, but it's quite small.

Related

Most Efficient Way to Migrate Un-Normalized Data in an Access Database to a Normalized Form in a SQL Server Database

I've been doing some research on this topic for a while now and can't seem to find a similar instance to my issue. I will try and explain everything as best I can, as simply as I can.
The problem is in the title; I am trying to migrate data from an Access database to SQL Server. Typically this isn't really a hard problem, as there exist several import/export tools within SQL Server, but I am looking for the best solution. That, or some advice/tips, as I am somewhat new to database migration. I will now begin to explain my situation.
So I am currently working on migrating data that exists in an Access “database” (database in quotes because I don’t think it is actually a database, you’ll know why in a minute) in an un-normalized form. What I mean by un-normalized is that all of the data is in one table. This table has about 150+ columns and the rows number in the thousands. Yikes, I know; this is what I’ve walked into lol. Anyways, sitting down and sorting through everything, I’ve designed relationships for the data that normalize it nicely in its new home, SQL Server. Enter my predicament (or at least part of it). I have the normalized database set up to hold the data but I’m not sure how to import it, massage/cut it up, and place it in the respective tables I’ve set up.
Thus far I've done a bunch of research into what can be done, and for starters I found out about the SQL Server Migration Assistant. I've begun messing with it and was able to import the data from Access into SQL Server, but not in the way I wanted. All I got was a straight copy and paste of the data into my SQL Server database, exactly as it was in the Access database. I then learned about the typical practice of setting up a global table/staging area for this type of migration, but I am somewhat of a novice when it comes to using T-SQL. The heart of my question comes down to this: Is there some feature in SQL Server (either its import/export tool or the SSMA) that will allow me to send the data to the right tables that already exist in my normalized SQL Server database? Or do I import to the staging area and write the script(s) to dissect and extract the data to the respective normalized tables? If it is the latter, can someone please show me some tips/examples of what the T-SQL would look like to do this sort of thing? Obviously I couldn't expect exact scripts from anyone without sharing the data (which I don't have the liberty of, as it is customer data), so some cookie-cutter examples will work.
Additionally, future data is going to come into the new database from various sources (like Excel, for example), so that is something to keep in mind. I would hate to create a new issue where every time someone wants to add data to the database, a new import, sort, and store script has to be written.
Hopefully this hasn't been too convoluted and someone will be willing (and able) to help me out. I would greatly appreciate any advice/tips. I believe this would help other people besides me, because I found a lot of other people searching for similar things. Additionally, it may lead to T-SQL experts sharing examples of such data migration scripts, and/or explanations of how to use the existing tools in ways others haven't used before, or of functions/capabilities not adequately explained in the documentation.
Thank you,
L
First this:
Additionally, future data is going to come into the new database from
various sources (like maybe excel for example)...?
That's what SSIS is for. Setting up SSIS is not a trivial task, but it's not rocket science either. SQL Server Management Studio has an Import/Export Wizard, which is an easy-to-use SSIS package creator. That will get you started. There are many alternatives, such as PowerShell, but SSIS is the quickest and easiest solution IMO, especially when dealing with data from multiple sources.
SSIS works nicely with Microsoft products as data sources (such as Excel and SharePoint).
For some things, too, you can create an MS Access front end that interfaces with SQL Server via SQL Server stored procedures. It just depends on the target audience. This is easy to set up; a quick Google search will return many simple examples. It's actually how I learned SQL Server 20+ years ago.
Is there some feature in SQL Server that will allow me to send the
data to the right tables that already exist in my normalized SQL
Server database?
Yes, and don't. For what you're describing, it will be frustrating.
Or do I import to the staging area and write the script(s) to dissect
and extract the data to the respective normalized table?
This.
If it is the latter, can someone please show me some tips/examples of
what the TSQL would look like to do this sort of thing.
When dealing with denormalized data, a good splitter is important. Here are my two favorites:
DelimitedSplit8K
PatternSplitCM
In SQL Server 2016 you also have STRING_SPLIT, which is faster (but has issues).
Another must-have is a good NGrams function. The link I posted has the function attached at the bottom of the article. I have some string cleaning functions here.
The links I posted have some good examples.
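For reference, here is what basic STRING_SPLIT usage looks like (SQL Server 2016+; one caveat is that it does not guarantee the order of the returned rows):

-- Split a delimited list into rows (SQL Server 2016+).
SELECT value
FROM STRING_SPLIT(N'red,green,blue', N',');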
I agree with all the approaches mentioned: Load the data into one staging table (possibly using SSIS) then shred it with T-SQL (probably wrapped up in stored procedures).
This is a custom piece of work that needs hand-built scripts. There's no automated tool for this, because both your source and target schemas are custom schemas, so you'd need to define all that mapping and rules somehow... and no, SSIS does not magically do this!
It sounds like you have a target schema and mappings between the source and target schema already worked out.
As an example, your first step is to load 'lookup' tables with this kind of query:
INSERT INTO TargetLookupTable1 (Field1,Field2,Field3)
SELECT DISTINCT Field1,Field2,Field3
FROM SourceStagingTable
TargetLookupTable1 should already have an identity primary key defined (it is not mentioned in the above query because it is auto-generated).
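In other words, something like this; the column types are placeholders:

-- The surrogate key is generated automatically on insert.
CREATE TABLE TargetLookupTable1 (
    MyKey  INT IDENTITY(1,1) PRIMARY KEY,
    Field1 NVARCHAR(100),
    Field2 NVARCHAR(100),
    Field3 NVARCHAR(100)
);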
This is where you will find your first problem. You'll almost certainly find that your DISTINCT query just gives you a whole lot of duplicated, misspelt, rubbish data. So before you even load your lookup table, you need to do data cleansing.
I suggest you clean the data in your source system directly but it depends how comfortable you are with that.
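Even a basic cleansing pass in the staging table helps before the DISTINCT, e.g. trimming whitespace and normalising case. A sketch; the real cleansing rules depend entirely on your data:

-- Minimal cleanup in the staging table before loading lookups.
UPDATE SourceStagingTable
SET Field1 = UPPER(LTRIM(RTRIM(Field1))),
    Field2 = LTRIM(RTRIM(Field2)),
    Field3 = LTRIM(RTRIM(Field3));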
Next step: assuming your data is all clean and you've loaded a dozen lookup tables in this way...
Now you need to load transactions, but you don't know the lookup keys you just generated!
The trick is to pre-include an empty column in your staging table to record them.
Once you've loaded your lookup table, you can write the key back into the staging table. This query matches back on the fields you used to load the lookup and writes the key back into the staging table:
UPDATE TGT
SET MyNewLookupKey = NewLookupTable.MyKey
FROM SourceStagingTable AS TGT
INNER JOIN NewLookupTable
    ON  TGT.Field1 = NewLookupTable.Field1
    AND TGT.Field2 = NewLookupTable.Field2
    AND TGT.Field3 = NewLookupTable.Field3;
Now you have a column called MyNewLookupKey in your staging table which holds the correct lookup key to load into your transaction table.
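The final load is then straightforward; TransactionTable and its columns below are hypothetical names standing in for your real schema:

-- Load the transaction table using the key written back to staging.
INSERT INTO TransactionTable (LookupKey, SomeMeasure)
SELECT MyNewLookupKey, SomeMeasureColumn
FROM SourceStagingTable;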
Ongoing uploads of data are a separate issue, but you might want to investigate an MS Access Data Project (although they are apparently being phased out, they are very handy as a front end into SQL Server).
The thing to remember is: if there is anything ambiguous about your data, for example, "these rows say my car is black but these rows say my car is white", then you (a human) need to come up with a rule for "disambiguating" it. It can't be done automatically.
So there are quite a number of ways to skin this cat. I don't know much about the "Migration Assistant", but I somehow doubt it's going to make your life easier given what you're trying to do.
I'd just dump the whole denormalized mess into a single big staging table then shred it where you need it using SQL. I know you asked for help with the TSQL, but without having some idea of what the denormalized data is and how you want to re-shape it, all I can do really is suggest you read up on SQL in general (select, from, where, group by, etc).
You could also do the work in SSIS, but ultimately the solution you use is largely going to depend on the nature of how you need to normalize the big denormalized data set. IMHO doing this in SQL is usually the easiest way, but then again when you're a hammer, everything looks like a nail.
As far as future-proofing the process, how you import the Access data will probably have little bearing on how you'd import Excel data. If you have a whole lot of different data sources which you'll need to incorporate on a recurring basis, SSIS might be a good choice to invest some time and effort into for the long run. No matter what, incorporating data from a new data source takes time and effort; you'll have to do some extra work either way. I would weigh how frequently you think you'll have to integrate a given data source against how much effort is involved to massage it into the format you want.
I have a completely different opinion, because I do both database development and Microsoft's Power BI. On the PBI side we come across a lot of non-normalized data, because a lot of it comes in from Excel.
My guess is that what is now in Access was an import of something that originally began in Excel.
Excel Power Query and PBI offer transforms to pivot and unpivot a layout. I would use those tools for that task, then import the results into SQL.

Migrating data from Paradox to SQL Server

The company I work for has a very old system that uses Paradox.
I need to rewrite this system and I need to migrate the data from Paradox to SQL Server 2008.
How do I do this?
I found that it was possible to log into the Paradox database, copy and paste the data into Excel and then import the data into SQL Server in the normal way.
I had to do exactly the same thing for a company I am working for.
There are a few options:
Export the data in CSV files through Paradox.
Or, if you're really lazy like me, get the free trial of the ESF database migration tool; you can select the entire schema or database you want to convert and convert everything all at once.
I was a little skeptical at first, so I played with the free trial before buying it, but I must say it was worth buying.
Hope this helps; this had me annoyed for a few months.

Fake people database for a creditworthiness definition app

I have a task to create a decision tree for determining a person's creditworthiness based on their information. I need a database containing about 1000 persons with their personal information; of course, it can be fake. Where can I get such a database? Thanks in advance.
Redgate's SQL Data Generator 1.2
This article might be of interest if you do not want to buy a tool:
Generate test data using SQL
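In the same spirit as that article, here is a small T-SQL sketch that fabricates 1000 rows; all column names and value ranges are invented for illustration:

-- Fabricate 1000 fake applicants with a few credit-relevant attributes.
SELECT TOP (1000)
    ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS person_id,
    18 + ABS(CHECKSUM(NEWID())) % 50           AS age,             -- 18-67
    1000 + ABS(CHECKSUM(NEWID())) % 9000       AS monthly_income,
    ABS(CHECKSUM(NEWID())) % 2                 AS has_defaulted    -- 0 or 1
INTO dbo.FakeApplicants
FROM sys.all_objects a CROSS JOIN sys.all_objects b;               -- cheap row source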
If you have access to the Database Edition of Visual Studio 2008:
How Do I: Generate Test Data using Visual Studio Team System Database Edition?
VS2010 : Generating Test Data for Databases by Using Data Generators
I know it might be late, but here's the German credit scoring dataset. It contains 1000 observations, and the data is real.
http://www.stat.uni-muenchen.de/service/datenarchiv/kredit/kredit_e.html
Try the Fake Name Generator.
You can order bulk data for free from http://www.fakenamegenerator.com/order.php

600 tables in DDBB

I'm a very young software architect. I'm now working in a very large bank, and I have to lead a group of developers in rewriting the bank's entire mortgage system.
Looking at the database tables, I realize there is no data model and no documentation. The worst part is that there are about 1000 tables in the dev environment and around 600 in production. I trust the production environment more, but anyway, what can I do? I mean, I could shoot myself or something, but is there any good reverse engineering tool, so that at least I could get the schema definition with the relations between tables and the comments extracted from the fields? Can you advise me something?
Thanks in advance.
If you are lucky and the database actually uses primary and foreign keys, you can get some excellent documentation with SchemaSpy, a nice command line tool written in Java.
Update: I've just remembered that Oracle SQL Developer has a similar tool (create a connection, right click on its icon and choose "Generate DB Doc") though it doesn't draw graphs.
Try connecting with a tool called TOAD. It has been some years since I used it, but IIRC you could select the appropriate schema that you want to inspect and it would give you a tree view with all the tables and views, and you can expand the table nodes to see the column details. No doubt the tool has moved on considerably since I last used it.
You can extract the comments like so
select * from dba_tab_comments
where owner not in ('SYS', 'SYSTEM')
and
select * from dba_col_comments
where owner not in ('SYS', 'SYSTEM')
As for reverse engineering: if you're going to draw an ERD with 600+ tables, it is (probably) going to be too large anyway. I'd first try to find "clusters" of related tables and then use a specialized tool to draw these clusters.
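One way to find those clusters is to pull the parent/child table pairs out of the data dictionary and group the output; a sketch:

-- Parent/child table pairs derived from the foreign key constraints.
select c.owner, c.table_name as child_table, p.table_name as parent_table
from dba_constraints c
join dba_constraints p
  on c.r_owner = p.owner
 and c.r_constraint_name = p.constraint_name
where c.constraint_type = 'R'
  and c.owner not in ('SYS', 'SYSTEM');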
Obviously, you want to make sure that the entire schema has foreign keys enforced. You might want to look at
select * from dba_constraints
where constraint_type = 'R' and
owner not in ('SYS', 'SYSTEM')
to see if all the foreign keys are actually in place.
Unfortunately this is Oracle. For MySQL there is a pretty nice tool called MySQL Workbench.

How can I frequently/automatically compare the structure of the same Database?

Does anyone know a tool that automatically compares the structure of a database and gives a report of what was changed? If it can be integrated into a CruiseControl, even better.
By structure I also mean functions and stored procedures, which is probably not clear at first sight ;-)
The idea was to integrate that into a cruise control server
SQL Compare by Redgate is purportedly good at this.
Failing that, you can get a good idea of a schema compare by comparing the results of queries such as:
select * from information_schema.columns
select * from information_schema.tables
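For example, to diff the column metadata of two databases on the same server you can use EXCEPT; the database names below are placeholders:

-- Columns present in DatabaseA but missing (or different) in DatabaseB.
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM DatabaseA.INFORMATION_SCHEMA.COLUMNS
EXCEPT
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM DatabaseB.INFORMATION_SCHEMA.COLUMNS;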
ApexSQL Diff and ApexSQL Data Diff work great for me. Also, I've used the Redgate tools in the past with good success.
SQL Effects Clarity is very good at comparing two databases, and it's free (it also has a paid version that allows you to do a bit more).
For integrating into CC: if the output of the tool is plain text, you can publish it as an artifact. If the output is XML, you can merge it into the log and write an XSL that will include it in the email and web pages.
I use Redgate SQL Compare. It is about $400 per seat but is well worth the price.
I would definitely go with AdeptSQL if you're using MSSQL. It's the least good-looking but the most talented DB compare tool among the ones I've tried. It can compare both structure and data, it tells you which tables exist in one DB but not in the other, it compares the structure and data of the common ones, and it can produce the script to synchronize the two. It's not free, but it has a 30-day trial (as far as I can remember).
