Databases: Insert from multiple servers

Really a DB newbie question:
I am trying to insert user records into a DB. The id column could be an auto-incrementing serial or an INT.
How do I insert a record with a unique ID and get that ID back, while making sure that if requests are handled by multiple application servers, I don't generate duplicate IDs?
e.g.
Server 1 needs to insert: ( 'John', 'Smith', 25 )
Server 2 needs to insert: ( 'John', 'Rambo', 25 )
The app server wants the IDs of the generated records back. I can't do a SELECT based on the attributes because:
they could be duplicates, and
it's expensive.
One solution is for each app server to also insert a (server id, server update number) combination and then select on the basis of that.
I feel like this is such a generic problem that there should be a much simpler solution.
I'm using PostgreSQL if it matters.

With Postgres you can use the RETURNING clause to return the value of a column, such as:
INSERT INTO table (col1,col2,col3) VALUES (1,2,3) RETURNING id;

You didn't mention what language and tools you're using. That matters, as the standard doesn't really cover this, but many client platforms have their own abstractions.
In particular JDBC has Statement.getGeneratedKeys() and Statement.RETURN_GENERATED_KEYS.
I don't think there's any equivalent in the ODBC interface. I didn't find one with a quick search, though I found that some vendors add it as an extension.
For other clients, it just depends on what you're using. Some ORM layers have their own handling, e.g. Hibernate (and other JPA implementations) handle key generation, as does ActiveRecord (blech), SQLAlchemy, etc.
Otherwise, as Lucas says, you can just use the PostgreSQL extension RETURNING the_key_column_names_here. (9.5 should hopefully add RETURNING PRIMARY KEY too).
(The SQL spec provides GENERATED ALWAYS but as far as I know, no standard way to return the values. Many databases don't implement GENERATED anyway.)
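For illustration, a minimal sketch of using RETURNING from application code with psycopg2 (the connection string, table and column names are placeholders):
import psycopg2

conn = psycopg2.connect('dbname=mydb user=myuser')   # placeholder connection
cur = conn.cursor()
cur.execute(
    'INSERT INTO users (first_name, last_name, age) VALUES (%s, %s, %s) RETURNING id',
    ('John', 'Smith', 25),
)
new_id = cur.fetchone()[0]   # the generated id; unique even with many app servers
conn.commit()
Each app server gets back only the id of the row it just inserted, because the database itself hands out the serial values.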

Related

What is the conventional way to hard-code values in a database?

My application has a database table that is used to record the attendance of employees. The column attendance_status has only three possible values - "present", "absent", "on_leave" - with NULL as the default.
How do I add these values to the database? So far I have come up with two possible ways.
Create another table attendance_status with status_id and status_value and add the above values to it. And then use the id in the application for all SQL queries.
Probably the bad way: hardcode the values (maybe in a config file) and use them throughout the app's SQL queries.
Am I missing the right way? How should this be approached?
Either will work, but Option 1 gives you flexibility in the event that the requirements change, and it is the standard data model. I would, however, name my columns a little differently: id, value, name. Then the references become attendance_status.id and attendance_status.value. The third column is available for use in displays, reports, or whatever: value is on_leave and name is On leave.
Option 2 works provided the data input point is totally closed. If someone codes new functionality, there is the risk that he or she will invent something different to mean the same thing, like onLeave.
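For illustration, a minimal sketch of the Option 1 lookup table as SQLAlchemy models (the attendance table and its extra columns are invented for the example):
from sqlalchemy import Column, ForeignKey, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class AttendanceStatus(Base):
    __tablename__ = 'attendance_status'
    id = Column(Integer, primary_key=True)
    value = Column(String, unique=True, nullable=False)   # e.g. 'on_leave'
    name = Column(String, nullable=False)                  # e.g. 'On leave'

class Attendance(Base):
    __tablename__ = 'attendance'                           # invented table name
    id = Column(Integer, primary_key=True)
    employee_id = Column(Integer, nullable=False)          # invented column
    status_id = Column(Integer, ForeignKey('attendance_status.id'), nullable=True)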

Get audit history records of any entity record as per CRM view

I want to display all audit history data in the MS CRM format.
I have imported all records from the AuditBase table in CRM into a table on another database server.
I want to query these table records with SQL and present them in the Dynamics CRM audit-view format.
This is what I have done so far:
SELECT
    AB.CreatedOn AS [Created On],
    SUB.FullName AS [Changed By],
    Value AS [Event],
    AB.AttributeMask AS [Changed Field],
    AB.ChangeData AS [Old Value],
    '' AS [New Value]
FROM AuditBase AB
INNER JOIN StringMap SM ON SM.AttributeValue = AB.Action AND SM.AttributeName = 'action'
INNER JOIN SystemUserBase SUB ON SUB.SystemUserId = AB.UserId
--INNER JOIN MetadataSchema.Attribute ar ON AB.AttributeMask = ar.ColumnNumber
--INNER JOIN MetadataSchema.Entity en ON ar.EntityId = en.EntityId AND en.ObjectTypeCode = AB.ObjectTypeCode
--INNER JOIN Contact C ON C.ContactId = AB.ObjectId
WHERE ObjectId = '00000000-0000-0000-000-000000000000'
ORDER BY AB.CreatedOn DESC
My problem is that AttributeMask is a comma-separated value that I need to compare with the MetadataSchema.Attribute table's ColumnNumber field. I also don't know how to get the New Value for that entity.
I have already checked this link: SQL query to get data from audit history for opportunity entity, but it's not giving me the [New Value].
NOTE: I cannot use "RetrieveRecordChangeHistoryResponse", because I need to show this data on an external web page from a SQL table (not the CRM database).
Well, basically Dynamics CRM does not create this Audit view (the way you see it in CRM) using a SQL query, so if you succeed in doing it, Microsoft will probably buy it from you, as it would be much faster than the way it's currently handled :)
But really - the way it works currently, SQL is used only for obtaining all the relevant Audit view records (without any matching against attribute metadata or anything else), and then all the parsing and matching with metadata is done in the .NET application. The logic is quite complex and there are so many different cases to handle that I believe recreating it in SQL would require not just a simple SELECT query but a really complex procedure (and even that might not be enough, because not everything in CRM is kept in the database; some things are simply compiled into the application's libraries), plus weeks or maybe even months of work for one person (of course, that's my opinion; maybe some T-SQL guru will prove me wrong).
So, I would do it differently - use RetrieveRecordChangeHistoryRequest (which was already mentioned in some answers) to get all the audit details (already parsed and ready to use) using some kind of .NET application (probably running periodically, or maybe triggered by a plugin in CRM, etc.) and put them in a database in a user-friendly format. You can then consume this database with whatever external application you want.
Also I don't understand your comment:
I can not use "RetrieveRecordChangeHistoryResponse", because i need to
show these data in external webpage from sql table(Not CRM database)
What kind of application cannot call an external service (you can create a custom service; you don't have to use the CRM service) to get some data, but can access an external database? You should not read from the DB directly; a better approach would be to prepare a web service that returns the audit you want (using the CRM SDK under the hood) and call this service from the external application. Unless, of course, your external app is only capable of reading databases, not calling any custom web services...
It is not possible to reconstruct a complete audit history from the AuditBase tables alone. For the current values you still need the tables that are being audited.
The queries you would need to construct are complex, and writing them can be avoided if the RetrieveRecordChangeHistoryRequest is a suitable option.
(See also How to get audit record details using FetchXML on SO.)
NOTE
This answer was submitted before the original question was extended to state that the RetrieveRecordChangeHistoryRequest cannot be used.
As I said in the comments, the Audit table will have the old value and the new value, but not the current value. The current value will be pushed as the new value when the next update happens.
In your OP query, AB.AttributeMask will return comma (",") separated values and AB.ChangeData will return tilde ("~") separated values.
I assume you are fine with "~" separated values in the Old Value column and want to show the current values of the fields in the New Value column. This is not going to work as-is when multiple fields are enabled for audit: you have to split the AttributeMask value into CRM fields via the Attribute view's ColumnNumber and combine them to get the required result.
I would recommend starting with the reference blog below. Once you get the expected result, you can pull the current field values with an extra query, either in SQL or using C# in the front end, but you should concatenate the values with "~" again to maintain the format.
https://marcuscrast.wordpress.com/2012/01/14/dynamics-crm-2011-audit-report-in-ssrs/
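For illustration, a rough Python sketch of the splitting and re-concatenation described above (the helper names are hypothetical):
def split_audit_row(attribute_mask, change_data):
    # Pair each column number from AttributeMask (comma separated, often with
    # leading/trailing commas) with the matching old value from ChangeData
    # (tilde separated). Hypothetical helper, for illustration only.
    column_numbers = [c for c in attribute_mask.split(',') if c]
    old_values = change_data.split('~')
    return dict(zip(column_numbers, old_values))

def join_new_values(new_values):
    # Re-concatenate current values with '~' to keep the CRM-like format.
    return '~'.join('' if v is None else str(v) for v in new_values)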
Update:
From the above blog, you can tweak the SP query with your fields, then convert the last SELECT statement to a SELECT INTO to create a new table for your storage.
Modify the stored procedure to fetch the delta based on the last run. Configure a SQL job and schedule it to run every day or so to populate the table.
Then select and display the data the way you want. I did the same in Power BI in under 3 days.
Pros/cons: obviously this requirement is for reporting purposes. Typically, reporting requirements are met by mirroring the database through replication or other means, without interrupting production users and the async server by injecting plugins or making on-demand ad hoc service calls. Moreover, you have access to the database and not CRM Online. Better not to reinvent the wheel and to take the available solution forward. This is my humble opinion, based on a Microsoft internal project implementation.

Data migration from MS SQL to PostgreSQL using SQLAlchemy

TL;DR
I want to migrate data from a MS SQL Server + ArcSDE to a PostgreSQL + PostGIS, ideally using SQLAlchemy.
I am using SQLAlchemy 1.0.11 to migrate an existing database from MS SQL 2012 to PostgreSQL 9.2 (upgrade to 9.5 planned).
I've been reading about this and found a couple of different sources (Tyler Lesmann, Inada Naoki, Stefan Urbanek, and Mathias Fussenegger) with a similar approach for this task:
Connect to both databases
Reflect the tables of the source database
Iterate over the tables and for each table
Create an equal table in the target database
Fetch rows in the source and insert them in the target database
Code
Here is a short example using the code from the last reference.
from sqlalchemy import create_engine, MetaData
src = create_engine('mssql://user:pass@host/database?driver=ODBC+Driver+13+for+SQL+Server')
dst = create_engine('postgresql://user:pass@host/database')
meta = MetaData()
meta.reflect(bind=src)
tables = meta.tables
for tbl in tables:
    data = src.execute(tables[tbl].select()).fetchall()
    if data:
        dst.execute(tables[tbl].insert(), data)
I am aware that fetching all the rows at once is a bad idea; it could be done with an iterator or with fetchmany, but that is not my issue now.
Problem 1
All the four examples fail with my databases. One of the errors I get is related to a column of type NVARCHAR:
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) type "nvarchar" does not exist
LINE 5: "desigOperador" NVARCHAR(100) COLLATE "SQL_Latin1_General_C...
^
[SQL: '\nCREATE TABLE "Operators" (\n\t"idOperador" INTEGER NOT NULL, \n\t"idGrupo" INTEGER, \n\t"desigOperador" NVARCHAR(100) COLLATE "SQL_Latin1_General_CP1_CI_AS", \n\t"Rua" NVARCHAR(200) COLLATE "SQL_Latin1_General_CP1_CI_AS", \n\t"Localidade" NVARCHAR(200) COLLATE "SQL_Latin1_General_CP1_CI_AS", \n\t"codPostal" NVARCHAR(10) COLLATE "SQL_Latin1_General_CP1_CI_AS", \n\tdataini DATETIME, \n\tdataact DATETIME, \n\temail NVARCHAR(50) COLLATE "SQL_Latin1_General_CP1_CI_AS", \n\turl NVARCHAR(50) COLLATE "SQL_Latin1_General_CP1_CI_AS", \n\tPRIMARY KEY ("idOperador")\n)\n\n']
My understanding from this error is that PostgreSQL doesn't have NVARCHAR but VARCHAR, which should be equivalent. I thought that SQLAlchemy would automatically map both of them to String in its layer of abstraction, but perhaps it doesn't work that way in this case.
Question: Should I define all the classes/tables beforehand, for instance, in models.py, in order to avoid errors like this? If so, how would that integrate with the given (or other) workflow?
In fact, this error was obtained running the code from Urbanek, where I can specify which tables I want to copy. Running the sample above leads me to...
Problem 2
The MS SQL installation is a geodatabase that is using ArcSDE (Spatial Database Engine). For that reason, some of the columns are of a non-default Geometry type. On the PostgreSQL side, I am using PostGIS 2.
When trying to copy tables with those types, I get warnings like these:
/usr/local/lib/python2.7/dist-packages/sqlalchemy/dialects/mssql/base.py:1791: SAWarning: Did not recognize type 'geometry' of column 'geom'
(type, name))
/usr/local/lib/python2.7/dist-packages/sqlalchemy/dialects/mssql/base.py:1791: SAWarning: Did not recognize type 'geometry' of column 'shape'
Those are later followed by another error (this one was actually thrown when executing the provided code above):
sqlalchemy.exc.ProgrammingError: (psycopg2.ProgrammingError) relation "SDE_spatial_references" does not exist
LINE 1: INSERT INTO "SDE_spatial_references" (srid, description, aut...
^
I think that it failed to create the columns referred in the warnings, but the error was thrown at a later step when those columns were needed.
Question: The question is an extension of the previous one: how to do the migration with custom (or defined somewhere else) types?
I know about GeoAlchemy2, which can be used with PostGIS. GeoAlchemy supports MS SQL Server 2008, but in that case I guess I'm stuck with SQLAlchemy 0.8.4 (perhaps with fewer nice features). Also, I found here that it is possible to do the reflection using types defined by GeoAlchemy. However, my questions remain.
Possibly related
https://stackoverflow.com/questions/34475241/how-to-migrate-from-mysql-to-postgressql-using-pymysql
SqlAlchemy: export table to new database
https://stackoverflow.com/questions/34956523/sqlalchemy-custom-column-type-use-bindparam-as-multiple-function-parameters
SQLAlchemy Reflection Using Metaclass with Column Override
Edit
When I saw the error referring SDE_spatial_references I thought that it could be something related to ArcSDE, because the same machine also has ArcGIS for Server installed. Then I've learned that MS SQL Server also has some Spatial Data Types, and then I confirmed this is the case. I was wrong with this edit: the database is indeed using ArcSDE.
Edit 2
Here are some more details that I forgot to include.
The migration doesn't have to be done with SQLAlchemy. I'd thought that would be a good idea because:
I prefer to work with Python
The solution has to be with FOSS
Ideally, it would be in a way easily reproducible, and possible to launch and wait
After the migration I'd like to use Alembic to conduct further schema migrations
Other things that I have tried and that failed (I can't remember the exact reasons now, but I'd go through them again if any answer refers to them):
Kettle
Geokettle
ogr2ogr (still trying this approach)
Database details:
Small database, ± 3 GB
± 40 tables
There are tables with both spatial and non-spatial data
Both databases (SQL Server and PostgreSQL) in the same server, which is running Windows Server 2008
No big problem with downtime (up to 8 hours would be fine)
Here is my solution using SQLAlchemy. This is a long, blog-like post; I hope that it is acceptable here and useful to someone.
Possibly, this also works with other combinations of source and target databases (besides MS SQL Server and PostgreSQL, respectively), although they were not tested.
Workflow (sort of TL;DR)
Inspect the source automatically and deduce the existing table models (this is called reflection).
Import previously defined table models which will be used to create the new tables in the target.
Iterate over the table models (the ones existing in both source and target).
For each table, fetch chunks of rows from source and insert them into target.
Requirements
SQLAlchemy
GeoAlchemy2
sqlacodegen
Detailed steps
1. Connect to the databases
SQLAlchemy calls the object that handles the connection between the application and the actual database an engine. So, to connect to the databases, an engine must be created with the corresponding connection string. The typical form of a database URL is:
dialect+driver://username:password@host:port/database
You can see some examples of connection URLs in the SQLAlchemy documentation.
Once created, the engine will not establish a connection until it is explicitly told to do so, either through the .connect() method or when an operation which is dependent on this method is invoked (e.g., .execute()).
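Concretely, a minimal sketch of creating both engines (the connection details are placeholders; the variable names match the ones used later in this answer):
from sqlalchemy import create_engine

# Placeholders: adjust driver, credentials, hosts and database names.
ms_sql = source_engine = create_engine(
    'mssql+pyodbc://user:password@host/source_db?driver=ODBC+Driver+13+for+SQL+Server')
postgres = postgres_engine = create_engine('postgresql://user:password@host/target_db')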
con = ms_sql.connect()
2. Define and create tables
2.1 Source database
Tables in the source side are already defined, so we can use table reflection:
from sqlalchemy import MetaData
metadata = MetaData(source_engine)
metadata.reflect(bind=source_engine)
You may see some warnings if you try this. For example,
SAWarning: Did not recognize type 'geometry' of column 'Shape'
That is because SQLAlchemy does not recognize custom types automatically. In my specific case, this was because of an ArcSDE type. However, this is not problematic when you only need to read data. Just ignore those warnings.
After the table reflection, you can access the existing tables through that metadata object.
# see all the tables names
print list(metadata.tables)
# handle the table named 'Troco'
src_table = metadata.tables['Troco']
# see that table's columns
print src_table.c
2.2 Target database
For the target, because we are starting a new database, it is not possible to use table reflection. However, it is not complicated to create the table models through SQLAlchemy; in fact, it might be even simpler than writing pure SQL.
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base
from geoalchemy2 import Geometry

Base = declarative_base()

class SomeClass(Base):
    __tablename__ = 'some_table'
    id = Column(Integer, primary_key=True)
    name = Column(String(50))
    Shape = Column(Geometry('MULTIPOLYGON', srid=102165))
In this example there is a column with spatial data (defined here thanks to GeoAlchemy2).
Now, if you have tens of tables, defining so many of them may be baffling, tedious, or error-prone. Luckily, there is sqlacodegen, a tool that reads the structure of an existing database and generates the corresponding SQLAlchemy model code. Example:
pip install sqlacodegen
sqlacodegen mssql:///some_local_db --outfile models.py
Because the purpose here is just to migrate the data, and not the schema, you can create the models from the source database, and just adapt/correct the generated code to the target database.
Note: It will generate mixed class models and Table models. Read here about this behavior.
Again, you will see similar warnings about unrecognized custom data types. That is one of the reasons why we now have to edit the models.py file and adjust the models. Here are some hints on things to adjust:
The columns with custom data types are defined with NullType. Replace them with the proper type, for instance, GeoAlchemy2's Geometry.
When defining Geometry's, pass the correct geometry type (linestring, multilinestring, polygon, etc.) and the SRID.
PostgreSQL character types are variable-length capable, and SQLAlchemy will map String columns to them by default, so we can replace all Unicode and String(...) with String. Note that it is neither required nor advisable (don't quote me on this) to specify the number of characters in String; just omit it.
You will have to double-check, but probably all BIT columns are in fact Boolean.
As with character types, most numeric types (e.g., Float(...), Numeric(...)) can be simplified to Numeric. Be careful with exceptions and/or specific cases.
I have noticed some issues with columns defined as indexes (index=True). In my case, because the schema will be migrated, these should not be required now and could be safely removed.
Make sure the table and column names are the same in both databases (reflected tables and defined models); this is a requirement for a later step.
Now we can connect the models and the database together, and create all the tables in the target side.
Base.metadata.bind = postgres
Base.metadata.create_all()
Notice that, by default, .create_all() will not touch existing tables. In case you want to recreate or insert data into an existing table, it is required to DROP it beforehand.
Base.metadata.drop_all()
3. Get data
Now you are ready to copy data from one side and, later, paste it into the other. Basically, you just need to issue a SELECT query for each table. This is something possible and easy to do over the layer of abstraction provided by SQLAlchemy ORM.
data = ms_sql.execute(metadata.tables['TableName'].select()).fetchall()
However, this is not enough; you will need a little bit more control. The reason for that is related to ArcSDE. Because it uses a proprietary format, you can retrieve the data but you cannot parse it correctly. You would get something like this:
(1, Decimal('0'), u' ', bytearray(b'\x01\x02\x00\x00\x00\x02\x00\x00\x00#\xb1\xbf\xec/\xf8\xf4\xc0\x80\nF%\x99(\xf9\xc0#\xe3\xa5\x9b\x94\xf6\xf4\xc0\x806\xab>\xc5%\xf9\xc0'))
The workaround here was to convert the geometric column to the Well-Known Text (WKT) format. This conversion has to take place on the database side. ArcSDE is there, so it knows how to convert it. So, for example, in TableName there is a column with spatial data called shape. The required SQL statement should look like this:
SELECT [TableName].[shape].STAsText() FROM [TableName]
This uses .STAsText(), a geometry data type method of the SQL Server.
If you are not working with ArcSDE, the following steps are not required:
iterate over the tables (only those that are defined in both the source and in the target),
for each table, look for a geometry column (list them beforehand)
build a SQL statement like the one above (a rough sketch of these steps follows this list).
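Here is that sketch, assuming the geometry columns are listed by hand in a small dict (the table and column names are illustrative):
geom_columns = {'TableName': 'shape', 'Troco': 'Shape'}   # listed beforehand

statements = {}
for name, table in metadata.tables.items():
    if name not in Base.metadata.tables:          # only tables defined on both sides
        continue
    geom = geom_columns.get(name)
    if geom is None:
        statements[name] = 'SELECT * FROM [{0}]'.format(name)
    else:
        cols = ', '.join('[{0}].STAsText() AS [{0}]'.format(c.name)
                         if c.name == geom else '[{0}]'.format(c.name)
                         for c in table.c)
        statements[name] = 'SELECT {0} FROM [{1}]'.format(cols, name)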
Once a statement is built, SQLAlchemy can execute it.
result = ms_sql.execute(statement)
In fact, this does not actually get the data (compare with the ORM example -- notice the missing .fetchall() call). To explain, here is a quote from the SQLAlchemy docs:
The returned result is an instance of ResultProxy, which references a
DBAPI cursor and provides a largely compatible interface with that of
the DBAPI cursor. The DBAPI cursor will be closed by the ResultProxy
when all of its result rows (if any) are exhausted.
The data will only be retrieved just before it is inserted.
4. Insert data
Connections are established, tables are created, data have been prepared; now let's insert it. Similarly to getting the data, SQLAlchemy also allows you to INSERT data into a given table through its ORM:
postgres_engine.execute(Base.metadata.tables['TableName'].insert(), data)
Again, this is easy, but because of non-standard formats and erroneous data, further manipulation will probably be required.
4.1 Matching columns
First, there were some issues with matching the source columns to the target columns (of the same table) -- perhaps this was related to the Geometry column. A possible solution is to build, for each row, a Python dictionary that maps the target column names (keys) to the corresponding source values.
This is performed row by row -- although it is not as slow as one would guess, because the actual insertion is done several rows at a time. So, there will be one dictionary per row, and, instead of inserting the data object (which is a list of tuples; one tuple corresponds to one row), you will be inserting a list of dictionaries.
Here is an example for one single row. The fetched data is a list with one tuple, and values is the built dictionary.
# data
[(1, 6, None, None, 204, 1, True, False, 204, 1.0, 1.0, 1.0, False, None)]
# values
[{'DateDeleted': None, 'sentidocirculacao': False, 'TempoPercursoMed': 1.0,
'ExtensaoTroco': 204, 'OBJECTID': 229119, 'NumViasSentido': 1,
'Deleted': False, 'TempoPercursoMin': 1.0, 'IdCentroOp': 6,
'IDParagemInicio': None, 'IDParagemFim': None, 'TipoPavimento': True,
'TempoPercursoMax': 1.0, 'IDTroco': 1, 'CorredorBusext': 204}]
Note that Python dictionaries are not ordered, that is why the numbers in both lists are not in the same position. The geometric column was removed from this example for simplification.
4.2 Fixing geometries
Probably, the previous workaround would not be required if this issue had not occurred: sometimes geometries are stored/retrieved with the wrong type.
In MSSQL/ArcSDE, the geometry data type does not specify which type of geometry is being stored (i.e., line, polygon, etc.). It only cares that it is a geometry. This information is stored in another (system) table, called SDE_geometry_columns (see the bottom of that page). However, Postgres (PostGIS, actually) requires the geometry type when defining a geometric column.
This leads to spatial data being stored with the wrong geometry type. By wrong I mean that it is different than what it should be. For instance, looking at SDE_geometry_columns table (excerpt):
f_table_name geometry_type
TableName 9
geometry_type = 9 corresponds to ST_MULTILINESTRING. However, there are rows in the TableName table which are stored (or retrieved) as ST_LINESTRING. This mismatch raises an error on the Postgres side.
As a workaround, you can edit the WKT while creating the aforementioned dictionaries. For example, 'LINESTRING (10 12, 20 22)' is transformed to 'MULTILINESTRING ((10 12, 20 22))'.
4.3 Missing SRID
Finally, if you are willing to keep the SRIDs, you also need to define them when creating geometric columns.
If there is a SRID defined in the table model, it has to be satisfied when inserting data in Postgres. The problem is that when fetching geometry data as WKT with the .STAsText() method, you lose the SRID information.
Luckily, PostGIS supports an Extended-WKT (E-WKT) format that includes the SRID.
The solution here is to include the SRID when fixing the geometries. With the same example, 'LINESTRING (10 12, 20 22)' is transformed to 'SRID=102165;MULTILINESTRING ((10 12, 20 22))'.
4.4 Fetch and insert
Once everything is fixed, you are ready to insert. As mentioned before, only now will the data actually be retrieved from the source. You can do this in chunks of data (a user-defined amount), for instance, 1000 rows at a time.
while True:
    rows = data.fetchmany(1000)
    if not rows:
        break
    values = [{key: (val if key.lower() != "shape" else fix(val, 102165))
               for key, val in zip(keys, row)} for row in rows]
    postgres_engine.execute(target_table.insert(), values)
Here fix() is the function that corrects the geometries and prepends the given SRID to geometric columns (which are identified, in this example, by the column name "shape"), as described above, and values is the aforementioned list of dictionaries.
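The fix() helper itself is not shown above; here is a minimal sketch of what it could look like, based purely on the descriptions in 4.2 and 4.3 (treat it as an assumption, not the original code):
def fix(wkt, srid):
    # Promote single geometries to their MULTI counterpart and prepend the
    # SRID, producing E-WKT. Illustrative sketch, not the original helper.
    if wkt is None:
        return None
    if wkt.startswith('LINESTRING'):
        wkt = 'MULTILINESTRING ({0})'.format(wkt[len('LINESTRING'):].strip())
    elif wkt.startswith('POLYGON'):
        wkt = 'MULTIPOLYGON ({0})'.format(wkt[len('POLYGON'):].strip())
    return 'SRID={0};{1}'.format(srid, wkt)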
Result
The result is a copy of the schema and data, existing on a MS SQL Server + ArcSDE database, into a PostgreSQL + PostGIS database.
Here are some stats from my use case, for performance analysis. Both databases are on the same machine; the code was executed from a different machine, but on the same local network.
Table   | Geometry Column | Rows    | Fixed Geometries | Insert Time
--------|-----------------|---------|------------------|------------
Table 1 | MULTILINESTRING | 1114797 | 702              | 17min12s
Table 2 | None            | 460874  | ---              | 4min55s
Table 3 | MULTILINESTRING | 389485  | 389485           | 4min20s
Table 4 | MULTIPOLYGON    | 4050    | 3993             | 34s
Total   |                 | 3777964 | 871243           | 48min27s
I faced the same problems trying to migrate from Oracle 9i to MySQL.
I built etlalchemy to solve this problem, and it has currently been tested migrating to and from MySQL, PostgreSQL, SQL Server, Oracle and SQLite. It leverages SQLAlchemy, and BULK CSV Import features of the aforementioned RDBMS's (and can be quite fast!).
Install (non El-capitan): pip install etlalchemy
Install (El-capitan): pip install --ignore-installed etlalchemy
Run:
from etlalchemy import ETLAlchemySource, ETLAlchemyTarget
# Migrate from SQL Server onto PostgreSQL
src = ETLAlchemySource("mssql+pyodbc://user:passwd@DSN_NAME")
tgt = ETLAlchemyTarget("postgresql://user:passwd@hostname/dbname",
drop_database=True)
tgt.addSource(src)
tgt.migrate()
I'd recommend this flow with two big steps to migrate:
Migrate schema
Dump the source DB schema, preferably to some format unified across data tools, like UML (this and the next steps will be easier with a tool like Toad Data Modeler or IBM Rational Rose).
Change table definitions from source types to target types where needed with TDM or RR. E.g. get rid of varchar(n) and stick to text in Postgres, unless you specifically need the application to crash at the data layer with strings longer than n. Omit (for now) complex types like geometry if there is no way to convert them in the data modeling tools.
Generate a DDL file for the target DB (the mentioned tools are handy here, again).
Create (and add to tables) complex types as they should be handled by the target RDBMS. Try to insert a couple of entries to be sure the data types are consistent. Add these types to your DDL file.
You may also want to disable checks like foreign key constraints here.
Migrate data
Dump simple tables (i.e. those with scalar fields) to CSV.
Import the simple tables' data.
Write a simple piece of code to select complex data from the source and insert it into the target (it is easier than it sounds: just select -> map attributes -> insert; a toy sketch follows below). Do not write the migration for all complex types in one code routine; keep it simple, divide and conquer.
If you have not disabled checks while migrating the schema, you may need to repeat steps 2 and 3 for different tables (that's why, well, disable checks :)).
Enable checks.
This way you will split your migration process into simple atomic steps, and a failure at step 3 of the data migration will not force you back to the schema migration, etc. You can just truncate a couple of tables and rerun the data import if something fails.
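A toy sketch of step 3 of the data migration (select -> map attributes -> insert), with invented connection strings, table and column names, using plain SQLAlchemy:
from sqlalchemy import create_engine, text

src = create_engine('mssql+pyodbc://user:pass@source_dsn')
dst = create_engine('postgresql://user:pass@target_host/target_db')

# One complex table at a time: read, convert the geometry to (E)WKT, insert.
# 4326 is just an example SRID; the parcels table and its columns are invented.
rows = src.execute(text('SELECT id, name, shape.STAsText() AS wkt FROM parcels'))
mapped = [{'id': r.id, 'name': r.name, 'geom': 'SRID=4326;' + r.wkt} for r in rows]
dst.execute(
    text('INSERT INTO parcels (id, name, geom) '
         'VALUES (:id, :name, ST_GeomFromEWKT(:geom))'),
    mapped,
)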

Encryption on the fly

Here is something interesting that I have been asked. It has to do with the encryption of data in a non-encrypted database.
The story is as follows. We have a database that is not encrypted, and no column is encrypted in any of its tables. Now, we'd like to control the flow of the data depending on who is asking for it. Let me explain more clearly:
We have a table with the name: table1
This table has one column with the name: SName
We'd like to reach the following result. If a user connected to SQL Server Management Studio runs the following query:
select * from table1
they should get no result, or if they do get a result, it should be scrambled.
From inside the application, however, the table should exchange data with the application normally.
Do you know if there is a setting, an implementation, or an external tool that can provide this functionality?
I think this is quite an interesting case!
Thank you.
Use permissions to stop that person reading the table at all.
Or use a VIEW to hide the table and put a WHERE clause in it that applies a filter silently: this could refer to another table with a list of approved users.
This isn't really an encryption (well, obfuscation in this case) issue.
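For illustration, a rough sketch of the VIEW-plus-filter idea, issued from Python via SQLAlchemy (the view, table and column names are invented; SUSER_SNAME() is the SQL Server function returning the current login):
from sqlalchemy import create_engine, text

engine = create_engine('mssql+pyodbc://user:pass@server_dsn')   # placeholder

ddl = """
CREATE VIEW dbo.table1_filtered AS
SELECT t.*
FROM dbo.table1 AS t
WHERE EXISTS (SELECT 1
              FROM dbo.approved_users AS a
              WHERE a.login_name = SUSER_SNAME());
"""
with engine.begin() as conn:    # run the DDL inside a transaction
    conn.execute(text(ddl))
Point SSMS users at the view (or revoke SELECT on the base table) while the application keeps using table1 directly.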

Merging multiple Access databases into SQL Server

We have a program in which each user is given their own Access database. We'd like to merge these all together into a single SQL Server database.
The problem is that, using the SQL Server import/export wizard, the primary/foreign keys do not get updated. So for instance if one user has this table:
1 Apple
2 Banana
and another user has this:
1 Coconut
2 Cheeseburger
the resulting table looks like this:
1 Apple
2 Banana
1 Coconut
2 Cheeseburger
Similarly, anything that referenced Banana by its primary key (2) is now referencing both Banana and Cheeseburger, which will not make the vegans very happy.
Is there any way to automatically update the primary/foreign key references when importing, other than writing an extremely long and complex import-script?
If you need to keep them fully compartmentalized, you have to assign some kind of partitioning column to each table. Is there a reason you need your SQL Server to have the same referential integrity as Access? Are you just importing to SQL Server for read-only reporting? In that case, I would not bother with RI. The queries will all require a partitionid/siteid/customerid. You could enforce that for single-entity access by wrapping tables with a table-valued UDF which requires the partitionid. For cross-site access that doesn't work.
If you are just loading to SQL Server for reporting, I would also consider altering the data model to support reporting (i.e. a dimensional model is sometimes better than a normalized model) instead of worrying about transaction processing.
I think we need to know more about the underlying goals.
We need more information about the requirements.
My basic question is 'Do you need to preserve the original record key?' e.g. 1:apple in table T of user-database A; 1:coconut in table T of user-database B. Table T is assumed to have the same structure in all database instances. Reasons I can suppose you may want to preserve the original data: (a) you may have a requirement to reference the original data (maybe a visual for previous reporting), and/or (b) there may be a data dependency in the application itself.
If the answer is 'no,' then you are probably interested only in preserving all of the distinct data values. Allow the SQL table to build using a new key and constrain the SQL table field such that it contains unique data. This approach seems to preserve the original table structure (but not the original key value or its 'location') and may suffice to meet your requirement.
If the answer is 'yes,' I do not see a way around creating an index that preserves a pointer to the original database and the key that was created in its table T. This approach would seem to require an application modification.
The best approach in this case is probably to split the incoming data into two tables: one to identify the database and original key, another to identify the distinct data values. For example: (database) table D has records such as 'A:1:a,' 'A:2:b,' 'B:1:c,' 'B:2:d,' 'B:15:a,' 'C:8:a'; (data) table T1 has records such as 'a:apple,' 'b:banana,' 'c:coconut,' 'd:cheeseburger', where 'A' describes the original database 'location,' 1 is the original value in location 'A,' and 'a' is a value that equates records in table D and table T1. (Otherwise you have a lot of redundant data in the one table; e.g. A:1:apple, B:15:apple, C:8:apple.) Also, T1 has a structure similar to the original T and seems to be more directly useful in the application.
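For what it's worth, a rough Python sketch of the key-remapping idea behind the question (data is held in plain dicts here; in practice the reads would come from the Access files and the writes would go to SQL Server; all names are invented):
def merge(user_dbs):
    # Assign fresh keys in the merged store and rewrite foreign keys through a
    # (source_db, old_key) -> new_key map.
    merged_fruits, merged_orders = [], []
    key_map = {}                                   # (db_name, old_id) -> new_id
    for db_name, db in user_dbs.items():
        for old_id, name in db['fruits']:
            new_id = len(merged_fruits) + 1
            key_map[(db_name, old_id)] = new_id
            merged_fruits.append((new_id, name))
    for db_name, db in user_dbs.items():
        for fruit_id, qty in db['orders']:         # fruit_id is a FK into fruits
            merged_orders.append((key_map[(db_name, fruit_id)], qty))
    return merged_fruits, merged_orders

# Example mirroring the question: two users whose keys 1 and 2 would clash.
user_dbs = {
    'A': {'fruits': [(1, 'Apple'), (2, 'Banana')], 'orders': [(2, 10)]},
    'B': {'fruits': [(1, 'Coconut'), (2, 'Cheeseburger')], 'orders': [(2, 3)]},
}
print(merge(user_dbs))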
I ended up creating an SSIS project for this. SSIS (SQL Server Integration Services) is a visual programming tool made by Microsoft (part of the Business Intelligence Development Studio that comes with SQL Server) designed for solving exactly these sorts of problems.
Why not let Access use its replication manager to merge the databases? This will allow you to identify the conflicts and resolve them before importing to SQL Server. I'm fairly confident it will retain the foreign key relationships. If I understand your situation correctly, and the databases are the same structure with different data, you could load the combined database to the application and verify the data before moving to SQL Server.
What version of Access are you using? Here's a link for Access 2000. Use the language to adjust search parameters to fit your version.
http://technet.microsoft.com/en-us/library/cc751054.aspx
