I have converted MS Access tables and queries to SQL Server tables and views and linked them back to Access. While I was doing this migration, people were still using the old Access front end, not linked to SQL Server. So the data I now have in SQL Server is the old data from when I started the migration. I have created tables, indexes, queries, etc. in SQL Server, all of which use the old data.
Now I want to deploy the SQL Server database and link it to Access. Is there a way for me to delete my old data and migrate the new data into the SQL Server database while preserving all the schema?
If you did the migration using SSMA (SQL Server Migration Assistant for Access), then you can simply re-run that saved project.
The first time you run SSMA, it creates the data tables on SQL Server and then transfers the data.
However, you can open that same SSMA project again and re-run it; it will give you the option to delete the data on SQL Server and send up the existing Access data again.
One of the “really” great features of SSMA is that it lets you re-send the data. So you can slice and dice, and try the migration MANY MANY times.
Once you get the migration going the way you want, you migrate the data. You then work on getting your front end to work with SQL Server. During this time, no doubt users are still using the older system (non SQL Server).
For example, SSMA allows you to add a PK to each table (if it does not have one). I often found a “few” tables, such as the ones that drive combo boxes, did not have a PK. So during the migration, you want to let SSMA create the PK for you. You can do this manually after the migration, but then you need to write down some “cheat” notes, since as you point out, you are going to have to do the migration again later on.
So, if you make any “manual” changes to the data structures, then you want to “save” those changes in the event that you migrate again. The beauty of this is that when you are in table design mode (SQL Server), you can right click and choose “script” changes. So if you make say 10 or more changes to each table, you can save your changes into a SQL script. So now you can migrate, and then run those scripts.
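As a hypothetical illustration (the table and constraint names are made up, not from the question), a saved script for a hand-added PK on a combo box lookup table might look like this:

-- add the missing autonumber PK (hypothetical table name)
ALTER TABLE dbo.tblGender ADD ID int IDENTITY(1,1) NOT NULL;
ALTER TABLE dbo.tblGender ADD CONSTRAINT PK_tblGender PRIMARY KEY CLUSTERED (ID);

Save a script like that for each manual change, and after a future re-migration you just run them all again.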
Now, after the migration, you get to work making this front end work with SQL Server. During this time, no doubt users are STILL working on the old system (Access back end).
Once your new front end is working fine with SQL Server, you pick a day for the new roll out. After work, or during down time, you re-run the SSMA project you saved. The result is that SQL Server now has the most up to date data. And then you are able to roll out and deploy the new front end that is linked to SQL Server.
As noted, while SSMA can migrate Access queries, I VERY strongly recommend you don’t do this. Just migrate the data, and link the front end tables to SQL Server. At this point, 99% of your Access application will work as before. You “may” have to change the VBA OpenRecordset commands (to add dbOpenDynaset and dbSeeChanges to that OpenRecordset command), but that is a global search and replace – not much time at all.
So you likely have lots of code like this:
Set rst = CurrentDb.OpenRecordset(strSQL)
And you need to change the above to:
Set rst = CurrentDb.OpenRecordset(strSQL, dbOpenDynaset, dbSeeChanges)
The above will thus allow 99% of your VBA recordset code to work as before without changes.
The only “common” gotcha is that with an Access back end, the autonumber ID is generated the INSTANT you dirty a form, or add to a record. This allows code to grab the autonumber PK right away.
So such old code as:
Set rstRecords = CurrentDb.OpenRecordset("tblmain")
rstRecords.AddNew
' lots of "code" here follows
lngPK = rstRecords!ID
In the above, note how the VBA code grabs the PK autonumber right away.
In SQL Server, you cannot grab that PK until AFTER you force the record save. And DAO has a VERY nasty issue: after you issue an update (during add only – I repeat, during adding records only!!!), the record pointer jumps off the current record. This DOES NOT occur when you use DAO recordsets to update an existing record (again: this happens only for new records).
So the above code now becomes:
Set rstRecords = CurrentDb.OpenRecordset("tblmain")
rstRecords.AddNew
' code can go here to add data, or set values to the record
rstRecords.Update                             ' force the record save
rstRecords.Bookmark = rstRecords.LastModified ' re-position to the row just added
lngNext = rstRecords!ID                       ' the autonumber PK is now available
rstRecords.Close
So, for code that grabs the autonumber PK right away, we have to do two things:
Force a record write (update)
And then after the update, re-position the record pointer. (You ONLY need this re-position when adding – not for edits, but I often do this anyway.) This re-position issue is perhaps my LARGEST pain of using DAO (ADO does not require this re-position).
So your code that adds/sets the fields etc. in that recordset does NOT have to be changed. So leave that code doing whatever the heck it did before.
Now issue the update, AND THEN GRAB the autonumber PK.
So the above should cover 99% of the VBA code you have to change. Even in a rather large project, the above issue will only occur in a few places. (I find that I can search for “.add” in the code base, and rather quickly determine if code is grabbing the autonumber PK before the “.update” command is issued.)
The same goes for forms. When a user starts typing, the form becomes “dirty”. With an Access back end, the autonumber PK can be grabbed by code right away, but with a SQL Server back end, you have to issue a record save in the form, and THEN grab the PK ID.
So, you add this one line:
If Me.Dirty = True Then Me.Dirty = False
lngID = Me!ID
So you added the one line to force a record save (Me.Dirty = False).
And again, I tend to find that even with say 150 forms, only 1 or 2 will do this “grabbing” of the PK ID before the form's record has been saved. So this inability to grab the autonumber for new records applies to both forms and VBA recordset code. Few forms grab the PK autonumber ID, but some do need this (say, to add child records). However, existing forms + sub forms do NOT have this issue, since Access ALWAYS issues a record save when the focus jumps from the main form to any sub form.
Anyway, at this point you get the new front end working (and of course you link the front end using the same table names as before).
If I recall, SSMA tends to put “dbo” in front of the Access table link names – you don’t want that. The dbo schema on the SQL Server side is the default, and it should not pose any issues or problems.
So yes, SSMA allows you to re-run the migration, and it allows you to delete your data on SQL Server during that re-migration. You do not need to delete the old data yourself – SSMA can do this for you.
I have a production database of 20 TB of data. We migrated our database from Oracle to SQL Server. Our old application was based on a COBOL based platform. After migrating to SQL Server, the indexes are giving good results.
I am creating a schema with a new set of indexes, without any data. Now I want to migrate only the data.
The Import/Export utility will take a long time and will also fill up the log files. Is there any other alternative?
My advice would be (a rough T-SQL sketch follows this list):
Set the recovery model to simple.
Remove the indexes.
Batch insert the rows or use SELECT INTO (this minimizes logging).
Re-create the indexes.
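A minimal T-SQL sketch of those steps, assuming a hypothetical target table dbo.BigTable and a source reachable as SourceDb.dbo.BigTable (all names here are placeholders):

-- 1. minimize logging
ALTER DATABASE TargetDb SET RECOVERY SIMPLE;

-- 2. with no indexes in the way, SELECT INTO is minimally logged
SELECT *
INTO dbo.BigTable
FROM SourceDb.dbo.BigTable;

-- 3. re-create the indexes afterwards
CREATE CLUSTERED INDEX IX_BigTable_Id ON dbo.BigTable (Id);

If the target table must already exist, a batched INSERT ... SELECT over key ranges keeps each transaction, and therefore the log, small.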
I admit that I haven't had to do this sort of thing in a long time in SQL Server. There may be other methods that are faster -- such as backing up a table space/partition and restoring it in another location.
You may use the bcp utility to import/export data. For full details see https://learn.microsoft.com/en-us/sql/tools/bcp-utility?view=sql-server-2017
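For example (server, database, and table names are placeholders), a native-format export followed by a batched, table-locked import might look like:

bcp SourceDb.dbo.BigTable out BigTable.dat -S SourceServer -T -n
bcp TargetDb.dbo.BigTable in BigTable.dat -S TargetServer -T -n -b 10000 -h "TABLOCK"

The -b batch size keeps individual transactions small, and the TABLOCK hint helps qualify the load for minimal logging.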
I'm working on an application that requires a lot of data. This data is stored in SAP (a big enterprise planning tool) and needs to be loaded into an Oracle database. The data I'm talking about is 15,000+ rows, and each row has 21 columns.
Every time an interaction is made with SAP (4 times a day), those 15,000 rows are exported and have to be loaded into the Oracle database. I'll try to explain what I do now to achieve my goal:
Export data from SAP to a CSV file
Remove all rows in the Oracle database
Load the exported CSV file and import this into the Oracle database
What you can conclude from this is that the data in the Oracle database has to be updated whenever a row changes. This whole process takes about 1 minute.
Now I'm wondering if it would be faster to check each row in the Oracle database for changes against the CSV file. The reason I ask before trying it first is that it requires a lot of coding. Maybe someone has done something similar before and can guide me to the best solution.
All the comments helped me reduce the time. First Truncate, then insert all rows with the Oracle DataAccess library instead of OleDb.
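For reference, the wipe step is a one-liner (the table name is a placeholder for whatever staging table holds the SAP rows):

TRUNCATE TABLE sap_staging;

TRUNCATE is DDL in Oracle, so it deallocates the data without generating per-row undo, which is why it beats a DELETE for clearing the table on every run.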
Is there any easy way to get MySQL Server to query from an iSeries (AS/400 DB2)? I have ODBC installed, so I can query and export the data manually to my desktop and then import it into MySQL.
The problem is that the AS/400 database is so huge the performance is poor. I need to run a query every half hour or so on MySQL to pull the newly updated information from the iSeries database.
Basically, how do you use ODBC on the MySQL server to query the iSeries?
I haven't worked on an iSeries for over 10 years, but here is what I know/remember.
You create physical files, and then logicals (sort sequences) over them.
To make it as efficient as possible, the FIRST logical built during a "reorg" should contain ALL the fields you will use in any subsequent select/sequence logicals. The following logicals will then use that first logical to build themselves - they are then ONLY using an index instead of the physical file.
Second, when you use open query, it looks for a logical that is "pre-built". If it can't find one at least "near" what it needs, it has to build one of its own every time.
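In SQL terms on DB2 for i (a rough equivalence, with placeholder names - an SQL index creates a keyed access path much as a logical does), pre-building the access path the query needs looks like:

CREATE INDEX mylib.myindex ON mylib.mytable (custno, orderdate)

With that in place, the query can reuse the existing access path instead of building a temporary one on every run.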
My next point is about the file you are reading and selecting from. When a record is added, does it update the physical/logicals immediately? On open? On close?
If you are looking for speed in your query, then you don't want the system busy updating the logicals for records that have just been added.
Note that if these are order entry type records, that update may be deliberately delayed to enhance the data entry process.
Hope this helps - an up-to-date and appropriately keyed and sequenced logical will make a huge difference.
If you don't know the iSeries, you need someone who does to check that side for you.
Data replication. One method is to use a row update timestamp and have that column drive the replication.
alter table mylib.mytable add column
UPDATETS TIMESTAMP GENERATED ALWAYS FOR EACH ROW
ON UPDATE AS ROW CHANGE TIMESTAMP NOT NULL
Now your replication would use the updatets column and pull rows with an updatets greater than the current max(updatets) in the MySQL database.
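A sketch of that pull (updatets and mylib.mytable come from the ALTER TABLE above; last_sync is a placeholder for the max(updatets) value you stored on the MySQL side):

SELECT *
FROM mylib.mytable
WHERE updatets > :last_sync
ORDER BY updatets

Each run, you insert/update the returned rows into MySQL and then record the new maximum updatets for the next pull.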
I am tasked with exporting the data contained inside a MaxDB database to SQL Server 200x. I was wondering if anyone has gone through this before and what your process was.
Here is my idea, but it's not automated.
1) Export data from MaxDB for each table as a CSV.
2) Clean the CSV to remove ? (which it uses for nulls) and fix the date strings.
3) Use SSIS to import the data into tables in SQL Server.
I was wondering if anyone has tried linking MaxDB to SQL Server or what other suggestions or ideas you have for automating this.
I managed to find a solution to this. There is an open source MaxDB library that will allow you to connect to it through .Net much like the SQL provider. You can use that to get schema information and data, then write a little code to generate scripts to run in SQL Server to create tables and insert the data.
MaxDb Data Provider for ADO.NET
If this is a one time thing, you don't have to have it all automated.
I'd pull the CSVs into SQL Server tables and keep them forever; they will help with any questions a year from now. You can prefix them all the same, "Conversion_" or whatever. There are no constraints or FKs on these tables. You might consider using varchar for every column (or just for the ones that cause problems, or not at all if the data is clean), just to be sure there are no data type conversion issues.
Then pull the data from these conversion tables into the proper final tables. I'd use a single conversion stored procedure to do everything (but I like T-SQL). If the data isn't that large (millions and millions of rows or less), just loop through and build out all the tables, printing log info as necessary, or inserting into exception/bad-data tables as necessary.
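A minimal T-SQL sketch of that pattern, with hypothetical table and column names (the NULLIF handles MaxDB's ? null marker mentioned in the question):

-- staging table: all varchar, no constraints, kept around after the conversion
CREATE TABLE Conversion_Customers (
    CustomerId  varchar(50),
    CreatedDate varchar(50),
    Balance     varchar(50)
);

-- conversion step: cast into the real types in the final table
INSERT INTO dbo.Customers (CustomerId, CreatedDate, Balance)
SELECT CONVERT(int, NULLIF(CustomerId, '?')),
       CONVERT(datetime, NULLIF(CreatedDate, '?')),
       CONVERT(decimal(18, 2), NULLIF(Balance, '?'))
FROM Conversion_Customers;

Rows that fail conversion can be diverted into an exception/bad-data table first, as described above.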