I have a large Oracle source database with many objects and wish to migrate a comparatively small set of table definitions to a SQL Server instance using Microsoft's dedicated migration tool SSMA. I ran the migration tool previously, having to leave it processing overnight due to the quantity of objects. When I tried to save the project, frustratingly, the machine ran out of memory, taking me back to where I started.
I initially connected as SYSTEM, so I created a new user that could select only from the tables to be migrated, along with the CREATE SESSION and CONNECT privileges. This failed on connection to Oracle because the dictionary tables were inaccessible.
I then granted SELECT ANY DICTIONARY to the new user and connected to the Oracle source. This time the connection was successful, but I believe the entire dictionary is being read, given the amount of time it has already taken to load the objects into SSMA.
What I would like to know is: is there an easy way to constrain the set of tables being loaded into SSMA, with the intention of speeding up the connection process?
We had an intern who was given written instructions for deleting old data from a database, based on dates, from within our ERP system. They were fascinated by the results and just kept deleting instead of stopping at the required date. There are now 4 years of missing records in the production database. I have these records in my development database, which is in a different instance on a different server. Is there a way to transfer just those 4 years' worth of data from my development database to my production database, checking, of course, to make sure there are no duplicates (there is a unique index on transaction number)?
I haven't tried anything yet because I'm not sure where to start. I do have a test database on the same instance as the production database that I could use to test the transfer with.
There are several ways to do this. Assuming that this is on a different machine, you will want to create a linked server on your dev machine to link to the target server (or, technically, a link from the production server to your dev machine could be used as well). Then perform an insert of the selected records from the source to the target.
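Roughly, that looks like the sketch below. This is just a hedged outline, not a drop-in script: PRODSRV, ProdDB, DevDB, dbo.Transactions, the column names, and the date range are all placeholders you would replace with your own, and the NOT EXISTS guard leans on the unique transaction number you mentioned.

-- Run on the dev instance: create a linked server pointing at production (placeholder names)
EXEC sp_addlinkedserver
    @server = N'PRODSRV',
    @srvproduct = N'',
    @provider = N'SQLNCLI',
    @datasrc = N'ProdServerName\ProdInstance';

EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'PRODSRV',
    @useself = N'False',
    @rmtuser = N'prod_login',        -- a login allowed to insert into the production table
    @rmtpassword = N'********';

-- Push the missing date range, skipping transaction numbers already present on production
INSERT INTO PRODSRV.ProdDB.dbo.Transactions (TransactionNumber, TransactionDate, Amount)
SELECT s.TransactionNumber, s.TransactionDate, s.Amount
FROM DevDB.dbo.Transactions AS s
WHERE s.TransactionDate >= '20100101' AND s.TransactionDate < '20140101'
  AND NOT EXISTS (SELECT 1
                  FROM PRODSRV.ProdDB.dbo.Transactions AS t
                  WHERE t.TransactionNumber = s.TransactionNumber);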
More efficiently, you can use the Export Data functionality. Right-click on the database (not the server/instance, but the database) and select Tasks / Export Data from the popup menu. This will open the SQL Server Import and Export Wizard. Use your query above to select the data for export.
If security considerations interfere with this, create a duplicate of the table(s) with alternate names (e.g. MyInvRecords) in a new database, and export the data into those tables. Back up that DB, transfer it to someplace accessible from the target server, restore that DB, then transfer the rows back into the original DB.
I haven't had to use anything but these methods before, so one of them should work for you.
A basic insert will work just fine.
INSERT INTO ProdDB.schema.YourTable
    ([Columns])
SELECT [Columns]
FROM TestDB.schema.YourTable
WHERE <your date range predicates here>
We're running SQL Server 2012 / .Net Framework 4.5.1
We have an application that does the following:
Extract all table data from a source database using an instance of .Net's SqlBulkCopy.
Delete all data in a target database using regular SQL statements.
Deploy the data from the source database to the target database using an instance of .Net's SqlBulkCopy.
The third step is successful when the SQL connection uses my Active Directory account, but fails with the following error message when using a SQL Server account created for this purpose: Cannot find the object "[SchemaName].[TableName]" because it does not exist or you do not have permissions.
Interestingly, the process runs through about a dozen tables before hitting one that causes this error. Manual verification proves that a) The table exists on the target, b) The problem user can select from the table, and c) the problem user can manually insert into the table with the standard INSERT INTO [SchemaName].[TableName] ([Columns]) VALUES ([Values]) format. BCP also works for that user, but using SqlBulkCopy from a .Net application fails for the same user.
Our DBA (A pretty seasoned guy, so far as I can tell, actually) says that the database permissions on the target database are IDENTICAL between the two users, but reality would seem to suggest this is not the case.
Googling the problem shows that the user should have the db_owner or db_ddladmin roles. The user actually belongs to both.
Anyway, solving the local problem is of secondary concern, since I can get done what I need done with my AD account. What I'd really like to know is whether there is a baked-in way to compare the differences in permissions between two users. If not, can this be done with a T-SQL query of some kind?
Thanks, guys and gals!
Here's my permissions script that I use. It's generally the approach that everyone uses, unless they have a schema compare product via Visual Studio, Red Gate, etc. http://www.csvreader.com/posts/permissions_list.php
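If you just want something inline rather than a full script, a minimal T-SQL sketch along these lines puts the explicit permissions and role memberships for the two principals side by side; ADAccount and SqlAccount are placeholder names for your AD user and the SQL login's database user.

-- Explicit database/object permissions for the two users (run in the target database)
SELECT pr.name AS principal_name,
       pe.class_desc,
       OBJECT_SCHEMA_NAME(pe.major_id) AS object_schema,
       OBJECT_NAME(pe.major_id) AS object_name,
       pe.permission_name,
       pe.state_desc
FROM sys.database_permissions AS pe
JOIN sys.database_principals AS pr
  ON pe.grantee_principal_id = pr.principal_id
WHERE pr.name IN (N'ADAccount', N'SqlAccount')
ORDER BY pr.name, pe.class_desc, object_schema, object_name, pe.permission_name;

-- Role membership often explains the difference, so compare that as well
SELECT m.name AS member_name, r.name AS role_name
FROM sys.database_role_members AS drm
JOIN sys.database_principals AS r ON drm.role_principal_id = r.principal_id
JOIN sys.database_principals AS m ON drm.member_principal_id = m.principal_id
WHERE m.name IN (N'ADAccount', N'SqlAccount')
ORDER BY m.name, r.name;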
Are you specifying the schema on the destination table with SqlBulkCopy? Is it possible that you're running into a user-owned schema, with the object resolving to a different default schema for that user?
It's also been my experience that SqlBulkCopy only requires select and insert on the destination table. BCP requires the escalated permissions that you described, which is another benefit of SqlBulkCopy.
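For what it's worth, granting just those two rights to the problem login on the destination objects is a one-liner; [SchemaName] and [BulkCopyUser] are placeholders for the schema from the error message and whatever the SQL user is actually called.

-- Minimal rights SqlBulkCopy typically needs on the destination (placeholder names)
GRANT SELECT, INSERT ON SCHEMA::[SchemaName] TO [BulkCopyUser];
-- or, per table:
GRANT SELECT, INSERT ON OBJECT::[SchemaName].[TableName] TO [BulkCopyUser];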
I have an Access 2003 database that holds all of my business data. This Access database gets updated every few hours during the day.
We're currently writing a website that will need to use the data from the Access database. This website (for the time being) will have only read-only capabilities, meaning there will only need to be a one-way transfer of data (Access -> SQL).
I'm imagining there's a way to perform this data migration from Access to SQL Server programmatically. Does anyone have any links to something I can read about this?
If this practice sounds odd and you'd like to suggest another way to do this (or a setup where data can go both ways: Access -> SQL, SQL -> Access), that's perfectly fine.
The company is going to continue using Access 2003 for their business functionality. There's no way around that. But I'd like to build the (read-only) website on top of SQL Server.
The strategy you outlined can be very challenging. You could use INSERT queries to copy new Access rows to SQL Server, as described in another answer.
However, if you have changes to existing Access rows, and you also want those changes propagated to SQL Server, it won't be so simple. And it will be more complicated still if you want deleted Access rows deleted from SQL Server, too.
It seems more reasonable to me to use a different approach. Migrate the data to SQL Server once. Then replace the tables in your Access database with ODBC links to the SQL Server tables. Thereafter, changes to the data from within your Access application will not require a separate synchronization step ... they will already be in SQL Server. And you won't need to write any code to synchronize them.
If your concern is that the connections between the web server and SQL Server be read-only, just set them up that way. You can still independently allow read-write permissions for your Access application.
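As a rough sketch, a read-only SQL login for the website could be set up like this; web_reader and YourDatabase are placeholder names, and db_datareader grants SELECT on every table, which is all a read-only site needs.

-- Placeholder names throughout; CREATE LOGIN runs at the server level, the rest in the database
CREATE LOGIN web_reader WITH PASSWORD = 'use-a-strong-password-here';
USE YourDatabase;
CREATE USER web_reader FOR LOGIN web_reader;
EXEC sp_addrolemember N'db_datareader', N'web_reader';   -- read-only access to all tables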
To do the initial data migration and set up the SQL Server tables automatically, I would use the SQL Server Migration Assistant. The only thing you should definitely change that I can think of would be to turn off the Identity property on any columns that have it - to be explained below (MS Access calls Identity AutoNumber). Once you have your tables loaded, you can set up a DSN-less connection to the database (and tables) you just created.
I haven't used the method just linked, but I believe it allows you to use SQL Server authentication to connect to the db. The benefit of using this method is that you can easily change which SQL Server instance and/or database you are connecting to for development and testing.
There might be a better, automated way, but you can create several insert queries doing left joins from the primary key of the Access table to the SQL Server table, and putting a WHERE clause that specifies the SQL Server PrimaryKey must be null. This is why you need to turn off the Identity property in the SQL Server tables, so that you can insert the new data.
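One such append query might look like the following, run from Access against the ODBC-linked SQL Server table. The names are placeholders: LocalCustomers is the original Access table, dbo_Customers is the linked SQL Server table, and ID is the primary key being compared.

INSERT INTO dbo_Customers (ID, CustomerName, CreatedDate)
SELECT a.ID, a.CustomerName, a.CreatedDate
FROM LocalCustomers AS a
LEFT JOIN dbo_Customers AS s ON a.ID = s.ID
WHERE s.ID IS NULL;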
Finally, put the name of each query in one function, then run the function periodically.
I have used Microsoft's free SQL Server Migration Assistant (SSMA) to migrate Access to SQL Server. The tool is very simple to use. The only problem I have encountered with the tool was overloaded data types when migrating. What I mean by this is a small string will get converted to a NVARCHAR(MAX) in some instances. Otherwise, the tool is very handy and can be reused after setting up a 'profile'.
I have two databases, one on a remote server, the other local (SQL Server 2008).
The database on my local server has the entire structure set up but no data. I would like to copy the data from the remote server to my server, and I am wondering about the best method to do this.
The main issue I am experiencing is that the user I have on the remote database has limited permissions. I cannot read the stored procedures or user-defined functions, so when I use the Import/Export Wizard I do not get the schema, etc. So a regular dump/restore is not working for me, as it restores the tables without the primary keys/foreign keys and the stored procedures.
I'd like to do this:
INSERT INTO localtable SELECT * FROM remotedb.table
I was having issues because of the IDENTITY fields and I had to explicitly name all of the columns. Also I am not sure if SQL Server Management Studio allows you to use two different databases, remote and local, so I was looking for any advice.
I have also tried applications like SQL FTP and Backup and it fails because it runs out of memory (I have 16GB of memory on the machine and the DB is like 4GB). I also can use the SQL Server import/export wizard but then I don't get the schema information. I also tried SQL Compare from Red Gate and it runs into issues with the permissions. Unfortunately I do not have the time to request and gain access to a new user so I was hoping someone had a creative idea.
You can definitely use SQL Server backups for this. It will not run out of memory. If it does, please tell us the message (because likely you are misinterpreting it). This is the fastest possible and most complete solution.
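For reference, a plain backup and restore of a database this size (~4 GB) needs disk space rather than memory, and looks roughly like the following; the database name, file paths, and logical file names are placeholders, so check them with RESTORE FILELISTONLY first.

-- On the remote server (placeholder name and path)
BACKUP DATABASE SourceDB
TO DISK = N'D:\Backups\SourceDB.bak'
WITH INIT;

-- Copy the .bak file to the local server, then (placeholder logical names and paths):
RESTORE DATABASE SourceDB
FROM DISK = N'C:\Backups\SourceDB.bak'
WITH MOVE N'SourceDB' TO N'C:\Data\SourceDB.mdf',
     MOVE N'SourceDB_log' TO N'C:\Data\SourceDB_log.ldf',
     REPLACE;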
You can tell the export wizard to also script the schema. It is hidden under "advanced" somewhere (terrible UI). But the script will be extremely big and I know of no way to execute it.
You can drop all schema objects except PKs in the target database. Then you can use remote queries to copy all the data over. You will not get any problems with foreign keys and identity columns if you drop them beforehand. After you are done you can recreate all those objects. It is probably best if you use a transaction for all of this, because that way you get consistent source data from a single point in time.
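If it helps, the per-table copy step of that approach might look like this hedged sketch; REMOTESRV is a placeholder linked server name, SourceDB and dbo.Orders are placeholder database and table names (with placeholder columns), and SET IDENTITY_INSERT is one way to keep the existing identity values rather than removing the property.

BEGIN TRANSACTION;

SET IDENTITY_INSERT dbo.Orders ON;   -- only needed if the table has an identity column

INSERT INTO dbo.Orders (OrderID, CustomerID, OrderDate)
SELECT OrderID, CustomerID, OrderDate
FROM REMOTESRV.SourceDB.dbo.Orders;

SET IDENTITY_INSERT dbo.Orders OFF;

COMMIT TRANSACTION;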
I need to upsize a split Access database, i.e., one that's currently split between two mdb files, a front-end and a back-end. I see many webpages that in essence say, "run the Upsizing Wizard." My first, very basic question:
Should I be running this wizard in my front-end mdb or my back-end mdb?
I assume I don't want to link main mdb -> backend mdb -> SQL Server. Should I run the wizard on the backend mdb, and then in the frontend mdb change the linked tables to point to SQL Server rather than to the backend mdb? If so, how is this done? When I right-click and go into the Linked Table Manager for a table in the frontend (linked to the backend mdb), it only seems to let me choose a new mdb file.
I would agree with your first guess: you will want to run the wizard on the back-end mdb.
Once that's in SQL Server, also as you guessed, you'll want to link the front end to work with the SQL Server data. One way to do this is to set up an ODBC data source for your new SQL Server database and select that in the Linked Table Manager.
Open the Data Sources (ODBC) shortcut: in XP Pro, this is in the Control Panel under Administrative Tools. (If you don't see it, you probably don't have permission to create a data source, so you'll have to work with your network people to do this.) This will open the ODBC Administrator.
On the File DSN tab, click Add.... You'll see a list of available drivers. Select SQL Server and click Next. (If the front end is only being used on your machine, you can create a System DSN instead.)
Find a common location and name your data source.
Click Next and Finish. This will set up the first part of the data source, and will open the SQL Server data source wizard.
Name the data source and select the server on which you've put the upsized back-end database.
Change the rest of the settings as needed (you may not need to change much, but the scope of those changes may require a second question) and click through to Finish.
Once you have the data source set up, then Get External Data should give you the option to select it as your source. (In 2007, you can get there from the External Data ribbon. ODBC data sources are available under More.)
To expand a little further based on Matt's follow-up questions:
How you do it is a design choice. I recommend upsizing the back-end mdb because that would allow you to keep whatever forms and such you had in Access; I think it's less of a transition if your data is in SQL Server.
Before you upsized, your tables were linked to the back-end database, and the Linked Table Manager showed the links. After you set up the ODBC data source and linked those tables, it'll show that link. You'll view the links in two different ways because they're actually different types of links (Access vs. ODBC), even though the links may look the same in your front-end mdb.
Personally, I have found that the Upsizing Wizard does a very bad job of determining correct data types. I would create the tables myself in SQL Server using the data types I need, then move the data to the existing tables from Access. Otherwise you will be stuck with text data when you could use varchar, or with float when you really need decimal.
Once the data has been moved then I would delete the Access tables and link to the SQL Server tables.
Do not do anything without having a backup copy of the database first.
As a matter of standard paranoia, I would just make a backup copy of the existing files and run the Upsizing Wizard on the front end. If anything undesirable happens, just revert the changes by overwriting with the backup copy.
Update the front end, and it will import the back end tables before it upsizes. I did this a week ago with a successful result.
However, any queries that use -1 instead of Yes will fail. Any full table deletes on tables without a primary key will fail, and you will get different behaviour from that than you will by merely using a pass-through SQL query to truncate the table. The truncate will delete all rows; the Access version may leave a blank.
Also, you'll need to include dbSeeChanges anywhere you have a recordset opening on a table with an AutoNumber column data type. SQL Server changes these to Identity data types, then gripes when you try to open the table. Good luck.
Do it all in the front end
You can simply export the tables to SQL Server.
You can then delete the linked tables you have in your frontend.
Then link the connection to SQL Server
Check:
when you open tables you get records
all your queries run
compile your code
You will also have to consider how you are releasing the front end. If you are using a DSN file, you will need to provide that to each user.
You will need to determine how the end user accesses SQL Server. Are you using a single login with the username and password stored in the connection?
You could also split your backend DB into multiple Access DBs and link them in the frontend.