I'm used to working with SQL Server and when I want to copy a DB there, I just need a handful of clicks in the wizard and voila...a complete copy of the DB, without taking the source DB offline.
We now also have an Oracle 11g database because some machines require it, and I want to make a copy of that database. Just a copy on the same server, to use as a test DB for my software development.
All instructions that I find are pages full of steps, using RMAN or not, you have to write scripts, use command line stuff...I'm amazed at how inefficient such a common task is when using Oracle.
Aren't there any easy ways of copying a DB? Maybe just exporting everything to a SQL file, then editing it to use another DB name, and then executing it again?
I see that in SQL Developer you can choose 'Database Copy...' from the Tools menu, but it asks for a destination connection. How can I select a destination when creating the destination DB is the whole point of running the wizard? Or is a connection not the same thing as a DB?
Thanks for helping me out here!
You're generally going to need a new database to copy the data to, and the data can be copied with a Data Pump export/import. There aren't many ways of getting around that, I'm afraid, but one option you might consider is to make more use of VMs such as Oracle's own VirtualBox, as they can be cloned very easily with absolute certainty of byte-by-byte fidelity.
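For the copy itself, a Data Pump round trip looks roughly like this (a sketch only: the schema name, directory object and file names are placeholders, and remap_schema is the trick that lets the copy land under a different schema, if a second schema in the same instance is good enough as a test copy):

expdp system/password schemas=APPUSER directory=DATA_PUMP_DIR dumpfile=appuser.dmp logfile=appuser_exp.log
impdp system/password directory=DATA_PUMP_DIR dumpfile=appuser.dmp remap_schema=APPUSER:APPUSER_TEST logfile=appuser_imp.log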
Incidentally, one problem in making logical copies (via export/import) of a database is that it's easy to end up with a different physical pattern to the tables and indexes, which can lead to unexpected differences in query optimisation.
Related
I need to automate the migration from an Access (2003) DB to a SQL Server DB (2005 or 2008). The upsizing should be done automatically as part of a build process.
I need this because there are 2 versions of the software: a single-user rich client and a web version. The Access DB is used for the single-user version to minimize setup effort, and SQL Server is used to improve performance and scaling with many simultaneous users. Access should be the "leading" DB, meaning devs make changes in the Access DB and those are propagated to the SQL Server within the build process. Many changes will occur, so doing it manually is not an option.
I am new to the Microsoft world, so I don't know the appropriate tools for this. What tools can I use, and how? I know how to do it (by clicking) with the upsizing assistant. Perhaps I can automate that somehow?
Thanks in advance for your answers.
Cheers,
Arne
Why not use a SQL Server back end for the Access database? Then changes are only made once and everything stays in sync.
The problem here is that you're trying to do versioning with two different systems.
If all of the design changes can be done in Access, then you can simply up-size that to SQL Server. The question here is: do you need to modify the existing SQL Server databases, and do you need to keep doing this over time? Upsizing the Access database will get the new version up to SQL Server, but it WILL NOT help for existing SQL Server databases.
If, over time, you need changes to tables on BOTH systems, including existing ones, then I would tell the Access developers that ALL changes to the tables MUST be written as DDL SQL. You then use that DDL, executed on Access, to make the changes. The developer MUST THEN take that same DDL and run it on the SQL Server side. It might need slight modifications; they fix it to run on SQL Server. You then wind up with two scripts: one for SQL Server, and one for Access. On the Access side you can write a few lines of code to read and process a text file of DDL (that same code can be used for all scripts).
That way, you build up an equivalent set of changes for each upgrade. So, simply NEVER allow design changes directly to the Access tables. If you allow design changes with the table designer, then you are in big trouble, as you won't know what changes need to be carried over to SQL Server.
However, as the better approach, I would suggest that table design changes are done FIRST in SQL Server. The reason is that the table designer in the SQL Server tools can script the changes just made to the clipboard (a simple right click), and that designer is VERY much like the Access one, so it's easy to use even for Access developers with little SQL Server experience. That right click of "script changes to clipboard" creates the change script for SQL Server, which they save into a file. They then take the same DDL and run it on the Access side; it might need slight modifications for Access, but that's just fine, because they didn't have to write the DDL statement themselves and you know the SQL Server DDL is correct. Once fixed up, the Access version of the script gets saved into a file as well.
So over a period of changes, you wind up with two scripts that keep the changes in sync for both systems.
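The "few lines of code to read and process a text file of DDL" can be as small as this sketch (C#, on the assumption that your build process runs outside Access; the provider string, the paths and the semicolon-separated script format are my assumptions):

using System.Data.OleDb;
using System.IO;

// Run each semicolon-separated DDL statement from a script file against the
// Access .mdb. The same loop with SqlConnection/SqlCommand covers the SQL
// Server version of the script.
static void RunAccessDdl(string scriptPath, string mdbPath)
{
    string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath;
    using (var conn = new OleDbConnection(connStr))
    {
        conn.Open();
        foreach (string stmt in File.ReadAllText(scriptPath).Split(';'))
        {
            if (stmt.Trim().Length == 0) continue;
            using (var cmd = new OleDbCommand(stmt, conn))
                cmd.ExecuteNonQuery();
        }
    }
}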
If you don't need changes to existing SQL Server and Access databases, then just let the Access folks have at it, up-size the whole database to SQL Server in one shot, and you're done.
I can't really think of any other approach between the above two extremes.
You could have the users make the change in Access first, then go to SQL Server, make the same change in the SQL table designer, and save the SQL Server change script. However, this would not get you a change script for the Access side. If a change script is NOT needed for the Access side, then this would work very well, as you would still get a SQL Server change script.
The above is how I would go about this, and it's quite workable...
Scenario
I want to take backups from 7 client databases into 1 server database.
I don't know the structure of the DBs (either the server or the client DBs).
Both databases already contain old data, and the tool I have to build must back that up as well.
It should also be possible to back up the old data again if any updates are done to it.
Please help me find a solution for this:
1. How should I proceed with the problem?
2. The database type is not specified; it may be MS Access or SQL Server 2005.
3. What should I implement this in? (I am thinking of doing it in C#.)
I'm not sure why you would want to go about it this way - if you are merely trying to copy the client databases (which I interpret as being "file based") then why not simply take copies of their files as part of the wider backup strategy?
If you want to write backup code that places all the data in a server-based RDBMS, then you are also going to have to think about how you restore that information later on - which presumably means even more coding for you.
So - I don't think this is a good idea, but if you are determined, I would start off by writing a class (which will be almost abstract) dedicated to reading the structure of the client database (tables, fields, views etc). I'd then inherit from that to get a specific class for each individual type of client DB. Once you have that, you can use ADO.NET to read values from the tables in the client DB, populate DataTables with the information, and then write that information back out to the server-based DB.
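A minimal sketch of that shape (all names are invented; assumes an Access/.mdb client DB readable via OleDb and a SQL Server target whose tables already exist):

using System.Collections.Generic;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

// Base class: knows how to list tables and read their contents.
abstract class ClientDbReader
{
    public abstract IEnumerable<string> GetTableNames();
    public abstract DataTable ReadTable(string tableName);
}

// One concrete reader per kind of client DB (Access shown; others similar).
class AccessDbReader : ClientDbReader
{
    private readonly string _connStr;
    public AccessDbReader(string mdbPath)
    {
        _connStr = "Provider=Microsoft.Jet.OLEDB.4.0;Data Source=" + mdbPath;
    }

    public override IEnumerable<string> GetTableNames()
    {
        using (var conn = new OleDbConnection(_connStr))
        {
            conn.Open();
            DataTable schema = conn.GetSchema("Tables",
                new string[] { null, null, null, "TABLE" });
            foreach (DataRow row in schema.Rows)
                yield return (string)row["TABLE_NAME"];
        }
    }

    public override DataTable ReadTable(string tableName)
    {
        var table = new DataTable(tableName);
        using (var adapter = new OleDbDataAdapter(
            "SELECT * FROM [" + tableName + "]", _connStr))
        {
            adapter.Fill(table);
        }
        return table;
    }
}

static class Backup
{
    // Copy every client table's rows up to the server database.
    public static void ToServer(ClientDbReader reader, string serverConnStr)
    {
        foreach (string table in reader.GetTableNames())
        {
            using (var bulk = new SqlBulkCopy(serverConnStr) { DestinationTableName = table })
            {
                bulk.WriteToServer(reader.ReadTable(table));
            }
        }
    }
}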
I really can't stress enough though that I don't like this idea - it seems overly complicated, and also won't deal with functions etc.
Good luck anyway,
Martin
Advisability of doing this aside, one simple answer for a particular subset of the problem would be to create a DSN for a target SQL Server (or any server database) and in Access export table by table to the DSN. You can do this through the Access UI and it can be automated within Access with DoCmd.TransferDatabase. It can be a little fiddly figuring out the proper connect string, and you'd also need to do something about renaming the exported tables so there are no collisions between databases, but that can be handled quite easily in a bit of VBA code.
I post this only because many people overlook the Access capability to export to an ODBC DSN, which requires no writing of DDL and so forth. It may or may not make correct choices about target data types, though, so you'd have to see in any particular situation if it's good enough or not.
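For what it's worth, the same call is reachable from outside Access via COM automation too - a rough sketch (the DSN, path and table names are all made up, and it needs a COM reference to the Microsoft Access object library):

using Access = Microsoft.Office.Interop.Access;

var app = new Access.Application();
try
{
    app.OpenCurrentDatabase(@"C:\data\client1.mdb");
    // Export one table to the SQL Server DSN, renaming it on the way out
    // so tables from different client databases don't collide.
    app.DoCmd.TransferDatabase(
        Access.AcDataTransferType.acExport,
        "ODBC Database",
        "ODBC;DSN=MyServerDSN;",          // connect string for the target
        Access.AcObjectType.acTable,
        "Customers",                      // source table in the .mdb
        "Client1_Customers");             // renamed table on the server
}
finally
{
    app.CloseCurrentDatabase();
    app.Quit();
}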
The way I develop may not be correct, any advice welcome.
At the moment I have a WPF application that uses a SQL2008 database. I have a copy of the database on a laptop and on my home machine. My application is versioned using SVN, and I am able to go from the work laptop to the home machine and update/commit as required to ensure I am using the latest code for the application.
However, the database is a different story: for any change I make, I create a backup and then transfer the backup to the other machine etc. This way I get the data and the changes made on each system. To make this work, each machine's database connection uses a different connection string, and I change a setting in my app to use a different connection based on my location.
I have now started to use LINQ to SQL and DBML files in my application, and, finally getting to the question, I don't know how to change the connection string it uses in code so that it will use the correct database in the DBML.
Also, is there a better way to transfer the database so I don't need to do the backups and restores? The only reason I have not versioned the schema is that I am not sure how that would handle my data, and the data is key to my development, i.e. various environment settings etc. are stored in the DB and brought through at runtime.
Your Statement:
I have now started to use LINQ to SQL and DBML files in my application, and, finally getting to the question, I don't know how to change the connection string it uses in code so that it will use the correct database in the DBML.
Yes, it's possible:
MYDataContext mycontext = new MYDataContext("Your Connection String");
There is a constructor overload where you can change the connection string.
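So a minimal sketch for your two-location setup might look like this (the "WorkDb"/"HomeDb" connection string names and the Location user setting are made up for the example):

using System.Configuration; // reference System.Configuration.dll

// Pick the connection string for the current location, then hand it to the
// generated context instead of relying on its parameterless constructor.
string location = Properties.Settings.Default.Location;   // e.g. "Work" or "Home"
string cs = ConfigurationManager.ConnectionStrings[location + "Db"].ConnectionString;

using (var db = new MYDataContext(cs))
{
    // LINQ to SQL now talks to whichever copy of the database you chose
}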
This is such a common problem, and I have never found a minimal and clean solution to it. How to keep all the values and variables and databases and source files in sync between machines?
Well SVN works great for the source files.
For the database, I TRY to just use one DB if we can get away with it. All the devs point to one machine that hosts the DB; then we aren't wasting time with DB setup and merging. If that's not possible, then we usually just end up dumping the database when there is a change and distributing the .bak file around. You can try adding this file to SVN, and it works; you can even have the DB dump to a schedule so that SVN is always getting a new copy. But it's still too much work to keep restoring a DB over and over. Perhaps you could hook some scripting into SVN (we use TortoiseSVN on Windows) and have a job that would do that automatically. That'd be nice.
For the config files (I do ASP.NET, so I have web.config, connectionstrings.config, etc.) I do one of two things: either I manually copy the sections that need to be changed between machines and comment out the part that doesn't need to be used (clunky), or I've at times written ConfigurationSettings helper objects that resolve a config key to decide which setting to use, based on the current machine name. E.g.:
Say my current machine is DEV1. The server is SERVER1. I'll have config keys with names like DEV1.connections.sqlserver and SERVER1.connections.sqlserver. In the code I'll use the helper method GetConfig("connections.sqlserver"). GetConfig figures out which key to use based on the current machine name.
Using this method, I don't have to keep remembering to monkey around with the dozen .configs every time I upload to the server or change things. But I DO have to make a duplicate key for every machine that will be running the application, which can get a bit much. For large teams, instead of using machine names, I use group names and have a config key that assigns machine names to a group - with the idea that every machine in the group will have that application set up in an identical fashion - same file paths etc.
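A minimal version of that helper might look like this (assuming plain appSettings keys; the group-name indirection described above is left out for brevity):

using System;
using System.Configuration;

static class Config
{
    // Resolve "<MachineName>.key" first (e.g. "DEV1.connections.sqlserver"),
    // falling back to the bare key if no machine-specific entry exists.
    public static string GetConfig(string key)
    {
        string machineKey = Environment.MachineName + "." + key;
        return ConfigurationManager.AppSettings[machineKey]
            ?? ConfigurationManager.AppSettings[key];
    }
}

// usage: string cs = Config.GetConfig("connections.sqlserver");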
Now, on to your second question about LINQ: when you create a LINQ dbml, it will add a connection string to your config. You just have to make sure that you find this connection string and copy it into your active application. E.g.:
I have a solution that has 2 projects:
1 - website
2 - library
I put the dbml into the library project. If I go and look in the App.config of the library project, I'll see the connection string that LINQ wants to use. If I copy this connection string into the website's connectionstrings.config file, then when I reference the library and run the website, LINQ will be able to see the connection string it wants to use.
You can try SQL Server merge replication, using SQL Server Compact 3.5 as your laptop database and the master copy as your work/home machine database. Note, however, that you can only do this with SQL Server Standard Edition.
The other option is the Microsoft Sync Framework, here:
http://msdn.microsoft.com/en-us/sync/default.aspx
You could use Red Gate's SQL Compare and SQL Data Compare to script out changes to the database. You should be in the habit of scripting database changes anyway, as that is what you will need to do when it is time to move changes to prod. I would also make sure all database changes are in SVN; we never make any changes to the database without a script in source control.
I ended up just using multiple connection strings and then manually changing the connection on the dbml file whenever I moved locations. However I also have some code in place to programmatically change it based on the project setting for the location.
I haven't really got a good solution to the transferring of the databases and continue to use the backup and restore method.
Bear in mind here, I am not an Access guru. I am proficient with SQL Server and .Net framework. Here is my situation:
A very large MS Access 2007 application was built for my company by a contractor.
The application has been split into two tiers BY ACCESS: a front-end portion that holds all of the MS Access forms, and a back-end portion (Access tables, queries, etc.) that is stored on a computer on the network.
Well, of course, there is a need to convert the data storage portion to SQL Server 2005 while keeping all of these GUI forms which were built in MS Access. This is where I come in.
I have read a little, and have found that you can link the forms or maybe even the access tables to SQL Server tables, but I am still very unsure on what exactly can be done and how to do it.
Has anyone done this? Please comment on any capabilities, limitations, considerations about such an undertaking. Thanks!
Do not use the upsizing wizard from Access:
First, it won't work with SQL Server 2008.
Second, there is a much better tool for the job:
SSMA, the SQL Server Migration Assistant for Access which is provided for free by Microsoft.
It will do a lot for you:
move your data from Access to SQL Server
automatically link the tables back into Access
give you lots of information about potential issues due to differences in the two databases
keep track of the changes so you can keep the two synchronised over time until your migration is complete.
I wrote a blog entry about it recently.
You have a couple of options. The upsizing wizard does a decent(ish) job of moving structure and data from Access to SQL Server. You can then set up linked tables so your application 'should' work pretty much as it does now. Unfortunately, the SQL dialect used by Access is different from SQL Server's, so if there are any 'raw SQL' statements in the code they may need to be changed.
Since you've linked to the tables, though, all the other features of Access (the QBE, forms and so on) should work as expected. That's the simplest and probably the best approach.
Another way of approaching the issue would be to migrate the data as above, and then, rather than using linked tables, make use of ADO from within Access. That approach feels familiar if you're used to other languages/dev environments, but it's the wrong approach here. Access comes with loads of built-in stuff that makes working with data really easy; if you drop back to ADO/SQL you lose many of those benefits.
I suggest starting on a small, non-essential part of the application: migrate a few tables and see how it goes. Of course, you back everything up first.
Good luck
Others have suggested upsizing the Jet back end to SQL Server and linking via ODBC. In an ideal world, the app will work beautifully without needing to change anything.
In the real world, you'll find that some of your front-end objects that were engineered to be efficient and fast with a Jet back end don't actually work very well with a server database. Sometimes Jet guesses wrong and sends something really inefficient to the server. This is particularly the case with mass updates of records: in order not to hog server resources (a good thing), Jet will send a single UPDATE statement for each record (which is a bad thing for your app, since it's much, much slower than one UPDATE statement covering all the records).
What you have to do is evaluate everything in your app after you've upsized it and, where there are performance problems, move some of the logic to the server. This means you may create a few server-side views, or you may use passthrough queries (to hand the whole SQL statement off to SQL Server without letting Jet worry about it), or you may need to create stored procedures on the server (especially for update operations).
But in general, it's actually quite safe to assume that most of it will work fine without change. It likely won't be as fast as the old Access/Jet app, but that's where you can use SQL Profiler to figure out what the holdup is and re-architect things to be more efficient with the SQL Server back end.
If the Access app was already efficiently designed (e.g., forms are never bound to full tables, but instead to recordsources with restrictive WHERE clauses returning only 1 or a few records), then it will likely work pretty well. On the other hand, if it uses a lot of the bad practices seen in the Access sample databases and templates, you could run into huge problems.
It's my opinion that every Access/Jet app should be designed from the beginning with the idea that someday it will be upsized to use a server back end. This means that the Access/Jet app will actually be quite efficient and speedy, but also that when you do upsize, it will cause a minimum of pain.
This is your lowest-cost option. You're going to want to set up an ODBC connection for your Access clients pointing to your SQL Server. You can then use the (I think) "Import" option to "link" a table to SQL Server via the ODBC source. Migrate your data from the Access tables to SQL Server, and you have your data on SQL Server in a form you can manage and back up. Importantly, queries can then be written on SQL Server as views and presented to the Access DB as linked tables as well.
Linked Access tables work fine but I've only used them with ODBC and other databases (Firebird, MySQL, Sqlite3). Information on primary or foreign keys wasn't passing through. There were also problems with datatype interpretation: a date in MySQL is not the same thing as in Access VBA. I guess these problems aren't nearly as bad when using SQL Server.
Important Point: If you link the tables in Access to SQL Server, then EVERY table must have a Primary Key defined (Contractor? Access? Experience says that probably some tables don't have PKs). If a PK is not defined, then the Access forms will not be able to update and insert rows, rendering the tables effectively read-only.
Take a look at this Access to SQL Server migration tool. It might be one of the few, if not the ONLY, true peer-to-peer or server-to-server migration tools running as a pure Web Application. It uses mostly ASP 3.0, XML, the File System Object, the Data Dictionary Object, ADO, ADO Extensions (ADOX), the Dictionary Scripting Objects and a few other neat Microsoft techniques and technologies. If you have the Source Access Table on one server and the destination SQL Server on another server or even the same server and you want to run this as a Web Internet solution this is the product for you. This example discusses the VPASP Shopping Cart, but it will work for ANY version of Access and for ANY version of SQL Server from SQL 2000 to SQL 2008.
I am finishing up development for a generic Database Upgrade Conversion process involving the automated conversion of Access Table, View and Index Structures in a VPASP Shopping or any other Access System to their SQL Server 2005/2008 equivalents. It runs right from your server without the need for any outside assistance from external staff or consultants.
After creating a clone of your Access tables, indexes and views in SQL Server this data migration routine will selectively migrate all the data from your Access tables into your new SQL Server 2005/2008 tables without having to give out either your actual Access Database or the Table Contents or your passwords to anyone.
Here is the Reverse Engineering part of the process running against a system with almost 200 tables and almost 300 indexes and Views which is being done as a system acceptance test. Still a work in progress, but the core pieces are in place.
http://www.21stcenturyecommerce.com/SQLDDL/ViewDBTables.asp
I do the automated reverse engineering of the Access Table DDLs (Data Definition Language) and convert them into SQL equivalent DDL Statements, because table structures and even extra tables might be slightly different for every VPASP customer and for every version of VP-ASP out there.
I am finishing the actual data conversion routine which would migrate the data from Access to SQL Server after these new SQL Tables have been created including any views or indexes. It is written entirely in ASP, with VB Scripting, the File System Object (FSO), the Dictionary Object, XML, DHTML, JavaScript right now and runs pretty quickly as you will see against a SQL Server 2008 Database just for the sake of an example.
It takes perhaps 15-20 seconds to reverse engineer almost 500 different database objects. There might be a total of over 2,000 columns involved in this example for the 170 tables and 270 indexes involved.
I have even come up with a way for you to run both VPASP systems in parallel using 2 different database connection files on the same server just to be sure that orders entered on the Access System and the SQL Server system produce the same results before actual cutover to production.
John (a/k/a The SQL Dude)
sales@designersyles.biz
(This is a VP-ASP Demo Site)
Here is a technique I've heard one developer speak on. This is if you really want something like a Client-Server application.
1. Create .mdb/.mde front-end files distributed to each user (you'll see why).
2. For every table they need to perform CRUD on, have a local copy in the file from #1.
3. The forms stay linked to the local tables.
4. Write VBA code to handle the CRUD from the local tables to the SQL Server database.
5. Reports can be based off of temp tables created from SQL Server (you won't be able to create temp tables in an .mde file, I don't think).
Once you decide how you want to do this with a single form, it is not too difficult to apply the same technique to the rest. The nice thing about working with the form on a local table is you can keep a lot of the existing functionality as the existing application (Which is why they used and continue to use Access I hope). You just need to address getting data back and forth to the SQL Server.
You can continue to have linked tables, and then gradually phase them out with this technique as time and performance needs dictate.
Since each user has their own local file, they can work on their local copy of the data. Only the minimum required to do their task should ever be copied locally. Example: if they are updating a single record, the table would only have that record. When a user adds a new record, you would notice that the ID field for the record is Null, so an insert statement is needed.
I guess the local table acts like a dataset in .NET? I'm sure in some way this is an imperfect analogy.
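Running with that analogy, the write-back decision in step 4 boils down to something like this (a rough C# rendering rather than the VBA you'd actually write in Access, and the table and column names are invented):

using System;
using System.Data;
using System.Data.SqlClient;

// Push one locally edited row to SQL Server: a Null ID means the row was
// created locally and needs an INSERT; otherwise it's an UPDATE.
static void SaveRow(SqlConnection conn, DataRow row)
{
    bool isNew = row["CustomerID"] == DBNull.Value;
    using (var cmd = new SqlCommand(isNew
        ? "INSERT INTO Customers (CustomerName) VALUES (@name)"
        : "UPDATE Customers SET CustomerName = @name WHERE CustomerID = @id", conn))
    {
        cmd.Parameters.AddWithValue("@name", row["CustomerName"]);
        if (!isNew) cmd.Parameters.AddWithValue("@id", row["CustomerID"]);
        cmd.ExecuteNonQuery();
    }
}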
I have an application that uses MS SQL Server for which I'll need to do a bulk insert from a file. The sticking point is that the database and my application will be hosted on separate servers. What is the best way to do a bulk insert across a network? Two ideas I'd come up with so far:
From the app server, share a directory that the db server can find, and do the import using a bulk insert statement from the remote file
Run an FTP server from the db server - when the import is performed, simply ftp the file to the db server and do the import using a bulk insert from the local file (I am leaning towards this option).
Can anyone else tell me if there is a better way to do this, or if not, which one makes the most sense, and why?
I've done it before, and tried both options.
In the end, I did the opposite of choice 1: share a directory on the DB server that the app can find. That way you don't have to deal with bandwidth issues during the bulk insert.
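As a sketch of what that looks like from the app side (the share name, paths and table name are invented; the path in the BULK INSERT is the share's local path as the DB server sees it):

using System.Data.SqlClient;
using System.IO;

// Copy the file onto a share that physically lives on the DB server, then
// have SQL Server bulk insert it via that machine's local path.
static void BulkLoad(string connectionString, string localCsvPath)
{
    File.Copy(localCsvPath, @"\\dbserver\imports\data.csv", true);
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        @"BULK INSERT dbo.MyTable
          FROM 'D:\imports\data.csv'
          WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')", conn))
    {
        conn.Open();
        cmd.CommandTimeout = 0;   // large files can take a while
        cmd.ExecuteNonQuery();
    }
}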
The FTP server option works if you're particularly concerned with security or transferability.
A final option (be very careful) is to use DTS with a localized SQL server. It might be more secure. If you do it wrong, it'll be much less efficient.
I'm still looking for a way to do this in MS SQL, but what I did in MySQL was to save the CSV as a BLOB in a temporary table, run SELECT ... INTO DUMPFILE to write it to a directory local to the DB server, and then run the regular LOAD DATA statement on that local file. (EDIT: I feel I should point out that this was before LOAD DATA LOCAL was available.)
I think spWriteStringToFile will do it.
EDIT: I'm on to something.
// spWriteStringToFile is a helper stored procedure that writes a string out to
// a file on the database server; comm is a SqlCommand on an open connection.
comm.CommandText = @"EXEC spWriteStringToFile @data, 'c:\datadumps', 'data.csv';
BULK INSERT my_table FROM 'c:\datadumps\data.csv';";
comm.Parameters.AddWithValue("@data", File.ReadAllText(path));
comm.ExecuteNonQuery();
If the file is small enough then the FTP option may work (you will have a duplicate on your DB box). However, I don't see much of a problem with option 1 if you have a gigabit network between the two with a single hop.
I am not sure about SQL Server 2008, as there are some additional utilities to copy files between servers. FTP will be the faster of the two options, because it doesn't use Windows file sharing to transfer the file (there is overhead, but unless the file is large it might be negligible). There is another advantage to not using a share: you aren't left hanging if the remote server holding the file crashes mid-access.
I disagree with cmartin about adding an open share to the DB server. Generally you don't want to open file shares on the DB server, as it is considered a security risk and a lot of places won't allow it. That said, it would save you having to transfer the file to another location to use the bulk import.
Actually, you can place the file on both the application server and the database server, at the same path on each.
The copy on the application server is only there for selection; the one on the database server is what the bulk insert uses.
This is what I did for my customer, and they were happy with this simple trick.