Compatible DDL (CREATE TABLE) across different SQL databases? - sql-server

I'm working on a desktop application that must support (currently) MS Access and SQL Server back ends. The application is under constant development and changes are frequently being made to the database, mostly the addition of tables and views to support new features (but also some DROPs and ALTER TABLEs to add new columns).
I have a system that compiles the DDL into the executable, checks the database to see if the executable has any new DDL that has to be executed, and executes it. This works fine for a single database.
My immediate problem is that SQL Server and Access support wildly different names for data types so a CREATE TABLE statement that executes against Access will not execute against SQL Server (or worse, will execute but create a table with different datatypes).
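For illustration, here is roughly what the same table looks like in each dialect (a rough sketch only - the exact type names accepted depend on the Jet/ACE version and the provider you execute through, and the table itself is made up):

    -- Access (Jet) dialect, executed through the Jet OLE DB provider:
    CREATE TABLE Customer (
        CustomerID  COUNTER      CONSTRAINT PK_Customer PRIMARY KEY,
        Name        TEXT(100)    NOT NULL,
        Balance     CURRENCY,
        IsActive    YESNO
    );

    -- SQL Server (T-SQL) equivalent:
    CREATE TABLE Customer (
        CustomerID  INT IDENTITY CONSTRAINT PK_Customer PRIMARY KEY,
        Name        VARCHAR(100) NOT NULL,
        Balance     MONEY,
        IsActive    BIT
    );

A bare TEXT column is an example of the "worse" case: Jet creates a Memo field, while SQL Server creates its (deprecated) TEXT LOB type, so the same statement can run on both and still leave you with different datatypes.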
Is there a method that can be used to create DDL (especially CREATE TABLE commands) that can be executed through ADO against both of these databases without having to craft separate DDL for each provider?

Since you are already using ADO, you should look into Microsoft ADOX.
This allows you to manipulate structures in a data source using an ADO object model that is independent of the underlying data source type, i.e. without resorting to explicit DDL.
Support for ADOX is not guaranteed by any given ADO Provider, and the level of ADOX support may vary even when it is available. But for MS Access and MS SQL Server I think you will find all the capability you require (and quite possibly more!)

This can be done using DBX in Delphi.
The following link points to sample code showing how this can be done.
http://cc.embarcadero.com/item/26210

I had the same problem.
I resolved it by applying a C preprocessor to the SQL before executing it.
The preprocessor uses macros to handle the differences between databases.
Stefano

Did you check
http://db.apache.org/ddlutils/
or
http://publib.boulder.ibm.com/infocenter/wtelecom/v6r1/index.jsp?topic=%2Fcom.ibm.twss.plan.doc%2Fdb_scripts.html

Related

Insert Data From Access Database to SQL Server

Desired result and why:
I have a lot of old Access databases that we are trying to get to SQL Server, and I'm essentially trying to make the Access DB the "middleman" so our old programs can still read/write to them but the information will also be saved in SQL Server. We need the middleman because of how interconnected these tables are through various programs we are rewriting in modern languages. Once we rewrite all of them we will cut the cord and live in SQL Server, but this will take a lot of time.
What I've tried:
We tried creating a linked table to SQL Server and renaming it so it would take the place of the original table. After doing this the table stopped receiving data, so we quickly reverted.
In order to investigate this I created Table B which is just another linked table to SQL Server, and then tried using the After Insert macro on Table A to send any new rows to the linked table but nothing happens. If I manually add a record to Table B it carries over to SQL Server just fine, but I can't get Table A to send data to Table B. I created Table C that is just a local access table and if I manually add a record to Table A it does show up in Table C. No errors at all, it just doesn't do what I need it to do.
I'm lost on how to accomplish this and open to any help or suggestions on how to move forward. One thing to note, though, is that most of the Access databases I have are not using forms at all, which is why I'm trying to take the macro route instead of any VBA. I need these to trigger without any interaction from the user.
You should use the tool dedicated to this task:
SQL Server Migration Assistant for Access (AccessToSQL)
Ok, there are (from the comments) some new and significant moving parts here.
For example, data is to be migrated to SQL Server. As noted, EVEN in Access land, each and every table needs and should have a PK for "basic" database operations. While it is possible to do some work, and say some importing of data, without one, the instant one wants forms, VBA code, and a working application? Then all tables should have a PK.
And of course if you moved the data to SQL Server, then it is not going to make a lot of sense to have OTHER applications attempt to modify the linked tables in Access, since the data is not in Access anymore! Those other sources in theory should thus also hit SQL Server, and not attempt to use what amounts to a link on a linked table.
However, it does depend. For example, if you use vb.net code and say open an Access database, you CAN in fact have that vb.net code open an Access table, and in fact it can be a linked table. (However, it would make a WHOLE lot more sense for the vb.net code to open and hit SQL Server - introducing a link on a link is going to be problematic.)
However, in testing, I have found that vb.net can open an Access table, and even if it is a link, Access will translate through the Jet engine (the Access data engine), so you can do this.
However, data macros and table triggers on existing Access tables? They might work on linked tables, but you of course need to ensure that the linked table allows edits and allows inserts. Only AFTER you have verified that you can click on a linked table to SQL Server, edit it, and add to it, should you mess around with data macros and triggers on, say, local tables.
It also depends on what new software tools and platform are being used here.
But, from a basic database point of view - and general data management?
All code and designs should assume, and be designed around the assumption, that each row of data has a PK. This is not always possible, but the exceptions are a RARE use case.
Practical data management - and use of a database - should, from table designs, from workflow designs, and from a developer point of view, assume the concept of a PK row id. Without that assumption you are not in the software industry anymore - but in a hack field, and one that will result in great future difficulty when attempting to build workflows and general information systems.
So, with the above in mind: your table B - it has to work as a valid SQL Server table.
The SQL Server table(s): they need a PK, and after linking to SQL Server you can open up the linked table in Access. Test if edits work, test if adding works, and even perhaps test if deletes work. Only AFTER such time do you want to start testing any code or other operations from the Access client side.
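To make that concrete, here is a hedged sketch of the kind of SQL Server table definition that tends to link and edit cleanly from Access - the table and column names are hypothetical, and the rowversion column is optional but helps Access detect concurrent changes on linked tables:

    CREATE TABLE dbo.TableB (
        ID        INT IDENTITY(1,1) NOT NULL
            CONSTRAINT PK_TableB PRIMARY KEY,     -- the PK Access needs for an updatable link
        SomeText  NVARCHAR(255)     NULL,
        CreatedAt DATETIME          NOT NULL
            CONSTRAINT DF_TableB_CreatedAt DEFAULT (GETDATE()),
        RowVer    ROWVERSION        NOT NULL      -- helps Access with write-conflict detection
    );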
Introduction of using a linked table from another application? That is a foggy area, but I can confirm, for example, that the .net OleDb provider will and can open an Access database and use + consume even linked tables.
You also don't mention if you are using a SQL logon, or Windows auth, for the SQL Server linked tables. But if you are using SQL logons, then when linking a table you are offered a "Save password" check box - and you want to ensure you selected it when linking the table(s) in question.
Note that you ONLY get this prompt on the first-time creation of the table link - later use of the Linked Table Manager (such as refreshing links) does not offer this prompt. If you don't select the save password option, then you will often see a SQL logon prompt when you attempt to open a linked table in Access.

CLR Stored Procedure v regular SQL Stored Procedure

I have been writing a CLR stored procedure that moves data from one database to another. I went with the CLR stored procedure because I like the .NET framework's ability to connect to remote servers better than I like linked servers, or openrowset, but I now find that my class is mostly embedded SQL strings. I was considering just using the CLR stored procedures to retrieve the data onto the local SQL Server, and then using a regular SQL stored procedure for the actual inserts and updates.
I'm not worried about pre-compilation of the procedure or performance, and I do like that the CLR procedure allows me to see all of the logic in one place, read from top to bottom.
Are there any reasons I should consider moving to a TSQL solution instead of CLR?
Thanks.
There are multiple reasons why you would stick to a regular stored procedure. I'll try to give you an overview of the ones that I know of:
Performance.
Memory issues. SQL Server only manages memory within its own max memory setting; CLR allocations fall outside that bound. This could compromise other applications (and the OS) running on the server.
Updatability. You can update a stored procedure with a simple script. CLR assemblies are more complicated to update.
Security. CLR often requires more security settings than regular T-SQL.
As a general rule you only want to use CLR for:
interaction with the OS, such as reading from a file or dropping a message in MSMQ
performing complex calculations, especially when you already have the code written in a .NET language to do the calculation.
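Given the plan described in the question (CLR fetches the remote data, plain T-SQL does the inserts/updates), the local half could be an ordinary stored procedure along these lines - a hedged sketch only, with made-up table and column names and an assumed local staging table:

    CREATE PROCEDURE dbo.UpsertCustomers
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Update rows that already exist in the target table.
        UPDATE tgt
        SET    tgt.Name    = src.Name,
               tgt.Balance = src.Balance
        FROM   dbo.Customers AS tgt
        JOIN   dbo.Staging_Customers AS src
               ON src.CustomerID = tgt.CustomerID;

        -- Insert rows that are new.
        INSERT INTO dbo.Customers (CustomerID, Name, Balance)
        SELECT src.CustomerID, src.Name, src.Balance
        FROM   dbo.Staging_Customers AS src
        WHERE  NOT EXISTS (SELECT 1
                           FROM   dbo.Customers AS tgt
                           WHERE  tgt.CustomerID = src.CustomerID);
    END;

The CLR procedure (or anything else) can then call EXEC dbo.UpsertCustomers once the staging table is populated, keeping the set-based logic in T-SQL where it is easiest to read and tune.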

Common function / stored procedures for all databases

We have a database server and it has about 10 databases.
I would like to create some functions / stored procedures which can be used in all databases.
For example, we can use sp_executesql in any database.
We have some requirements like that (getting current academic year, financial year, etc...)
Is it doable?
As others have suggested, you could put objects into the master database, but Microsoft explicitly recommends that you should not do that. I find that solution to be rather risky anyway, because the master database is 'owned' by the system, not by you, so there are no guarantees that it will continue to behave in the same way in the future.
Instead, I would consider this to be primarily a deployment issue. There are (at least) two strategies you could use:
Deploy the objects to every database
Deploy them to one 'reference' database that is only used for shared objects and create synonyms in the other databases
The second option is perhaps the better one, because if your functions use tables (e.g. you use a calendar table to get the academic year, which is much easier than calculating it) then you would have to create the same tables in every database too. By using synonyms, you only have to maintain one set of tables.
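A minimal sketch of the synonym approach, assuming SQL Server 2005 or later (synonyms are not available earlier); the database name, function name, and the academic-year rule are all placeholders:

    -- In the shared reference database:
    USE RefDB;
    GO
    CREATE FUNCTION dbo.GetAcademicYear (@AsOf DATE)
    RETURNS INT
    AS
    BEGIN
        -- Assumes the academic year starts on 1 August; adjust to your rules.
        RETURN YEAR(@AsOf) - CASE WHEN MONTH(@AsOf) < 8 THEN 1 ELSE 0 END;
    END;
    GO

    -- In each of the ten application databases:
    CREATE SYNONYM dbo.GetAcademicYear FOR RefDB.dbo.GetAcademicYear;

After that, SELECT dbo.GetAcademicYear(GETDATE()) should behave identically in every database, and the function body only has to be maintained in one place.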
For the actual deployment, it's straightforward to use scripting to manage the objects, because you just need a list of databases to connect to and run each DDL script against. You can do that using batch files and SQLCMD (perhaps with SQLCMD variables in your .sql scripts), or drive it from PowerShell or any other language that you prefer.
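For example, strategy one (deploy the objects to every database) can be driven by a single SQLCMD-mode script - the server, database, and object names below are placeholders:

    -- shared_objects.sql, run once per database, e.g.:
    --   sqlcmd -S MyServer -E -v TargetDb="AppDb1" -i shared_objects.sql
    USE [$(TargetDb)];
    GO
    IF OBJECT_ID('dbo.GetFinancialYear', 'FN') IS NOT NULL
        DROP FUNCTION dbo.GetFinancialYear;
    GO
    CREATE FUNCTION dbo.GetFinancialYear (@AsOf DATE)
    RETURNS INT
    AS
    BEGIN
        -- Assumes the financial year starts on 1 April; adjust to your rules.
        RETURN YEAR(@AsOf) - CASE WHEN MONTH(@AsOf) < 4 THEN 1 ELSE 0 END;
    END;
    GO

A small batch file or PowerShell loop can then run the sqlcmd line once for each database in your list.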
Depending upon what the SP actually does, you want to create the procedure in master, name it with sp_ and mark it as a system procedure:
http://weblogs.sqlteam.com/mladenp/archive/2007/01/18/58287.aspx
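A hedged sketch of what that looks like (the procedure name and body are made up, and sp_MS_marksystemobject is undocumented, as the next answer points out):

    USE master;
    GO
    CREATE PROCEDURE dbo.sp_GetAcademicYear
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Assumes the academic year starts on 1 August; adjust to your rules.
        SELECT YEAR(GETDATE()) - CASE WHEN MONTH(GETDATE()) < 8 THEN 1 ELSE 0 END AS AcademicYear;
    END;
    GO
    -- Mark it as a system object so it runs in the context of the calling database:
    EXEC sp_MS_marksystemobject 'sp_GetAcademicYear';

EXEC sp_GetAcademicYear should then work from any database on the server.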
A couple of options:
You can use a system stored procedure as Cade says. I've done this in the past and it works ok. One warning on this is that the sp_MS_marksystemobject procedure is undocumented, which means it could vanish or change without warning in future SQL Server versions. Thinking back, I believe there were other problems using this approach with functions, though.
Another approach is to use standardized procedures and functions, and roll them out across your databases using sp_MSforeachdb to run code against every database. If you need to run against only your 10 databases, you can copy the code in this procedure and modify it to check that a database matches your schema before running the code (or you can write your own version that does a similar thing).
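A rough sketch of that approach (sp_MSforeachdb is also undocumented; the marker table and the procedure being created are placeholders):

    EXEC sp_MSforeachdb N'
    USE [?];
    IF EXISTS (SELECT 1 FROM sys.tables WHERE name = ''StudentTerm'')   -- only databases matching our schema
       AND OBJECT_ID(''dbo.usp_GetFinancialYear'') IS NULL
    BEGIN
        EXEC (''CREATE PROCEDURE dbo.usp_GetFinancialYear AS SELECT YEAR(GETDATE()) AS FinancialYear'');
    END';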

how to handle db schema updates when using schemabinding and updating often

I'm using a MS SQL Server db and use plenty of views (for use with an O/R mapper). A little annoyance is that I'd like to
use schema binding
update with scripts (to deploy on servers and put in a source control system)
but run into the issue that whenever I want to e.g. add a column to a table, I have to first drop all views that reference that table, update the table, and then recreate the views, even if the views wouldn't otherwise need to be updated. This makes my update scripts a lot longer, and also, looking at the diffs in the source control system, it is harder to see what the actual relevant change was.
Is there a better way to handle this?
I need to still be able to use simple and source-controllable SQL updates. A code generator like the one included in SQL Server Management Studio would be helpful, but I had issues with it in that it tends to create code that does not specify the names for some indexes or (default) constraints. But I want to have identical dbs when I run my scripts on different systems, including the names of all constraints etc., so that I don't have to jump through hoops when updating those constraints later.
So perhaps a smarter SQL code generator would be a solution?
My workflow now is:
type the alter table statement in query editor
check if I get an error statement like "cannot ALTER 'XXX' because it is being referenced by object 'YYY'."
use SQL Server Management Studio to script the CREATE code for the referenced object
insert a drop statement before the alter statement and a create statement after it
check if the drop statement causes an error, and repeat
this annoys me, but perhaps I simply have to live with it if I want to continue using schemabinding and script updates...
You can at least eliminate the "check if I get an error" step by querying a few dynamic management functions and system views to find your dependencies. This article gives a decent explanation of how to do that. Beyond that, I think you're right: you can't have your cake and eat it too with schema binding.
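For example, something along these lines (SQL Server 2008 or later; dbo.MyTable is a placeholder for the table you are about to alter):

    -- Everything that references the table:
    SELECT referencing_schema_name, referencing_entity_name
    FROM   sys.dm_sql_referencing_entities('dbo.MyTable', 'OBJECT');

    -- Or just the schema-bound references, which are the ones that block ALTER TABLE:
    SELECT OBJECT_SCHEMA_NAME(d.referencing_id) AS schema_name,
           OBJECT_NAME(d.referencing_id)        AS object_name
    FROM   sys.sql_expression_dependencies AS d
    WHERE  d.referenced_id = OBJECT_ID('dbo.MyTable')
      AND  d.is_schema_bound_reference = 1;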
Also keep in mind that dropping/creating views will cause you to lose any permissions that were granted on those objects, so those permissions should be included in your scripts as well.

Changing VC++6 app database from Access to SQL Server - can I use linked tables?

We have a Visual C++ 6 app that stores data in an Access database using DAO. The database classes have been made using the ClassWizard, basing them on CDaoRecordset.
We need to move from Access to SQL Server because some clients have huge (1.5Gb+) databases that are really slow to run reports on (using Crystal Reports and a different app).
We're not too worried about performance on this VC++ app - it is downloading data from data recorders and putting it in the database.
I used the "Microsoft SQL Server Migration Assistant 2008 for Access" to migrate my database from Access into SQL Server - it then linked the tables in the original Access database. If I open the Access database then I can browse the data in the SQL Server database.
I've then tried to use that database with my app and keep running into problems.
I've changed all my recordsets to be dbOpenDynaset instead of dbOpenTable. I also changed the myrecordsetptr->open() calls to be myrecordsetptr->open(dbOpenDynaset, NULL, dbSeeChanges) so that I don't get an exception.
But... I'm now stuck getting an exception 3251 - 'Operation is not supported for this type of object' for one of my tables when I try to set the current index using myrecordsetptr->SetCurrentIndex(_T("PrimaryKey"));
Are there any tricks to getting the linked tables to work without rewriting all the database access code?
[UPDATE 17/7/09 - thanks for the tips - I'll change all the Seek() references to FindFirst() / FindNext() and update this based on how I go]
Yes, but I don't think you can set/change the index of a linked table in the recordset, so you'll have to change the code accordingly.
For instance: if your code is expecting to set an index and call Seek, you'll basically have to rewrite it to use the Find methods instead.
Why are you using SetCurrentIndex when you have moved your table from Access to SQL Server?
I mean - you are using Access only for linked tables.
Also, as per this page, SetCurrentIndex can be used with table-type recordsets.
In what context are you using the command SetCurrentIndex? If it's a subroutine that uses SEEK you can't use it with linked tables.
Also, it's Jet-only and isn't going to be of any value with a different back end.
I advise against the use of SEEK (even in Access with Jet tables) except for the most unusual situations where you need to jump around a single table thousands of times in a loop. In all other DAO circumstances, you should either be retrieving a limited number of records by using a restrictive WHERE clause (if you're using SEEK to get to a single record), or you should be using .FindFirst/FindNext. Yes, the latter two are proportionally much slower than SEEK, but they are much more portable, and also the absolute performance difference is only going to be relevant if you're doing thousands of them.
Also, if your SEEK is on an ordered field, you can optimize your navigation by checking whether the sought value is greater or lesser than the value of the current record, and choosing .FindPrevious or .FindNext, accordingly (because the DAO recordset Find operations work sequentially through the index).
