We're building an application for a client on the assumption that they'd be upgrading from SQL Server 2000 to a minimum of SQL Server 2005. We've finished our application, built on 2005, and are ready to integrate. It turns out they're not going to upgrade their DB server.
So, now we're stuck with trying to sort out what will break.
We don't have access to SQL Server 2000, so the best we can do is set the database's compatibility level to 80.
Aside from complete testing and reviewing every stored procedure (and I've read that changing the compatibility level is not foolproof, so that kind of testing wouldn't be bombproof), is there any other way to determine what will break? Any tools out there? Scripts?
Edit
I'd prefer not to try restoring this onto their production DB server to see what errors are spit out, so that's not a good option.
I suggest you look in Books Online for the page that spells out the differences between the two versions and look for those things. You can go over that list and then search for the new keywords in the table where the stored procedure text is stored. That will give you a starting list.
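A rough sketch of that keyword search, run against your 2005 copy of the database (the keyword list here is only illustrative; extend it from the Books Online differences page and expect some false positives):
SELECT DISTINCT OBJECT_NAME(c.id) AS proc_name
FROM syscomments c
WHERE c.text LIKE '%varchar(max)%'
   OR c.text LIKE '%ROW_NUMBER%'
   OR c.text LIKE '%PIVOT%'
   OR c.text LIKE '%TRY%' -- crude pattern, will over-match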
#rwmnau noted some good ones; I'll add two more:
SQL Server 2000 does not have varchar(max) or nvarchar(max); use text or ntext instead (see the example after these two notes).
SQL Server 2000 also does not have SSIS; if you are creating SSIS packages to import data, move data to a data warehouse, or export data, all of those need to be redone in DTS.
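For example, a column that used the 2005-only max types would need something like this on 2000 (the table and column names here are made up):
-- On SQL Server 2005 you might have written:
--   ALTER TABLE dbo.Notes ADD Body nvarchar(max) NULL
-- On SQL Server 2000 the closest equivalent is text/ntext:
ALTER TABLE dbo.Notes ADD Body ntext NULL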
Also it looks to me like you can still download the free edition of SQL Server 2000:
http://www.microsoft.com/downloads/details.aspx?familyid=413744d1-a0bc-479f-bafa-e4b278eb9147&displaylang=en
You might want to do that and test on that.
I wouldn't be worried about your ANSI SQL (setting the database compatibility level should take care of most of that), but there are a few big features you may have used that aren't available in SQL 2000 (there are many more, but these are the ones I've seen that are most popular; a quick example follows the list):
Common Table Expressions (CTE) - http://msdn.microsoft.com/en-us/library/ms190766.aspx
TRY...CATCH blocks
CLR-integrated stored procs
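For illustration, both of the following are everyday 2005 constructs that simply don't parse on SQL Server 2000 (and, as far as I know, they still run under compatibility level 80 on a 2005 instance, which is part of why the compatibility setting alone isn't a reliable test):
-- a recursive CTE
WITH Numbers (n) AS
(
    SELECT 1
    UNION ALL
    SELECT n + 1 FROM Numbers WHERE n < 10
)
SELECT n FROM Numbers;

-- structured error handling
BEGIN TRY
    SELECT 1 / 0;
END TRY
BEGIN CATCH
    SELECT ERROR_MESSAGE();
END CATCH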
Also, though you shouldn't be selecting directly from system tables (objects that begin with "sys" or live in the "sys" schema), note that these may have changed dramatically between SQL 2000 and 2005+, so I'd check whether you're selecting from any of them:
SELECT *
FROM syscomments --I know, using a sys table to figure it out :)
WHERE text like '%sys%'
Also, it's worth noting that while extended support is available for a hefty fee, Microsoft has officially ended mainstream support for SQL 2000 and will end extended support in the near future. This leaves your client (and you) without any updates from Microsoft for security patches, bugs, or anything else you discover. I'd strongly encourage them to upgrade to a newer version (at least 2005), though I suspect you've already been down that road.
Can anyone please suggest a way to import an MS Access 2019 desktop database (tables and data) into SQL Server 2019 Developer edition to create a new SQL Server database?
I know this question has been asked many times for earlier versions of this software but I am hoping that there may be a 2019+ method for the 2019 versions.
Thanks in advance.
If you're looking to import just one or two tables, then use the SQL Server management tools. However, those imports don't even preserve your primary keys and indexes, and such an import does not support relationships between tables.
However, if you're looking to move up lots of tables, keep and set your primary keys, keep and set your indexing, and also move up the related data between tables?
Then I suggest you use SQL Server Migration Assistant for Access.
The so-called SSMA can be found here:
https://www.microsoft.com/en-us/download/details.aspx?id=54255
Note that you can download either an x86 or an x64 version. While your locally running copy of SQL Server can (and even should) be 64-bit, it is still very common for your Access/Office install to be 32-bit. As a result, you want to download and run the x86 version of SSMA.
While totally free, it is a relatively complex package, so try a few test migrations first. I highly recommend this package since, as noted, it has the smarts to move not only tables, but also indexes and even the relationships between tables.
I also strongly suggest that you change the default mapping for the data types. By default, SSMA will use datetime2 for any Access Date/Time column, and I strongly suggest you change that default back to the SQL Server datetime type for dates; you really do not want the datetime2 default.
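If a Date/Time column has already come across as datetime2, switching it back is a one-liner (the table and column names here are invented for illustration):
-- change a migrated column from datetime2 back to the classic datetime type
ALTER TABLE dbo.Orders ALTER COLUMN OrderDate datetime NULL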
You can also have it try to move your saved Access queries up to SQL Server as views. In most cases I don't do this (that's another long post), and I don't recommend it in most cases, but each use case is different.
In summary:
To import a table or maybe 2-3 tables, use the SQL Server management tools.
To import a whole lot of tables and keep things like relationships intact, use SSMA for Access.
I have a scenario where I get queries on a webservice that need to be executed on a database.
The source of these queries is a physical device, so I can't really change the input to my queries.
I get the queries from the device in MSSQL (T-SQL) form. Earlier the backend was SQL Server, so things were pretty straightforward: queries would come in and get executed as-is on the DB.
Now we have migrated to Postgres, and we don't have the option to modify the input data (the SQL queries).
What I want to know is: is there any library that will do this SQL Server/T-SQL translation for me, so I can run the SQL Server queries through it and execute the resulting Postgres queries on the database? I searched a lot but couldn't find much that would do this. (There are libraries that convert the schema from one to the other, but what I need is to translate SQL Server queries to Postgres on the fly.)
I understand there are quite a few nuances that differ between T-SQL and Postgres, so a translator will be needed in between. I am open to libraries in any language (preferably one that runs on Linux :) ), and any other suggestions on how to go about this would also be welcome.
Thanks!
If I were in your position, I would look at upgrading your SQL Server to 2019 ASAP (as of today, you can find on Twitter that the officially supported, production-ready version is available on request). Then have a look at the PolyBase feature they (re)introduced in this version. In short, it allows you to connect your MSSQL instance to another data source (like Postgres) and query the data as if it were a "normal" SQL Server database (via T-SQL); in the background your queries are transformed into native pgsql and served from your real source.
There are not many resources on this feature (in its 2019 incarnation) yet, but it seems to be one of the most powerful capabilities coming with this release.
This is what Books Online says about it (unfortunately, it mostly covers the older 2016 version).
There is an excellent, yet very short, presentation by Bob Ward (Principal Architect @ Microsoft) that he gave during SQLBits 2019 on this topic.
The only thing I can think of that might be worth trying is SQL::Translator. It's a set of Perl modules that have been around for ages but seem to be still maintained. Whether it does what you want will depend on how detailed those queries are.
The no-brainer solution is to keep a SQL Server Express in place and introduce Triggers that call out to the Postgres database.
If this is too heavy, you can look at creating a Tabular Data Stream gateway (TDS is SQL Server's network transport) with limited functionality and map each possible incoming query, with any parameters, to a static Postgres query. This limits any testing to a finite, small number of cases.
This way, there is no SQL Server, and you have more control than with the trigger option.
If your terminals demand only a limited dialect, this may be practical. Attempting a general translation is very likely to cost more than the devices are worth replacing (unless you have zillions already deployed).
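For example, even a trivial device query needs a dialect-level rewrite before it will run on Postgres (purely illustrative table and column names):
-- incoming T-SQL from the device
SELECT TOP 10 * FROM readings WHERE taken_at >= GETDATE() - 1 ORDER BY taken_at DESC;
-- the static Postgres query the gateway would issue instead
SELECT * FROM readings WHERE taken_at >= now() - interval '1 day' ORDER BY taken_at DESC LIMIT 10;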
There is an open implementation, FreeTDS, that you could use if you are happy with C or Java.
I'm looking for the benefits of upgrading from SQL Server 2000 to 2008.
I was wondering:
What database features can we leverage with 2008 that we can't now?
What new TSQL features can we look forward to using?
What performance benefits can we expect to see?
What else will make management go for it?
And the converse:
What problems can we expect to encounter?
What other problems have people found when migrating?
Why fix something that isn't (technically) broken?
We work in a Java shop, so any .NET/CLR stuff won't rock our world. We also use Eclipse as our main development environment, so any integration with Visual Studio won't be a plus. We do use SQL Server Management Studio, however.
Some background:
Our main database machine is a 32-bit Dell server with Intel Xeon MP 2.0 GHz CPUs and 40 GB of RAM using Physical Address Extension, running Windows Server 2003 Enterprise Edition. We will not be changing our hardware. Our databases in total are under a TB, with some having more than 200 tables. But they are busy, and during busy times we see 60-80% CPU utilisation.
Apart from the fact that SQL Server 2000 is coming close to end of life, why should we upgrade?
Any and all contributions are appreciated!
Besides all the features MatthewPK mentions, I also really like:
Common Table Expressions (CTE) (which I find extremely helpful) - see Using Common Table Expressions, SQL Server CTE Basics or SQL Server 2005 Common Table Expressions for more details
Ranking functions like ROW_NUMBER, RANK, DENSE_RANK and NTILE - see Ranking Functions (on MSDN) or New Ranking Functions in SQL Server 2005 for more details
OUTPUT clause in SQL statements to output information about e.g. rows you've deleted with the DELETE statement, or updated with your MERGE statement - see the SQL Server Books Online for more details.
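A small taste of the last two, with made-up table and column names:
-- latest order per customer via ROW_NUMBER (painful to do this cleanly in 2000)
SELECT CustomerID, OrderID, OrderDate
FROM (
    SELECT CustomerID, OrderID, OrderDate,
           ROW_NUMBER() OVER (PARTITION BY CustomerID ORDER BY OrderDate DESC) AS rn
    FROM dbo.Orders
) AS ranked
WHERE rn = 1;

-- OUTPUT clause: see exactly which rows a DELETE removed
DELETE FROM dbo.Orders
OUTPUT deleted.OrderID, deleted.OrderDate
WHERE OrderDate < '20000101';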
I'm taking care of an old SQL Server 2000 solution, and boy, how many times have I missed those features!
There are a number of reasons to make the migration, I'm sure.
My favorites are:
New DATE datatype (no more having to format strings to compare timestamped dates)
New Spatial Data types (geometry, geography)
New MERGE statement is great for upserts or any other "if exists" type logic
FILESTREAM gets you out of the blob problems (enforced DB integrity on filesystem directories!)
IMHO, from a developer's perspective, the most important upgrade is the table-valued parameter (TVP).
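A minimal TVP sketch, with invented names, just to show the shape of it:
-- define a reusable table type, then accept it as a read-only parameter
CREATE TYPE dbo.IdList AS TABLE (Id int NOT NULL PRIMARY KEY);
GO
CREATE PROCEDURE dbo.GetOrdersByIds
    @Ids dbo.IdList READONLY
AS
    SELECT o.*
    FROM dbo.Orders AS o
    JOIN @Ids AS i ON o.OrderID = i.Id;
GO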
The only shortfall I've personally encountered is that I had to rewrite my DTS packages to SSIS packages (but I think SSIS is great... just more work)
From a purely practical perspective, the most compelling advantages for me are the several powerful T-SQL commands that are not available in 2000, e.g. PIVOT/UNPIVOT, and the addition of IntelliSense syntax completion to the 2008 Management Studio, which made working with this tool substantially more productive.
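For instance, PIVOT turns what used to be a pile of CASE expressions into a single statement (illustrative names only):
-- yearly totals per region as columns; 2000 needed CASE gymnastics for this
SELECT Region, [2007], [2008]
FROM (SELECT Region, SalesYear, Amount FROM dbo.Sales) AS src
PIVOT (SUM(Amount) FOR SalesYear IN ([2007], [2008])) AS p;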
I'm working on a team where people are prone to amending dev SQL Server tables and forgetting about it, or preparing a change for deployment and then having to wait for that deployment. This leaves our dev and live tables inconsistent, causing problems when stored procedures are pushed live.
Is there a tool where I can enter a stored procedure name and have it check all the tables referenced in it in both the dev and live databases, and notify me of any differences?
I know two excellent tools for diffing SQL database structures - they don't specifically look inside stored procedures at their text, but they'll show you structural differences in your databases:
RedGate SQL Compare
ApexSQL's SQL Diff
Redgate also has a SQL Dependency Tracker which visualizes object dependencies and could be quite useful here.
Marc
For SQL Server 2005/2008, Open DBDiff works pretty well. The great part is that it's free. Also note that I am writing this answer for version 0.9, which currently works for SQL 2005/2008.
It'll show you the differences in database schema between a source database you specify and a destination database you specify. There are also buttons you can click to update or create the table in question.
I would recommend SQL Compare and SQL Data Compare from Redgate Software. I worked with these tools on several projects and they did a great job. Documenting changes is also a good thing to do, but some changes are too complex to write your own SQL code for (including juggling data around between tables).
The Redgate tools create scripts in a matter of seconds, and those scripts are almost always correct (some older versions had a hard time with table dependencies in big databases, but by playing around with the statements inside a BEGIN TRANSACTION / ROLLBACK I was able to quickly fix those problems).
Another strong point of the Redgate suite is that you can save your comparison project. This is especially useful when you don't want to convert certain tables (or data): you can exclude them, and when loading the project the next time the software will automatically ignore those tables.
One disadvantage is the cost of the software (smaller companies I worked with did not want to buy it). SQL Compare and SQL Data Compare together will cost you about 800 dollars, but if you look at the time you will save when releasing, you will save a lot of money. There is also a trial you can play around with (30 days, I believe).
SQLDBDiff is a nice, user-friendly, lightweight tool.
It supports SQL Server 2000 through 2016 and also SQL Azure.
SQLDBDiff is available both as a free edition with limited use and as a full edition with a trial.
Try Microsoft Visual Studio Database Edition, aka "Data Dude" (formerly Visual Studio Team Edition for Database Professionals). It'll do a complete schema comparison and generate the necessary scripts to upgrade the target schema.
Of course, this shouldn't replace a proper build process ;-)
If you need a quick schema comparison tool for SQL Server, you should take a look at dbForge Schema Compare for SQL Server.
I've made a utility, MssqlMerge, that lets you compare (and merge) MSSQL database data and programming objects. It also lets you search for a particular word or phrase across table definitions and programming objects.
I'm using an Access 97 database. It has lots of forms and a lot of accumulated data. I have to upgrade it quickly, so I bought SQL Server 2005 Enterprise Edition.
I want to use SQL Server as the data store. I'm going to keep using the Access forms regularly; I just want to move the data to SQL Server.
Is it possible to use "linked" data storage for this?
While I agree with HLGEM's first paragraph, I respectfully disagree with the second. There are quirks you need to know about, of which I'm somewhat ignorant, such as Boolean fields being changed to LittleInt. But otherwise it's a lot of tedious work to recreate the database schema by hand, and it'll be error-prone, with things like missed indexes or relationships.
There is a tool from the SQL Server group which is a lot better than the Upsizing Wizard, especially the Access 97 version of that wizard:
SQL Server Migration Assistant for Access (SSMA for Access)
http://www.microsoft.com/sql/solutions/migration/access/default.mspx
As you discover these quirks you can change the scripts to recreate the database with the appropriate changes.
I concur with Tony Toews (and you should trust him on this; he's an Access guru): use SSMA to help you move the data to SQL Server. It does a more complete job than the Upsizing Wizard integrated into Access (which doesn't work for upsizing to SQL Server 2008 anyway).
You have to be wary of a few caveats, though; I've written a blog post about some of the things you should check.
The point is that if the original Access database was designed without relying too much on the liberties that Access allows (strange characters in table and column names for instance), then the process will be much easier.
Pay special attention to all the warnings and errors reported by SSMA; they are really useful in helping you focus on the issues you must solve.
With regards to performance, moving to SQL Server isn't necessarily going to make things faster.
In some areas it will actually be slower, sometimes much much slower:
Access is pretty good at optimizing certain forms of data access but once the database moves outside of its reach, it doesn't have as much control.
Most things will work fine though.
You will probably have to rewrite a few queries, maybe move them as views on SQL Server instead of keeping them in your Access application.
Little things, such as using % instead of * as the wildcard in queries using LIKE in their WHERE clause, can also cause strange issues like queries not returning any records.
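For example (hypothetical table), the same filter has to change once it runs against SQL Server:
-- Access/Jet wildcard syntax
SELECT * FROM Customers WHERE LastName LIKE 'Sm*'
-- T-SQL equivalent: * is not a wildcard here, so the query above matches nothing
SELECT * FROM Customers WHERE LastName LIKE 'Sm%'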
By the way, here is a very good resource Tony has on his own website regarding SQL Server upsizing: My random thoughts on SQL Server Upsizing from Microsoft Access.
There is also a good and detailed read about things to consider when using SQL Server from Access: Optimizing Microsoft Office Access Applications Linked to SQL Server
You can add the SQL Server tables to Access as linked tables. Then you will want to start looking at your slowest queries and converting them to stored procedures.
Do not use the upsizing wizard in Access to create the SQL Server tables, because it will make poor choices for datatypes. Do the work yourself and write the scripts, choosing the best datatypes (a quick sketch follows). It takes longer this way, but your database will perform better and you will gain a better understanding of how to do things in SQL Server. You should start right now learning to do everything through a script and never from the GUI; it's best to learn good habits in SQL Server from the start.
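A hand-written script along these lines (an invented table, just to show the idea) keeps the datatype decisions in your hands:
-- explicit datatype choices instead of whatever the wizard guesses
CREATE TABLE dbo.Customers (
    CustomerID int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    LastName   nvarchar(50)      NOT NULL,
    IsActive   bit               NOT NULL DEFAULT (1),  -- Access Yes/No -> bit
    CreatedOn  datetime          NOT NULL DEFAULT (GETDATE())
);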