What are the advantages of VistaDB?

I have seen references to VistaDB over the years, and with tools like SQLite, Firebird, MS SQL et al. I have never had a reason to consider it.
What are the benefits of paying for VistaDB vs using another technology? Things I have thought of:
1. Compact Framework Support. SQLite+MSSQL support the CF.
2. Need migration path to a 'more robust' system. Firebird+MSSQL.
3. Need more advanced features such as triggers. Firebird+MSSQL

The VistaDB client runtime is free. The runtime will never "expire at 3am" as you put it. Only the developer tools are licensed in that manner. You need 1 license per developer, simple. We even offer a really inexpensive Lite version with no Visual Studio tools.
Some other benefits
100% managed code - there are no interop or other unmanaged calls in the engine. This is a big deal to some, and others couldn't care less.
No registry access required - Most other in-proc databases require registry access to look for parent controls or permissions. VistaDB only does what you tell it to do, and will even run in Medium Trust.
XCopy deployment for the runtime and your database (single file). You can xcopy your application, the runtime, and your database and run. Nothing to install or configure on the machine, no special privileges needed (we can run in Medium Trust or higher).
Isolated storage - You can put your entire database into Isolated Storage and run it from there directly. This makes it very easy to build secure ClickOnce applications that write databases in a domain-friendly way for corporate environments. There is no need to store the user data on a shared drive or worry about permission mapping.
CLR Triggers / CLR Procs - You can write CLR Code and use them as Triggers or Stored Procs. We have just recently introduced changes to make it even easier to maintain a single CLR Assembly that can run in both VistaDB and SQL Server 2005/2008.
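For a concrete picture, a shared CLR proc might look something like the minimal sketch below. It uses the standard Microsoft.SqlServer.Server attributes that SQL Server CLR integration expects; whether VistaDB exposes the identical SqlContext API is an assumption here, inferred from the compatibility claim above, and the class and proc names are made up for illustration.

    using Microsoft.SqlServer.Server;
    using System.Data.SqlTypes;

    public static class SharedProcs
    {
        // One assembly, deployable to SQL Server 2005/2008 and
        // (per the claim above) to VistaDB unchanged.
        [SqlProcedure]
        public static void GreetCustomer(SqlString name)
        {
            // Pipe.Send returns a message to the calling client.
            SqlContext.Pipe.Send("Hello, " + name.Value);
        }
    }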
T-SQL Procs - VistaDB T-SQL Procs are compatible with SQL Server 2005/2008. Any procedure that works in our engine will run in SQL Server. That does not mean anything that runs there will port to us. We are a subset of the functionality in SQL Server. But we are also the only way to run T-SQL Procs without SQL Server (SQL CE can't do it).
I personally think one of the biggest features is the ability to upsize to SQL Server later. All of the VistaDB types, syntax, CLR procs, T-SQL procs, etc. will run on SQL Server. (You can't take everything from SQL Server down to VistaDB though; it is a subset.)
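As an illustration of that upsizing story, here is a sketch of how an application might keep its data access provider-agnostic so the move really is just a connection-string change. The VistaDB provider invariant name, file extension, and the Customers table are assumptions for illustration, not taken from the VistaDB docs:

    using System.Data.Common;

    static class UpsizeDemo
    {
        static object CountCustomers(string provider, string connStr)
        {
            DbProviderFactory factory = DbProviderFactories.GetFactory(provider);
            using (DbConnection conn = factory.CreateConnection())
            {
                conn.ConnectionString = connStr;
                conn.Open();
                using (DbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = "SELECT COUNT(*) FROM Customers"; // T-SQL subset
                    return cmd.ExecuteScalar();
                }
            }
        }

        static void Main()
        {
            // Today: VistaDB (provider name and file extension assumed)
            CountCustomers("System.Data.VistaDB", "Data Source=app.vdb3");
            // After upsizing: the same query, a different provider
            CountCustomers("System.Data.SqlClient",
                "Server=.;Database=App;Integrated Security=true");
        }
    }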
32/64 bit Deployment - VistaDB is a single-assembly deployment that runs both 32- and 64-bit without changes. SQL CE requires two different runtimes depending upon the OS, and cannot run under IIS at all. Access has no 64-bit runtime, and the most recent 32-bit runtime can only be deployed through an MSI; the 32-bit version of Windows includes that runtime, the 64-bit version does not.
Relational Integrity - VistaDB also actually enforces your constraints and foreign keys. You can specify cascade update and delete operations. The person who commented that we are like SQLite is wrong in this regard: SQLite parses constraints but does not enforce them.
EDIT: SQLite does now have support for FKs. But they are not enabled by default, and do not use the same syntax as SQL Server.
Medium Trust - The ability to run on a medium-trust web server is another feature that many will not care about, but it is a big deal. Many third-party controls can't even run in Medium Trust. We can run the complete engine within Medium Trust because of our commitment to 100% managed code and requiring the least permissions possible.
- Full disclosure - I am the owner of VistaDB so I may be biased. :)

Well, the main thing is that it is pure managed code - for what that is worth; it works not only on your typical Windows machines running .NET, but wherever you can run the Compact Framework, and even on Mono. Here are some noteworthy bullet points from their homepage:
Small: < 1 MB footprint, truly embedded, ZeroClick
Microsoft SQL Server 2005 compatible data types and T-SQL syntax
None of the SQL CE limits
Single user, or multi-user locally or over a shared network.
Partially trusted shared hosting is no problem.
Royalty-free distribution - single CPU deployment of SQL Server costs more than a site license of VistaDB!
One thing worth noting is that Rob Howard's company, Telligent, uses it as the default database for their new CMS software, "Graffiti."
I have played with it here and there but have yet to build anything against it.

For me the most interesting feature of VistaDB is that it can run in a Medium Trust environment, which makes it a perfect solution for creating small to medium .NET websites that can be deployed to a server by simple copying (xcopy deployment).
And almost all Windows shared hosting providers (like GoDaddy) won't let you run your websites in Full Trust mode. They also won't install any third-party binaries into the GAC for you, such as System.Data.SQLite.dll if you wish to use SQLite, for example.

I hadn't seen VistaDB before, it does look pretty cool.
Update: Received a comment from someone from VistaDB - their update model is only for getting new versions. Your old ones won't stop working if your license expires, which is good to know.
Keeping the original post here as IMHO the warning about expiring software licenses is still worth thinking about, even though VistaDB itself is fine.
It definitely seems 'more featureful' than SQLite, but I don't see anything there to justify the cost. The site seems to indicate that you can buy one license for $279, but it implies this is just a 1-year subscription. Would you then have to pay another $279 next year to stop your site falling over?
If so, remember to factor into the 'cost' how much inconvenience it's going to be when you get a call at 3am (Murphy's law: it's always 3am) from your panicking customers because their VistaDB license has expired :-(
I've had this experience personally with some expiring software, and it's never good. You can send your customers emails and messages and flash their entire screen blinking red saying "YOU NEED TO GET A NEW LICENSE BEFORE NEXT WEEK" and they'll still never do it, and you'll still get the pain at 3am when it does expire.

Related

How hard to migrate from IaaS to PaaS on Azure

So I'm thinking of dipping a toe in the Azure pool
Our web app suite will soon be a pure ASP.NET + SQL Server affair
For various reasons it will be simpler to initially create a SQL VM and run everything from there.
How hard will it be to ...
...migrate SQL off the VM and into either "Cloud Services" or "Data Management"?
...migrate the suite of WebApps off the VM and into "Websites"?
It is my understanding that having achieved this migration, the OS level updates will no longer be my concern as they will be handled by the service. Thus at this point I'll be able to throw the original VM away :)
This isn't exactly answering your questions, but it might help educate you on more questions to ask and give you a boost out of the gate. These were all lessons learned before, during, or after our migration of our systems to Azure. Now that we're up there, we have a ~50GB database with ~6 services running across ~30 instances. As long as our database backup behaves, the total effort involved in upgrading all of this is less than an hour (and it could be much less if we didn't have many safeguards meant to force us to be aware of what's going on during the migration process - we don't want deploying to be too easy, precisely to protect us from ourselves).
Preparing to migrate your system to Azure:
If you're planning to go to Azure, you first need to make sure your architecture and technologies are compatible. This isn't to say you have to code everything specifically for Azure. It means some of the following things:
You should realize that "high availability" does not mean "error-free". In fact, high-availability environments usually have more errors that you have to handle and manage. For example, if you have a request going over the network to a server that just had a motherboard fry and was taken offline, that network request will be unsuccessful. That's not typically a problem you code for in "standard" server apps. To take it even further, what if that failed network request is for a database connection that gets put back into a connection pool? That will cause the connection to be poisoned and broken the next time somebody pulls it out, regardless of the future state of the server that went poof! There are just some extra things to worry about here because you're no longer depending on just 1 network with 4 servers on it but are now depending on hundreds of networks with thousands of servers on them. That 0.05% error scenario will happen MUCH more often to you than you've ever experienced in the past, and you really have to be aware of this!
You should use dependency injection to make it easy to change things around. Proper separation of concerns will make changes that seem very difficult become very easy in Azure (see the sketch after this list).
You should use architectures designed for high availability. For example, a web application that would break when run in a web farm would also break in Azure, but a web application designed to work in a web farm is very easy to run in Azure.
You should have automated deployments and configuration transforms for all of your applications. Anything else is just unsustainable unless it's nothing more than one little web site or something like that.
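To make the dependency-injection point concrete, here is a minimal sketch (all type and member names are hypothetical): hide anything environment-specific behind an interface, so the Azure-backed implementation can be swapped in without touching callers.

    public interface IFileStore
    {
        void Save(string name, byte[] data);
    }

    // On-premise implementation: plain file system.
    public class LocalFileStore : IFileStore
    {
        public void Save(string name, byte[] data)
        {
            System.IO.File.WriteAllBytes(name, data);
        }
    }

    // An Azure implementation (e.g. blob-backed) implements the same
    // interface; swapping it in is a container registration, not a rewrite.
    public class ReportService
    {
        private readonly IFileStore _store;
        public ReportService(IFileStore store) { _store = store; } // injected
        public void Publish(byte[] report) { _store.Save("report.bin", report); }
    }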
Depending on your needs, you can do it in phases. If database latency isn't a big deal, perhaps a hybrid approach (over VPN from Azure to your data center) is acceptable to get your apps into Azure first while you migrate your database later. Or perhaps the opposite. What we did was keep the primary apps and database in our data center but put secondary apps up in Azure first, then some primary apps (which took a performance hit for a month until we later moved our database and critical apps). That final migration sure made for a very long weekend and not much sleep, but it is SOO much nicer now that we're done!
Migrating your applications to Azure:
This ultimately depends very heavily on what your application is or does, and every scenario has different steps/issues/benefits. I'm not going to cover this deeply other than to say, "Use Google, it's your friend!". Beyond that, for us, getting our applications up into Azure was the largest payoff when compared to our data. The ROI on our app migration was less than a month between hosting costs, licensing, and management effort. Instead of taking a couple of days to set up a server, I can now take a day to set up an entirely new, duplicated environment of all of our SaaS applications/databases/etc. and have them running on ~25 different Cloud instances!
Instead of trying to tell you how to migrate these, let me give you a few words of caution so you know sooner rather than later:
If you have app problems in Windows 2012, humor me and try it in Windows 2008 R2. There are a couple bugs in some of the 2012 images that they've prepared. It's incredibly trivial to switch back and forth!
Go make your logging 1000x better than it is now. If you don't do that now, you'll regret it.
Don't depend 100% on the easy-to-implement "Azure Logging". It works well enough, but it more or less requires your applications to start successfully and is absolutely useless for debugging startup problems. If you don't have an alternative, you will waste many, many hours just debugging stupid little problems when your app starts up. In the time you'd spend doing that, you could easily have added five other logging frameworks and had an amazingly awesome logging system in place plus a running app, instead of nothing but a running app. Really, do this! (Profiling is a good idea as well, although Mini-Profiler has load-balancing issues if you have multiple instances.)
If you add new endpoints to your deployment (ports, etc), you cannot simply "Upgrade" an existing deployment. You must delete it (the deployment, not the service) and install from scratch. You can delete the Staging one, deploy to Staging, then swap.
If you have WCF apps, pretend you don't know about Windows Activation Services. It's disabled in Azure by default. You can either hack it on (startup scripts) or create your own self-hosting application. We self-host so we can more easily tweak service configurations once we're deployed (it's not easy to edit web.config files in a way that sticks in Azure). Web services work in "IIS" in Azure, but TCP, named pipes, etc. do not.
Go learn about and add the Transient Fault Handling Application Block (or something equivalent) to anything you communicate with (see the sketch after this list). If you don't do it now, you'll regret it.
Go make your logging better! Really, really, REALLY do this!
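To make the transient-fault point concrete, here is a minimal sketch using the Transient Fault Handling Application Block. The exact class names vary a little between versions of the block, so treat this as an outline under those assumptions rather than copy-paste code:

    using System;
    using System.Data.SqlClient;
    using Microsoft.Practices.EnterpriseLibrary.TransientFaultHandling;

    class RetryDemo
    {
        static void Query(string connectionString)
        {
            // Retry up to 5 times, 2 seconds apart, but only for errors the
            // strategy recognizes as transient (throttling, failover, etc.).
            var policy = new RetryPolicy<SqlDatabaseTransientErrorDetectionStrategy>(
                new FixedInterval(retryCount: 5, retryInterval: TimeSpan.FromSeconds(2)));

            policy.ExecuteAction(() =>
            {
                using (var conn = new SqlConnection(connectionString))
                {
                    conn.Open(); // a transient failure here triggers a retry
                    using (var cmd = conn.CreateCommand())
                    {
                        cmd.CommandText = "SELECT 1";
                        cmd.ExecuteScalar();
                    }
                }
            });
        }
    }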
Migrating your SQL Server Database to Azure:
Getting your database up into Azure is a bit of a painful process. There isn't a quick and easy way to just get it up there and make it work. Some people have to make major changes, while others just have to tweak a few ignorable things here and there. However, no matter how large or small your database is, you really REALLY must devote a lot of time to testing it. Test your migration process. Test your scripts to prepare your database. Test the performance and stability of the database up in the cloud. Test your backup procedures. Test your upgrade procedures. Test your backup restoration procedures. Test ALL of this, because I guarantee you that you will find some surprises!
Schema:
Go learn about all of the limitations of SQL Azure - no heaps, etc. Learn them before you start! Go learn them now! They are all mostly-to-very reasonable.
Be aware of the 2GB T-Log limitation! This means some very large indexes can never be rebuilt! (that said, our 30GB table isn't yet hitting this)
To deploy your schema, go into SSMS for your local db and use the "Tasks -> Extract Data-tier Application..." feature (it's in different areas of the menu in different versions of SSMS). Take this file, go into SSMS for your Azure database, and use the "Deploy Data-tier Application" feature. (This will help you catch some of the Azure limitations you aren't honoring if this process fails.) This is, by far, the easiest way to get an empty version of your database up into Azure (a scripted equivalent is sketched after this list).
Use a tool like Redgate SQL Compare to verify your work (you'll have to tweak a couple options, like WITH NOCHECK to get a clean comparison).
You'll have to clean up users, schemas, broken sprocs, etc. before you succeed at this. (This is a good thing!)
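If you would rather script that extract/deploy step instead of clicking through SSMS, the DAC framework exposes the same operations programmatically. A sketch using the Microsoft.SqlServer.Dac assembly; server names, credentials, and paths are placeholders:

    using System;
    using Microsoft.SqlServer.Dac;

    class DacMigration
    {
        static void Main()
        {
            // Extract a .dacpac (schema only) from the local database...
            var local = new DacServices("Server=.;Database=MyDb;Integrated Security=true");
            local.Extract(@"C:\temp\MyDb.dacpac", "MyDb", "MyApp", new Version(1, 0));

            // ...then deploy it to SQL Azure. A failure here is useful: it
            // points at the SQL Azure limitations you aren't honoring yet.
            var azure = new DacServices(
                "Server=myserver.database.windows.net;Database=MyDb;User ID=me;Password=secret");
            using (var package = DacPackage.Load(@"C:\temp\MyDb.dacpac"))
            {
                azure.Deploy(package, "MyDb", upgradeExisting: false);
            }
        }
    }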
Data:
Go learn about all of the limitations of SQL Azure. Learn them before you start! Go learn them now! They are all mostly-to-very reasonable.
Go download the Azure Database Migration Wizard from CodePlex (or wherever the latest version is). It's not the most amazing software (kinda unstable), but even if it crashes once or twice on you, it'll still save you a LOT of time!
I strongly recommend Redgate's SQL Data Compare. The previously mentioned migration wizard will help you identify problems (it's on you to fix those) and will get you ~98% migrated, but you'll want to come back and clean up after it. It has some bugs that miss nullable BIT fields (and upper-ASCII characters) and some other things that a tool like SQL Data Compare can easily identify and fix. It can also give you peace of mind that you can depend on your database.
If your database is large, consider spinning up a temporary VM in Azure (they have them with SQL Server pre-installed and available in ~20 minutes) to do your migrations from. If you do this, it's best to upload a compressed database backup to Blob storage (Cerebrata's storage tool is great for this; see the sketch after this list) and then grab it and restore it to SQL Server in that VM. Then stage your migrations all from there!
Test, test, TEST!!!
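A sketch of that backup-to-blob step using the Microsoft.WindowsAzure.Storage client library of that era; the account name, key, container, and file paths are placeholders:

    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Blob;

    class BackupUpload
    {
        static void Main()
        {
            var account = CloudStorageAccount.Parse(
                "DefaultEndpointsProtocol=https;AccountName=myacct;AccountKey=<key>");
            CloudBlobContainer container =
                account.CreateCloudBlobClient().GetContainerReference("backups");
            container.CreateIfNotExists();

            // Upload the compressed backup; restore it from inside the Azure VM.
            CloudBlockBlob blob = container.GetBlockBlobReference("MyDb.bak.zip");
            using (var stream = System.IO.File.OpenRead(@"C:\backups\MyDb.bak.zip"))
            {
                blob.UploadFromStream(stream);
            }
        }
    }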
Be careful running SQL Server on a VM; it's not a high-availability solution, and Azure VMs are prone to restarting from time to time. Unless you have multiple VMs running SQL Server in an availability group, or you have some sort of mirroring and load-balancing setup, you won't have a high-availability solution. I too originally favoured the IaaS-to-PaaS route; in the end it seemed to be a false economy, as migrating IaaS to PaaS is about as much work as migrating on-premise to PaaS. In the end I decided to take the time to optimise my application for PaaS, i.e. moving durable storage to blobs, implementing transient-error handling and retry logic, etc.
What you're proposing is certainly possible, but having a multi-VM arrangement to deliver high-availability SQL takes a bit of work and is expensive! Have a read of the following guide; it was really helpful to me when I started the migration process:
Top 7 Concerns of Migrating a .NET Application to Azure
Just yesterday Microsoft announced their plan to host IaaS solutions as well, and not only PaaS solutions, on their Azure platform.
http://weblogs.asp.net/scottgu/archive/2013/04/16/windows-azure-general-availability-of-infrastructure-as-a-service-iaas.aspx
As for migration, it really depends. We work with a distribution mechanism of TFS + Octopus, so deployment is very easy, and it works against IaaS or SQL Azure; it doesn't really matter.
There are also other things to take into consideration when moving to PaaS. Your code will probably need to be refactored if it isn't PaaS-oriented, or your application may have a very high hosting cost on Azure.

Migrating from SQL Server to Firebird: pros and cons

I am considering the migration for 4 reasons:
1) SQL Server installation is a nightmare, especially for 1-user software. (Even if I typically have 3-20 users, sometimes I sell my software to single users: it is incredible to have trouble installing the DB, while installing the application just means copying an exe...) (Note: my max installation is 100 users, but there is no upper limit.) The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier.
2) SQL Server runs on Windows only
3) My customers all have the Express edition
4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for this is that the Express edition has a 4/10 GB database size limit
So these are all Pros of moving to Firebird.
Which are the cons?
I can also plan to support both platforms, but this will backfire I fear.
MS SQL Server is faster and better optimized for large databases and complex queries, especially if administered properly, while Firebird allows you to run without any administration and just forget about it. Although this penalty affects a very small percentage of users, before a complete migration I suggest you first migrate just the data and then test the speed of your most complex queries on both systems. If the speed satisfies you, then you are good to go.
I don't see any, besides the need to thoroughly test all of your existing code for compatibility issues.
Firebird is wonderful for server installations or single user installations.
It has an embedded version that is suitable for single user scenarios and you do not have to install anything.
It uses the same database file for both server and embedded use, so you can easily go from single user to multi-user and vice versa.
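A sketch of what that switch looks like with the Firebird ADO.NET provider (FirebirdSql.Data.FirebirdClient); the paths and server name are placeholders, and ServerType=1 selects the embedded engine while 0 selects a regular server connection:

    using FirebirdSql.Data.FirebirdClient;

    class FirebirdModes
    {
        static void Main()
        {
            // Embedded: the engine runs in-process; nothing to install.
            var embedded = new FbConnection(
                @"Database=C:\data\app.fdb;User=SYSDBA;Password=masterkey;ServerType=1");

            // Client/server: the same .fdb file, now served by a Firebird server.
            var remote = new FbConnection(
                @"DataSource=myserver;Database=C:\data\app.fdb;User=SYSDBA;Password=masterkey;ServerType=0");

            // Usage (Open, commands, transactions) is identical in both modes.
        }
    }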
I embedded Firebird 2.5 in my freeware software today. It's great, and there have been no connection problems. I used multiple processes to run both long insert and read operations simultaneously, and it all went correctly, as expected. I am waiting for Firebird 3.0. I recommend Firebird when you don't want to rely on commercial database software.
If there is only one user, you can use SQLite, which is even easier to manage than Firebird.

Advantage Database or SQL Server

I have a client that currently uses a local Advantage Database on their PC along with an application. They are thinking of upscaling their setup to have multiple applications communicating with a database server, i.e. a client-server environment.
They are now considering the best database for this approach. They are looking at the Advantage Database Server product in comparison to SQL Server Express (the application does not warrant a full SQL Server at this stage).
Obviously SQL Server is a more well known product probably with more support but I was hoping you could give me some opinions and thoughts on what you think the best product would be in terms of performance, stability and support.
One thing to note although not directly relevant is that the application is currently written in Delphi and there could be a move to C# to bring it up to date.
The migration from a local Advantage Database to a client/server Advantage database is a very simple process. It simply involves changing the connection properties within the program. There are no other coding changes that need to be done.
Advantage has a great support team and has been in development for over 15 years. The stability and support are at least equal to SQL Server.
Advantage also provides a .NET Data Provider which would allow for C# development.
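With the Advantage .NET Data Provider, that local-to-remote switch is just a connection-string change, mirroring the Delphi story. A rough sketch; the exact key values should be verified against the Advantage docs, and the paths are placeholders:

    using Advantage.Data.Provider;

    class AdsModes
    {
        static void Main()
        {
            // Local server: in-process access to the data dictionary/files.
            var local = new AdsConnection(
                @"Data Source=C:\data\mydd.add;ServerType=ADS_LOCAL_SERVER;TableType=ADS_ADT");

            // Remote (client/server): same data, served over port 6262.
            var remote = new AdsConnection(
                @"Data Source=\\server:6262\data\mydd.add;ServerType=ADS_REMOTE_SERVER;TableType=ADS_ADT");

            // All other data-access code stays the same in both modes.
        }
    }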
I have developed for both SQL Server and Advantage. They each have their pros and cons (although now I favor Advantage).
Given your situation, however, this decision appears to be a no-brainer: Advantage Database Server. Why? It's already done!
My Advantage programs run, unmodified, against the same database either locally or remotely. All I change is the connection string. I'm not saying that your customer's code won't have to be changed. I am saying it is likely to be trivial. Compare that to the greater effort involved in switching to a whole new database engine.
In general I'm a SQL Server person all the way. I work with it daily and have for almost ten years, but in your situation it seems silly to consider moving to a new database when there is a clear upgrade path to do what you want using the backend you already have. It would be much less work and far less likely to introduce new bugs to stay within the same database family.
ADS wins hands down. It is maintenance-free. It is extremely reliable. It is extremely fast. It is extremely scalable. SQL is very well supported, and the ADS newsgroups are responsive (answers within hours instead of days on the SQL Server fora) and well informed. I have been using ADS since 1991 and it has never gone wrong! My users are incredibly demanding, and being able to turn round solutions within hours instead of days is both a joy to me and a business incentive to the end users and clients. Deployment is gentle, fast and simple. Platform support is better than SQL Server's. 64-bit server deployment abounds and is well-grounded, transparent and reliable. 64-bit clients are coming in the next version (10). My experience with ADS is wholly positive, whereas my ventures with SQL Server have been fraught with difficulties, idiosyncrasies and workarounds!
I happen to be a support rep for Advantage so when you say "Obviously SQL Server is a more well known product probably with more support" I have to argue a bit.
As Chris stated, switching from the Advantage Local Server to the Advantage Remote (client/server) Server is a pretty painless process - they designed it that way.
Install the Advantage Database Server on a machine where the data is located (not a requirement but it's recommended). You can get a free trial here: http://marketing.ianywhere.com/forms/ADS91-30-Day
Within the application there will be TAdsConnection component(s) - change the TAdsConnection.ConnectionType to 'REMOTE' (http://devzone.advantagedatabase.com/dz/webhelp/Advantage9.1/mergedProjects/ade/sec7/connectiontype.htm)
You can specify the path (TAdsConnection.ConnectPath) from the clients in a couple of different ways, but the recommended one is:
\\server:6262\mydata
http://devzone.advantagedatabase.com/dz/webhelp/Advantage9.1/mergedProjects/ade/sec7/connectpath_tadsconnection.htm
Note: 6262 is the port used by default (you may need to add an exception to the firewall). Also, if your application uses a data dictionary, the path would include the name of the .ADD file (e.g. \\server:6262\mydata\mydd.add)
Hope this helps!

Oracle XE or Postgres?

Should I use Postgres or Oracle XE for a web application, and why?
They are both available for free and for commercial use.
Why should I use one database over the other?
Well, with Oracle XE, if you hit the limits, you have to either buy Oracle or migrate your entire application to a different database. With PostgreSQL, there are no limits, and the software is completely free and open-source.
A few things to be aware of if you choose Oracle XE:
It will only use one CPU if you have a multiprocessor server
It will use a maximum of 1 gigabyte of memory
It has a database size limit of 4 gigabytes for user data
Only available for 32-bit Windows and 32-bit Linux
If those limitations aren't an issue for you and you like the Oracle approach, then give it a shot; otherwise consider an open-source server like Postgres or MySQL, which have none of the aforementioned limitations.
If you do choose XE and then later find your requirements have changed, the next version up of Oracle is USD 900 for 5 seats, plus an additional USD 180 per seat. This is in fact a bit cheaper than MS SQL Server, AFAIK.
There are some good reasons to choose Oracle, particularly if you're a Java developer: e.g. you can write stored procedures in Java, and I think there's native support for Java web services. Ultimately, however, you need to weigh the cost against the requirements of your application. MySQL and Postgres will allow you to scale your application without any cost (other than hardware, obviously).
You haven't provided much information, but unless you're already an Oracle expert, I see no reason to choose Oracle XE over PostgreSQL. PostgreSQL will always be free and is far more capable and more scalable.
And you can choose to run PostgreSQL on Windows, Mac OS X or Linux. I think Oracle XE is limited to Windows and Linux.
When choosing a DB vendor, be careful to match the demands of your app against the strengths of the vendor's DB product. And watch out for their weaknesses.
For example, if you know your writes will be as frequent as your reads and both will occur simultaneously, then you'll want to know how each vendor you consider handles concurrency. Vendors that rely on elaborate lock managers with complex lock-escalation schemes are likely to bring you grief if you expect heavy load on your app. You'll spend more time trying to work around the DB's lock manager than actually solving your problems.
That's one example. Every DB has its strengths and weaknesses to consider. Do your research: find a site that compares vendors, and make a choice that balances your needs against that. If you can get an eval copy, all the better to run some proof-of-concept tests against. Write scripts that pummel the DB in a way similar to what you expect your app to produce and go from there. While you're at it, get the query plans for the SQL in your scripts from each vendor and see what you can learn from that about how each vendor's optimizer works.
There's more that can be said, but hopefully you get the gist.
What is your platform of choice? If it's Windows, I'd go with Oracle or MySQL as I have heard only bad things about running PostgreSQL on Windows machines.
Also, Oracle has a more GUI-driven approach to configuration, so if you're not a Linux hacker, you may like it better.
Postgres, on the other hand, has a way superior SQL console, and its console tools in general are more developed and easier to use. Oracle has more tools, but the decent ones (like the PL/SQL navigator) are not free, and the free ones simply suck.
Remember that choosing a database is a long-term decision, so consider the possible changes in requirements for your application as well. If you are not prepared to eventually spend money on Oracle, better go with Postgres - the safe option, and good enough for most projects.

Setting up a diverse database testing environment

I'm writing some code that needs to work against an array of different database products.
MySql
Sql Server 2000 to 2008
PostgreSQL
Oracle 9i & 10g
Jet 4.0 (MS Access)
MSDE
Sybase Adaptive Server Anywhere
Sybase Sql Anywhere
Progress OpenEdge
We have a set of generic integration tests that we need to run against each database product, as part of a continuous integration build.
I'm sure other people have had to set up similar test environments and I would like to tap into some of that wisdom - what strategies did you end up using, and what worked well or did not work well?
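For context, the tests themselves are driven through ADO.NET's provider factories, roughly like the sketch below; the provider names are the standard invariant names, but the hosts, credentials, and the idea of hard-coding the list are purely illustrative:

    using System.Data.Common;

    class MultiDbTestRunner
    {
        static void Main()
        {
            // Each provider must be registered under DbProviderFactories
            // in machine.config/app.config for GetFactory to find it.
            var targets = new[]
            {
                new { Provider = "MySql.Data.MySqlClient", ConnStr = "Server=vm1;Database=test;Uid=t;Pwd=t" },
                new { Provider = "System.Data.SqlClient",  ConnStr = "Server=vm2;Database=test;Integrated Security=true" },
                new { Provider = "Npgsql",                 ConnStr = "Host=vm3;Database=test;Username=t;Password=t" },
            };

            foreach (var t in targets)
            {
                DbProviderFactory factory = DbProviderFactories.GetFactory(t.Provider);
                using (DbConnection conn = factory.CreateConnection())
                {
                    conn.ConnectionString = t.ConnStr;
                    conn.Open();
                    // Run the generic integration test suite against conn here.
                }
            }
        }
    }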
Some thoughts:
Separate virtual machines for each of the products, each allocated a small amount of memory (easier to manage for certain scenarios, or where we have slightly different setups for individual products).
A couple of virtual machines, or even a single virtual machine for all the products (i.e. perhaps an Ubuntu box for PostgreSQL & MySQL, and a Windows 2008 Server machine for the remaining products) - I like one or two VMs because this is a more portable environment for running the tests, i.e. when on the road / off-site, as my laptop would probably crawl to a halt running 8 or 10 small VMs.
And finally, how have you tackled the prohibitive cost of some of these commercial products, i.e. Oracle or Progress OpenEdge? Are previous versions still available, i.e. are there free "single-developer" editions available, or cheaper routes to purchasing these products?
We also use VMware for our servers, one VMware host per database. You should be OK with putting several databases on one VMware host.
You shouldn't have much of a problem with expensive software licenses. Oracle, for example, allows you to have a development license for all of their products. So long as you are not running a production DB on your laptop's VMware, you should be OK. Of course, IANAL.
For Oracle, get the Enterprise Edition if that's what your customers will be using (most likely).
Be sure to take advantage of VMware snapshots. You can get the databases configured perfectly for running your tests, run your tests, and then revert the databases back to the pre-test state, ready for running your tests again.
What we do here is (mostly) a VM for each configuration required; we also target different operating systems, so we have something like 22 configurations we are testing. These VMs are hosted on an ESX server, and scripted to start and stop when required so that they are not all running at once. It was a lot of work to set up.
In terms of the cost of the database servers, the kind of software we are building requires top-end versions, so the testing costs have been factored into the overall dev costs (we just had to "suck it up").
Oracle does have a free 'Express Edition'. It doesn't have all of Oracle's functionality but then if you are also running against Jet then that shouldn't be an issue.
Architecturally, it may be better to create an integration-testing farm of server(s) powerful enough to run the variously configured VMs. It could be fronted with a queue process listening for integration-testing requests (which could be triggered by svn commits or other source control check-in triggers).
For the most accurate range of testing, you probably want to have a separate VM for each configuration. This will reduce the risk of conflicting configs and also increase the flexibility of running a custom set of VMs (e.g. Oracle + MySQL + PostgreSQL and not the others, etc.). Depending on your build process, it may also allow you to run 10-minute builds.
A benefit of running an integration test farm is that if you're on the road using your laptop, you can trigger off the integration build after code check-in, run all the tests and have it notify you of the results. You can also archive each request and the results to help diagnose failing builds.
Oracle license excerpt:
We grant you a nonexclusive, nontransferable limited license to use the programs only for the purpose of developing, testing, prototyping and demonstrating your application, and not for any other purpose.
So, you can download full Oracle versions 9, 10 and 11 and use them free of charge, as stated in the license.
Check the downloads section at www.oracle.com
It sounds like you have a good idea of what to do already. I'd suggest the following VM layouts:
A Windows Server 2008 box hosting the following:
MSDE
Jet 4.0 (MS Access)
Sybase Adaptive Server Anywhere
Sybase Sql Anywhere
Progress OpenEdge
On an Ubuntu or Red Hat box:
MySql
PostgreSQL
Oracle 9i
Oracle 10g (Use the free express edition)
I don't see any need to test against the full-blown SQL Server and Oracle editions. The free editions are stripped down, so there is very little risk of that code breaking when you run it against the full version of those servers. If you had tested against the full server and then gone down to the free stuff, then yes, some things might break; but by testing with the least common denominator, you should ensure good quality.
