I will be working as QA on a project that is upgrading Sybase ASE to version 16.
I have not worked on RDBMS upgrade projects before and need help drafting a test strategy. Could anyone please give me guidance on:
- What are the steps involved in upgrading a Sybase ASE version?
- From a system test point of view, what sorts of tests should we be running? (We are already running regression on the applications connected to the DBs, but apart from that, what else should we be validating?)
I believe you also need to check post-upgrade performance, i.e. run a set of load tests against the pre-upgrade configuration and re-run the same set after the upgrade to ensure that the upgrade didn't cause performance degradation.
You can use e.g. Apache JMeter, which can execute arbitrary SQL queries in a multithreaded manner via the jTDS JDBC driver. See The Real Secret to Building a Database Test Plan With JMeter article for details.
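If a full JMeter plan is overkill at first, ASE can also report per-query timings itself, which makes a quick before/after baseline easy. A minimal isql sketch, where the orders table and its status column are hypothetical placeholders for your own representative queries:

    -- Run in isql against the pre-upgrade server, save the output,
    -- then repeat verbatim on the upgraded server and diff the numbers.
    set statistics time on
    set statistics io on
    go
    select count(*) from orders where status = 'OPEN'
    go
    set statistics time off
    set statistics io off
    go

The elapsed-time and logical-reads figures from both runs give you a crude but repeatable regression check alongside the JMeter load tests.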
Microsoft SQL Database Engine Tuning Advisor seems to crash constantly for me... on multiple different servers, for multiple different databases, and throughout multiple different versions of SQL server (and DTA)...
I know this is probably a ridiculous question and not of the quality one would expect on stackoverflow :( but has anyone else experienced this?
I was having the same problem, as recently as SQL Server 2014 with Service Pack 2. I had to use a two-step approach to get it working again:
Installed both the latest service pack and the latest cumulative update for it. This helped, but Database Engine Tuning Advisor was still crashing for me (see step 2).
I read that "hypothetical indexes" are added to your database while Database Engine Tuning Advisor is running. If it crashes and does not complete successfully, the hypothetical indexes are not dropped, and it was recommended that they be dropped from your database (a query like the sketch below can find them).
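A minimal T-SQL sketch of that cleanup; sys.indexes flags these indexes, and the second query only generates DROP statements for you to review before running:

    -- List hypothetical indexes left behind by a crashed DTA session
    SELECT OBJECT_NAME(object_id) AS table_name, name AS index_name
    FROM sys.indexes
    WHERE is_hypothetical = 1;

    -- Generate a DROP statement for each one (copy, review, then execute)
    SELECT 'DROP INDEX ' + QUOTENAME(name) + ' ON '
           + QUOTENAME(OBJECT_SCHEMA_NAME(object_id)) + '.'
           + QUOTENAME(OBJECT_NAME(object_id)) + ';'
    FROM sys.indexes
    WHERE is_hypothetical = 1;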
The combination of installing the latest service packs and cumulative updates, along with dropping the hypothetical indexes, seems to have worked for me.
I've experienced this behavior many times, and the way to fix it was to update my instances to the latest service packs.
The first release of the SQL 2012 Tuning Advisor was also crashing for some reason, but updating to the latest service pack (SP2) fixed the issue.
Side note: using the plan cache as a workload source (a new feature in SQL 2012) may be helpful until you fix this problem permanently.
I was experiencing the same issue when running analysis on a database that contained encrypted stored procedures. I removed the encryption before capturing my profiler trace workload, then re-ran the analysis, and the issue was resolved.
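In case it helps anyone diagnose the same thing: encrypted modules are easy to spot because their definition comes back NULL, so a quick check before capturing the trace would be:

    -- List encrypted stored procedures (definition is NULL for encrypted modules)
    SELECT o.name
    FROM sys.objects AS o
    JOIN sys.sql_modules AS m ON m.object_id = o.object_id
    WHERE o.type = 'P' AND m.definition IS NULL;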
We need to move an enterprise ERP during an upgrade (from 2005 to 2008). I have done some reading on the merits of running compatibility mode, and I know there are some differences in the SQL estimator running native vs. compatibility mode. What I am wondering is whether any of you have seen performance improvements from running a SQL database in compatibility mode on a newer server, i.e. are there any papers or actual experience suggesting I will get better performance from SQL 2008 native vs. SQL 2014 with compatibility mode on the database? Do I actually benefit from the newer server? We are licensed either way, and the ERP is only "guaranteed" on 2008.
I hope to get feedback from anyone who has run into this problem before (compatibility mode has been around for a long time, so I am sure someone has). Our databases are ~400 GB, clustered, and on a SAN, which makes a true real-world test difficult to run, even more so because the SAN will "prioritise" things, making the test harder still. We all know that SQL 2014 performs better than 2012, but with poor test data that may not show, hence the general request.
I have never run into any issues with compatibility mode for any version of SQL Server. I also haven't really noticed any performance benefits or drawbacks doing so, but I admit that I haven't done any real timing tests. Usually when I've had to do that, I've upgraded the hardware on the box, so a true comparison is difficult.
Having said that, are you sure that's the way you want to go?
Why not just run a test environment with the database migrated to 2008 and no compatibility mode? Switching levels is a one-line change (see the sketch below), and if everything works for you in the test environment, then upgrade directly.
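Switching a restored test copy between modes is cheap, so you can measure both. A sketch, where YourERPDatabase is a placeholder name (compatibility level 100 corresponds to native SQL Server 2008 behavior):

    -- Check the current compatibility level
    SELECT name, compatibility_level
    FROM sys.databases
    WHERE name = 'YourERPDatabase';

    -- Switch the test copy to native 2008 behavior
    ALTER DATABASE [YourERPDatabase] SET COMPATIBILITY_LEVEL = 100;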
Most SQL Server upgrades are pretty painless, unless you're trying to skip a version or two, which you aren't. Even in failover clusters they aren't that hard as long as you follow the step-by-step procedure from Microsoft.
I have deployed a "Customer" customfield plugin in JIRA which accesses a SQL Server database on a server. When I go to EDIT an issue with the customfield enabled, the webpage takes an extra 2-3 seconds to load. If I then disable the customfield, there is no lag at all; the page loads instantaneously, so it is definitely related to this new customfield. It is also important to note that in the development environment there is no lag at all, regardless of whether the customfield is enabled or not.
It's strange because the SQL driver I am using in BOTH the production and development environments is 'net.sourceforge.jtds.jdbc.Driver'. The URL I am using to access the Customers database is also exactly the same in both environments: jdbc:jtds:sqlserver://:. And the exact same version of the driver is being used: jTDS version 1.2.4.
I can't think of anything else that could possibly cause the problem.
Any help would be much appreciated.
Thanks everyone.
Well, it's still not uncommon to encounter production and development environments that are considered identical from a distance but in fact are not ;)
For example, does the customfield execute "heavy" SQL operations, by chance? In that case, even minor differences between the dev and prod environments could make all the difference in the end:
SQL Server version and configuration details
JDBC driver version (you verified this one already)
JIRA version and configuration details
Any relevant hardware difference could have similar effects, e.g. available memory, memory speed, hard disk speed, CPU performance, number of cores, etc.
The best approach in cases like this is almost always to identify the bottleneck by actual measurement, be it via code instrumentation, SQL Server monitoring, or external monitoring solutions; the latter two in particular should help you identify slow SQL queries, if any. One such server-side check is sketched below.
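For the SQL Server monitoring part, the plan-cache DMVs are usually the fastest way to see whether the customfield's queries are the slow piece. A sketch (needs VIEW SERVER STATE permission; times are in microseconds):

    -- Top statements by average elapsed time, from the plan cache
    SELECT TOP 10
        qs.execution_count,
        qs.total_elapsed_time / qs.execution_count AS avg_elapsed_microsec,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset
              END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY avg_elapsed_microsec DESC;

If the customfield's SQL shows up near the top on production but not on development, you have your bottleneck.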
Good luck!
I am considering the migration for 4 reasons:
1) SQL Server installation is a nightmare, especially for single-user software. Even though I typically have 3-20 users, sometimes I sell my software to single users, and it is incredible to have trouble installing the DB when installing the application just means copying an exe... (Note: my largest installation is 100 users, but there is no upper limit.) The software installs in 10 seconds, SQL Server in 1 hour. Firebird installation is much easier.
2) SQL Server runs on Windows only.
3) My customers all have the Express edition.
4) I am not using any advanced features. I am now starting to use FILESTREAM, but the main reason for that is the Express edition's 4/10 GB database size limit.
So these are all pros of moving to Firebird.
What are the cons?
I could also plan to support both platforms, but I fear this would backfire.
MS SQL Server is faster and better optimized for large databases and complex queries, especially if administered properly, while Firebird allows you to run without any administration and just forget about it. Although this penalty affects a very small percentage of users, before a complete migration I suggest you first migrate just the data and then test the speed of your most complex queries on both systems. If the speed satisfies you, then you are good to go.
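For the speed test itself, both engines can report elapsed time from their standard shells, so the comparison needs no extra tooling. A sketch, where the invoices query stands in for your own heaviest query:

    -- SQL Server (SSMS or sqlcmd): print CPU and elapsed time per statement
    SET STATISTICS TIME ON;
    SELECT COUNT(*) FROM invoices WHERE total > 1000;
    SET STATISTICS TIME OFF;

    -- Firebird (isql): SET STATS prints per-statement performance figures
    SET STATS ON;
    SELECT COUNT(*) FROM invoices WHERE total > 1000;

Run each a few times and compare warm-cache numbers, since the first execution on either engine pays compile and I/O costs.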
I don't see any, besides the need to thoroughly test all of your existing code for compatibility issues.
Firebird is wonderful for both server installations and single-user installations.
It has an embedded version that is suitable for single-user scenarios, with nothing to install.
It uses the same database file for both server and embedded use, so you can easily go from single-user to multi-user and vice versa.
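To make that concrete, a small isql sketch; the path and the default SYSDBA credentials are placeholders, and the only difference between the two modes is whether a host prefix is present in the connection string:

    -- Same app.fdb file, reached two ways from isql
    CONNECT 'localhost:C:\data\app.fdb' USER 'SYSDBA' PASSWORD 'masterkey'; -- via the server
    CONNECT 'C:\data\app.fdb' USER 'SYSDBA' PASSWORD 'masterkey';           -- embedded, direct file access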
I embedded Firebird 2.5 in my freeware software today. It's great, and there have been no connection problems. I used multiple processes to run long insert and read operations simultaneously, and it all worked correctly, as expected. I am waiting for Firebird 3.0. I recommend Firebird when you don't want to rely on commercial database software.
If there is only one user, you can use SQLite, which is even easier to manage than Firebird.
I'm writing some code that needs to work against an array of different database products.
MySQL
SQL Server 2000 to 2008
PostgreSQL
Oracle 9i & 10g
Jet 4.0 (MS Access)
MSDE
Sybase Adaptive Server Anywhere
Sybase SQL Anywhere
Progress OpenEdge
We have a set of generic integration tests that we need to run against each database product, as part of a continuous integration build.
I'm sure other people have had to set up similar test environments, and I would like to tap into some of that wisdom: what strategies did you end up using, and what worked well or did not work well?
Some thoughts:
Separate virtual machines for each of the products, each allocated a small amount of memory (easier to manage for certain scenarios, or where we have slightly different setups for individual products).
A couple of virtual machines, or even a single virtual machine, for all the products (e.g. an Ubuntu box for PostgreSQL & MySQL and a Windows Server 2008 machine for the remaining products). I like one or two VMs because that is a more portable environment for running the tests, e.g. when on the road / off-site; my laptop would probably crawl to a halt running 8 or 10 small VMs.
And finally, how have you tackled the prohibitive cost of some of the commercial products, e.g. Oracle or Progress OpenEdge? Are previous versions still available, are there free "single-developer" editions, or are there cheaper routes to purchasing these products?
We also use VMware for our servers, one VMware host per database. You should be OK putting several databases on one VM.
You shouldn't have much of a problem with expensive software licenses. Oracle, for example, allows you to have a development license for all of their products. As long as you are not running a production DB in the VM on your laptop, you should be OK. Of course, IANAL.
For Oracle, get the Enterprise Edition if that's what your customers will be using (most likely).
Be sure to take advantage of VMware snapshots. You can get the databases configured perfectly for running your tests, run your tests, and then revert the databases back to the pre-test state, ready to run your tests again.
What we do here is (mostly) a VM for each configuration required; we also target different operating systems, so we have something like 22 configurations under test. These VMs are hosted on an ESX server and are scripted to start and stop when required so that they are not all running at once. It was a lot of work to set up.
In terms of the cost of the database servers, the kind of software we are building requires the top-end versions, so the testing costs have been factored into the overall dev costs (we just had to "suck it up").
Oracle does have a free "Express Edition". It doesn't have all of Oracle's functionality, but if you are also running against Jet, that shouldn't be an issue.
Architecturally, it may be better to create an integration testing farm of servers powerful enough to run the variously configured VMs. It could be fronted with a queue process listening for integration testing requests (which could be triggered by svn commits or other source control check-in triggers).
For the most accurate range of testing, you probably want a separate VM for each configuration. This reduces the risk of conflicting configs and also increases the flexibility of running a custom set of VMs (e.g. Oracle + MySQL + PostgreSQL and not the others). Depending on your build process, it may also allow you to run 10-minute builds.
A benefit of running an integration test farm is that if you're on the road with your laptop, you can trigger the integration build after code check-in, run all the tests, and have it notify you of the results. You can also archive each request and its results to help diagnose failing builds.
Oracle license excerpt:
We grant you a nonexclusive, nontransferable limited license to use the programs only for the purpose of developing, testing, prototyping and demonstrating your application, and not for any other purpose.
So you can download the full Oracle versions 9, 10, and 11 and use them free of charge, as stated in the license.
Check the downloads section at www.oracle.com.
It sounds like you have a good idea of what to do already. I'd suggest the following VM layouts:
A Windows Server 2008 box hosting the following:
MSDE
Jet 4.0 (MS Access)
Sybase Adaptive Server Anywhere
Sybase SQL Anywhere
Progress OpenEdge
On an Ubuntu or Red Hat box:
MySQL
PostgreSQL
Oracle 9i
Oracle 10g (use the free Express Edition)
I don't see any need to test against the full-blown SQL Server and Oracle editions. The free editions are stripped down, so there is very little risk of code that works against them breaking when you run it against the full versions of those servers. If you had tested against the full server and then moved down to the free editions, then yes, some things might break, but by testing against the least common denominator you should ensure good quality.