SQL query testing tools - database

I'm working for a company that has a crappy homemade testing framework, and they've asked me to research the market for an existing tool that could replace it.
The requirements are basically:
- being able to connect to any database (at least through the common protocols)
- checking whether a given query returns the expected values (i.e. doing assertions)
Do you know of any tool close to this?
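To make it concrete, the kind of test we would want to write is something like the sketch below: plain JDBC plus JUnit 5, with placeholder connection details, table name and expected count (any database with a JDBC driver on the classpath works the same way).

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertTrue;

    class OrderCountTest {

        // Placeholder connection details; swap in whatever database you are testing against.
        private static final String URL  = "jdbc:postgresql://localhost:5432/sales";
        private static final String USER = "tester";
        private static final String PASS = "secret";

        @Test
        void openOrdersMatchExpectedCount() throws Exception {
            try (Connection con = DriverManager.getConnection(URL, USER, PASS);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                         "SELECT COUNT(*) FROM orders WHERE status = 'OPEN'")) {
                assertTrue(rs.next(), "query returned no rows");
                assertEquals(42L, rs.getLong(1), "unexpected number of open orders");
            }
        }
    }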

Related

Database queries as application healthchecks - management tool

Hey there fellow Stackoverflowers,
In our company we have several application stacks running on different types of databases (MySQL, PostgreSQL, MS SQL, Azure SQL, ...). For monitoring purposes we run some scripted queries against the databases of all these application stacks, with Nagios reporting the results back by email.
Now, since our support team would also like easy access to these queries in order to run or modify them manually, we are considering building an application specifically designed to store, run and modify queries that can be executed against any of the database types listed above, offering both a user-friendly web interface and a REST API with JSON output for our new SENSU-based reporting stack, to be deployed in a few months.
My personal belief is that a tool like this must already be out there, since the use case for it is so generic. However, googling did not yield any results even closely resembling what I am looking for.
So my question to you is: Do you know of such a tool? If you had to build it yourself: what would your approach be? We're mostly a Java/C++ team, but are open to all options.
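For a sense of scale, the core of what we would be building is small. A rough sketch of the run-a-stored-query-and-return-JSON part in plain JDBC (the query names, SQL and connection handling are made up for illustration, and the JSON building is deliberately naive, with no escaping or null handling):

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.ResultSetMetaData;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.Map;

    public class QueryRunner {

        // In a real service these would live in a config store or table, not a hard-coded map.
        private static final Map<String, String> QUERIES = Map.of(
                "orphan_users", "SELECT COUNT(*) AS orphans FROM users u "
                        + "LEFT JOIN accounts a ON a.user_id = u.id WHERE a.id IS NULL",
                "failed_jobs", "SELECT id, failed_at FROM jobs WHERE state = 'FAILED'");

        /** Runs a named query and returns its rows as a crude JSON array. */
        public static String runAsJson(Connection con, String queryName) throws SQLException {
            String sql = QUERIES.get(queryName);
            if (sql == null) throw new IllegalArgumentException("unknown query: " + queryName);

            StringBuilder json = new StringBuilder("[");
            try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                ResultSetMetaData md = rs.getMetaData();
                boolean firstRow = true;
                while (rs.next()) {
                    if (!firstRow) json.append(",");
                    firstRow = false;
                    json.append("{");
                    for (int i = 1; i <= md.getColumnCount(); i++) {
                        if (i > 1) json.append(",");
                        json.append("\"").append(md.getColumnLabel(i)).append("\":\"")
                            .append(rs.getString(i)).append("\"");
                    }
                    json.append("}");
                }
            }
            return json.append("]").toString();
        }
    }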
Some, or maybe all, of this can be done with an existing API called NAGIRA. Look it up on Google. It will definitely give you all the results in JSON format, and I think it would also allow you to run checks manually. So you could maybe build a little front end and call this API to achieve what you want.
A little late of a reply, but check out http://cloudmonix.com -- it offers the ability to create metrics based on custom SQL queries and supports SQL Azure, SQL Server, MySQL, and Oracle. It also integrates with Nagios (and Zabbix).

SQL Data verification framework?

I receive a variety of flat files that need to be transformed and aggregated in several stages of an ETL process before being loaded into a SQL Server database.
After each stage, I'd like to verify the data in several ways, and I'm looking into existing technologies that can help.
Upon receipt, the data needs to be validated for things such as truncated data and date formatting, and generally checked to ensure it is ready for transformation.
After the data has been cleaned in this way, I want to verify it. This would consist of comparing values such as row counts, % nulls and average values to previous loads or to predefined values. If the verification fails, the developer should be alerted.
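To make that concrete, the sketch below shows roughly the kind of check I mean, in plain JDBC; the staging table, columns and thresholds are made up, and "alerting" is reduced to throwing an exception for the calling job to catch.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class StageVerifier {

        /** Fails loudly if the row count, null rate or average drifts outside expected bounds. */
        public static void verifyStagedOrders(Connection con, long previousRowCount) throws SQLException {
            String sql = "SELECT COUNT(*) AS row_cnt, "
                       + "AVG(CASE WHEN customer_id IS NULL THEN 1.0 ELSE 0.0 END) AS null_rate, "
                       + "AVG(order_total) AS avg_total "
                       + "FROM stage_orders";

            try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                if (!rs.next()) throw new IllegalStateException("verification query returned no row");
                long rowCnt     = rs.getLong("row_cnt");
                double nullRate = rs.getDouble("null_rate");
                double avgTotal = rs.getDouble("avg_total");

                // Compare against the previous load and some predefined tolerances.
                if (rowCnt < previousRowCount * 0.9)
                    throw new IllegalStateException("row count dropped more than 10%: " + rowCnt);
                if (nullRate > 0.02)
                    throw new IllegalStateException("customer_id null rate above 2%: " + nullRate);
                if (avgTotal <= 0)
                    throw new IllegalStateException("average order_total is not positive: " + avgTotal);
            }
        }
    }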
tSQLt, the database unit testing framework, has several assertions that could be used to do what I want. It's easy to set up and has decent documentation. It's the nearest tool I can see, but this use is a long way from what it was designed for.
The alternative is to create my own tool, but I want to know - does something like this already exist?
After a bit of searching I found a commercial solution which I think would solve the problem: QuerySurge. There are a couple of similar tools (ETL Validator, for example), though QuerySurge claims to be unique.
It works by:
- Using set comparison between two queries and raising errors if they do not match. This could be row counts before/after transformations, or simply checking that a result returns nothing (the basic technique is sketched below).
- Queries can be performed against any JDBC-compliant data source using ANSI SQL plus any connection-specific SQL. The results are stored on a separate server with a MySQL backend, which you can either host yourself or run on their servers.
- It permits command-line usage and therefore supports continuous integration tools.
- A nice feature is the grouping of tests into test suites, although it is not clear how the results of a group affect an overall test.
- The built-in reporting tools also look nice.
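The underlying idea in the first point (comparing the result sets of two queries and failing when they differ) is simple enough to sketch in plain JDBC. This is only an illustration of the technique, not QuerySurge's actual API: both queries are assumed to end in an ORDER BY so the row order is deterministic, and pulling everything into memory only works for modest result sizes.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.util.ArrayList;
    import java.util.List;

    public class ResultSetDiff {

        /** Returns true when both queries produce exactly the same rows in the same order. */
        public static boolean sameResults(Connection src, String srcSql,
                                          Connection tgt, String tgtSql) throws SQLException {
            return fetch(src, srcSql).equals(fetch(tgt, tgtSql));
        }

        // Pulls the whole result set into memory as a list of rows (fine for test-sized results).
        private static List<List<Object>> fetch(Connection con, String sql) throws SQLException {
            List<List<Object>> rows = new ArrayList<>();
            try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                int cols = rs.getMetaData().getColumnCount();
                while (rs.next()) {
                    List<Object> row = new ArrayList<>(cols);
                    for (int i = 1; i <= cols; i++) {
                        row.add(rs.getObject(i));
                    }
                    rows.add(row);
                }
            }
            return rows;
        }
    }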
That's the majority of what I gleaned from the website. I haven't downloaded the trial as the software itself is outside of my price range.
The tool is not complicated in principle, so we'll be developing our own framework to cope instead.

MS Access 97 application working directly with MS SQL Server 2005

Please give me the most serious arguments against this.
The application opens a connection directly to MS SQL Server and executes queries directly.
So what I'd like to ask:
1) Why is this wrong when there can be up to 1000 users executing huge queries?
2) What serious problems can that cause?
3) What should I do?:)
Arguments, the most serious arguments against this kind of implementation!
One of the things to consider is how the queries are done. 1000 queries against a SQL Server DB might be manageable, but 1000 Access queries in which the table is locked, or which are actually joins or views, could use dramatically more memory. It really depends on how the application is written. Some Access apps open a recordset and page through the records one at a time, or fetch a few dozen and work on those, but sometimes Access grabs the whole recordset, for example to allow users to page through data. And I have seen Access lock a set of tables to allow editing of them. That would be bad in your scenario.
Of course, I wholeheartedly agree with the "10 years out of support" issue. That is a guaranteed problem. Mine is only a possibility. And you should probably update SQL Server to a current version also, for the same reason.
What about:
Access 97 is totally outdated and won't get any updates, has a crappy look and crappy functionality, and in general - IF it requires rework - should be updated.
Problems? You are running on a 10-year-old, out-of-support platform. What problems can that cause? Well - what about limited support?
Upgrade at least to 2007, better 2010 (coming in a couple of weeks), when you have a moment. I personally despise Access-based applications (crappy architecture to start with, etc.), but if there has to be one, updating to Access 2010 is possibly the most painless way to go.
Access 2003 or 2007 would be just fine for the scenario as long as you had an Access developer who was up to speed on how to develop for client/server with large user populations.
Access 97 is still an awfully nice version of Access. I think it's the best version ever produced.
But it is out of support and predates the alteration of default permissions in Windows implemented with the release of Windows 2000. This means that it has some problems in installing with its default permissions (it expects write access to its application folders and registry keys). An installation script can easily alter these appropriately, but you're still left with problems in certain contexts, like trying to run it in Windows Terminal Server/Citrix, where it very often just completely breaks.
I would like to hear an explanation of exactly why someone would choose A97 for new development. Of course, I may be misinterpreting. You may be asking about an existing app, in which case I'd go with "if it ain't broke, don't fix it," and then ask exactly what it is that is perceived as "broken." Those things can be fixed, though it's unlikely that simply upgrading from A97 to something more recent is going to do the job.
I'm currently nearly finished with a brand new application written in Access 97 that stores its data in SQL Server 2008. As has been said many times before, the Access/SQL Server combination really works great.
In line with my other applications it is completely unbound, using ADO to get the data from the server. I won't drag up that debate again here, but it is something you really want to look into, as it can offer some great benefits.
Most of the SQL Server guides you will find will ask you to check that you have the correct indexes, and to try to identify the slowest-running parts of the system, or the ones that get called a lot, and then look at making them faster. That might lead you to create a covering index or to denormalise the data in some way.
Generally, what is good practice for Jet also works well for SQL Server: make a good table schema with a good clustered index choice and good supporting indexes, and you are 95% of the way there.

An app to search a database

I'm not a developer (not a proper one), but I'm always up for an excuse for self-development.
I work in a support team for a reputable software vendor, and we currently use a helpdesk application called iSupport.
It's not a bad piece of kit, and I'm not sure whether it has been set up badly (I'm trying to find out), but the biggest problem I face (being on the front line as an analyst who uses it 8 hours a day) is its inability to search easily.
A new 'incident' will come in. The client will report some errors in a log, and perhaps mention some other keywords when describing the symptoms.
Now, I know I have probably answered a similar problem before (but can't remember the solution), or, even more likely, a fellow analyst may have answered the same question before.
I would like the ability to have one search box (think Google) that searches through EVERY incident that has ever been created and returns me ALL incidents that contain that keyword.
At the moment the search is very poor. You can take the time to set up searches, specifying which fields to search on and which values to filter by (perhaps by analyst or category, etc.), but this takes time and, more often than not, returns poor results, so it would have been easier to try to track things down manually yourself.
All of the data sits in underlying SQL Server tables (I have requested a subset of the data).
What I'm thinking of is creating a separate front end that is just a basic search box and that's it. This app would point at all the relevant fields in the SQL tables and pull the relevant records out into a table. Once I have the ID for the incident, it is then a simple job to pull that incident back up in the iSupport front end.
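In its crudest form I imagine that is just one parameterised query across the text columns, something like the sketch below (the table and column names are guesses; I don't know the real iSupport schema):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.util.ArrayList;
    import java.util.List;

    public class IncidentSearch {

        /** Returns the IDs of every incident whose subject, description or resolution mentions the keyword. */
        public static List<Long> search(Connection con, String keyword) throws SQLException {
            String sql = "SELECT id FROM incidents "
                       + "WHERE subject LIKE ? OR description LIKE ? OR resolution LIKE ?";
            String pattern = "%" + keyword + "%";
            List<Long> ids = new ArrayList<>();
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, pattern);
                ps.setString(2, pattern);
                ps.setString(3, pattern);
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        ids.add(rs.getLong("id"));
                    }
                }
            }
            return ids;
        }
    }

I realise that LIKE '%keyword%' can't use a normal index and will scan the whole table, which brings me to the indexing question below.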
I was thinking along the lines of Google Desktop style app (shortcut key brings up the search box).
Now that's as much thinking as I've done. I'm looking for some advice on where to go next.
I know, for instance, that Google Desktop crawls and indexes all the physical files on your machine. Would I have to do something similar for a database, as I imagine there may be a large number of records/fields/tables to search through?
TBH, if it works, I'm not that fussed (to begin with) if it takes a while to process the query, as long as it returns relevant results. But ideally I'd like it to be quick.
I'll leave it at that for now.
Where should I begin?
If .NET is your thing, then Lucene.NET will work well with SQL Server to give you that Google-search feeling.
The Stack Overflow websites use it; you can listen to the SO podcast where Jeff and Joel bitch about why SQL Server full-text search sucks so much.
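Bare bones, the index-then-search loop looks like the sketch below. It uses the Java flavour of Lucene (Lucene.NET mirrors the same API very closely); the index path, field names and sample text are made up, and the stored-fields lookup (searcher.doc) varies slightly between Lucene versions.

    import java.nio.file.Paths;

    import org.apache.lucene.analysis.standard.StandardAnalyzer;
    import org.apache.lucene.document.Document;
    import org.apache.lucene.document.Field;
    import org.apache.lucene.document.StringField;
    import org.apache.lucene.document.TextField;
    import org.apache.lucene.index.DirectoryReader;
    import org.apache.lucene.index.IndexWriter;
    import org.apache.lucene.index.IndexWriterConfig;
    import org.apache.lucene.queryparser.classic.QueryParser;
    import org.apache.lucene.search.IndexSearcher;
    import org.apache.lucene.search.ScoreDoc;
    import org.apache.lucene.search.TopDocs;
    import org.apache.lucene.store.Directory;
    import org.apache.lucene.store.FSDirectory;

    public class IncidentIndex {

        public static void main(String[] args) throws Exception {
            StandardAnalyzer analyzer = new StandardAnalyzer();

            // 1. Index: in reality you would loop over rows pulled from the iSupport tables.
            try (Directory dir = FSDirectory.open(Paths.get("incident-index"));
                 IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(analyzer))) {
                Document doc = new Document();
                doc.add(new StringField("id", "12345", Field.Store.YES));
                doc.add(new TextField("text", "Connection timeout when posting invoices", Field.Store.NO));
                writer.addDocument(doc);
            }

            // 2. Search: one box, one query string, and back come the matching incident IDs.
            try (Directory dir = FSDirectory.open(Paths.get("incident-index"));
                 DirectoryReader reader = DirectoryReader.open(dir)) {
                IndexSearcher searcher = new IndexSearcher(reader);
                TopDocs hits = searcher.search(new QueryParser("text", analyzer).parse("timeout"), 10);
                for (ScoreDoc hit : hits.scoreDocs) {
                    // Newer Lucene versions replace searcher.doc() with searcher.storedFields().document().
                    System.out.println("incident " + searcher.doc(hit.doc).get("id"));
                }
            }
        }
    }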
I'd suggest this might be a good candidate for a web application - an asp.net / jsf website. This means that you can control it from one machine, but all your colleagues can make use of it without a deployment headache every time you add a new feature...
The incident database is mission critical (critical to your relationship with customers), so if you came to me with this request I would insist that you accessed the database through a user that had select permissions on the appropriate tables, and very little else. This is a good thing from your point of view too: it lets you operate knowing you're not going to cock anything up...
The SSMS Tool Pack (an add-in to Management Studio) contains a feature to search Table, View or Database Data.
Have a look into the Full-Text Search functionality of SQL Server.
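Once a full-text index exists on the relevant columns (CREATE FULLTEXT INDEX is a separate one-off step in T-SQL), the query side is a single CONTAINS predicate. A rough JDBC sketch against a hypothetical incidents table, with placeholder connection details:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class FullTextSearchDemo {

        public static void main(String[] args) throws Exception {
            // Placeholder connection string; assumes a full-text index has already been
            // created on incidents(description).
            String url = "jdbc:sqlserver://localhost;databaseName=isupport;user=reader;password=secret";
            try (Connection con = DriverManager.getConnection(url);
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT id, subject FROM incidents WHERE CONTAINS(description, ?)")) {
                ps.setString(1, "\"connection timeout\"");   // quoted phrase, per full-text syntax
                try (ResultSet rs = ps.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id") + "  " + rs.getString("subject"));
                    }
                }
            }
        }
    }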
iSupport includes a full-text search function. Add a 'Global Search' widget to a dashboard you create.

Test data generators / quickest route to generating solid, non-repetitive, but not-real database sample data?

I need to build a quick feasibility test / proof of concept of a remote database for a client, which will be populated with mostly typical Company and People data (names, addresses, etc.); 150K records or so. The sample databases mentioned here were helpful:
Where can I find sample databases with common formatted data that I can use in multiple database engines?
...but, I'd like to be able to generate sample data like this easily on less-typical datasets as well. Anyone have any recommendations for off-the-shelf (or off-the-web) solutions?
For SQL Server there is a great solution: Red Gate SQL Data Generator. It's not cheap, but it does its job very well.
I couldn't find a good one off the shelf and so we built one based on some simple concepts. If you don't find any good answers let me know and I'll share the structure and any files you need.
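For a flavour of what such a home-grown generator tends to look like (a generic sketch, not the tool mentioned above): combine random picks from small word lists and write the rows out as CSV for whatever bulk-load mechanism your database offers.

    import java.util.List;
    import java.util.Random;

    public class FakePeople {

        private static final List<String> FIRST = List.of("Alice", "Bob", "Carol", "David", "Emma");
        private static final List<String> LAST  = List.of("Smith", "Jones", "Nguyen", "Garcia", "Khan");
        private static final List<String> CITY  = List.of("Leeds", "Austin", "Utrecht", "Osaka", "Lyon");

        public static void main(String[] args) {
            Random rnd = new Random(42);            // fixed seed, so the data set is reproducible
            for (int i = 0; i < 10; i++) {          // bump the count up to 150000 for the real load
                String first = FIRST.get(rnd.nextInt(FIRST.size()));
                String last  = LAST.get(rnd.nextInt(LAST.size()));
                String email = (first + "." + last + i + "@example.com").toLowerCase();
                String city  = CITY.get(rnd.nextInt(CITY.size()));
                // Emit as CSV; bulk-load with BCP, LOAD DATA, COPY or whatever your engine offers.
                System.out.println(String.join(",", first, last, email, city));
            }
        }
    }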
Check out my answer to this earlier question here.
I'm not sure what database you are using, but hopefully it proves useful. I still haven't used the tool myself, but I have heard more good reports when I've passed on the link.
For my specific need this time (which in this case was mostly "people"), I ended up going with Fake Name Generator's 1 Million fake names CSV file for $25. Seemed the quickest/easiest route for the volume of data I needed. Worth checking out if your needs are similar:
http://www.fakenamegenerator.com/
http://www.fakenamegenerator.com/order.php

Resources