Batch extract of SQL Server DDL

Since we can point and click in SSMS to obtain DDL, there must be an assembly or DLL of some sort called by the GUI. Does anyone have any familiarity with how to tap into that?
The drive for this comes from our need to capture DDL as part of jobs. Some of our batches only need the data for one table or even one index, others could use the entire database. Getting the detail as needed is critical. That detail might be used as part of a procedure or placed into a file.
I know there are various solutions to the problem of batch/automated retrieval of SQL Server DDL (versions 2000-2014) on the web. None are directly supported by Microsoft, and for what I need, that is a considerable weakness.
Of the items on the web, some use scripts and the system views/tables to build DDL. I admire the work that went into these, but such things tend to have problematic support and can break from one SQL Server version to the next. Also, a number of vendors have tools, and there is at least one open source project (OpenDiff) that ventures into this area. But vendor tools won't easily fit into my batch streams. Third-party tools also require installation on client systems, which is always a sensitive area, and usually have licensing requirements. Any third-party tool, of course, introduces the various kinds of vendor dependencies.
There is one item I have found, SMOScript, that uses SMO (with which I am only slightly familiar). Perhaps it is doing something similar to what I want, though the limited notes on the project imply that it does not allow the fine-grained detail I need, such as scripting a single index. From a management viewpoint, it also introduces a dependency on that project and its single author.
That assembly used by SSMS, whatever it is, must be kept up to date by MS. If calling it is possible (though I am sure it is foolish to hope it is also simple), the weakness of a third-party dependency is eliminated. So I don't need web links to scripts and third-party tools; I have those (but thanks for the thought). If someone can point me toward what SSMS is using, though, that might be a great help. In the meantime, for what it's worth, I'll be researching SMO.
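For what it's worth, one corner of this is already reachable from plain T-SQL: the engine stores the source text of programmable objects (procedures, views, functions, triggers), so their DDL can be pulled directly on SQL Server 2005 and later (2000 keeps it in syscomments instead). Tables and indexes have no stored source text, which is precisely the gap SMO-based tools (and, as far as I can tell, SSMS itself) fill by reconstructing DDL from metadata. A minimal sketch, with placeholder object names:
-- Source text of programmable objects is stored by the engine (SQL Server 2005+).
SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.SomeProcedure')) AS ddl;
-- Equivalent, via the catalog view that backs it:
SELECT definition FROM sys.sql_modules WHERE object_id = OBJECT_ID(N'dbo.SomeView');
-- Tables and indexes have no stored definition text; their DDL must be
-- reconstructed from the metadata views, which is what SMO does.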

Related

AX 2012 - Restore Production over Non Production

I have an AX 2012 database in Production that I need to restore over UAT. I've never done this before and am wondering on the steps involved.
I have found many blog articles, but they all seem to differ on the steps, and I want to avoid any accidents.
http://ajit-dynamicsax.blogspot.com/2012/08/ax-2012-database-restore-without.html
https://dynamicsuser.net/ax/f/developers/49023/taking-a-copy-of-live-production-into-test
http://theaxexperience.blogspot.com/2013/06/copying-production-dynamics-ax-2012-or.html
Does anyone manage an AX 2012 environment and know how to do this safely? I know how to do a backup/restore, but I'm asking more about the application-specific steps needed after the restore process.
To add to what FH-Inway has said, if you do a SQL restore, first take a UAT backup and then the bare minimum you'll need to do is:
UPDATE SYSSQMSETTINGS SET GLOBALGUID = '00000000-0000-0000-0000-000000000000'
DELETE FROM SYSCLIENTSESSIONS
DELETE FROM SYSSERVERSESSIONS
DELETE FROM SYSSERVERCONFIG
You'll also need to decide what to do with BatchJob and Batch tables...meaning you probably want to update everything to a hold status.
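For the hold status, a hedged sketch of what that update might look like; the numeric values come from the AX BatchStatus enum, and 0 is assumed here to be Hold in standard AX 2012, so verify it against your environment before running anything:
-- Put all batch jobs and their tasks on hold after the restore.
-- 0 is assumed to be BatchStatus::Hold; confirm the enum value first.
UPDATE BATCHJOB SET STATUS = 0
UPDATE BATCH SET STATUS = 0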
Then there are environment-specific parameters (Prod vs UAT) that you'll need to update, such as reporting server URLs, perhaps email (SMTP) parameters, and, if you have any 3rd-party ISVs that connect to external services, their endpoints (prod URL vs non-prod URL).
You may need to change DB permissions too, as your SQL service account may be different.
It's a very iterative process. What's good to know is that if you screw it up, you can just re-try and fix parameters. You're essentially completely replacing the UAT environment each time, so most anything you could do wrong is of little concern.
The main concern is services that are external to AX! The absolute last thing you want to do is have users in your UAT system processing fake orders that are accidentally connected to a 3rd-party shopping cart (for example), charging real customer credit cards.
The same goes for a data warehouse or some internal database: UAT may now be pointing at the Production one and inserting records into it.
This question is a bit broad for Stack Overflow, so I'm going to limit my answer to the scope of what data needs to be considered after a data transfer. Note that there can be a lot of other steps involved. Although a bit outdated, Moving between Microsoft Dynamics AX 2012 environments should be a good starting point. But if you have never done this before, I strongly suggest doing it with someone who has.
Although it is (and probably will be forever) in beta and has not been updated in quite some time, take a look at the Test Data Transfer Tool. It comes with several Exclude*.txt files that can give you a good idea of which data in the standard AX 2012 database Microsoft considers environment-specific (and therefore excludes from the data transfer). Even if you don't end up using the tool, these files provide a good starting point on which data should be considered after a backup restore.
If you don't use the tool, SQL scripts are the way to go. The Data export/import functionality mentioned in some of the links you listed should not be used. It does not handle surrogate key relationships between tables, and there may also be issues with container fields.
In my experience, data restores are also always very project specific and it usually takes a few iterations until it can be done without issues. I highly recommend a checklist and/or thorough documentation of the process.

Databases and "branch"

We are currently developing an application which uses a database.
Every time we update the database structure, we have to provide a script to update the database from the previous version to the current one.
So the database currently holds a number that gives us its version, and our software runs the updates when we want to use an "old" database.
The issue we are encountering is when we have branches:
When we create a big new feature that will not be available to users (and not included in releases), we create a branch.
The main branch (trunk) is merged into the feature branch regularly, to ensure the branch has the latest bug fixes.
The issue is with our update scripts. They update from the previous version to the current one, then update the version number of the database.
Imagine that we have the DB version 17 when creating the branch.
We then create the branch, and make changes on the trunk DB. The trunk DB is now at version 18.
Then we make a DB change on the branch. Since we know there has already been a new version "18", we create version 19 and the updater 18->19.
Then the trunk is merged into the branch.
At this very moment we may have some updaters that will never run.
If someone updated his database before the merge, his database is flagged as having version 19, so the update 17->18 will never be applied.
We want to change this behavior but we can't figure out how:
Our constraints are:
We are unable to make all changes on the same branch
Sometimes we have more than just 2 branches, and we can only merge from the trunk to the feature branch until the feature is finished
What can we do to ensure continuity between our database branches?
I think the easiest way is to use the Ruby-on-rails approach. Every DB change is a separate script file, no matter how small. Each script file is numbered, and when you do an upgrade you simply run each script from the number your DB currently is to the last one.
What this means in practice is that your DB version system stops being v18 to v19, and starts being v18.0 to v18.01, then v18.02 etc. What you release to the customer may get rolled up into a big v19 upgrade script, but as you develop, you will be making many, many small upgrades.
You'll have to modify this slightly to work for your system: either each script gets renumbered as it is merged to the branch, or the upgrader must track each applied script individually rather than just the last number, so that missing holes still get filled in as scripts are merged across.
You will also have to roll up these little upgrades into the next major number as you create the release tag (on the trunk first) to keep things sane.
edit: so fundamentally you first have to get rid of the notion of using an upgrade script to go from version to version. For example, if you start with a table, and trunk adds column A and the branch adds column B, then you merge trunk to branch - you cannot realistically "upgrade" to the version with both, unless the branch version number is always greater than the trunk's upgrade script, and that doesn't work if you subsequently merge trunk to the branch again. So you must therefore scrap the idea of a "version" that applies to development branches. The only way round that is to update each change independently, and track each change individually. Then you can say you need the "last main release plus colA plus colB" (admittedly, if you merge trunk in, you can take the current main release from trunk, whether it's v18 or v19, but you still need to apply each branch update individually).
So you start with trunk at DB v18. Branch and make changes. Then you merge trunk later, when the DB is at v19. Your earlier branch changes still need to be applied (or should already be applied, but you may need to write a branch-update script with all branch changes in it, if you re-create your DB). Note the branch does not have a "v20" version number at all, and the branch's changes are not made to a single update script like you have on trunk. You can add the changes you make on the branch as a single script (or one script of 'since the last trunk merge' changes) or as many little scripts.
When the branch is complete, the very last task is to take all the DB changes made for the branch and roll them up into a script that can be applied to the master upgrader; when it is merged onto trunk, that script is merged into the current upgrade script and the DB version number bumped.
There is an alternative that may work for you, but I found it to be a little flaky when you try to update DBs with data; sometimes it just couldn't manage the update and the DB had to be wiped and re-created (which, to be fair, is probably what would have had to happen if I had used SQL scripts at the time). That's the Visual Studio Database project. It stores every part of the schema as a file, so you'll have one script per table. Visual Studio itself hides these from you, showing designers instead of scripts, but they're stored as files in version control. VS can deploy the project and will try to upgrade your DB if it already exists. Be careful of the options; many defaults say "drop and create" instead of using ALTER to update an existing table.
These projects can generate a (largely machine-readable) SQL script for deployment, we used to generate these and deliver them to a DBA team who didn't use VS and only accepted SQL.
And lastly, there's Roundhouse, which is not something I've used, but it might serve as your new upgrader "script". It's a free project, and I've read it's more powerful and easier to use than VS DB projects. It's a DB versioning and change management tool; it integrates with VS and uses SQL scripts.
We have been using the following procedure for about 1.5 years now. I don't know if it is the best solution, but we haven't had any trouble with it (apart from some human errors in a delta-file, like forgetting a USE statement).
It has some similarities with the answer that Krumia gave, but differs in that in this approach only new change scripts/delta files are executed. That makes it a lot easier to write those files.
Delta files
Write all the DB changes you make for a feature in a delta-file. You can have multiple statements in one delta-file or split them up into several. Once that file is committed, it's best (and once merged, it's necessary) to start a new one and leave the old one untouched.
Put all the delta-files in one directory and give them a name pattern like YYYY-MM-DD-HH.mm.description.sql. It's essential that you can sort them in time (hence the timestamp) so you know which file needs to be executed first. Besides that, you don't want merge conflicts on those files, so the names should be unique (across all branches).
Merging/pulling
Create a merge-script (for example a bash script) that performs the following actions:
Note the current commit-hash
Do the actual merge (or pull)
Get a list of all the delta-files that are added with this merge (git diff --stat $old_hash..HEAD -- path/to/delta-files)
Execute those delta-files, in the order specified by the timestamp
By using git to determine which files are new (and thus which database actions haven't been executed yet on the current branch), you are no longer bound to version numbering.
Alternating delta-files
It might happen that within one merge, delta-files from different branches are 'new to execute' and that those files alternate like this:
2014-08-04-delta-from-feature_A.sql
2014-08-05-delta-from-feature_B.sql
2014-08-06-delta-from-feature_A.sql
As the timestamp determines the execution order, something from feature A is applied first, then feature B, then feature A again. If you write proper delta-files that are stand-alone/executable by themselves, that shouldn't be a problem.
We recently started using SQL Server Data Tools (SSDT), which replaced the Visual Studio Database Project type, to version control our SQL databases. It creates a project for each database, with items for views and stored procedures and the ability to create Data-Tier Applications (DACPAC) that can be deployed to SQL Server instances. SSDT also supports unit testing and static data, and offers developers the option of quick sandbox testing using a LocalDB instance. There is a good TechEd video overview of the SSDT tools and a lot more resources online.
In your situation you would use SSDT to manage your database objects in version control alongside your application code, using the same merging process to push features between branches. When it comes time to upgrade an existing install, you would create the DACPACs and use the Data-Tier Application upgrade process to apply the changes. Alternatively, you could use database synchronization tools such as DBGhost or RedGate to apply updates to the existing schema.
You want database migrations. Many frameworks have plugins for this; for instance, CakePHP uses a plugin from CakeDC to manage them. Here are some generic tools: http://en.wikipedia.org/wiki/Schema_migration#Available_Tools.
If you want to roll your own, perhaps instead of keeping the current DB version in the database, you keep a list of which patches have been applied. So instead of a version table with one row containing the value 19, you have a patches table with multiple rows:
Patches
1
2
3
4
5
8
Looking at this you need to apply patches 6 and 7.
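A hedged sketch of how that gap could be found with a query, assuming a one-column patches table like the one above (names are made up):
-- Each applied patch is recorded as a row.
CREATE TABLE patches (patch_no int NOT NULL PRIMARY KEY);
-- Report the first missing number of every gap: an applied patch
-- whose successor is absent, below the highest applied patch.
SELECT p.patch_no + 1 AS first_missing
FROM patches p
LEFT JOIN patches nxt ON nxt.patch_no = p.patch_no + 1
WHERE nxt.patch_no IS NULL
  AND p.patch_no < (SELECT MAX(patch_no) FROM patches);
With the rows above (1-5, 8) this returns 6; after applying 6 and 7, the query returns nothing and the database is up to date.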
I just stumbled upon an older article written in 2008 by Jeff Atwood; hopefully it is still relevant to your problem.
Get Your Database Under Version Control
It mentions a five-part series written by K. Scott Allen:
Three rules for database work
The Baseline
Change Scripts
Views, Stored Procedures and the Like
Branching and Merging
There are tools specifically designed to deal with this type of problem.
One is DBSourceTools
DBSourceTools is a GUI utility to help developers bring SQL Server databases under source control. A powerful database scripter, code editor, sql generator, and database versioning tool. Compare Schemas, create diff scripts, edit T-SQL with ease. Better than Management Studio.
Another one:
neXtep Designer
NeXtep designer is an Integrated Development Environment for database developers. The main concept behind the product is to take advantage of versioning in order to compute the incremental SQL scripts you need to deliver your developments.
This project aims at building a development platform that provides all tools which a database developer needs while automating the tasks of generating the deliveries (= SQL resulting from a development).
To learn more about the problem of delivering database updates, we invite you to read the Delivering database updates article, which will present our vision of best and worst practices.
I think an approach which will satisfy most of your requirements is to embrace the "Database Refactoring" concept.
There is a good book on this topic Refactoring Databases: Evolutionary Database Design
A database refactoring is a small change to your database schema which improves its design without changing its semantics (e.g. you don't add anything nor do you break anything). The process of database refactoring is the evolutionary improvement of your database schema so as to improve your ability to support the new needs of your customers, support evolutionary software development, and to fix existing legacy database design problems.
The book describes database refactoring from the point of view of:
Technology. It includes full source code for how to implement each refactoring at the database level and for most refactorings we show how the application would change to reflect the change in the database. Our code examples are in Oracle, Java, and Hibernate meta-data (the refactorings are easy to translate to other environments, and sometimes we discuss vendor-specific features which simplify some refactorings).
Process. It describes in detail the process of database refactoring in both the simple situation of a single application accessing the database as well as the situation of the database being accessed by many programs, many of which are out of the scope of your authority. The technical examples assume the latter situation, so if you're in the simple situation you may find some of our solutions to be a little more complicated than you need (lucky you!).
Culture. Although it is technically simple to implement individual refactorings, and clearly possible (albeit a little complicated) to adapt your internal processes to support database refactoring, the fact is that cultural challenges within your organization will likely prove to be the most difficult hurdle to overcome.
This idea may or may not work, but reading about your work so far and the previous answers, it looks like reinventing the wheel. The "wheel" is source control, with its branch, merge, and version tracking features.
At the moment, for each DB schema change, you have a SQL file containing the changes from the previous one. You already mention the significant issues you have with this approach.
Replace your method with this one: maintain ONE (and only ONE!) SQL file, which stores all the DDL commands for creating tables, indexes, and so on from scratch. Need to add a new field? Add an "ALTER TABLE" line to your SQL file. This way your source control tool will in effect manage your database schema, and each branch can have a different one.
All of a sudden, the source code is in sync with the database schema, branching and merging works, and so on.
Note: just to clarify, the purpose of the script mentioned here is to recreate the database from scratch up to a specific version, every single time.
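As a small illustration of that single evolving file (the table and column names are invented), the file creates everything from scratch, and a branch's schema change is simply an edit to this file, merged like any other code:
-- schema.sql: the one version-controlled definition of the database
CREATE TABLE Customer (
    Id int NOT NULL PRIMARY KEY,
    Name nvarchar(100) NOT NULL
);
-- Added on a feature branch; merging the branch merges this change too
ALTER TABLE Customer ADD LoyaltyCode varchar(10) NULL;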
EDIT: I spent some time looking for material to support this approach. Here is one that looks particularly good, with a proven track record:
Database Schema Versioning Management 101
Have you seen this situation before?
Your team is writing an enterprise application around a database
Since everyone is building around the same database, the schema of the database is in flux
Everyone has their own "local" copies of the database
Every time someone changes the schema, all of these copies need the latest schema to work with the latest build of the code
Every time you deploy to a staging or production database, the schema needs to work with the latest build of the code
Factors such as schema dependencies, data changes, configuration changes, and remote developers muddy the water
How do you currently address this problem of keeping the database versions in working order? Do you suspect this is taking more time than necessary? There are many ways to approach this problem, and the answer depends on the workflow in your environment. The following article describes a distilled and simplistic methodology you can use as a starting point.
Since it can be implemented with ANSI SQL, it is database agnostic
Since it depends on scripting, it requires negligible storage management, and it can fit in your current code version management program
The database versioning method you are using is certainly wrong, in my opinion. If anything has to have versions, it should be the source code. The source code has versions. Your live environment is only an instance of the source code.
The answer is to apply database changes using redeployable change scripts.
All changes, no matter which branch they are on (even master/trunk), should be done in separate scripts.
Sequence your scripts so that newer ones will not get executed first. A filename prefix with the date in YYYYMMDD format has worked for us.
This way, the change is made to the source code, not the database. You can have as many instances/builds for various tags/branches in the VCS as you like; for example, separate live builds for each branch.
Then you only have to do the build for each instance (probably every day). The build should fetch the files from the relevant branch and compile/deploy them. Since the scripts are redeployable, old scripts have no effect on the database; only the recent changes are deployed.
But, how to make redeployable scripts?
This is a question that is hard to answer, since you have not specified which database you are using. So I will give you an example of how my organization does it.
Let me take a simple example: if we need to add a column to a particular table, we do not just write ALTER TABLE ... ADD COLUMN .... We write code to add a column, if and only if that column does not exist in the given table.
Now, we have a separate API to handle all that existence-checking boilerplate code, so our scripts are simply calls to those APIs. You will have to write your own. These APIs are not actually that hard (we're using Oracle RDBMS), and they give us a huge gain in version control and deployment.
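Since the answer's own APIs are Oracle-based and not shown, here is a minimal T-SQL sketch of the same idea (table, column, and type are hypothetical):
-- Redeployable: adds the column only if it is not already there,
-- so running the script any number of times is harmless.
IF NOT EXISTS (SELECT 1 FROM sys.columns
               WHERE object_id = OBJECT_ID(N'dbo.Customer') AND name = N'LoyaltyCode')
    ALTER TABLE dbo.Customer ADD LoyaltyCode varchar(10) NULL;
In practice the existence check would live behind the kind of API described above, so each change script stays a one-liner.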
But, that's only one scenario; there are a gazillion ways a schema definition can change
Yes indeed. The data type of a column can change; a new table can be added; an attribute column can be merged into a primary key (very rare); sequences, constraints, and foreign keys can all change.
But it turns out that all of this can be handled by APIs with special privileges to read metadata tables. I am not saying it's easy, but I am saying that it is a one-time cost.
But, how do you roll back a database change?
My personal experience is that if you put some real effort into design before banging the keyboard to write ALTER TABLE statements, this scenario is extremely rare. And if there ever is a rollback, you should handle it manually (e.g. manually remove the added column).
Normally, changes to views and stored procedures are rather common, and changes to table definitions are rare.
Building the Database
As I said before, building the database can be done by running all the redeployable scripts. Previously deployed scripts have no effect.
Your database deployment script should not start with DROP DATABASE. Your database has lots of data which was used for unit tests. Unless you are making a really, really simple system, that data will be valuable in the future for testing. Your testers will not be too happy about adding ten thousand records to various tables every time the database is upgraded.
Testers aside, how are you planning to upgrade your clients'/customers' production databases without annihilating all their production data? This is why you must use redeployable change scripts.
You can try version number schemes such as 18.1-branchname etc., but they are really going to utterly fail, because you can merge your source, not its instances.
I think that the way you pose the problem is impossible to solve, but if you change part of your process there is a solution. Let's start with the first part: why it is impossible to solve using just deltas. In the following I assume you have the main trunk and two branches, dev-a and dev-b; both branches stem from the same point in time.
Why it cannot work
Say Alice adds a delta script to dev-a:
ALTER TABLE t1 ALTER COLUMN col5 char(4)
and Bob adds another script in dev-b:
ALTER TABLE t1 ALTER COLUMN col5 int
The two scripts are clearly incompatible and you end up breaking code in main when you merge back from either of the two. The merge tool cannot help if the script files have different names.
Possible solution
My suggestion is to describe your database in terms of both baseline and deltas: the delta scripts must always refer to a specific baseline, so you are able to compute a new baseline schema resulting from the application of successive deltas to a specific baseline.
An example
dev-a         *--B.A1--D.1#A1--D.2#A1------B.A2--*--B.A3--
             /                                  /
main --B.0--*--------------------------*--B.1--*----------
             \                        /
dev-b         *--B.B1--D.1#B1--B.B2--*
Note that after branching you immediately spin off a new baseline, and likewise before every merge. This way you can check that the baselines are compatible.
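In script terms, a small hedged sketch of the convention (object names invented): each baseline is a from-scratch script, and each delta's header pins the baseline it applies to:
-- B.A1: baseline for dev-a, creates the schema from scratch
CREATE TABLE t1 (id int PRIMARY KEY, col5 char(4));
-- D.1#A1: delta, valid only on top of baseline B.A1
ALTER TABLE t1 ADD col6 int NULL;
Computing the next baseline (B.A2) then just means applying D.1#A1 and D.2#A1 to B.A1 and saving the result as a new from-scratch script.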
Final comment
Managing deltas in version control is kind of reinventing the wheel, as each delta script is functionally equivalent to saving a different version of the baseline script. That said, I agree with you that in practice deltas convey more value and force people to think about what happens in production when you change the database.
If you opt to store only baselines, you have plenty of tools to support you.
Another option is to serialize work on the database, either as a whole or by partitioning the schema into separate areas with unique owners.

advice on hyperfile db

At my work, my co-workers are considering using HyperFile as the database server for a WinDev project. I don't know that kind of database at all; it's from PCSOFT, the company that develops WinDev.
Since WinDev can also work with Microsoft SQL Server, I'm looking for advice on that kind of database (performance, stability, etc.) from people who have already used it.
Regards!
It depends on the size of your project. Actually, Windev works well with HyperFileSQL; it has been designed for it! By using another DBMS, you cut yourself off from some features such as directly reading/modifying/deleting records in your tables.
Performance will decrease significantly as soon as you have a sizeable number of records in a table (> 100,000). Your database management will become a nightmare, since you can't execute several SQL requests at the same time. For example, I'm using another tool, developed by a French guy, to manage my databases and execute some updates.
Despite this, it's stable and provides an easy way to interact with Windev's fields.
In my opinion, HyperFile SQL should be used for small applications with a small number of features and little data.
To add to what Samuël Tremblay already wrote, here are my conclusions after 2+ years of using Windev with HFSQL (the old name is HyperFile SQL); I have used Windev versions 20 and 22:
PROS:
replication of a database to another server is rather easy to set up. You can choose to replicate a whole database or a selection of tables. But DBMSs like PostgreSQL actually offer advanced replication setups (https://www.2ndquadrant.com/en/resources/pglogical/).
easy export to a Microsoft Excel file of a query/table
create and change the schema/structure of your database through a graphical user interface (GUI)
CONS:
When you use the database server provided by Windev (i.e. HFSQL), you must use Windev; that is imposed upon you. You cannot interact with your database from any language/framework other than Windev. If you instead use a DBMS like PostgreSQL, MySQL/MariaDB, etc., you can (and will be able to) query the database from other languages: C++, Java, JavaScript, etc.
Say you now want to open your data to customers through a web app: you will need to use Webdev, another product from their software suite (and actually buy it).
Or say, some day, you want to develop a simple app for smartphones with Qt or something else. Well, if your database runs on HFSQL, you will not be able to query it unless you use Windev (actually Windev Mobile, which you also need to purchase).
UNIQUE constraints are not working with the presence of NULL (two rows containing NULL would be considered to violate the UNIQUE constraint).
(almost) every time you update your "analyse/analysis" (basically the database schema), you will also need to update your binary executable: you will need to recompile your software and distribute it to the users again. For example, say you modify a table by adding a column, or changing the type of a column; then you need to recompile. The executable the users have will not run; it will say that the version of the "analysis" (schema) in the database is not the same as the one in the executable, and will stop. BAM!
the HyperFile SQL (HFSQL) server is not so stable; it will crash (often) when executing slightly advanced queries over not so many rows...
You cannot create scripts to query the HFSQL database: you must create a binary executable (a new project) with Windev. Say you want to quickly modify something --> you need to recompile (and have a Windev IDE with you).
Say you are on the go, on some trip, and you forgot to bring the computer with your Windev dongle (a cryptographic USB license key: if you don't have it, you cannot run Windev), and you need to do some work on the database. PCSoft provides a GUI tool called HFSQL Control Center that can interact with the database, but unfortunately it cannot be downloaded from the internet. You obtain it when you buy Windev, and you are allowed to distribute it to whomever you want, but it cannot be downloaded from PCSOFT's website.
Whereas if your database engine is another one, say PostgreSQL or MariaDB, you can simply download PGAdmin or the equivalent, and boom, you can interact with your data.
It seems to me that HFSQL is not a real/genuine DBMS. Let me explain: the constraints you can set in the analysis (UNIQUE, for example) are not always respected. For example, after adding a UNIQUE constraint in the schema (analysis) and compiling the program, I saw that inserting data into a table from the executable would detect the violation of the UNIQUE constraint when it should. However, inserting the same set of data through the HFSQL Control Center would not enforce the constraint, and the duplicates would be inserted.
There would be more to say ...
Bottom line: from my own experience, I would strongly encourage anybody who wants to develop a reliable and dependable piece of software that "must" be developed with Windev (and needs data persistence) not to use their HFSQL database. You would be much better off using an RDBMS such as PostgreSQL or MariaDB. We are actually going to port our databases from HFSQL to PostgreSQL this summer.
You should carefully consider which SQL functions you will use. For example, deg2rad, rad2deg, ... do not work correctly.
Also, if you want to use it on a mobile device (Windev Mobile for iOS or Android), you should use SQLite, because HyperFile uses a lot of memory, and that is a problem on mobile.
If you want a free database, use PostgreSQL. The Windev connector for PostgreSQL is free to download and install in your Windev as a replacement for HFSQL. It is way more powerful while still letting you use the usual hFunctions like you would with HFSQL, plus you will find a ton of docs on the web for doing powerful stuff.
HFSQL is in fact much like an old ISAM/dBase database, so it requires re-indexing and similar chores from that older era of DB systems.
PostgreSQL is like having a free Oracle DB with all the powerful features and reliability. We dropped HFSQL for it and performance has increased tenfold, plus all the other benefits, while keeping our code pretty much the same. Every day since our migration feels like discovering freebies and gifts from PostgreSQL :)
Free vs free... you gotta go with the power and the sheer size of web documentation and the number of people available to help.
In WinDev Mobile 18 and up, you can use HyperFile on the device. And I recommend it, because it is faster, and SQLite restricts blob size to 1 MB!!
@Spek what is the memory usage of HyperFile on the phone? Can you give me any values? I think if you want to make a full-featured app you cannot ignore the benefits of HyperFile...
FYI: New in Windev version 19: Hyperfile SQL is ACID.

In a distributed architecture, why is it difficult to manage versions?

I see this time and time again. The UAT test manager wants the new build to be ready to test by Friday. One of the first questions asked in the pre-testing meeting is, "What version will I be testing against?" (which is a fair question to ask). The room goes silent, then someone will come back with, "All the assemblies have their own version, just right-click and look at the properties...".
From the testing manager's point of view, this is no use. They want a version/label/tag across everything that tells them what they are working on. They want this information easily available.
I have seen solutions where the versions of different areas of a system are stored in a datastore, then shown in the main application's about box. Problem is, this needs to be maintained.
What solutions have you seen that get around this ongoing problem?
EDIT: The distributed system covers VB6, Classic ASP, VB.Net, C#, Web Services (across departments, so which version are we using?), SQL Server 2005.
I think the problem is that you and your testing manager are speaking of two different things. Assembly versions are great for assemblies, but your test manager is speaking of a higher-level version, a "system version", if you will. At least that's my read of your post.
What you have to do in such situations is map all of your different component assemblies into a system version. You say something along the lines of "Version 1.5 of the system is composed of Foo.Bar.dll v1.4.6 and Baz.Qux.dll v2.6.7 and (etc.)". Hell, in a distributed system, you may want different versions for each of your services, which may, in and of themselves, be composed of different versions of .dlls. You might say, for example: "Version 1.5 of the system is composed of the Foo service v1.3, which is composed of Foo.dll v1.9.3 and Bar.dll v1.6.9, and the Bar service v1.9, which is composed of Baz.dll v1.8.2 and Qux.dll v1.5.2 and (etc.)".
Doing stuff like this is typically the job of the software architect and/or build manager in your organization.
There are a number of tools that you can use to handle this issue that have nothing to do with your language of choice. My personal favorite is currently Jira, which, in addition to bug tracking, has great product versioning and roadmapping support.
Might want to have a look at this page that explains some ways to integrate consistent versioning into your build process.
There are a number of different things that contribute to the problem. Off the top of my head, here's one:
One of the benefits of a distributed architecture is that we gain huge potential for re-use by creating services and publishing their interfaces in some form or another. What that then means is that releases of a client application are not necessarily closely synchronized with releases of the underlying services. So, a new version of a business application may be released that uses the same old reliable service it's been using for a year. How shall we then apply a single release tag in this case?
Nevertheless, it's a fair question, but one that requires a non-trivial answer to be meaningful.
Don't use build-based version numbering for anything but internal references. When the UAT manager asks the question, you say "Friday's*".
The only trick then is to make sure labelling happens reliably in your source control.
* insert appropriate datestamp/label here
We use .NET and Subversion. All of our application assemblies share a version number, which is derived from a manually updated major and minor revision numbers and the Subversion revision number (<major>.<minor>.<revision>). We have a prebuild task that updates this version number in a shared AssemblyVersionInfo.vb file. Then when testers ask for the version number, we can either give them the full 3-part number or just the subversion revision. The libraries we consume aren't changing or the change is not relevant to the tester.

Is there any way to impersonate a specific database engine while running another one?

This is something I would like to have for my day-to-day programming work, but I've never seen such an application yet. Your input is highly appreciated.
Let's say we have an application that needs MSSQL Server as its DBMS, and suppose you just need to install it and do something (i.e. you are not going to deploy it on production servers, etc.).
In such a case, installing MSSQL first might be overhead. I am suggesting something like a software bridge that can use another DBMS to store the data. In other words, the application "sees" an MSSQL instance, but underneath it might be Access. The bridge should do some sort of conversion.
Another example: you have MSSQL, but a certain application needs Oracle. You would have to purchase Oracle then. But with something like a bridge, you can put the information into your MSSQL DBMS. The bridge listens on port 1521 like Oracle, so the application "thinks" there is an Oracle installation.
Is it an idea that cannot be implemented?
Are there any such applications?
If so what are they?
Thanks... :)
Adding a clarification: the application might be from a third party. You don't have any knowledge of its internal architecture; you just know it uses a certain DBMS. I am trying to use a DBMS other than the one the third-party software needs.
Applications usually don't depend on a specific database server, OR they depend on it for a reason.
If an application asks for Oracle, or SQL Server, or whatever, it's because it relies on the implementation details of that specific vendor to run its SQL, its stored procedures, etc. There's no way you could emulate that with an Access database, for example...
If your application just needs to run some very simple SQL (i.e. basic insert/select statements), it probably uses a standard driver (ODBC, ADO, etc.), and those drivers can accommodate every major SQL database engine. In my experience, "simple applications" don't ask for a specific database vendor.
This is the problem that ODBC was supposed to solve :-) .
But in response to your questions:
Is it an idea that cannot be implemented?
It can be implemented.
It would be tedious and thankless work, and you would have a very limited audience. In my opinion it's not worth doing.
Are there any such applications?
None that I know of.
If so what are they?
None that I know of.
......
Bringing in Chandrasekar's note in the comments section:
Look at it from a super user's perspective... He has a nice application, but he can't use it without some DBMS. Still, he is not a programmer who can build something himself. So they need such a product
I agree it has applications, but it has a very limited audience :) .
What you're proposing is something like the Firefox plugin 'ietab', only you won't have IE installed... so instead of embedding IE, you would need to entirely re-implement IE using Firefox's rendering window.
Just my opinion: that's too much effort... It's simpler to just install a second database.
If this application uses ADO to connect to SQL Server and you can modify the connection string, then it's quite easy to use a different database: change the connection string! However, the other database must be able to support all features of SQL Server. Besides, the software was never tested on another database so the application might Crash & Burn.
If you can't change the connection string, or the application doesn't use ADO, things are more complicated and very close to impossible.
I've worked in the past on a project that needed to be reasonably database-independent. The database had to support stored procedures, but there weren't any other restrictions. By default, we tried to support both SQL Server and Oracle. (We also supported Interbase but never advertised this.) While we did manage to keep it mostly database-independent, we did have to work around quite a few minor issues; especially the joins in our queries had some nasty problems, which we just solved by adding more logic to stored procedures.
"This is the problem that ODBC was supposed to solve :-) ."
And it is the very same problem that SQL was intended to solve too.
It seems to me that the reason why this problem exists is that the world seemingly fails to agree sufficiently on what the data manipulation language/interface ought to look like.
I suspect that if this were solvable, it would already have been done.
The closest I've heard is EnterpriseDB where they have built a layer on top of Postgres so it looks more like Oracle.
But remember these databases have features covered by patents and copyright so there's a limit on how closely a competitor product can imitate the real thing.
It would probably be easier to imitate 'down' than up. For example, MS-Access wouldn't be able to imitate much of the functionality for Oracle or SQL Server, whereas there's a much better chance of SQL Server imitating a simpler DB like Access.
Applications usually DO depend on a specific database server. Every database implements things slightly differently - even MSSQL and Sybase, which have a common ancestor.
Any bridge, however well it attempts to abstract the differences, would leave some exposed. These would be likely to create subtle bugs in the application, which might appear initially to work, but then fail, or worse, corrupt data.
Moreover, the application vendor would not support you in such a case - they'd simply say they don't support that use case, and you should remove the bridge and install a proper instance of whatever database it was intended for.
In short, I don't think it's worth the risk of the application malfunctioning subtly, and being left without support, even if the application isn't especially important. If you dislike the underlying database the application uses, choose a different application.
