How to test changes to a cube?

There is a process in our company, built by a long-departed dev, that pulls data from a cube stored on a MS SQL Analysis Server (which I typically access via Management Studio). The overall process almost never fails and is refreshed several times a day. However, there appear to be some bugs in the calculations, and investigating and fixing them has been handed to me.
Unfortunately, I knew nothing about cubes when this was handed to me, was not part of the original development process, and generic web tutorials don't seem to quite apply to whatever I'm looking at. On the plus side, trial and error has taught me enough that I can ask a few questions.
The bug is definitely in the calculations. But I obviously don't want to test in Production and I also don't want to make changes without a proper backup (that I know how to revert).
Is there a way to export the whole Analysis db to a .SLN file and open it in VS?
Should I instead use Script Cube As > CREATE To, change the cube name, and execute the script to make a copy?
If I'm later asked to add new dimensions or need to edit the data source, what's the best way to do this?
Any other tips?

You can create a Visual Studio project based on an existing cube (the BI project types include a template for importing an existing Analysis Services database). This will get everything into a project so you can investigate the calculations, make the necessary changes, deploy to a test environment, and check everything out before deploying to production. Don't forget to check the project into source control ;-)

Related

Docker Like DB Deployment

I've just finished setting up a dev environment where every developer's feature/*, bugfix/* and hotfix/* git branches are automatically built and deployed to a freshly provisioned Windows Container which hosts the webapp and services, creating a test environment for each branch to be validated before merging into master.
While this is working quite nicely, I've still only got one dev db per developer, which is shared by all of their branches.
In an ideal world, I would like each of these test containers to use its own isolated db instance; however, the db is currently about 50 GB at the smallest I can get it without going and tearing out historical data, which is sometimes useful.
What I would really like to do is create a Docker-like image for this db and then spawn a new "container" from this image which only keeps track of the diff between its changes and the original, without ever altering the original db.
Is something like this even possible, or does anyone have ideas on how I might achieve this db isolation per container without having to create a full 50 GB db for each?
OK, so after much flailing around in the dark, I think I've finally come up with a solution. @ErikEJ, thanks, you started me off in the right direction. After looking into DB snapshots on MSSQL, I found that the only way to get writable snapshots seemed to be using VSS and actually creating writable disk snapshots. This led me down a long path of first trying locally and failing, then trying to implement iSCSI and still getting nowhere. Then I stumbled upon Hyper-V snapshots, had a look at what was happening under the hood there, and finally came across creating differencing VHDs.
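For context, a native SQL Server database snapshot looks like the sketch below (names are illustrative); snapshots are strictly read-only, which is exactly why they didn't fit here:

-- Sketch with illustrative names: a read-only database snapshot.
CREATE DATABASE MyAppDB_Snapshot
ON ( NAME = MyAppDB_Data,                      -- logical name of the source data file
     FILENAME = 'C:\db\MyAppDB_Snapshot.ss' )  -- sparse file holding diverged pages
AS SNAPSHOT OF MyAppDB;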
So basically my solution is as below.
Create a VHD containing a sanitised copy of my production DB's .mdf and .ldf files.
Then I create differencing VHDs for each environment I need, mount each differencing VHD in its own folder, e.g. C:\db\Issue-1234, and create a new db DB_ISSUE-1234 by attaching to the files in these folders. Only the diffs are then stored: instead of having several 50-60 GB copies of the db, I have just one, and the differencing VHDs store only the differences.
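A sketch of the attach step, with hypothetical paths and file names (mounting the differencing VHD happens beforehand, e.g. with diskpart or the Hyper-V cmdlets):

-- Hypothetical names: the differencing VHD is mounted at C:\db\Issue-1234
-- and exposes the base image's data and log files.
CREATE DATABASE [DB_ISSUE-1234]
ON (FILENAME = 'C:\db\Issue-1234\MyAppDB.mdf'),
   (FILENAME = 'C:\db\Issue-1234\MyAppDB_log.ldf')
FOR ATTACH;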
I've just got this working with two or three environments, so I'm not sure how robust it is or how fast these differencing VHDs are going to grow, but it's looking very promising so far and is allowing me to spin up multiple environments for testing purposes extremely quickly (in fact, it's all automated by scripts during deployment).
Hope this helps someone else save some time one day, and please let me know if anyone has figured out a more efficient/quicker/better way to do this :)

Issue With DAO 3.6 on VB6 database

I am currently in the process of trying to roll out a database application that has a VB6 front end connected to an Access 2000 database. On certain computers we are experiencing a problem where the data being pulled from the database either does not show up or does not show up correctly.
The computers that work seem to have a dao360.dll with the same modified date in both System32 and Microsoft Shared\DAO, while the ones that are not working do not.
Is this what's causing the error? How can I correct it? Or is it something else that is happening?
There shouldn't be two copies of the DLL on the system. It sounds like a poorly designed install of some application had been previously done on these systems. There is no telling what the full extent of this has been.
Packaging as an isolated application can insulate your programs from these kinds of bad installs that create DLL Hell. Sadly MDAC/DAC and related components are very difficult to isolate.
This is another reason to have moved to ADO back in 1998, if not in the time since then. While you can't isolate the ADO-related parts of MDAC/DAC any more than you can DAO, those libraries are now shipped as part of Windows. You don't need to deploy them and they are protected from bad installers by the increasingly better system file protection mechanisms in Windows.
However, providing specific assistance will probably require a more specific and detailed description of what is going on than "does not show up or does not show up correctly."
I'd create a minimal test case using DAO to begin exploring where (and what) the problems really are. To begin with, perhaps just a simple query displaying the returned rowset without data binding.
I suggest installing the latest version of MDAC and Jet. While Jet used to be part of MDAC, I'm pretty sure it has been split out into its own install/update/service pack at this point. Perhaps start here: http://support.microsoft.com/kb/239114

Creating the Front End MDE

I created a database for tracking metrics, with some automation tricks (email, .doc, .ppt presentations, etc.), with a very large main table and lots of forms/GUI. This is the first time I have ever worried about an MDE/front end for the thing. So if you would be so kind as to answer a few questions, or offer any advice, it would be greatly appreciated (I would hate for all this work to not be utilized).
What is the first thing I need to do? It's the 2000 version that must be converted to 2003 to create the MDE, but does that get done before I use the Database Splitter?
Will the number of objects in the database affect the ability to do this? I have something like 80 forms, 70 queries, 20+ macros, 12 tables, etc. Does the number of objects prevent some of this from working well once the front end is there?
When I split the database, can I continue to work/make changes and such on the "back end", and have those changes directly affect the front end?
These may be some basic questions, but I don't know the answer so.....Thanks!
Here is my 2¢.
Question 1 - I have never used the Database Splitter, as I feel I have more control doing it manually. If you do it manually, you can do it in a version that does not have a Database Splitter. But if you do use the splitter then--yes--you will have to upgrade to a version that has one before doing it.
To do it manually here are the steps.
Back up everything.
Create a copy of your file in the same directory. So if you have MyApp.MDB, create a copy in the same directory with a new name, such as MyAppDATA.mdb.
Open the new DATA file (MyAppDATA.mdb) and delete all of the objects EXCEPT the TABLES.
Open the App file (MyApp.mdb) and delete all of the tables.
Also in MyApp.mdb...go to the File/Get External Data/Link Tables menu to link the tables in MyAppDATA.mdb to MyApp.mdb. Select All and create the links.
That should do it. And if you screw up you made a backup...right?
A couple of tips and gotchas...be sure that you go to Tools/Options and that you are NOT showing System and Hidden tables. You just don't want to delete system tables from MyApp. Another way to do it is do NOT delete tables that start with MSys or USys.
Question 2 - It does not matter how many objects you have. In fact, you don't have that many objects anyway.
Question 3 - Yes...you will make back-end changes in MyAppDATA.mdb, and when you open MyApp.mdb those changes will auto-magically be there to see and query against, etc. (In the query designer you may need to save/close/reopen to see new fields if you made the mod while in the query.) The EXCEPTION to that is new tables: you will have to use the File/Get External Data/Link Tables option to create links to them.
One thing to remember (and that I hope you already realize) is that the one downside of splitting the database is that when you deploy the front-end file, the relative path to the data will usually vary from machine to machine, and there is no automatic re-linking of tables in Access. If your target clients have a full copy of Access, you can always use Tools/Database Utilities/Linked Table Manager to refresh the links to the right location. If you can't do that, then you will have to do one of the following:
1. Write code that does the automatic re-linking for you. Basically it will check the links...if invalid it will prompt the user for the data location (or look it up in an INI file) and re-link the tables.
2. Always deploy your app to the same location on all machines. If you have commercial visions for your application this won't work...I mention it for academic reasons. It might be doable for a limited deployment where you have a lot of control over file placement on each machine.
3. Put the data file (MyAppDATA.mdb) onto a network share and link the tables across the network using a drive mapping or a UNC path (\\myserver\mydata\ApplicationData\MyAppData.mdb). The latter is preferred, but both of them run the same risks as number two.
Seth
PS This answer assumes Access 2003.
PPS If you have commercial visions for your application then the table linking has got to be REALLY robust.
PPPS I agree with the commenter that you may want to take the plunge and do SQL if it is in your skill set.
One thing that hasn't been discussed is whether the compile to MDE could fail. Basically, if your code compiles in your front-end MDB, it will convert to an MDE. But I've noticed that lots of people never compile.
Some hints for keeping your VBA code in good shape:
in VBE options, turn off COMPILE ON DEMAND.
add the COMPILE button to your standard VBE toolbar and USE IT OFTEN.
periodically, backup your MDB and decompile/recompile it.
Also, remember that you must keep the MDB source, as the VBA code is not editable in an MDE and not recoverable by any good method.
EDIT:
Steps for a decompile:
backup your MDB.
start an instance of Access with the /decompile command-line argument. For instance, I have a shortcut on my desktop that has this as the target:
"C:\Program Files\Microsoft Office\OFFICE11\MSACCESS.EXE" /decompile
having opened that instance of Access, open the MDB you want to decompile. You will see nothing happen. DO NOTHING FURTHER IN THIS INSTANCE OF ACCESS -- close it. (The reason for this is that Michael Kaplan, who knows a thing or two about this, recommended never doing any work in an Access instance opened with the decompile switch, because there is no guarantee that the Access application code executes fully safely for all kinds of Access work under those circumstances.)
open the just-decompiled MDB holding down the shift key (you want to be sure that startup routines don't run because that would likely recompile the product before you've finished your cleanup) and compact the MDB (holding down the shift key again).
open the code editor and compile the project (DEBUG -> COMPILE [db name], for those who haven't done step #2 of my original compiling instructions at the top of the post, before the edit).
compact the MDB (doesn't matter if you bypass startup, since it's already fully compiled).
Why so many steps?
Because the purpose of the decompile is to get rid of the compiled p-code in order to start afresh from the canonical VBA code. Following the steps above ensures that you have completely cleared the data pages storing the compiled code before you recompile. Without the compact step after the decompile, under some very rare circumstances, the code can behave strangely. I can't imagine that the old discarded p-code is being used again, but there's something about the pointers between the canonical code and the compiled code that apparently doesn't get completely flushed by a decompile without a compact.
This would be a comment to Seth's answer, but my rep isn't high enough to comment yet.
Seth did a great job answering your questions, I just wanted to add a bit more to part #1 about using the Database Splitter. The Database Splitter in the Tools menu works fine. Doing it manually is alright too, but it's a whole lot faster and easier to use the Database Splitter. I've used it a dozen times and never encountered any issues after using it.
http://www.databasedev.co.uk/split_a_database.html has a decent page about some of the pros, cons of splitting your database.
http://www.accessmvp.com/TWickerath/articles/multiuser.htm also has some good info when dealing with a split database in a multi-user environment.
Seth gave you a very good answer. But I'll add a few comments.
The number of objects only becomes relevant when you get close to about 1,000 forms, reports and modules which have code; there's a hard limit around there. If you do get an error message when trying to make an MDE, then you almost certainly have a code error and need to compile to find it.
Another resource is "Splitting your app into a front end and back end Tips"
See the Auto FE Updater downloads page to make the process of distributing new FEs relatively painless. The utility also supports Terminal Server/Citrix quite nicely.

Using Visual Studio to create a more complex setup project

I need to learn more about creating setup projects from within Visual Studio to support the following scenario:
When the user starts the setup, he needs to choose which parts he wants to set up. The setup should offer to install three web services and one web site, and maybe even run some SQL scripts to install/update the database.
During installation, the user will need to specify where he wants the sites/services to be installed within IIS. He also needs to specify the database connection that is used within the services/sites and that is used to update the database. And there will probably be a few other wishes too. It should also support an uninstall of the site and services, though the database can continue to exist.
Is this even possible with the Setup projects that Visual Studio creates? If not, no worries. I don't need an alternative solution! I just need to know if this is possible before trying myself and discovering it's not possible after weeks of trying... This is for an internal project and I want to make life easier for the administrators who need to install/upgrade these sites/ services every time when there's an update. (About once every two weeks.)
Stay well away from the vdproj stuff and move to WiX ASAP (as you'll see me being advised in questions I asked here). For a start, flexibility around where to put the IIS apps is seriously limited (you get one virtual directory and the user can only choose its name; you can't have multiple instances).
The other side of this is, of course, that the vdproj stuff is an 80% solution. Ultimately you can add as many custom steps as you like, and they can pop up dialogs and whatever else they like. There's no reason why a custom step can't do all the things you want.
I just know that I once thought like you, and looking back I wish someone had grabbed me by the scruff of the neck and said "just use the proper stuff, even if it seems a little harder initially." There is a conversion tool that will suck in your vdproj and spit out a WiX project.
By all means, try wizarding up what you need and seeing if it works - most of the stuff is pretty searchable - just know when to call it quits.

How to keep Stored Procedures and other scripts in SVN/Other repository?

Can anyone provide some real examples of how best to keep script files for views, stored procedures and functions in an SVN (or other) repository?
Obviously one solution is to have the script files for all the different components in one or more directories somewhere and simply use TortoiseSVN or the like to keep them in SVN; then, whenever a change is to be made, I load the script up in Management Studio, etc. I don't really want this.
What I'd really prefer is some kind of batch script that I can run periodically (nightly?) that would export all the stored procedures / views etc that had changed in a given timeframe and then commit them to SVN.
Ideas?
Sounds to me like you're not wanting to use Revision Control properly.
"Obviously one solution is to have the script files for all the different components in one or more directories somewhere and simply use TortoiseSVN or the like to keep them in SVN"
This is what should be done. You would have your local copy you are working on (developing new, tweaking old, etc.), and as individual components/procedures get finished, you would commit them one by one until you have to start the process over.
Committing half-done code just because it's been 'X' time since it was last committed is sloppy and guaranteed to cause anyone else using the repository grief.
I find it best to treat Stored Procedures just like any other compilable code: Code lives in the repository, you check it out to make changes and load it in your development tool to compile or deploy the code.
You can create a batch file and schedule it:
delete the contents of your scripts directory
using something like ExportSQLScript to export all objects to scripts (or just those changed recently; see the query sketch below)
svn commit
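For the "changed in a given timeframe" part of the question, a catalog-view query along these lines can feed the export step. It's a sketch: the 24-hour window and the object-type filter are assumptions to adjust:

-- Sketch: programmable objects modified in the last 24 hours.
-- Types: P = procedure, V = view, FN/IF/TF = function variants.
SELECT o.name, o.type_desc, o.modify_date, m.definition
FROM sys.objects AS o
JOIN sys.sql_modules AS m ON m.object_id = o.object_id
WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF')
  AND o.modify_date >= DATEADD(DAY, -1, GETDATE());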
Please note: although you'll have the objects under source control, you'll not have the data or its progression (is that a renamed field, or one new field and one deleted?).
This approach is fine for maintaining change history. But, of course, you should never be automatically committing to the "production build" (unless you like broken builds).
Although you didn't ask for it: this approach also won't produce a set of scripts that will upgrade a current DB. You'll only have initial creation scripts. Recording data progression and creating upgrade scripts is beyond basic source control systems.
I'd recommend Redgate SQL Compare for this - it allows you to compare database versions and generate change scripts - it's also fairly easily scriptable.
Based on your expanded question, you really want to use DDL triggers. Check out this article that details how to create a changelog system for your database.
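The linked article has the details, but the core of such a changelog is a database-level DDL trigger that writes EVENTDATA() into a log table. A minimal sketch, with illustrative table and trigger names:

-- Illustrative names; covers procedure/view/function DDL only.
CREATE TABLE dbo.DDLChangeLog (
    LogId      int IDENTITY(1,1) PRIMARY KEY,
    EventTime  datetime      NOT NULL DEFAULT GETDATE(),
    LoginName  nvarchar(128) NOT NULL,
    EventType  nvarchar(100) NOT NULL,
    ObjectName nvarchar(128) NULL,
    TsqlText   nvarchar(max) NULL
);
GO
CREATE TRIGGER trgLogDdlChanges ON DATABASE
FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE,
    CREATE_VIEW, ALTER_VIEW, DROP_VIEW,
    CREATE_FUNCTION, ALTER_FUNCTION, DROP_FUNCTION
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @e xml;
    SET @e = EVENTDATA();  -- XML describing the DDL statement that fired
    INSERT INTO dbo.DDLChangeLog (LoginName, EventType, ObjectName, TsqlText)
    VALUES (
        @e.value('(/EVENT_INSTANCE/LoginName)[1]',  'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/EventType)[1]',  'nvarchar(100)'),
        @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'nvarchar(max)')
    );
END;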
I'm not sure of your price range; however, DB Ghost could be an option for you.
I don't work for this company (or own the product) but in my researching of the same issue, this product looked quite promising.
I should've been a little more descriptive. The database in question is for an internal ERP system, and thus we don't have many versions of our database, just Production/Testing/Development. When we've done a change request, some new fancy feature or something, we simply execute a script or series of scripts to update the procedures in question on the Testing database; if that is all good, then we do the same to Production.
So I'm not really after a full schema script per se, just something that can keep track of the various edits to the stored procedures over time. For example, PROCESS_INVOICE does stuff. It gets updated in some minor way in March. Some time later, say in May, it is discovered that in a rare case customers get double-invoiced (or some other crazy corner case). I'd like to be able to see what has happened to this procedure over time. Currently, the way the development environment is set up here, I don't have that, which is what I'm trying to change.
I can recommend DBPro, which is part of Visual Studio Team Edition. I have been using it for a few months for storing all parts of the database in Team Foundation Server, as well as for deployment, database compares, etc.
Of course, as someone else mentioned, it does depend on your environment and price range.
I wrote a utility for dumping all of the relevant parts of my db into a directory structure that I use SVN on. I never got around to trying to incorporate it into the Manager but, if you're interested, it's here: http://www.reluctantdba.com/dbas-and-programmers/sqltools/svnforsql2005.aspx
It's free and, since I regularly run it, you know any bugs get fixed quickly.
You can always try integrating SourceSafe with SQL Server. Here's a quick start: link. To work with it you've got to have Management Studio Developers Edition.
