Best tool to document T-SQL *source* files? - sql-server

At work, the database is not documented at all. Furthermore, the stored procedures, functions and views are all encrypted, which rules out a lot of the tools that document these objects for you. All I have are the plain .SQL files that generate the database, schemas, tables, functions and all.
I'd like to know, is there a tool that can read these files and generate a Doxygen-like documentation? Preferably open-source or freeware.
I found that IzzySoft's HyperSQL and the SourceForge project PLDoc do something very close to what I need, though both seem to be very PL/SQL specific. I want something that reads SQL source files (and understands T-SQL's idiosyncrasies), parses them, and gets me:
List of SPs, UDFs, etc. defined within each file
List of objects (both tables/views and procs/functions) each object depends on (directly and, if possible, also indirectly)
Call and dependency graphs (i.e. what calls what and is called by what)
If possible, when an SP uses a table/view, how it uses it (INSERT/DELETE/UPDATE/SELECT/a mix?)
I've already developed a tiny Perl script that minimally parses these files, attempting to cover the first point - but it's just a hack and lacks a lot of polish. I'm sure there must be a tool out there that does the job; I want to believe I won't have to code it myself.
Thanks in advance,
Joe

We use Red Gate SQL Doc to generate ours.
However, it works from a database, not files: it's easier to read everything from the system tables (permissions, dependencies, datatypes, etc.) than to parse scripts. Parsing scripts is what the DB engine does...
Can you not generate an empty DB from the source files (remove WITH ENCRYPTION) and generate the documentation from that?
Or decrypt the objects, if you have sa rights?
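For what it's worth, once the scripts are loaded into a scratch database, the catalog views cover most of the wish list with no parsing at all. A minimal sketch, assuming SQL Server 2008 or later (where sys.sql_expression_dependencies is available):

-- List of procedures, functions and views defined in the database:
SELECT o.name, o.type_desc
FROM sys.objects AS o
WHERE o.type IN ('P', 'FN', 'IF', 'TF', 'V');

-- Direct dependencies: which object references which table/view/module.
SELECT OBJECT_NAME(d.referencing_id) AS referencing_object,
       d.referenced_entity_name AS referenced_object
FROM sys.sql_expression_dependencies AS d;

Indirect dependencies are then a recursive CTE over the second result, and sys.dm_sql_referenced_entities even exposes is_selected/is_updated flags that come close to the INSERT/UPDATE/DELETE/SELECT breakdown asked for.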

Related

Export Databases of DOS Clipper Application

Our current system is a Clipper DOS application. The database inside its folder is fragmented/divided into many parts. I want to convert the database so that I have only one database in all and avoid reshuffling of data. I've attached a screenshot of the file folder; the database is in .DBF format.
[Screenshot of files]
Often you can decompile the Clipper .exe file to source code and work from the .prg; I've done it many times. The program to use is called Valkyrie.
In Clipper and FoxPro for DOS, a .dbf file is a simple single-table file.
If you want to use them as a database with many tables in one unit, you can import these tables into an MS SQL Server database and/or an MS Access database.
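If you go the SQL Server route, an ad hoc OPENROWSET query can pull a .DBF straight into a table. A rough sketch only, with assumptions to verify on your setup: it needs an OLE DB provider with dBASE ISAM support installed on the server (ACE here; older 32-bit setups used 'Microsoft.Jet.OLEDB.4.0' with 'dBase 5.0'), 'Ad Hoc Distributed Queries' enabled, and CUSTOMER is a hypothetical table/file name:

-- Copies the hypothetical CUSTOMER.DBF from C:\clipper\data into a new table.
SELECT *
INTO dbo.Customer
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'dBASE IV;Database=C:\clipper\data',
                'SELECT * FROM CUSTOMER');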
I see that you got several answers. Most are partially right. Let's address these one at a time:
All those files essentially comprise the "database" for the application you're using. They could be used by other applications as well. Besides having a lot of files, what is the problem you're trying to solve?
People mentioned indexes. You can generally ignore these. They are there primarily to make access to the data files faster. Any properly written Clipper application will recreate them if they're missing or corrupted. You could test this by renaming one, running the app, and seeing what happens. If it doesn't recreate the index you can rename the file back. Not replacing missing index files would be unusual behavior.
The DBF file format is binary, but barely. Most of what's in a DBF is text and is readable with an editor. But there's no reason to do so - I'm sure there are several free DBF utilities out there to read DBF files. Getting the structure of the files could be very helpful.
Getting the data out of the files would also be fairly simple with a utility. If you look up the DBF format you could even write one fairly easily in Clipper, any other language that uses DBF files, or in something like Python. Any language that can open and write files, really. It's not hard - any competent developer could do this in a matter of hours. Much less if you're using Clipper or another language that natively reads DBF files.
Most people create dBase/Clipper programs with relational data, like SQL Server. Where SQL Server has tables that relate to each other, dBase/Clipper has a file for each "table." This isn't a requirement, but it was almost certainly done this way.
Given that, if you get the table structures through a utility or by reading the headers in an editor (don't save them from an editor!), you could quite likely recreate the database schema (i.e. the map of the data). Once you have that, it's fairly trivial to get the data into another type of database (SQL Server, Access, or whatever you like to use). If none of the files are too large, it's conceivable to put all the files into Excel sheets. It really depends on what you want to do with it.
As others have said, you may be able to get the code back with Valkyrie. Some people have used it very successfully. I don't know where you get it and I've never used it. Why do you not have the code? If this is a commercial application you likely should not have it. If it's a custom app, whoever wrote it or paid to have it written should have the code.
Again, it's not clear to me what problem you're trying to solve. But there are many options for doing something with those DBF files. Fortunately they are one of the easier to read data formats you could be working with.
Let me know if you have any questions. Apologies for the typos that are no doubt scattered throughout this reply.
You can sort of get an idea of how they relate to each other by opening the index files they use (.NTX files). If you have the DBU utility (executable) around, you can open the DBF and load the index (NTX). LibreOffice Calc is also able to open DBFs (I haven't tested .NTX).
If you open the .NTX in a text editor you will see the indexes at the beginning.
I open them with Access, but I can save the data using a PrintFill program.

Granting access to master.sys.xp_dirtree SQL

There may be an answer to this somewhere else on here, but I can't find it.
My organization uses an EHR called TIER that has a SQL back-end. One of the features of the EHR is that you can "scan" a document to a folder on the network with the unique ID of a row in a table on the server. Then from the EHR, you can open a record from the table and it links to the documents in the folder with the same unique ID.
An example may be helpful - in the EHR I create a document (a row in the ScannedFormTable) with unique ID 100. I then "scan" (basically attach or copy) a PDF or other document into a folder on the network (say D:\ScannedDocuments) with the name 100, so abc.pdf is now in D:\ScannedDocuments\100. Then from the document in the EHR, I can open the PDF. However, without opening the document to check, I can't see whether there is any file in the ...\100 folder.
Through some googling, I found that using master.sys.xp_dirtree (an "undocumented" procedure, I think) I can have the EHR "see" the names of files "attached" to the documents. The issue is that I can run this stored procedure from SSMS, but can't from the EHR itself. I have tried to figure out a way to grant permission for the user in the EHR to run the procedure, versus running the script in the background on the server at regular intervals.
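For reference, the call that works for me from SSMS looks like this, using the folder from the example above (depth 1; the third argument set to 1 makes it list files as well as directories - the arguments are as undocumented as the procedure itself):

EXEC master.sys.xp_dirtree 'D:\ScannedDocuments\100', 1, 1;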
Any insights would be greatly appreciated. As you may have noticed from the number of quotation marks used, I am a self-taught SQL user who is better at googling than actually understanding the intricacies of the language.
I found that using master.sys.xp_dirtree (an "undocumented" procedure, I think) I can have the EHR "see" the names of files "attached" to the documents.
I think you are confusing different things. Yes, YOU can (and T-SQL can) use that undocumented and unsupported procedure. However, your "EHR" is a system designed to provide specific functionality. It isn't clear what you are trying to accomplish, but getting your system to do something obviously depends on that system, its features, and your actual goal.
I'll add that T-SQL is not designed to access the filesystem natively - hence the use of undocumented extended procedures. If you are simply trying to traverse this ScannedFormTable table and verify that there is some sort of file in the appropriate location, you might find the task easier to implement in a typical programming language.
If you want better suggestions, it would help to discuss your goal. And consider carefully what you are trying to do, since it is quite easy to create a security hole by altering permissions.
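That said, for completeness: the usual pattern for letting a non-sysadmin login run an extended procedure living in master is a user in master plus an EXECUTE grant. A sketch only - [EHR_Login] is a placeholder for whatever login the EHR actually connects with, and the security caveat above applies in full:

USE master;
-- [EHR_Login] is a placeholder; substitute the EHR's actual login.
CREATE USER EHR_User FOR LOGIN [EHR_Login];
GRANT EXECUTE ON sys.xp_dirtree TO EHR_User;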

Using Tessy with Subversion

I have an embedded C project which uses Subversion for source control. I want to use Tessy for unit testing and have these tests archived in Subversion too. However, Tessy generates many small files, which will make analysing diffs for the actual source code changes a real pain: trying to look at the source changes when there are hundreds of Tessy-related files changed will be impossible.
Does anyone know if there is a setting to have these stored in a less problematic format or any suggestions for a viable solution? What would be ideal is if it could store everything as, for example, an xml file - this would make browsing directory diffs easier and would allow the actual content to be human readable as well.
Any ideas?
I know this is an old question ...
Does anyone know if there is a setting to have these stored in a less problematic format or any suggestions for a viable solution?
The Tessy-recommended way is to use the database save feature found in the File menu (and in a variety of right-click menus). This creates a binary .tmb file which contains everything related to your tests. By default the .tmb file is stored in the backup directory in your Tessy project folder. The config folder, backup folder and the PDBX file would then all be stored in SVN. See the Tessy user's manual (Backup, restore, version control chapter) for more specifics.
What would be ideal is if it could store everything as, for example, an xml file - this would make browsing directory diffs easier and would allow the actual content to be human readable as well.
That would be ideal, but unfortunately it is not really an option. Having everything stored as a binary file makes it impossible to do a useful diff. The other problem with this method is that it disconnects a change to a test from the file that is checked into SVN - unless the tester specifically performs a database save.
Yes, I realize xUnit testing frameworks don't have those limitations, but Tessy has some features (like MC/DC and DO-178B support) that the xUnit frameworks do not have baked in.
So how do you work in this environment? Key word: discipline.
We set up internal procedures for who updates tests and how. When the procedures are followed we are able to deal with the limitations described above. It is not optimal, but with some internal discipline it can work.

Make use of an unknown database file?

I have some database files I'd like to pull data from (and push to).
The first problem is that I don't know what format the database is in.
Each table (or object) seems to have a separate pair of files, such as ACCOUNT.FS5 and ACCOUNT.IDX. Some of them also have .SAV files.
A friend suggested that they are likely to be Flagship database files, presumably because of the FS5 extension. Edit: this is incorrect, they are not Flagship files, they are database files for the software 'EXACT'.
If this is the case, the second problem is that I don't know how I'd go about querying these files. I have no schema per se, although the application is capable of exporting the data in CSV format. Judging by the unfriendly nature of the CSV, I'd imagine it to be pretty closely aligned with the database schema.
Any ideas?
If what you think is in these files is not confidential, I would create a project on one of the freelance sites, like vWorker, and ask for a complete data extraction there.
You can also specify a destination file format (say, .sqlite) that you know how to deal with.
Hope it helped.
Regards

Export queries from SSMS to files - not the results but the query

I want to export all my queries as individual files for the purpose of putting them into Mercurial source control, but I don't know how to export the individual queries as individual files without having to open each one, then save it to the folder, then add it into the project, or some equally convoluted process.
I wouldn't mind having to add each one individually, but how do I get them out of the database as individual files without opening them all and doing a Save As on each? Ideally I would like them named with the names they have in the database right now.
I could easily dump the whole lot into one long file using database tasks, but that's not really super helpful is it?
I have SSMS 2k5 and 2k8 (and VS 2k5, 2k8 and 2010 to boot) to work with - any thoughts?
Right-click on the database and select Generate Scripts. On the last page, under Script to File, you can choose a single file or one file per object.
When you script a database in SSMS you have the option of one file per object.
SMO is useful if you want to write a small app to iterate through the objects yourself.
Third-party tools like Red Gate SQL Compare (there are other, free tools) can script too.
I would write a small C# program which extracts your database objects via SMO and stores them in your filesystem the way you want.
It is rather easy to write stored procedures which fetch the definitions into the result as text; sp_helptext could be used as a start.
Then you can use PowerShell to write the output to the file system.
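A sketch of the T-SQL side, using sys.sql_modules instead of looping over sp_helptext (note that it returns NULL definitions for encrypted objects):

-- One row per programmable object, with its full definition as text.
SELECT s.name + '.' + o.name AS object_name,
       m.definition
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
JOIN sys.schemas AS s ON s.schema_id = o.schema_id
WHERE o.type IN ('P', 'V', 'FN', 'IF', 'TF');

A PowerShell loop over that result set can then write one .sql file per row, named after object_name.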
It sounds as if this would fit rather well into the Really Simple Data Dictionary CodePlex project.
