I am receiving a message from a commercial program stating that the "LogMessage" stored procedure is not found. There does not appear to be a stored procedure called LogMessage in the associated MS SQL Server 2000 database. What can I do to track down the missing procedure, other than calling the company?
The reason you couldn't find it is that it's not there. Unless you have the original proc, you're going to have to call the company.
Granted, you could take a stab at creating the proc yourself. But why bother when somebody already has the original proc?
Is this a fresh install of the commercial product? If so, this is completely their responsibility.
Occasionally you will also get this message if you do not have permissions to access the stored procedure. Log in as 'sa' or equivalent to verify that the proc is indeed missing.
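For example, a quick sanity check while logged in with sufficient rights (just a sketch; sysobjects and sp_helprotect are the SQL Server 2000-era tools for this):

-- Returns a row if a stored procedure named LogMessage exists in the current database.
SELECT name, uid, crdate
FROM sysobjects
WHERE name = 'LogMessage' AND xtype = 'P'

-- If it does exist, check who is allowed to execute it:
EXEC sp_helprotect 'LogMessage'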
LogMessage seems pretty self-explanatory. You could probably take a stab at creating one yourself just to see what happens, if you can't easily get the real thing.
Create a new table called LoggedMessages and just insert to the table when the proc is called. Then see what pops in.
Kind of hacky, but given that it's a logging mechanism, which is tangential to the main features of the app, you could give it a try.
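A minimal sketch of that idea - the parameter list here is pure guesswork (a single message string); adjust it to whatever the application actually passes, which a SQL trace will show you:

-- Hypothetical capture table; column names and sizes are guesses.
CREATE TABLE LoggedMessages (
    LoggedAt datetime NOT NULL DEFAULT GETDATE(),
    Message  varchar(4000) NULL
)
GO

-- Stub proc: the name is what the app asks for; the signature is assumed.
CREATE PROCEDURE LogMessage
    @Message varchar(4000) = NULL
AS
    INSERT INTO LoggedMessages (Message) VALUES (@Message)
GO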
Well, if the company is out of business, unhelpful, etc., or you're in a hurry, then attach a SQL trace, look at what kind of parameters are being passed and how many, and create a stored procedure with that name and signature. It may take some experimentation to get the signature right depending on the data access API being used. The body of the stored procedure would be empty. Since this is just logging, presumably this will let the rest of the app run, but logging would be off.
Make sure that the rest of the schema is there. Obviously if the entire schema is missing, then this trick won't work.
If you have a maintenance agreement with the vendor, go holler at them; that's what maintenance and support are for.
There may be an answer to this somewhere else on here, but I can't find it.
My organization uses an EHR called TIER that has a SQL back-end. One of the features of the EHR is that you can "scan" a document to a folder on the network with the unique ID of a row in a table on the server. Then from the EHR, you can open a record from the table and it links to the documents in the folder with the same unique ID.
An example may be helpful - In the EHR I create a document (a row in the ScannedFormTable) with unique ID of 100. I then "scan" (basically attaching or copying) a pdf or other document into a folder on the network (say D:\ScannedDocuments) with the name of 100, so abc.pdf is now in D:\ScannedDocuments\100. Then from the document in the EHR, I can open the pdf. However, without opening the document to check, I can't see whether there is any file in the ...\100 folder.
Through some googling, I found that using master.sys.xp_dirtree (an "undocumented" procedure, I think) I can have the EHR "see" the name of files "attached" to the documents. The issue is that I can run this stored procedure from SSMS, but can't from the EHR itself. I have tried to figure out a way to grant security permission for the user in the EHR to run the procedure vs running the script in the background on the server at regular intervals.
Any insights would be greatly appreciated. As you may have noticed from the number of " used, I am a self-taught SQL user who is better at googling than actually understanding the intricacies of the language.
I found that using master.sys.xp_dirtree (an "undocumented" procedure, I think) I can have the EHR "see" the name of files "attached" to the documents.
I think you are confusing different things. Yes, YOU can (and tsql can) use that undocumented and unsupported procedure. However, your "EHR" is a system designed to provide specific functionality. It isn't clear what you are trying to accomplish, but to "get" your system to do something obviously depends on your system, its features, and what you are actually trying to accomplish.
I'll add that tsql is not designed to access the filesystem natively - hence the use of undocumented extended procedures. If you are simply trying to traverse this ScannedFormTable table and verify that there is some sort of file in the appropriate location, you might find this task easier to implement in a typical programming language.
If you want better suggestions, it would help to discuss your goal. And consider carefully what you are trying to do, since it is quite easy to create a security hole by altering permissions.
My database has had several successive maintainers over the years and any naming guidelines that may have once been in place have been ignored.
I'd like to rename the stored procedures to a consistent format. Obviously I can rename them from within SQL Server Management Studio, but this will not then update the calls made in the website code behind (C#/ASP.NET).
Is there anything I can do to ensure all calls get updated to the new names, short of searching for every single old procedure name in the code? Does Visual Studio have the ability to refactor such stored procedure names?
NB I do not believe my question to be a duplicate of this question as the latter is solely about renaming within the database.
You could make the change in stages:
Copy the stored procedures to new stored procedures under their new names.
Alter the old stored procedures to call the new ones.
Once you've changed all the code in the website, add logging to the old stored procedures so you can see whether anything still calls them.
After a while, when you're not seeing any calls to the old stored procedures and you're happy you've found all the calls in the website, you can remove the old stored procedures and the logging (a rough T-SQL sketch of the wrapper-and-logging arrangement follows below).
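Here is that sketch, with invented names (usp_GetOrders is the old name, Orders_Get the new one) and a hypothetical logging table:

-- Step 1: the proc body has already been copied to dbo.Orders_Get under the new name.
-- Steps 2-3: the old name becomes a thin wrapper that logs and delegates.
CREATE TABLE dbo.OldProcCallLog (
    ProcName sysname  NOT NULL,
    CalledAt datetime NOT NULL DEFAULT GETDATE(),
    CalledBy sysname  NOT NULL DEFAULT SUSER_SNAME(),
    AppName  nvarchar(128) NULL
)
GO

ALTER PROCEDURE dbo.usp_GetOrders   -- the old, inconsistently named proc
    @CustomerId int
AS
    INSERT INTO dbo.OldProcCallLog (ProcName, AppName)
    VALUES ('usp_GetOrders', APP_NAME())

    EXEC dbo.Orders_Get @CustomerId = @CustomerId   -- the renamed copy
GO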
You can move the 'guts' of the SPROC to a new SPROC meeting your new naming conventions, and then leave the original sproc as a shell / wrapper which delegates to the new SPROC.
You can also add an 'audit' table to track when the old wrapper SPROC is called - this way you will know that there are no dependencies on the old SPROC, and the old SPROC can be safely dropped (also, make sure that it isn't just 'your app' using the DB - e.g. cross database joins or other apps)
This has a small performance penalty, and won't really buy you that much (other than being able to 'find' your new SPROCs more easily)
You will need to handle this in at least two areas, the application and the database. There could be other areas as well, and you have to be careful not to overlook them.
The Application
A Nice Practice for Future Projects
It helps to abstract your sprocs out. In our apps, we wrap all of our sprocs in a giant class, so I can make calls like this:
Dim SomeData as DataTable = Sprocs.sproc_GetSomeData(5)
That way, the code end is nice and encapsulated. I can go into Sprocs.sproc_GetSomeData and tweak the sproc name in just one place, and of course I can right click on the method and do a symbolic rename to fix the method call solution-wide.
Without the Abstraction
Without that abstraction, you can just do Find In Files (Ctrl+Shift+F) for the sproc name and then, if the results look right, open the files up and Find/Replace all the occurrences.
The SQL Server
Don't Trust View Dependencies
On the SQL Server end, theoretically in SQL Server Management Studio 2008 you can right click on a sproc and select View Dependencies.
That should show you a list of all the places where the sproc is used in the database, however my confidence in this feature is very low. It might be better in SQL 2008, but in previous versions it definitely had problems.
View Dependencies hurt me, and it will take time for that to heal. :)
Wrap It!
You end up having to keep the old sproc around for a while. This is the major reason why renaming sprocs is such a project - it can take a month to finally be done with it.
First replace its contents with some simple TSQL that calls the new sproc with the same parameters, and write some logging so that once some time goes by, you can tell whether the old sproc is actually unused.
Finally, when you're sure the old sproc is unused, delete it.
Other Areas?
There could be a lot of other areas as well. Reporting Services springs to mind. SSIS packages. Using the technique of keeping the old sproc around and re-routing to the new one (mentioned above) will help you know if you missed anything, however it won't tell you what you missed. This can lead to much pain!
Good luck!
Short of testing every path in your application to ensure that any calls to the database and the relevant stored procedures have been updated... no.
Use global search and replace (but review each suggested replacement) to try to avoid missing any instances. If your app is well structured, there should really be only one place where each stored proc is called.
As far as changing your application, I have all my stored procs as settings in the web.config file, so all the names are in one place and can be changed at any time to match changes to the database.
When the application needs to call a stored proc, the name is determined from web.config.
This makes it easier to manage all the potential calls which the application could make to the database services layer.
It will be a bit of a tedious search through your source code and other database objects I'm afraid.
Don't forget SSIS Packages, SQL Agent Jobs, Reporting Services rdl as well as your main application code.
You could use a regular expression like spProc1|spProc2 to search in the source code for all object names at the same time if you have a tool that supports searching through files using regular expressions (I have used RegexBuddy for this in the past)
If you want to cover the possibility that you might have missed the odd one, you could leave all the previous stored procedures behind for a month and just have them log a custom SQL trace event with APP_NAME(), SUSER_NAME() and any other info you find helpful, then have them call the renamed version. Then set up a trace monitoring this event.
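A sketch of what that shim might look like, with invented proc names - sp_trace_generateevent raises one of the UserConfigurable event classes (event ids 82-91), which you can then watch with a server-side trace or Profiler (the caller needs ALTER TRACE permission on SQL 2005 and up):

ALTER PROCEDURE dbo.spProc1      -- old name, kept around temporarily
    @SomeParam int
AS
    DECLARE @info nvarchar(128)
    SET @info = N'spProc1 called by ' + SUSER_SNAME() + N' via ' + APP_NAME()

    -- Event id 82 = UserConfigurable:0
    EXEC sp_trace_generateevent @eventid = 82, @userinfo = @info

    EXEC dbo.Proc1_GetStuff @SomeParam = @SomeParam   -- the renamed version
GO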
If you use a connection to the DB, stored procedures, etc., you should create a service class to delegate these calls.
This way when something in your database, SP etc changes, you only have to update your service class, and everything is protected from breaking.
There are tools for VS that can manage renaming, like Refactor and ReSharper.
I did this, and I relied heavily on global search in my source code for stored procedure names and on SQL Digger to find SQL procs that call other SQL procs.
http://www.sqldigger.com/
SQL Server (as of SQL 2000) poorly understands its own dependencies, so one is left searching the text of the scripts to find dependencies, which could be other stored procs or substrings of dynamic SQL.
I would obtain a list of references to a procedure by using the following, because SSMS dependencies doesn't pickup dynamic SQL references or references outside the database.
SELECT OBJECT_NAME(m.object_id), m.*
FROM SYS.SQL_MODULES m
WHERE m.definition LIKE N'%my_sproc_name%'
The SQL needs to be run in every database where there could be references.
This is preferable to syscomments and INFORMATION_SCHEMA.ROUTINES, which expose nvarchar(4000) columns. So if "mySprocName" is used at position 3998, it won't be found. syscomments does at least span multiple rows, but ROUTINES truncates. Should you disagree, take it up with gbn.
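If you need to sweep every database on the instance, one way (using the undocumented sp_MSforeachdb, so the usual caveats apply) is something like:

-- ? is replaced with each database name in turn; hits are collected in a temp table.
CREATE TABLE #refs (db sysname, obj sysname)

EXEC sp_MSforeachdb N'
    USE [?];
    INSERT INTO #refs (db, obj)
    SELECT DB_NAME(), OBJECT_NAME(m.object_id)
    FROM sys.sql_modules m
    WHERE m.definition LIKE N''%my_sproc_name%'';'

SELECT * FROM #refs ORDER BY db, obj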
Based on that list of dependencies, I'd create new stored procedures starting with the foundation stored procedures - those with the fewest dependencies. But I'd be careful not to prefix the new names with "sp_".
Verify the foundation procedures work identically to existing ones
Move to the next level of stored procedures - repeat steps 1-3 as needed till the highest level procedure has been processed.
Test the switch-over of the application to the new procedures - don't wait until all the procedures are updated to test the interaction with the application code. This doesn't need to be done for every stored procedure, but waiting to do this wholesale isn't a great approach either.
Developing in parallel has its risks too:
Any changes to existing code need to also be applied to the new code. If possible, work in areas where development is frozen or use a bug fix as an opportunity to migrate to the new code rather than apply the patch in two places (while also minimizing downtime for the transition).
Use a utility like FileSeek to search the contents inside each and every file in your project folder. Don't trust the windows search - it's slow and user-unfriendly.
So if you had a stored procedure named OldSprocOne and want to rename it to SP_NewONe, search for all occurrences of OldSprocOne, then search for all occurrences of SP_NewONe to make sure that name isn't already being used somewhere else and won't cause problems. Then rename each and every occurrence in the code.
This can be very time consuming and repetitive for larger systems.
Rather than worrying about the names of the procedures, I would be more concerned with replacing your legacy DAL with Enterprise Library Data Access Block 5
Database Accessors in Enterprise Library 5 DAAB - Database.ExecuteSprocAccessor
Having code like
public Contact FetchById(int id)
{
return _database.ExecuteSprocAccessor<Contact>
("FetchContactById", id).SingleOrDefault();
}
Will have at least a billion times more value than having stored procs with consistent names, especially if the current code passes around DataTables or DataSets ::shudders::
I'm all in favor of refactoring any sort of code.
What you really need here is a method of slowly and incrementally renaming your stored procs.
I certainly would not do a global find and replace.
Rather, as you identify small pieces of functionality and understand the relationships between the procs, you can re-factor in small pieces.
Fundamental to this process, though, is source-code control of your database.
If you do not manage changes to your database the same as normal code, you will be in serious trouble.
Have a look at DBSourceTools. http://dbsourcetools.codeplex.com
It's specifically designed to help developers get their databases under source code control.
You need a repeatable method of restoring your database to a specific state - prior to refactoring.
Then re-apply your refactored changes in a controlled way.
Once you have embraced this mindset, this mammoth and error-prone task will become simple.
This is assuming that you use SQL Server 2005 or above. An option that I have used before is to rename the old database object and create a SQL Server synonym with the old name. This will allow you to update your objects to whatever convention you choose and replace the references in code, SSIS packages, etc... as you come across them. Then you can concentrate on updating the references in your code gradually over however many maintenance releases you choose (as opposed to breaking them all at once). As you feel that you've found all references, you can remove the synonym as the code goes to QA.
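A sketch of that approach with made-up names (sp_rename plus CREATE SYNONYM, SQL Server 2005 and up):

-- Rename the proc to the new convention...
EXEC sp_rename 'dbo.OldSprocOne', 'Customer_GetOne'
GO

-- ...then point the old name at it so existing callers keep working.
CREATE SYNONYM dbo.OldSprocOne FOR dbo.Customer_GetOne
GO

-- Once every reference has been updated and verified:
-- DROP SYNONYM dbo.OldSprocOne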
I know a little about SQL injections and URL decode, but can someone who's more of an expert than me on this matter take a look at the following string and tell me what exactly it's trying to do?
Some kid from Beijing a couple weeks ago tried a number of injections like the one below.
%27%20and%20char(124)%2Buser%2Bchar(124)=0%20and%20%27%27=%27
It's making a guess about the sort of SQL statement that the form data is being substituted into, and assuming that it will be poorly sanitised at some step along the road. Consider a program talking to an SQL server (Cish code purely for example):
fprintf(sql_connection, "SELECT foo,bar FROM users WHERE user='%s';", username);
However, with the above string, the SQL server sees:
SELECT foo,bar FROM users WHERE user='' and char(124)+user+char(124)=0 and ''='';
Whoops! That wasn't what you intended. What happens next depends on the database back-end and whether or not you've got verbose error reporting turned on.
It's quite common for lazy web developers to enable verbose error reporting unconditionally for all clients and to not turn it off. (Moral: only enable detailed error reporting for a very tight trusted network, if at all.) Such an error report typically contains some useful information about the structure of the database which the attacker can use to figure out where to go next.
Now consider the username '; DESCRIBE TABLE users; SELECT 1 FROM users WHERE 'a'='. And so it goes on... There are a few different strategies here depending on exactly how the data comes out. SQL injection toolkits exist which can automate this process and attempt to automatically dump out the entire contents of a database via an unsecured web interface. Rafal Los's blog post contains a little more technical insight.
You're not limited to the theft of data, either; if you can insert arbitrary SQL, well, the obligatory xkcd reference illustrates it better than I can.
You'll find detailed info here:
http://blogs.technet.com/b/neilcar/archive/2008/03/15/anatomy-of-a-sql-injection-incident-part-2-meat.aspx
These lines are double-encoded -- the first set of encoded characters, which would be translated by IIS, are denoted by %XX. For example, %20 is a space. The second set aren't meant to be translated until they get to the SQL Server, and they use the char(xxx) function in SQL.
' and char(124)+user+char(124)=0 and ''='
That's strange.. however, make sure you escape strings so there will be no SQL injections.
Other people have covered what's going on, so I'm going to take a moment to get on my high horse and strongly suggest that, if you're not already (I suspect not, from a comment below), you use parameterized queries. They literally make you immune to SQL injection because they cause the parameters and the query to be transmitted completely separately. There are also potential performance benefits, yadda yadda, etc.
But seriously, do it.
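If you ever have to build SQL inside T-SQL itself, the same idea is available there via sp_executesql with a parameter list (the users/foo/bar names are just carried over from the example above); the value is transmitted as data, never spliced into the statement text:

DECLARE @user nvarchar(128)
SET @user = N''' and char(124)+user+char(124)=0 and ''''='''   -- the attack string, now just a value

EXEC sp_executesql
    N'SELECT foo, bar FROM users WHERE [user] = @u',
    N'@u nvarchar(128)',
    @u = @user
-- The injected quotes and operators are matched literally against the [user] column,
-- so the query simply returns no rows instead of being rewritten.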
Our application uses SQL Server Reporting Services and allows users to add custom filters to reports. We do this by modifying the RDL and then uploading the modified RDL to the server to create a new report. The problem is that after the report has run once, it's no longer needed; it's really just a temporary report. Obviously, this would eventually result in a lot of temporary reports lying around. We need a way to clean these up.
We've already thought about external methods like creating a service or job to periodically delete the reports, and that's probably what we'll end up doing if we can't come up with something better. What we're wondering is, does SSRS itself provide a better way to do this? We thought about trying to somehow use a cached instance which would be set to expire, but that seems to only work on an executed instance of a report, not the report itself. As far as I can tell there's no way to set a report to expire. Is there some other way to get SSRS to clean up for us?
Immediately deleting the report isn't an option because our execution is asynchronous.
Built-in, there's nothing. But writing something yourself is easy enough.
Try having a process which queries your catalog of reports for ones that are older than half an hour (or so). You could even join to ReportServerTempDB to see if they still have an active session (in which case, you ignore them a bit longer).
Once you've found them, it's easy to grab them using the Web Service interface and delete them from the catalog.
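A sketch of the kind of query that process might run. It goes straight at the ReportServer catalog, which is unsupported; the table and column names below are from SSRS 2005/2008 and may differ in your version, and it assumes the temporary reports are published under a dedicated folder such as /TempReports:

SELECT c.Path, c.Name, c.CreationDate
FROM ReportServer.dbo.Catalog c
WHERE c.Type = 2                                       -- 2 = report
  AND c.Path LIKE '/TempReports/%'                     -- hypothetical folder for the generated reports
  AND c.CreationDate < DATEADD(minute, -30, GETDATE())
  AND NOT EXISTS (SELECT 1
                  FROM ReportServerTempDB.dbo.SessionData s
                  WHERE s.ReportPath = c.Path)          -- still has an active session; leave it alone
-- Feed the resulting paths to the web service (e.g. ReportingService2005.DeleteItem) to remove them.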
But... I'd actually look at a better way of providing the custom filter, using code. Surely you could provide the filter as a parameter, and use the VB code within the report to convert what the user provides into something which could be evaluated for each row.
Rob
In our current database development environment we have automated build processes that check all the SQL code out of SVN, create database scripts, and apply them to the various development/QA databases.
This is all well and good, and is a tremendous improvement over what we did in the past, but we have a problem with rerunning scripts. Obviously this isn't a problem with some scripts, like altering procedures, because you can run them over and over without adversely affecting the system. Right now, to add metadata and run statements like create/alter table statements, we add code to check and see if the objects exist, and if they do, we don't run them.
Our problem is that we really only get one shot to run the script, because once the script has been run, the objects are in the environment and the system won't run the script again. If something needs to change once it's been deployed, we have a difficult process of running update scripts against the update scripts and hoping that everything falls in the correct order and all of the PKs line up between the environments (the databases are, shall we say, "special").
Short of dropping the database and starting the process from scratch (the last most current release), does anyone have a more elegant solution to this?
I'm not sure how best to approach the problem in your specific environment, but I'd suggest reading up on Rails' migrations feature for some inspiration on how to get started.
http://wiki.rubyonrails.org/rails/pages/UnderstandingMigrations
We address this - or at least a similar problem to this - as follows:
The schema has a version number - this is represented by a table which has one row per version which, as well as the version number, carries boring things like a date/time stamp for when that version came into existence.
The schema create/modify DDL is wrapped in code that performs the changes for us.
In the context above one would build the schema change code as part of the build process then run it and it would only apply schema changes that haven't already been applied.
In our experience (which is bound not to be representative) in most cases the schema changes are sufficiently small/fast that they can safely be run in a transaction which means that if it fails we get a rollback and the db is "safe" - although one would always recommend taking backups before applying schema updates if practicable.
I evolved this out of nasty, painful experience. It's not a perfect system (or an original idea), but as a result of working this way we have a high degree of confidence that if there are two instances of one of our databases with the same version, then the schema for those two databases will be the same in almost all respects, and that we can safely bring any db up to the current schema for that application without ill effects. (That last isn't 100% true, unfortunately - there's always an exception - but it's not too far from the truth!)
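A bare-bones illustration of the pattern (the table, version number and change are all invented for the example):

CREATE TABLE dbo.SchemaVersion (
    VersionNumber int      NOT NULL PRIMARY KEY,
    AppliedAt     datetime NOT NULL DEFAULT GETDATE()
)
GO

-- Each change script is guarded by the version it introduces, so re-running it is a no-op.
IF NOT EXISTS (SELECT 1 FROM dbo.SchemaVersion WHERE VersionNumber = 42)
BEGIN
    BEGIN TRANSACTION

    ALTER TABLE dbo.SomeTable ADD SomeNewColumn int NULL

    INSERT INTO dbo.SchemaVersion (VersionNumber) VALUES (42)

    COMMIT TRANSACTION
END
GO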
Do you keep your existing data in the database? If not, you may want to look at something similar to what Matt mentioned for .NET called RikMigrations
http://www.rikware.com/RikMigrations.html
I use that on my projects to update my database on the fly, while keeping track of revisions. Also, it makes it very simple to move database schema to different servers, etc.
If you want to have re-runnability in your scripts, then you can't have them as definitions... what I mean by this is that you need to focus on change scripts rather than "here is my Table script".
Let's say you have a table Customers:
create table Customers (
id int identity(1,1) primary key,
first_name varchar(255) not null,
last_name varchar(255) not null
)
and later you want to add a status column. Don't modify your original table script, that one has already run (and can have the if(! exists) syntax to prevent it from causing errors while running again).
Instead, have a new script, called add_customer_status.sql
In this script you'll have something like:
alter table Customers
add status varchar(50) null
GO
update Customers set status = 'Silver' where status is null
GO
alter table Customers
alter column status varchar(50) not null
Again, you can wrap this with an if(! exists) block to allow re-running, but here we've leveraged the notion that this is a change script, and we adapt the database accordingly. If there is data already in the Customers table then we're still okay, since we add the column, seed it with data, then add the not null constraint.
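For example, a re-runnable guard around the column addition might look like this (sys.columns is SQL 2005+; on SQL 2000 you'd query syscolumns instead). Only the ADD needs guarding - re-running the UPDATE and the ALTER COLUMN ... NOT NULL steps is already harmless:

IF NOT EXISTS (SELECT 1
               FROM sys.columns
               WHERE object_id = OBJECT_ID('dbo.Customers')
                 AND name = 'status')
BEGIN
    ALTER TABLE Customers ADD status varchar(50) NULL
END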
Both of the migration frameworks mentioned above are good, I've also had excellent experience with MigratorDotNet.
Scott named a couple of other SQL tools that address the problem of change management. But I'm still rolling my own.
I would like to second this question, and add my puzzlement that there is still no free, community-based tool for this problem. Obviously, scripts are not a satisfactory way to maintain a database schema; neither are instances. So, why don't we keep metadata in a separate (and while we're at it, platform-neutral) format?
That's what I'm doing now. My master database schema is a version-controlled XML file, created initially from a simple web service. A simple JavaScript program compares instances against it, and a simple XSL transform yields the CREATE or ALTER statements. It has limits, like RikMigrations; for instance, it doesn't always sequence interdependent objects correctly. (But guess what - neither does Microsoft's SQL Server Database Publication tool.) Really, it's too simple. I simply didn't include objects (roles, users, etc.) that I wasn't using.
So, my view is that this problem is indeed inadequately addressed, and that sooner or later we'll have to get together and tackle the devilish details.
We went the 'drop and recreate the schema' route. We had some classes in our JUnit test package which parameterized the scripts to create all the objects in the schema for the developer executing the code. This allowed all the developers to share one test database and everyone could simultaneously create/test/drop their test tables without conflicts.
Did it take a long time to run? Yes. At first we used the setUp method for this, which meant the tables were dropped/created for every test, and that took way too long. Then we created a TestSuite which could be run once before all the tests for a class and then cleaned up when all the class tests were complete. This still meant that the db setup ran many times when we ran our 'AllTests' class, which included all the tests in all our packages. How I solved it was by adding a semaphore to the OracleTestSuite code, so when the first test requested the database to be set up it would do that, but any subsequent call would just increment a counter. As each tearDown() method was called, the counter would be decremented until it reached 0, and the OracleTestSuite code would drop everything. One issue this leaves is whether the tests assume that the database is empty. It can be convenient to let database tests know the order in which they run so they can take advantage of the state of the database, because it can reduce the duplication of DB setup.
We used the concept of ObjectMothers to solve a similar problem with creating complex domain objects for testing purposes. Mock objects might be a better answer but we hadn't heard about them at the time. After all this time, I'd recommend creating test helper methods that could create standardized datasets for the typical scenarios. Plus that would help document the important edge cases from a data perspective.