ERP system written in SQL Server, trigger job not running

We are a manufacturing plant that runs off an ERP system written in SQL Server. I have never worked with SQL and therefore do not know the language. What I do know is that a trigger job that was running and updating data for us is now no longer running. Is anyone familiar enough to answer some questions about this for me? Any help would certainly be appreciated!
Thanks,
Jana

Well, someone could have disabled the trigger, someone could have deleted it, or someone could have changed it.
Here is code to enable all triggers on a table:
ALTER TABLE mydatabase.dbo.mytable ENABLE TRIGGER ALL
or you could just enable the one trigger that is disabled.
If that doesn't work, it is one of the other two possibilities. You do have a backup, right? Or all the code properly stored in source control, so you can revert to the earlier version if need be? If you do, though, find out why the trigger was changed or deleted. Probably it was interfering with another process, and the people who did that weren't professional enough to find out whether the trigger in its current form was still needed.
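To find out which of the three it is, you can check the system catalog first. A minimal sketch on SQL Server 2005+, assuming a hypothetical trigger named trg_UpdateData on dbo.mytable (substitute your real names):

-- Does the trigger still exist, and is it disabled?
SELECT name, is_disabled
FROM sys.triggers
WHERE parent_id = OBJECT_ID('dbo.mytable')

-- If it exists but is_disabled = 1, re-enable just that one trigger
ALTER TABLE dbo.mytable ENABLE TRIGGER trg_UpdateData

If the query returns no rows, the trigger was deleted, and you are in backup/source-control territory.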

Related

red herring error "The user does not have permission to perform this action"

When running a stored procedure, we're getting error 297:
"The user does not have permission to perform this action"
This occurs during times of heavy load (regularly, when a trim job is running concurrently). The error clears up when the service accessing SQL Server is restarted (and very likely the trim job has finished as well), so it's obviously not a real permissions problem. The error is reported on a line of a stored procedure that accesses a function, which in turn accesses dynamic management views.
What kind of situations could cause an error like this, when it's not really a permissions problem?
Might turning on trace flag 4616 fix this, as per this article? I'd like to be able to just try it, but I need more info. Also, I'm baffled by the fact that this is an intermittent problem, only happening during periods of high activity.
I was trying to reproduce this same error in other situations (that were also not real permissions problems), and I found that when running this on SQL Server 2005 I do get the permissions problem:
SELECT * FROM sys.dm_db_index_physical_stats(66, NULL, NULL, NULL, NULL)
(66 is an invalid DBID.)
However, we're not using dm_db_index_physical_stats with an incorrect DBID. We ARE using dm_tran_session_transactions and dm_tran_active_transactions, but they don't accept parameters, so I can't get the error to happen with them. But I was thinking the issues might be linked.
Thanks for any insights.
Would it be related to concurrency issues?
For example, the same data being processed, or a global temp table being accessed? If so, you may consider sp_getapplock.
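A minimal sp_getapplock sketch to serialize access to a shared resource; the lock name ProcessTrimJob is hypothetical:

DECLARE @result int
BEGIN TRAN
-- Take an exclusive app lock tied to this transaction; wait up to 10 seconds
EXEC @result = sp_getapplock
    @Resource = 'ProcessTrimJob',
    @LockMode = 'Exclusive',
    @LockOwner = 'Transaction',
    @LockTimeout = 10000
IF @result >= 0
BEGIN
    -- ... do the serialized work here ...
    COMMIT TRAN  -- releases the lock automatically
END
ELSE
    ROLLBACK TRAN  -- could not acquire the lock in time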
And does each connection use different credentials with a different set of permissions? Have all users been granted VIEW SERVER STATE (GRANT VIEW SERVER STATE TO xxx)?
Finally, and related to both ideas above, do you use EXECUTE AS anywhere that may not be reverted, etc.?
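A sketch of what that failure mode looks like; the user name LimitedUser is hypothetical:

EXECUTE AS USER = 'LimitedUser'  -- hypothetical low-privilege user
-- ... work runs under LimitedUser's permissions ...
-- If an error or timeout aborts the batch here, the REVERT never runs,
-- and later code on this connection keeps LimitedUser's restricted context.
REVERT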
Completely random idea: I've seen this before, but only when I've omitted a GO between the end of the stored proc definition and the following GRANT statement, so the SP tried to set its own permissions. Is it possible that a timeout or concurrency issue causes some code to run that wouldn't normally?
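To illustrate, with a hypothetical proc name: without the batch separator, the GRANT is compiled into the procedure body and runs every time the proc is executed:

CREATE PROCEDURE dbo.MyProc  -- hypothetical name
AS
SELECT 1
-- GO   <-- without this separator...
GRANT EXECUTE ON dbo.MyProc TO SomeRole  -- ...this line becomes part of the proc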
If this occurs only during periods of heavy activity, maybe you can run Profiler and watch what locks are being held.
Also, is this always being run the same way? For example, is it run as a SQL Agent job, or are you sometimes running it manually and sometimes as a job? My thinking is that maybe it runs as different users at different times.
Maybe also take a look at this Blog Post
Thanks, everyone, for your input. What I did (which looks like it has fixed the problem for now) is alter the daily trim job. It now waits substantially longer between deletes and also deletes a much smaller chunk of records at a time.
I'll update this later on with more info as I get it.
Thanks again.

SQL Server Profiler

I have been told that SQL Profiler makes changes to the MSDB when it is run. Is this true and if so what changes does it make?
MORE INFO
The reason I ask is that we have a DBA who wants us to raise a change request when we run Profiler on a live server. Her argument is that it makes changes to the DBs, which should be change controlled.
Starting a trace adds a row to msdb.sys.traces; stopping the trace removes the row. However, msdb.sys.traces is a view over an internal table-valued function and is not backed by any physical storage. To prove this, set msdb to read_only, start a trace, observe the new row in msdb.sys.traces, stop the trace, and remember to turn msdb back to read_write. Since a trace can be started from Profiler even when msdb is read-only, it is clear that normally no write into msdb occurs.
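You can watch this happen by querying the view while a trace is running; a minimal sketch:

-- List active traces; the default trace normally shows up as id = 1
SELECT id, status, path, start_time, event_count
FROM msdb.sys.traces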
Now, before you go and grin at your DBA, she is actually right. Profiler traces can place significant stress on a live system, because the traced events must block until they can generate the trace record. Live, busy systems may experience blocking on resources of type SQLTRACE_BUFFER_FLUSH, SQLTRACE_LOCK, TRACEWRITE, and others. Live traces (Profiler) are usually worse; file traces (sp_trace_create) are better, but can still cause issues. So starting new traces should definitely be something the DBA is informed about and very carefully considered.
The only changes I know of happen when you schedule a trace to gather periodic information: a job is added.
That's not the case as far as I'm aware (other than the trivial change noted by others).
What changes are you referring to?
Nothing I have ever read, heard, or seen says that SQL Profiler or anything it does or uses has any impact on the MSDB database. (SQL Profiler is, essentially, a GUI wrapped around the trace routines.) It is of course possible to configure a specific setup/implementation to do, well, anything, and perhaps that's what someone is thinking of.
This sounds like a kind of "urban legend". I recommend that you challenge it -- get the people who claim it to be true to provide proof.

What is your biggest SQL Server mistake or embarrassing incident?

You know the one I am talking about.
We have all been there at some point. You get that awful feeling of dread and the realisation of "oh my god, did that actually just happen?"
Sure, you can laugh about it now though, right? So go on and share your SQL Server mishaps with us.
Even better if you can detail how you resolved your issue so that we can learn from our mistakes together.
So in order to get the ball rolling, I will go first...
It was back in my early years as a junior SQL Server Guru. I was racing around Enterprise Manager, performing a few admin duties. You know how it is, checking a few logs, ensuring the backups ran ok, a little database housekeeping, pretty much going about business on autopilot and hitting the enter key on the usual prompts that pop up.
Oh wait, was that an “Are you sure you wish to delete this table?” prompt? Too late!
Just to confirm for any aspiring DBAs out there: deleting a production table is a very, very bad thing!
Needless to say, a world record was promptly set for the fastest database restore to a new database, swiftly followed by a table migration, oh yeah. Everyone else was none the wiser, of course, but still a valuable lesson learnt: concentrate!
I suppose everyone has missed the WHERE clause off a DELETE or UPDATE at some point...
Inserted 5 million test persons into a production database. The biggest mistake, in my opinion, was letting me have write access to the production DB in the first place. :P Bad DBA!
My biggest SQL Server mistake was assuming it was as capable as Oracle when it came to concurrency.
Let me explain.
When it comes to transactional isolation level in SQL Server you have two choices:
(1) Dirty reads: transactions can see uncommitted data (from other transactions); or
(2) Selects block on uncommitted updates.
I believe these come from ANSI SQL.
(2) is the default isolation level and (imho) the lesser of two evils. But it's a huge problem for any long-running process. I had to do a batch load of data and could only do it out of hours, because it killed the website while it was running (it took 10-20 minutes, as it was inserting half a million records).
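For reference, a minimal sketch of the two options in T-SQL (the table name is hypothetical):

-- Option (1): dirty reads; selects don't block on uncommitted updates
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Orders  -- may see data other transactions haven't committed

-- Option (2), the default: selects block on uncommitted updates
SET TRANSACTION ISOLATION LEVEL READ COMMITTED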
Oracle, on the other hand, has MVCC. This basically means every transaction will see a consistent view of the data. They won't see uncommitted data (unless you set the isolation level to do that). Nor do they block on uncommitted transactions (I was stunned at the idea that an allegedly enterprise database would consider this acceptable from a concurrency standpoint).
Suffice it to say, it was a learning experience.
And ya know what? Even MySQL has MVCC.
I changed all of the prices to zero on a high-volume, production eCommerce site. I had to take the site down and restore the DB from backup... VERY UGLY.
Luckily, that was a LOOONG time ago.
Forgetting to highlight the WHERE clause when updating or deleting.
Scripting procs with the “drop dependent objects” option checked, and running that on production.
I was working on the payment system on a large online business. Multi-million Euro business.
Get a script from a colleague with a few small updates.
Run it on production.
Get an error report 30 minutes later from the helpdesk, complaining about no purchases in the last 30 minutes.
Discover that all connections are waiting on a table lock to be released.
Discover that the script from my colleague started with an explicit BEGIN TRANSACTION and expected me to manually type COMMIT TRANSACTION at the end.
Explain to boss why 30 minutes of sales were lost.
Blame myself for not reading the script documentation properly.
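A minimal sketch of the trap, and of how to spot it from another connection (the table name and update are hypothetical):

BEGIN TRANSACTION
UPDATE Payments SET status = 'migrated' WHERE batch_id = 42  -- hypothetical
-- Script ends here: the transaction is still open and holds its locks.

-- In the offending session: returns 1 instead of 0
SELECT @@TRANCOUNT
-- From any session: reports the oldest active transaction in the database
DBCC OPENTRAN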
Starting off a restore from last week onto a production instance instead of the development instance. Not a good morning.
I've seen plenty other people miss a WHERE clause.
Myself, I always type the WHERE clause first, and then go back to the start of the line and type in the rest of the query :)
Thankfully, you only ever make one cock-up like that before you realise that using transactions really is very, very trivial. I've amended thousands of records by accident before; luckily, rollback was there...
If you're querying the live environment without having thoroughly tested your scripts, then I think “embarrassing” should really be “foolhardy” or perhaps “unprofessional”.
One of my favorites happened in an automated import when the client changed the data structure without telling us first. The Social Security number column and the amount of money we were to pay the person got switched. Luckily we found it before the system tried to pay someone his social security number. We now have checks in automated imports that look for funny data before running and stop it if the data seems odd.
Like zabzonk said, forgot the WHERE clause on an update or two in my day.
We had an old application that didn't handle syncing with our HR database for name updates very efficiently, mainly due to the way changes to titles were keyed in. Anyway, a certain woman got married, and I had to write a database change request to update her last name. I forgot the WHERE clause, and everyone in said application was now named Allison Smith.
Columns are nullable, and parameter values fail to retrieve the correct information...
The biggest mistake was giving developers "write" access to the production DB.
Many DEV and TEST records were inserted/overwritten and backed up to production until it was wisely suggested (by me!) to allow read access only!
Sort of SQL Server related. I remember learning how important it is to always dispose of a SqlDataReader. I had a system that worked fine in development and happened to run against the ERP database. In production, it brought down the database, because I assumed it was enough to close the SqlConnection, and I had hundreds, if not thousands, of open connections.
At the start of my co-op term, I ended up expiring access for everyone who used this particular system (which was used by a lot of applications in my province). In my defense, I was new to SQL Server Management Studio and didn't know that you could 'open' tables and edit specific entries with a SQL statement.
I expired all the user access with a simple UPDATE statement (access to this application was given by a user account on the SQL box as well as a specific entry in an access table), but when I went to highlight that very statement and run it, I didn't include the WHERE clause.
A common mistake, I'm told. The quick fix was to unexpire everyone's accounts (including accounts that were supposed to be expired) until the database could be backed up. Now I either open tables and select specific entries with SQL, or I wrap absolutely everything inside a transaction followed by an immediate rollback.
Our IT Ops decided to upgrade to SQL 2005 from SQL 2000.
The next Monday, users were asking why their app didn't work, with errors like:
“DTS not found”, etc.
This led to a nice set of three Saturdays in the office rebuilding the packages in SSIS, with a good overtime package. :)
Not exactly a "mistake", but back when I was first learning PHP and MySQL, I would spend hours daily trying to figure out why my code was not working, not knowing that I had the wrong password/username/host/database credentials for my SQL database. You can't believe how much time I wasted on that, and to make it even worse, this was not a one-time incident. But LOL, it's all good; it builds character.
I once, and only once, typed something similar to the following:
psql> UPDATE big_table SET foo=0; WHERE bar=123
I managed to fix the error rather quickly. Since that and another error, my updates always start out as:
psql> UPDATE table SET WHERE foo='bar';
Much easier to avoid errors that way.
I worked with a junior developer once who got confused and called "Drop" instead of "Delete" on a table.
It was a long night working to get the backup restored...
Edit: I should have mentioned, this was in a production environment, and the table was full of data...
This was before the days when Google could help. I didn't encounter this problem with SQL Server, but with its ugly older cousin, Sybase.
I updated a table schema in a production environment. Not understanding at the time that stored procedures that use SELECT * must be recompiled to pick up new fields, I proceeded to spend the next eight hours trying to figure out why the stored procedure that performed a key piece of work kept failing. Only after a server reboot did I clue in.
Losing thousands of dollars and hundreds of (end user) man-hours at your flagship customer's site is quite an educational experience. Highly recommended!!
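For reference, the fix that avoids the reboot is to force a recompile; a minimal sketch with hypothetical object names:

-- Mark a proc for recompilation on its next execution
EXEC sp_recompile 'dbo.KeyWorkProc'
-- Or pass the altered table to recompile everything that references it
EXEC sp_recompile 'dbo.ChangedTable'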
A healthy number of years ago, I was working at a client's site that had a nice script to clear the dev environment of all orders, carts, and customers, to ease up testing. So I, of course, put the damn script into the production server's Query Analyzer and ran it.
It took some 5-6 minutes to run, too. I was bitching about how slow the dev server was until the number of deleted rows came up. :)
Fortunately, I had just run a full backup, since I was about to do an installation...
Beyond the typical WHERE clause error: I ran a DROP on an incorrect DB and thus had to run a restore. Now I triple-check my server name. Thankfully, I had a good backup.
I set the maximum server memory to 0. I was thinking at the time that it would automatically tell SQL Server to use all available memory (it was early). No such luck. SQL Server decided to use only 16 MB, and I had to connect in single-user mode to get the setting changed back.
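For anyone curious, the setting in question; a minimal sketch (the 2048 is just an example value):

-- 'max server memory' is in MB, and 0 does not mean "use everything"
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'max server memory (MB)', 2048  -- example value
RECONFIGURE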
Hit "Restore" instead of "Backup" in Management Studio.

SQL Server UNDO

I am a part-time developer (full-time student), and the company I am working for uses SQL Server 2005. The thing I find strange about SQL Server is that if you run a script that involves inserting, updating, etc., there isn't any real way to undo it except a rollback or using transactions.
You might say, what's wrong with those two options? Well, if, for example, someone does an UPDATE statement and forgets to put in a WHERE clause, you suddenly find yourself with 13k rows updated, and all the clients in that table are now named 'bob'. Now you have the wrath of 13k Bobs to face, since that "someone" forgot to use a transaction, and if you do a rollback, you are going to undo critical changes that were needed in other fields.
In my studies I use Oracle. In Oracle you can first run the script and then commit it if you find there aren't any mistakes. I was wondering if there was something I missed in SQL Server, since I am still relatively new to the working developer world.
I don't believe you missed anything. Using transactions to protect against these kinds of errors is the best mechanism, and it is the same mechanism Oracle uses to protect the end user. The difference is that Oracle implicitly begins a transaction for you, whereas in SQL Server you must do so explicitly.
SET IMPLICIT_TRANSACTIONS ON is probably what you are looking for.
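A minimal sketch of what that gives you in a session (the table name is hypothetical):

SET IMPLICIT_TRANSACTIONS ON
-- This UPDATE now opens a transaction automatically, as Oracle would
UPDATE Clients SET name = 'bob'  -- oops, forgot the WHERE clause
ROLLBACK  -- nothing was committed yet, so the mistake can be undone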
I'm no database/SQL Server expert, and I'm not sure if this is what you're looking for, but there is the possibility of creating snapshots of a database. A snapshot allows you to revert the database to that state at any time.
Check this link for more information:
http://msdn.microsoft.com/en-us/library/ms175158.aspx
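A minimal sketch of creating a snapshot and reverting to it (database and file names are hypothetical; the logical NAME must match the source database's data file):

-- Create the snapshot
CREATE DATABASE MyDb_Snapshot ON
    (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Snapshot.ss')
AS SNAPSHOT OF MyDb

-- Revert the database to the snapshot's state
RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_Snapshot'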
I think transactions work well. You could roll back the DB (to a previous backup or a point in the log), but I think transactions are much simpler.
How about this: never make changes to a production database that have not first been tested on your development server, and always make a backup before trying anything unproven.
From what I understand, SQL Server 2008 added an auditing feature that logs all changes made by users to the various databases and also has the option to roll them back after the fact.
Again, this is from what I've read or overheard from our DBA, but might be worth looking into.
EDIT: After looking into it, it appears to only give the ability to roll back schema changes, not data modifications (DDL triggers).
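For illustration, a minimal DDL trigger sketch (SQL Server 2005+; the trigger name is hypothetical), showing why this covers schema changes rather than data changes:

-- Blocks and undoes any DROP TABLE issued in this database
CREATE TRIGGER ddl_guard_drop_table
ON DATABASE
FOR DROP_TABLE
AS
BEGIN
    PRINT 'DROP TABLE is not allowed here.'
    ROLLBACK
END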
If I am doing something with any risk in SQL Server, I write the script like this:
BEGIN TRAN
Insert .... whatever
Update .... whatever
-- COMMIT
The last line is a comment on purpose: I first run the lines before it, make sure there's no error, and then highlight just the word COMMIT and execute that. This works because in Management Studio you can select a part of the T-SQL and execute just the selected portion.
There are a couple of advantages over the alternative: implicit transactions work too, but they're not the default for SQL Server, so you have to remember to turn them on or set options to do that. Also, if it's on all the time, I find it's easy for people to "forget" and leave uncommitted transactions open, which can block others. That's mainly because it's not the default behavior, and SQL Server folks aren't used to it.

Is there DDL Auditing in SQL Server 2000

And if not, is there a way to tell when a trigger was disabled/enabled?
FOLLOWUP:
It's a rather interesting diagnostic case. I was only involved from the periphery, and the guy doing the diagnostics isn't a database guy.
Anyways, he had a trigger that would move data from one table to another. He did a comparison, and not all the data had made it to the second table. I said, I'm a critic of SQL Server, but I trust that their triggers fire in the same transaction. He said, but some of the data made it... if it had just been disabled, nothing should have made it. True. So I said, maybe someone is enabling and disabling the triggers. Hence the question.
But what really happened is that someone permanently disabled the trigger and copied its code into a sproc that was set to run at a certain time.
The correct forensic test would have been to look at the dependencies of the second table and see what else was using it. That would have shown the tumor sproc... (I've been watching lots of House reruns, can ya tell?)
No auditing, though there is a company called Lumigent that offers a product, "Audit DB", which will do DDL auditing (among other things) for SQL Server.
You can look at the crdate column in the sysobjects table, which will tell you when the object was created.
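A minimal sketch that works on SQL Server 2000 (the trigger name is hypothetical):

-- When was the object created?
SELECT name, crdate
FROM sysobjects
WHERE name = 'trg_MoveData'  -- hypothetical trigger name

-- Is the trigger currently disabled? (1 = disabled, 0 = enabled)
SELECT OBJECTPROPERTY(OBJECT_ID('trg_MoveData'), 'ExecIsTriggerDisabled')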
Your problem looks quite similar to the one that Randy Volters wrote about in Simple-Talk
http://www.simple-talk.com/sql/database-administration/dml-trigger-status-alerts/
I suspect it will help
