We are running into a few issues with our TFS installation (TFS 2013 Update 4, SQL 2014 Standard) as a result of email alerts. Most notably, Work Items cannot be created, because creating one triggers an email alert.
Any time a process or user attempts to create a Work Item, the error
TF30040: The database is not correctly configured. Contact your Team Foundation Server administrator.
is received. Further, when I check the Event Viewer on the server, I can see the error and it reports that the inner exception is:
Exception Message: The EXECUTE permission was denied on the object 'sp_send_dbmail', database 'msdb', schema 'dbo'. (type SqlException)
I have worked with the DBA and we have enabled Email Alerts on the server. We have verified that, in general, the alerts work by using the test button on the administration console. I can also set up a check-in alert through the web interface and receive said alerts without issue. This seems to be specifically affecting Work Item creation alerts (which apparently are just automatically and irrevocably enabled).
Presumably, we could correct this by giving appropriate permissions to use that stored procedure. To do so, we need to know what user to give permissions to. So far we have tried giving execute permissions to my AD user, the service account used by the build service, and the Network Service account (which appears to be the TFS Service Account).
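For reference, the grants we tried were along these lines ([DOMAIN\SomeAccount] is a placeholder for each of the accounts above); as I understand it, a Database Mail sender also normally needs to be a member of msdb's DatabaseMailUserRole:

USE msdb;
-- the account needs a user in msdb before it can be granted anything there
-- CREATE USER [DOMAIN\SomeAccount] FOR LOGIN [DOMAIN\SomeAccount];
GRANT EXECUTE ON dbo.sp_send_dbmail TO [DOMAIN\SomeAccount];
-- Database Mail callers are normally members of this msdb role
ALTER ROLE DatabaseMailUserRole ADD MEMBER [DOMAIN\SomeAccount];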
There is no indication in any error message as to what user is being used to execute that procedure. So, my question: What SQL user is used to send alerts when creating Work Items?
Edit:
For the record, this started working of its own accord. We decided Monday to call Microsoft to get this fixed. Before that happened, failed builds magically created some work items (on Tuesday, a full day after we gave up), and we are now able to create work items. Everyone involved states not doing anything. We are baffled, but in a good way.
I'm going to advise you that a DBA should not be making changes to the TFS databases. I suggest opening a ticket with MSFT and getting assistance from the product support group.
Related
I have a strange one for you. I'm maintaining several databases prior to a migration. One of them is a 2008R2 instance. This instance has multiple errors in the logs (the infrastructure has been poorly maintained), so I set up a bunch of alerts (severity 16-25) and tried using Database Mail to send them. But the mail registry settings keep resetting and preventing it from working. I can't tell whether someone is maliciously going in behind me and reverting the settings in the registry (which is possible in the poisonous environment I'm working in) or whether it's some kind of obscure problem.
Just to confirm... I've created the same alerts with the same mail settings on the 2017 instances that I'm also monitoring with no problem. Equally, on the 2008R2 instance, I can successfully set the Database Mail parameters, send myself a test email AND execute a job, sending a 'completed' email using the same Database Mail profile and user via an Operator.
Setting the parameters using xp_instance_regwrite or sp_set_sqlagent_properties didn't work either (the xp_instance_regwrite calls were roughly those shown after the list below). I realised early on that the parameters weren't sticking because of a lack of admin rights on the server, so I got the infrastructure guys to give me access. I then:
logged in to the server
shut down the Agent (it isn't doing anything at all)
configured the registry settings (HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.<instance>\SQLServerAgent\UseDatabaseMail = 1, HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL10_50.<instance>\SQLServerAgent\DatabaseMailProfile = <my-mail-profile>)
restarted the Agent.
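For reference, the earlier xp_instance_regwrite attempt was along these lines (the profile name is the same placeholder as above; xp_instance_regwrite fills in the instance-specific part of the key path itself):

-- enable Database Mail for the Agent
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'UseDatabaseMail',
    N'REG_DWORD',
    1;
-- set the Agent's Database Mail profile
EXEC master.dbo.xp_instance_regwrite
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'DatabaseMailProfile',
    N'REG_SZ',
    N'<my-mail-profile>';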
I then confirmed from SSMS that the 'Mail session' parameters (Enable mail profile, Mail system and Mail profile) were correctly set. A day later, the log is full of errors, I have no emails, and all of the Agent properties are empty and greyed out!
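For what it's worth, the values can be read straight back out of the registry to confirm whether they have really been wiped (rather than SSMS simply not displaying them), along these lines:

DECLARE @profile sysname, @use_dbmail int;
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'DatabaseMailProfile',
    @profile OUTPUT;
EXEC master.dbo.xp_instance_regread
    N'HKEY_LOCAL_MACHINE',
    N'SOFTWARE\Microsoft\MSSQLServer\SQLServerAgent',
    N'UseDatabaseMail',
    @use_dbmail OUTPUT;
SELECT @profile AS DatabaseMailProfile, @use_dbmail AS UseDatabaseMail;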
Anyone seen this before?
I have an on-premises CRM 2016 instance and I can't receive any incoming emails in it, even though when I run the Test Access check it says everything is good.
First, I'm unable to change a queue record's email address because I keep getting a SQL timeout error (no matter how much I increase the timeout, it never saves). If I change any other field on the queue it works and saves, but the email field never does.
The same goes for Mailbox records: when I try to change the email address, it returns a SQL timeout error.
So I changed these email addresses directly with SQL queries, but even after that, incoming emails still aren't created inside CRM.
The following warning is logged in the Event Viewer:
35241 - The recipients for the email message with subject "[x]" in mailbox [email address] did not match any known records.
I'm running out of options here. When I run the diagnostics tool on my organization, its performance is good, but there must be something obstructing the communication with SQL. Any clues?
SQL timeout error:
Unhandled Exception: System.ServiceModel.FaultException`1[[Microsoft.Xrm.Sdk.OrganizationServiceFault, Microsoft.Xrm.Sdk, Version=8.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35]]: SQL timeout expired.Detail:
-2147204783
SQL timeout expired.
2018-10-10T14:14:15.5749939Z
I got the answer from the Microsoft Community Forums, thanks to Radu Chiribelea:
It's not enough to change the email address in SQL in a record's base table for it to be used for email tracking. There are other references as well - for example, the EmailSearchBase. This is why you need to let the platform handle your changes.
Your biggest issue here is the SQL timeout, and that is what you need to address. Since this occurs at a Create / Update, I suspect there might be a deadlock somewhere. Do you have any plug-ins or workflows triggered at the time you create / update? If you disable those, do you still see the issue?
Can you enable a CRM Platform trace at a Verbose Level while reproducing the issue? This would give you a better overview of the actual timeout and you can then start from there to tackle it.
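As a general SQL-side check (not specific to CRM), blocked requests can be spotted from a second connection while the save is hanging, along these lines:

-- run against the CRM organization database while the update is hanging
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.blocking_session_id <> 0;

If blocking_session_id points at another session, that session's statement is the place to start looking.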
I am developing a WCF service, so far I have added only a few simple interfaces. Testing the service in the VS debug environment, all is well.
When I published the service to IIS 7 for further testing, all was fine, at least after I added db_datareader and db_datawriter permissions to the relevant databases for user IIS APPPOOL\ASP.NET v4.0.
Now the service is failing silently. Calling the service through a browser, I get the following message: "The server encountered an error processing the request. See server logs for more details." Fiddler says this is a result of an HTTP 400 error message (Bad Request).
I rolled back the code to return a hard-coded value and the service responded correctly, so I am sure that the issue is at the DB level, not with the IIS 7 server installation.
The problem is that the Event Viewer shows no meaningful error messages, despite the fact that all the code is wrapped in try/catch blocks that write any caught exception to the event log.
There is one message stating "Starting up database 'ReportServer$SQLEXPRESSTempDB'", but as far as I can tell that appears every 10 minutes, without reference to any attempts to access the database. Just to make sure, I gave the .NET 4 user R/W permission to access that DB as well.
In addition, I don't see any messages in the SQL Server Logs.
How can I debug this?
The problem was that the db_datareader and db_datawriter roles are not enough to execute stored procedures. I granted the additional permission and all was good.
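For anyone hitting the same thing, the extra grant amounts to something like this, assuming the stored procedures live in the dbo schema (the identity is the same app pool user as above):

-- grant EXECUTE on everything in the dbo schema to the app pool identity
GRANT EXECUTE ON SCHEMA::dbo TO [IIS APPPOOL\ASP.NET v4.0];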
The only thing I don't understand is why the exception wasn't written to the event log.
PROBLEM BACKGROUND
Sorry if this is a bit tedious to read, but please bear with me.
I have been tasked with determining the most restrictive security permissions, or rather with investigating whether more restrictive security settings can be configured for the SQL Server login our program uses while still letting it function as normal.
Currently the program runs as a Windows service configured to log on using a Windows user account that has been set up in SQL Server with trusted (Windows) authentication. The login has been assigned the db_owner role and the service works fine like that.
So, to narrow the permissions for this user, I removed the db_owner membership and assigned the login to the db_datareader and db_datawriter roles. Unfortunately this causes a problem, and when I start up the service I get an error dialogue displaying:
Error 1053: the service did not respond to the start or control request in a timely fashion.
and in the Event Viewer, under the System log, these events are logged:
event 7009 (timeout waiting for ... to connect)
event 7000 (the service did not respond to the start or control request)
My problem is that the code base is really large and I'm not sure exactly what to look for that would require db_owner permissions (perhaps it sets permissions itself?).
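For clarity, the role change that triggers the failure was essentially this (database and account names are placeholders):

USE [AppDatabase];  -- placeholder for the application database
ALTER ROLE db_owner DROP MEMBER [DOMAIN\ServiceAccount];      -- placeholder account
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\ServiceAccount];
ALTER ROLE db_datawriter ADD MEMBER [DOMAIN\ServiceAccount];
-- (on SQL Server 2008 R2 and earlier, use sp_droprolemember / sp_addrolemember instead)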
QUESTION
What should I be looking for in a program that executes SQL that would cause it to require db_owner permissions?
In case the first question is too general: is there an easy way/any tools I can use to figure out what a Windows service is trying to do during start-up 'SQL wise' if I get system error events logged:
event 7009 (Timeout (30000 milliseconds) waiting for the ... service to connect)
event 7000 (The service did not respond to the start or control request in a timely fashion).
BTW, I tried running SQL Server Profiler with all audit events selected, but I still get nothing logged when starting the service.
This is such a broad question without knowing the architecture of your service and how it communicates with SQL Server. Are you using in-line SQL? Stored Procedures?
I think you'd best tackle this issue by starting from the service's code and tracing the execution path from the start and see what is being executed on/against SQL Server.
Alternatively, if you are using stored procedures, you may want to script them all out into a file and search on some common T-SQL commands limited to a db_owner, such as CREATE, DROP, ALTER.
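Alternatively, the module definitions can be searched directly without scripting them all out, for example:

-- lists procedures, functions, triggers and views whose definition contains DDL keywords;
-- extend the pattern list with whatever statements you care about (GRANT, other CREATE/ALTER/DROP, etc.)
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS object_name,
       o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%CREATE TABLE%'
   OR m.definition LIKE '%ALTER TABLE%'
   OR m.definition LIKE '%DROP TABLE%'
   OR m.definition LIKE '%TRUNCATE TABLE%'
   OR m.definition LIKE '%GRANT %';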
I have two databases, production and stage. I am getting the error message in the title of this post when I click "Database Diagrams" for production, but on stage I don't get an error.
I've researched this message, and I've found posts such as this:
Link
They pretty much say to change the owner of my database to sa. I'm not convinced this is the issue, though, because both production and stage databases have the same owner (not sa), but I only get this error for production.
Does anyone else know how else to resolve this error message?
both production and stage databases have the same owner (not sa), but I only get this error for production.
That usually is the very source of the problem: a database created on one machine is restored on a different machine where the SID of the original creator is no longer valid. Change the owner to a valid one:
ALTER AUTHORIZATION ON DATABASE::[<dbname>] TO sa;
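To confirm that the owner SID is the problem before changing anything, a check along these lines shows whether it still maps to a server login (a NULL owner_login means the SID is orphaned):

SELECT d.name  AS database_name,
       d.owner_sid,
       sp.name AS owner_login  -- NULL means the owner SID no longer maps to a login on this server
FROM sys.databases AS d
LEFT JOIN sys.server_principals AS sp ON sp.sid = d.owner_sid
WHERE d.name = N'<dbname>';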
I received this error too. In my case, I had existing diagrams but could not view them because of this error. I remembered that I had changed the name of the server a week or so ago. After the computer was renamed, SQL Server (2012) apparently began using the correct underlying local user object in the Logins section of Security, so from appearances the database had a valid owner. But the name of the security account wasn't updated - the username of the owner was correctly localputer\localuser, but the SQL login name was still local-puter\localuser (based on the original name of the server). I renamed the login to localputer\localuser and everything went back to normal. I did not need to issue an ALTER AUTHORIZATION ON DATABASE.