After dropping a user in SSMS, what should I check? - sql-server

I did something stupid, I think I fixed it, and I would like to know if there is anything obvious I missed or should check. I'm a first-time DBA with formal training in only some operations within SSMS 2008 R2. I have a moderate-to-good understanding of SQL that is growing fairly rapidly, but I'm still making some mistakes.
Today I created a user and, on autopilot, accidentally hit Enter after giving her a password. I had meant to uncheck Enforce Password Expiration and User must change password at next login. The user was mapped to three databases.
The stupid thing I did was decide to delete the login on the server, delete the login from each of the three databases, and recreate it. Obviously this should not have been my solution. Obviously again, attempting to recreate the login returned an error: "The server principal FOO already exists..." This was error 15025.
So I found that, sure enough, there was a row in sys.database_principals for FOO. I used Drop User to get rid of it, then checked again and it was gone. Great, recreate the login. New error: "User, group, or role already exists in the current database..." Error number 15023. This attempt also added a new row for FOO back into sys.server_principals, which I dropped again.
So then I recreated the login without any mapping, which worked. Then I tried adding the mapping but got the same error 15023, which didn't surprise me. I ran Alter User FOO with login = FOO, which succeeded. Tried adding the mapping again: same error. I then tried adding each mapping one at a time and found that I could map two of the three databases just fine, but the third one was a problem.
Finally, I closed SSMS and reopened it, and for some reason I was now able to map the third database. Closing and reopening SSMS probably did nothing (I don't see why it would matter), but I don't know what caused the mapping to fail previously and work that last time.
I have tested the login and it works for all of the databases. I'm wondering if there is any cleanup I should perform or look into, or any concern I should have. I do have daily full and hourly transaction log backups, but as the databases are in use right now, I obviously hope not to have to use them.
So obviously I messed up and I won't be doing that again. Any places to check, concerns, or assurances would be appreciated.
Thanks.

You can check that the mapping is correct with the following query:
select name, suser_sname(sid) as [login]
from sys.database_principals
where name = 'foobar'
The suser_sname(sid) function takes a security identifier (sid) and maps it back to the server-level login. Run that in each database you're concerned about and ensure that the login is indeed what you expect it to be. If it's not, you already know how to fix it (the alter user … with login = … command is the right move there).
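If you want a broader sweep, here's a minimal sketch, assuming the 2008-era catalog views, that flags orphaned users, that is, database users whose sid no longer resolves to any server login. Run it in each of the three databases:

select name as orphaned_user
from sys.database_principals
where type in ('S', 'U')        -- SQL and Windows users
  and principal_id > 4          -- skip dbo, guest, sys, INFORMATION_SCHEMA
  and suser_sname(sid) is null; -- no login maps to this sid

An empty result in every database means there is nothing left to clean up at the principal level.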
As an aside, part of your story doesn't quite match up for me. You said that you were checking sys.database_principals to see whether the login existed or not. The thing is that logins are in sys.server_principals (whereas users for each individual database are in sys.database_principals). This may account for the trouble you had.

Related

Azure SQL Database - change user permissions on a read-only database for cross-database queries

We use Azure SQL Database, and therefore had to jump through some hoops to get cross-database queries set up. We achieved this by following this great article: https://techcommunity.microsoft.com/t5/azure-database-support-blog/cross-database-query-in-azure-sql-database/ba-p/369126 Things are working great for most of our databases.
The problem comes in for one of our databases, which is read-only. It's read-only because its content is synced from another Azure SQL server via the Geo-Replication function in Azure SQL Database. When attempting to run the query GRANT SELECT ON [RemoteTable] TO RemoteLogger as seen in the linked article, I of course get the error "Failed to update because the database is read-only."
I have been trying to come up with a workaround for this. It appears user permissions are one of the things that do NOT sync as part of the geo-replication, as I've created this user and granted the SELECT permission on the origin database, but it doesn't carry over.
Has anyone run into this or something similar and found a workaround/solution? Is it safe/feasible to temporarily set the database to read/write, update the permission, then put it back to read-only? I don't know if this is even possible; one colleague told me they think it will throw an error along the lines of "this database can't be set to read/write because it's syncing from another database..."
I figured out a workaround: create the remote connection to the database on the ORIGIN server instead. So simple, yet it escaped me until now. Everything is working great now.
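For anyone following along, here's a rough sketch of the elastic query objects from the linked article, pointed at the origin instead; the credential, server, database, and table names are all placeholders, and the only real difference is that LOCATION/DATABASE_NAME reference the origin (read-write) database rather than the geo-replicated secondary:

-- Assumes a database master key already exists.
CREATE DATABASE SCOPED CREDENTIAL RemoteCred
WITH IDENTITY = 'RemoteLogger', SECRET = '<password>';

CREATE EXTERNAL DATA SOURCE OriginSource
WITH (
    TYPE = RDBMS,
    LOCATION = 'origin-server.database.windows.net',
    DATABASE_NAME = 'OriginDb',
    CREDENTIAL = RemoteCred
);

-- Column list must mirror the remote table's schema.
CREATE EXTERNAL TABLE dbo.RemoteTable (
    Id int,
    LoggedAt datetime2
)
WITH (DATA_SOURCE = OriginSource);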

Oracle 11g max login fail attempts workaround

My problem starts with a situation where I can't really modify anything in the database, and my project specialist has limited time to help me. Here is the thing:
My user in the Oracle database has an older schema than the actual production one; my section works on a stable, older version. After every release we keep hitting the same issue: something (maybe on Jenkins, maybe not) automatically tries to update our database to a version we don't want. We tried to resolve it by changing the user's password, but that produced a new issue. The automated process keeps trying to log in, and when it gets a wrong-password error, it simply tries again. Oracle 11g has a limit of 10 failed login attempts, after which it locks the whole account, and that is the account our application server uses to connect to the database.
We cannot investigate this by turning on auditing of failed logins, because the audit data lives in database space and our DB guy has not allowed it: if we exceed the space limit (about 11 GB), the whole database will be dead, and our project is not important enough to risk that. Another complication is that the person who probably set up the offending scripts no longer works here.
Our workaround was to manually unlock the account so the application server could connect, then wait a few seconds for it to get locked again (the app server's existing connection stayed stable, though). It is stupid, you must admit, and the problem is that if the connection drops for any reason, the app server won't get it back automatically; we have to do it manually, which is not a solution. I have reconsidered it all again: my DB guy has no time to help me, and I have no tools or access rights to investigate where this script (or whatever else is causing the problem) is being executed. So I started thinking: what if we set the limit of failed login attempts to unlimited? Will this decrease the performance of the database? Will it generate any special new problems? Or maybe the solution would be to change PASSWORD_LOCK_TIME to a small value? (A sketch of both options is below.) I am asking for arguments I can bring to my DB guy to convince him to accept one of these new workarounds, so I can get back to working on code instead of on this database problem.
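For concreteness, here's a minimal sketch of what both options would look like, assuming the app account can be moved onto its own profile; the profile and user names are placeholders:

-- Option 1: never lock the account on failed logins.
CREATE PROFILE app_profile LIMIT
    FAILED_LOGIN_ATTEMPTS UNLIMITED;
ALTER USER app_user PROFILE app_profile;

-- Option 2: keep the limit but auto-unlock quickly.
-- PASSWORD_LOCK_TIME is in days; 1/1440 is one minute.
ALTER PROFILE app_profile LIMIT
    FAILED_LOGIN_ATTEMPTS 10
    PASSWORD_LOCK_TIME 1/1440;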

Is there ever a reason that a SQL job should be owned by anyone other than sa?

Put differently, should I always set the job owner to sa for a SQL job, even though it defaults to the user who created it?
Any jobs that are owned by a user will cease to run if that user is disabled or deleted. The jobs may also not run if there is an Active Directory problem at run time. Brent Ozar has an article about this on his website:
http://www.brentozar.com/blitz/jobs-owned-by-user-accounts/
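To see how exposed you are, a quick sketch against msdb; the job name in the last line is a placeholder:

-- List jobs whose owner is not sa.
SELECT j.name, SUSER_SNAME(j.owner_sid) AS owner
FROM msdb.dbo.sysjobs AS j
WHERE j.owner_sid <> SUSER_SID('sa');

-- Reassign a job to sa.
EXEC msdb.dbo.sp_update_job @job_name = N'SomeJob', @owner_login_name = N'sa';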
You're gonna have to bear with me, because I'm going by memory.
Looking at some old scripts, I have this code:
select @jobOwnerNameVeryImportantToSetCorrectly = 'someSqlAuthenticatonUser'
Now, in my scenario, I allowed a non-'sa' user to schedule and run the jobs. That's why I made the owner a non-'sa' user.
The question to answer, I think, is: "who runs the jobs?" If it is always 'sa', then it's not an issue.
But if you want a non-'sa' account to run a job, how is a less-privileged account going to run a job owned by the super-mack-daddy account?
My test would be:
1) Create a job and let 'sa' own it.
2) Create a temporary SQL-authentication account.
3) Log in to the database as that SQL-authentication account.
4) See if you can run the job.
My memory says the lesser account won't be able to. However, I dealt with jobs on SQL Server 2005, so even if I remember correctly for 2005, it may not be the same for 2008 or 2008 R2.
But I remember having issues with this, and thus my variable declaration:
select @jobOwnerNameVeryImportantToSetCorrectly = 'someSqlAuthenticatonUser'
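If you'd rather script that test than log in interactively, a rough sketch, with placeholder login and job names; note that a login generally needs one of the SQLAgent roles in msdb to start jobs at all, and SQLAgentUserRole only lets it start jobs it owns:

EXECUTE AS LOGIN = 'tempTestLogin';
EXEC msdb.dbo.sp_start_job @job_name = N'TestJob'; -- if memory serves, this fails with a permissions error
REVERT;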
I read Brent Ozar's article as well, and I'm in a similar situation where there are a lot of enabled jobs not owned by sa. From what I have researched, I haven't found any compelling reason NOT to change the ownership to 'sa', but there are two good reasons mentioned above why you should.
You don't have to worry about the jobs failing to run because of this change. The only two things I would be cautious about are:
1) jobs that you run as sa, and
2) enabling jobs that were disabled before you made this change, if you're coming into an unfamiliar environment.
Why someone would ever do those two things, I don't know, but you can always back up msdb and then test these changes on a dev or training server. Just note the jobs that you change in case something unexpected happens.

MS access database in VB.NET

I followed a tutorial on the internet to create my own database and successfully built a program on top of it. Then I created an Access .mdb file (another database) and simply changed the database the program connected to, to the one I had created.
That was the only change I made. But then it started showing me an error whenever I tried to update using
da.Update(ds, "Phone Book")
where da is the data adapter and ds is the data set.
The error was: "syntax error in INSERT INTO statement"
I had only changed the DB the program connects to; I did not change the code one bit.
EDIT: I forgot to mention that I searched for this on Google, and one thing I read was that the Access database might be read-only. I unchecked the read-only box, but I don't know whether that might still be the problem. Either way, I don't think there is a problem with the code.
EDIT: I just discovered that even if I change the table being referred to, it throws the same error.
It sounds like the first database probably used something like SQL Server Express. That's a completely different kind of database than Access, with a different provider, a different dialect of SQL, a different connection string, and so on. Why would you think you could change all that without breaking some of your code?
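To make the dialect point concrete, one plausible culprit (an assumption, since we can't see the generated SQL) is bracketing: Access's Jet dialect rejects unbracketed names that contain spaces or reserved words, so a generated statement like

INSERT INTO Phone Book (Name, Number) VALUES (?, ?)

fails with exactly "syntax error in INSERT INTO statement", while the bracketed form parses:

INSERT INTO [Phone Book] ([Name], [Number]) VALUES (?, ?)

(The column names are hypothetical; the table name comes from the da.Update call.)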

Service Broker not working after database restore

We have a working Service Broker set up on a server. We're in the process of moving to a new server, but I can't seem to get Service Broker set up on the new box.
I have done the obvious (to me) things like enabling the broker on the DB; dropping the route, services, contract, queues, and even message types and re-adding them; and setting ALTER QUEUE with STATUS ON.
SELECT * FROM sys.service_queues
gives me a list of the queues, including my own two, which show activation_enabled, receive_enabled, etc.
Needless to say, the queues aren't working. When I drop messages into them, nothing goes in and nothing comes out.
Any ideas? I'm sure there's something really obvious I've missed...
Just a shot in the dark:
ALTER AUTHORIZATION ON DATABASE::[restored db name] TO [sa];
The dbo of the restored database is the Windows SID that created the db on the original server. This may be a local SID (e.g. SERVERNAME\user) that has no meaning on the new server. This problem usually affects activated procedures and may affect message delivery, both issues arising because SQL Server is unable to impersonate 'dbo'. Changing dbo to a valid login SID (like sa) should fix it.
If this doesn't fix it, then you need to track down where the messages go. If they stay in sys.transmission_queue, check the transmission_status column. If they reach the target queue but no activation occurs, check the ERRORLOG. If they vanish, it means you are doing fire-and-forget (SEND followed immediately by END CONVERSATION) and are therefore deleting the error message that indicates the cause. The article Troubleshooting Dialogs contains more tips on where to look.
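For that first check, something along these lines in the sending database:

-- If messages are stuck locally, transmission_status usually says why.
SELECT to_service_name, enqueue_time, transmission_status
FROM sys.transmission_queue;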
And last, but not least, try using ssbdiagnose.exe.
In addition to Remus's answer, you might also want to check the Broker Enabled property of the restored DB. Whenever you restore a DB, the Broker Enabled property of the restored DB is set to False, and for this reason nothing will go into your queue. To address this:
Right-click the restored DB in SSMS, go to "Properties" > "Options", scroll down to the "Service Broker" group, and verify the value of the "Broker Enabled" property. If it is set to False, change it to True, and this should solve your problem.
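The same check and fix in T-SQL, if you prefer; the database name is a placeholder, and ENABLE_BROKER needs exclusive access to the database, hence the ROLLBACK IMMEDIATE option:

SELECT name, is_broker_enabled FROM sys.databases WHERE name = N'restoredDB';

ALTER DATABASE [restoredDB] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;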
