I accidentally deleted my domain logon from SQL Server LocalDB. It was the only logon on the server.
How can I get access back to my instance? Every time I try connecting I get the message Logon failed for domain\user.
I tried reinstalling and repairing with no luck.
Did you have any special configuration of your instance that would be hard to recreate? Or maybe user data in the master database (if so, please don't ever do that again)?
If not then you can just delete the instance and recreate it. All user databases will be preserved, although you will need to attach them again.
Assuming we're talking about the automatic instance (v11.0), just do:
SqlLocalDB delete v11.0
It will be recreated the next time you try connecting to it. It can take 10 seconds or more, so the first connection could timeout.
After that, use the sp_attach_db stored procedure to attach your user databases again.
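For example, a minimal sketch of re-attaching one database (the database name and file paths are placeholders for your own; on newer versions CREATE DATABASE ... FOR ATTACH does the same job):

EXEC sp_attach_db
    @dbname = N'MyUserDB',
    @filename1 = N'C:\Data\MyUserDB.mdf',
    @filename2 = N'C:\Data\MyUserDB_log.ldf';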
Up until a while ago I used SQL2k5 exclusively, but recently got updated to 2008 R2. Apart from the obvious changes in Mgmt Studio, there is one quirk that's starting to be very annoying: each time a connection is dropped, I have to switch back to the 'lost' database again, as the connection automagically reconnects to the initial database.
In SQL 2005 I simply had to press F5 twice and the first time it would give me an error saying the connection was lost, the second time it would reconnect to the database it was on before the connection got dropped and then execute whatever sql-commands it had. It did not really matter how I had gotten to that database, be it by using the dropdown-box on top, or a USE statement...
In SQL 2008 (R2) I now press F5 and mgmt studio will "eat" the lost connection silently and instead immediately reconnect to the server and execute the code on the default database or the database that I 'forced' while connecting using the [>> options] button/tabs
This happens quite a lot as I often have one tab open that kills all connections and restores the database, and another (series of) tab(s) with changed procedures, test-cases etc...
Is there some (hidden) configuration to (re)set this behaviour ???
I know I can try to add USE statements on top everywhere, or 'force' every connection directly to its 'target' database, but bye bye ad-hoc queries then =(
PS: doing some extra searching, I'm wondering if this isn't due to the "fix" mentioned here [connect.com]
PS: as a side note, after reconnecting, the SPID at the bottom of the screen isn't updated properly either; as a result I've already killed the wrong connection because I was relying on stale information ... yay for progress =( (**)
Anyone with better google-fu than me ? Or closer connections to Microsoft ? =)
Thx.
(**: man, I so miss Query Analyzer =)
If you register the instance you're connecting to in management studio, you can go to "Connection Properties" on the registered instance and set "Connect to database" to be the main database you use on that instance. When you are disconnected and it auto-reconnects, it will use that default database again.
Limitations:
You can only set this to one database per instance, by design.
You need to connect using that registered instance to get it to work (right click on it, then 'New Query'). If you just do a new connection without going to the Registered Servers pane, it won't apply the properties to the connection.
Certainly not a perfect solution, but perhaps better than nothing.
PS: the Connect bug for the incorrect SPID is here. It looks like there is a promised fix in Denali.
Note: rereading, I see you're already setting the database in the advanced options for your connection at times. This is no more helpful than that, of course; it just saves you from having to do it each time.
I'm getting this error when running an SSIS package through SQL Agent
Failed to acquire connection "ORACLE ADO.NET". Connection may not be configured correctly or you may not have the right permissions on this connection.
When I log on as the SQL Agent User and run the ssis package directly it is fine. When I then execute it through the SQL agent job, it fails.
I've read around extensively on this topic, and it seems a lot of the advice concerns how you are logged in, configuring proxy accounts, etc., none of which has been helpful.
I am logging onto an Oracle database with an ADO.NET connection. The connection string is as follows (data source, user id and password have been changed):
Data Source=DATASOURCE;User ID=userid;Password=password;Persist Security Info=True;Unicode=True;
I'm loading this from a registry setting using package configuration. To check that I am getting the correct string, I am writing it into a temporary log table. I am definitely getting the string I need from the correct registry setting.
I've tested the Oracle login credentials through PL/SQL Developer, and it lets me log in just fine.
As far as I can tell, since I'm using an explicit user name and password for the Oracle connection, it just shouldn't matter who the SSIS package is run as. The only point of failure that I can see would be the reading of the information from the registry, but that seems fine.
I'm really quite baffled, I must confess, and would appreciate any help some of the splendid experts here can offer.
Many thanks,
James
Ok, tracked this one down after quite a lot of pain.
It was working fine in one environment but not another, so I fired up Process Monitor (http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx) and ran a package through the SQL Agent job, comparing which system entities were hit in each environment.
On the failing environment, at the point of the bulk transfer operation, the package attempted to get the Oracle 11 client DLL, and then hung.
I knew that this was installed and, moreover, that the DLL path was a system environment setting. Further investigation revealed that the server had not been rebooted since the Oracle client install, and the SQL Server Agent process had not been recycled.
Yes, can you believe it, the old helpdesk fix "Can you reboot your computer?" worked.
Sigh!
We had issues at a client with running packages stored on our SQL Server instance that connect to Oracle. The workaround we found was to change the package's ProtectionLevel property to "DontSaveSensitive" and, for security purposes, to encrypt the username and password in the package configuration, decrypting them with a UDF in SQL Server. Of course, before you try the whole encryption part, I would recommend putting the username and password in the package configuration without encrypting the values, to see whether changing the protection level setting is the solution to your specific problem. I hope this helps.
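The answer doesn't say which encryption mechanism was used; as one possible sketch, SQL Server's ENCRYPTBYPASSPHRASE / DECRYPTBYPASSPHRASE pair could hold the credential. The table name, passphrase and values below are purely illustrative:

-- Hypothetical table to hold the encrypted values (not part of the original answer)
CREATE TABLE dbo.PackageSecrets (SecretName sysname PRIMARY KEY, SecretValue varbinary(8000));

-- Store the encrypted credential
INSERT INTO dbo.PackageSecrets (SecretName, SecretValue)
VALUES (N'OracleUserId', ENCRYPTBYPASSPHRASE(N'MyPassphrase', N'userid'));

-- Decrypt it when building the value used by the package configuration
SELECT CAST(DECRYPTBYPASSPHRASE(N'MyPassphrase', SecretValue) AS nvarchar(128)) AS OracleUserId
FROM dbo.PackageSecrets
WHERE SecretName = N'OracleUserId';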
I was getting this error when the tnsnames.ora file did not have a valid entry for the environment.
I'm trying to perform some offline maintenance (dev database restore from live backup) on my dev database, but the 'Take Offline' command via SQL Server Management Studio is performing extremely slowly - on the order of 30 minutes plus now. I am just about at my wits' end and I can't seem to find any references online as to what might be causing the speed problem, or how to fix it.
Some sites have suggested that open connections to the database cause this slowdown, but the only application that uses this database is my dev machine's IIS instance, and the service is stopped - there are no more open connections.
What could be causing this slowdown, and what can I do to speed it up?
After some additional searching (new search terms inspired by gbn's answer and u07ch's comment on KMike's answer) I found this, which completed successfully in 2 seconds:
ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
(Update)
When this still fails with the following error, you can fix it as inspired by this blog post:
ALTER DATABASE failed because a lock could not be placed on database 'dbname'. Try again later.
You can run the following command to find out who is keeping a lock on your database:
EXEC sp_who2
And use whatever SPID you find in the following command:
KILL <SPID>
Then run the ALTER DATABASE command again. It should now work.
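If several sessions are holding locks, a small script can generate the KILL statements for you; this is just a sketch based on sys.sysprocesses, with 'dbname' as a placeholder:

DECLARE @kill nvarchar(max);
SET @kill = N'';
-- Build one KILL statement per session connected to the database (excluding our own)
SELECT @kill = @kill + N'KILL ' + CAST(spid AS nvarchar(10)) + N'; '
FROM sys.sysprocesses
WHERE dbid = DB_ID(N'dbname') AND spid <> @@SPID;
EXEC sys.sp_executesql @kill;
-- Then take the database offline
ALTER DATABASE [dbname] SET OFFLINE WITH ROLLBACK IMMEDIATE;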
There is most likely a connection to the DB from somewhere (a rare example: an asynchronous statistics update).
To find connections, use sys.sysprocesses
USE master
SELECT * FROM sys.sysprocesses WHERE dbid = DB_ID('MyDB')
To force disconnections, use ROLLBACK IMMEDIATE
USE master
ALTER DATABASE MyDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
Do you have any open SQL Server Management Studio windows that are connected to this DB?
Put it in single user mode, and then try again.
In my case, after waiting a long time for it to finish, I lost patience and simply closed Management Studio. Before exiting, it showed the success message that the DB was offline. The files were then available to rename.
Execute the stored procedure
sp_who2
This will allow you to see if there are any blocking locks; killing them should fix it.
In SSMS: right-click the SQL Server icon and choose Activity Monitor. Open Processes, find the processes connected to the database, right-click each one, and choose Kill.
In my case I had looked at some tables in the DB prior to executing this action. My user account was holding an active connection to this DB in SSMS. Once I disconnected from the server in SSMS (leaving the 'Take database offline' dialog box open) the operation succeeded.
Any time you run into this type of thing, you should think of your transaction log. The ALTER DATABASE statement with ROLLBACK IMMEDIATE indicates this to be the case. Check this out: http://msdn.microsoft.com/en-us/library/ms189085.aspx
Bone up on checkpoints, etc. You need to decide whether the transactions in your log are worth saving, and then pick the recovery model for your db accordingly. There's really no reason for you to have to wait, but also no reason to lose data either - you can have both.
Closing the instance of SSMS (SQL Server Management Studio) from which the request was made solved the problem for me.
To get around this I stopped the website that was connected to the db in IIS and immediately the 'frozen' 'take db offline' panel became unfrozen.
Also, close any query windows you may have open that are connected to the database in question ;)
I tried all the suggestions below and nothing worked.
1. EXEC sp_who
   KILL <SPID>
2. ALTER DATABASE <dbname> SET SINGLE_USER WITH ROLLBACK IMMEDIATE
3. ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE
Result: Both of the above ALTER commands were also stuck.
4. Right-click the database -> Properties -> Options
Set Database Read-Only to True
Click 'Yes' at the dialog warning that SQL Server will close all connections to the database.
Result: The window was stuck on executing.
As a last resort, I restarted the SQL Server service from Configuration Manager and then ran ALTER DATABASE <dbname> SET OFFLINE WITH ROLLBACK IMMEDIATE. It worked like a charm.
In SSMS, set the database to read-only then back. The connections will be closed, which frees up the locks.
In my case there was a website that had open connections to the database. This method was easy enough:
Right-click the database -> Properties -> Options
Set Database Read-Only to True
Click 'Yes' at the dialog warning SQL Server will close all connections to the database.
Re-open Options and turn read-only back off
Now try renaming the database or taking it offline.
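If you'd rather script it than click through the dialogs, the equivalent T-SQL is roughly the following (MyDB is a placeholder for your database name):

ALTER DATABASE [MyDB] SET READ_ONLY WITH ROLLBACK IMMEDIATE;   -- closes the existing connections
ALTER DATABASE [MyDB] SET READ_WRITE WITH ROLLBACK IMMEDIATE;  -- turn read-only back off
ALTER DATABASE [MyDB] SET OFFLINE;                             -- the offline (or rename) should now succeed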
For me, I just had to go into the Job Activity Monitor and stop two things that were processing. Then it went offline immediately. In my case though I knew what those 2 processes were and that it was ok to stop them.
In my case, the database was related to an old Sharepoint install. Stopping and disabling related services in the server manager "unhung" the take offline action, which had been running for 40 minutes, and it completed immediately.
You may wish to check if any services are currently utilizing the database.
Next time, from the Take Offline dialog, remember to check the 'Drop All Active Connections' checkbox. I was also on SQL_EXPRESS on local machine with no connections, but this slowdown happened for me unless I checked that checkbox.
SSMS, especially if running it from your own desktop remotely and not directly within the database server, can be a reason for the long delays in detaching a database. For some reason SSMS may not be able to disconnect any existing "connections" to the database.
We found the process was almost instant when we did it directly from the database server itself. And in fact it killed the attempt from my own desktop SSMS session, and it "took over" and detached the database.
Nothing else suggested here worked.
Thanks
In my case I stopped the Tomcat server, and the DB immediately went offline.
I have a connection to a Microsoft SQL Server and want to change the connection's authenticated user. Is it possible to do so without closing and reopening the connection?
The ideal would be something like Oracle's SET ROLE feature.
I'd love if the solution also works for SQL Server 2000.
You might want to take a look at application roles (sp_setapprole), but you must be aware of the consequences: once the context is changed (i.e. the role is set), it can't be reverted on SQL Server 2000 (it is possible with 2005). The result is that the connection is effectively useless when closed in your code, i.e. it can't be returned to the pool and reused, which leads to scalability issues.
Otherwise it is not possible to change the security context once it has been established.
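For reference, a minimal sketch of the app-role approach (role name and password are placeholders); the cookie/revert part only works on SQL Server 2005 and later, which is exactly the limitation described above:

-- On SQL Server 2005+ you can capture a cookie so the context can be reverted later
-- (on SQL Server 2000 you would call sp_setapprole without the cookie, and there is no way back)
DECLARE @cookie varbinary(8000);
EXEC sp_setapprole 'MyAppRole', 'MyAppRolePassword', @fCreateCookie = 1, @cookie = @cookie OUTPUT;
-- ... statements here run with the application role's permissions ...
EXEC sp_unsetapprole @cookie;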
As far as I know, SQL Server is very strict that the account passed at connect time is the authenticated context. Take Enterprise Manager and other tools, for example: you must disconnect and re-connect to change users.
Plus, looking at the way connection pooling works, the connection itself is cached per user, so if you changed the executing party partway through it would cause major security problems.
So the short answer, no, it isn't possible as far as I know.
Depending on what you are doing, EXECUTE AS may help you out here. This allows you to execute SQL in the context of another user in a similar fashion to the RUN AS available from the Windows shell. The profiler and audit tracing in SQL Server allows you to see both the original user and which context a statement is run under.
EXECUTE AS USER = 'newuser';
SELECT ...;  -- SQL code run under the context of newuser
REVERT;
Note: This is not available under SQL Server 2000 and was added because of requests like yours.
We've been having some issues with a SharePoint instance in a test environment. Thankfully this is not production ;) The problems started when the disk with the SQL Server databases and search index ran out of space. Following this, the search service would not run and search settings in the SSP were not accessible. Reclaiming the disk space did not resolve the issue, so rather than restoring the VM, we decided to try to fix it.
We created a new SSP and changed the association of all services to the new SSP. The old SSP and its databases were then deleted. Search results for PDF files are no longer appearing, but the search works fine otherwise. MySites also works OK.
Following the implementation of this change, these problems occur:
1) An audit failure message started appearing in the application event log, for 'DOMAIN\SPMOSSSvc' which is the MOSS farm account.
Event Type: Failure Audit
Event Source: MSSQLSERVER
Event Category: (4)
Event ID: 18456
Date: 8/5/2008
Time: 3:55:19 PM
User: DOMAIN\SPMOSSSvc
Computer: dastest01
Description:
Login failed for user 'DOMAIN\SPMOSSSvc'. [CLIENT: <local machine>]
2) SQL Server Profiler is showing queries from SharePoint that reference the old (deleted) SSP database.
So...
Where would these references to DOMAIN\SPMOSSSvc and the old SSP database exist?
Is there a way to 'completely' remove the SSP from the server and re-create it? The option to delete was not available (greyed out) when a single SSP is in place.
As Daniel McPherson said, this is caused when SSPs are deleted but the associated jobs are not, and they attempt to communicate with the deleted database. If the SSP database has been deleted or a problem occurred when deleting an SSP, the job may not be deleted. When the job attempts to run, it will fail since the database no longer exists.
Follow the steps Daniel mentioned:
1. Go to SQL Server Management Studio
2. Disable the job called SSPNAME_JobDeleteExpiredSessions: right-click it and choose Disable Job.
I suspect these are related to the SQL Server Agent trying to login to a database that no longer exists.
To clear it up you need to:
1. Go to SQL Server Management Studio
2. Disable the job called <database name>_job_deleteExpiredSessions
If that works, then you should be all clear to delete it.
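If you prefer to disable the job from T-SQL rather than through the SSMS UI, something like this should do it (substitute the actual job name from your instance):

EXEC msdb.dbo.sp_update_job
    @job_name = N'<database name>_job_deleteExpiredSessions',
    @enabled = 0;  -- disable the job without deleting it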
Have you tried removing the SSP using the command line? I found this worked once when we had broken an SSP and just wanted to get rid of it.
The command is:
stsadm.exe -o deletessp -title <sspname> [-deletedatabases]
The -deletedatabases switch is optional.
Also, check in Central Administration under Job Definitions and Job Schedules to ensure no SSP-related jobs are still running.