Calling sp_rename on a table kills database connection in Sybase

I'm trying to rename a table using the following syntax
sp_rename [oldname],[newname]
but any time I run this, I get the following [using Aqua Data Studio]:
Command was executed successfully
Warnings: --->
W (1): The SQL Server is terminating this process.
<---
[Executed: 16/08/10 11:11:10 AM] [Execution: 359ms]
Then the connection is dropped (I can't do anything else in the current query analyzer window; each window has a unique spid).
Do I need to be using master when I run these commands, or am I doing something else wrong?

You shouldn't be getting the behaviour you're seeing.
It should either raise an error (e.g. if you don't have permission) or work successfully.
I suspect something is going wrong under the covers.
Have you checked the errorlog for the ASE server? Typically these sorts of problems (connections being forcibly closed) will be accompanied by an entry in the errorlog with a little bit more information.
The error log will be on the host that runs the ASE server, and will probably be in the same location that ASE is installed into. Something like
/opt/sybase/ASE-12_5/install/errorlog_MYSERVER

Try to avoid using sp_rename: some references in the system tables keep the old name, and that may cause problems someday if you forget about the change.
I suggest:
select * into table_backup from [tableRecent]
go
select * into [tableNew] from table_backup
go
drop table [tableRecent] -- if you want to keep a backup, you may skip this drop
go
drop table table_backup -- if you want to keep a backup, you may skip this drop
go
For SELECT INTO to work, your database needs the "select into/bulkcopy/pllsort" option enabled.
If your data is huge, check the free space on that database.
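If the option is not already enabled, a DBA can turn it on; a rough sketch for ASE (the database name is a placeholder):
use master
go
sp_dboption mydb, 'select into/bulkcopy/pllsort', true
go
use mydb
go
checkpoint
go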
and enjoy :)

Related

SSMS query - script won't run if database does not exist

I'm trying to do something like:
"If it exists, use it. If not, create it."
"If it exists, delete it. If not, create it."
One place it's definitely choking is the USE command: if the database DOES NOT EXIST, it chokes on the USE EVEN THOUGH that command will not run.
Here's more explanation:
I have a SQL Server script where I create a database and then I use the database.
The script will not run
because the use database command is invalid
because the database does not exist
but it will exist after the first command executes
but it doesn't matter because it doesn't exist NOW so the script will not run.
How do I put code in there that tries to use a database that might not exist?
How do I put code in there that would cause an error if run directly but WILL NOT RUN unless conditions are appropriate?
Please see the attached images.
Here's the code so you don't have to type it...
-- SQL SERVER: We can't run this script because CFPT does not exist.
-- ME: But it WILL exist after the first command runs
-- SQL SERVER: That does not matter - at THIS point in the code... it does not exist... tough luck
-- CREATE THE DATABASE
create database CFPT
-- USE THE DATABASE
USE CFPT
use master
drop database CFPT
Second code snippet:
-- SQL SERVER: We can't run this script because CFPT does not exist.
select db_id('CFPT') -- this just lets us see what the IF statement is going to have to deal with
IF db_id('CFPT') is null
begin
print 'DESIRED DB DOES NOT EXIST'
return
end
else
begin
use CFPT -- this line of code makes the whole script just not run.
end;
-- doesn't want to work - chokes on the use databasename (when the database does not exist)
(EDIT 1 start ////////////////////////////////////////////////////////////////////////////////////)
A third image was added with this edit. The SECOND image shows that the if/then/else statement will not work. The third image shows that the database CFPT is not in the database list (left side of the image), that the select statement was run (top highlighted code), and the results of that select (bottom red circle).
How do I get the if/then/else statement to work? (Because the THEN will not run if the conditions are not favorable shall-we-say)
(for some reason the red wavy lines are not showing up - they should be but they aren't - hmmm)
(EDIT 1 end ////////////////////////////////////////////////////////////////////////////////////)
(EDIT 2 start ////////////////////////////////////////////////////////////////////////////////////)
In relation to this question, I'm trying to segregate commands that would normally fail but will not be executed unless conditions are just right (see 4th image below). I'm segregating some commands with an IF statement (IF 1=2), but SQL Server is going into that IF statement even though the condition is false. Why is that?
(EDIT 2 end ////////////////////////////////////////////////////////////////////////////////////)
Try this ...
-- CREATE THE DATABASE
create database CFPT
GO
-- USE THE DATABASE
USE CFPT
use master
drop database CFPT
The GO command is a batch terminator; it separates the command that creates the database from the command that uses it.
See https://msdn.microsoft.com/en-us/library/ms188037.aspx
and
What is the use of GO in SQL Server Management Studio & Transact SQL?
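For the IF/ELSE version in the question: SQL Server compiles an entire batch before executing any of it, so a USE of a nonexistent database fails at compile time even inside a branch that would never run. One workaround (a sketch, not the only way) is to defer the reference with dynamic SQL, which is only compiled when the EXEC runs:
IF db_id('CFPT') IS NULL
    print 'DESIRED DB DOES NOT EXIST'
ELSE
    EXEC ('USE CFPT; PRINT db_name();')
Note that a USE inside EXEC() only lasts for the scope of that EXEC'd batch.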

Unable to carry out operations (create trigger, drop table) for a table I created

I am using a SQL Server database with SQL Server Management Studio where I have existing tables. I add a few tables to it and it works just fine. However, for subsequent operations such as
Drop table XXX --OR
Create Trigger YYY on XXX
I run into an error that reads:
i) Cannot drop table XXX as it does not exist or you do not have permissions
ii) The object 'XXX' does not exist or is invalid for this operation
I tried to carry out an Insert operation, but that showed me a similar error (The object 'XXX' does not exist). I can see this may be a permissions issue since I am using an existing database; however, in that case, shouldn't I have been unable to create the tables as well?
Can anyone pinpoint what the problem is and how I can work around it?
What is your default schema?
SELECT name, default_schema_name
FROM sys.database_principals
WHERE type = 'S';
Try qualifying your references to the table as SchemaName.XXX and see if that helps.
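If that doesn't settle it, here is a quick sketch for finding which schema the table actually landed in ('XXX' stands in for your table name):
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables t
JOIN sys.schemas s ON s.schema_id = t.schema_id
WHERE t.name = 'XXX';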
Most of the time when I had similar situations, the tables had been created in a system database (master, tempdb, ...). Of course, it was my mistake.
So maybe try searching for the table in other databases?
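One way to run that search everywhere at once, sketched with the undocumented sp_MSforeachdb procedure ('XXX' is a placeholder for the table name):
EXEC sp_MSforeachdb 'IF EXISTS (SELECT 1 FROM [?].sys.tables WHERE name = ''XXX'') PRINT ''?''';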

Cannot find the object because it does not exist or you do not have permissions. Error in SQL Server

I have a database and a SQL script to add some fields to a table called "Products" in the database.
But when I execute this script, I get the following error:
Cannot find the object "Products" because it does not exist or you do not have permissions
Why is the error occurring and what should I do to resolve it?
I found a reason why this would happen. The user had the appropriate permissions, but the stored procedure included a TRUNCATE statement:
TRUNCATE TableName
Since TRUNCATE deletes rows with only minimal logging, you (apparently) need elevated permissions to execute a stored procedure that contains it. We changed the statement to:
DELETE FROM TableName
...and the error went away!
Are you sure that you are executing the script against the correct database? In SQL Server Management studio you can change the database you are running the query against in a drop-down box on one of the toolbars, or you can start your query with this:
USE SomeDatabase
It can also happen due to a typo in referencing a table such as [dbo.Product] instead of [dbo].[Product].
Does the user you're executing this script under even see that table??
select top 1 * from products
Do you get any output for this??
If yes: does this user have permission to modify the table, i.e. execute DDL scripts like ALTER TABLE etc.? Typically, regular users don't have these elevated permissions.
Look for any DDL operation in the script.
Maybe the user does not have access rights to run changes.
In my case it was SET IDENTITY_INSERT tblTableName ON
You can either add the db_ddladmin role for the whole database, or grant the needed permissions on just the table, to solve this issue (or change the script):
-- give the non-ddladmin user INSERT/SELECT as well as ALTER:
GRANT ALTER, INSERT, SELECT ON dbo.tblTableName TO user_name;
It could also be that you created "Products" in your login's schema and you are trying to reference it in a different schema (probably dbo).
Steps to resolve this issue:
1) Open Management Studio.
2) Locate the object in Object Explorer and identify the schema your object is under (it is the text before your object name). In the image below it is "dbo", and my object name is action status.
If you see it like "yourcompanydomain\yourloginid", then the object sits under your login's schema; you can modify the permissions on that specific schema, but not on any other schema.
You may refer to "Ownership and User-Schema Separation in SQL Server".
I've been trying to copy a table from PROD to DEV but got an error:
"Cannot find the object X because it does not exist or you do not have permissions."
However, the table did exist, and I was running as sa, so I did have permissions.
The problem was actually with constraints: I'd renamed the table on DEV to old_XXX months ago, but when I tried to copy the original one over from PROD, the default constraint names clashed.
The error message was misleading.
You can right-click the procedure, choose Properties, and see which permissions are granted to your login ID. You can then manually check the Execute and Alter permissions for the proc.
Or to script this it would be:
GRANT EXECUTE ON OBJECT::dbo.[PROCNAME]
TO [ServerInstance\user];
GRANT ALTER ON OBJECT::dbo.[PROCNAME]
TO [ServerInstance\user];
This could be a permission issue: the user needs at least ALTER permission to truncate a table.
Another option is to use DELETE FROM instead of TRUNCATE TABLE. DELETE is slower because it logs every row it removes, whereas TRUNCATE logs only the page deallocations.
The minimum permission required is ALTER on table_name. TRUNCATE TABLE permissions default to the table owner, members of the sysadmin fixed server role, and the db_owner and db_ddladmin fixed database roles, and are not transferable. However, you can incorporate the TRUNCATE TABLE statement within a module, such as a stored procedure, and grant appropriate permissions to the module using the EXECUTE AS clause.
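A minimal sketch of that module approach, with hypothetical table and user names; the limited user only needs EXECUTE on the wrapper, not ALTER on the table:
CREATE PROCEDURE dbo.TruncateMyTable
WITH EXECUTE AS OWNER
AS
    TRUNCATE TABLE dbo.MyTable;
GO
GRANT EXECUTE ON dbo.TruncateMyTable TO limited_user;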
Sharing my case; hope it helps.
In my situation, inside MY_PROJ.Database -> MY_PROJ.Database.sqlproj, I had to put this:
<Build Include="dbo\Tables\MyTableGeneratingScript.sql" />
In my case I was running under a different user than the one I was expecting.
My code passed 'DRIVER={SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=false;User Id=XXX;Password=YYY' as the connection string to pypyodbc.connect(), but it ended up connecting with the credentials of the Windows user that ran the script instead of the User Id= from the connection string.
(I verified this using SQL Server Profiler, and by putting an invalid uid/password combination in the connection string, which did not produce the error I expected.)
I decided not to dig into this further, since switching to this better way of connecting fixed the issue:
conn = pypyodbc.connect(driver='{SQL Server}', server='servername',
database='dbname', uid='userName', pwd='Password')
In my case the SQL Server version on my localhost was higher than the one on the production server, so some new options were added to the script generated from localhost. This caused errors in creating the table in the first place.
Since the creation of the table failed, subsequent queries on the "NON-EXISTING" table also failed.
Luckily, among the long list of SQL errors, I found that "OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF" was the new option in the script causing my issue. I did a search and replace and the error went away.
Hope it helps someone.
The TRUNCATE statement was my first problem, glad to find the solution here. But I was using SSIS and trying to load data from another database, and it failed with the same error on any table that used IDENTITY to create an auto-incrementing ID. If I was scripting it myself I'd first need to use the command SET IDENTITY_INSERT tablename ON, and then SET IDENTITY_INSERT tablename OFF when the table update was done. But this requires ALTER permissions on the table, which I do not have. Hence the error message in SSIS on the table load (even though the previous step had just deleted all the data out of the table.)
You can also receive this error when you use an ORM like GORM (https://gorm.io/) in Go, for example, when you create a struct and accidentally pass the ID (primary key) even though it is inserted automatically.
Feature-rich IDEs like Visual Studio Code make this mistake easy:
if tx := db.Create(&myStruct{
	Ts: time.Now(),
	ID: 42, // accidentally setting the auto-generated primary key
}); tx.Error != nil {
	t.Fatal(tx.Error)
}
You can still use auto-completion in Visual Studio Code, but delete the entry for your model's primary key:
if tx := db.Create(&myStruct{
	Ts: time.Now(),
}); tx.Error != nil {
	t.Fatal(tx.Error)
}

SQL Server COMPILE locks?

SQL Server 2000 here.
I'm trying to be an interim DBA, but don't know much about the mechanics of a database server, so I'm getting a little stuck. There's a client process that hits three views simultaneously. These three views query a remote server to pull back data.
What it looks like is that one of these queries will work, but the other two fail (client process says it times out, so I'm guessing a lock can do that). The querying process has a lock that sticks around until the SQL process is restarted (I got gutsy and tried to kill the spid once, but it wouldn't let go). Any queries to this database after the lock hang, and blame the first process for blocking it.
The process reports these locks... (apologies for the formatting, the preview functionality shows it as fully lined up).
spid dbid ObjId IndId Type Resource Mode Status
53 17 0 0 DB S GRANT
53 17 1445580188 0 TAB Sch-S GRANT
53 17 1445580188 0 TAB [COMPILE] X GRANT
I can't analyze that too well. Object 1445580188 is sp_bindefault, a system stored procedure in master. What's it hanging on to an exclusive lock for?
Here's the view code. To protect the proprietary, I only changed the names (they stay consistent with aliases and whatnot) and tried to keep everything else exactly the same.
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER OFF
GO
ALTER view [dbo].[theView]
as
select
a.[column1] column_1
,b.[column2] column_2
,[column3]
,[column4]
,[column5]
,[column6]
,[column7]
,[column8]
,[column9]
,[column10]
,p.[column11]
,p.[column12]
FROM
[remoteServer].db1.dbo.[tableP] p
join [remoteServer].db2.dbo.tableA a on p.id2 = a.id
join [remoteServer].db2.dbo.tableB b on p.id = b.p_id
WHERE
isnumeric(b.code) = 1
GO
SET ANSI_NULLS OFF
GO
SET QUOTED_IDENTIFIER ON
GO
Take a look at this link. Are you sure it's views that are blocking and not stored procedures? To find out, run the query below with the ObjId from your table above. There are things you can do to mitigate stored procedure recompiles. The biggest one is to avoid naming your stored procedures with an "sp_" prefix (see the article, page 10). Also avoid using if/else branches in the code; use WHERE clauses with CASE statements instead. I hope this helps.
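For illustration, here is a sketch of the WHERE-clause-with-CASE pattern (all names hypothetical); a single statement replaces two IF/ELSE branches, which the article suggests helps avoid recompiles:
CREATE PROCEDURE dbo.GetOrders @status int = NULL
AS
SELECT order_id, order_date
FROM dbo.Orders
WHERE status = CASE WHEN @status IS NULL THEN status ELSE @status END
GO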
[Edit]:
I believe sp_binddefault/rule is used in conjunction with user defined types (UDT). Does your view make reference to any UDT's?
SELECT * FROM sysobjects WHERE id = 1445580188 -- SQL Server 2000 catalog; on 2005+ use sys.objects and object_id
Object 1445580188 is sp_bindefault in the master database, no? Also, it shows resource = "TAB" = table.
USE master
SELECT OBJECT_NAME(1445580188), OBJECT_ID('sp_bindefault')
USE mydb
SELECT OBJECT_NAME(1445580188)
If the 2nd query returns NULL, then the object is a work table.
I'm guessing it's a work table being generated to deal with the results locally.
The JOIN will happen locally and all data must be pulled across.
Now, I can't shed light on the compile lock: the view should be compiled already. This is complicated by the remote server access and my experience of compile locks is all related to stored procs.

How to log in T-SQL

I'm using ADO.NET to access SQL Server 2005 and would like to be able to log from inside the T-SQL stored procedures that I'm calling. Is that somehow possible?
I'm unable to see output from the PRINT statement when using ADO.NET, and since I want to use logging just for debugging, the ideal solution would be to emit messages to DebugView from SysInternals.
I think writing to a log table would be my preference.
Alternatively, as you are using 2005, you could write a simple SQLCLR procedure to wrap around the EventLog.
Or you could use xp_logevent if you wanted to write to the SQL log.
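For what it's worth, a minimal sketch of the log-table approach (all names hypothetical):
create table dbo.DebugLog (
    id        int identity(1,1) primary key,
    logged_at datetime not null default getdate(),
    message   nvarchar(400) not null
)
go
create procedure dbo.LogDebug @message nvarchar(400)
as
    insert into dbo.DebugLog (message) values (@message)
go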
I solved this by writing a SQLCLR-procedure as Eric Z Beard suggested. The assembly must be signed with a strong name key file.
using System;
using System.Data;
using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;
public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static int Debug(string s)
    {
        System.Diagnostics.Debug.WriteLine(s);
        return 0;
    }
}
Created a key and a login:
USE [master]
CREATE ASYMMETRIC KEY DebugProcKey FROM EXECUTABLE FILE =
'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll'
CREATE LOGIN DebugProcLogin FROM ASYMMETRIC KEY DebugProcKey
GRANT UNSAFE ASSEMBLY TO DebugProcLogin
Imported it into SQL Server:
USE [mydb]
CREATE ASSEMBLY SqlServerProject1 FROM
'C:\..\SqlServerProject1\bin\Debug\SqlServerProject1.dll'
WITH PERMISSION_SET = unsafe
CREATE FUNCTION dbo.Debug( @message as nvarchar(200) )
RETURNS int
AS EXTERNAL NAME SqlServerProject1.[StoredProcedures].Debug
Then I was able to log in T-SQL procedures using
exec Debug @message = 'Hello World'
You can either log to a table, by simply inserting a new row, or you can implement a CLR stored procedure to write to a file.
Be careful with writing to a table, because if the action happens in a transaction and the transaction gets rolled back, your log entry will disappear.
Logging from inside a SQL sproc would be better done to the database itself. T-SQL can write to files but it's not really designed for it.
There's the PRINT command, but I prefer logging into a table so you can query it.
You can write rows to a log table from within a stored procedure. As others have indicated, you could go out of your way to write to some text file or other log with CLR or xp_logevent, but it seems like you need more volume than would be practical for such uses.
The tough cases occur (and it's these that you really need your log for) when transactions fail. Since any logging that occurs during these transactions will be rolled back along with the transaction that they are part of, it is best to have a logging API that your clients can use to log errors. This can be a simple DAL that either logs to the same database, or to a shared one.
For what it's worth, I've found that when I don't assign an InfoMessage handler to my SqlConnection:
sqlConnection.InfoMessage += new SqlInfoMessageEventHandler(MySqlConnectionInfoMessageHandler);
where the signature of the InfoMessageHandler looks like this:
MySqlConnectionInfoMessageHandler(object sender, SqlInfoMessageEventArgs e)
then my PRINT statements in my Stored Procs do not appear in DbgView.
You could use output variables for passing back messages, but that relies on the proc executing without errors.
create procedure usp_LoggableProc
    @log varchar(max) OUTPUT
as
-- T-SQL statement here ...
select @log = @log + 'X is foo'
And then in your ADO code somewhere:
string log = (string)cmd.Parameters["@log"].Value;
You could use raiserror to create your own custom errors with the information that you require and that will be available to you through the usual SqlException Errors collection in your ADO code:
RAISERROR('X is Foo', 10, 1)
Hmmm, but yeah: for just debugging, and in your situation, I'd simply insert varchar messages into an error table like the others have suggested, and select * from it when you're debugging.
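One aside on RAISERROR: at severity 10 or below you can add WITH NOWAIT to push the message to the client immediately instead of waiting for the output buffer to fill, which is handy when watching a long-running proc:
RAISERROR('X is Foo', 0, 1) WITH NOWAIT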
You may want to check Log4TSQL. It provides Database-Logging for Stored Procedures and Triggers in SQL Server 2005 - 2008. You have the possibility to set separate, independent log-levels on a per Procedure/Trigger basis.
Use cmd commands with xp_cmdshell.
I found this while searching for an answer to this question.
https://www.databasejournal.com/features/mssql/article.php/1467601/A-general-logging-t-sql-process-to-write-to-txt-files.htm
select @cmdtxt = 'echo ' + @logEntry + ' >> drive:\path\filename.txt'
exec master..xp_cmdshell @cmdtxt
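Bear in mind that xp_cmdshell is disabled by default on SQL Server 2005, so a sysadmin would first need to enable it, along the lines of:
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE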
I've been searching for a way to do this, as I am trying to debug some complicated, chained, stored procedures, all that are called by an external API, and which operate in the context of a transaction.
I'd been writing diagnostic messages into a logging file, but if the transaction rolls back, the new log entries disappear with the rollback. I found a way! And it works pretty well. And it has already saved me many, many hours of debugging time.
1) Create a linked server to the same SQL instance, using the login's security context. In my case, the simplest method was to use the localhost loopback address, 127.0.0.1.
2) Set the linked server to enable RPC, and to NOT "Enable Promotion of Distributed Transactions". This means that calls through that server will take place outside of your transaction context.
3) In your logging procedure (I have an example excerpted below), write to the log table through the loopback linked server if you are in a transaction. You can write to it the usual way if you are not. Writing through the linked server is considerably slower than direct DML.
Voila! My in-process logging survives the rollback, and I can find out what's happening internally when things are going south.
I can't claim credit for thinking of this--I found the approach after some time with Google, but I'm so pleased with the result I felt like I had to share it.
USE TX
GO
CREATE PROCEDURE dbo.LogError(@errorSource Varchar(32), @msg Varchar(400))
AS BEGIN
    SET NOCOUNT ON
    IF @@TRANCOUNT > 0
        EXEC [127.0.0.1].TX.dbo.LogError @errorSource, @msg
    ELSE
        INSERT INTO TX.dbo.ErrorLog(source_module, message)
        SELECT @errorSource, @msg
END
GO
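A quick smoke test of the idea, assuming the TX database and ErrorLog table from the example: the row written through the loopback server should survive the rollback.
BEGIN TRAN
EXEC TX.dbo.LogError 'TestModule', 'written inside a doomed transaction'
ROLLBACK
SELECT * FROM TX.dbo.ErrorLog -- the logged row is still there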
