I have a stored procedure that relies on a query to a linked server.
This stored procedure is roughly structured as follows:
-- Create local table var to stop query from needing round trips to linked server
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates(eid)
SELECT eid FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
    -- Many things, including
    [status] = CASE
        WHEN eid IN (SELECT eid FROM @duplicates)
        THEN 'String'
        ELSE es
    END
FROM [my_server].[another_database].[dbo].[view]
-- This view obscures sensitive information and shows only the data that I have permission to see
-- Many other things
The query itself is much more complex, but the key idea is building this temporary table from a linked server (because it takes the query 5 minutes to run if I don't, versus 3 seconds if I do).
I've recently had an issue where I ended up with updates to my table that failed to get checked against the linked server for duplicate information.
The logical chain of events is this:
1) Get all of the data from the original view.
2) The original view contains maybe 3000 records, of which maybe 30 are duplicates of the entity in question, but with 1 field having a different value.
3) I then have to grab data from a different server to know which of the duplicates is the correct one.
4) When the stored procedure runs, it updates each record.
5) ERROR STEP - when the stored procedure hits a duplicate record, it updates my_table again, so es gets changed multiple times in a row.
The temp table was added after the fact when we realized incorrect es values were being introduced to my_table.
my_database does not contain the data needed to determine which is the correct tuple, hence the requirement for the linked server.
As far as I can tell, we had a temporary network interruption or a connection timeout that stopped my_server from getting the response back from linked_server, and it just passed an empty table to the rest of the procedure.
So, my question is - how can I guard against this happening?
I can't just check whether the table is empty, because it could legitimately be empty. I need to know definitively whether that initial SELECT from linked_server failed, timed out, or intentionally returned nothing.
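One approach (a hedged sketch reusing the names from the question, not the actual procedure) is to wrap the linked-server INSERT in TRY/CATCH and abort the procedure if the remote query throws, so a failed or timed-out round trip can never silently fall through to the UPDATE, while a genuinely empty result still passes:

DECLARE @duplicates TABLE (eid NVARCHAR(6));

BEGIN TRY
    INSERT INTO @duplicates (eid)
    SELECT eid
    FROM [linked_server].[linked_database].[dbo].[linked_table]
    WHERE es = 'String';
    -- Reaching this point means the remote SELECT ran; zero rows is a legitimate result.
END TRY
BEGIN CATCH
    -- The remote query failed (network, timeout, permissions, ...): re-raise and stop
    -- so the UPDATE below never runs against an accidentally empty table.
    DECLARE @msg NVARCHAR(2048);
    SET @msg = ERROR_MESSAGE();
    RAISERROR('Linked server query failed: %s', 16, 1, @msg);
    RETURN;
END CATCH

-- ... the UPDATE from the question follows here ...

One caveat: a few severe provider errors abort the whole batch before the CATCH block runs, so it's worth testing this against a deliberately broken linked server name to confirm the failure path behaves the way you expect.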
Without knowing the definition of the table you're querying, you could run into an issue where your data is too long and you get a truncation error on your table.
Better to make sure and substring it:
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates(eid)
SELECT SUBSTRING(eid,1,6) FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
    -- Many things, including
    [status] = CASE
        WHEN eid IN (SELECT eid FROM @duplicates)
        THEN 'String'
        ELSE es
    END
FROM [my_server].[another_database].[dbo].[view]
I had a similar problem where I needed to move data between servers and could not use a network connection, so I ended up doing BCP out and BCP in. This is fast and clean, and it takes away the complexity of user authentication, drivers, and trust domains. It's also repeatable and can be used for incremental loading.
I'm working on a legacy system using SQL Server in 2000 compatibility mode. There's a stored procedure that selects from a query into a virtual table.
When I run the query, I get the following error:
Error converting data type varchar to numeric
which initially tells me that something stringy is trying to make its way into a numeric column.
To debug, I created the virtual table as a physical table and started eliminating each column.
The culprit column is called accnum; it stores a bank account number, has a source data type of varchar(21), and is being inserted into a numeric(16,0) column, which obviously could cause issues.
So I made the accnum column varchar(21) as well in the physical table I created and it imports 100%. I also added an additional column called accnum2 and made it numeric(16,0).
After the data is imported, I proceeded to update accnum2 to the value of accnum. Lo and behold, it updates without an error, yet it wouldn't work with an insert into...select query.
I have to work with the data types provided. Any ideas how I can get around this?
Can you try using a conversion in your insert statement, like this:
SELECT [accnum] = CASE ISNUMERIC(accnum)
WHEN 0 THEN NULL
ELSE CAST(accnum AS NUMERIC(16, 0))
END
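For context, here's how that CASE might sit inside the full insert; the table names below are hypothetical stand-ins for the physical table described in the question:

-- hypothetical names: source_table has accnum varchar(21), target_table has accnum2 numeric(16,0)
INSERT INTO dbo.target_table (accnum2)
SELECT CASE ISNUMERIC(accnum)
           WHEN 0 THEN NULL
           ELSE CAST(accnum AS NUMERIC(16, 0))
       END
FROM dbo.source_table;

Be aware that ISNUMERIC returns 1 for a few values (such as '$' or '1e5') that still can't be cast to NUMERIC(16,0), so rows like that would continue to raise the conversion error.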
I've tried the following method of creating a temp table for MSSQL using SQLA:
table_name = "#foo"
meta = MetaData(bind = session.bind)
table = Table(quoted_name(table_name, quote=False),
meta,
Column('a_number', Integer),
Column('device_Id', Integer),
Column('cost', Integer)
)
table.create()
There are no errors when I execute this, but there are errors if I follow it up with SQL statements that try to access the table. (The errors indicate #foo doesn't exist)
Also, if I look at the temp tables in my MSSQL session, there's no mention of the table, further evidence that it doesn't exist.
Note that I don't think this is a connection issue - if I comment out the above table.create() and 'manually' create the table, as in session.execute("create #foo.."), that succeeds and so does the subsequent insert and read. So I think I'm on the same connection the whole time. Also, I can single-step through this in a debugger and intermittently request my MSSQL session ID, and it comes back the same (meaning I'm on the same session from MSSQL's point of view too).
A later test: I enabled full SQLAlchemy debugging and noticed that table.create() was causing a "commit" to be issued after the CREATE TABLE statement. Somehow, this commit was causing the temp table to become inaccessible. I experimented and found that if this commit is not emitted, then table.create() works and the temp table can be accessed in subsequent statements.
Here's my workaround until I figure out why the commit is being emitted and/or why the commit is causing the temp table to "go away":
from sqlalchemy import MetaData, Table, Column, Integer
from sqlalchemy.schema import CreateTable
from sqlalchemy.sql.elements import quoted_name

table_name = "#foo"
meta = MetaData(bind=session.bind)
table = Table(quoted_name(table_name, quote=False),
              meta,
              Column('a_number', Integer),
              Column('device_Id', Integer),
              Column('cost', Integer)
              )
session.execute(CreateTable(table))
In the above approach, CreateTable renders the actual SQL creation syntax, which is then executed via session.execute (which does not issue a commit).
A couple of points:
-> Check that you are creating the engine properly. See the link and look for the Microsoft SQL Server heading. Link: http://docs.sqlalchemy.org/en/latest/core/engines.html
-> Check that your metadata is bound to the engine.
I am using a GUID for a batch identifier in SSIS. My final output goes to SQL Server.
I know how I can generate one in SQL Server using Select NewId() MyUniqueIdentifier - that is, using a query and an Execute SQL Task.
However, I am looking to do this within an SSIS package, if possible without SQL Server available.
Can I generate a GUID within SSIS?
I had a similar problem. To fix it, I created an SSIS Script Component in which I created a "guid" output. The Visual Studio C# script code was the following:
Row.guid = Guid.NewGuid();
Finally, I routed the output as a derived column into my database's "OLE DB Destination" to generate a guid for every new entry.
Simply do it in an Execute SQL Task:
- Open the task
- Under General -> SQL Statement, enter your query Select NewID() MyID in the "SQLStatement" field
- Under General -> Result Set, choose "Single row"
- Under Parameter Mapping, enter your User::myID in Variable Name, "Input" as direction, 0 as Parameter Name, and -1 as Parameter Size
- Under Result Set, enter "MyID" for your Result Name and type the variable in Variable Name
- Click OK
Done. Note that "MyID" is a value you can choose. EDIT: "User::myID" corresponds to the SSIS variable that you create.
I have a database and a SQL script to add some fields to a table called "Products" in that database.
But when I execute this script, I get the following error:
Cannot find the object "Products" because it does not exist or you do not have permissions
Why is the error occurring and what should I do to resolve it?
I found a reason why this would happen. The user had the appropriate permissions, but the stored procedure included a TRUNCATE statement:
TRUNCATE TableName
Since TRUNCATE deletes rows with only minimal logging, you (apparently) need elevated permissions to execute a stored procedure that contains it. We changed the statement to:
DELETE FROM TableName
...and the error went away!
Are you sure that you are executing the script against the correct database? In SQL Server Management Studio you can change the database you are running the query against in a drop-down box on one of the toolbars, or you can start your query with this:
USE SomeDatabase
It can also happen due to a typo in referencing a table such as [dbo.Product] instead of [dbo].[Product].
Does the user you're executing this script under even see that table?
select top 1 * from products
Do you get any output for this?
If yes: does this user have permission to modify the table, i.e. to execute DDL scripts like ALTER TABLE? Typically, regular users don't have these elevated permissions.
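A quick way to check both points from the same session is a hedged sketch like the one below; Products is the table from the question, and the call just asks SQL Server what the current user can do:

-- returns 1 if the current user holds the permission, 0 if not, NULL if the object isn't visible at all
SELECT HAS_PERMS_BY_NAME('dbo.Products', 'OBJECT', 'SELECT') AS can_select,
       HAS_PERMS_BY_NAME('dbo.Products', 'OBJECT', 'ALTER')  AS can_alter;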
Look for any DDL operation in the script.
Maybe the user does not have access rights to run changes.
In my case it was SET IDENTITY_INSERT tblTableName ON
You can either add the user to db_ddladmin for the whole database, or grant permissions on just the table to solve this issue (or change the script):
-- give the non-ddladmin user INSERT/SELECT as well as ALTER:
GRANT ALTER, INSERT, SELECT ON dbo.tblTableName TO user_name;
It could also be that you created "Products" in your login's schema and you are trying to reference it under a different schema (probably dbo).
Steps to resolve this issue:
1) Open Management Studio.
2) Locate the object in Object Explorer and identify the schema it sits under (the text before the object name; in my case the schema is "dbo" and my object name is action status).
If instead you see something like "yourcompanydomain\yourloginid" as the schema, the object was created under your login's schema, and you can only modify permissions on that specific schema, not on any other schema.
You may refer to "Ownership and User-Schema Separation in SQL Server".
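As an alternative to browsing Object Explorer, a small hedged sketch that lists which schema (or schemas) actually contain the table:

-- shows every schema that contains a table named Products
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name = 'Products';

If the schema returned isn't the one your script assumes, qualify the table name accordingly or fix the script's schema reference.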
I've been trying to copy a table from PROD to DEV but get an error:
"Cannot find the object X because it does not exist or you do not have permissions."
However, the table did exist, and I was running as sa so I did have permissions.
The problem was actually with CONSTRAINTS. I'd renamed the table on DEV to old_XXX months ago, but when I tried to copy the original one over from PROD, the default constraint names clashed.
The error message was misleading.
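If you run into the same clash, a hedged sketch for spotting duplicate default constraint names on the target side before copying:

-- lists existing default constraint names and the tables that own them
SELECT dc.name AS constraint_name,
       OBJECT_NAME(dc.parent_object_id) AS table_name
FROM sys.default_constraints AS dc
ORDER BY dc.name;

Renaming or dropping the clashing constraints on the renamed copy should let the PROD table come across.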
You can right-click the procedure, choose Properties, and see which permissions are granted to your login ID. You can then manually check the "Execute" and "Alter" permissions for the proc.
Or to script this it would be:
GRANT EXECUTE ON OBJECT::dbo.[PROCNAME]
TO [ServerInstance\user];
GRANT ALTER ON OBJECT::dbo.[PROCNAME]
TO [ServerInstance\user];
This could be a permission issue. The user needs at least ALTER permission to truncate a table.
Another option is to call DELETE FROM instead of TRUNCATE TABLE, but this operation is slower because it logs every deleted row, whereas TRUNCATE only logs the page deallocations.
The minimum permission required is ALTER on table_name. TRUNCATE TABLE
permissions default to the table owner, members of the sysadmin fixed
server role, and the db_owner and db_ddladmin fixed database roles,
and are not transferable. However, you can incorporate the TRUNCATE
TABLE statement within a module, such as a stored procedure, and grant
appropriate permissions to the module using the EXECUTE AS clause.
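As a hedged sketch of that last suggestion (all names here are hypothetical), the TRUNCATE can be wrapped in a procedure that executes as its owner, so callers only need EXECUTE rather than ALTER:

-- hypothetical wrapper; runs with the owner's permissions
CREATE PROCEDURE dbo.TruncateTableName
WITH EXECUTE AS OWNER
AS
BEGIN
    TRUNCATE TABLE dbo.TableName;
END
GO

GRANT EXECUTE ON dbo.TruncateTableName TO some_user;  -- hypothetical user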
Sharing my case; I hope it helps.
In my situation, inside MY_PROJ.Database -> MY_PROJ.Database.sqlproj, I had to add this:
<Build Include="dbo\Tables\MyTableGeneratingScript.sql" />
In my case I was running under a different user than the one I was expecting.
My code passed 'DRIVER={SQL Server};SERVER=...;DATABASE=...;Trusted_Connection=false;User Id=XXX;Password=YYY' as the connection string to pypyodbc.connect(), but it ended up connecting with the credentials of the Windows user that ran the script instead of the User Id= from the connection string.
(I verified this using SQL Server Profiler and by putting an invalid uid/password combination in the connection string, which did not produce the error you would expect.)
I decided not to dig into this further, since switching to this better way of connecting fixed the issue:
conn = pypyodbc.connect(driver='{SQL Server}', server='servername',
database='dbname', uid='userName', pwd='Password')
In my case, the SQL Server version on my localhost was higher than the one on the production server, so some newer options were added to the script generated from localhost. This caused errors in creating the table in the first place.
Since the creation of the table failed, subsequent queries on the "NON EXISTING" table also failed.
Luckily, among the long list of SQL errors, I found "OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF" to be the new option in the script causing my issue. I did a search-and-replace to remove it and the error went away.
Hope it helps someone.
The TRUNCATE statement was my first problem, glad to find the solution here. But I was using SSIS and trying to load data from another database, and it failed with the same error on any table that used IDENTITY to create an auto-incrementing ID. If I was scripting it myself I'd first need to use the command SET IDENTITY_INSERT tablename ON, and then SET IDENTITY_INSERT tablename OFF when the table update was done. But this requires ALTER permissions on the table, which I do not have. Hence the error message in SSIS on the table load (even though the previous step had just deleted all the data out of the table.)
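For reference, the manual pattern mentioned above looks roughly like this; the table and column names are hypothetical, and it only works if you hold ALTER on the target table:

SET IDENTITY_INSERT dbo.TargetTable ON;   -- requires ALTER permission on the table

INSERT INTO dbo.TargetTable (Id, Name)    -- the identity column must be listed explicitly
SELECT Id, Name
FROM OtherDb.dbo.SourceTable;

SET IDENTITY_INSERT dbo.TargetTable OFF;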
You can also receive this error when you use an ORM like GORM (https://gorm.io/) in Go, for example when you try to create a struct and accidentally pass the ID (primary key) even though it's inserted automatically.
Feature-rich IDEs like Visual Studio Code make this mistake easy to make:
if tx := db.Create(&myStruct{
	Ts: time.Now(),
	ID: 42, // explicitly setting the auto-generated primary key triggers the error
}); tx.Error != nil {
	t.Fatal(tx.Error)
}
You can still use Visual Studio Code's auto-fill, but delete the entry for your model's primary key:
if tx := db.Create(&myStruct{
	Ts: time.Now(), // leave ID unset so the database assigns it
}); tx.Error != nil {
	t.Fatal(tx.Error)
}