Temp table dropping immediately after query finishes - sql-server

If I highlight and execute lines 1-4, I get the output
Commands completed successfully.
Then, if I highlight and execute lines 6-14, I get the error message
Invalid object name '#TestThis'
If I highlight and execute lines 1-16, I can see the one row of data returned. Why would a temp table that was just created (in the same session) immediately be dropped/invalid right after the code was executed? We're running this on an Azure-based SQL Server.

If the session remains alive, the temporary table should still exist and be accessible. Make sure you are executing the CREATE statement and the other statements in the same session, and that you are not getting a disconnection message in between.
Make sure the "Disconnect after the query executes" option in SSMS is OFF.
If it still fails, do this check:
Create your temporary table, and keep the session alive (don't close the tab or disconnect it):
CREATE TABLE #TestThis (oldvalue INT, newvalue INT)
On a different session, query tempdb like the following:
SELECT * FROM tempdb.sys.tables WHERE [name] LIKE N'#TestThis%'
You should be able to see the temporary table created on the other session, starting with the same name and getting a bunch of underscores and some numbers at the end. This means the table still exists and you should be able to access it from the original session.
If you open a 3rd session and create the same temporary table, two of these should be listed by the tempdb query.
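A quick way to confirm that both batches really run on the same session is to compare the session ID before and after; if the value changes, the connection was reset and the temp table went with it. A minimal sketch:

```sql
-- Run this in the same query window before and after creating the temp table;
-- a different value on the second run means a new session (and no #TestThis)
SELECT @@SPID AS session_id;
```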

Matthew Walk, you can try this solution, as shown in the example below.
CREATE TABLE #TestThis(oldvalue INT, newvalue INT )
INSERT INTO #TestThis(oldvalue, newvalue) VALUES (1,3),(5,7)
select oldvalue, newvalue FROM #TestThis
IF OBJECT_ID('tempdb..#TestThis') IS NOT NULL DROP TABLE #TestThis
In this example, you check whether the object ID for your temp table is NULL; if it is not NULL, the temp table already exists and you need to drop it.
I hope this helps you.
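As a side note, on SQL Server 2016+ and Azure SQL Database the OBJECT_ID check can be written more compactly:

```sql
-- Equivalent to the OBJECT_ID check above; does nothing if the table is absent
DROP TABLE IF EXISTS #TestThis;
```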

A new session can be created each time you click the execute icon (for example, when the connection has to be re-established), so yes, if you don't highlight the "create table" part, the table wouldn't exist.
You can either remove the hash (#) in front of the table name if you want the table to stay in the database until you drop it, or also highlight the create table part every time you click "execute", whichever fits your needs better.

"#Temp" table is session level. If the database connection or session closed, the object "#TestThis" will be deleted.
You must keep the session going.
You should first highlight execute lines 1-4 to create the temp table "#TestThis" if not exist.
CREATE TABLE #TestThis
(
    oldvalue INTEGER,
    newvalue INTEGER
)
Then you can execute lines 6-14. If you don't first create the temp table, how can you insert data into the temp table #TestThis?
Now you can execute:
INSERT INTO #TestThis
(
oldvalue,
newvalue
)
VALUES
( 1234,
7788
)
Or
SELECT * FROM #TestThis
Hope this helps.

It turns out the cause of this issue was using MFA instead of Active Directory password authentication. Once the connection was switched to Active Directory password, the temp tables were created, accessible, and persisted as expected. (With MFA, the client can silently re-establish the connection when the token is refreshed, which starts a new session and drops any session-scoped temp tables.)

Related

How does SQL Server handle failed query to linked server?

I have a stored procedure that relies on a query to a linked server.
This stored procedure is roughly structured as follows:
-- Create local table var to stop query from needing round trips to linked server
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates (eid)
SELECT eid FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
-- Many things, including
[status] = CASE
    WHEN eid IN (SELECT eid FROM @duplicates)
    THEN 'String'
    ELSE es
END
FROM [my_server].[another_database].[dbo].[view]
-- This view obscures sensitive information and shows only the data that I have permission to see
-- Many other things
The query itself is much more complex, but the key idea is building this temporary table from a linked server (because it takes the query 5 minutes to run if I don't, versus 3 seconds if I do).
I've recently had an issue where I ended up with updates to my table that failed to get checked against the linked server for duplicate information.
The logical chain of events is this:
Get all of the data from the original view. The original view contains maybe 3000 records, of which maybe 30 are duplicates of the entity in question, but with one field having a different value.
I then have to grab data from a different server to know which of the duplicates is the correct one.
When the stored procedure runs, it updates each record.
ERROR STEP - when the stored procedure hits a duplicate record, it updates my_table again, so es gets changed multiple times in a row.
The temp table was added after the fact when we realized incorrect es values were being introduced to my_table.
`my_database` does not contain the data needed to determine which is the correct tuple, hence the requirement for the linked server.
As far as I can tell, we had a temporary network interruption or a connection timeout that stopped my_server from getting the response back from linked_server, and it just passed an empty table to the rest of the procedure.
So, my question is - how can I guard against this happening?
I can't just check whether the table is empty, because it could legitimately be empty. I need to know definitively whether that initial SELECT from linked_server failed, timed out, or intentionally returned nothing.
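One way to tell a genuine failure apart from a legitimately empty result is to wrap the remote read in TRY/CATCH and record success explicitly. A sketch, reusing the names from the question:

```sql
-- Table variable as in the original procedure
DECLARE @duplicates TABLE (eid NVARCHAR(6));
DECLARE @remoteReadSucceeded BIT = 0;

BEGIN TRY
    INSERT INTO @duplicates (eid)
    SELECT eid
    FROM [linked_server].[linked_database].[dbo].[linked_table]
    WHERE es = 'String';

    -- Only reached if the remote query completed without error
    SET @remoteReadSucceeded = 1;
END TRY
BEGIN CATCH
    -- The remote query failed or timed out: re-raise rather than
    -- continue into the UPDATE with an empty table
    THROW;
END CATCH

IF @remoteReadSucceeded = 1
BEGIN
    -- Safe to run the UPDATE that depends on @duplicates here
    PRINT 'Remote read succeeded';
END
```

Be aware that some severe linked-server and network errors can abort the whole batch before the CATCH block runs, so also checking the error from the calling application is a safer belt-and-braces approach.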
Without knowing the definition of the table you're querying, you could run into an issue where your data is too long and you get a truncation error. Better to be safe and SUBSTRING it:
DECLARE @duplicates TABLE (eid NVARCHAR(6))
INSERT INTO @duplicates (eid)
SELECT SUBSTRING(eid, 1, 6) FROM [linked_server].[linked_database].[dbo].[linked_table]
WHERE es = 'String'
-- Update on my server using data from linked server
UPDATE [my_server].[my_database].[dbo].[my_table]
SET
-- Many things, including
[status] = CASE
    WHEN eid IN (SELECT eid FROM @duplicates)
    THEN 'String'
    ELSE es
END
FROM [my_server].[another_database].[dbo].[view]
I had a similar problem where I needed to move data between servers and could not use a network connection, so I ended up doing BCP out and BCP in. This is fast and clean, and it takes away the complexity of user authentication, drivers, and trust domains. Also, it's repeatable and can be used for incremental loading.

using :new and :old referencing different tables in SQL

Revising for a uni exam. A question states:
Write an SQL command to create a trigger in table Permission. The
trigger should add one to the numberOfPermissions in table File for a
file, after each time a new permission row is entered into table
Permission with that file’s name.
here's a list of the tables provided
I've got everything down except one line, the WHERE line. How would I specify the :new value against a different table? It needs to read the new value as the fileName column coming from the Permissions table, but I'm not sure how to do that. I've tried it in ways such as :Permissions.new.fileName etc., but I always get an unspecified error around the "." point.
CREATE TRIGGER newTrig
AFTER INSERT ON Permission
BEGIN
UPDATE File
SET numberOfPermissions = numberOfPermissions+1
WHERE File.name = :new.fileName
END;
When I run your trigger creation code in this db fiddle, it gives me :
ORA-04082: NEW or OLD references not allowed in table level triggers
What happens is that you have omitted the FOR EACH ROW option in the declaration of the trigger. Because of that, Oracle believes that you want a table level trigger, which is executed once per statement (whereas a row level trigger is executed once for each row inserted).
As a single statement can result in multiple rows being inserted (eg : INSERT INTO ... AS SELECT ...), Oracle does not allow you to access :NEW and :OLD references in table level triggers.
Adding the FOR EACH ROW option to your trigger definition will make it a row level trigger, that is allowed to access :NEW and :OLD references.
CREATE TRIGGER newTrig
AFTER INSERT ON Permission
FOR EACH ROW
BEGIN
UPDATE File
SET numberOfPermissions = numberOfPermissions+1
WHERE File.name = :new.fileName;
END;
PS : you were also missing a semi-colon at the end of the UPDATE statement.
Unrelated PS :
> create table USER ( x number);
ORA-00903: invalid table name
> create table FILE ( x number);
ORA-00903: invalid table name
=> It is usually not a good idea to create tables whose names are reserved words, this can sometimes lead to tricky errors.
The trigger fires once per inserted row (with FOR EACH ROW), so you can use the ":NEW" pointer to reference the values of the row being inserted (on an INSERT, :OLD is empty).
Try something like this:
CREATE OR REPLACE TRIGGER newTrig
AFTER INSERT ON Permission
FOR EACH ROW
BEGIN
UPDATE File
SET numberOfPermissions = numberOfPermissions + 1
WHERE name = :new.fileName;
END;

VB6 application - automatically incremented number check

I'm building a small application and I came across an issue that I need to resolve. When I insert a new client into the SQL-SERVER I need to create an ID number to go with the client. I have a value, say - 1000 in a reference table, that gets pulled from the table, incremented by 1, then put back into the ref table, and the value 1001 gets assigned to the client. Before it gets saved to the client, I add 'TOL' to the number - so when save is complete, the ID is TOL1001. The issue I need to resolve is to check the tblClient_TABLE, to make sure that ID TOL1001 doesn't exist already before doing the insert for a new client.
I'm not really sure where I should do it, because on SAVE, I call the function that increments the number, assigns TOL to it and stores the value in an invisible textbox, so when I do my insert, it just pulls the value from the textbox...
strSQL = "INSERT INTO tblClient_TABLE (ID) VALUES ('" & txtIDnumber.Text & "')"
I obviously have more data to insert, it's just i'm struggling with finding a logic way to check for the already existing ID.
Thanks for any ideas, help!
Your database is able to use identity columns (= autoincrement), so if you insert a new record, an identity column will get the next value (and you can rely on its uniqueness).
How do you get this number? The INSERT statement has (for MSSQL) an OUTPUT INSERTED clause, and if you use ADO with ExecuteScalar you get your inserted ID.
The SQL command (you must add the VB6 code for the ADO command):
INSERT INTO [TABLENAME] ( [COL1], [COL2] ) OUTPUT INSERTED.MYID VALUES ( @COL1, @COL2 )
... add your parameter values here ...
result = adoCommand.ExecuteScalar()
(something like that, I don't have VB6 at the office ...)
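For the original question of guaranteeing a unique TOL number, one alternative sketch (the table and column names here are assumed, not from the original schema) is to increment the counter in the reference table and read the new value back in a single atomic statement, so two clients can never receive the same number:

```sql
-- Assumed reference table holding the last-used client number
DECLARE @newId TABLE (nextVal INT);

UPDATE tblReference
SET lastClientNumber = lastClientNumber + 1
OUTPUT INSERTED.lastClientNumber INTO @newId;

-- Build the client ID from the freshly reserved number
SELECT 'TOL' + CAST(nextVal AS VARCHAR(10)) AS ClientID FROM @newId;
```

Because the increment and the OUTPUT happen as one statement, there is no window in which another connection can read the same counter value, which removes the need to check tblClient_TABLE for an existing ID before the insert.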

Connection scoped temp tables across stored procedures

I'm working on a data virtualization solution. The user is able to write his own SQL queries as filters for a query I make. I would like not to have to run this filter query every time I select something from the database (it will likely be a complex series of joins).
My idea was to use a # temp table at script level and keep the connection alive. This #temp table would then be selected from, but updated only when the user changes the filter. The idea being I can actually use it from stored procedures, and the table is scoped to that connection.
I got the idea from someone who suggested using dynamic SQL and ## global temp tables named with the connection process ID, so as to make each connection have a unique global temp table. This was to overcome the inability to share temp tables across stored procedures. But it seems a bit clumsy.
I did a quick test with the below code and seemed to work fine
-- Run script at connection open from some app
SELECT * INTO #test
FROM dataTable
-- Now we can use stored procedures with #test table
EXECUTE selectFromTempTable
EXECUTE updateTempTable @sqlFilterString
EXECUTE selectFromTempTable
The only real problem I can see is that the connection has to be kept alive for the duration, which could be a few hours. A single user can have multiple connections running at the same time. The number of users on a single database server would be at most around 20.
If it's a huge issue, I could make it so the application closes and opens them as needed, so each user only has one connection open at a time. And maybe even close it when not in use and reopen it when needed again, with the delay of having to wait for the query to run.
Would this be bad practice, or kill any performance benefit from not running the filter query? This is on SQL Server 2008 and up.
I think I would create a permanent table, using the spid (process ID) as a key value. Each connection has its own process ID, so anyone can use it to identify their entries in the table:
create table filter(
spid int,
filternum int,
filterstring varchar(255),
<other cols> );
create unique index filterindx on filter(spid, filternum);
Then when a user creates filter entries:
delete from filter where spid = @@SPID
insert into filter(spid, filternum, filterstring) select @@SPID, 1, 'some sql thing'
insert into filter(spid, filternum, filterstring) select @@SPID, 2, 'some other sql thing'
Then you can access each user's filter values by selecting where spid = @@SPID, etc.
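Inside any stored procedure called on the same connection, the caller's rows can then be read back with @@SPID. A sketch; note that SPIDs are reused after a connection closes, which is why the delete-then-insert step above matters:

```sql
-- Retrieve the current connection's filters, in order
SELECT filternum, filterstring
FROM filter
WHERE spid = @@SPID
ORDER BY filternum;
```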

Tracking User activity log for SQL Server database

I have a database with multiple tables and I want to log the users activity via my MVC 3 web application.
User X updated category HELLO. Name changed from 'HELLO' to 'Hi There' on 24/04/2011
User Y deleted vehicle Test on 24/04/2011.
User Z updated vehicle Bla. Name changed from 'Blu' to 'Bla' on 24/04/2011.
User Z updated vehicle Bla. Wheels changed from 'WheelsX' to 'WheelsY' on 24/04/2011.
User Z updated vehicle Bla. BuildProgress changed from '20' to '50' on 24/04/2011
My initial idea is to have on all of my actions that have database crud, to add a couple lines of code that would enter those strings in a table.
Is there a better way of checking which table and column have been modified than checking every column one by one with if statements (first I select the current values, then check each of them against the value of the textbox)? I did that for another ASPX web app and it was painful.
Now that I'm using MVC and the ADO.NET Entity Data Model, I'm wondering if there is a faster way to find the columns that were changed and build a log like the one above.
You can also accomplish this by putting your database into full recovery mode and then reading the transaction log.
When the database is in full recovery mode, SQL Server logs all UPDATE, INSERT and DELETE (and other, such as CREATE, ALTER, DROP...) statements into its transaction log.
So, using this approach, you don't need to make any additional changes to your application or your database structure.
But you will need a 3rd-party SQL transaction log reader. Red Gate has a free solution for SQL Server 2000 only. If your server is 2005 or higher, you would probably want to go with ApexSQL Log.
Also, this approach will not be able to audit SELECT statements, but it's definitely the easiest to implement if you don't really need to audit SELECT queries.
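For completeness, switching a database into full recovery mode (the prerequisite for this approach) is a one-line command; the database name here is a placeholder:

```sql
-- Required before the transaction log retains full statement history
ALTER DATABASE MyDatabase SET RECOVERY FULL;
```

Keep in mind that full recovery mode requires regular log backups, or the log file will grow without bound.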
The way I see, you have two options:
Create triggers on the database side, mapping changes on a table-by-table basis and putting the results into a Log table
OR
Have the code handle the changes. You would have a base class with data, and with reflection you could iterate over all object properties and see what has changed. Then save that into your Log table. Of course, that code would live in your Data Access Layer.
By the way, if you have a good code structure/architecture, I would go with the second option.
You could have a trigger (AFTER INSERT/UPDATE/DELETE) on each table you want to monitor. The beauty is COLUMNS_UPDATED(), which returns a varbinary value indicating which columns have been updated.
Here is some snippet of code that I put in each trigger:
IF (@@ROWCOUNT = 0) return
declare @AuditType_ID int,
        @AuditDate datetime,
        @AuditUserName varchar(128),
        @AuditBitMask varbinary(10)
select @AuditDate = getdate(),
       @AuditUserName = system_user,
       @AuditBitMask = columns_updated()
-- Determine modification type
IF (exists (select 1 from inserted) and exists (select 1 from deleted))
    select @AuditType_ID = 2 -- UPDATE
ELSE IF (exists (select * from inserted))
    select @AuditType_ID = 1 -- INSERT
ELSE
    select @AuditType_ID = 3 -- DELETE
(record this data to your table of choice)
I have a special function that can decode the bitmask values, but for some reason it is not pasting well here. Message me and I'll email it to you.
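For reference, the documented pattern for testing the mask directly is a bitwise AND: bit n-1 of COLUMNS_UPDATED() is set when the table's nth column (in column order) was modified. This simple form works for tables with eight or fewer columns; wider tables need byte-by-byte handling:

```sql
-- Inside the trigger body: test whether the 2nd column was part of the update
-- (bit value 2 = binary 10 = second column)
IF (COLUMNS_UPDATED() & 2) > 0
    PRINT '2nd column was updated'
```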
