Check if a table exists based on a column value in SQL Server - sql-server

We have a data collection program that dynamically creates tables for data storage based on the identity value from another table. For example, if 15 devices are created, the Devices table would have 15 entries (name, address, etc.) with DeviceID values of, say, 134 through 148, and 15 tables would be created, called Dev134 through Dev148.
Occasionally an issue occurred where some Dev tables were deleted but the corresponding record in the Devices table was not, leaving an orphan entry in the Devices table. I.e. there is a DeviceID = 1245, but there is no table Dev1245.
What we would like to do is go through the Devices table and see if there is a corresponding Dev table in the database, and if not list the ID.
I have done this through a separate program, pulling the DeviceIDs from the Devices table into a list and then running, for each ID,
SELECT *
FROM #DeviceID
(where #DeviceID is the string "Dev" + DeviceID)
If I get rows back I know the table is there, and if the query fails I know it's missing, but I was hoping to do this with a single SELECT statement that would return the IDs of the missing tables.

You can select table information from sys.tables:
https://learn.microsoft.com/en-us/sql/relational-databases/system-catalog-views/sys-tables-transact-sql?view=sql-server-ver16
This statement should give you all entries that are missing a corresponding table (note the join has to build the expected table name from the DeviceID, since the Devices table's name column holds the device name, not the Dev-prefixed table name):
SELECT [devices].*
FROM Devices AS [devices]
LEFT JOIN sys.tables AS [tables]
ON 'Dev' + CAST([devices].[DeviceID] AS VARCHAR(10)) = [tables].[name]
WHERE [tables].[name] IS NULL

SELECT 'Dev'+CAST(deviceId AS VARCHAR(10))
FROM devices
WHERE NOT EXISTS (SELECT * FROM sys.tables WHERE name='Dev'+CAST(deviceId AS VARCHAR(10)));
Here is a DBFiddle demo
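If you prefer not to query sys.tables directly, here is a sketch of the same check using OBJECT_ID, assuming the same Devices table and Dev-prefix naming convention as above and that the tables live in the dbo schema:

```sql
-- List DeviceIDs whose Dev<ID> table does not exist in the current database.
-- OBJECT_ID returns NULL when no object with that name exists;
-- the 'U' argument restricts the check to user tables.
SELECT d.DeviceID
FROM Devices AS d
WHERE OBJECT_ID('dbo.Dev' + CAST(d.DeviceID AS VARCHAR(10)), 'U') IS NULL;
```

Because of the 'U' type filter, a view or procedure that happened to be named DevNNN would not mask a missing table.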

Related

How to efficiently replace long strings by their index for SQL Server inserts?

I have a very large DataTable-Object which I need to import from a client into an MS SQL-Server database via ODBC.
The original Data-Table has two columns:
* First column is the Office Location (quite a long string)
* Second column is a booking value (integer)
Now I am looking for the most efficient way to insert this data into an external SQL Server. My goal is to automatically replace each office location with an index instead of the full string, because each location occurs VERY often in the initial table.
Is this possible via a trigger or via a view on the SQL-server?
In the end I want to insert the data without touching it in my script, because that is very slow for this amount of data, and let SQL Server do the optimization.
I expect that if I INSERT the data including the office location, SQL Server looks up the index for an already-imported location and uses just that index. If the location does not already exist in the index table / view, it should create a new entry there and then use the new index.
Here is a sample of the data I need to import via ODBC into SQL Server:
OfficeLocation | BookingValue
EU-Germany-Hamburg-Ostend1 | 12
EU-Germany-Hamburg-Ostend1 | 23
EU-Germany-Hamburg-Ostend1 | 34
EU-France-Paris-Eifeltower | 42
EU-France-Paris-Eifeltower | 53
EU-France-Paris-Eifeltower | 12
What I need on the SQL Server is something like these 2 tables as a result:

Bookings:
OId | BookingValue
1   | 12
1   | 23
1   | 34
2   | 42
2   | 53
2   | 12

Locations:
OfficeLocation             | OId
EU-Germany-Hamburg-Ostend1 | 1
EU-France-Paris-Eifeltower | 2
My initial idea was to write the data into a temp table and have something like an intelligent TRIGGER (or a VIEW?) react to any INSERT into this table and create the 2 desired (optimized) tables.
Any hints are more than welcome!
Yes, you can create a view with an INSERT trigger to handle this. Something like:
CREATE TABLE dbo.Locations (
    OId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    OfficeLocation varchar(500) NOT NULL UNIQUE
)
GO
CREATE TABLE dbo.Bookings (
    OId int NOT NULL,
    BookingValue int NOT NULL
)
GO
CREATE VIEW dbo.CombinedBookings
WITH SCHEMABINDING
AS
    SELECT OfficeLocation, BookingValue
    FROM dbo.Bookings b
    INNER JOIN dbo.Locations l ON b.OId = l.OId
GO
CREATE TRIGGER CombinedBookings_Insert
ON dbo.CombinedBookings
INSTEAD OF INSERT
AS
    -- First add any locations we haven't seen before
    INSERT INTO Locations (OfficeLocation)
    SELECT OfficeLocation
    FROM inserted
    WHERE OfficeLocation NOT IN (SELECT OfficeLocation FROM Locations)
    -- Then record the bookings against the (possibly new) location ids
    INSERT INTO Bookings (OId, BookingValue)
    SELECT l.OId, i.BookingValue
    FROM inserted i
    INNER JOIN Locations l ON i.OfficeLocation = l.OfficeLocation
As you can see, we first add to the locations table any missing locations and then populate the bookings table.
A similar trigger can cope with Updates. I'd generally let the Locations table just grow and not attempt to clean it up (for no longer referenced locations) with triggers. If growth is a concern, a periodic job will usually be good enough.
Be aware that some tools (such as bulk inserts) may not invoke triggers, so those will not be usable with the above view.
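A quick usage sketch, assuming the tables and view above have been created: inserting through the view routes the rows into both base tables.

```sql
-- Insert sample rows through the view; the INSTEAD OF trigger splits them
INSERT INTO dbo.CombinedBookings (OfficeLocation, BookingValue)
VALUES ('EU-Germany-Hamburg-Ostend1', 12),
       ('EU-Germany-Hamburg-Ostend1', 23),
       ('EU-France-Paris-Eifeltower', 42);

-- Locations now holds one row per distinct location;
-- Bookings holds one row per inserted value, keyed by OId.
SELECT * FROM dbo.Locations;
SELECT * FROM dbo.Bookings;
```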

Updating Rows In a Table That Was Used to Import From SSIS

I am currently creating an SSIS package that selects the top 2000 entries from one table and inserts those entries into another table. How can I update those entries in the original table (my select contains a condition that checks whether the status is still empty) so that in the next run of the SSIS package they won't be imported again?
EDIT:
As I can't share the DDL for security reasons, to sketch my scenario: I have TableA and TableB, which share the same structure:
Index Number (Primary Key, Indexed, Auto Increment)
Name (varchar)
Surname (varchar)
Status (varchar)
Type (varchar)
Import Date (date)
Using an OLE DB Source within my Data Flow Task, the following query is used to determine which cases I should import from TableA to TableB (an OLE DB Destination):
SELECT TOP(2000) *
FROM TableA
WHERE Status = ''
AND [Type] = 'New Case'
ORDER BY IndexNumber ASC
Now that those cases are imported from TableA to TableB, I want to update them in TableA so they are not imported again. The data in TableB is often moved to another database, so I can't use it for comparison.
I suggest you use this pattern:
First, update the source table and mark the records as 'ready to extract', i.e. set their status to 1
Next, select all records from the source table that are ready to extract
Lastly, if the extract was successful, update the 1's (ready to extract) to 2's (extracted OK)
Here is how I would do it:
I would add an Execute SQL Task at the end of the control flow to update those first 2000 records in your original table to some other status (Status = '3' or something). That way, since your query already filters on an empty status, it would not select those 2000 records again in the next run.
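A sketch of what that Execute SQL Task could run. The status value '3' is an assumption; any non-empty marker that the source query excludes will do. Updating through a CTE with TOP ... ORDER BY targets the same lowest-IndexNumber rows the data flow just extracted:

```sql
-- Mark the same 2000 rows the package just extracted
-- (lowest IndexNumber first, matching the source query's ORDER BY)
WITH batch AS (
    SELECT TOP (2000) *
    FROM TableA
    WHERE Status = ''
      AND [Type] = 'New Case'
    ORDER BY IndexNumber ASC
)
UPDATE batch
SET Status = '3';
```

A plain UPDATE TOP (2000) would not honor the ORDER BY, which is why the CTE form is used here.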

Merge query using two tables in SQL server 2012

I am very new to SQL and SQL server, would appreciate any help with the following problem.
I am trying to update a share price table with new prices.
The table has three columns: share code, date, price.
The share code + date = PK
As you can imagine, if you have thousands of share codes and 10 years' data for each, the table can get very big. So I have created a separate table, a share ID table, and use a share ID in the first table instead (I was reliably informed this would speed up the query, as searching by integer is faster than by string).
So, to summarise, I have two tables as follows:
Table 1 = Share_code_ID (int), Date, Price
Table 2 = Share_code_ID (int), Share_name (string)
So let's say I want to update the table/s with today's price for share ZZZ. I need to:
Look for the Share_code_ID corresponding to 'ZZZ' in table 2
If it is found, update table 1 with the new price for that date, using the Share_code_ID I just found
If the Share_code_ID is not found, update both tables
Let's ignore for now how the Share_code_ID is generated for a new code, I'll worry about that later.
I'm trying to use a merge query loosely based on the following structure, but have no idea what I am doing:
MERGE INTO [Table 1]
USING (VALUES (1,23-May-2013,1000)) AS SOURCE (Share_code_ID,Date,Price)
{ SEEMS LIKE THERE SHOULD BE AN INNER JOIN HERE OR SOMETHING }
ON Table 2 = 'ZZZ'
WHEN MATCHED THEN UPDATE SET Table 1.Price = 1000
WHEN NOT MATCHED THEN INSERT { TO BOTH TABLES }
Any help would be appreciated.
http://msdn.microsoft.com/library/bb510625(v=sql.100).aspx
You use Table1 as the target table and Table2 as the source table.
You want to take action when a given ID is not found in Table2 - in the source table.
In the documentation you have already read, that corresponds to the clause
WHEN NOT MATCHED BY SOURCE ... THEN <merge_matched>
and the latter corresponds to
<merge_matched>::=
{ UPDATE SET <set_clause> | DELETE }
Ergo, you cannot insert into the source table there.
You could use triggers for auto-insertion when you insert something into Table1, but a trigger would not be able to insert the proper Share_name - it just won't know it.
So you have two options, I guess:
1) Write a T-SQL code block - look at stored procedures. I think there is also a construct to execute an anonymous code block in MS SQL, like the EXECUTE BLOCK command in Firebird SQL Server, but I don't know that for sure.
2) Create an updatable SQL VIEW joining Table1 and Table2 to show the most recent date, so that when you insert a row into this view, the view's on-insert trigger actually inserts rows into both tables; and when you update data in the view, the on-update trigger modifies the data.
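For the simpler case where the Share_code_ID has already been looked up (or generated), a MERGE against Table 1 alone could look like this sketch. The table name SharePrices and the variable values are assumptions; the columns follow the summary in the question, and the ID lookup for 'ZZZ' in Table 2 is assumed to have happened beforehand:

```sql
-- Upsert one day's price for a share whose ID is already known
DECLARE @ShareId   int   = 42,           -- assumed: resolved from Table 2 beforehand
        @PriceDate date  = '2013-05-23',
        @Price     money = 1000;

MERGE INTO SharePrices AS target         -- "Table 1": Share_code_ID, Date, Price
USING (VALUES (@ShareId, @PriceDate, @Price))
      AS source (Share_code_ID, [Date], Price)
ON  target.Share_code_ID = source.Share_code_ID
AND target.[Date] = source.[Date]
WHEN MATCHED THEN
    UPDATE SET Price = source.Price
WHEN NOT MATCHED THEN
    INSERT (Share_code_ID, [Date], Price)
    VALUES (source.Share_code_ID, source.[Date], source.Price);
```

The ON clause matches the composite primary key (share + date), so an existing row for that day is updated and a new day gets inserted.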

SQL Server: A severe error occurred on the current command. The results, if any, should be discarded

I have the following SQL Server query in a stored procedure, and I am calling it from a Windows application. I populate the table variable with 30 million records and then compare them with the previous day's records in tbl_ref_test_main to add and delete the differing records. There is a trigger on tbl_ref_test_main on insert and delete; the trigger writes the same record to another table. Because of the comparison of 30 million records it takes ages to produce the result, and it throws an error saying: A severe error occurred on the current command. The results, if any, should be discarded.
Any suggestions please.
Thanks in advance.
-- Declare table variable to store the records from CRM database
DECLARE #recordsToUpload TABLE(ClassId NVARCHAR(100), Test_OrdID NVARCHAR(100),Test_RefId NVARCHAR(100),RefCode NVARCHAR(100));
-- Populate the temp table
INSERT INTO #recordsToUpload
SELECT
class.classid AS ClassId,
class.Test_OrdID AS Test_OrdID ,
CAST(ref.test_RefId AS VARCHAR(100)) AS Test_RefId,
ref.ecr_RefCode AS RefCode
FROM Dev_MSCRM.dbo.Class AS class
LEFT JOIN Dev_MSCRM.dbo.test_ref_class refClass ON refClass.classid = class.classid
LEFT JOIN Dev_MSCRM.dbo.test_ref ref ON refClass.test_RefId = ref.test_RefId
WHERE class.StateCode = 0
AND (ref.ecr_RefCode IS NULL OR (ref.statecode = 0 AND LEN(ref.ecr_RefCode )<= 18 ))
AND LEN(class.Test_OrdID )= 12
AND ((ref.ecr_RefCode IS NULL AND ref.test_RefId IS NULL)
OR (ref.ecr_RefCode IS NOT NULL AND ref.test_RefId IS NOT NULL));
-- Insert new records to Main table
INSERT INTO dbo.tbl_ref_test_main
Select * from #recordsToUpload
EXCEPT
SELECT * FROM dbo.tbl_ref_test_main;
-- Delete records from main table where similar records does not exist in temp table
DELETE P FROM dbo.tbl_ref_test_main AS P
WHERE EXISTS
(SELECT P.*
EXCEPT
SELECT * FROM #recordsToUpload);
-- Select and return the records to upload
SELECT Test_OrdID,
CASE
WHEN RefCode IS NULL THEN 'NA'
ELSE RefCode
END,
Operation AS 'Operation'
FROM tbl_daily_upload_records
ORDER BY Test_OrdID, Operation, RefCode;
My suggestion would be that 30 million rows is too large for a table variable; try creating a temporary table, populating it with the data, and then performing the comparison there.
If that isn't possible/suitable, then perhaps create a permanent table and truncate it between uses.
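A minimal sketch of that change - the column list mirrors the table variable in the question; the index choice is an assumption, picked to support the EXCEPT comparisons:

```sql
-- Temp table instead of a table variable: it gets real statistics
-- and can be indexed, which matters at 30 million rows.
CREATE TABLE #recordsToUpload (
    ClassId    NVARCHAR(100),
    Test_OrdID NVARCHAR(100),
    Test_RefId NVARCHAR(100),
    RefCode    NVARCHAR(100)
);

-- Assumed helpful: a clustered index covering the compared columns
CREATE CLUSTERED INDEX IX_recordsToUpload
    ON #recordsToUpload (ClassId, Test_OrdID, Test_RefId, RefCode);

-- ...then the same INSERT ... SELECT and EXCEPT logic as before, and finally:
DROP TABLE #recordsToUpload;  -- or let it drop when the session ends
```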

How to fill a table using some of the data from other tables

I need to create a new table called "customer" that includes some of the columns from the "user" table and some from the "project" table. I built my customer table with specific column names, and I need to fill its columns with data from those other tables. The end goal: when a user creates a new account and project, the customer table is automatically filled from the other two tables, even though the column names differ.
INFO: I have three different user types: "suppliers", "customers" and "managers". I hold their information (including the user type) in one table called users.
Use the following query as an example and write a query to insert the rows to destination table from source table.
Ex:-
INSERT INTO TestTable (FirstName, LastName)
SELECT FirstName, LastName
FROM Person.Contact
WHERE EmailPromotion = 2
Note: use a JOIN in the SELECT query to join the two tables.
The 1st step would be to couple the data from the different tables using a table join. If you can produce a query result that matches your new table, then creating the table is simply a call to the below (in SQL Server, SELECT ... INTO creates the table from the result set):
SELECT ... INTO CUSTOMER FROM ...
"when user create a new account and project.." - is this something you plan on doing at run time in your application, and not something you need to collate using SQL at this point?
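If the fill should happen automatically at insert time, one option is an AFTER INSERT trigger on the users table. This is only a sketch: all column names here (UserId, Name, UserType on users; user_id, customer_name on customer) are assumptions and would need adjusting to the real schema.

```sql
-- Copy newly created customer-type users into the customer table.
-- All column names are assumptions; adjust to the real schema.
CREATE TRIGGER trg_users_fill_customer
ON users
AFTER INSERT
AS
BEGIN
    INSERT INTO customer (user_id, customer_name)
    SELECT i.UserId, i.Name
    FROM inserted AS i
    WHERE i.UserType = 'customer';
END
```

A second, similar trigger on the project table could fill in the project-derived columns once the project row exists.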
