Create a synonym (or similar) for multiple fields - sql-server

I'm using SQL Server 2012 Express. I want to create a synonym (or similar inline solution) to substitute in multiple standard column names across many tables.
For example, almost every table in my database has 3 identical columns: ID, DateAdded and TenantID. I want to have a way to select these without having to list them all out every time.
I tried some simple code as below to try to achieve this, but the syntax isn't correct in the create synonym section. I've googled but can't find anything that gives me what I'm after as an inline solution.
So for example, rather than:
SELECT [ID], [DateAdded], [TenantID]
FROM TableName
instead, I hoped to use this code to create a synonym:
CREATE SYNONYM [dbo].[Fields] FOR [ID], [DateAdded], [TenantID]
then I can repeatedly write the query:
SELECT dbo.[Fields] FROM TableName
and have the TableName be different every time.
I need this to work across many tables, so creating a view for each table won't be satisfactory.
Maybe synonyms aren't the right solution, but if not then I'd be happy to hear of some other way that provides an inline solution.

I've been following this post for a while, as the topic seems interesting and a learning opportunity to me :) I'm not sure whether what you ask is possible, but you could consider dynamic query execution as an alternative, as below:
DECLARE @C_Names VARCHAR(MAX)
DECLARE @T_Name VARCHAR(MAX)
SET @C_Names = '[ID], [DateAdded], [TenantID]'
SET @T_Name = 'your_table_name'
--As the column names are fixed, you can now change the table name
--only and execute the script to get your desired output
EXEC('SELECT ' + @C_Names + ' FROM ' + @T_Name)
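If you take this route, a slightly safer variant (just a sketch, keeping the same column list and placeholder table name) is to let QUOTENAME guard the identifier and run the statement through sp_executesql:
DECLARE @C_Names NVARCHAR(MAX) = N'[ID], [DateAdded], [TenantID]'
DECLARE @T_Name SYSNAME = N'your_table_name'
DECLARE @Sql NVARCHAR(MAX)
-- QUOTENAME wraps the table name in brackets so a stray name can't break out of the identifier
SET @Sql = N'SELECT ' + @C_Names + N' FROM ' + QUOTENAME(@T_Name)
EXEC sp_executesql @Sql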
Hope this at least gives you a ray of hope.

Related

'Multiple' values for a variable

A bit of background: there are multiple tables from multiple databases that share the same schema. When I query for all rows having the same master code (in the tables, the master code is in a column called CATMASTRCAT), the same code returns multiple rows, and the only thing they have in common is the CATMASTRCAT column. This works for a single master code (in the script below, if I set the variable to 031325-002-70 it shows multiple rows with different organizations and otherwise identical data, which is the desired result).
The question is: is there a way to pass multiple master codes as input in the variable? I'm planning to create this as a stored procedure.
This is my SQL script:
DECLARE @ProductNumber AS VARCHAR(1000)
SET @ProductNumber = ('031325-002-70')
SELECT ITEMS
,ORGANIZATION
FROM [EU].[dbo].[SOMETHING14]
WHERE ITEMS IN (@ProductNumber)
UNION
SELECT ITEMS
,ORGANIZATION
FROM [EU].[dbo].[SOMETHING12]
WHERE ITEMS IN (@ProductNumber)
UNION
SELECT ITEMS
,ORGANIZATION
FROM [EU].[dbo].[SOMETHING11]
WHERE ITEMS IN (@ProductNumber)
Feel free to ask for any other data you need. I'm fairly new to SQL, just self-taught. You can also lecture me about what's wrong with the code haha and how to do this better.
Thanks!
P.S. I've attached a picture of the query result.
Yes, the best way to do this is to use a table-valued parameter and then change the WHERE clause to say
WHERE catmastrcat IN (SELECT catmastrcat FROM @tablevaluename)
Or you could use an inner join, which might be faster depending on indexes and other issues; the code for that would look like this:
JOIN @tablevaluename tv ON AJF_CATMASTER.catmastrcat = tv.catmastrcat
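For example, here is a minimal sketch (the type and procedure names are made up, and it assumes the master code lives in CATMASTRCAT as described in the question):
CREATE TYPE dbo.MasterCodeList AS TABLE (catmastrcat VARCHAR(50) NOT NULL)
GO
CREATE PROCEDURE dbo.GetItemsByMasterCode
    @Codes dbo.MasterCodeList READONLY
AS
BEGIN
    SELECT ITEMS, ORGANIZATION
    FROM [EU].[dbo].[SOMETHING14]
    WHERE CATMASTRCAT IN (SELECT catmastrcat FROM @Codes)
    UNION
    SELECT ITEMS, ORGANIZATION
    FROM [EU].[dbo].[SOMETHING12]
    WHERE CATMASTRCAT IN (SELECT catmastrcat FROM @Codes)
END
GO
-- The caller fills the table type and passes it in:
DECLARE @Codes dbo.MasterCodeList
INSERT INTO @Codes VALUES ('031325-002-70'), ('031325-002-71')
EXEC dbo.GetItemsByMasterCode @Codes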

SQL Server : how to use variable tablename from select value

SELECT
(
SELECT SUM(ISNULL(Volume,0))
FROM Order_【a1.Login】
WHERE Login = a1.Login
) AS SelfVolume
FROM dbo.Account a1
I want the table name in the sub-select (【a1.Login】) to match the value a1.Login from the outer select statement (field Login of table Account). How can I get this result?
The technical answer is: By using dynamic SQL. It's complicated, error-prone and potentially dangerous (beware of Bobby Tables). Your SQLs will become unreadable and unmaintainable. You are entering a world of pain.
The correct answer is: You don't. Don't create a separate Orders table for every user. Create one Orders table with a foreign key to your Account table.
If you still want to go ahead and work with this broken database design (remember: You are entering a world of pain, and you are just getting started), you will somehow need to construct the following SQL dynamically:
SELECT SUM(ISNULL(Login_Volume,0)) FROM
(
SELECT SUM(ISNULL(Volume,0)) AS Login_Volume FROM Order_SomeUser WHERE Login = 'SomeUser'
UNION ALL
SELECT SUM(ISNULL(Volume,0)) AS Login_Volume FROM Order_SomeOtherUser WHERE Login = 'SomeOtherUser'
UNION ALL
...
) AS AllSums
You can do that in the language of your choice, either in your target language (C#, Java, PHP, etc.), which is probably the easiest and most maintainable solution, or directly in T-SQL, using cursors and loops (the hard way). Whichever language you choose, the algorithm is straightforward (a T-SQL sketch follows the list below):
Loop through your Account table and get the Logins.
Sanitize the value and validate that the corresponding Order_ table exists.
Create one SQL statement for each account.
Join them with UNION ALL.
Wrap them in the outer SELECT as shown above.
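If you do decide to build it directly in T-SQL, a rough sketch of that algorithm might look like the following (it assumes the Account and Order_<Login> tables from the question and is a starting point, not a hardened implementation):
DECLARE @Sql NVARCHAR(MAX) = N''
-- Build one SELECT per account and glue them together with UNION ALL,
-- skipping accounts whose Order_ table doesn't exist
SELECT @Sql = @Sql
    + CASE WHEN @Sql = N'' THEN N'' ELSE N' UNION ALL ' END
    + N'SELECT SUM(ISNULL(Volume,0)) AS Login_Volume FROM '
    + QUOTENAME(N'Order_' + a.Login)
    + N' WHERE Login = ' + QUOTENAME(a.Login, '''')
FROM dbo.Account a
WHERE OBJECT_ID(N'dbo.' + QUOTENAME(N'Order_' + a.Login)) IS NOT NULL
-- Wrap the whole thing in the outer SELECT and execute it
SET @Sql = N'SELECT SUM(ISNULL(Login_Volume,0)) FROM (' + @Sql + N') AS AllSums'
EXEC sp_executesql @Sql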
Again: If there is any chance of fixing your broken DB design instead, do that, it will pay off in the long run.

Query help to track daily updates made to table(for specific column)

I have 2 tables, Individual (IndividualId is the primary key) and IndividualAudit. Every time an update is made to the Individual table,
a record goes to the audit table. There are many columns that can be modified, but I am only interested in picking up records where the SSN was modified.
I'm using the query below:
SELECT I.IndividualId, I.ssn FROM Individual I
INNER JOIN IndividualAudit A
    ON (I.IndividualId = A.IndividualId AND A.UpdateDate = GETDATE())
WHERE I.updatedate = GETDATE() AND I.ssn <> A.ssn
GROUP BY I.IndividualId, I.ssn
Can someone please tell me whether my approach is correct?
Actually, I was searching on Google and got scared looking at the link below:
Query help when using audit table
The person who answered a similar query in that post seems to be very good with SQL, and compared with his answer my approach looks quite naive.
So I just want to know where I am wrong in my understanding.
Thanks a lot
Rather than fixing the query, I'd suggest instead using an update trigger aimed specifically at changes to that SSN column you're concerned about. The query you've supplied won't work because of the date comparison (as user2159471 has pointed out). But even after you get the query fixed, you'll still have to run it in order to see which SSNs have been updated.
Instead, use a SQL update trigger that, perhaps, inserts an entry into a third table each time an individual's SSN gets changed. Then you can look at that table any time you like, or run a report against it, to see whose SSN has been changed.
The trigger code looks like this:
CREATE TRIGGER MyCoolNewTrigger ON Individual
FOR UPDATE
AS
SET NOCOUNT ON
IF UPDATE(SSN)
BEGIN
    -- inserted holds the new values and deleted the old ones;
    -- joining them handles multi-row updates correctly
    INSERT INTO IndividualUpdateLog (NewSSN, OldSSN, ChangeDate)
    SELECT i.SSN, d.SSN, GETDATE()
    FROM inserted i
    INNER JOIN deleted d ON d.IndividualId = i.IndividualId
    WHERE ISNULL(i.SSN, '') <> ISNULL(d.SSN, '')
END
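The trigger assumes a log table along these lines already exists (the table name matches the INSERT above; the column types are illustrative):
CREATE TABLE dbo.IndividualUpdateLog (
    LogID INT IDENTITY(1,1) PRIMARY KEY,
    NewSSN VARCHAR(40),
    OldSSN VARCHAR(40),
    ChangeDate DATETIME NOT NULL
)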

MAX keyword taking a lot of time to select a value from a column

Well, I have a table which has 40,000,000+ records, but when I try to execute a simple query, it takes ~3 min to finish execution. Since I am using the same query in my C# solution, which needs to execute it 100+ times, the overall performance of the solution is deeply hit.
This is the query that I am using in a proc
DECLARE @Id bigint
SELECT @Id = MAX(ExecutionID) FROM ExecutionLog WHERE TestID = 50881
SELECT @Id
Any help to improve the performance would be great. Thanks.
What indexes do you have on the table? It sounds like you don't have anything even close to useful for this particular query, so I'd suggest trying to do:
CREATE INDEX IX_ExecutionLog_TestID ON ExecutionLog (TestID, ExecutionID)
...at the very least. Your query is filtering by TestID, so this needs to be the leading column in the composite index: if you have no index on TestID, then SQL Server will resort to scanning the entire table in order to find rows where TestID = 50881.
It may help to think of indexes on SQL tables like the hierarchical, multi-level indexes you'd find in the back of a big book. If you were looking for something, you'd manually look under 'T' for TestID, and then there would be a sub-heading under TestID for ExecutionID. Without an index entry for TestID, you'd have to read through the entire book looking for TestID, then see whether there's a mention of ExecutionID with it. This is effectively what SQL Server has to do.
If you don't have any indexes yet, then you'll find it useful to review all the queries that hit the table, and to ensure that one of the indexes you create is a clustered index (rather than non-clustered).
Try to re-work everything into something that works in a set based manner.
So, for instance, you could write a select statement like this:
;With OrderedLogs as (
Select ExecutionID,TestID,
ROW_NUMBER() OVER (PARTITION BY TestID ORDER By ExecutionID desc) as rn
from ExecutionLog
)
select * from OrderedLogs where rn = 1 and TestID in (50881, 50882, 50883)
This would then find the maximum ExecutionID for 3 different tests simultaneously.
You might need to store that result in a table variable or temp table, but hopefully you can instead keep building up a single, larger query that processes all of the results in one pass.
This is the sort of processing that SQL is meant to be good at - don't cripple the system by iterating through the TestIDs in your code.
If you need to pass many test IDs into a stored procedure for this sort of query, look at Table Valued Parameters.
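For example (a sketch; the type and procedure names are made up), the IN list above could be fed from a table-valued parameter:
CREATE TYPE dbo.TestIdList AS TABLE (TestID INT NOT NULL PRIMARY KEY)
GO
CREATE PROCEDURE dbo.GetLatestExecutions
    @TestIDs dbo.TestIdList READONLY
AS
BEGIN
    WITH OrderedLogs AS (
        SELECT ExecutionID, TestID,
            ROW_NUMBER() OVER (PARTITION BY TestID ORDER BY ExecutionID DESC) AS rn
        FROM ExecutionLog
        WHERE TestID IN (SELECT TestID FROM @TestIDs)
    )
    SELECT * FROM OrderedLogs WHERE rn = 1
END
GO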

Preferred way to access data within XML columns in SQL Server

Background
Recently I've started to use XML a lot more as a column type in SQL Server 2005. During a bit of downtime yesterday, I noticed that two of the link tables I use are really just in the way, and it bores me to tears having to write yet more supporting structure code for a couple of joins.
To actually generate the data for these two link tables, I pass in two XML fields to my stored procedure, which writes the main record, breaks the two XML variables down into #tables and inserts them into the actual tables with the new SCOPE_IDENTITY() from the master record.
After some thought, I decided to just do away with those tables altogether and store the XML in XML fields. Now, I understand there are some pitfalls here, like general querying performance and the fact that GROUP BY doesn't work on XML data, and the query is generally a bit of a mess, but overall I like that I can now work with XElement when I get the data back.
Also, this stuff isn't going to get changed. It's a one shot affair, so I don't have to worry about modification.
I am wondering about the best way to actually get at this data. A lot of my queries involve getting a master record based upon the criteria of a child or even a sub-child record. Most of the sprocs in the database do this, but on a far more elaborate scale, usually requiring UDFs and subqueries to work effectively. I have knocked up a trivial example to test querying some data...
INSERT INTO Customers VALUES ('Tom', '', '<PhoneNumbers><PhoneNumber Type="1" Value="01234 456789" /><PhoneNumber Type="2" Value="01746 482954" /></PhoneNumbers>')
INSERT INTO Customers VALUES ('Andy', '', '<PhoneNumbers><PhoneNumber Type="2" Value="07948 598348" /></PhoneNumbers>')
INSERT INTO Customers VALUES ('Mike', '', '<PhoneNumbers><PhoneNumber Type="3" Value="02875 482945" /></PhoneNumbers>')
INSERT INTO Customers VALUES ('Steve', '', '<PhoneNumbers></PhoneNumbers>')
Now I can see two ways of grabbing it.
Method 1
DECLARE @PhoneType INT
SET @PhoneType = 2
SELECT ct.*
FROM Customers ct
WHERE ct.PhoneNumbers.exist('/PhoneNumbers/PhoneNumber[@Type=sql:variable("@PhoneType")]') = 1
Really? sql:variable feels a bit unwholesome. However, it does work, though it's distinctly more difficult to access the data in a more meaningful way.
Method 2
SELECT ct.*, pt.PhoneType
FROM Customers ct
CROSS APPLY ct.PhoneNumbers.nodes('/PhoneNumbers/PhoneNumber') AS nums(pn)
INNER JOIN PhoneTypes pt ON pt.ID = nums.pn.value('./@Type[1]', 'int')
WHERE nums.pn.value('./@Type[1]', 'int') = @PhoneType
This is more like it. Already I can easily expand it to do joins and all other good stuff. I've used CROSS APPLY before on a table valued function, and it was very good. The execution plan for this as opposed to the previous query is seriously more advanced. Admittedly I haven't done any indexing and whatnot on these tables, but it's 97% of the entire batch cost.
Method 2 (expanded)
SELECT ct.ID, ct.CustomerName, ct.Notes, pt.PhoneType
FROM Customers ct
CROSS APPLY ct.PhoneNumbers.nodes('/PhoneNumbers/PhoneNumber') AS nums(pn)
INNER JOIN PhoneTypes pt ON pt.ID = nums.pn.value('./@Type[1]', 'int')
WHERE nums.pn.value('./@Type[1]', 'int') IN (SELECT ID FROM PhoneTypes)
Nice IN clause here. I can also do something like pt.PhoneType = 'Work'
Finally
So I'm essentially obtaining the results that I want, but is there anything I should be aware of when using this mechanism to interrogate small amounts of XML data? Will it fall down on performance during elaborate searches? And is the storage of such markup style data too much of an overhead?
Side note
I've used things like sp_xml_preparedocument and OPENXML in the past just to pass lists into sprocs, but this is like a breath of fresh air in comparison!
One approach we've taken for some of our key items of information stored inside an XML column is to "surface" them as computed, persisted properties on the "parent" table. This is done using a little stored function.
It works great, because the value is computed only when the XML changes; as long as it's not changing, there's no recomputation, and the value is stored in the table like any other column.
It's also great since it can be indexed! So if you're searching and/or joining on such a field - that works like a charm!
So you basically need a stored function along the lines of this:
CREATE FUNCTION [dbo].[GetPhoneNo1] (@DataXML XML)
RETURNS VARCHAR(50)
WITH SCHEMABINDING
AS BEGIN
    DECLARE @result VARCHAR(50)
    SELECT
        @result = @DataXML.value('(/PhoneNumbers/PhoneNumber[@Type="1"]/@Value)[1]', 'VARCHAR(50)')
    RETURN @result
END
If you don't have a phone number of type 1, you'll just get back a NULL.
Then, you need to extend your parent table with a computed, persisted column:
ALTER TABLE dbo.Customers
    ADD PhoneNumberType1 AS dbo.GetPhoneNo1(PhoneNumbers) PERSISTED
As you can see, it works just fine for single entries, but unfortunately you cannot surface a whole list of properties. But if you have some key items, like IDs or something, that you expect most of your rows to have, this can be a very nice and slick way to get at that information more easily and more efficiently.
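And because the column is persisted, it can be indexed like any other column (the index name here is just illustrative):
CREATE NONCLUSTERED INDEX IX_Customers_PhoneNumberType1
    ON dbo.Customers (PhoneNumberType1)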
