I've come from application development and have been thrust into web development, and I'm getting my head around asynchronous data requests/returns and how to handle them.
I need to make a number of SQL requests, and I thought the best way to manage which ones are returned would be to insert a UUID or something similar into the returned SQL table.
Also, my SQL is pretty basic in general, but I want to add an external value into my returned table, where @ext would be the external data added in from the original request.
SELECT *
FROM
@ext AS uuid,
dbo.Orders
WHERE ....
expected return table
uuid: 12234
customer: jack
orderNo: 774
postAddy: 123 Albert St
...
The error I'm always getting is "Must declare the table variable @ext".
Is this the right approach or am I just doing something dumb?
The error message you are getting is telling you that you haven't declared the table variable @ext. This is because you've used a variable name (with the @ prefix) in the FROM clause, where a table or other table-like object (i.e. a table, view, table variable, TVF, etc.) is expected.
The @ext variable appears to be a scalar (single-valued) variable, so it isn't recognised in the FROM clause. You should try something like this instead:
SELECT
-- scalar values and column names / aliases go here
@ext AS uuid, *
FROM
-- only tables, views, table variables, TVF's etc go here
dbo.Orders
WHERE ....
Note that if your query returns multiple rows, they will all have the same value for uuid. This may or may not be desirable, and there may be better ways to achieve what you want, in terms of managing the data that is returned from multiple queries, but this is best posed in another question once you have a working example.
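For illustration, here is a minimal sketch of how that might look end to end: the application supplies (or generates) an identifier, passes it in as a parameter, and the query simply echoes it back on every row. The @requestId name, the NEWID() stand-in and the WHERE clause are assumptions, purely for illustration.
DECLARE @requestId UNIQUEIDENTIFIER = NEWID(); -- in practice this value would come from the calling application
SELECT
@requestId AS uuid, -- echoed on every row so the caller can match this result set to its request
o.*
FROM
dbo.Orders AS o
WHERE
o.orderNo = 774; -- assumed filter, for illustration only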
Make sure you know what @ext actually is and how to reference it properly.
If it's a scalar value, you can use it in expressions:
DECLARE @ext INT = 5
SELECT
@ext AS ScalarValue,
@ext + 10 AS ScalarOperation,
@ext + S.SomeColumn AS ScalarOperationWithTableColumn
FROM
SomeTable AS S
If it's a table variable, you can reference it as a table (as in your example):
DECLARE @ext TABLE (
FirstValue INT,
SecondValue VARCHAR(100))
INSERT INTO @ext (
FirstValue,
SecondValue)
VALUES
(10, 'SomeText'),
(20, 'AnotherText')
SELECT
E.FirstValue,
E.SecondValue
FROM
@ext AS E
/*
LEFT JOIN ....
WHERE
....
*/
I need to create a function in SQL to check data in variables or parameters as below:
@Category as varchar(50) = 'ABC,DEF'
@Value as varchar(50) = '1,2'
It should compare the @Category value with Category in the table and return the matching value from the parameter.
JOB TABLE:
JOB CATEGORY
123 ABC
234 DEF
234 SSS
Select JobNo, FUNCTION(@Category, @Value, CATEGORY) from JOB
FINAL RESULTS
JOB VALUE
123 1
234 2
234 0
If the category matches, return the value from the parameter; otherwise return 0.
If you can't use a static lookup table as mentioned in the comments (for example, perhaps the mapping needs to be dynamic based on data supplied by the application), then this looks like a job for a table valued parameter.
Right now you have two parameters, but it seems to me that the values in the parameters are related. That is to say, right now you have @category = 'ABC,DEF' and @value = '1,2', but I think you intend each "element" in the comma-delimited set of "categories" to associate with the "element" in the comma-delimited set of "values" that is in the same position.
Right now the design is brittle, because what would happen if I used parameters like this: @category = 'ABC,DEF,GHI,JKL', @value = '1'?
So, you can make your code more durable, and use the sort of "join-based" lookup table solution being recommended to you in the comments, by using a function that takes a table valued parameter argument. To do this, you first have to create the table valued parameter as a type in your database. We then accept a parameter of that type, and join onto it. In the solution below I have "guessed" at datatypes for category and value that seem reasonable based on the sample data in your question.
Also, I've kept the "structure" of your solution - i.e. the function is written in such a way that it can be "applied" against every row in jobs individually. We don't have to do it this way. We could instead just do the whole query inside the function (i.e. including the join to the job table), but perhaps you want to use the function against other tables that also have a category column? I won't try to second-guess the overall design here. But I will switch the function to an inline table valued function (which returns one row with one column) rather than a scalar function, for performance reasons.
-- schema and data to match your question
create table dbo.job (job int, category char(3));
insert dbo.job(job, category) values (123, 'ABC'), (234, 'DEF'), (234, 'SSS');
go
-- solution
create type dbo.CategoryValues as table
(
category char(3) unique,
[value] int
);
go
create or alter function dbo.MapCategory(@category char(3), @map dbo.CategoryValues readonly)
returns table as return
(
select [value] from @map where category = @category
);
go
-- to call the function we need to pass a parameter of type
-- dbo.CategoryValues with the mappings we desire
declare @map dbo.CategoryValues;
insert @map values ('ABC', 1), ('DEF', 2);
-- outer apply the function to each row in the job table.
select j.job, [value] = isnull(v.[value], 0)
from dbo.job j
outer apply dbo.MapCategory(j.category, @map) v
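With the sample data above, this returns the result set shown in the question:
JOB VALUE
123 1
234 2
234 0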
I have a lookup table (LUT) with a list of values. I need to filter on a value from the LUT in a simple WHERE condition. It works for all of the table's values except one, and I don't know why. I have tried using the TRIM and LOWER functions to change the string, but nothing helped. Does anyone have the same experience? Why does it work for all table values except one? My code:
SELECT * FROM "PossibleNewGMCIssues" WHERE "gmcIssue" = 'Suspended account for policy violation'
Thank you in advance.
It's hard for anybody to say without seeing the strings themselves. They probably look similar but have different Unicode values. You can convert them to hex values to see where they differ by using hex_encode.
Below I create a table with two strings that look the same but aren't: one contains a plain hyphen and the other an em-dash.
-- Create a table with two columns with strings in them that look the same but arent
create or replace transient table test_table as (
select 'a-string'::string col1, 'a—string'::string col2
);
-- This returns 0 results
select * from test_table where col1=col2;
-- You can tell that the strings are different by checking the hex representation of them
select hex_encode(col1), hex_encode(col2)
from test_table
;
-- The above returns:
-- +----------------+--------------------+
-- |HEX_ENCODE(COL1)|HEX_ENCODE(COL2) |
-- +----------------+--------------------+
-- |612D737472696E67|61E28094737472696E67|
-- +----------------+--------------------+
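If the culprit turns out to be a stray character like this, a plain REPLACE is usually enough to clean the data up. A minimal sketch, assuming the em-dash found above is the only offender:
-- normalise the em-dash to a plain hyphen so the two columns compare equal
update test_table
set col2 = replace(col2, '—', '-');
-- the comparison now returns 1 row
select * from test_table where col1 = col2;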
I have two tables (A and B). I want to implement logic like this in table B:
[Bank] AS (iif( Some_Statement,
(SELECT [Money] FROM [ATM] WHERE [B].[Currency] = [A].[Currency] ),
[NoMoney]
)
)
I get an error: Subqueries are not allowed in this context. Only scalar expressions are allowed.
Is there a way to implement such logic when creating the tables? It doesn't look hard.
After reading the discussion in the comments, I get the feeling that this was going in the wrong direction. That might be because of your very direct question about IIF()...
If I understand correctly, you are trying to add a computed column to your table. Something along these lines:
DECLARE @tblTest TABLE (ID INT IDENTITY
,SomeValue VARCHAR(100)
,[Test] AS IIF(SomeValue IS NOT NULL, CONCAT(SomeValue,'Blah'), 'Default if null'));
INSERT INTO @tblTest(SomeValue) VALUES('Test');
INSERT INTO @tblTest(SomeValue) VALUES(NULL);
SELECT * FROM @tblTest;
The result
ID SomeValue Test
1 Test TestBlah
2 NULL Default if null
Now you want to get the value for the computed column not just from some simple scalar computation, but you want to pick it from a table.
Here I will try to simulate your issue. Next time it is up to you to do this yourself. Providing DDL, sample data and the expected output together with your own attempts is the best chance to get the answer you are waiting for.
This is the table B from where you want to get the value.
Later we will ask for 'Test' and not for 'xyz'.
CREATE TABLE tblB (ID INT IDENTITY
,SomeResultColumn VARCHAR(100)
,SomeConditionColumn VARCHAR(100));
INSERT INTO tblB(SomeResultColumn,SomeConditionColumn) VALUES('not wanted','xyz')
,('wanted','Test');
GO
--This is what you were trying to do, but I have commented it out because of the error you got.
--A computed column does not allow for a SELECT. This is not bound to the IIF()
--CREATE TABLE tblA (ID INT IDENTITY
-- ,SomeValue VARCHAR(100)
-- ,[Test] AS IIF(SomeValue IS NOT NULL,(SELECT b.SomeResultColumn FROM tblB b WHERE b.SomeConditionColumn=SomeValue),'Default if null'));
--GO
--But what we can do - and the error message says as much - is provide a scalar expression:
--A Scalar Function is exactly this:
CREATE FUNCTION dbo.GetMyComputedColumn(#Condition VARCHAR(100))
RETURNS VARCHAR(100) AS
BEGIN
RETURN (SELECT b.SomeResultColumn --You might use `TOP 1` to ensure a scalar result
FROM tblB b
WHERE b.SomeConditionColumn=#Condition);
END
GO
--We can use this function in the IIF():
CREATE TABLE tblA (ID INT IDENTITY
,SomeValue VARCHAR(100)
,[Test] AS IIF(SomeValue IS NOT NULL,dbo.GetMyComputedColumn(SomeValue),'Default if null'));
GO
INSERT INTO tblA(SomeValue) VALUES('Test');
INSERT INTO tblA(SomeValue) VALUES(NULL);
SELECT * FROM tblA;
The result
ID SomeValue Test
1 Test wanted
2 NULL Default if null
Should you do this?
The question of whether this is a good idea is something completely different.
Besides the fact that scalar functions are known to be bad performers, the main question is: why?
Do you need a persistent default value? In that case a trigger would be a better choice... Or you can use an insert statement with the computed value directly. I'm afraid you are mixing up the concepts of a computed column and a default constraint. The computed column cannot be changed...
If you want to compute this value whenever you fetch data from this table, it would be much better to use a VIEW or an iTVF, where you simply join the needed value to your result set.
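For illustration, a minimal sketch of the VIEW alternative, reusing the sample tables from above (the view name is an assumption; in this design tblA would not need the computed column at all):
CREATE VIEW vwTblA
AS
SELECT a.ID,
a.SomeValue,
--the LEFT JOIN plus ISNULL covers both the no-match case and the NULL case
[Test] = ISNULL(b.SomeResultColumn, 'Default if null')
FROM tblA a
LEFT JOIN tblB b ON b.SomeConditionColumn = a.SomeValue;
GO
SELECT * FROM vwTblA;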
If what you are saying is that you want a value from table "B" if it matches a value from your main table "A", and a literal as a default when there is no matching value in "B", try this:
select
Bank,
coalesce( (SELECT Money FROM ATM WHERE Currency = A.Currency), 'NoMoney' ) as 'Type'
from
ATM A
where
....
If the select of [Money] returns null, the literal 'NoMoney' will be used.
If need be, you can have a select on both sides of the coalesce; the right side need not be a literal.
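A minimal sketch of that variant; the Defaults table and FallbackMoney column here are assumptions, purely for illustration:
select
Bank,
coalesce( (SELECT Money FROM ATM WHERE Currency = A.Currency),
(SELECT FallbackMoney FROM Defaults WHERE Currency = A.Currency) ) as 'Type'
from
ATM A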
I have data inside a table's column. I SELECT DISTINCT on that column, and I also applied LTRIM(RTRIM(col_name)) in the SELECT, but I am still getting duplicate records.
How can I identify why this is happening, and how can I avoid it?
I tried the RTRIM, LTRIM and UPPER functions. Still no help.
Query:
select distinct LTRIM(RTRIM(serverstatus))
from SQLInventory
Output:
Development
Staging
Test
Pre-Production
UNKNOWN
NULL
Need to be decommissioned
Production
Pre-Production
Decommissioned
Non-Production
Unsupported Edition
Looks like there's a unicode character in there, somewhere. I copied and pasted the values out initially as a varchar, and did the following:
SELECT DISTINCT serverstatus
FROM (VALUES('Development'),
('Staging'),
('Test'),
('Pre-Production'),
('UNKNOWN'),
('NULL'),
('Need to be decommissioned'),
('Production'),
(''),
('Pre-Production'),
('Decommissioned'),
('Non-Production'),
('Unsupported Edition'))V(serverstatus);
This, interestingly, returned the values below:
Development
Staging
Test
Pre-Production
UNKNOWN
NULL
Need to be decommissioned
Production
Pre-Produc?tion
Decommissioned
Non-Production
Unsupported Edition
Note that one of the values is Pre-Produc?tion, meaning that there is a unicode character between the c and t.
So, let's find out what it is:
SELECT 'Pre-Production', N'Pre-Production',
UNICODE(SUBSTRING(N'Pre-Production',11,1));
The UNICODE function returns back 8203, which is a zero-width space. I assume you want to remove these, so you can update your data by doing:
UPDATE SQLInventory
SET serverstatus = REPLACE(serverstatus, NCHAR(8203), N'');
Now your first query should work as you expect.
(I also suggest that you might therefore want a lookup table for your statuses, with a foreign key, so that this can't happen again.)
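A minimal sketch of that suggestion follows; the table, column and constraint names (and the varchar(50) length) are assumptions, and the existing serverstatus values would need to be cleaned first so that the constraint can be created:
CREATE TABLE dbo.ServerStatus (StatusName varchar(50) NOT NULL CONSTRAINT PK_ServerStatus PRIMARY KEY);
INSERT dbo.ServerStatus (StatusName)
VALUES ('Development'), ('Staging'), ('Test'), ('Pre-Production'), ('Production'),
('Decommissioned'), ('Non-Production'), ('Need to be decommissioned'),
('Unsupported Edition'), ('UNKNOWN');
ALTER TABLE dbo.SQLInventory
ADD CONSTRAINT FK_SQLInventory_ServerStatus
FOREIGN KEY (serverstatus) REFERENCES dbo.ServerStatus (StatusName);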
I deal with this type of thing all the time. For stuff like this, NGrams8K, PatReplace8K and PATINDEX are your best friends.
Putting what you posted into a table variable, we can analyze the problem:
DECLARE @table TABLE (txtID INT IDENTITY, txt NVARCHAR(100));
INSERT @table (txt)
VALUES ('Development'),('Staging'),('Test'),('Pre-Production'),('UNKNOWN'),(NULL),
('Need to be decommissioned'),('Production'),(''),('Pre-Production'),('Decommissioned'),
('Non-Production'),('Unsupported Edition');
This query will identify items with characters other than A-Z, spaces and hyphens:
SELECT t.txtID, t.txt
FROM @table AS t
WHERE PATINDEX('%[^a-zA-Z -]%',t.txt) > 0;
This returns:
txtID txt
----------- -------------------------------------------
10 Pre-Production
To identify the bad character we can use NGrams8k like this:
SELECT t.txtID, t.txt, ng.position, ng.token -- ,UNICODE(ng.token)
FROM @table AS t
CROSS APPLY dbo.NGrams8K(t.txt,1) AS ng
WHERE PATINDEX('%[^a-zA-Z -]%',ng.token)>0;
Which returns:
txtID txt position token
------ ----------------- -------------------- ---------
10 Pre-Production 11 ?
PatReplace8K makes cleaning up stuff like this quite easy and quick. First note this query:
SELECT OldString = t.txt, p.NewString
FROM @table AS t
CROSS APPLY dbo.patReplace8K(t.txt,'%[^a-zA-Z -]%','') AS p
WHERE PATINDEX('%[^a-zA-Z -]%',t.txt) > 0;
Which returns this on my system:
OldString NewString
------------------ ----------------
Pre-Produc?tion Pre-Production
To fix the problem you can use patreplace8K like this:
UPDATE t
SET txt = p.newString
FROM @table AS t
CROSS APPLY dbo.patReplace8K(t.txt,'%[^a-zA-Z -]%','') AS p
WHERE PATINDEX('%[^a-zA-Z -]%',t.txt) > 0;
I'm using SQL Server 2014. My request I believe is rather simple. I have one table containing a field holding a date value that is stored as VARCHAR, and another table containing a field holding a date value that is stored as INT.
The date value in the VARCHAR field is stored like this: 2015M01
The data value in the INT field is stored like this: 201501
I need to compare these tables against each other using EXCEPT. My thought process was to somehow extract or TRIM the "M" out of the VARCHAR value and see if it would let me compare the two. If anyone has a better idea, such as using CAST to change the date formats, feel free to suggest that as well.
I am also concerned that even after extracting the "M" from the VARCHAR, the comparison may still fail, since one value will still be VARCHAR and the other INT. If it is possible to convert on the fly in a T-SQL query, that would be great advice as well. :)
REPLACE the string and then CONVERT to integer
SELECT A.*, B.*
FROM TableA A
INNER JOIN
(SELECT intField
FROM TableB
) as B
ON CONVERT(INT, REPLACE(A.varcharField, 'M', '')) = B.intField
Since you say you already have the query and are using EXCEPT, you can simply change the definition of that one "date" field in the query containing the VARCHAR value so that it matches the INT format of the other query. For example:
SELECT Field1, CONVERT(INT, REPLACE(VarcharDateField, 'M', '')) AS [DateField], Field3
FROM TableA
EXCEPT
SELECT Field1, IntDateField, Field3
FROM TableB
HOWEVER, while I realize that this might not be feasible, your best option, if you can make this happen, would be to change how the data in the table with the VARCHAR field is stored so that it is actually an INT in the same format as the table with the data already stored as an INT. Then you wouldn't have to worry about situations like this one.
Meaning:
Add an INT field to the table with the VARCHAR field.
Do an UPDATE of that table, setting the INT field to the string value with the M removed (see the sketch after these steps).
Update any INSERT and/or UPDATE stored procedures used by external services (app, ETL, etc) to do that same M removal logic on the way in. Then you don't have to change any app code that does INSERTs and UPDATEs. You don't even need to tell anyone you did this.
Update any "get" / SELECT stored procedures used by external services (app, ETL, etc) to do the opposite logic: convert the INT to VARCHAR and add the M on the way out. Then you don't have to change any app code that gets data from the DB. You don't even need to tell anyone you did this.
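A minimal sketch of the first two steps; the table and column names (TableA, VarcharDateField, IntDateField) are assumptions carried over from the earlier example:
ALTER TABLE dbo.TableA ADD IntDateField INT NULL;
GO
-- backfill the new column by stripping the M and converting to INT
UPDATE dbo.TableA
SET IntDateField = CONVERT(INT, REPLACE(VarcharDateField, 'M', ''));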
This is one of many reasons that having a stored procedure API to your DB is quite handy. I suppose an ORM can just be rebuilt, but you still need to recompile, even if all of the code references are automatically updated. Making a datatype change (or even moving a field to a different table, or replacing a field with a simple CASE statement) "behind the scenes", and masking it so that any code outside of your control doesn't know a change happened, is not nearly as difficult as most people might think.
I have done all of these operations (datatype change, move a field to a different table, replace a field with simple logic, etc.) and it buys you a lot of time until the app code can be updated. That might be another team who handles that. Maybe their schedule won't allow for making any changes in that area (plus testing) for 3 months. OK. It will be there waiting for them when they are ready. And if there are several areas to update, they can be done one at a time. You can even create new stored procedures to run in parallel, so that any updated app code can pass the proper INT datatype as the input parameter. And once all references to the VARCHAR value are gone, you can delete the original versions of those stored procedures.
If you want everything in the first table that is not in the second, you might consider something like this:
select t1.*
from t1
where not exists (select 1
from t2
where cast(replace(t1.varcharfield, 'M', '') as int) = t2.intfield
);
This should be close enough to EXCEPT for your purposes.
I should add that you might need to include other columns in the where statement. However, the question only mentions one column, so I don't know what those are.
You could create a persisted (indexed) view on the table with the char column, with a computed column where the M is removed. Then you could JOIN the view to the table containing the INT column.
CREATE VIEW dbo.PersistedView
WITH SCHEMABINDING
AS
SELECT ConvertedDateCol = CONVERT(INT, REPLACE(VarcharCol, 'M', ''))
--, other columns including the PK, etc
FROM dbo.TablewithCharColumn;
CREATE UNIQUE CLUSTERED INDEX IX_PersistedView
ON dbo.PersistedView(<the PK column>);
SELECT *
FROM dbo.PersistedView pv
INNER JOIN dbo.TableWithIntColumn ic ON pv.ConvertedDateCol = ic.IntDateCol;
If you provide the actual details of both tables, I will edit my answer to make it clearer.
A persisted view with a computed column will perform far better on the SELECT statement where you join the two columns compared with doing the CONVERT and REPLACE every time you run the SELECT statement.
However, a persisted view will slightly slow down inserts into the underlying table(s), and will prevent you from making DDL changes to the underlying tables.
If you're looking to not persist the values via a schema-bound view, you could create a non-persisted computed column on the table itself, then create a non-clustered index on that column. If you are using the computed column in WHERE or JOIN clauses, you may see some benefit.
By way of example:
CREATE TABLE dbo.PCT
(
PCT_ID INT NOT NULL
CONSTRAINT PK_PCT
PRIMARY KEY CLUSTERED
IDENTITY(1,1)
, SomeChar VARCHAR(50) NOT NULL
, SomeCharToInt AS CONVERT(INT, REPLACE(SomeChar, 'M', ''))
);
CREATE INDEX IX_PCT_SomeCharToInt
ON dbo.PCT(SomeCharToInt);
INSERT INTO dbo.PCT(SomeChar)
VALUES ('2015M08');
SELECT SomeCharToInt
FROM dbo.PCT;
Results:
SomeCharToInt
-------------
201508