Auto computed column in SQL Server

I want to alter a table and add a column that is the sum of two other columns, and have this column computed automatically when I add new data.

The syntax for a computed column specification is as follows:
column-name AS formula
If the column values are to be stored within the database, the PERSISTED keyword should be added to the syntax, as follows:
column-name AS formula PERSISTED
You didn't give an example, but if you wanted to add a column "sumOfAAndB" that calculates the sum of A and B, the syntax would look like:
ALTER TABLE tblExample ADD sumOfAAndB AS A + B
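If you want the value stored rather than computed on read, the same ALTER with the PERSISTED keyword would look like this (a sketch; it assumes A and B are plain numeric columns so the expression is deterministic):
ALTER TABLE tblExample ADD sumOfAAndB AS (A + B) PERSISTED;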
Hope that helps.

Rather than adding this column to the table, I would recommend using a View to calculate the extra column, and read from that.
Here is a tutorial on how to create views:
http://odetocode.com/Articles/299.aspx
Your view query would look something like:
SELECT
ColumnA, ColumnB, (ColumnA+ColumnB) as ColumnC
FROM
[TableName]
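Wrapped into an actual view definition (a sketch; the view name vwTableWithSum is made up, the table and column names come from the query above):
CREATE VIEW vwTableWithSum AS
SELECT
    ColumnA,
    ColumnB,
    (ColumnA + ColumnB) AS ColumnC
FROM [TableName];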

You can use a view, but you may want to use a PERSISTED computed value if you don't want to incur the cost of computing the value each time you access the view.
e.g.
CREATE TABLE T1 (
    a INT,
    b INT,
    operator CHAR,
    c AS CASE operator
             WHEN '+' THEN a + b
             WHEN '-' THEN a - b
             ELSE a * b
         END
    PERSISTED
);
See the SQL Server documentation on computed columns, assuming you're using SQL Server of course.
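A quick check of the T1 example above (a sketch, not part of the original answer; the computed values follow from the CASE expression):
INSERT INTO T1 (a, b, operator) VALUES (5, 3, '+'), (5, 3, '-'), (5, 3, '*');
SELECT a, b, operator, c FROM T1;   -- c is 8, 2 and 15 respectively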

How to create a computed column when creating a new table:
CREATE TABLE PRODUCT
(
    WORKORDERID INT NULL,
    ORDERQTY INT NULL,
    [ORDERVOL] AS CAST
    (
        CASE WHEN ORDERQTY < 10 THEN 'SINGLE DIGIT'
             WHEN ORDERQTY >= 10 AND ORDERQTY < 100 THEN 'DOUBLE DIGIT'
             WHEN ORDERQTY >= 100 AND ORDERQTY < 1000 THEN 'THREE DIGIT'
             ELSE 'SUPER LARGE'
        END AS NVARCHAR(100)
    )
)
INSERT INTO PRODUCT VALUES (1,1),(2,-1),(3,11)
SELECT * FROM PRODUCT
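For those three rows, the computed ORDERVOL values follow directly from the CASE expression:
WORKORDERID  ORDERQTY  ORDERVOL
1            1         SINGLE DIGIT
2            -1        SINGLE DIGIT
3            11        DOUBLE DIGIT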

Alternatively, you can insert all the data into the table first, and then update a total column like this:
UPDATE table_name SET total = mark1 + mark2 + mark3;
Hope this helps.

Related

Snowflake - How to split a tuple-of-dates column when using it in a WHERE clause

We have a scenario, applied to the table below (sample provided), where we need to split the date_clmn column of varchar type into its individual dates.
tableA
emp_id   date_clmn
123      ("2021-01-01", "2021-03-03")
456      ("2021-02-01", "2021-04-03")
So, we have a scenario where this table is maintained through DML operations. For example:
DELETE FROM tableA WHERE cast(BEGIN(date_clmn) as DATE FORMAT 'YYYY-MM-DD') =current_date AND END(date_clmn) IS UNTIL_CHANGED ;
We need to convert this to Snowflake syntax, taking the first value and the second value from the date_clmn column, where date_clmn is of varchar datatype.
So, in Snowflake, how do we get the first and second value from every row when referencing the column in the filter clause?
This is effectively the Teradata PERIOD data type, which we want to emulate in Snowflake.
If there are always just 2 dates in there, easiest to me would be:
to_date(split_part(column_1, ',', 1)) as date_1
to_date(split_part(column_1, ',', 2)) as date_2
You might have to clean up the parentheses and quotes afterwards with REPLACE.
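Putting it together (a sketch only; it cleans up with REPLACE first and then splits, and the table and column names are taken from the question):
SELECT
    emp_id,
    TO_DATE(TRIM(SPLIT_PART(REPLACE(REPLACE(REPLACE(date_clmn, '(', ''), ')', ''), '"', ''), ',', 1))) AS date_1,
    TO_DATE(TRIM(SPLIT_PART(REPLACE(REPLACE(REPLACE(date_clmn, '(', ''), ')', ''), '"', ''), ',', 2))) AS date_2
FROM tableA;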
Below is a script that pulls the needed data using SPLIT_TO_TABLE:
https://docs.snowflake.com/en/sql-reference/functions/split_to_table.html
Setup Data:
create or replace table splittable (a number, v varchar);
insert into splittable (a, v) values (456, '("2021-02-01", "2021-04-03")');
Output Query:
select a, regexp_replace(value,'\\("|"|\\)') as value
from splittable, lateral split_to_table(splittable.v, ',');
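And applied in a filter clause, which is what the question asks about (a sketch reusing the same cleanup pattern; tableA and date_clmn are the question's names):
SELECT *
FROM tableA
WHERE TO_DATE(TRIM(SPLIT_PART(REGEXP_REPLACE(date_clmn, '\\("|"|\\)'), ',', 1))) = CURRENT_DATE;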

Can table columns be created by copying the datatype from another column (e.g. like %TYPE in Oracle)?

For example, this is possible in Oracle. I wanted to know if snowflake has a similar concept.
CREATE TABLE Purchases
(
purchase_date calendar.date%type,
customer_nr customer.customer_nr%type,
purchase_amount numeric(10,2)
)
I'm afraid there's no way to do that right now. You can use system$typeof to check for a column type, but that can't be used in a create table statement.
The referenceability that you have in your example is not available. You can, however, build a table by joining one or more tables and/or views together, and build the column list from columns of any of the joined tables plus any that you explicitly add to the list. The key is to join on 1 = 2 (i.e. FALSE).
Example
CREATE OR REPLACE TEMP TABLE TMP_X
AS
SELECT A."name" AS NAME
,A."owner" AS OWNER
,B.STG_ARRAY
,NULL::NUMERIC(10,2) AS PURCHASE_AMOUNT
,NULL AS COMMENT
FROM TABLE_A A
JOIN TABLE_B B
ON 1 = 2
;
NAME - takes datatype from A."name" column
OWNER - takes datatype from A."owner" column
STG_ARRAY - takes datatype from B.STG_ARRAY column
PURCHASE_AMOUNT - takes the datatype explicitly specified NUMERIC(10,2)
COMMENT - no explicit datatype -- takes default datatype of VARCHAR(16777216)
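Applied to the Purchases example from the question (a sketch; it assumes tables calendar and customer exist with the referenced columns, and the quoting/case of "date" may need adjusting to match how that column was created):
CREATE OR REPLACE TABLE Purchases
AS
SELECT CAL."date"           AS PURCHASE_DATE    -- takes its type from calendar."date"
      ,CUST.CUSTOMER_NR     AS CUSTOMER_NR      -- takes its type from customer.customer_nr
      ,NULL::NUMERIC(10,2)  AS PURCHASE_AMOUNT  -- explicit type
FROM CALENDAR CAL
JOIN CUSTOMER CUST
  ON 1 = 2   -- always false, so the table is created empty but with the inherited column types
;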

SQL Server changes the value in the float column when converting to varchar

I have a column in my table that is of type float. The table was generated automatically when I imported spreadsheet (Excel) data into my database. I wish to change this column from float to varchar, but when I try, I get an error:
'tblInvoices' table
Unable to create index 'IX_tblInvoices'.
The CREATE UNIQUE INDEX statement terminated because a duplicate key was found for the object name 'dbo.tblInvoices' and the index name 'IX_tblInvoices'.
The duplicate key value is (1.00001e+006). The statement has been terminated.
It is a unique column and is set that way (it is not the primary key, for reasons). I have already run queries to search for and delete duplicate values, but there are none. The query I ran is as follows:
WITH CTE AS
(
    SELECT
        Invoice,
        RN = ROW_NUMBER() OVER (PARTITION BY Invoice ORDER BY Invoice)
    FROM
        dbo.tblInvoices
)
DELETE FROM CTE
WHERE RN > 1
So the value within the Invoice column is 1000010 and when I run the following query a single row is found.
SELECT *
FROM [TradeReceivables_APR_IFRS9].[dbo].[tblInvoices]
WHERE Invoice = 1.00001e+006
Note that I have searched for the value in the error, 1.00001e+006, and not 1000010.
So my question is: why does the DBMS do this? Why does it change the value like that? When I remove that column, the same thing happens with another column, and so on (about 40,000 rows in total). How can I change the column from float to varchar without changing the data and without getting errors?
Any help will be greatly appreciated!
It seems that the field holds integer values, so you can cast it to BIGINT before casting to VARCHAR:
DECLARE @Invoice AS FLOAT = 1.00001e+006
PRINT CAST(@Invoice AS VARCHAR)                  -- Result: 1.00001e+006
PRINT CAST(CAST(@Invoice AS BIGINT) AS VARCHAR)  -- Result: 1000010
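To apply that to the table itself, one option is a sketch along these lines (InvoiceText is a hypothetical new column; you would drop or rename columns afterwards as needed):
ALTER TABLE dbo.tblInvoices ADD InvoiceText VARCHAR(20);
GO
UPDATE dbo.tblInvoices
SET InvoiceText = CAST(CAST(Invoice AS BIGINT) AS VARCHAR(20));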

Is there a way to cast a group of columns to the same NUMBER(p,s) type so that they can be UNPIVOT in Snowflake SQL?

I have a table with several numeric columns, all with different NUMBER(p,s) types. The table was created with a CREATE TABLE xx as (select date, SUM(x), SUM(y) from xxx GROUP BY date). It seems that Snowflake decided the minimum NUMBER(precision, scale) required to store each resulting column, which results in a different type for each column.
Now I want to UNPIVOT those columns, and Snowflake complains: SQL compilation error: The type of column 'xxxxx' conflicts with the type of other columns in the UNPIVOT list.
I created this minimal table to illustrate the problem:
create or replace temporary table temp1(id number, sales number(10,0), n_orders number(20,0)) as (
select * from (values
(1, 1, 2 )
,(2, 3, 4)
,(3, 5, 6)
)
); -- imagine that temp1 was created via a select AGG1, AGG2 FROM XXX GROUP BY YYY
describe table temp1;
--
name      type          kind    null?  primary key  unique key
ID        NUMBER(38,0)  COLUMN  Y      N            N
SALES     NUMBER(10,0)  COLUMN  Y      N            N
N_ORDERS  NUMBER(20,0)  COLUMN  Y      N            N
select *
from temp1 UNPIVOT(measure_value for measure_name in (sales, n_orders)); -- won't work because SALES is NUMBER(10,0) and N_ORDERS is NUMBER(20,0)
Right now my workaround is to cast each column with an explicit TO_NUMBER(x, 38, 0) AS x, like so:
with t1 as (
select
id
,TO_NUMBER(sales,38,0) as sales
,TO_NUMBER(n_orders, 38,0) as n_orders
from temp1
)
select * from t1 UNPIVOT(measure_value for measure_name in (sales, n_orders));
This is less than optimal because there are many columns in the actual table that I'm using.
I don't want to recreate the table (the aggregations take a long time to compute), so what are my options?
Is there any other syntax that I can use to cast a list of columns in bulk?
Your best option is to modify the already created table (without having to rerun the costly aggregation), like so:
alter table temp1 modify
sales set data type number(38,0)
,n_orders set data type number(38,0)
;
This way has two advantages:
you avoid typing the column name twice for each column: column_name set data type number(38,0) instead of TO_NUMBER(column_name, 38, 0) as column_name
it runs just once, instead of having to run a CTE before each UNPIVOT query.
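After the ALTER, the UNPIVOT from the question runs unchanged, with no per-column casts:
select *
from temp1 UNPIVOT(measure_value for measure_name in (sales, n_orders));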

TSQL: getting next available ID

Using SQL Server 2008, I have three tables: table A, table B and table C.
All have an ID column, but for tables A and B the ID column is an identity integer, while for table C the ID column is a varchar.
Currently a stored procedure takes a name parameter and, following certain logic, inserts into table A or table B, gets the identity, prefixes it with 'A' or 'B', then inserts into table C.
The problem is that the table C ID column potentially already contains these values, i.e. if the identity from table A is 2, there might already be 'A2', 'A3', 'A5' in the ID column of table C. How do I write a T-SQL query to identify the next available value in table C and then make sure table A/B are updated accordingly?
[Update]
These are the current steps:
1. depending on the input parameter, insert into table A or table B
2. initialize the seed value = @@IDENTITY
3. calculate the ID value to insert into table C by prefixing the seed value with 'A' or 'B'
4. look for a matching record in table C using the ID value from step 3; if no record is found, insert it, otherwise increase the seed value by 1 and repeat step 3
The issue is that in certain ranges there can be a huge block of existing values in the table C ID column, e.g. A3000 to A500000 already exist, and the query is extremely slow if it follows the existing logic. I need a way to smartly get the minimum available number (without the prefix).
It is hard to describe; I hope this makes more sense. I truly appreciate any help on this. Thanks in advance!
This should do the trick. It is a simple self-contained example that will work in SSMS; I even put the data out of order just in case. You would just swap in your table where @Data is, and change the identifier field to replace Id.
declare @Data table ( Id varchar(3) );
insert into @Data values ('A5'),('A2'),('B1'),('A3'),('B2'),('A4'),('A1'),('A6');

With a as
(
    Select
        Id
        , cast(right(Id, len(Id)-1) as int) as Pos
        , left(Id, 1) as TableFrom
    from @Data
)
select
    TableFrom
    , max(Pos) + 1 as NextNumberUp
from a
group by TableFrom
EDIT: If you do not want to keep hitting production data, you could amend what I wrote by adding this last part:
Select
    TableFrom
    , max(Pos) as LastPos
into #Temp
from a
group by TableFrom

select TableFrom, LastPos + 1
from #Temp
Regardless of whether this is a production environment, you are going to have to hit part of it at some point to get the data. If the datasets are not too large, say varchar(256) or less and only 5 million rows or less, you could dump that entire column from table C into a temp table. Honestly, query performance versus imports varies vastly from system to system.
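Since the question also mentions large contiguous blocks of existing IDs and wants the minimum available number rather than max + 1, a gap-finding variant along these lines might help (a sketch only; it reuses the parsed CTE above against the same @Data sample and assumes numbering starts at 1, i.e. it does not detect the case where 1 itself is free):
With a as
(
    Select
        Id
        , cast(right(Id, len(Id)-1) as int) as Pos
        , left(Id, 1) as TableFrom
    from @Data
)
select
    TableFrom
    , min(Pos) + 1 as FirstAvailable    -- smallest Pos whose successor is missing, plus 1
from a
where not exists (
        select 1
        from a as nxt
        where nxt.TableFrom = a.TableFrom
          and nxt.Pos = a.Pos + 1
      )
group by TableFrom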
Following your design, there shouldn't be any duplicates in table C, considering that the IDs in A and B are unique:
A | B | C
1 | 1 | A1
2 | 2 | A2
  |   | B1
  |   | B2
