Import data into multiple tables from an Excel sheet in SQL Server

I am able to load the Excel sheet's data into a DataTable and from there into a staging table, tempInput (SQL Fiddle).
Now I want to split that data into two tables, tbl1 and tbl2 (fiddle).
I am having trouble inserting into tbl2. Let me show you the tables first:
create table tempInput
(
question_text nvarchar(100),
description nvarchar(100),
option_1 nvarchar(20),
option_2 nvarchar(20),
option_3 nvarchar(20),
option_4 nvarchar(20),
right_option nvarchar(50)
)
create table tbl1
(
question_id int primary key identity(1,1),
name nvarchar(100),
description nvarchar(100)
)
create table tbl2
(
option_id int IDENTITY(1,1) primary key ,
option_text nvarchar(30),
is_right_option bit,
question_id int
)
Let's populate some sample data into tempInput.
Now I have to split tempInput's data into the two tables as follows (sample data and expected output are in the fiddle linked above).
How can I do that?
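One possible approach, sketched below under two assumptions (SQL Server 2008 or later, and right_option holding the text of the correct option): use MERGE with an OUTPUT clause, because OUTPUT on a MERGE can reference source columns and can therefore map each generated question_id back to the options of its source row.
DECLARE @map TABLE
(
question_id int,
option_1 nvarchar(20),
option_2 nvarchar(20),
option_3 nvarchar(20),
option_4 nvarchar(20),
right_option nvarchar(50)
);

MERGE INTO tbl1 AS t
USING tempInput AS s
ON 1 = 0 -- never matches, so every source row is inserted
WHEN NOT MATCHED THEN
INSERT (name, description)
VALUES (s.question_text, s.description)
OUTPUT inserted.question_id,
s.option_1, s.option_2, s.option_3, s.option_4, s.right_option
INTO @map;

-- Unpivot the four options into rows and flag the right one.
INSERT INTO tbl2 (option_text, is_right_option, question_id)
SELECT v.option_text,
CASE WHEN v.option_text = m.right_option THEN 1 ELSE 0 END,
m.question_id
FROM @map AS m
CROSS APPLY (VALUES (m.option_1), (m.option_2), (m.option_3), (m.option_4)) AS v(option_text);
If right_option stores something else (for example the option number), the CASE expression would need to change accordingly.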

Related

How to assign a permanent unique ID with T-SQL

Can someone let me know how to permanently assign a unique ID to a field?
I have the following table:
CREATE TABLE PrestigeCars.Reference.Staff
(
StaffName NVARCHAR(50) NULL,
ManagerID INT NULL,
Department NVARCHAR(50) NULL
) ON [PRIMARY]
GO
The following code adds a new ID field called 'myuniqueID' to the result set:
SELECT
Staff.StaffName
,Staff.ManagerID
,Staff.Department
,NEWID() AS myuniqueID
FROM Reference.Staff
This produces a result set with a fresh GUID on each row (screenshot omitted), but the values are regenerated on every run.
The problem is that I would like the generated unique IDs to become permanent.
Can someone let me know if that is possible?
CREATE TABLE PrestigeCars.Reference.Staff (
StaffName NVARCHAR(50) NULL
,ManagerID INT NULL
,Department NVARCHAR(50) NULL
, UniqueId NVARCHAR(255) NOT NULL default NEWID()
) ON [PRIMARY]
GO
Note that this only works when creating the table. If you want to alter an existing table, you first have to add the column allowing NULL, then fill in the values, and finally set it to NOT NULL.
Edit:
To add a column you need an ALTER TABLE statement, as mentioned in many other posts before:
ALTER TABLE PrestigeCars.Reference.Staff
ADD UniqueId NVARCHAR(255) NULL default NEWID()
Next you have to set the UniqueId for the existing rows:
UPDATE PrestigeCars.Reference.Staff
SET UniqueId = NEWID()
WHERE UniqueId IS NULL
Last but not least you should set the column to not null:
ALTER TABLE PrestigeCars.Reference.Staff
ALTER COLUMN UniqueId NVARCHAR(255) NOT NULL -- ALTER COLUMN requires the full type
You could add a unique index if you want to, but this should not be necessary.
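A variant worth noting (a sketch, not part of the original answer): NEWID() returns a UNIQUEIDENTIFIER, so the column can use that type directly, and a NOT NULL column with a default can be added in a single statement, since SQL Server populates existing rows from the default (a non-constant default such as NEWID() gives each row its own value):
ALTER TABLE PrestigeCars.Reference.Staff
ADD UniqueId UNIQUEIDENTIFIER NOT NULL DEFAULT NEWID();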

How to calculate a column from another table

I want the Customers_Balance column on TBL_CUSTOMERS to show by default the result of the stored procedure below.
TBL_CUSTOMERS holds the customer info and was created like this:
CREATE TABLE TBL_CUSTOMERS
(
Customers_ID int PRIMARY KEY,
Customers_Name varchar(100) NOT NULL,
Customers_Phone varchar(100),
Customers_Address varchar(100),
Customers_Web varchar(100),
Customers_Balance decimal(16,0) not null
);
TBL_CUSTOMERS_DETAILS holds the details of all customer transactions and was created like this:
CREATE TABLE TBL_CUSTOMERS_DETAILS
(
Customers_Details_ID int PRIMARY KEY,
Customers_ID int,
Customers_Details_Tybe varchar(50) not null,
Customers_Details_Date date not null,
Customers_Details_Amount decimal(16,0) not null
);
I have created a stored procedure to calculate the sum of a customer's transaction amounts, and it works fine. It was created like this:
CREATE PROC SP_SUM_CUSTOMERS_DETAILS_AMOUNT
@ID INT
AS
SELECT SUM(Customers_Details_Amount)
FROM TBL_CUSTOMERS_DETAILS
WHERE Customers_ID = @ID
Now, how can I make the Customers_Balance column on TBL_CUSTOMERS show the result of that stored procedure by default?
Materializing values that can be calculated from other materialized values is usually a bad idea, as it bears the risk of inconsistencies.
So you are best off dropping the Customers_Balance column in TBL_CUSTOMERS along with the procedure, and then creating a view which includes the customers' data and their balance. You can do so with a join and aggregation:
ALTER TABLE TBL_CUSTOMERS
DROP COLUMN Customers_Balance;
DROP PROCEDURE SP_SUM_CUSTOMERS_DETAILS_AMOUNT;
CREATE VIEW VW_CUSTOMERS
AS
SELECT C.Customers_ID,
C.Customers_Name,
C.Customers_Phone,
C.Customers_Address,
C.Customers_Web,
sum(CD.Customers_Details_Amount) Customers_Balance
FROM TBL_CUSTOMERS C
INNER JOIN TBL_CUSTOMERS_DETAILS CD
ON CD.Customers_ID = C.Customers_ID
GROUP BY C.Customers_ID,
C.Customers_Name,
C.Customers_Phone,
C.Customers_Address,
C.Customers_Web;
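One design note on this view: the INNER JOIN hides customers that have no transactions yet. If those should appear with a zero balance, the same view can use a LEFT JOIN instead (a sketch, with the other columns elided for brevity):
SELECT C.Customers_ID,
       C.Customers_Name,
       ISNULL(SUM(CD.Customers_Details_Amount), 0) AS Customers_Balance
FROM TBL_CUSTOMERS C
LEFT JOIN TBL_CUSTOMERS_DETAILS CD
       ON CD.Customers_ID = C.Customers_ID
GROUP BY C.Customers_ID, C.Customers_Name;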
You are looking for a computed column.
What you need to do is create a scalar function rather than a stored procedure (simply change your current stored procedure into a scalar function), and then use this function in your computed column. This gives you an automatically updated result in the computed column.
So, redoing your work would look something like this:
-- CREATE THE SCALAR FUNCTION FIRST
CREATE FUNCTION SUM_CUSTOMERS_DETAILS_AMOUNT (@ID INT)
RETURNS DECIMAL(16,0) -- matches the type of Customers_Details_Amount
AS
BEGIN
RETURN (
SELECT SUM(Customers_Details_Amount)
FROM TBL_CUSTOMERS_DETAILS
WHERE Customers_ID = @ID
)
END
GO
-- NOW DROP THE CURRENT Customers_Balance COLUMN
ALTER TABLE TBL_CUSTOMERS
DROP COLUMN Customers_Balance
GO
-- CREATE THE COMPUTED COLUMN WITH THE FUNCTION
ALTER TABLE TBL_CUSTOMERS
ADD Customers_Balance AS dbo.SUM_CUSTOMERS_DETAILS_AMOUNT (Customers_ID)
GO
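With this in place, the balance is recomputed every time the column is read; for example:
SELECT Customers_ID, Customers_Name, Customers_Balance
FROM TBL_CUSTOMERS;
One caveat: a computed column backed by a scalar function that reads other tables cannot be marked PERSISTED, and the function runs once per row returned, which can get slow on large tables.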

Import CSV into SQL Server

I need to import a .csv file into SQL Server.
I tried BULK INSERT but that didn't work.
I need to create a table and load the left-hand field of each row into one column and the right-hand field into another.
Example:
CREATE TABLE CARGA_TRAFICO_MED_MES_DATASET_IT
(
ID INT NOT NULL IDENTITY(1, 1),
NAME VARCHAR(200),
STATUS varchar(20),
PRIMARY KEY(ID)
);
-- Single quotes, not double, and a column list so the IDENTITY column is skipped:
INSERT INTO CARGA_TRAFICO_MED_MES_DATASET_IT (NAME, STATUS)
VALUES ('Job_Activity_8 (JOB JOB_CARGA_TRAFICO_DATASET_IT)', 'status=1');

SQL Server: how to populate data into table2 when table1 is updated

I'm creating a table to store cars and another table to store the time when each new car was added to the database. Can someone please explain how to create the relationship so the time is recorded automatically when a car is created?
Create table Cars
(
CarID int Primary Key identity(1,1),
Make varchar(50),
Model varchar(50),
Colour varchar(59)
)
create Table TimeLogs
(
AddedOn datetime2 not null default SYSDATETIME(),
CarId int unique foreign key references Cars(CarId)
)
I would solve this by not using a second table for what should be a column in the Cars table. The table would be designed more appropriately like this:
Create table Cars
(
CarID int Primary Key identity(1,1),
Make varchar(50),
Model varchar(50),
Colour varchar(59),
AddedOn datetime default SYSDATETIME()
)
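With the default in place, inserting a car fills AddedOn automatically; for example:
INSERT INTO Cars (Make, Model, Colour)
VALUES ('Ford', 'Focus', 'Blue');

SELECT CarID, Make, AddedOn FROM Cars; -- AddedOn was set by the default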
To automatically update one table whenever another table is updated, you need to use a trigger. You need an INSERT trigger here, as below:
CREATE TRIGGER yourNewTrigger ON yourSourceTable
FOR INSERT
AS
INSERT INTO yourDestinationTable
(col1, col2, col3, user_id, user_name)
SELECT
'a', NULL, NULL, user_id, user_name -- DEFAULT is not valid in a SELECT list
FROM inserted
GO
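Adapted to the Cars/TimeLogs tables from the question, a minimal sketch would be (it assumes TimeLogs.AddedOn has been given a real type such as datetime2, as in the corrected definition above):
CREATE TRIGGER trg_Cars_Insert ON Cars
FOR INSERT
AS
INSERT INTO TimeLogs (CarId, AddedOn)
SELECT CarID, SYSDATETIME()
FROM inserted;
GO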

Improve INSERT INTO table performance in SQL Server

There are a couple of things confusing me here.
I have a table with around 40 columns, of which at least 35 appear in WHERE clauses at different times in a single execution of a procedure.
When these 35 columns are passed a value via the stored procedure, the stored procedure calls their respective inline TVFs, which in turn call a common multi-statement TVF.
I need to know whether I should consider creating indexes for all of these 35 columns. (Though I have serious doubts that it can help; please tell me I am wrong if it does.)
I am inserting data into a temporary table. This insert runs once for each parameter passed to the stored procedure, and the execution plan shows it takes a considerable amount of time. Is there a way I can improve the performance here?
The insert query looks like this:
INSERT INTO #Temp2
(RowNumber, ValFromUser, ColumnName, ValFromFunc, FuncWeight, Percentage)
SELECT RowNumber, @firstname, 'firstname', PercentMatch,
@constVal, PercentMatch * @constVal
FROM dbo.MatchFirstName(@firstname)
The execution plan is attached (image omitted).
The table with the large number of columns is as follows:
create table Patients
(
Rowid int identity(1,1),
firstname nvarchar(20) not null,
middlename nvarchar(20),
lastname nvarchar(20) not null,
DOB Date,
SSN nvarchar(30),
ZIP nvarchar(10),
[State] nvarchar(2),
City nvarchar(20),
StreetName nvarchar(20),
StreetType nvarchar(20),
BuildingNumber int,
Aptnumber nvarchar(10),
patientnickname nvarchar(20),
patientsMaidenlastname nvarchar(20),
fathersFirstName nvarchar(20),
fatherslastname nvarchar(20),
mothersfirstname nvarchar(20),
motherslastname nvarchar(20),
mothersMaidenlastname nvarchar(20),
citizenship nvarchar(20),
nationality nvarchar(20),
ethnicity nvarchar(20),
race nvarchar(20),
religion nvarchar(20),
primarylanguage nvarchar(20),
patientmrn nvarchar(30),
hospitalname nvarchar(30),
Medicaidid nvarchar(10),
pcpnpi nvarchar(10),
phonenumber nvarchar(15),
email nvarchar(30),
CreatedAt datetime default getdate(),
ModifiedAt datetime DEFAULT getdate(),
CreatedBy nvarchar(128) default SUSER_NAME(),
ModifiedBy nvarchar(128) default SUSER_NAME()
);
The temporary table looks like this:
create table #Temp2
(
Rownumber int not null,
ValFromUser nvarchar(30),
ColumnName nvarchar(30),
ValFromFunc decimal(18, 4),
FuncWeight decimal(18, 4),
Percentage decimal(18, 4) not null
);
The ResultsStored table:
create table ResultsStored
(
Sno int identity(1,1),
SearchSerial int,
StringSearched varbinary(8000),
RowId int,
PercentMatch decimal(18,4),
CreatedAt datetime default getdate(),
ModifiedAt datetime default getdate(),
CreatedBy nvarchar(128) default SUSER_Name(),
ModifiedBy nvarchar(128) default SUSER_NAME(),
HashedKey binary(16)
);
Indexes (sometimes) speed up SELECTs, but they slow down INSERTs (and also DELETEs, UPDATEs, etc.), so it's a bad idea to have too many of them.
Quite often a SELECT is able to use only one index (or even none), so the other 34 indexes would be of no help. Keep only the ones your SELECTs really use.
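One way to see which indexes your SELECTs actually use is the built-in index-usage-stats DMV (a sketch; note that the counters reset when the server restarts):
SELECT o.name AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.indexes AS i
JOIN sys.objects AS o
    ON o.object_id = i.object_id
LEFT JOIN sys.dm_db_index_usage_stats AS s
    ON s.object_id = i.object_id
   AND s.index_id = i.index_id
   AND s.database_id = DB_ID()
WHERE o.is_ms_shipped = 0
ORDER BY s.user_seeks + s.user_scans + s.user_lookups; -- never-used indexes sort first (NULLs)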
As you say, you have about 40 columns in the table, with at least 35 of them appearing in distinct WHERE clauses. That means your table is not just big but, far worse, has too many potential keys. That is a very bad design. You need to split it into several tables; read about normalization.