SQL Server taking too much time to fetch details - sql-server

I have a table like the one below:
Id(Pk) User_Id C1 C2 C3 C4
1 111 2 a b c
2 111 5 d e f
3 111 7 a f ty
4 222 2 a b c
5 222 5 d e f
6 222 7 a f ty
This table has almost 10 lakh (1 million) records, and each User_Id has almost 10k records. If I fetch details by User_Id, it takes almost 5 minutes. Where do I have to tune this?
I'm using the query below:
Select * from User where user_id ='111'
The total number of columns in this table is around 130.

Assuming you have the right index defined on (User_Id), select only the columns you actually need. In SSMS I would do:
SET NOCOUNT ON
SELECT User_Id, C1, C2, . . .
FROM User
WHERE user_id = 111;
If no suitable index on user_id is present, you will pay for it with poor performance.
Note: if user_id is a numeric type, you don't need the quotes ('') around the value.
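If no such index exists yet, a minimal sketch of one (the index name and the INCLUDE list are illustrative; include the columns your query actually returns):

-- Non-clustered index on the filter column; INCLUDE lets the query be
-- answered from the index alone instead of touching all 130 columns.
CREATE NONCLUSTERED INDEX IX_User_UserId
ON dbo.[User] (User_Id)
INCLUDE (C1, C2);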

If your PK on the Id column is a clustered index, change it to non-clustered and create a clustered index on the User_Id column instead.
This will most likely make SQL Server use a partial scan of the clustered index, and as all requested records are stored close to each other, it can reduce the number of page reads required to retrieve the values (they can be read from disk without further criteria checks).
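A rough sketch of that change (PK_User is an assumed constraint name; look up your actual PK name first, and bear in mind that rebuilding the clustered index on a table this size is a heavy operation):

ALTER TABLE dbo.[User] DROP CONSTRAINT PK_User;                   -- drop the clustered PK
ALTER TABLE dbo.[User] ADD CONSTRAINT PK_User
    PRIMARY KEY NONCLUSTERED (Id);                                -- re-add it as non-clustered
CREATE CLUSTERED INDEX CIX_User_UserId ON dbo.[User] (User_Id);   -- cluster on the filter column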

Related

Excel to SQL Server: UPDATE table with the equivalent of SUMIF divided by SUMIF

I'm moving an Excel database into a SQL Server database and I have some problems translating this Excel formula into SQL.
In my Excel file, I have two tables (table1 and table2).
In table1, there is a column 'total_operation' that divides the results of two SUMIFs:
= SUMIF(table2[operation_value]; table2[operation_id]; [#[operation_id]])
/ SUMIF(table2[quantity]; table2[operation_id]; [#[operation_id]])
table1 (#)
invoice_number  operation_id  total_operation
1               A11
2               A12
3               A13
table2
operation_id  operation_value  quantity
A11           111.45           2
A11           34.00            1
A11           29.00            3
A12           20.40            1262
A12           34.00            5
A12           1257.00          18
A13           1.45             435
The result of the first row for 'total_operation' would be: (111.45 + 34.00 + 29.00) / (2 + 1 + 3) = 174.45 / 6 ≈ 29.07
Do you think it's possible to update table1.total_operation in one single query?
I don't think I quite understand your tables (if that's the whole table then you probably want a unique ID column somewhere, although I can't quite tell what's going on), so sorry if this answer is quite vague and confusing; but if table2 is consistent and operation A11 always has exactly the same values, I think @larnu is right and a view might help. (Actually, whatever's going on, I think larnu is probably right.)
If you had a view such as
CREATE VIEW view_name AS
SELECT operation_id, SUM(operation_value) / SUM(quantity) AS total_operation
FROM table2
GROUP BY operation_id;
and changed table1 to just have invoice_number and operation_id, then whenever you needed the total you could do
select t1.invoice_number, t1.operation_id, v.total_operation
from table1 t1
join view_name v on v.operation_id = t1.operation_id
That way your table doesn't go out of date if you ever decide to add a row to table2 in five years' time, as it's already accounted for.
If you think you might have an invoice with more than one item on it, you probably want a one to many invoice to operation id table somewhere, but the view would still be useful.
Otherwise, if it's always consistent and you don't need the individual rows in table2, you could just replace table2 with a table of one row per operation_id and the total operation value; if you already have those in Excel, you should be able to transfer them over as hard values.
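If you do want to fill table1.total_operation with a single UPDATE, as asked, a sketch along these lines should work (table and column names as in the question, SQL Server syntax):

-- Aggregate table2 per operation_id, then join the result back onto table1.
UPDATE t1
SET t1.total_operation = agg.total_operation
FROM table1 t1
JOIN (
    SELECT operation_id,
           SUM(operation_value) / SUM(quantity) AS total_operation
    FROM table2
    GROUP BY operation_id
) agg ON agg.operation_id = t1.operation_id;

Unlike the view, the stored value goes stale whenever table2 changes, which is the trade-off described above.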

Multiple updates in SQL Server

I have 500 records in Excel, and they contain some columns that are the same as the columns in a SQL DB table. I have the old column value as well as the new column value in Excel.
Example
Excel
Name Age OldEmployeeID NewEmployeeID
x 1 100 200
y 2 101 201
z 3 102 202
SqlTable
EmployeeTable
Name Age Department City EmployeeID
x 1 HR x 100
a 4 HR x 103
y 2 Admin x 101
b 5 Finance x 104
c 3 IT x 102
I want to update the EmployeeID column in the SQL table to the NewEmployeeID from Excel.
Can anyone suggest how to write a SQL query to update the SQL table?
Assuming the column is not an identity column and doesn't have some other constraint, you can:
Option 1:
Assuming your sheet is laid out like the example, add a concatenated column with this formula, drag it down, and copy the results directly into SQL:
=CONCATENATE("UPDATE dbo.EmployeeTable SET EmployeeID = ",D2," WHERE EmployeeID = ",C2,";")
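With the sample rows above (OldEmployeeID in column C, NewEmployeeID in column D), the formula produces statements like:

UPDATE dbo.EmployeeTable SET EmployeeID = 200 WHERE EmployeeID = 100;
UPDATE dbo.EmployeeTable SET EmployeeID = 201 WHERE EmployeeID = 101;
UPDATE dbo.EmployeeTable SET EmployeeID = 202 WHERE EmployeeID = 102;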
Option 2:
You can import the Excel file into SQL and use a statement like the one below. However, be cautious if any of the old EmployeeIDs overlap the new EmployeeIDs. For example, if Jim has ID 100 and his new one is 500, and Jane's new ID is 100, then if you accidentally run it a second time, Jane will also get an ID of 500.
UPDATE EmployeeTable
SET EmployeeID = Excel.NewEmployeeID
FROM EmployeeTable
JOIN ExcelTable AS Excel
    ON Excel.OldEmployeeID = EmployeeTable.EmployeeID;

SQLite Row_Num/ID

I have a SQLite database that I'm trying to use data from; basically, there are multiple sensors writing to the database, and I need to join each row to the preceding row to calculate the value difference for that time period. The only catch is that the ROWID field can't be used for the join anymore, since more sensors have begun writing to the database.
In SQL Server it would be easy to use Row_Number and partition by sensor. I found this topic: How to use ROW_NUMBER in sqlite and implemented the suggestion:
select id, value ,
(select count(*) from data b where a.id >= b.id and b.value='yes') as cnt
from data a where a.value='yes';
It works but is very slow. Is there anything simple I'm missing? I've tried joining on the time difference and creating a view. Just at wits' end! Thanks for any ideas!
Here is sample data:
ROWID - SensorID - Time - Value
1 2 1-1-2015 245
2 3 1-1-2015 4456
3 1 1-1-2015 52
4 2 2-1-2015 325
5 1 2-1-2015 76
6 3 2-1-2015 5154
I just need to join row 6 with row 2 and row 3 with row 5 and so forth based on the sensorID.
The subquery can be sped up with an index with the correct structure.
In this case, the column with the equality comparison must come first, and the one with the inequality comparison second:
CREATE INDEX xxx ON MyTable(SensorID, Time);
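With that index in place, a per-sensor version of the counting subquery looks like this (a sketch: the table is assumed to be called data, as in your query, with the SensorID, Time and Value columns from the sample rows):

SELECT a.SensorID, a.Time, a.Value,
       (SELECT COUNT(*)
        FROM data b
        WHERE b.SensorID = a.SensorID   -- equality column, first in the index
          AND b.Time <= a.Time          -- range column, second in the index
       ) AS row_num
FROM data a;

If your SQLite is 3.25 or newer, you can also skip the counting subquery entirely and compute the per-period difference with a window function: Value - LAG(Value) OVER (PARTITION BY SensorID ORDER BY Time).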

OLTP-Database design

I need help. I have two tables, Books and Authors.
One book can have multiple authors.
One author can write multiple books.
So I designed a mapping/junction table to maintain this relation.
My requirement: I want to get the Book ID and Name for a given author combination.
Say, in the example below, book 'B3' (103) is written by authors A2 & A3. So my input will be 302 & 303 (the A2 & A3 IDs) and the query should give me 103 (the book ID).
Please suggest schema changes if required.
Here is sample code that works in SQL Server 2008 and above (the multi-row VALUES syntax requires 2008+):
DECLARE @tbl_Books TABLE (Book_ID INT, Book_Name VARCHAR(500))
DECLARE @tbl_Authors TABLE (Author_ID INT, Author_Name VARCHAR(50))
DECLARE @tbl_Mapping TABLE (Mapping_ID INT IDENTITY(1,1), Book_ID INT, Author_ID INT)
INSERT INTO @tbl_Books VALUES (101,'B1'),(102,'B2'),(103,'B3')
INSERT INTO @tbl_Authors VALUES (301,'A1'),(302,'A2'),(303,'A3')
INSERT INTO @tbl_Mapping VALUES (101,301),(101,302),(102,301),(102,302),(102,303),(103,302),(103,303)
SELECT * FROM @tbl_Books
SELECT * FROM @tbl_Authors
SELECT * FROM @tbl_Mapping
Table : tbl_Books
==========
Book_ID Book_Name
101 B1
102 B2
103 B3
Table: tbl_Authors
===================
Author_ID Author_name
301 A1
302 A2
303 A3
Table:tbl_Mapping
==============
Mapping_ID Book_ID Author_ID
1 101 301
2 101 302
3 102 301
4 102 302
5 102 303
6 103 302
7 103 303
This isn't pretty but it works:
SELECT x.book_id, b.book_name
FROM (SELECT book_id, COUNT(*) AS num FROM tbl_mapping GROUP BY book_id) x --Get all books with a count of their authors
INNER JOIN (SELECT book_id FROM tbl_mapping WHERE author_id IN (302,303)) y --Get all books which involve the specified authors
ON y.book_id = x.book_id
INNER JOIN tbl_books b
ON b.book_id = x.book_id
WHERE x.num = 2 --Filter for books which have exactly the required number of authors
GROUP BY x.book_id, b.book_name
HAVING COUNT(*) = 2 --Filter for how many times each book appears in the results. We want those that appear as many times as there are authors being searched
To make it less static, you would somehow have to build your IN clause according to the list of author IDs you supply, and where it says = 2 you would need to change the 2 to the number of authors being searched for.
I tested it by adding another book to your example data written by only one author and adjusting the query accordingly; it returned what I expected. I also tried the book with three authors, which works too. This hardly constitutes robust testing, but it proves the basic concept. I'm certain there's a nicer way to do this, possibly using window functions, but frankly it's my dinner time and I'm starving so I can't think of it!
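One way to make it less static is to put the wanted author IDs into a table variable and compare counts against it (a sketch only; the @wanted variable and how it gets loaded are assumptions about how the IDs are passed in):

DECLARE @wanted TABLE (Author_ID INT);
INSERT INTO @wanted VALUES (302), (303);

SELECT b.Book_ID, b.Book_Name
FROM tbl_Books b
JOIN tbl_Mapping m ON m.Book_ID = b.Book_ID
LEFT JOIN @wanted w ON w.Author_ID = m.Author_ID
GROUP BY b.Book_ID, b.Book_Name
HAVING COUNT(*) = (SELECT COUNT(*) FROM @wanted)            -- the book has exactly this many authors
   AND COUNT(w.Author_ID) = (SELECT COUNT(*) FROM @wanted); -- and all of them are in the wanted list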
So you are looking for Book ID and Book names for a given set of authors.
You could try something like this:
SELECT tb.Book_ID, tb.Book_Name, COUNT(tm.Author_ID) AS authors
FROM tbl_Mapping tm
INNER JOIN tbl_Books tb ON tb.Book_ID = tm.Book_ID
WHERE tm.Author_ID IN (<your list of authors>)
GROUP BY tb.Book_ID, tb.Book_Name
HAVING COUNT(tm.Author_ID) = <the number of authors passed in>
Note that the count filter has to live in a HAVING clause; an aggregate (or its alias) can't be used as a filter in the WHERE clause. Also be aware that this finds books written by at least the given authors; to require the exact combination, you also need to check the book's total author count, as the answer above does.
A programmatic approach, however, would be to have a query like:
select Book_ID from tbl_Mapping WHERE Author_ID = <One author ID>
And put it in a loop. The above query is the first execution, and later queries also have
AND Book_ID IN (<list of Book IDs returned by previous loops>)
You loop until you run out of authors, and then you run those IDs through a query to get the names (or you tack the name onto the previous queries and track it as well).

Database Designing and Normalization issue

I have a huge Access .mdb file which contains a single table with 20-30 columns and over 50,000 rows. I have something like this:
columns:
id desc name phone email fax ab bc zxy sd country state zip .....
1 a ab 12 fff 12 w 2 3 2 d sd 233
2 d ab 12 fff 12 s 2 3 1 d sd 233
Here I have some column values related to addresses repeating. Is there a way to normalize the above table so that we can remove duplicate or repeating data?
Thanks in advance.
Here's a quick answer. You just need to move your address fields to a new table (remove dups) and add a FK back to your primary table.
Table 1 (People or whatever)
id desc name phone email fax ab bc zxy sd address_id
1 a ab 12 fff 12 w 2 3 2 1
2 d ab 12 fff 12 s 2 3 1 2
3 d ab 12 fff 12 s 2 3 1 2
4 d ab 12 fff 12 s 2 3 1 1
Table 2 (Address)
address_id country state zip .....
1 d sd 233
2 e ac 123
Jim W has a good start, but to normalize even further, make your redundant address elements into separate tables as well.
Create a table for each piece of address data that is repeated (Country, State, etc.). Once you have those data tables, you'll want to add columns such as StateID, CountryID, etc. to the Address table.
You now have options for fixing the existing data. You can be quick and dirty and use Update statements to set all the newly created ID fields to point to the right data table.
UPDATE Addresses SET StateID=1 WHERE STATE='AL'
You can do this fairly quickly as a batch .sql file, but I'd recommend a more programmatic solution that rolls through the Address table and tries to match the current 'State' to an entry in the new States table. If found, the StateID on the Address table is updated with the id from the corresponding row in States.
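If you prefer a single set-based statement to rolling through the table row by row, something like this does the same matching (a sketch; it assumes the new States table has id and Name columns and SQL Server-style UPDATE...FROM syntax rather than Access SQL):

UPDATE a
SET a.StateID = s.id
FROM Addresses a
JOIN States s ON s.Name = a.State;  -- every address whose State text matches a States row gets its StateID filled in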
You can then delete the old State field from the address table, as it is now normalized nice and neatly into a separate States table.
This process can be repeated for all redundant data elements. However, IMO db normalization can be taken too far. For example, if you have a commonly used query that, after normalization, requires 10 joins to accomplish, you may see a performance reduction. This doesn't appear to be the case here, as I think you're on the right track.
From a comment above:
@Lance I wanted something similar to that, but here is the problem: I have raw data coming in as a single table and I need to refine it and send it to two tables. I can add the address to table 2, but I'm not understanding how you would insert the address_id into table 1.
You can retrieve the newly created ID from the address table using @@IDENTITY, and update the address_id with this value.
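A minimal sketch of that (table and column names follow the example tables above; in SQL Server, SCOPE_IDENTITY() is the safer choice, while from Access/ADO you would read SELECT @@IDENTITY on the same connection and plug the value in):

-- insert the distinct address first...
INSERT INTO Address (country, state, zip) VALUES ('d', 'sd', '233');
-- ...then point the person row at the address_id that was just generated
UPDATE People SET address_id = @@IDENTITY WHERE id = 1;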
