I have a view vwBalance which returns more than 150,000,000 rows; below is the code:
SELECT *
FROM dbo.balance
INNER JOIN dbo.NomBalance ON dbo.balance.IdNomBil = dbo.NomBalance.Id
But I want to transpose the values returned, so I use the PIVOT function like this:
SELECT An, cui, caen, col1, col2, ... col100
FROM (SELECT cui, valoare, cod_campbil, caen, An
      FROM vwBilant WITH (NOLOCK)) p
PIVOT (MAX(valoare) FOR cod_campbil IN (col1, col2, ... col100)) AS pvt
The questions are:
Should I use a query hint inside the view vwBalance? Could such a hint improve the transpose, or could it block it?
Is it a problem if I use the NOLOCK hint instead of other query hints?
Are there better ways to improve transposing this many columns?
Thanks!
I can give the following advice:
you can use the READPAST hint if it does not break your business logic
you can create a clustered index on this view. That materializes the view, but the performance of data-changing operations will decrease for all tables used in the view (see the sketch after this list).
also, you should check the indexes on the fields you use in join and WHERE clauses.
and you can use preprocessing: insert these values into another table (for example, at night). In that case you can use a columnstore index, or just page compression, on that table, and you can likewise use page compression for all tables that are used in this view.
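For the clustered-index option, here is a minimal sketch (the column list and the unique key column are assumptions, since the original view uses SELECT *):

-- Indexed views require SCHEMABINDING, two-part table names,
-- and an explicit column list instead of SELECT *.
CREATE VIEW dbo.vwBalance
WITH SCHEMABINDING
AS
SELECT b.Id, b.cui, b.caen, b.An, b.valoare, b.cod_campbil
FROM dbo.balance AS b
INNER JOIN dbo.NomBalance AS n ON b.IdNomBil = n.Id;
GO

-- The unique clustered index is what actually materializes the view;
-- b.Id stands in for whatever uniquely identifies a row.
CREATE UNIQUE CLUSTERED INDEX IX_vwBalance ON dbo.vwBalance (Id);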
I am creating a view in Snowflake that has a CTE on a base table without any filters. Other CTEs depend on this parent CTE to fetch further information.
Everything works fine when I query all records from the base table, which has 45K rows. But when I query the view for one particular ID, the explain plan shows the base CTE picking up all 45K rows, joining the rest of the CTEs against those 45K rows, and only then applying my unique-ID filter to return one row.
There is no difference in performance between pulling data for all records and for one record; Snowflake is not optimizing the base CTE to apply the filter criteria I am looking for.
Any suggestions on how I can resolve this? I used local variables in the filter criteria of the base CTE, but that is not a viable solution.
CREATE OR REPLACE VIEW test_v AS
WITH parent_cte AS
    (SELECT document_id, time, ...
     FROM audit_table
    ),
emp_cte AS
    (SELECT employee_details, ...
     FROM employee_tab,
          parent_cte
     WHERE parent_cte.document_id = employee_tab.document_id),
dep_cte AS
    (SELECT dep_details, ....
     FROM dependent_tab,
          emp_cte
     WHERE ..........)
SELECT *
FROM dep_cte, emp_cte, parent_cte;
Now when I query the view for one document_id, the plan fetches all the data and joins it before applying the filter, which is not efficient.
select * from test_v where document_id = '1001';
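For reference, the plan I want is equivalent to this manually pushed-down rewrite (just a sketch; I obviously can't hard-code the literal inside the view):

WITH parent_cte AS
    (SELECT document_id, time, ...
     FROM audit_table
     WHERE document_id = '1001'  -- filter applied before any joins
    ),
...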
I can't put these tables in one SELECT with join conditions because I am using LATERAL FLATTEN, which cross-multiplies each base table record, so I am going with the CTE approach.
Appreciate your ideas.
My application accesses data from a table in SQL Server. Consider the table name to be PurchaseDetail, with some other columns.
The select query has the below WHERE clauses:
1. name - name has only 10,000 distinct values.
2. createdDateTime
The actual query is
select *
from PurchaseDetail
where name in (~2000 names)
and createdDateTime = 'someDateValue';
The SQL Tuning Advisor gave some recommendations. I tried the recommended indexes. The performance increased a bit, but not completely.
Is there anything wrong with my query? Or is it possible to change/improve my select query?
I haven't used IN in a WHERE clause before. My table has more than 100 million records.
Any suggestions, please?
In this case, using IN for that much data is not good at all.
The best way is to use an INNER JOIN instead.
It would be better to insert those names into a temp table and INNER JOIN it with your SELECT query.
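A minimal sketch of that approach (the temp table name and the varchar size are assumptions):

-- Load the ~2000 names once; the primary key gives the optimizer
-- a unique index to join against. A single VALUES list is limited
-- to 1000 rows, so insert in batches.
CREATE TABLE #Names (name varchar(100) PRIMARY KEY);

INSERT INTO #Names (name)
VALUES ('name1'), ('name2'); -- ... remaining names in further batches

SELECT pd.*
FROM PurchaseDetail AS pd
INNER JOIN #Names AS n ON n.name = pd.name
WHERE pd.createdDateTime = 'someDateValue';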
I know this is DBA 101, but the question was asked and I have no answer ... well, I guess I know what the answer is but need to confirm. I have a view (View1) that is a simple select of one table: SELECT col1, col2, ... FROM table1 WHERE col2 = 0.
Table1 is indexed, of course. When I look at the execution plan, I get the same results whether I select from the table or from the view. So, would an index change anything on the view since it has no joins? Using SQL Server 2016.
This is my first question here. I am very new to SQL Server and T-SQL.
I would like to create a table with a column that uses data from another column. I thought I could use a SELECT, but it is not allowed.
How do I do it?
It is very simple to create a view this way, but I would like to have a table, not a view.
It should look like:
Column A, Column B,
Column C = select count(*) from [another table] where....
Could you please advise?
SELECT [COLUMN A], [COLUMN B], COUNT(*) AS [COLUMN C]
INTO [destination table]
FROM [another table]
WHERE ...
GROUP BY [COLUMN A], [COLUMN B]
You should use an alias for the computed column.
You create a table using the CREATE TABLE syntax, because you will need to define the field names and sizes. Look the syntax up in Books Online.
Do not ever use SELECT INTO unless you are creating a staging table for one-time use or a temp table. It is not a good choice for creating a new permanent table. Plus, you don't say where any of the other columns come from except column C, so it may be impossible to properly set up the correct field sizes from the initial insert. Further, frankly, you should take the time to think about what columns you need and what data types they should be; it is irresponsible to skip this for a table that will be used permanently.
To populate it, you use an INSERT statement with a SELECT instead of a VALUES clause. If only column C comes from another table, then it might be something like:
INSERT INTO table1 (colA, colB, colC)
SELECT 'test', 10, COUNT(*)
FROM tableb
WHERE ...
If you have to get the data from multiple tables, then you may need a join.
If you need to maintain the computed column as the values in TableB change, then you may need to write triggers on TableB, or better (easier to develop and maintain, and less likely to be buggy or to create a data integrity problem), use a view for this instead of a separate table, as sketched below.
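A minimal sketch of the view alternative (the column linking table1 to tableb is an assumption):

-- colC is computed at query time, so it is always current and
-- there are no triggers to maintain. b.colA = t.colA stands in
-- for the real relationship between the tables.
CREATE VIEW dbo.table1_with_count
AS
SELECT t.colA, t.colB,
       (SELECT COUNT(*) FROM tableb AS b WHERE b.colA = t.colA) AS colC
FROM table1 AS t;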
OK, I have a table that has 10 years' worth of data, and performance is taking a hit. I am planning on moving the older data to a separate historical table. The problem is I need to select from the first table if the row is in there, and from the second table if not. I do not want to do a join because then it will always do a lookup on the second table. Help?
If you still need to query the data, in no way would I move it to another table. How big is the table now? What are the indexes? Have you considered partitioning the table?
If you must move it to another table, you could query it in a stored proc with an IF statement: query the main table first, and if the rowcount is 0, query the other table. It will be slower for records not in the main table but should stay fast for those that are. However, it wouldn't work when you need records from both.
Sample of code to do this:
CREATE PROC myproc (@test INT)
AS
-- Try the main table first (the filter column is illustrative).
SELECT field1, field2 FROM table1 WHERE field1 = @test

-- Only hit the archive table when nothing was found.
IF @@ROWCOUNT = 0
BEGIN
    SELECT field1, field2 FROM table2 WHERE field1 = @test
END
But really, partitioning and indexing correctly is probably your best choice. Also, optimize existing queries: if you are using known poorly performing techniques such as cursors, correlated subqueries, views that call views, scalar functions, or non-sargable WHERE clauses, just fixing your queries may mean you don't have to archive.
Sometimes, buying a better server would help as well.
Rather than using a separate historical table, you might want to look into partitioning the table by some function of the date (year perhaps?) to improve performance instead.
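A minimal partitioning sketch (the table name, date column, and boundary values are assumptions; adjust to your data):

-- RANGE RIGHT puts each boundary date in the partition to its right,
-- so each partition holds one year of rows.
CREATE PARTITION FUNCTION pfYear (datetime)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2016-01-01', '2017-01-01');

-- Map every partition to the same filegroup for simplicity.
CREATE PARTITION SCHEME psYear
AS PARTITION pfYear ALL TO ([PRIMARY]);

-- Building the clustered index on the scheme partitions the table.
CREATE CLUSTERED INDEX cix_bigtable
ON dbo.bigtable (createdDate)
ON psYear (createdDate);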