Update Stats and Table Count - sql-server

I have a few questions.
Is there any way to get the table row count if we are not maintaining historical data for the count?
The rest is about update statistics:
Should we run UPDATE STATISTICS on all the tables in our database? The database is highly transactional.
How should I calculate a sample size that will suit all the tables?
Some tables get reindexed; those we will ignore. We also have a job which reorganises some tables.
Now a decision needs to be made: which tables should we update statistics on?
The tables which have been reorganised,
or
the tables whose statistics are outdated?

To get the number of rows in a table, assuming the table has a primary key field:
SELECT COUNT(PrimaryKeyField) AS NoOfRows FROM table
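If the table is large, a full COUNT on a highly transactional database can be expensive. As a sketch (this DMV approach is my addition, not part of the original answer, and 'dbo.MyTable' is a placeholder), SQL Server can return the row count from partition metadata instead, and a related DMV shows how stale each statistic is:

-- approximate row count from metadata, no table scan
SELECT SUM(row_count) AS NoOfRows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('dbo.MyTable')
AND index_id IN (0, 1); -- 0 = heap, 1 = clustered index

-- how many rows changed since each statistic was last updated
SELECT s.name, sp.last_updated, sp.rows, sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.MyTable');

Statistics with a high modification_counter relative to rows are the ones worth targeting with UPDATE STATISTICS, whether or not the table was recently reorganised (reorganising an index, unlike rebuilding it, does not update statistics).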

Related

Query optimization - T-SQL

We have 14M rows in a source table and want to know all the possible ways to insert data from the source table into the destination table without dropping indexes. Indexes are created on the destination table but not on the source table. In an SSIS package we tried a Data Flow task, a Lookup, and an Execute SQL task, but performance is still lacking. So, kindly let me know the possible ways to speed up the insertion without dropping indexes. Thanks in advance.
It really depends on the complete setup:
What does the actual query look like, and what is the table schema?
How many indexes are there?
Is this a one-time operation, or will the daily average be 14M rows?
General answers:
i) Run the insert operation during downtime or during very low-traffic hours.
ii) Use the TABLOCK hint:
INSERT INTO TAB1 WITH (TABLOCK)
SELECT COL1,COL2,COL3
FROM TAB2
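(One caveat: with TABLOCK the insert can be minimally logged, but only under the simple or bulk-logged recovery model and when the other minimal-logging conditions are met; under the full recovery model the insert is still fully logged, though the single table lock still cuts locking overhead.)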
iii) You can consider disabling and then rebuilding the indexes:
ALTER INDEX ALL ON sales.customers
DISABLE;
-- insert query here
ALTER INDEX ALL ON sales.customers
REBUILD;
GO
Be aware that ALL includes the clustered index, and disabling a clustered index makes the table inaccessible until it is rebuilt; if the table has a clustered index, disable only the nonclustered indexes.
iv) If the source server is different, you can first load the source data into a parking (staging) table on the destination server that has no indexes. Another job then moves the rows from the parking table into the destination table; see the sketch below.
A transfer between tables on the same server will be relatively faster.
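A minimal sketch of option iv), with hypothetical names (dbo.Parking, dbo.Destination, columns col1..col3) and an assumed batch size; batching keeps each transaction, and thus the log and lock footprint, small:

-- step 1: bulk-load the unindexed parking table from the remote source
-- (SSIS data flow, BULK INSERT, etc.)
-- step 2: move rows locally in batches into the indexed destination
WHILE 1 = 1
BEGIN
    DELETE TOP (50000)
    FROM dbo.Parking
    OUTPUT DELETED.col1, DELETED.col2, DELETED.col3
        INTO dbo.Destination (col1, col2, col3);
    IF @@ROWCOUNT = 0 BREAK;
END

(OUTPUT ... INTO requires that the target table has no enabled triggers and does not participate in a foreign key; where that restriction bites, a plain INSERT ... SELECT driven by a key range does the same job.)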

Identity column in a budget table

Programming noob here, and a DBA I am not...
I am creating a table called 'budget' in a SQL Server 2005 database. The purpose of this table is simply to store the monthly $ allowed to departments for budgeting purposes.
The essential columns in this table will be month, year, dept, amount. I am migrating this from an old FoxPro table which did not have an identity/primary key column.
My question is: for my purposes, do I need to worry about creating an identity column? I am having a hard time importing the data into SQL Server and having it populate the ID column, so I am inclined to just skip it if it's not needed. Thanks for your $.02.
If you're specifying a value for the identity PK in your INSERT statement, you'll need to use SET IDENTITY_INSERT <tableName> ON at the beginning of your query (and set it OFF again when you're done).
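A minimal sketch of that, assuming a hypothetical dbo.budget table whose identity column is named ID:

SET IDENTITY_INSERT dbo.budget ON;

-- an explicit column list is required while IDENTITY_INSERT is ON
INSERT INTO dbo.budget (ID, [month], [year], dept, amount)
VALUES (1, 7, 2008, 'Accounting', 5000.00);

SET IDENTITY_INSERT dbo.budget OFF;

Only one table per session can have IDENTITY_INSERT set to ON at a time.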

Column Indexing and DELETE query performance

I have a table with a few columns like ID (PK), Name, created_time, etc., and periodically I DELETE rows from this table using this simple DELETE query:
DELETE FROM my_table WHERE created_time < 'some time';
I just want to know what the performance impact on INSERT, SELECT and DELETE will be if I do or do not create an INDEX on created_time.
This table may have millions of rows, and one DELETE query may delete rows in the hundreds of thousands in one go.
Databases: Oracle, JavaDB, DB2, SQL Server
If you make an INDEX on created_time:
1. INSERTs will be slightly slower, because every new row must also be written into the index.
2. SELECTs and DELETEs with a condition on created_time will be quicker, because the database can use the index to seek directly to the qualifying rows for your range predicate (created_time < 'some time') instead of scanning the whole table.
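As a sketch in SQL Server syntax (the other engines listed have equivalents; the index name and batch size are my assumptions), the usual pattern is to index created_time and delete in batches so one statement never touches hundreds of thousands of rows at once:

CREATE INDEX IX_my_table_created_time ON my_table (created_time);

-- delete in batches to keep each transaction, and its locks, short
WHILE 1 = 1
BEGIN
    DELETE TOP (10000) FROM my_table
    WHERE created_time < 'some time';
    IF @@ROWCOUNT = 0 BREAK;
END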

Data modeling: (Why) Should I always create indexes on the FK columns?

Consider the following scenario:
ParrentLookupTable:
PK: SomeCode CHAR(10)
Has only 5 rows.
Does not change often.
MyTable:
One million rows are being added daily.
Has the column SomeCode, an FK to ParrentLookupTable.
There is no query that searches or sorts based on SomeCode.
Here is the FK definition:
ALTER TABLE MyTable ADD CONSTRAINT FK_MyTable_ParrentLookupTable_SomeCode FOREIGN KEY(SomeCode)
REFERENCES ParrentLookupTable (SomeCode)
Should I create an index IX_MyTable_SomeCode?
The index would cost I/O in this write-intensive workload, and I am not sure how it would become useful.
In general, an index on the referenced column in the parent table is there to speed up the FK check. In this case it might not help at all: ParrentLookupTable will fit into one database page, so it should not matter much whether the check does a table scan or an index seek.
Adding an index on the FK column of the child table could help in the case of cascading deletes, since the engine must then find every child row that references the deleted parent row.
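If the need does arise (cascading deletes, or new queries filtering on SomeCode), the index is a one-line addition; the name below simply follows the convention from the question:

CREATE NONCLUSTERED INDEX IX_MyTable_SomeCode
ON MyTable (SomeCode);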
Here is a longer article on the subject
http://sqlperformance.com/2012/11/t-sql-queries/benefits-indexing-foreign-keys

Nanosecond precision on a datetime column in Sybase

We have an audit table which tracks changes to a master table via an insert/update trigger. The trigger copies all new field values from the master table to the audit table. The audit table has a datetime column which records the time the insert/update happened on the master table (getdate()).
We have a unique index over the primary key and the time column. The problem is that if more than one update happens at almost the same time on the master table, it ends in a unique key violation.
Is there any datetime type which captures nanosecond-level precision?
The DB should inherently handle updates to the same record via ACID. "Cheesing" the audit table with a joint master_table_id / updatetime primary key to prevent "too many updates" in a short period of time is probably not the right approach... especially as performance improves via new hardware, you could have more "legitimate" updates that your PK is preventing.
I hate to ask, but what type of operation are you performing that's updating the same row, many times, at the sub-millisecond level? Are you updating col2, then col3, then col4 all for the same PK via some JDBC or ADO connection?
Can you batch these "many" updates into 1 stored procedure call via inputs to the stored proc, so you limit your write operations? This would be faster, and provide less churn on the audit trail.
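A minimal sketch of that batching idea, with hypothetical table, procedure, and column names (Sybase T-SQL style):

-- one call updates all three columns at once, so the trigger fires once
-- and the audit table receives a single row instead of three
-- near-simultaneous ones
CREATE PROCEDURE update_master_batch
    @id   INT,
    @col2 VARCHAR(50),
    @col3 VARCHAR(50),
    @col4 VARCHAR(50)
AS
BEGIN
    UPDATE master_table
    SET col2 = @col2,
        col3 = @col3,
        col4 = @col4
    WHERE id = @id
END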
