I have a big database; about 100,000 rows are added per day, and it is growing quickly.
When I make a select query for the last added rows, the query scans all the rows in the table.
For example:
Select ... WHERE INSERT_TIME > TO_TIMESTAMP('27/11/2014 16:12:09,383418' , 'DD/MM/YYYY HH24:MI:SS,FF')
This query checks every row to see whether it was added after that time or not. But I will only use the last added rows, so I don't want to examine every row to decide whether it is new. That is, I want to eliminate all the rows I have already processed when searching the table for new entries.
Additionally, I don't want to create any new table for this, and I don't want to delete any rows from the table.
Is there any way or solution for this?
Thanks in advance!
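One common pattern for this (a sketch only, assuming Oracle since TO_TIMESTAMP is used, and a placeholder table name MY_TABLE): index INSERT_TIME so the range predicate can use an index range scan instead of a full scan, and remember the largest timestamp you processed in the previous poll.
CREATE INDEX idx_my_table_insert_time ON my_table (insert_time);

-- On each poll, bind the maximum INSERT_TIME seen so far
-- (kept in application memory or a bind variable) and read only newer rows.
SELECT *
FROM my_table
WHERE insert_time > :last_seen_time
ORDER BY insert_time;
This avoids both a new table and deleting rows; only the bookkeeping of the last-seen timestamp moves to the caller.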
I am trying to write values into column ABC of my_table, on the rows where the id matches.
My issue is that I'm currently adding new rows, which is not what I want. I want to update the existing row whenever the id column of my_table matches an id produced by my query.
I tried using a lookup, but so far unsuccessfully.
What's the best approach you would suggest?
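If this is plain SQL rather than an ETL lookup component, an UPDATE joined on id modifies existing rows instead of adding new ones. A minimal T-SQL sketch, where my_source is a stand-in for whatever query or table produces the new values:
UPDATE t
SET t.ABC = s.ABC
FROM my_table AS t
INNER JOIN my_source AS s   -- my_source: stand-in for the query producing new values
    ON s.id = t.id;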
When creating ad-hoc queries to look for information in a table, I have run into this issue over and over.
Let's say I have a table with a million records and the fields id (int), createddatetime (timestamp), category (varchar(50)), and content (varchar(max)). I want to find all records from the last day that contain a certain string in the content field. If I create a query like this...
select *
from table
where createddatetime > '2018-1-31'
and content like '%something%'
it may complete in a second, because there may be only 100 records from the last day, so the LIKE clause operates on a small number of rows.
However if I add one more item to the where clause...
select *
from table
where createddatetime > '2018-1-31'
and content like '%something%'
and category = 'testing'
then it can take many minutes to complete while locking up the table.
The optimizer appears to switch from evaluating the straightforward WHERE clause items first, and then the LIKE on the reduced set of rows, to evaluating the LIKE clause first. There are even cases with multiple LIKE predicates where adding one more takes the query from a split second to minutes.
The only solution I've found is to generate an intermediate table (temp tables may work): insert records based on the basic WHERE clause items, then run a separate query to filter by one or more LIKE predicates. I've tried various JOIN and CTE approaches, usually with no improvement. Alternatively, CHARINDEX also appears to work, though it is awkward to use when converting the logic of multiple LIKE statements.
Is there a hint or something that can be placed in the query to tell SQL Server to wait until the rows are filtered by the basic WHERE clause items before filtering by the LIKE?
I actually just tried this approach and it had the same issue...
select *
from (
select *, charindex('something', content) as found
from bounce
where createddatetime > '2018-1-31'
) t
where found > 0
While the subquery on its own returns in a couple of seconds, the overall query never returns. Why is this so bad?
Not fancy, but I've had better luck with temp tables than with nested select statements... A temp table will isolate the first data set, and then you can select just from that. If you're looking for quick and dirty, which usually serves my purposes for ad-hoc work, this may help. If this is a permanent stored procedure, the indexing suggestions may serve you better in the long run.
select *
into #like
from table
where createddatetime > '2018-1-31'
and content like '%something%'
select *
from #like
where category = 'testing'
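If the query is going to stick around, an index that lets the date filter narrow the rows first may remove the need for the temp table. A sketch with assumed names (adjust to your real table):
-- Covering index: the seek on createddatetime happens against the index,
-- and only the surviving rows are scanned for the LIKE/category predicates.
CREATE INDEX IX_table_createddatetime
    ON dbo.[table] (createddatetime)
    INCLUDE (category, content);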
I have a table tblCriteria that contains a small (<20) set of records. Each record has a criteria field.
I want SQL Server to move through these records when a run is requested (logged in tblFilterRun), filter the main table tblRecords (~5000 records), and then insert some key fields from the matching records into another table, tblFilterResults.
tblCriteria (CriteriaID, CriteriaText)
tblFilterRun (FilterRunID, FilterRunDate)
tblFilterResults (FilterResultsID, FilterRunID, RecordID, Ref, CustomerID, SupplierID)
tblRecords (RecordID, CustomerID, SupplierID...)
Previously I would have created something in Access to iterate through each tblCriteria record, but I would like a purely server-side solution. I've heard cursors mentioned (usually in the same breath as a profanity); what are my options?
It's not really clear what you need to do with the records in tblCriteria, but could you create a UDF that does the work of processing one record? Then you can call it for every record using one query, like
SELECT *
FROM tblCriteria
CROSS APPLY dbo.udf_yourFunction(parameter1, parameter2, etc)
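A minimal sketch of what that function and the insert could look like, assuming each CriteriaText is a pattern matched against some text column of tblRecords (the RecordText column is hypothetical, as is the @FilterRunID value):
CREATE FUNCTION dbo.udf_MatchRecords (@CriteriaText varchar(100))
RETURNS TABLE
AS
RETURN
(
    SELECT r.RecordID, r.CustomerID, r.SupplierID
    FROM dbo.tblRecords AS r
    WHERE r.RecordText LIKE '%' + @CriteriaText + '%'  -- hypothetical matching rule
);
GO

DECLARE @FilterRunID int = 1;  -- id of the current run in tblFilterRun (assumed)

INSERT INTO dbo.tblFilterResults (FilterRunID, RecordID, CustomerID, SupplierID)
SELECT @FilterRunID, m.RecordID, m.CustomerID, m.SupplierID
FROM dbo.tblCriteria AS c
CROSS APPLY dbo.udf_MatchRecords(c.CriteriaText) AS m;
Because the function is an inline table-valued function, the optimizer can expand it into the outer query, so this behaves like one set-based statement rather than a cursor loop.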
So I'm trying to copy some data from one database table to another. The problem is that the target database table has 2 new columns that are required. I wanted to use the export/import wizard in SQL Server Management Studio, but with that I would need to write a query for each table and I can only execute 1 query at a time. I was wondering whether there is a more efficient way of doing it.
Here's an example of 1 table:
dbase1.dbo.Appointment { id, name, description, createdate }
dbase2.dbo.Appointment { id, name, description, createdate, auditby, auditat}
I have a total of 8 tables with those 2 additional columns, and most of them are related to each other via foreign keys, so I wanted to use the wizard because it figures out which table gets inserted first. The problem is that it only works if I do "copy data from one or more tables" and not "write a query to specify data" (which I would use to populate those two new columns).
I've been going through this very slow process of copying data because I'm using MVC Code First for my application, and I don't have access to the server to drop and re-create tables at my leisure. So I have to resort to this to keep the data that I already have.
An idea: temporarily disable the foreign key constraints in the destination database. Then it doesn't matter in what order you run your inserts. To populate the two new and required columns, you just need to pick some stock values to put in there (since these rows are obviously not subject to auditing initially). For example:
INSERT dbase2.dbo.appointment
(id, name, description, createdate, auditby, auditat)
SELECT id, name, description, createdate,
auditby = 'me', auditat = GETDATE()
FROM dbase1.dbo.appointment;
Since it seems the challenge is merely that the destination requires columns that aren't in the source, and that you need to determine what should be populated in these audit columns, this seems to solve multiple problems at once. You just need to figure out what to put in there instead of 'me' and GETDATE().
(To get the wizard to pull these 8 tables for you, you might be able to create a view similar to the select portion of the above query, but that's more work and it won't see the underlying FK constraints to generate them in the right order anyway.)
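The disable/re-enable step might look like this for each destination table (a sketch; WITH CHECK on re-enable makes SQL Server re-validate the loaded rows against the constraints):
ALTER TABLE dbase2.dbo.Appointment NOCHECK CONSTRAINT ALL;

-- ... run the INSERT ... SELECT statements for all 8 tables here ...

ALTER TABLE dbase2.dbo.Appointment WITH CHECK CHECK CONSTRAINT ALL;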
Write the SQL query for each of the insert processes in the order you want. That would be the simplest approach.
Set default values for these two columns:
For AuditAt - a default date, i.e. GETDATE()
For AuditBy - the person's ID/name
Now you can insert into these tables without supplying values for those two columns.
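In T-SQL, adding those defaults could look like this (a sketch using the column names from the question; SUSER_SNAME() is just one choice for the user name):
ALTER TABLE dbase2.dbo.Appointment
    ADD CONSTRAINT DF_Appointment_AuditAt DEFAULT GETDATE() FOR auditat;

ALTER TABLE dbase2.dbo.Appointment
    ADD CONSTRAINT DF_Appointment_AuditBy DEFAULT SUSER_SNAME() FOR auditby;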
Currently I have an Item table and an ItemWaste table.
Both tables have some common fields, such as Name, Amount, etc. But the ItemWaste table has one more field, TimeWasted.
I wish to automatically insert the DELETED item from the Item table into the ItemWaste table, and at the same time insert the deletion time into the TimeWasted field.
I have no idea how to do this; is it done using a trigger?
I hope I can get some help here. I appreciate any feedback. Thanks.
Sure - not a problem.
You need a basic AFTER DELETE trigger - something like this:
CREATE TRIGGER trg_ItemDelete
ON dbo.Item
AFTER DELETE
AS
    -- copy every deleted row into the waste table, stamping the deletion time
    INSERT INTO dbo.ItemWaste (Name, Amount, TimeWasted)
    SELECT d.Name, d.Amount, GETDATE()
    FROM Deleted d;
That's all there is to it! One point to remember: the trigger is called once per statement - e.g. if you delete 100 rows at once, it is called once and the pseudo table Deleted contains 100 rows. The trigger is not called once per row (a common misconception).
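A quick way to verify the behavior (a sketch; the Amount = 0 filter is just an example predicate):
-- Delete some rows in one statement ...
DELETE FROM dbo.Item WHERE Amount = 0;

-- ... and every deleted row should now appear in the waste table:
SELECT Name, Amount, TimeWasted
FROM dbo.ItemWaste
ORDER BY TimeWasted DESC;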
Yes, simply by writing a trigger you can insert a row into one table when a delete action is performed on another table; have a look at Triggers.