T-SQL, SSRS: Set up automatic daily Inserts into Table - sql-server

I'm using SQL Server 2012.
SSMS 11.0.6248.0.
I want to create an automated way of inserting data [using a T-SQL insert statement] into a SQL Server table before users start using the system [a third-party business system] each morning.
I do a lot of SSRS reporting and create subscriptions; I know how to do inserts using T-SQL, and I am familiar with stored procedures, but I have not had to automate something like this strictly within SQL Server.
Can I make this happen on a schedule - strictly in the SQL Server realm [i.e. using SSRS ... or a stored procedure ... or a function]?
Example Data to read:
Declare @t Table
(
DoctorName Varchar(1),
AppointmentDate Date,
Appointments Int
)
Insert Into @t select 'A','2018-10-23', 5
Insert Into @t select 'B','2018-10-23', 5
Insert Into @t select 'C','2018-10-23', 5
Insert Into @t select 'D','2018-10-23', 5
Insert Into @t select 'E','2018-10-23', 5
Insert Into @t select 'F','2018-10-23', 5
Insert Into @t select 'G','2018-10-23', 5
Insert Into @t select 'H','2018-10-23', 5
Insert Into @t select 'I','2018-10-23', 5;
Select * From @t
The value in Appointments changes through the day as Doctors see patients. Patients may cancel. Patients may walk in. Typically, at the end of the day Doctors end up seeing more patients than they have scheduled at the start of the day. [I set the number at 5 for all Doctors at the start of the above day].
I want to capture the data as it is at the start of each day - before the Clinic opens and the numbers change - and store it in another Table for historic reporting.
I hope this simplified example clarifies what I want to do.
I would appreciate any suggestions on how I might best go about doing this.
Thanks!

This sounds like a job for the SQL Server Agent. A more specific suggestion will require a more detailed description of what you're doing (with sample data, preferably).
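In broad strokes, once the snapshot INSERT is written, an Agent job can run it every morning before the clinic opens. You can create the job through the SSMS GUI (SQL Server Agent > Jobs > New Job) or in T-SQL; here is a rough sketch, where the job, database, and table names are all assumptions:
USE msdb;
GO

-- Create the job (all names here are placeholders)
EXEC dbo.sp_add_job
    @job_name = N'Daily appointment snapshot',
    @enabled  = 1;

-- One T-SQL step that copies the start-of-day counts into a history table
EXEC dbo.sp_add_jobstep
    @job_name      = N'Daily appointment snapshot',
    @step_name     = N'Snapshot appointments',
    @subsystem     = N'TSQL',
    @database_name = N'YourClinicDb',
    @command       = N'INSERT INTO dbo.AppointmentHistory (DoctorName, AppointmentDate, Appointments)
                       SELECT DoctorName, AppointmentDate, Appointments
                       FROM dbo.Appointments
                       WHERE AppointmentDate = CAST(GETDATE() AS date);';

-- Schedule it daily, before the clinic opens
EXEC dbo.sp_add_jobschedule
    @job_name          = N'Daily appointment snapshot',
    @name              = N'Every morning',
    @freq_type         = 4,     -- daily
    @freq_interval     = 1,     -- every day
    @active_start_time = 60000; -- 06:00:00 (HHMMSS)

-- Attach the job to the local server
EXEC dbo.sp_add_jobserver
    @job_name    = N'Daily appointment snapshot',
    @server_name = N'(local)';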

You can use SSIS to create a job that you can then schedule. Since you are familiar with stored procedures, you would create your SP first, then in SSIS add an Execute SQL Task to the Control Flow and configure it according to your needs.
If that doesn't work for you, you could create an application that runs on a timer and executes your SP; however, since you want to stay in the SQL realm, SSIS is the place to look.
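A minimal sketch of what that SP could look like for the example data, which the Execute SQL Task would then call with a plain EXEC; the table names dbo.Appointments and dbo.AppointmentHistory are assumptions:
CREATE PROCEDURE dbo.usp_SnapshotAppointments
AS
BEGIN
    SET NOCOUNT ON;

    -- Capture the start-of-day counts before the clinic opens
    -- (dbo.Appointments and dbo.AppointmentHistory are placeholder names)
    INSERT INTO dbo.AppointmentHistory (DoctorName, AppointmentDate, Appointments, CapturedAt)
    SELECT DoctorName, AppointmentDate, Appointments, SYSDATETIME()
    FROM dbo.Appointments
    WHERE AppointmentDate = CAST(GETDATE() AS date);
END
In the Execute SQL Task, the SQLStatement would then simply be EXEC dbo.usp_SnapshotAppointments;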

Related

Combine these 3 SQL steps into 1 query

SQL Server 2017 (In Azure) - when I need to create a new client in our clients database, I have to run three separate queries, and in between each query, do a lookup to be able to populate a part of the next query. I'd like to see if there is a way to combine all this into one query, or, parameterized stored procedure:
All of this takes place in the same database called Clients:
Step 1 - Create the client record in dbo.clients:
INSERT INTO dbo.clients
(ClientGuid, Name, Permissions)
VALUES
(NEWID(), 'Contoso', 1)
Step 2 - Get the Primary Key which was auto-created in Step 1:
SELECT ClientKey from dbo.clients
WHERE Name = 'Contoso'
Now write down the primary key (ClientKey) from that record, we'll say 12345678
Step 3 - Create a new billing code in the dbo.billingcodes table:
INSERT INTO dbo.billingcodes
(BillingCodeGuid, ClientKey, Name, ScoreId)
VALUES
(NEWID(), 12345678, 'Contoso Production Billing Code', 1)
How can I combine all this into one query or parameterized stored procedure where all I have to enter in are the two names from step 1 and 3 (assume the Permissions and ScoreId integers are always going to be 1) and also get an output at the end of the process of the created values for dbo.clients.ClientKey and dbo.billingcodes.BillingCodeGuid?
You could create a procedure that consists of both inserts with a line in between to get the ID of the inserted client. Assign the ID to a variable and pass it in to the second part.
See this post about some different ways of getting the inserted record’s ID: Best way to get identity of inserted row?
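A minimal sketch of such a procedure, assuming ClientKey is an IDENTITY column (the procedure name, parameter names, and column lengths are made up):
CREATE PROCEDURE dbo.CreateClientWithBillingCode
    @ClientName      varchar(100),
    @BillingCodeName varchar(100),
    @ClientKey       int              OUTPUT,
    @BillingCodeGuid uniqueidentifier OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.clients (ClientGuid, Name, Permissions)
    VALUES (NEWID(), @ClientName, 1);

    -- Capture the identity value generated by the insert above
    SET @ClientKey = SCOPE_IDENTITY();

    SET @BillingCodeGuid = NEWID();

    INSERT INTO dbo.billingcodes (BillingCodeGuid, ClientKey, Name, ScoreId)
    VALUES (@BillingCodeGuid, @ClientKey, @BillingCodeName, 1);
END
GO

-- Example call: returns both generated values
DECLARE @Key int, @Guid uniqueidentifier;
EXEC dbo.CreateClientWithBillingCode
    @ClientName      = 'Contoso',
    @BillingCodeName = 'Contoso Production Billing Code',
    @ClientKey       = @Key OUTPUT,
    @BillingCodeGuid = @Guid OUTPUT;
SELECT @Key AS ClientKey, @Guid AS BillingCodeGuid;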
You could do it by using a procedure. You may find this link helpful for creating a procedure in SQL Server: Link.
In the procedure, you first insert your data into the first table. Then, using IDENT_CURRENT, you get the last inserted ID from that table, which you then use to insert into the next table.

How to access a table ABC_XXX constantly in Teradata where XXX changes periodically?

I have a table in Teradata ABC_XXX where XXX will change on a monthly basis.
For Ex: ABC_1902, ABC_1812, ABC_1904 etc...
I need to access this table in my application without modifying the code every month.
Is there any way to do this in Teradata, or is there an alternate solution?
Please help
Can you try using DBC.TABLES in a subquery like below:
with tbl as (
    select 'select * from ' || databasename || '.' || tablename as tb
    from dbc.tables
    where tablename like 'ABC_%'
)
select * from tbl;
If you can get the final query executed in your application, you will be able to query the required table without editing the query.
The above solution assumes that the previous month's table gets dropped whenever a new month's table is created.
However, if previous table is not being dropped, then you can try the below approach:
select 'select * from db.ABC_' ||to_char(current_date,'YYMM')
Output will be
select * from db.ABC_1902
Execute that output in your application and you will be able to query the dynamic table.

SSIS data flow - copy new data or update existing

I query some data from table A (source) based on a certain condition and insert it into a temp table (destination) before upserting into CRM.
If the data already exists in CRM, I don't want to query it from table A and insert it into the temp table (I want that table to be empty) unless the data has been updated or new data was created. So basically I want to query from table A only new data, or data that already exists in CRM but has since been modified. At the moment my data flow is like this:
clear temp table - delete sql statement
Query from source table A and insert into temp table.
From temp table insert into CRM using script component.
In source table A I have audit columns: createdOn and modifiedOn.
I found one way to do this, SSIS DataFlow - copy only changed and new records, but I'm not really clear on how to do so.
What is the best and simplest way to achieve this?
The link you posted is basically saying to stage everything and use a MERGE to update your table (essentially an UPDATE/INSERT).
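For reference, a staged MERGE of that kind looks roughly like this; the table names, key column, and data columns are all placeholders:
-- Upsert staged rows into the destination table
MERGE dbo.DestinationTable AS tgt
USING dbo.StagingTable AS src
    ON tgt.Id = src.Id
WHEN MATCHED AND src.modifiedOn > tgt.modifiedOn THEN
    UPDATE SET tgt.Col1       = src.Col1,
               tgt.Col2       = src.Col2,
               tgt.modifiedOn = src.modifiedOn
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Col1, Col2, createdOn, modifiedOn)
    VALUES (src.Id, src.Col1, src.Col2, src.createdOn, src.modifiedOn);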
The only way I can really think of to make your process quicker (to a significant degree) by partially selecting from table A would be to add a "last updated" timestamp to table A and to enforce that it is always kept up to date.
One way to do this is with a trigger; see here for an example.
You could then select based on that timestamp, perhaps keeping a record of the last timestamp used each time you run the SSIS package, and then adding a margin of safety to that.
Edit: I just saw that you already have a modifiedOn column, so you could use that as described above.
Examples:
There are a few different ways you could do it:
ONE
Include the modifiedOn column in your final destination table.
You can then build a dynamic query for your data flow source in an SSIS string variable, something like:
"SELECT * FROM [table A] WHERE modifiedOn >= DATEADD(DAY, -1, '" + #[User::MaxModifiedOnDate] + "')"
#[User::MaxModifiedOnDate] (string variable) would come from an Execute SQL Task, where you would write the result of the following query to it:
SELECT FORMAT(CAST(MAX(modifiedOn) AS date), 'yyyy-MM-dd') MaxModifiedOnDate FROM DestinationTable
The DATEADD part, as well as the CAST to a certain degree, represent your margin of safety.
TWO
If this isn't an option, you could keep a data load history table that would tell you when you need to load from, e.g.:
CREATE TABLE DataLoadHistory
(
DataLoadID int PRIMARY KEY IDENTITY
, DataLoadStart datetime NOT NULL
, DataLoadEnd datetime
, Success bit NOT NULL
)
You would begin each data load with this (Execute SQL Task):
CREATE PROCEDURE BeginDataLoad
@DataLoadID int OUTPUT
AS
INSERT INTO DataLoadHistory
(
DataLoadStart
, Success
)
VALUES
(
GETDATE()
, 0
)
SELECT @DataLoadID = SCOPE_IDENTITY()
You would store the returned DataLoadID in an SSIS integer variable, and use it when the data load is complete as follows:
CREATE PROCEDURE DataLoadComplete
@DataLoadID int
AS
UPDATE DataLoadHistory
SET
DataLoadEnd = GETDATE()
, Success = 1
WHERE DataLoadID = @DataLoadID
When it comes to building your query for table A, you would do it the same way as before (with the dynamically generated SQL query), except MaxModifiedOnDate would come from the following query:
SELECT FORMAT(CAST(MAX(DataLoadStart) AS date), 'yyyy-MM-dd') MaxModifiedOnDate FROM DataLoadHistory WHERE Success = 1
So the DataLoadHistory table, rather than your destination table.
Note that this would fail on the first run, as there'd be no successful entries in the history table, so you'd need to insert a dummy record, or find some other way around it.
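For example, seeding the history table with a single old, successful entry is enough (the date is arbitrary, just earlier than any data you care about):
-- One-off seed row so MAX(DataLoadStart) has something to return on the first run
INSERT INTO DataLoadHistory (DataLoadStart, DataLoadEnd, Success)
VALUES ('2000-01-01', '2000-01-01', 1)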
THREE
I've seen it done a lot where, say your data load runs every day, you just stage the last 7 days or so - some margin of safety that you're pretty sure will never be exceeded (because the process is being monitored for failures).
It's not my preferred option, but it is simple, and it can work if you're confident in how well the process is being monitored.
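In that case the source query is just a fixed look-back window, for example:
-- Stage everything modified in the last 7 days as the margin of safety
SELECT *
FROM [table A]
WHERE modifiedOn >= DATEADD(DAY, -7, CAST(GETDATE() AS date))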

Getting bulk data into a busy table

I am currently performing analysis on a client's MSSQL Server. I've already fixed many issues (unnecessary indexes, index fragmentation, NEWID() being used for identities all over the shop etc), but I've come across a specific situation that I haven't seen before.
Process 1 imports data into a staging table, then Process 2 copies the data from the staging table using an INSERT INTO. The first process is very quick (it uses BULK INSERT), but the second takes around 30 mins to execute. The "problem" SQL in Process 2 is as follows:
INSERT INTO ProductionTable(field1,field2)
SELECT field1, field2
FROM SourceHeapTable (nolock)
The above INSERT statement inserts hundreds of thousands of records into ProductionTable, each row allocating a UNIQUEIDENTIFIER, and inserting into about 5 different indexes. I appreciate this process is going to take a long time, so my issue is this: while this import is taking place, a 3rd process is responsible for performing constant lookups on ProductionTable - in addition to inserting an additional record into the table as such:
INSERT INTO ProductionTable(fields...)
VALUES(values...)
SELECT *
FROM ProductionTable (nolock)
WHERE ID = @Id
For the 30 or so minutes that the INSERT...SELECT above is taking place, the INSERT INTO times out.
My immediate thought is that SQL Server is locking the entire table during the INSERT...SELECT. I did quite a lot of profiling on the server during my analysis, and there are definitely locks being allocated for the duration of the INSERT...SELECT, though I fail to remember what type they were.
Having never needed to insert records into a table from two sources at the same time - at least during an ETL process - I'm not sure how to approach this. I've been looking up INSERT table hints, but most are being made obsolete in future versions.
It looks to me like a CURSOR is the only way to go here?
You could consider BULK INSERT for Process-2 to get the data into the ProductionTable.
Another option would be to batch Process-2 into small batches of around 1000 records and use a Table Valued Parameter to do the INSERT. See: http://msdn.microsoft.com/en-us/library/bb510489.aspx#BulkInsert
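A rough sketch of the table-valued parameter approach; the type, procedure, and column definitions below are assumptions:
-- A table type matching the columns being copied
CREATE TYPE dbo.ProductionRow AS TABLE
(
    field1 int,
    field2 varchar(50)
);
GO

CREATE PROCEDURE dbo.InsertProductionBatch
    @Rows dbo.ProductionRow READONLY  -- table-valued parameters must be READONLY
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO ProductionTable (field1, field2)
    SELECT field1, field2
    FROM @Rows;
END
The caller (for example a .NET client or an SSIS script component) fills a batch of around 1,000 rows and passes it as @Rows, so each insert commits quickly and holds its locks only briefly.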
It seems like a table lock.
Try inserting in portions (batches) in your ETL process. Something like:
while 1=1
begin
INSERT INTO ProductionTable(field1,field2)
SELECT top (1000) field1, field2
FROM SourceHeapTable sht (nolock)
where not exists (select 1 from ProductionTable pt where pt.id = sht.id)
-- optional
--waitfor delay '00:00:01.0'
if @@rowcount = 0
break;
end

Return a Table of Payroll Dates from a SQL Stored Procedure

I'm working with SQL Server Reporting Services 2008, which is somewhat new to me, as most of my experience is with LAMP development. In addition, moving most of the logic to SQL as stored procedures is something I'm not very familiar with, but would like to do. Any help or direction would be greatly appreciated.
I need a list of acceptable payroll dates in the form of a table to use as the allowed values for a report parameter. Ideally, the person will be able to select this payroll date from the drop-down provided by the report parameter, which will then be used in the dataset to pull data from a table. I would like the logic to be stored on the SQL server if possible, as this is something that will most likely be used on a few other reports.
The logic to create the list of dates is rather simple. It starts with the oldest payroll date that is needed by the system (sometime in 2007) and simply goes every two weeks from there. The procedure or function should return a table that contains all these dates up to and including the nearest upcoming payroll date.
It seems to me that the way to go about this would be a procedure or function that creates a temporary table, adds to it the list of dates, and then returns this table so that the report parameter can read it. Is this an acceptable way to go about it?
Any ideas, examples, or thoughts would be greatly appreciated.
I would use a CTE something like this one:
;WITH PayPeriod AS (
SELECT @DateIn2007 AS p UNION ALL
SELECT DATEADD(dd, 14, p) as P FROM PayPeriod WHERE p < GetDate() )
SELECT p FROM PayPeriod
OPTION ( MAXRECURSION 500 )
The MAXRECURSION option and/or the WHERE condition limit the number of dates it will generate.
You can of course use a parameter to work out the correct limit so that you still get the correct last date.
try something like this:
;with AllDates AS
(
SELECT CONVERT(datetime,'1/1/2007') AS DateOf
UNION ALL
SELECT DateOf+14
FROM AllDates
WHERE DateOf<GETDATE()+14
)
SELECT * FROM AllDates
OPTION (MAXRECURSION 500)
You can put this in a view or function.
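For example, wrapped in an inline table-valued function (the function name and anchor date are assumptions); note that I've left the MAXRECURSION hint on the calling query, where it still applies, since query hints generally can't be embedded in a view or inline function:
-- Inline table-valued function returning pay dates from a fixed anchor
-- up to and including the next upcoming payroll date (names/date assumed)
CREATE FUNCTION dbo.PayrollDates (@FirstPayDate date)
RETURNS TABLE
AS
RETURN
(
    WITH PayPeriod AS
    (
        SELECT @FirstPayDate AS PayDate
        UNION ALL
        SELECT DATEADD(DAY, 14, PayDate)
        FROM PayPeriod
        WHERE PayDate < GETDATE()
    )
    SELECT PayDate FROM PayPeriod
);
GO

-- The recursion limit is applied by the caller (the default of 100 would be
-- exceeded after roughly four years of biweekly periods)
SELECT PayDate
FROM dbo.PayrollDates('2007-01-05')
OPTION (MAXRECURSION 1000);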
However, I would suggest that instead of presenting a select box with this many values, why not just have two text box fields, start date and end date, and default them to reasonable values? Just my 2 cents.
