How to push the data from database to application? - sql-server

I want to push data from the database to the application instead of having the application pull it. I have installed MS SQL Server and Apache Tomcat. My application runs in Tomcat, where I made the connection to the database. Now I want the database to send the data whenever there is an update. All I know is that fetching the data from the database is not a good idea, because the application would have to monitor the database for updated data, firing a query every 5 seconds, which is not efficient either.
I googled it and got some answers: Query Notifications, and a SQL Server Agent job to schedule the task automatically. If you have any other suggestions, please post them.

There surely are several possibilities to do that:
Implement unsafe CLR trigger
Implement unsafe CLR procedure
Use xp_cmdshell
Call web service
Use Query Notification
You can read a little about them in this discussion:
Serial numbers, created and modified in SQL Server.
Personally, I would prefer Query Notification over the other methods, because it already has support for various cases (e.g. sync/async communication) and you don't have to reinvent the wheel. And in your case it is the approach recommended by Microsoft.
Polling is the other method you've mentioned. It's the more traditional approach and there can be some performance penalties involved, but you shouldn't worry about them if you are careful enough. For example, if you already have authentication built into your application, you can create another column in your Users table that is set whenever there are changes related to that user. Then a thread in your app can perform a query every second against this table (even dirty reads with NOLOCK shouldn't be a problem here) and maintain some in-memory structure (e.g. a thread-safe dictionary) that says which client should get a push. Another thread polls your dictionary and, when it finds something there for a client, performs a db query that extracts the data and sends it to the client.
This looks like a lot of unnecessary work, but in the end you get two independent workers, which somewhat helps to separate concerns: the first one is just an informer that performs 'lightweight' database polling; the second one extracts the real data and performs the server push. You can even optimize the push worker so that when it runs, it checks whether multiple clients need data and then executes the select for all of those who need it. You would probably want the second worker to run less frequently than the first one.
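The two-worker pattern described above can be sketched as follows (Python for brevity; the database calls are injected as plain functions and all names are illustrative, not from the original answer: check_flags() stands for the cheap "who changed?" query against the Users table, fetch_changes() for the heavier data-extraction query):

```python
import threading

def make_pollers(check_flags, fetch_changes, push):
    pending = set()             # user ids flagged as having changes
    lock = threading.Lock()     # guards `pending` between the two workers

    def informer_step():
        # Lightweight poll: record *who* changed, don't fetch any data yet.
        for user_id in check_flags():
            with lock:
                pending.add(user_id)

    def pusher_step():
        # Heavier, less frequent step: drain the flags, fetch data for all
        # flagged users in one query, then push to each client.
        with lock:
            batch = list(pending)
            pending.clear()
        if batch:
            rows = fetch_changes(batch)     # one query for the whole batch
            for user_id in batch:
                push(user_id, rows.get(user_id))

    return informer_step, pusher_step
```

In a real application each step would run on its own timer thread (the informer every second or so, the pusher less often), with check_flags issuing the NOLOCK query against the Users table.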
EDIT
If you wish to use a non-.NET technology to achieve the same functionality, you will have to get more into SQL Server Service Broker. Query Notification is a simplified layer built in .NET on top of SQL Server Service Broker, and you would have to build at least part of that layer yourself. This includes creating a queue, a message type, a service, and stored procedures with SEND and RECEIVE on the other side. You will have to take care of the conversation/dialog yourself. Service Broker is essentially an asynchronous-messaging world adjusted to work in an RDBMS environment, so you will see some new T-SQL expressions. However, MSDN is here to help:
http://msdn.microsoft.com/en-us/library/ms166061(v=sql.105).aspx
http://msdn.microsoft.com/en-us/library/bb522893.aspx
This could help as well: Externally activate non-.NET application from Service Broker
Example on how to code the stuff:
-- First you have to enable SB for your database
USE master
ALTER DATABASE Playground
SET ENABLE_BROKER
GO
USE Playground
GO
-- Then create a message type; usually it will be XML
-- because it's very easy to serialize/deserialize it
CREATE MESSAGE TYPE [//Playground/YourMessageType]
VALIDATION = WELL_FORMED_XML
GO
-- Then create a contract to have a rule for communication
-- Specifies who sends which message type
CREATE CONTRACT [//Playground/YourContract] (
[//Playground/YourMessageType] SENT BY ANY)
GO
--Creates queues, one for initiator (1) and one for target (2)
CREATE QUEUE MyQueue1
GO
CREATE QUEUE MyQueue2
GO
-- Finally, configure services that 'consume' queues
CREATE SERVICE [//Playground/YourService1]
ON QUEUE MyQueue1 ([//Playground/YourContract])
GO
CREATE SERVICE [//Playground/YourService2]
ON QUEUE MyQueue2 ([//Playground/YourContract])
GO
-- Now you can send a message from service to service using contract
DECLARE
@dHandle uniqueidentifier,
@Msg nvarchar(max)
BEGIN DIALOG @dHandle
FROM SERVICE [//Playground/YourService1]
TO SERVICE '//Playground/YourService2'
ON CONTRACT [//Playground/YourContract]
WITH ENCRYPTION = OFF
SELECT @Msg = (
SELECT TOP 3 *
FROM Table1
FOR XML PATH('row'), ROOT('Table1'))
;SEND ON CONVERSATION @dHandle
MESSAGE TYPE [//Playground/YourMessageType] (@Msg)
PRINT @Msg
GO
-- To get the message on the other end, use RECEIVE
-- Execute this in another query window
DECLARE @dHandle uniqueidentifier
DECLARE @MsgType nvarchar(128)
DECLARE @Msg nvarchar(max)
;RECEIVE TOP(1)
@dHandle = conversation_handle,
@Msg = CAST(message_body AS nvarchar(max)),
@MsgType = message_type_name
FROM MyQueue2
SELECT @MsgType
SELECT @Msg
END CONVERSATION @dHandle
GO

Related

Calling SSIS with SSISDB implementation from SQL Server Service broker

The requirement is to call a web service through SSIS, and to call the SSIS package from a SQL Server Service Broker activated stored procedure.
Here is what I am currently doing:
Queue
CREATE QUEUE [schema].[ProccessingQueue]
WITH STATUS = ON,
RETENTION = OFF,
ACTIVATION (
STATUS = ON,
PROCEDURE_NAME = [schema].[usp_ProccessingQueueActivation],
MAX_QUEUE_READERS = 10,
EXECUTE AS N'dbo'
),
POISON_MESSAGE_HANDLING (STATUS = ON)
My stored procedure:
ALTER PROCEDURE [schema].[usp_ProccessingQueueActivation]
WITH EXECUTE AS CALLER
AS
BEGIN
SET NOCOUNT ON;
<snip declaration>
BEGIN
BEGIN TRANSACTION;
WAITFOR
(
RECEIVE TOP (1)
@ConversationHandle = conversation_handle,
@MessageBody = CAST(message_body AS XML),
@MessageTypeName = message_type_name
FROM [schema].[ProccessingQueue]
), TIMEOUT 5000;
<snip awesome stuff>
EXEC dbo.RunSSIS <param>
DECLARE @ReplyMessageBody XML = @MessageBody;
SEND ON CONVERSATION @ConversationHandle MESSAGE TYPE [type] (@ReplyMessageBody);
END
<handle error>
COMMIT TRANSACTION;
END
END
Now here is what RunSSIS stored procedure looks like
ALTER PROCEDURE [dbo].[RunSSIS]
<params>
AS
BEGIN
DECLARE @exec_id BIGINT
EXEC [SSISDB].[catalog].[create_execution]
@package_name=N'<SSIS_package>',
@folder_name=N'<folder>',
@project_name=N'<projectName>',
@use32bitruntime=FALSE,
@reference_id=NULL,
@execution_id=@exec_id OUTPUT
EXEC [SSISDB].[catalog].[set_execution_parameter_value]
@exec_id,
@object_type=30,
@parameter_name=N'<param_Name>',
@parameter_value=<param>
SELECT @exec_id
EXEC [SSISDB].[catalog].[start_execution] @exec_id
END
Now this throws the exception below in Event Viewer, as the Service Broker activation security context isn't recognized in the SSISDB environment.
The activated proc
'[schema].[usp_ProccessingQueueActivation]' running on
queue '' output the
following: 'The current security context cannot be reverted. Please
switch to the original database where 'Execute As' was called and try
it again.'
To resolve the problem I tried the following approaches:
I followed this link
http://www.databasejournal.com/features/mssql/article.php/3800181/Security-Context-of-Service-Broker-Internal-Activation.htm
and created a user with a self-signed certificate (thinking that the problem was a user without the necessary permissions). But it returned the same error; digging deeper, I found that [internal].[prepare_execution] in SSISDB has a REVERT statement at line 36 that throws the error, as it doesn't tolerate impersonation at all.
I tried to move the RunSSIS stored procedure to SSISDB and call it from the activation stored procedure, but that was shot down because SSISDB doesn't allow users with SQL Server authentication. It requires Windows authentication, and a user created from a certificate obviously has no Windows credentials.
My questions are:
Am I on the correct path? I certainly didn't anticipate that using two SQL Server components together would be this difficult.
If this isn't the correct approach, what would be the best way to call a service from Service Broker? I have seen "External Activation" for SQL Server Service Broker but haven't explored it yet. I would prefer to stick with something that lives inside the server environment and scales, and I don't like the idea of installing another component in the production environment (it is always overhead for support personnel, as it is one more point that can fail).
I am using Windows auth and my credentials have sysadmin access.
I think you can take out the "WITH EXECUTE AS CALLER" and everything (the proc and then the package that ends up getting called) will be run under the security context of the Service Broker. As long as that context has the permissions to do what you want to do, you should be fine.
I have not used Service Broker in this way, but I do the same thing with jobs fired off by the SQL Agent. As long as the Agent's security context has the permissions needed in the procs/packages, everything runs fine. We use network accounts for our services, so it all works between servers as well.
This has a code smell of tight coupling and my first instinct is to decouple the queue, the DB that houses the proc, and the SSIS execution into a PowerShell script. Have the script get the messages from service broker then call SSISDB on a different connection without wrapping [catalog].[create_execution] and [catalog].[set_execution_parameter_value] in a stored proc. You can still run this script directly from Agent.
This approach gives you the most flexibility with regard to security contexts, if one of the components moves to a different server, if something is named differently in dev/QA, or technologies change (Azure ServiceBus instead of Broker for instance). You can also get creative with logging/monitoring.
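The decoupled worker described above can be sketched roughly as follows (Python here rather than PowerShell, purely for illustration; both database calls are hypothetical injected functions: receive_message() would wrap RECEIVE on the Service Broker queue on one connection, and start_package(msg) would call SSISDB's [catalog].[create_execution] / [catalog].[start_execution] on a separate connection, so the Broker security context never touches SSISDB):

```python
def drain_queue(receive_message, start_package, limit=100):
    # Pull messages off the queue and hand each one to SSIS, up to `limit`
    # per run so a scheduled (e.g. Agent-driven) invocation stays bounded.
    handled = 0
    while handled < limit:
        msg = receive_message()      # returns None when the queue is empty
        if msg is None:
            break
        start_package(msg)           # runs under its own connection/credentials
        handled += 1
    return handled
```

Because the two connections are independent, each side can authenticate however its server requires, which is exactly the flexibility the answer is arguing for.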

How to best test what environment the server is running in?

Where I work, the production databases are backed up nightly and restored to dev, test, and QA environments. When running any of our programs in non-production environments, to avoid making changes we don't want to make in production (such as sending real users email), our programs test the environment using a combination of internally parsing the command line and calling a SQL user function. The function selects @@SERVERNAME, then parses the result looking for specific strings, i.e. if the server name contains "-PROD-", it is a production server.
The problem is that the hardware group is implementing a high-availability project, so if a server fails, @@SERVERNAME will return the name of the backup server. I am looking at extending the current logic to account for whatever the fail-over server names will be, but I was hoping there was a better way to test the environment than parsing text for static strings.
Store a setting in a database that is separate from your application database(s) then read that setting as-needed using a function. When your application runs in production, you'll get the production values. When your application runs in Development, you'll get the development values.
The nice thing about this is you can store all kinds of values and easily get to them from your SPROCS, PowerShell or whatever front end you have.
CREATE DATABASE SETTINGSDB
GO
USE SETTINGSDB
GO
-- A table to hold key/value pairs
CREATE TABLE MYSETTINGS
(
SettingName VARCHAR(50) PRIMARY KEY CLUSTERED ,
SettingValue VARCHAR(500)
)
GO
-- On DEVELOPMENT SERVER, run this
INSERT INTO MYSETTINGS
VALUES ('ENVIRONMENT', 'DEV'),
('SOME_SETTING', 'True')
-- On PRODUCTION SERVER, run this
INSERT INTO MYSETTINGS
VALUES ('ENVIRONMENT', 'PROD'),
('SOME_SETTING', 'False')
GO
-- A function to pull key/value pairs.
CREATE FUNCTION dbo.GetEnvVar( @SettingName VARCHAR(50) )
RETURNS VARCHAR(500)
AS
BEGIN
RETURN (SELECT SettingValue FROM SETTINGSDB.dbo.MYSETTINGS WHERE SettingName = @SettingName)
END
GO
Once you are set up, you can then check the value, and it will be different between DEV/PROD. For example:
-- Then use these values:
USE YourApplicationDatabaseNameHere
GO
CREATE PROCEDURE SampleApplicationSprocThatSendsEmail
@EmailAddress VARCHAR(50),
@Subject VARCHAR(50)
AS
IF (dbo.GetEnvVar('ENVIRONMENT') = 'PROD' )
BEGIN
-- Only Executes in Production
-- TODO: SEND THE EMAIL
END ELSE
BEGIN
-- Only Executes in Development
PRINT 'Send email to ' + @EmailAddress
END
Instead of finding all possible objects that might cause harm or annoyance to some unknown number of targets (mailboxes, files, databases, etc...) why not just isolate the dev/test environment at the network layer? Put them in an isolated network/subnet where only inbound is permitted, all outbound gets blocked or rerouted. It's hard to know for sure you've gotten every single endpoint. For example, what happens if your admins add more secondaries for additional protection and read-only queries?
We used SQL Server's trace-replay capabilities regularly for many years for service pack, upgrade (app and db), regression, cross-workload, and other tests. Every so often we'd have some smart, highly motivated new team member (new hire or transfer) who would write scripts or "cleansing" apps to scrub the trace files and databases so the tests could be run on his/her workstation. Every single one of them learned that's a bad idea, sometimes in a very, very hard way (e.g. re-indexing a clustered index on a highly volatile 2-billion-row table in mid-morning).
Blocking at the network layer has the added benefit of minimal prep/setup work for each test run plus you can have an identical setup as prod. When you encounter bugs or regressions, you have a few items less to check.

Is there a way to insert an encrypted script into a SQL Server database?

My company considers database scripts we write part of our intellectual property.
With new releases, we deliver a 2-part setup for our users:
a desktop application
an executable that wraps up the complexities of initializing/updating a database (RedGate SQL Packager).
I know I can encrypt a stored procedure on a database once the script is present, but is there any way to insert it in an encrypted form? I don't want plain-text to be able to be intercepted across the "wire" (or more accurately, between the SQL script executable and the server).
I'm not really tied to the tool we're using - I just want to know if it's possible without having to resort to something hokey.
Try using the EncryptByPassPhrase and DecryptByPassPhrase functions.
Use ENCRYPTBYPASSPHRASE to encrypt all your DDL statements and then DECRYPTBYPASSPHRASE on the server to decrypt and execute.
declare @encrypt varbinary(200)
select @encrypt = EncryptByPassPhrase('key', 'your script goes here' )
select @encrypt
select convert(varchar(100),DecryptByPassPhrase('key', @encrypt ))
Create a procedure that would look like this
CREATE PROCEDURE DBO.ExecuteDDL
(
@script varbinary(max)
)
AS
BEGIN
DECLARE @SQL nvarchar(max)
SET @SQL = (select convert(varchar(max),DecryptByPassPhrase('key', @script )))
EXECUTE sp_executesql @SQL
END
Once this is in place you can publish scripts to your server like this
This isn't plain-text and last I checked, it still works:
declare @_ as varbinary(max)
set @_ =0x0D000A005000520049004E0054002000270054006800690073002000620069006E00610072007900200073007400720069006E0067002000770069006C006C002000650078006500630075007400650020002200530045004C0045004300540020002A002000460052004F004D0020005300590053002E004F0042004A00450043005400530022003A0027000D000A00530045004C0045004300540020002A002000460052004F004D0020005300590053002E004F0042004A0045004300540053000D000A00
exec (@_)
Technically, it's not encryption, but it's not plaintext either, and it can serve as the basis for some mild encryption pretty easily.
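For reference, the hex literal above is just the script text encoded as UTF-16LE (the encoding SQL Server uses for nvarchar), so EXEC on the varbinary runs the original script. A small helper (Python, illustrative only, not part of the original answer) shows how such a literal can be produced:

```python
def to_tsql_varbinary(script: str) -> str:
    # nvarchar is UTF-16LE, so a varbinary literal built this way can be
    # passed straight to EXEC (@v) and SQL Server will run the script text.
    return "0x" + script.encode("utf-16-le").hex().upper()
```

Decoding is the reverse: strip the `0x` prefix, un-hex, and decode as UTF-16LE.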
There's little you can do to reliably prevent the code in the database from being read by anyone who really wants to. The WITH ENCRYPTION parameter is really just obfuscation; many simple scripts can recover the plain text, and when the database is being upgraded, the profiler will ultimately always be able to catch the ALTER PROCEDURE statement with the full text. Network tracers can be evaded by using an encrypted connection to the server.
The real problem comes from the fact that the database is installed on a server that your users own and fully control (correct me if that's not the case). No matter what you do, they'll have full access to the whole database, its schema, and the internal programming inside sprocs/functions.
The closest I can think of to prevent that is to switch to CLR stored procedures, which are installed by copying a DLL to the server and registering it within SQL Server. They pose other problems, as they are totally different to program and may not be the best tool for what you'd normally use a sproc for. Also, since they are made of standard .NET code, they can be trivially decompiled.
The only way I can think of to fully protect the database structure and code would be to put it on a server of yours that you expose to your customers through, say, a web service or a handful of sprocs as wrappers, so no one can peek inside.

Simply add a message to a SQL Server Service Broker queue using T-SQL?

I created a queue thus:
CREATE QUEUE log_line_queue
WITH RETENTION = ON, --can decrease performance
STATUS = ON,
ACTIVATION (
MAX_QUEUE_READERS = 1, --number of concurrent instances of sp_insert_log_line
PROCEDURE_NAME = sp_insert_log_line,
EXECUTE AS OWNER
);
What can I do quickly in SSMS to add an item to my queue using T-SQL?
In SSMS, select the required database in Object Explorer. Then find the Service Broker node of this database, right-click it, and select the 'New Service Broker Application...' command. This creates a template for you to start using Service Broker quickly. You'll also see the minimal recommended configuration needed to implement and run your own application.
As for using one queue - if this is your first experience with Service Broker, why not follow common practice at the beginning? After running several samples and/or your own prototypes, you can decide how many queues to use, and you'll know how to do it.

Transmitting sessions id from SQL Server to web scripts

I have a bunch of stored procs doing the business-logic work in a SQL Server instance behind a web application. When something goes wrong, all of them queue a message in a specific table (let's say 'warnings'), keeping track of the error severity and issue description. I want the web application using the stored procs to be able to query the message table at the end of the run but get only the relevant messages, i.e. the messages created during that specific session or connection. So I am in doubt whether to:
have the web application send the db a GUID to INSERT as a column value in the message records (but somehow I have to keep track of it in several stored procs running at the page level, so I need something "global")
OR
use some id related to the connection opened by the application - which would definitely be saner. Something like (this is pseudo code):
SELECT @sessionid = sessionid FROM sys.something WHERE this = that
Have some hints?
Thanks a lot for your ideas
To retrieve your current SessionId in SQL Server you can simply execute the following:
select @@SPID
See:
http://msdn.microsoft.com/en-us/library/ms189535.aspx
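A minimal sketch of how the web application side could use this (assuming a DB-API style driver such as pyodbc; the connection object and the `spid` column on the warnings table are hypothetical, not from the original answer). The key point is that @@SPID is stable for the lifetime of one connection, so the app can read it once and later filter the warnings table by it, as long as the stored procs log @@SPID alongside each message and everything runs on the same connection:

```python
def fetch_my_warnings(conn):
    # One connection for the whole run: the procs and this query must share it.
    cur = conn.cursor()
    cur.execute("SELECT @@SPID")    # session id of *this* connection
    spid = cur.fetchone()[0]
    # ... stored procedures run here and log warnings tagged with @@SPID ...
    cur.execute(
        "SELECT severity, description FROM warnings WHERE spid = ?", (spid,)
    )
    return cur.fetchall()
```

One caveat: with connection pooling, the SPID is reused by later requests once the connection is returned to the pool, so the warnings rows should be cleaned up (or also keyed by a timestamp/GUID) at the end of each run.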
