Service Broker External Activator response takes too long

I have two databases on SQL Server 2014, SourceDB and LogDB. Service Broker is enabled on SourceDB, and the Service Broker External Activator (SBEA) service is running on the server.
On SourceDB I have a queue, TargetQueue: an insert trigger on a table (Product) sends changes to TargetQueue, and an event notification on TargetQueue nudges my external exe client. Inside the exe client I finally dequeue the data via WAITFOR(RECEIVE TOP (1))... and log it directly to LogDB.
So, when I start the SBEA service and insert the very first record(s) into the table (after deleting all records), TargetQueue fills immediately, but the interval from the insertion into SourceDB to the insertion into LogDB is approximately 3-6 seconds; I guess the event notification accounts for that time, but I'm not sure. For further insertions after this, the interval drops to about 100 ms, as seen below.
[screenshots: timing for the first insertion vs. further insertions]
Why does the first insertion take so long, and why does it start taking long again after all records of the table are deleted? Why do further insertions take less time than the first?
Can I decrease the interval to under 10 ms? I can achieve almost the same structure with SQLCLR in under 10 ms, and the fastest possible response is crucial for my application. (Both structures work locally on the same SQL Server instance.)

You can streamline the process by ditching the External Activator and the Event Notification. Instead, have your program continuously run WAITFOR (RECEIVE ...) directly against the target queue in a loop.
Here's a sample to get you started: https://code.msdn.microsoft.com/Service-Broker-Message-e81c4316
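If it helps, here is a minimal sketch of one iteration of such a loop (the queue name is taken from the question; the 5-second timeout is an arbitrary choice). The client simply re-runs this batch for as long as it should keep listening:

    DECLARE @handle UNIQUEIDENTIFIER,
            @type   SYSNAME,
            @body   VARBINARY(MAX);

    -- Block until a message arrives or 5 s elapse (the timeout lets the
    -- client check a shutdown flag between iterations).
    WAITFOR (
        RECEIVE TOP (1)
            @handle = conversation_handle,
            @type   = message_type_name,
            @body   = message_body
        FROM dbo.TargetQueue
    ), TIMEOUT 5000;

    IF @handle IS NOT NULL
    BEGIN
        -- Process @body here (e.g. write the row to LogDB), then clean up
        -- the dialog when the other side ends it.
        IF @type = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
            END CONVERSATION @handle;
    END

Because the batch blocks inside the engine and wakes as soon as a message is enqueued, there is no notification/activation hop, which should get you well below the latencies you are seeing.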

Related

Can a CLR procedure open a port and listen on it?

I want a CLR procedure that, when called, opens a UDP port to listen for incoming data. It never returns to the caller; the caller has their timeout set to infinite. Will SQL Server allow that?
It is a judgement call. I'd say the most determining factor is how long the socket remains open. If it is of a "transactional" length (which varies widely from system to system), then I would say this is fine. I don't see any issues with clustering, since SQL Server does not have active/active clusters (but it would certainly be possible with NLB). The other determining factor would be whether the socket is single-instance (listening on a non-broadcast port).
So, yes if:
The function/stored procedure is to be called from T-SQL (SQL Server Agent or SSRS?)
The function/stored procedure is expected to exit within a "transaction" time period
(I'd say 30 seconds is typical - but on highly loaded system, a call which takes locks then waits for 30 seconds could be damning, so you will need to consider your use case.)
The function is reentrant (can be run multiple times in parallel)
No if:
The function is called from a C# or other more capable client (do your processing there - perhaps a library?)
The function will wait an undetermined amount of time far in excess of the typical transaction length
The function is single instance (only a single session would be able to successfully execute the function)
Consider a few examples which illustrate the difference:
A SQL Server Agent job which populates a table daily with the rate chart scraped from a mainframe.
A CLR function is installed which takes an XML scraping document, connects to the Mainframe and uses TN 3270 Screen Scraping to return a result-set in the shape specified in the XML document
Must be called from SQL (for ease of maintenance and failure reporting)
Should timeout if it exceeds a set timeout (~30s)
Can be run in multiple sessions in parallel
Answer: A good candidate for an SQL CLR Function
A "Ticker" application where an application listens to UDP broadcasts of price updates and returns the result set to the client as a streamed result set
A CLR function opens a UDP broadcast listening port and asynchronously writes results back to the client. This continues until an application defined end condition (specific packet payload, query cancellation, or timeout) is reached
Must be called from SQL to handle a mix of platforms (some clients on PHP on Linux, others on VB6 on Windows)
Has a defined and user controllable timeout (but partially fails this test since it would likely exceed the transaction duration).
Can run in multiple sessions in parallel (broadcast)
Answer: An "ok" candidate for SQL CLR Function
A syslog server which dumps received events to a table
A function which opens a UDP listener on 0.0.0.0:514 and returns events as a streamed result set.
No real reason it has to be called from SQL since it is buffering to a table regardless
Has no timeout - expected to run 24/7
Calling application (SQL Server Agent) has to ensure this function is always executing exactly once and restart it if it fails (a reimplementation of a service manager)
Answer: Not a good fit for SQL CLR, consider Service Broker + an NT Service

Trigger XMPP message from SQL Server trigger or ON INSERT

I need to be able to send an XMPP message when a row gets inserted into a particular table in our SQL Server database (and have it not make the insert fail if the XMPP server or code isn't available/fails/etc).
Is this possible without causing the insert to fail in some circumstances?
To avoid potentially blocking your database application, I'd recommend NOT sending any external messages directly from a trigger. After all, the trigger executes in the context of the SQL statement that caused it to fire, and if the trigger is delayed, your statement will have to wait until the trigger is done (or has timed out).
Instead, what I'd do is this:
insert a row into a "command" table with enough information to be able to later send your XMPP message - this can be done in the trigger
have a separate piece of code, e.g. a scheduled SQL Server job, that checks that "command" table every x minutes or hours, or however frequently (or infrequently) you need. This job, running separately and independently from your application, should then attempt to send out those messages and handle any potential error situations, while your main application happily works along, unbothered by any delays, timeouts, etc.
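A minimal sketch of that pattern, assuming hypothetical names (dbo.Orders for the watched table, dbo.XmppOutbox for the "command" table); note that the trigger only records what to send:

    -- The "command" table; a scheduled job drains it on its own schedule.
    CREATE TABLE dbo.XmppOutbox (
        Id        INT IDENTITY(1,1) PRIMARY KEY,
        Recipient NVARCHAR(256) NOT NULL,
        Body      NVARCHAR(MAX) NOT NULL,
        CreatedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
        SentAt    DATETIME2 NULL   -- stamped by the job after successful delivery
    );
    GO

    CREATE TRIGGER trg_Orders_XmppNotify ON dbo.Orders
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- Record what to send; never contact the XMPP server from here.
        INSERT INTO dbo.XmppOutbox (Recipient, Body)
        SELECT N'alerts@example.org',
               N'Row inserted with id ' + CAST(i.Id AS NVARCHAR(20))
        FROM inserted AS i;
    END;

The job then selects rows WHERE SentAt IS NULL, attempts the XMPP delivery, and stamps SentAt on success; if the XMPP server is down, the INSERT that fired the trigger is completely unaffected.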

Long-lived DB connections or on-demand connections?

I was assigned to implement an application (in C++) to evaluate pending submissions (a submission is a programming solution to a given problem). A site (in ASP.NET MVC) posts problems and allows the users to submit their answers, then marks the submissions as "pending evaluation" in the database (SQL Server 2008 R2), and that is when my work begins:
I'll have 3 (or maybe more) instances of my application running as services.
Each instance has to check every 2 seconds whether any pending submissions exist in the DB.
If one exists, I retrieve and compile it; after successful compilation I execute it and finally, after execution, check the correctness of the answer. Then I update that submission with the results and delete it from the pending table.
I need to record in the DB the current status of the pending submission (compiling, running, judging).
The time to evaluate a submission is ~1-3 s, and the same instance never evaluates more than one submission at a time.
My problem is: How to connect to the DB server?
I have 3 possible solutions and I need to know which would be better (in order to increase efficiency) and why:
1 - Establish a connection to the DB once, when I instantiate the application, and never close it (close it only when I delete the instance or shut down the server, which theoretically will never happen).
2 - Open a connection every 2 s in order to get the pending submission (if one exists), wait for the full evaluation process to end, set the evaluation results, and then close the connection.
3 - Same as 2, but closing the connection as soon as I retrieve the submission; when compilation finishes, open it again, update the pending submission's status, and close it; when execution finishes, open it again, update the status, and close it; finally, when judging finishes, open it and set the evaluation result.
You don't say which database access library you are using (ODBC, ADO.NET, other?). Opening and closing database connections is a relatively expensive operation, so you should be using some sort of connection pooling scheme in your DB access framework. A pool of connections is kept open for a period of time, and when your app opens a connection it is handed an already-open connection from the pool, which makes it more efficient. Go read about connection pooling for SQL Server.
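As an aside, whichever option you pick: with three instances polling the same table, each poll should claim a submission atomically so that two instances never grab the same row. A rough sketch (all table and column names here are made up for illustration):

    DECLARE @InstanceId NVARCHAR(64) = N'judge-01';  -- identifies this service instance

    -- Claim at most one pending submission; READPAST skips rows that another
    -- instance has already locked, so the pollers never block each other.
    UPDATE TOP (1) s
    SET    s.Status    = N'compiling',
           s.ClaimedBy = @InstanceId
    OUTPUT inserted.SubmissionId, inserted.SourceCode
    FROM   dbo.PendingSubmissions AS s WITH (ROWLOCK, READPAST)
    WHERE  s.Status = N'pending';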

Schedule service broker to receive messages automatically

I am new to SQL Server Service Broker and experimenting with it.
I was able to send messages from one DB and receive those messages in another DB (of the same SQL server) and I am inserting those messages into a table in the receiving DB.
Everything is working so far, but every time I send a message from the source DB, I have to go to the destination DB and run the RECEIVE query manually to fetch the message from the receiving queue and insert it into the table.
I would like to automatically receive the messages from the receive queue as soon as they arrive (or on a schedule, say every 10 minutes) and insert them into my destination table, without doing it manually.
One option is to create a stored procedure and schedule it to run every 10 minutes. I am not sure if that is the recommended way, or if there is a better way to listen to the receiving queue and automatically retrieve the messages as soon as they arrive.
Any help would be appreciated.
What you're looking for is what's called broker activation (specifically, internal activation). In essence, you can "attach" a stored procedure to a service broker queue that will be called when a message shows up on the queue. Read all about it in BOL.
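For reference, a minimal sketch of internal activation; the queue, procedure, and table names below are placeholders for your own:

    -- Procedure the queue will launch whenever messages arrive.
    CREATE PROCEDURE dbo.ProcessReceiveQueue
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @handle UNIQUEIDENTIFIER,
                @type   SYSNAME,
                @body   VARBINARY(MAX);

        WHILE 1 = 1
        BEGIN
            -- Drain the queue; exit once it has been empty for 5 s.
            WAITFOR (
                RECEIVE TOP (1)
                    @handle = conversation_handle,
                    @type   = message_type_name,
                    @body   = message_body
                FROM dbo.ReceiveQueue
            ), TIMEOUT 5000;

            IF @handle IS NULL BREAK;

            IF @type = N'http://schemas.microsoft.com/SQL/ServiceBroker/EndDialog'
                END CONVERSATION @handle;
            ELSE
                INSERT INTO dbo.MessageLog (Body)
                VALUES (CAST(@body AS XML));  -- adjust the cast to your message type

            SET @handle = NULL;  -- reset before the next iteration
        END
    END;
    GO

    -- Attach the procedure to the queue so it fires on arrival.
    ALTER QUEUE dbo.ReceiveQueue
    WITH ACTIVATION (
        STATUS = ON,
        PROCEDURE_NAME = dbo.ProcessReceiveQueue,
        MAX_QUEUE_READERS = 1,
        EXECUTE AS OWNER
    );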

Sending and Receiving SQL Server Service Broker Messages within Nested Transactions

I'd like to use SQL Server 2008 Service Broker to log the progress of a long-running (up to about 30 minutes) transaction that is dynamically created by a stored procedure. I have two goals:
1) To get real-time logging of the dynamically-created statements that make up the transaction so that the progress of the transaction can be monitored remotely,
2) To be able to review the steps that made up the transaction up to a point where a failure may have occurred requiring a rollback.
I cannot simply PRINT (or RAISERROR(msg,0,0)) to the console, because I want to log the progress messages to a table (and have that log remain even if the stored procedure rolls back).
But my understanding is that messages cannot be received from the queue until the sending thread commits (the outer transaction). Is this true? If so, what options do I have?
It is true that you cannot read messages from the service queue until the transaction is committed.
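You can demonstrate this with two sessions; the sketch below assumes the //Demo services, contract, and message type already exist, and that dbo.TargetQueue backs the target service:

    -- Session 1: send inside an open transaction.
    BEGIN TRANSACTION;
    DECLARE @h UNIQUEIDENTIFIER;
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [//Demo/Initiator]
        TO SERVICE   N'//Demo/Target'
        ON CONTRACT  [//Demo/Contract]
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @h
        MESSAGE TYPE [//Demo/Progress] (N'<step n="1"/>');
    -- The transaction is still open, so the message is not yet visible.

    -- Session 2: returns an empty result until session 1 commits.
    RECEIVE TOP (1) message_body FROM dbo.TargetQueue;

    -- Session 1: only now does the message land on the target queue.
    COMMIT TRANSACTION;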
You could try some other methods:
use a SQL CLR procedure to send a .NET Remoting message to a .NET app that receives the messages and then logs them.
use a SQL CLR procedure to write a text file or other log file to disk.
Some other method...
