Asynchronous Triggers in Azure SQL Database

I'm looking to implement some "Asynchronous Triggers" in Azure SQL Database. There was another question here asking essentially the same thing, with pretty much the same needs as mine, but for SQL Server 2005/2008. The answer there was to use Service Broker, and it's a great answer that would serve my needs perfectly if Service Broker were supported in Azure SQL Database, but it's not.
My specific need: we have a fairly small set of inputs selected and stored by a user. A couple of those inputs identify specific algorithms, and the rest is aggregate-level data, all saved into a single record of a single table. Once the record is saved, we want a trigger to execute the selected algorithms and process the aggregate-level data, breaking it down into tens of thousands of records across a few different tables. This takes 2-8 seconds to process, depending on the algorithms. (I'm sure I could optimize this a bit more, but I don't think I can get it below 2-5 seconds just because of the logic that must be built into it.)
I am not interested in installing SQL Server inside a VM in Azure - I specifically want to continue using Azure SQL Database for many reasons I'm not going to get into in this post.
So my question is: Is there a good/obvious way to do this in Azure SQL Database alone? I can't think of one. The most obvious options that I can see are either not inside Azure SQL Database or are non-starters:
Use real (synchronous) triggers rather than asynchronous ones, but that's a problem because these triggers take many seconds to run while they crunch numbers based on the stored inputs.
Use a poor-man's queueing system in the database (i.e. a new table that is treated as a queue, with records inserted into it as messages) and poll it from an external/outside source (Functions or Web Jobs or something); a sketch of what I mean is below. I'd really like to avoid this because of the added complexity and effort, but frankly, this is what I'm leaning towards if I can't get a better idea from the smart people here!
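To be concrete, the kind of thing I have in mind for that queue table is roughly this (dbo.UserInputs, dbo.AlgorithmQueue and the Id column are just placeholder names):

    -- Hypothetical queue table; the trigger only records the work to be done.
    CREATE TABLE dbo.AlgorithmQueue
    (
        QueueId       bigint IDENTITY(1,1) PRIMARY KEY,
        InputRecordId int       NOT NULL,
        EnqueuedAt    datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        ProcessedAt   datetime2 NULL
    );
    GO

    CREATE TRIGGER dbo.trg_UserInputs_Enqueue
    ON dbo.UserInputs               -- placeholder name for the inputs table
    AFTER INSERT
    AS
    BEGIN
        SET NOCOUNT ON;
        -- The trigger stays fast: it only enqueues, it doesn't crunch any numbers.
        INSERT INTO dbo.AlgorithmQueue (InputRecordId)
        SELECT i.Id FROM inserted AS i;
    END;
    GO

    -- An external worker (Function/WebJob) would then poll and claim one message at a time:
    -- UPDATE TOP (1) q
    -- SET    ProcessedAt = SYSUTCDATETIME()
    -- OUTPUT inserted.InputRecordId
    -- FROM   dbo.AlgorithmQueue AS q WITH (READPAST, UPDLOCK, ROWLOCK)
    -- WHERE  q.ProcessedAt IS NULL;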
Thanks for the help!
(I am posting this here and not on DBA.StackExchange because this is more of an architectural problem than a database problem. You may disagree but because my current best option involves non-database development and the above question I referenced that was almost perfect for me was also located here, I chose to post here instead of there.)

As far as I know, it's not possible to do directly in Azure SQL Database, but there are a few options:
As @gotqn mentioned in a comment, you can use Azure Automation runbooks, applied to Azure SQL Database specifically.
You can also check out database jobs.

You can use Logic Apps. It has a SQL connector that implements an asynchronous trigger...
https://azure.microsoft.com/en-us/services/logic-apps/

Related

Log inserted/updated/deleted rows in all tables for a given database in SQL Server 2008

What's the best way to track/log inserted/updated/deleted rows in all tables for a given database in SQL Server 2008?
Or is there a better "Audit" feature in SQL Server 2008?
The short answer is that there is no single solution that fits all cases. It depends on the system and the requirements, but here are a couple of different approaches.
DML Triggers
Relatively easy to implement, because you only have to write a trigger that works well for one table and then apply it to the other tables (a minimal example is sketched below).
The downside is that it can get messy when you have a lot of tables and even more triggers. Managing 600 triggers for 200 tables (an insert, update and delete trigger per table) is not an easy task.
Also, it might cause a performance impact.
Creating audit triggers in SQL Server
Log changes to database table with trigger
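To give an idea, a minimal audit trigger for a single table could look roughly like this (dbo.Customers, its Id column and dbo.AuditLog are just placeholder names, not a full solution):

    -- Simple audit table plus a trigger for one table.
    CREATE TABLE dbo.AuditLog
    (
        AuditId   bigint IDENTITY(1,1) PRIMARY KEY,
        TableName sysname   NOT NULL,
        KeyValue  int       NOT NULL,
        Operation char(1)   NOT NULL,                          -- I, U or D
        ChangedAt datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        ChangedBy sysname   NOT NULL DEFAULT SUSER_SNAME()
    );
    GO

    CREATE TRIGGER dbo.trg_Customers_Audit
    ON dbo.Customers                 -- placeholder table with an int key column "Id"
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Inserts and the "after" image of updates come from the inserted pseudo-table.
        INSERT INTO dbo.AuditLog (TableName, KeyValue, Operation)
        SELECT 'dbo.Customers', i.Id,
               CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END
        FROM inserted AS i;

        -- Pure deletes only show up in the deleted pseudo-table.
        INSERT INTO dbo.AuditLog (TableName, KeyValue, Operation)
        SELECT 'dbo.Customers', d.Id, 'D'
        FROM deleted AS d
        WHERE NOT EXISTS (SELECT 1 FROM inserted);
    END;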
Change Data Capture
Very easy to implement and natively supported, but only in Enterprise edition, which can cost a lot of $ ;). Another disadvantage is that CDC is still not as evolved as it should be. For example, if you change your schema, history data is lost.
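For reference, enabling CDC is just a couple of system procedure calls; a sketch, assuming a placeholder database MyDatabase and a table dbo.Customers that has a primary key:

    USE MyDatabase;                              -- placeholder database name
    EXEC sys.sp_cdc_enable_db;                   -- enable CDC for the database

    EXEC sys.sp_cdc_enable_table                 -- then enable it per table
         @source_schema        = N'dbo',
         @source_name          = N'Customers',   -- placeholder table name
         @role_name            = NULL,           -- no gating role
         @supports_net_changes = 1;              -- needs a PK or unique index

    -- Changes are then read from the generated function, e.g.:
    -- SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Customers(@from_lsn, @to_lsn, N'all');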
Transaction log analysis
The biggest advantage of this is that all you need to do is put the database in the full recovery model and all the info will be stored in the transaction log.
However, if you want to do this correctly you’ll need a third-party log reader, because reading the log is not natively supported.
Read the log file (*.LDF) in SQL Server 2008
SQL Server Transaction Log Explorer/Analyzer
If you want to implement this I’d recommend you try out some of the third-party tools that exist out there. I've worked with a couple of tools from ApexSQL, but there are also good tools from Idera and Netwrix.
ApexSQL Log – auditing by reading transaction log
ApexSQL Comply – uses traces in the background, then parses those traces and stores the results in a central database.
Disclaimer: I’m not affiliated with any of the companies mentioned above.
Change Data Capture is designed to do what you want, but it requires each table to be set up individually, so depending on the number of tables you have, there may be some logistics to it. It will also only keep the data in the capture tables for a couple of days by default, so you may need an SSIS package to pull it out and store it for longer periods.
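If the default retention window is too short, it can also be extended via the CDC cleanup job instead of (or in addition to) pulling the data out with SSIS; a hedged example (the 14-day figure is arbitrary):

    -- Retention for CDC capture tables is controlled by the cleanup job (value in minutes).
    -- Example: keep 14 days of change data instead of the default of roughly 3 days.
    EXEC sys.sp_cdc_change_job
         @job_type  = N'cleanup',
         @retention = 20160;        -- 14 days * 24 hours * 60 minutes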
I don't remember whether there is already some tool for this, but you could always use triggers (inside a trigger you have access to the pseudo-tables with the changed rows: INSERTED and DELETED). Unfortunately, it could be quite a lot of work if you want to track all tables. I believe there should be some simpler solution, but, as I said, I don't remember one.
EDIT.
Maybe this could be helpful:
Change Tracking:
http://msdn.microsoft.com/en-us/library/cc280462.aspx
http://msdn.microsoft.com/en-us/library/cc280386.aspx
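A minimal sketch of turning Change Tracking on (MyDatabase and dbo.Customers are placeholder names; the table needs a primary key):

    -- Enable change tracking for the database, then per table.
    ALTER DATABASE MyDatabase
    SET CHANGE_TRACKING = ON
        (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.Customers
    ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = ON);

    -- Later, ask what changed since a previously stored version number:
    -- SELECT ct.* FROM CHANGETABLE(CHANGES dbo.Customers, @last_sync_version) AS ct;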
This allows you to do audits at the database level; it may or may not be enough to meet the business requirements, as database records usually don't make all that much sense without the logic to glue them together. For instance, knowing that user x inserted a record into the "time_booked" table with a foreign key to the "projects", "users", "time_status" tables may not make all that much sense without the SQL query to glue those 4 tables together.
You may also need to have each database user connect with their own user ID - this is fine with integrated security and a client app, but probably won't work with a website using a connection pool.
The SQL Server transaction logs are not something you can analyze just like that. There are some third-party tools available to read the logs, but as far as I know you can't query them for statistics and such. If you need this kind of info you'll have to create some sort of auditing to capture all these events in separate tables. For capturing schema changes you can use "DDL triggers".
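A bare-bones DDL trigger along those lines might look like this (dbo.DdlEventLog is just an example name):

    -- Log every DDL event in the database into a simple table.
    CREATE TABLE dbo.DdlEventLog
    (
        EventId   bigint IDENTITY(1,1) PRIMARY KEY,
        EventTime datetime2 NOT NULL DEFAULT SYSUTCDATETIME(),
        LoginName sysname   NOT NULL DEFAULT SUSER_SNAME(),
        EventData xml       NOT NULL
    );
    GO

    CREATE TRIGGER trg_LogDdlEvents
    ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
    BEGIN
        SET NOCOUNT ON;
        -- EVENTDATA() returns the statement, object, login, etc. as XML.
        INSERT INTO dbo.DdlEventLog (EventData)
        VALUES (EVENTDATA());
    END;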

sql server replication algorithm

Does anyone know how the underlying replication model in SQL Server works? Does it essentially depend on UTC datetime values to determine whether something is new, or does it keep a table of all the changes (like a table of tableID+rowID pairs that have changed)?
I am building my own "replication" system and was planning on using the dates to know what to replicate. Then I started wondering what would happen if the date got off on the computer for some reason. The obvious choice is to keep a log of the changes as you go and, once you replicate those changes, remove them from the change log. But that's a lot of extra work compared to just checking dates.
I figure if sql server replication works by just checking the dates, then that should be good enough for me.
Any wisdom here?
thanks
As a transaction occurs in SQL Server, it is written to the transaction log along with information pertinent to the transaction.
SQL Server replication uses this transaction log to determine which transactions have not yet been processed and to move them to the subscriber. There is a lot more going on under the hood to keep track of the intersection between transactions, publications, subscriptions, etc. but I will leave that to MSDN documentation about SQL Server replication http://msdn.microsoft.com/en-us/library/ms151198.aspx
Moving on to your point about building your own replication system:
Do not build your own replication system. There are too many complications involved that will cause you to spend many, many days of work. You will be much better off using the items that ship with SQL Server.
SQL Server replication methods are pretty impressive out of the box.
If you outline what causes you to think in terms of building your own replication system, we can help you figure out how to use existing items to provision what you need.
Also, read up as much as you can here to get an idea of what it can do for you http://msdn.microsoft.com/en-us/library/ms151198.aspx
SQL Server has a LogReader job that is aptly named. Replication reads the transaction log and applies appropriate transactions to the subscribing databases.
For one thing, SQL Server (and it's not the only one) supports multiple replication algorithms.
You can find details about the ones implemented in SQL Server 2008 here. Read the "X Replication Overview" topic first, then follow the "How X Replication Works" topic for more details.

Linked server vs integration

We have an application which needs to interact with 3 different databases
(SQL Server) to fetch the user details and display them on a web page. Is it a good option to use a linked server or should we copy the user details (via some daily job) to the application database?
Using a linked server will give you a round trip delay every time you query the data. If you only query the data once per day or per session this might be acceptable. If however you are issuing many queries to these servers you may find that the performance is so poor that your application is unusable.
You could use SQL replication to push (or pull) the data from each of the servers into a local copy on the application server. This will provide you with much better query performance (no round-trip delay) while also ensuring that you have the latest data. There are lots of options with SQL replication; you should be able to find something that suits your needs.
For more information on SQL Replication see http://technet.microsoft.com/en-us/library/ms151198.aspx
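To make the trade-off concrete, the difference at query time is roughly the following (server, database and table names are placeholders):

    DECLARE @UserId int = 42;

    -- Linked-server query: every execution pays a network round trip to USERDB01.
    SELECT u.UserId, u.DisplayName
    FROM [USERDB01].[UsersDb].[dbo].[Users] AS u
    WHERE u.UserId = @UserId;

    -- With replication (or a nightly copy job), the same query hits a local table:
    SELECT u.UserId, u.DisplayName
    FROM dbo.Users AS u          -- locally replicated copy
    WHERE u.UserId = @UserId;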
A linked server is only going to allow your databases to talk to each other. If the application is interacting with three discrete databases, then you simply need discrete connections. I would not recommend heavily using the linked servers unless you are moving a lot of data (since picking it up into the application and putting it into another database may take even longer).

what's a good way to synchronize a sql server 2008 database from a 2005 database automatically?

Ok, the scenario is... two servers, on completely different parts of the internet.
The sql 2008 database just needs to get data updates and schema changes. It doesn't need to send anything to the 2005 database. Basically just suck data and schema as efficiently as possible automatically as a scheduled task.
The database is quite huge... but the changes per day are probably around 20-30 megabytes of data.
I can't run any of the inbuilt replication on the 2005 database.
I've had a wee look at the Sync Framework, I think that might do what I want, but seems a bit painful and requires a bit of work to get going. I'm wondering if there is tooling out there to make this easier?
or?? not quite sure what my options are.
"I can't run any of the inbuilt replication on the 2005 database."
Any reason for this restriction? Replication is the way to solve your problem. Without a replication infrastructure you simply won't be able to detect data changes or schema changes. There are only two ways to detect the changes: either via triggers and tracking tables (and that is Merge Replication) or via the database log (and that is Transactional Replication).
Sync Framework itself, if it would be used, would require either Change Tracking or Change Data Capture. But these are 2008 specific technologies and they're really nothing else but replication in disguise (they use the very same infrastructure used by Merge and respectively Transactional Replication).
Even if you want to roll your own, you'll find out quickly that shipping the changes over is the trivial part, e.g. using Service Broker for reliable delivery semantics. The real hard problem is detecting the changes, and that is hard. Diff-ing a 'quite huge' database over the internet to detect changes is just not going to work. So relying on the built-in infrastructure to detect changes, namely the two forms of replication, is just the obvious solution.
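For what it's worth, if rolling your own ever does become unavoidable, the usual homegrown approach for detecting data changes is a rowversion column rather than datetime values, since rowversion increases monotonically per database and is immune to clock drift. A rough sketch (dbo.Orders is a placeholder table; this catches inserts/updates only, not deletes or schema changes):

    -- Each table to be synced gets a rowversion column; the value changes on every insert/update.
    ALTER TABLE dbo.Orders ADD RowVer rowversion;
    GO

    -- The sync job remembers the highest value it has shipped so far and asks for anything newer.
    DECLARE @LastSyncedVersion binary(8) = 0x0000000000000000;   -- persisted by the sync job

    SELECT o.*
    FROM dbo.Orders AS o
    WHERE o.RowVer > @LastSyncedVersion
    ORDER BY o.RowVer;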
Could you automate RedGate's SQL Compare and/or SQL Data Compare? http://www.red-gate.com/products/SQL_Compare/index.htm ... you could at least try that out with the 14-day trial and see if it is worth the investment. Much cheaper than tooling it yourself, IMHO.
Maybe these questions will help you:
Microsoft Sync Framework Or Replication
SQL Server Data Archive Solution
Is there a way to replicate some data not all data in db by sql server replication?
You can build an application that generates a script from the data changed in your chosen period and then runs that script on the target server.

How would you migrate hundreds of MS Access databases to a central service?

We have literally 100's of Access databases floating around the network. Some with light usage and some with quite heavy usage, and some no usage whatsoever. What we would like to do is centralise these databases onto a managed database and retain as much as possible of the reports and forms within them.
The benefits of doing this would be to have some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.
There are no real constraints on the RDBMS (Oracle, MS SQL Server) or the stack it would run on (LAMP, ASP.NET, Java), and there obviously won't be a silver bullet for this. We would like something that can remove the initial grunt work in an automated fashion.
We upsize users to SQL Server (either using the Upsizing Wizard or by hand). It's usually pretty straightforward: replace all the Access tables with linked tables to the SQL Server and keep all the forms/reports/macros in Access. The investment in Access isn't lost and the users can keep going, business as usual. You get the reliability of SQL Server and centralized backups. Keep in mind we’ve done this for a few large Access databases, not hundreds. I'd do a pilot of a few dozen and see how it works out.
UPDATE:
I just found this, the SQL Server Migration Assistant; it might be worth a look:
http://www.microsoft.com/sql/solutions/migration/default.mspx
Update: Yes, some refactoring will be necessary for poorly designed databases. As for how to handle Access sprawl? I've run into this at companies with lots of technical users (engineers especially are the worst for this... and Excel sprawl). We did an audit and (after backing up) deleted any databases that hadn't been touched in over a year. "Owners" were assigned based on the location and/or data in the database. If the database was in "S:\quality\test_dept", then the quality manager and head test engineer had to take ownership of it or we deleted it (again, after backing it up).
Upsizing an Access application is no magic bullet. It may be that some things will be faster, but some types of operations will be real dogs. That means that an upsized app has to be tested thoroughly and performance bottlenecks addressed, usually by moving the data retrieval logic server-side (views, stored procedures, passthrough queries).
It's not really an answer to the question, though.
I don't think there is any automated answer to the problem. Indeed, I'd say this is a people problem and not a programming problem at all. Somebody has to survey the network and determine ownership of all the Access databases and then interview the users to find out what's in use and what's not. Then each app should be evaluated as to whether or not it should be folded into an Enterprise-wide data store/app, or whether its original implementation as a small app for a few users was the better approach.
That's not the answer you want to hear, but it's the right answer precisely because it's a people/management problem, not a programming task.
Oracle has a migration workbench to port MS Access systems to Oracle Application Express, which would be worth investigating.
http://apex.oracle.com
So? Dedicate a server to your Access databases.
Now you have the benefit of some sort of usage tracking, and also the ability to pay more attention to some of the important decentralised data that is stored in these apps.
This is what you were going to do anyway, only you wanted to use a different database engine instead of NTFS.
And now you have to force the users onto your server.
Well, you can encourage them by telling them that you aren't going to overwrite their data with old backups anymore, because now you will own the data and that sort of thing won't happen again.
Also, you can tell them that their applications will run faster now, because you are going to exclude the folder from on-access virus scanning (you don't do that to your other databases, which is why they are full of sql-injection malware, but these databases won't be exposed to the internet), and planning to turn packet signing off (you won't need that on a dedicated server: it's only for people who put their file-share on their domain-server).
Easy upgrade path, improved service to users, greater centralization and control for IT. Everyone's a winner.
Further to David Fenton's comments
Your administrative rule will be something like this:
If the data that is in the database is just being used by one user, for their own work (alone), then they can keep it in their own network share.
If the data in the database is used by more than one person (even if it is only two), then that database must go on a central server and come under IT's management (backups, schema changes, interfaces, etc.). This is because someone experienced needs to coordinate the whole show, or we risk wasting the time/resources of the next guy down the line.
