Are SQL transactions reliable over the internet? - sql-server

I have a cloud-based solution (an Azure Function which reads JSON from Service Bus and converts it to a C# object) that inserts data into an on-premises SQL Server with multiple INSERT statements. Are SQL transactions reliable and secure over the internet?
What happens if the connection fails during insertion into the database?

If the connection fails during insertion, the transaction does not complete and no rows are inserted.
When you call INSERT inside a transaction, no rows are permanently written until you call COMMIT TRANSACTION; if the connection drops first, the open transaction is rolled back. If you use EF and the transaction is not committed by the end of the using block, a rollback is issued automatically.
This is one of the typical scenarios transactions are designed for.
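For illustration, a minimal sketch of that pattern with Entity Framework (MessagesContext, MessageRow and parsedItems are placeholder names for this example, not something from the question):

// Insert several rows atomically from the Azure Function handler.
using var db = new MessagesContext();
using var tx = db.Database.BeginTransaction();

foreach (var item in parsedItems)            // objects deserialized from the Service Bus JSON
{
    db.MessageRows.Add(new MessageRow { Payload = item.Payload });
}

db.SaveChanges();    // the INSERTs run inside the open transaction
tx.Commit();         // only now do the rows become durable

// If the connection to the on-prem server fails before Commit, or the using
// block is left without Commit being called, the transaction is rolled back
// and none of this batch's rows remain in the table.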

Related

Can I update an SQL Server database table as well as a DynamoDB table in one operation?

We have a user-facing web app backed by SQL Server that allows users to update a table in our SQL Server; the same operation also needs to update a document record in our DynamoDB table.
How can I reliably ensure that both commits have taken place? We can allow up to a few seconds of latency.
Short answer: you can't.
DynamoDB doesn't support two-phase (a.k.a. distributed) commit the way most (all?) relational databases do.
Long answer: given that once DDB returns a successful (2xx) response the record is durable, you might consider the following (sketched in code below):
start transaction
write to SQL table
write to DDB
if DDB returns 200, commit SQL transaction
else: rollback SQL server transaction.
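A minimal C# sketch of that flow, assuming the AWS SDK for .NET (AWSSDK.DynamoDBv2) and made-up table and column names (dbo.Orders, OrderDocuments):

using System;
using System.Collections.Generic;
using System.Net;
using System.Threading.Tasks;
using Amazon.DynamoDBv2;
using Amazon.DynamoDBv2.Model;
using Microsoft.Data.SqlClient;   // or System.Data.SqlClient

static async Task SaveOrderAsync(string sqlConnectionString, Guid orderId, string payloadJson)
{
    using var conn = new SqlConnection(sqlConnectionString);
    conn.Open();
    using var tx = conn.BeginTransaction();

    try
    {
        // 1. Write to the SQL table inside the open transaction.
        var cmd = new SqlCommand(
            "INSERT INTO dbo.Orders (Id, Payload) VALUES (@id, @payload)", conn, tx);
        cmd.Parameters.AddWithValue("@id", orderId);
        cmd.Parameters.AddWithValue("@payload", payloadJson);
        cmd.ExecuteNonQuery();

        // 2. Write to DynamoDB.
        var ddb = new AmazonDynamoDBClient();
        var response = await ddb.PutItemAsync(new PutItemRequest
        {
            TableName = "OrderDocuments",
            Item = new Dictionary<string, AttributeValue>
            {
                ["Id"]      = new AttributeValue { S = orderId.ToString() },
                ["Payload"] = new AttributeValue { S = payloadJson }
            }
        });

        // 3. Commit the SQL transaction only once the DDB write is durable.
        if (response.HttpStatusCode == HttpStatusCode.OK)
            tx.Commit();
        else
            tx.Rollback();
    }
    catch
    {
        tx.Rollback();   // DDB call failed or SQL error: undo the SQL insert
        throw;
    }
}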
Another thought would be to take advantage of DDB streams. Have your app write just to DDB and have another application (a Lambda) pick up the change and write it to SQL Server.
The first option is "easier" but less robust. There is no guarantee that something couldn't go wrong (the app crashing) between the request to DDB and your app seeing the response, thus rolling back the SQL write while DDB has been updated.
The second option is more work; basically you're building (buying?) a data replication engine from DDB to SQL Server. But since the DDB stream data lives for 24 hours, you've got that long to fix any problems on the SQL Server side and pick up where you left off.
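A sketch of that second option, assuming a Lambda subscribed to the table's stream (Amazon.Lambda.DynamoDBEvents) and the same hypothetical dbo.Orders table:

using System;
using Amazon.Lambda.Core;
using Amazon.Lambda.DynamoDBEvents;
using Microsoft.Data.SqlClient;

public class StreamToSqlFunction
{
    // Invoked for each batch of stream records. Replays are possible, so the
    // SQL write is an idempotent upsert on the key.
    public void FunctionHandler(DynamoDBEvent dynamoEvent, ILambdaContext context)
    {
        using var conn = new SqlConnection(Environment.GetEnvironmentVariable("SQL_CONN"));
        conn.Open();

        foreach (var record in dynamoEvent.Records)
        {
            if (record.Dynamodb.NewImage == null) continue;   // e.g. delete events

            var id      = record.Dynamodb.NewImage["Id"].S;
            var payload = record.Dynamodb.NewImage["Payload"].S;

            var cmd = new SqlCommand(
                @"MERGE dbo.Orders AS t
                  USING (SELECT @id AS Id) AS s ON t.Id = s.Id
                  WHEN MATCHED THEN UPDATE SET Payload = @payload
                  WHEN NOT MATCHED THEN INSERT (Id, Payload) VALUES (@id, @payload);", conn);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.Parameters.AddWithValue("@payload", payload);
            cmd.ExecuteNonQuery();
        }
    }
}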
Why not just leverage the SQL transaction (since Dynamo transactions are harder to work with)?
<begin trx>
save item in SQL db
save item in Dynamo DB
<commit trx>
If the save in SQL fails, it is rolled back.
If the SQL save succeeds but the Dynamo save fails, you get an exception and roll back.
If both succeed, you commit.
Am I missing something?

Transaction from VB.NET to insert data into a different SQL Server

I have two SQL Servers. I want to copy data from the first one into the second and then update the first with the value sent=1; they are not in the same location.
What I want to do, exactly, is a transaction in VB.NET that combines all the queries that will be applied to both databases.
Is it possible? If I put the queries for the first DB in a transaction and, only if that transaction succeeds, execute the second transaction (the one applied to the second DB), losing the connection in between could cause a problem. Can anyone help me join all the queries into the same transaction?
In VB you would use the TransactionScope class to wrap your data access operations. It creates a local transaction to begin with and then promotes it to a distributed transaction when required. The Microsoft Distributed Transaction Coordinator (MSDTC) must be running for that to be possible, so that's something for you to read up on.
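A sketch of that pattern (shown in C#, but the same System.Transactions classes are used identically from VB.NET; the connection strings, the dbo.Items table and the sent column are assumptions, and MSDTC must be available on both machines):

using System.Transactions;
using Microsoft.Data.SqlClient;   // System.Data.SqlClient on older frameworks

using (var scope = new TransactionScope())
{
    using (var source = new SqlConnection(firstServerConnStr))
    using (var target = new SqlConnection(secondServerConnStr))
    {
        source.Open();
        target.Open();   // a second connection to a different server escalates
                         // the scope to a distributed (MSDTC) transaction

        // Copy the unsent rows from server 1 to server 2.
        using (var reader = new SqlCommand(
                   "SELECT Id, Data FROM dbo.Items WHERE sent = 0", source).ExecuteReader())
        {
            while (reader.Read())
            {
                var insert = new SqlCommand(
                    "INSERT INTO dbo.Items (Id, Data) VALUES (@id, @data)", target);
                insert.Parameters.AddWithValue("@id", reader["Id"]);
                insert.Parameters.AddWithValue("@data", reader["Data"]);
                insert.ExecuteNonQuery();
            }
        }

        // Mark the rows as sent on the first server.
        new SqlCommand("UPDATE dbo.Items SET sent = 1 WHERE sent = 0", source)
            .ExecuteNonQuery();

        scope.Complete();   // commits on both servers; if Complete() is never
                            // called, both servers roll back together
    }
}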

Azure SQL Database trigger to insert audit info into Azure Table

I am working on a database auditing solution and was thinking of having SQL Server triggers capture changes and insert them into an auditing table. Since this is a SQL Azure database and it will be fairly large, I am concerned about the cost of a growing database due to auditing.
In order to cut down on the costs of auditing, I am considering storing the audit table (or tables) in Azure Tables instead of Azure SQL databases. So the question becomes: how do I get the SQL Server trigger to put the changed data into Azure Tables?
The only thing I can come up with is to have an audit table (or tables) in the SQL database so the trigger can insert the rows locally, and then have a Worker Role pull rows from it every X seconds, move them to Azure Tables, and delete them from the SQL table so it doesn't grow large.
Is there a better way to do this integration? Can I somehow put a message in a queue from a trigger?
Azure SQL Database (formerly SQL Azure) doesn't support CLR (hence no EXTERNAL NAME trigger parameter), so there's no way for your triggers to do anything outside of T-SQL. If you want audit content to go to a table, you could take the approach you came up with (temporarily write to a SQL table, then move the content periodically to Table storage). There are other approaches you could take (and this would be opinion/subjective, frowned upon here), but going with the queue concept for a minute, since you asked about queues, here is what you could do with Azure Queues:
You could use an Azure queue to specify an item to insert/update in your SQL database. The queue-processing code would then be responsible for performing the update and writing to the Azure table. Since queue messages must be explicitly deleted after processing, you can simply repeat the processing if something fails during execution (e.g. you write to SQL but fail before writing to table storage); the message becomes visible for reading again if you don't delete it before its visibility timeout elapses. As long as your operations are idempotent, you'd be OK with this pattern.
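A sketch of that queue-driven loop, using the current Azure SDK packages (Azure.Storage.Queues and Azure.Data.Tables, which postdate the original answer); the queue, table and payload names are placeholders:

using System.Threading.Tasks;
using Azure.Data.Tables;
using Azure.Storage.Queues;

static async Task ProcessAuditQueueAsync(string storageConnectionString)
{
    var queue      = new QueueClient(storageConnectionString, "audit-requests");
    var auditTable = new TableClient(storageConnectionString, "AuditLog");

    var messages = await queue.ReceiveMessagesAsync(maxMessages: 16);
    foreach (var msg in messages.Value)
    {
        var payload = msg.Body.ToString();   // whatever the producer enqueued

        // ... perform the idempotent SQL update described above (omitted) ...

        // Deterministic row key so a replayed message overwrites rather than duplicates.
        await auditTable.UpsertEntityAsync(new TableEntity("audit", msg.MessageId)
        {
            ["Payload"] = payload
        });

        // Delete only after both writes succeed; if we crash before this line,
        // the message reappears after its visibility timeout and is reprocessed.
        await queue.DeleteMessageAsync(msg.MessageId, msg.PopReceipt);
    }
}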
A cheaper solution than using Worker Roles would be a combination of Azure scheduled tasks (you can enable them for free to run every 15 minutes within Mobile Apps) and Azure Web Sites. Basically, the scheduled job runs every 15 minutes and makes an HTTP call to some code you have running within your Azure Web Site; that code does the same work you had outlined for your Worker Role.
Alternatively, use SQL Server system-versioned temporal tables to automatically handle writing audited records (i.e., changes) to corresponding history tables.

Using SAVE TRANSACTION with a linked server

Inside a transaction that has a savepoint, I have to join with a table that is on a linked server. When I try to do it, I get the error message:
“Cannot use SAVE TRANSACTION within a distributed transaction”
The remote table's data rarely changes; it is almost fixed. Is it possible to tell SQL Server to exclude this table from the transaction? I've tried a (NOLOCK) hint, but it isn't possible to use this hint on a table in a linked server.
Does anyone know of a workaround? I'm using the old SQL Server 2000.
One thing that you could do is to make a local copy of the remote table before you start the transaction. I know this may sound like a lot of overhead, but remote joins are frequently a performance problem anyway, and the standard fix for that is also to make a local copy.
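In T-SQL this is typically a SELECT ... INTO a local or temp table from the linked server, executed before BEGIN TRANSACTION so no distributed transaction is started. If the copy is driven from client code instead, a rough sketch with SqlBulkCopy might look like this (server, connection string and table names are placeholders):

using System.Data.SqlClient;

// Refresh a local staging copy of the remote table before the transaction
// that uses SAVE TRANSACTION begins.
using var remote = new SqlConnection(remoteServerConnStr);
using var local  = new SqlConnection(localServerConnStr);
remote.Open();
local.Open();

// Clear the staging table outside of the savepoint transaction.
new SqlCommand("TRUNCATE TABLE dbo.RemoteLookup_Staging", local).ExecuteNonQuery();

// Stream the remote rows into the local staging table.
using var reader = new SqlCommand("SELECT * FROM dbo.RemoteLookup", remote).ExecuteReader();
using var bulk   = new SqlBulkCopy(local) { DestinationTableName = "dbo.RemoteLookup_Staging" };
bulk.WriteToServer(reader);

// The transaction can now join against dbo.RemoteLookup_Staging locally,
// so no distributed transaction is started and SAVE TRANSACTION keeps working.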
According to this link, the ability to use SAVEPOINTs in a distributed transaction was dropped in SQL 7.
To allow application migration from Microsoft SQL Server 6.5 when savepoints inside distributed transactions are in use, Microsoft SQL Server 2000 Service Pack 1 introduces a trace flag that allows a savepoint within a distributed transaction. The trace flag is 8599 and can be turned on during SQL Server startup or within an individual session (that is, prior to enabling a distributed transaction with a BEGIN DISTRIBUTED TRANSACTION statement) by using the DBCC TRACEON command. When trace flag 8599 is set to ON, SQL Server allows you to use a savepoint within a distributed transaction.
So unfortunately, you may either have to drop the bounding ACID transaction, or change the SPROC on the remote server so that it doesn't use SAVEPOINTs.
On a side note (although I have seen that you have tagged it SQL Server 2000), SQL Server 2008 has the remote proc trans option for this.
In this case, if the distributed table is not too large, I would copy it to a temp table. If possible, include any filtering to get the number of rows to a minimum. Then you can proceed normally. Another option, since the data changes rarely, is to copy the data to a permanent table and check whether anything has changed, to avoid sending too much data over the network every time you run the transaction. You could pull over only the recent changes.
If you wish to handle the transaction at the UI level and you have Visual Studio 2008 / .NET Framework 3.5 or later, then you can wrap your logic with the TransactionScope class. If you don't have any front ends and you are working only in SQL Server, kindly ignore my answer.

SQL Server Transactions

When should you use ADO.NET transactions and when SQL Server transactions?
Are there any reasons why one would choose ADO.NET transactions over SQL Server transactions when there is only one stored procedure or query fired against the database within that transaction?
If you have multiple resources participating in the transaction, use ADO.NET (and let DTC/Enterprise Services manage it for you); otherwise you can use a SQL Server transaction inside the stored procedure.
I wouldn't use ADO.NET transactions for one call to one stored proc which runs one query. The purpose of an ADO.NET transaction is to wrap the execution of multiple calls into a single transaction, and your case does not fall into that category.
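For contrast, a minimal sketch of the multi-call case where a client-side ADO.NET transaction does make sense (the procedure names, parameters and variables are hypothetical):

using Microsoft.Data.SqlClient;   // or System.Data.SqlClient

using var conn = new SqlConnection(connectionString);
conn.Open();
using var tx = conn.BeginTransaction();

try
{
    // Two separate stored-procedure calls that must succeed or fail together.
    var debit = new SqlCommand("dbo.DebitAccount", conn, tx)
    {
        CommandType = System.Data.CommandType.StoredProcedure
    };
    debit.Parameters.AddWithValue("@accountId", fromAccount);
    debit.Parameters.AddWithValue("@amount", amount);
    debit.ExecuteNonQuery();

    var credit = new SqlCommand("dbo.CreditAccount", conn, tx)
    {
        CommandType = System.Data.CommandType.StoredProcedure
    };
    credit.Parameters.AddWithValue("@accountId", toAccount);
    credit.Parameters.AddWithValue("@amount", amount);
    credit.ExecuteNonQuery();

    tx.Commit();
}
catch
{
    tx.Rollback();   // undo both calls if either one fails
    throw;
}

// For a single call to a single proc, a BEGIN TRAN / COMMIT inside the
// procedure itself is simpler and avoids managing the transaction client-side.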
