I would like some advice on the best approach for transferring DB data from one SQL server to another.
This is my scenario:
Customers can make orders for products either through our website (hosted front end) or through our in-office call center (back end)
Customers, once logged in, have a $$ balance that is displayed on the website. This balance is retrieved from the back-end SQL database and is also available via the call center. The products that can be ordered are determined by this $$ balance.
Orders made from the call center get saved to a back-end SQL DB. Orders made from the website get saved to a front-end SQL DB but need to be transferred to the same back-end SQL DB as the call center orders as soon as possible, so that the order fulfillment team can start working on them. The order fulfillment team doesn't have access to the front end.
The $$ balance displayed on the website needs to take into account orders that have not been transferred to the back-end yet. Currently we have a bit flag "HasBeenTransferred" on the order record indicating if it has been transferred or not.
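(To make the dependency concrete, the website effectively has to compute something like the query below. The table and column names are only illustrative, not from our actual schema.)

select c.CurrentBalance
     - coalesce(sum(o.OrderTotal), 0) as AvailableBalance
from Customer c
left join Orders o
       on o.CustomerId = c.CustomerId
      and o.HasBeenTransferred = 0    --//pending web orders reduce the spendable balance
where c.CustomerId = @CustomerId
group by c.CurrentBalance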
What would be the best method for transferring these orders from our front end to our back end SQL Db?
I've looked into SQL replication, but the problem I've come across is that I won't be able to set the "HasBeenTransferred" bit flag reliably using this method, and this is critical for the system to work.
Any help would be greatly appreciated.
You should probably still consider replication for this. Other than setting the flag, your scenario sounds perfect for replication, and you can customize the replication so that the flag gets set. Although, if you use transactional replication, the information could transfer before anyone has evaluated the "HasBeenTransferred" flag.
If you need the data transferred immediately, use either SQL Server continuous (transactional) replication or a trigger that executes a stored procedure, which populates the data on the other server via a linked-server call.
If you don't need it immediately, use a SQL Server Agent job that synchronizes the data at a given interval.
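If you go the job route, a minimal sketch of the transfer step might look like this. Everything here is illustrative: [BackEndServer] is an assumed linked server name and the Orders columns are made up; the point is that flagging and copying happen in the same transaction.

-- Runs on the front-end server, e.g. as a SQL Server Agent job step
SET XACT_ABORT ON;   -- any error rolls the whole batch back
BEGIN TRANSACTION;   -- spans the linked server, so it becomes a distributed (MSDTC) transaction

DECLARE @batch TABLE (OrderId INT PRIMARY KEY);

-- Flag the pending rows first and capture their IDs, so nothing is missed or sent twice
UPDATE dbo.Orders
SET HasBeenTransferred = 1
OUTPUT inserted.OrderId INTO @batch
WHERE HasBeenTransferred = 0;

-- Copy exactly those rows to the back-end database
INSERT INTO [BackEndServer].[BackOfficeDb].dbo.Orders
    (OrderId, CustomerId, OrderTotal, CreatedAt)
SELECT o.OrderId, o.CustomerId, o.OrderTotal, o.CreatedAt
FROM dbo.Orders AS o
JOIN @batch AS b ON b.OrderId = o.OrderId;

COMMIT TRANSACTION;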
Assuming query store is enabled, you can use query performance insights to see a query ID, CPU usage, execution time, and more. The problem I'm tasked with is attributing these queries to departments which share databases. How would you recommend tracing who initiated a query?
Query Store aggregates data across all users and does not try to give you a per-user view of what is happening. (That's not its job: it is about performance management and troubleshooting.) If you want an audit trail of who executed every query in the system, then running an Extended Events (XEvent) session is the right model: track statement-completed and login events so you can stitch together who did what when you link things up later.
Making Query Store track per-user operations would have made it too expensive to keep on all the time in every application.
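A minimal sketch of such a session on Azure SQL Database might look like the following. The session name, the choice of actions, and the ring_buffer target are just one possible configuration, not a prescription.

-- Database-scoped session (Azure SQL Database requires ON DATABASE rather than ON SERVER)
CREATE EVENT SESSION WhoRanWhat ON DATABASE
ADD EVENT sqlserver.sql_statement_completed
(
    ACTION (sqlserver.username, sqlserver.client_app_name,
            sqlserver.client_hostname, sqlserver.sql_text)
)
ADD TARGET package0.ring_buffer;   -- use an event_file target for longer retention
GO

ALTER EVENT SESSION WhoRanWhat ON DATABASE STATE = START;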
Alternatively, you can enable Auditing for the Azure SQL database and check which query was executed and by which user.
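If auditing is configured to write to blob storage, the log can then be read back with sys.fn_get_audit_file. The storage URL below is only a placeholder for wherever your audit files actually land.

-- Read the audit records back from the configured storage container
SELECT event_time, server_principal_name, database_name, statement
FROM sys.fn_get_audit_file(
         'https://<storageaccount>.blob.core.windows.net/sqldbauditlogs/<server>/<database>/',
         DEFAULT, DEFAULT)
ORDER BY event_time DESC;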
I am looking for the best practice for the following scenario.
We have a CRM in our company. When an employee updates the record of a company, there is a trigger that fires a stored procedure, which runs a CRUD statement against the linked server hosting the SQL DB of our website.
Question:
What happens when the connection is lost in the middle of the CRUD and the SQL DB of the website did not get updated? What would be the best way to have the SQL statement processed again when the connection is back?
I read about Service Broker or Transactional Replication. Is one of these more appropriate for that situation?
The configuration:
Local: SQL Server 2008 R2
Website: SQL Server 2008
Here's one way of handling it, assuming that the CRUD statement isn't modal, in the sense that you don't have to give the user a response from the linked server before anything else can happen:
The trigger stores, in a local table, all the meta-information you need to run the CRUD statement on the linked server.
A job runs every n minutes that reads the table, attempts to do the CRUD statements stored in the table, and flags them as done if the linked server returns any kind of success message. The ones that aren't successful stay as they are in the table until the next time the job runs.
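A rough sketch of that pattern, with all object names invented for the illustration ([WebServer] is the assumed linked server, and usp_ApplyCompanyChange stands in for whatever the remote CRUD actually is):

-- Local "outbox" table that the trigger writes to
CREATE TABLE dbo.PendingWebUpdates
(
    Id         INT IDENTITY PRIMARY KEY,
    CompanyId  INT         NOT NULL,
    Operation  VARCHAR(10) NOT NULL,            -- 'INSERT' / 'UPDATE' / 'DELETE'
    Payload    XML         NULL,                -- whatever the remote statement needs
    IsDone     BIT         NOT NULL DEFAULT 0,
    CreatedAt  DATETIME    NOT NULL DEFAULT GETDATE()
);
GO

-- Job step: retry anything not done yet; failures simply stay queued for the next run
DECLARE @id INT, @companyId INT;

DECLARE pending CURSOR LOCAL FAST_FORWARD FOR
    SELECT Id, CompanyId FROM dbo.PendingWebUpdates WHERE IsDone = 0;
OPEN pending;
FETCH NEXT FROM pending INTO @id, @companyId;

WHILE @@FETCH_STATUS = 0
BEGIN
    BEGIN TRY
        -- Requires RPC Out to be enabled on the linked server
        EXEC [WebServer].[WebSiteDb].dbo.usp_ApplyCompanyChange @CompanyId = @companyId;
        UPDATE dbo.PendingWebUpdates SET IsDone = 1 WHERE Id = @id;
    END TRY
    BEGIN CATCH
        PRINT ERROR_MESSAGE();   -- leave the row as-is; the next run retries it
    END CATCH;

    FETCH NEXT FROM pending INTO @id, @companyId;
END

CLOSE pending;
DEALLOCATE pending;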
If the connection failed in the middle of the trigger, you would still be inside the transaction and the data would not be written to either the CRM database or the web database. There is also a potential performance problem: the SQL Server data modification query wouldn't return control to the client until both the local and remote changes had completed. The latter wouldn't be a problem if the query were executed asynchronously, but fire-and-forget isn't a good pattern for writing data.
Service Broker would allow you to write the modification into a message and take care of ensuring that it was delivered in order and properly processed at the remote end. Performance would not suffer as much, because the insert into the queue is designed to complete quickly, returning control to the trigger and allowing the original CRM query to complete.
However, it is quite a bit to set up. Even using Service Broker for simple tasks on a local server takes a fair amount of setup, partly because it is designed to handle secure, multicast, reliable, ordered conversations, so it needs a few layers to work. Once it is all there, it is very reliable and does a lot of the work you would otherwise have to do yourself to set up this sort of distributed conversation.
I have used it in the past to create a callback system from a website: a customer enters their number on the site and requests a callback, this is sent via Service Broker over a VPN from the web server to the back-office server, and a client application waits for a call on the Service Broker queue. It worked very efficiently once it was set up.
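For a sense of the moving parts, here is a stripped-down, single-database sketch; all object names are invented, and a real cross-server setup additionally needs endpoints, routes and security configuration.

-- One-time setup: message type, contract, queues and services
CREATE MESSAGE TYPE CrmChangeMsg VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT CrmChangeContract (CrmChangeMsg SENT BY INITIATOR);
CREATE QUEUE CrmChangeSendQueue;
CREATE QUEUE CrmChangeReceiveQueue;
CREATE SERVICE CrmChangeSendService ON QUEUE CrmChangeSendQueue (CrmChangeContract);
CREATE SERVICE CrmChangeReceiveService ON QUEUE CrmChangeReceiveQueue (CrmChangeContract);

-- Inside the trigger: enqueue the change instead of calling the linked server directly
DECLARE @h UNIQUEIDENTIFIER;
BEGIN DIALOG CONVERSATION @h
    FROM SERVICE CrmChangeSendService
    TO SERVICE 'CrmChangeReceiveService'
    ON CONTRACT CrmChangeContract
    WITH ENCRYPTION = OFF;
SEND ON CONVERSATION @h MESSAGE TYPE CrmChangeMsg (N'<change id="42" />');

-- On the receiving side, an activation procedure picks the message up and applies it
RECEIVE TOP (1) conversation_handle, message_body FROM CrmChangeReceiveQueue;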
Disclaimer: I must use a Microsoft Access database, and I cannot connect my app to a server to subscribe to any service.
I am using VB.net to create a WPF application. I am populating a listview based on records from an access database which I query one time when the application loads and I fill a dataset. I then use LINQ to dataset to display data to the user depending on filters and whatnot.
However, the Access table is modified many times throughout the day, which means the user will have "old data" as the day progresses if they do not reload the application. Is there a way to connect the Access database to the VB.net application so that it can raise an event when a record is added, removed, or modified in the database? I am fine with any code required in the event handler; I just need to figure out a way to trigger a VB.net application event from the Access table.
Think of what I am trying to do as viewing real-time edits to a database table, but within the application. Any help is much appreciated, and let me know if you require any clarification; I just need a general direction and I am happy to research more.
My solution idea:
Create an audit table for MS Access changes
Create a separate worker thread within the user's application to query the audit table for changes every 60 seconds
If changes are found, modify the affected dataset records
Raise an event on dataset record update to refresh any affected objects/properties
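For reference, the kind of audit table I have in mind would look something like this, written in Access (Jet/ACE) style DDL; the name and columns are only an illustration, and the comments are annotations for the reader rather than part of Access SQL.

CREATE TABLE AuditLog
(
    AuditId     COUNTER     CONSTRAINT pkAuditLog PRIMARY KEY,
    TableName   TEXT(64)    NOT NULL,
    RecordId    LONG        NOT NULL,
    ChangeType  TEXT(10)    NOT NULL,    -- 'INSERT' / 'UPDATE' / 'DELETE'
    ChangedAt   DATETIME    NOT NULL     -- when the change was recorded
);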
There are a couple of ways to do what you want, but your process is basically right.
As far as I know, there is no direct way to get events from the database drivers to let you know that something changed, so polling is the only solution.
If the MS Access database is an Access 2010 ACCDB database, and you are using the ACE drivers for it (if Access is not installed on the machine where the app is running), you can use the new data macro triggers to automatically record changes to the tables into an audit table that captures new inserts, updates, deletes, etc. as needed.
This approach is the best one, since data macros run at the ACE database engine level, so they are as efficient and transparent as possible.
If you are using older versions of Access, then you will have to implement the auditing yourself. Allen Browne has a good article on that, and a bit of searching will turn up other solutions as well.
You can also just run queries against the tables you need to monitor.
In any case, you will need to monitor your audit or data table as you mentioned.
You can monitor for changes much more frequently than every 60 seconds; depending on the load on the database, the number of clients, etc., you could easily check every few seconds.
I would recommend though that you:
Keep a permanent connection to the database while your app is running: open a dummy table for reading, and don't close it until you shut down your app. This has no performance cost to anyone, but it will ensure that the expensive lock file creation is done only once, and not for every query you run. This can have a huge performance impact. See this article for more information on why.
Make it easy for your audit table (or for your data table) to be monitored: include a timestamp column that records when a record was created and last modified. This makes checking for changes very quick and efficient: you just need to check whether the most recent record-modified date matches the last one you read (see the sketch at the end of this answer).
With Access 2010, it's easy to add the trigger to do that. With older versions, you'll need to do that at the level of the form.
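To make that last point concrete, the per-poll check can be a single cheap query against an audit (or data) table like the one sketched in the question; [pLastSeenChange] is just a placeholder parameter holding the value remembered from the previous poll. Only when LastChange has moved forward do you pull the detail rows and update the affected dataset records.

SELECT MAX(ChangedAt) AS LastChange
FROM AuditLog;

SELECT TableName, RecordId, ChangeType, ChangedAt
FROM AuditLog
WHERE ChangedAt > [pLastSeenChange]
ORDER BY ChangedAt;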
If you are using SQL Server
Up to SQL Server 2005 you could use Notification Services.
Since SQL Server 2008 R2, it has been replaced by StreamInsight.
Other database management systems and alternatives
Oracle
Handle changes in a middle tier and signal the client
Or poll. This requires you to choose the interval carefully so that you do not miss a change for too long.
In general
When a server has to be able to send messages to clients, it needs to keep a channel/socket open to each client, which can become very expensive when there are a lot of clients. I would advise against server push and suggest intelligent polling instead. Intelligent polling means an interval that is as large as possible, plus appropriate caching on the server to prevent hitting the database too many times for the same data.
I'd like to have some degree of fault tolerance / redundancy with my SQL Server Express database. I know that if I upgrade to a pricier version of SQL Server, I can get "Replication" built in. But I'm wondering if anyone has experience in managing replication on the client side. As in, from my application:
Every time I need to create, update or delete records from the database -- issue the statement to all n servers directly from the client side
Every time I need to read, I can do so from one representative server (other schemes seem possible here, too).
It seems like this logic could potentially be added directly to my Linq-To-SQL Data Context.
Any thoughts?
"Every time I need to create, update or delete records from the database -- issue the statement to all n servers directly from the client side"
Recipe for disaster.
Are you going to have a distributed transaction, or just let some of the servers fail? If you have a distributed transaction, what do you do if a server goes offline for a while?
This type of thing can only work if you do it at a server-side data-portal layer where application servers take in your requests and are aware of your database farm. At that point, you're better off just using a higher grade of SQL Server.
I have managed replication from an in-house client. My database model worked in an insert-only mode for all transactions, and insert-update for lookup data. Deletes were not allowed.
I had a central table that everything was related to. I added a field to this table for a date-time stamp, which defaulted to NULL. I took data from this table and all related tables into a staging area, did a BCP out, cleaned up the staging tables on the receiver side, did a BCP in to the staging tables, performed data validation and then inserted the data.
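As a rough illustration of the stamping step only (the table and column names are invented for the example, and the BCP out/in of the staged rows happens outside this batch):

DECLARE @now DATETIME;
SET @now = GETDATE();

-- Stamp everything not yet exported, then stage exactly those rows for BCP out
UPDATE dbo.CentralTable
SET ExportedAt = @now
WHERE ExportedAt IS NULL;

INSERT INTO staging.CentralTable (Id, SomeField, ExportedAt)
SELECT Id, SomeField, ExportedAt
FROM dbo.CentralTable
WHERE ExportedAt = @now;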
For some basic fault tolerance, you can schedule regular backups.
My company has recently put up a demo database on its remote servers to allow some of our partners to test the beta version of our software.
We noticed, as expected, some bottlenecks in some parts of the program, in particular on the places in which many queries are done.
For example, I have to load a customer from the database, with all the products associated with them.
Normally this operation would be done as
select somefields from customers where id = someId
Store it in a business class
select someotherfields from products where customerid = previousid
Store them in a collection and show/use them.
Imagine a much more complex business logic, which would gather 8 to 12 tables in the same way. With a local database, this would be instant. But by connecting to a remotely hosted SQL Server, this is extremely slow.
We found out that making a single query, like
SELECT Customer.Field1, Customer.Field2, Product.Field1, Product.Field2,
--//LIST THE OTHER 200 FIELDS FROM 10 DIFFERENT TABLES THE SAME WAY
FROM Customer
LEFT JOIN Product ON Product.IdCustomer = Customer.IdCustomer
--//LEFT JOIN OTHER 10 TABLES THE SAME WAY
WHERE Customer.IdCustomer = 10
amazingly improved the speed of the whole execution.
The fact that this is more complex than the individual selects doesn't matter compared to the fact that it is only one round trip to the server.
I am talking about going from ~2000-3000 ms to 80-120 ms.
Here comes the true problem (sorry about the long preface).
How can I save data to a remote SQL Server in an efficient way?
Imagine I have a window/form/dialog/frame/whatever where I need to save to several tables after a given operation.
For example:
INSERT INTO Customer (Field) VALUES ('hello world') --//"IdCustomer" is an identity column
Fetch the new Id from database
INSERT INTO SomeLinkedTable (IdCustomer, Field) VALUES (x, 'new value')
Fetch the new "IdSomeLinkedTable", use it to link some other table, etc.
This is a common way of doing multiple saves in our program. Even if we don't leave the user without a message that the operation is going to take a while, waiting ~10 s for a frequent multi-insert operation is way too much.
Maybe it's a simple server configuration problem, but I swear the firewalls on both sides (ours and the server's) are properly configured to allow SQL Server access.
Anyone encountered the same problem?
If you pulled all your business logic into the app layer, then your app layer must live close to the database.
The solution for making a fast and snappy remote application is to use fiefdoms-and-emissaries, so that you remove the tight coupling to the database.
As an interim solution, you can speed up any operation by removing round trips to the database. If you have to save a master-detail form, you send the entire operation as one single T-SQL batch (INSERT the header, SELECT @@IDENTITY, INSERT all the details). Of course, in practice this means you have moved your business logic into the T-SQL batch, and once the logic lives in the database it is better to have it in stored procedures anyway.
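A minimal sketch of such a batch, reusing the Customer/SomeLinkedTable example from the question (the column names are invented, and SCOPE_IDENTITY() is used here rather than @@IDENTITY because it is not affected by triggers):

-- One round trip: insert the header, capture its identity, then insert the details
BEGIN TRANSACTION;

DECLARE @IdCustomer INT;

INSERT INTO Customer (Field) VALUES ('hello world');
SET @IdCustomer = SCOPE_IDENTITY();

INSERT INTO SomeLinkedTable (IdCustomer, Field) VALUES (@IdCustomer, 'new value');
--//repeat for the other linked tables, capturing SCOPE_IDENTITY() after each insert as needed

COMMIT TRANSACTION;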
At the end of the day, you can't have your cake and eat it too. You can either make an autonomous, disconnected application that supports a remote, occasionally connected DB and demo that, or make a tightly coupled application that must live close to the database and demo that.