I'm working on a bug in a legacy Windows Mobile 5 app, which uses SQL CE Replication to sync a SQL CE database with a SQL Server 2005 or 2008 database (using Merge replication).
There is some behavior in the application which I don't believe is related to the bug, but I was curious what the side-effects of it might be. The code ends up calling "ReinitializeSubscription(true)" on the SqlCeReplication object before it calls Synchronize. The "true" flag just tells the reinit to upload any changes before reinitializing, which is fine. I don't believe there is a concrete reason to reinit the sub each time, but that's what it does...
What is the impact of calling ReinitializeSubscription on the SqlCeReplication object before each Synchronize call? Is it just a performance hit, or is it actually doing something different with the data synchronization, compared to not calling ReinitializeSubscription before Synchronize?
Unless you are experiencing major client issues, do not reinit each time you sync. That will download the entire client database (after uploading changes), consuming time and bandwidth and giving a poor user experience (unless your data set is very small and you have high bandwidth, of course).
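For contrast, here is a minimal sketch of the normal sync path, where the subscription is created once and every later run only exchanges deltas; all connection details below are placeholders, not taken from the question:

    // Minimal sketch: create the subscription once, then just Synchronize.
    // Every value assigned below is a placeholder.
    using System.Data.SqlServerCe;

    class SyncRunner
    {
        static void RunSync(bool firstRun)
        {
            using (var repl = new SqlCeReplication())
            {
                repl.InternetUrl = "https://example.com/sqlcesa35.dll"; // placeholder agent URL
                repl.Publisher = "PUBLISHER_SERVER";                    // placeholder
                repl.PublisherDatabase = "PubDb";
                repl.Publication = "PubName";
                repl.Subscriber = "DeviceSubscriber";
                repl.SubscriberConnectionString = @"Data Source=\Program Files\App\local.sdf";

                if (firstRun)
                {
                    // Done once, to create the local subscription database.
                    repl.AddSubscription(AddOption.CreateDatabase);
                }

                // Normal path: exchange changes only. ReinitializeSubscription(true)
                // is reserved for recovery, not called before every sync.
                repl.Synchronize();
            }
        }
    }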
I am trying to open a web URL through SQL Server 2012. We have tried SQLCLR, but it's outdated. We also tried to run a batch file, and it would get stuck in the executing process.
EXEC xp_cmdshell 'c:\PATH.bat'
That's the code we used to open the batch file, and then it gets stuck in "executing query"; I waited 5 minutes and still nothing popped up.
We have checked the file permissions and everything is allowed. It's the 4th time I've tried to do this and I couldn't manage. Can someone please show me an alternate solution?
While there are pros and cons to accessing a URL from within SQL Server, SQLCLR is most definitely not outdated. Even if you have no custom Assemblies, it is still being used internally for several things:
HierarchyId, Geometry, and Geography data types
Replication
Several built-in functions such as FORMAT, TRY_PARSE, etc
etc
For more info on what SQLCLR actually is and can do, please see the series of articles I am writing on this topic on SQL Server Central: Stairway to SQLCLR (free registration is required to read content on that site, but it's worth it :-). Level 1 ("What is SQLCLR?") is a fairly comprehensive look at what SQLCLR both is and is not.
If you want a command line utility then you might be able to get away with using curl.
If you want a pre-made SQLCLR function that can handle this so that you don't need to worry about the learning curve of doing such an operation in SQLCLR, then that is available in the SQL# library that I created (but it is not in the Free version; only available in the Full / paid version).
If you are going to be making this URL / Web Service call from within a Trigger (whether it is a SQLCLR Trigger or a T-SQL Trigger calling a SQLCLR object), then you need to be very careful, since Triggers execute within a system-created Transaction (if no explicit Transaction already exists). What this means is that the actual committing of the Transaction (i.e. the true saving of the change to the DB) will wait until the external call completes. The two problems you run into here are:
The Web Service does not respond super quickly (and it needs to respond super quickly)
There are more concurrent requests made to the specific URI than .NET allows, in which case .NET waits until there is an opening. This is controlled by ServicePointManager.DefaultConnectionLimit, which can be accessed via the HttpWebRequest object's ServicePoint property. The default limit is 2, so any more than 1 - 3 calls to the Web Service per second (generally speaking) can cause blocking, even if the Web Service has the ability to respond quickly. 1 - 3 calls per second might not seem like much, but if using this approach in an audit Trigger scenario on multiple tables, it becomes quite easy to reach this limit. So you need to increase the limit to something much higher than 2, and do so per each call, since the setting is stored in the App Domain, which sometimes gets unloaded due to memory pressure (see the sketch below).
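A minimal sketch of raising that limit per call; the URL, timeout, and limit values are illustrative assumptions, not taken from the question:

    // Raise the per-URI connection limit before making the call, since the
    // default of 2 can cause blocking under concurrent Trigger activity.
    using System.IO;
    using System.Net;

    static class WebCaller
    {
        static string CallService(string url)
        {
            var request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "GET";
            request.Timeout = 5000; // fail fast rather than holding the transaction open

            // Set per call: the App Domain (and this setting with it) can be
            // unloaded under memory pressure.
            request.ServicePoint.ConnectionLimit = 50;

            using (var response = (HttpWebResponse)request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                return reader.ReadToEnd();
            }
        }
    }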
For more info and considerations, please see my related answers to similar questions here on S.O.:
SQL Server 2012 make HTTP 'GET' Request from a stored procedure
SQL CLR awaitable not getting executed
SQL CLR for Event Driven Communication
Logging not Persisting When Exception Occurs in Method Executed in a Trigger
Also, this S.O. question is very similar in terms of wanting to get near real-time notification of DML changes, and might apply to your goal:
SqlDependency vs SQLCLR call to WebService
I have two apps. One inserts into an Azure SQL DB and the other reads. I want the second app to cache query results and invalidate the cache only when something changes in the table / query results. In standalone SQL Server this was possible via the SqlDependency (or SqlCacheDependency) mechanism. As far as I understood, in Azure SQL this mechanism is unavailable: it requires the Service Broker component to be enabled, and there's no such component in Azure SQL.
I apologize if I repeat already-asked questions, but all the answers come from 2012 or so. Were there any changes? It's 2017.
And the question is: what is the mechanism to inform an application (say, ASP.NET) about changes in Azure SQL?
PS: I know there's the related feature "Change Tracking", but that is about inserting records about changes into a special table. That is "within" the database. I need to inform an app outside of the DB.
To my understanding, SqlDependency works by using a DependencyListener, which is an implementation of RepositoryListener and relies on Service Broker; as you stated, Azure SQL does not support Service Broker. But you could use the PollingListener implementation of RepositoryListener to verify a change.
"The PollingListener will run until cancelled and will simply compare the result of the query against until change is detected. Once it is detected, a callback method will be called"
(Source 1)
(Source 2)
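A hand-rolled sketch of that polling idea (not the library's actual PollingListener class; the connection string, probe query, and interval are assumptions for illustration):

    // Poll a cheap change indicator and invoke a callback when it moves.
    using System;
    using System.Data.SqlClient;
    using System.Threading;
    using System.Threading.Tasks;

    class TablePoller
    {
        public static async Task PollAsync(string connectionString, Action onChanged, CancellationToken token)
        {
            // CHECKSUM_AGG over a hypothetical dbo.Orders table; a rowversion
            // column or MAX(updated_at) would work just as well.
            const string probe = "SELECT CHECKSUM_AGG(CHECKSUM(*)) FROM dbo.Orders";

            int? last = null;
            while (!token.IsCancellationRequested)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(probe, conn))
                {
                    await conn.OpenAsync(token);
                    object result = await cmd.ExecuteScalarAsync(token);
                    int? current = result == DBNull.Value ? (int?)null : (int?)result;

                    if (last.HasValue && current != last)
                        onChanged(); // invalidate the app-side cache here

                    last = current;
                }
                await Task.Delay(TimeSpan.FromSeconds(5), token);
            }
        }
    }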
We have a requirement for notifying external systems of changes in data in various tables in a SQL Server database. The choice of which data to monitor is somewhat under the control of the user (gets to choose from a list of what we support). The recipients of the notifications may be on a locally connected network (i.e., in the same data center) or they may be remote.
We currently handle this by application code within our data access layer that detects changes and queues notifications on a Service Broker queue which is monitored by a Windows service that performs the actual notification. Not quite real time but close enough.
This has proven to have some maintenance problems so we are looking at using one of the change detection mechanisms that are built into SQL Server. Unfortunately none of the ones I have looked at (I think I looked at them all) seem to fit very well:
Change Data Capture and Change Tracking: Major problem is that they require polling the captured information to determine changes that are to be passed on to recipients. I suspect that will introduce too much overhead.
Notification Services: Essentially uses SQL Server as a web server, which is a horrible waste of licenses. It also requires access through at least two firewalls in the network, which is unacceptable from a security perspective.
Query Notification: Seems the most likely candidate, but does not seem to lend itself particularly well to dynamically choosing the data elements to watch. The need to re-register the query after each notification is sent means that we would keep SQL Server busy managing the registrations.
Event Notification: Designed to notify on database or instance level events, not really applicable to data change detection.
About the best idea I have come up with is to use CDC and put insert triggers on the change data tables. The triggers would queue something to a Service Broker queue that would be handled by some other code to perform the notifications. This is essentially what we do now except using a SQL Server feature to do the change detection. I'm not even sure that you can add triggers to those tables but I thought I'd get feedback before spending a lot of time with a POC.
That seems like an awful roundabout way to get the job done. Is there something I have missed that will make the job easier or have I misinterpreted one of these features?
Thanks and I apologize for the length of this question.
Why don't you use update and insert triggers? A trigger can execute CLR code.
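A minimal sketch of such a trigger in SQLCLR, assuming a hypothetical dbo.Orders table with an OrderId column (the assembly would need EXTERNAL_ACCESS or UNSAFE permission to actually call out of the database, and the caveat above about the trigger's transaction staying open still applies):

    // SQLCLR trigger that reads the affected keys and hands them to a notifier.
    using System.Data.SqlClient;
    using Microsoft.SqlServer.Server;

    public class ChangeNotifyTriggers
    {
        [SqlTrigger(Name = "trg_Orders_Notify", Target = "dbo.Orders", Event = "FOR INSERT, UPDATE")]
        public static void NotifyOnChange()
        {
            SqlTriggerContext ctx = SqlContext.TriggerContext;
            if (ctx.TriggerAction != TriggerAction.Insert && ctx.TriggerAction != TriggerAction.Update)
                return;

            // The inserted pseudo-table is visible over the context connection.
            using (var conn = new SqlConnection("context connection=true"))
            using (var cmd = new SqlCommand("SELECT OrderId FROM INSERTED", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        int orderId = reader.GetInt32(0);
                        // Hand orderId to the external notifier (HTTP call, queue, etc.).
                        // Keep this fast: the trigger's transaction stays open until it returns.
                    }
                }
            }
        }
    }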
I have a web service that is used to manage files on a filesystem that are also tracked in a Microsoft SQL Server database. We have a .NET system service that watches for files that are added using the FileSystemWatcher class. When a file-added callback comes from FileSystemWatcher, metadata about the file is added to our database, and it works fairly well.
I've now come to a bit of a scalability problem. I'm adding large quantities of files to the filesystem in rapid succession, and this ends up hammering the database with file adds which results in locking up my web front-end.
I have yet to work on database scalability issues, so I'm trying to come up with mitigation tactics. I was thinking of perhaps caching file adds and only writing them to the database every five minutes or so, but I'm not sure how practical that is. This is data that needs to find its way into our database at some point anyway, so it's going to have to get hammered at some point. Maybe I could limit the number of file DB entries written per second to a certain amount, but then I risk having that amount be less than the rate at which files are added. How can I best tackle this?
Have you thought about using something like SQL Server Service Broker? That way you could push through tons of entries in a burst and it would level out the inserts into your database.
Basically you'd be pushing messages onto a queue which would then be consumed by a receiver stored procedure that would perform the insert for you. You could limit the maximum number of receivers executing to help with the responsiveness issues in your web interface.
There's a nice intro paper here. Although it's for 2005, not much has changed between 2005 and the newer versions of SQL Server.
You have a performance problem and you should approach it with a performance investigation methodology like Waits and Queues. Once you identify the actual problem, we can discuss solutions.
This is just a guess but, assuming the notification 'update metadata' code is a straightforward insert, the likely problem is that you're generating one transaction per notification. This results in commit flush waits; see Diagnosing Transaction Log Performance. Batch commit (aggregating multiple notifications before committing) is the canonical solution; a sketch follows.
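A minimal sketch of what batch commit could look like in the watcher service; the table, columns, and flush policy are assumptions, not the asker's schema:

    // Buffer file-added notifications and flush them in one transaction,
    // instead of paying one commit (one log flush) per file.
    using System.Collections.Concurrent;
    using System.Data.SqlClient;

    class FileMetadataBatcher
    {
        private readonly ConcurrentQueue<string> _pending = new ConcurrentQueue<string>();
        private readonly string _connectionString;

        public FileMetadataBatcher(string connectionString)
        {
            _connectionString = connectionString;
        }

        // Called from the FileSystemWatcher callback; no DB work here.
        public void Enqueue(string path) => _pending.Enqueue(path);

        // Called from a timer or background loop, e.g. every few seconds or every N files.
        public void Flush()
        {
            using (var conn = new SqlConnection(_connectionString))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    string path;
                    while (_pending.TryDequeue(out path))
                    {
                        using (var cmd = new SqlCommand(
                            "INSERT INTO dbo.FileMetadata (Path, AddedAt) VALUES (@p, SYSUTCDATETIME())",
                            conn, tx))
                        {
                            cmd.Parameters.AddWithValue("@p", path);
                            cmd.ExecuteNonQuery();
                        }
                    }
                    tx.Commit(); // one commit for the whole batch
                }
            }
        }
    }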
A first option is using caching to handle high-volume data, or using clusters for analyzing high-volume data.
I'm developing a client-server app using WCF and Linq2Sql. My server-side program exposes to the client an interface that provides methods for reading from and writing to my SQL Server DB.
But when the client writes some data into the DB, perhaps waits some time, and then tries to read that data from the DB, it seems like no data has been written to the DB. However, if I restart my server-side app, or detach and reattach the DB, or restart the SQL Server service, then my client-side program can get that data from the server-side program.
Does anyone have any idea what's wrong with my app (server?) and how to fix this?
UPDATE: I'm using Linq2Sql (calling CataContext.SubmitChanges()).
UPDATE 2: I've discovered that if I add some new rows to my table, everything is correct, but when I update some pieces of a row (some properties of objects) and then save changes, the changes become visible only after reconnecting to the DB. It appears not to flush the data immediately after updating some properties and invoking DataContext.SubmitChanges().
I don't have an answer, but here are some ideas for how to further track down the issue.
How do you write to the DB? Do you use transactions that maybe remain open? Can you query the updates in the database when they don't show up in your WCF response? Does your update maintain locks and somehow not release them? Did you eliminate caching as the cause?
Try remote-debugging to find out what happens on the server. A WCF trace might be helpful, too.
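If DataContext caching does turn out to be the cause: Linq2Sql keeps an identity map, so a long-lived context can keep handing back the objects it materialized earlier instead of re-reading updated rows. A minimal sketch of using a fresh, short-lived context per read (the entity, table, and key are placeholders; in the question's code the designer-generated CataContext would play the role of DataContext):

    using System.Data.Linq;
    using System.Data.Linq.Mapping;
    using System.Linq;

    // Placeholder entity; a real app would use its designer-generated classes.
    [Table(Name = "dbo.Orders")]
    public class Order
    {
        [Column(IsPrimaryKey = true)]
        public int OrderId { get; set; }

        [Column]
        public string Status { get; set; }
    }

    public class OrderReadService
    {
        private readonly string _connectionString;

        public OrderReadService(string connectionString)
        {
            _connectionString = connectionString;
        }

        public Order GetOrder(int id)
        {
            // A new context per call avoids stale identity-map hits from earlier reads.
            using (var db = new DataContext(_connectionString))
            {
                return db.GetTable<Order>().SingleOrDefault(o => o.OrderId == id);
            }
        }
    }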