We currently run scheduled overnight jobs to sync heavily calculated data into flat tables for use in reports. These processes can take anywhere between 5 minutes and 2 hours per database, depending on the size. This has been working fine for a long time.
The need we now have is to try to keep the data as up-to-date as possible with our current setup.
I wrote a sync routine that can sync a specific user's data as it gets modified.
The short version: a trigger inserts the user ids into a holding table when their records get modified, and a job that runs every 10 seconds checks that table; if a user id is found it fires the sync for that user, updates the flat table, and then flags the record as complete. This can take anywhere from near-instant to about a minute, depending again on how much data it needs to calculate. Far better than the 24 hours it used to be.
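To make that concrete, here is a stripped-down sketch of the pattern (table, column, and procedure names are placeholders rather than our real ones):

-- Holding table the trigger writes to (placeholder names throughout).
CREATE TABLE dbo.SyncQueue
(
    UserId    INT      NOT NULL,
    QueuedAt  DATETIME NOT NULL DEFAULT (GETDATE()),
    Completed BIT      NOT NULL DEFAULT (0)
);
GO

-- Trigger on the source table: record which users changed.
-- (Duplicates are tolerated here; the job simply flags them complete.)
CREATE TRIGGER trg_UserData_Sync ON dbo.UserData
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.SyncQueue (UserId)
    SELECT UserId FROM inserted
    UNION
    SELECT UserId FROM deleted;
END
GO

-- Roughly what the 10-second job does: pick up a pending user, sync, flag complete.
DECLARE @UserId INT;
SELECT TOP (1) @UserId = UserId
FROM dbo.SyncQueue
WHERE Completed = 0
ORDER BY QueuedAt;

IF @UserId IS NOT NULL
BEGIN
    EXEC dbo.usp_SyncUserFlatTable @UserId = @UserId; -- hypothetical per-user sync proc
    UPDATE dbo.SyncQueue SET Completed = 1 WHERE UserId = @UserId;
END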
Now onto the 'problem'.
We have upwards of 30 databases that all need to be checked.
Currently, our overnight jobs have a step for each database and run through it in turn.
The problem I can foresee is that if customer 1 has a lot of users syncing, the job will wait to finish that database before moving on to customer 2's, and so on; by the time you get to customer 30 it could be a relatively long wait before their sync even begins, regardless of how quick the actual sync is. For example, say 2 customers enter data at the same time: as far as customer 1 is concerned their sync happened in seconds, whereas customer 30's took 30 minutes before their data updated, even though the sync routine itself only took seconds to complete, because it had to wait for 29 other databases to finish their work first.
I have had the idea of changing how we do our scheduled jobs here and creating a job for each database/customer. That way the sync checks will run concurrently across all databases, and no customer at the end of the queue will be waiting for other customers' syncs to finish before theirs starts.
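For what it's worth, scripting one of those per-customer jobs would look roughly like this (job, database, and procedure names below are placeholders):

-- Rough sketch of one per-customer sync job (all names are placeholders).
USE msdb;
GO
EXEC dbo.sp_add_job         @job_name = N'Sync - Customer01';
EXEC dbo.sp_add_jobstep     @job_name = N'Sync - Customer01',
                            @step_name = N'Process holding table',
                            @subsystem = N'TSQL',
                            @database_name = N'Customer01',
                            @command = N'EXEC dbo.usp_ProcessSyncQueue;'; -- hypothetical proc
EXEC dbo.sp_add_jobschedule @job_name = N'Sync - Customer01',
                            @name = N'Every 10 seconds',
                            @freq_type = 4,               -- daily
                            @freq_interval = 1,
                            @freq_subday_type = 2,        -- units of seconds
                            @freq_subday_interval = 10;   -- every 10 seconds
EXEC dbo.sp_add_jobserver   @job_name = N'Sync - Customer01';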
Is this a viable solution? Is it a bad idea to have 30 jobs checking 30 databases every 10 seconds? Is there another, better option I haven't considered?
Our servers are running Microsoft SQL Server Web and Standard editions currently, though I believe we may be upgrading to Enterprise at some point (in case the edition makes a difference to my options here).
I am creating a medical database system and I have come to the point where I am trying to create a notification feature. I will use SQL jobs for it: the SQL job's responsibility is to check some tables, and the entities it finds that need to be notified of a change in certain data will have their ids put into an entity called Notification; a trigger will then be fired so the app can check that table and send the notification.
What I want to ask is: how many SQL jobs can a SQL Server instance handle?
Does the number of SQL jobs running in the background affect the performance of my application or the database in one way or another?
NOTE: the SQL job will run every 10 seconds
I couldn't find any useful information online.
thanks in advance.
This question really doesn't have enough background to get a definitive answer. What are the considerations?
Do the queries in your ten-second job actually complete in ten seconds, even when your DBMS is under its peak transactional workload? Obviously, if the job routinely doesn't complete in ten seconds, you'll get jobs piling up.
Do the queries in your job lock up tables and/or indexes so the transactional load can't run efficiently? (You should use SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED; as much as you can so database reads won't take locks unnecessarily.)
Do the queries in your job do a lot of rows' worth of inserts and updates, and so swamp the SQL Server transaction logs?
How big is your server? (CPU cores? RAM? IO capacity?) How big is your database?
If your project succeeds and you get many users, will your answers to the above questions remain the same? (Hint: no.)
You should spend some time on the execution plans for the queries in your job, and try to make them as efficient as possible. Add the necessary indexes. If necessary refactor the queries to make them more efficient. SSMS will show you the execution plans and suggest appropriate indexes.
If your job is doing things like deleting expired rows, you may want to build the expiration into your data model. For example, suppose your job does
DELETE FROM readings WHERE expiration_date <= GETDATE()
and your application does this, relying on your job to keep expired readings out of the results.
SELECT something FROM readings
You can refactor your application query to say
SELECT something FROM readings WHERE expiration_date > GETDATE()
and then run your job overnight, at a quiet time, rather than every ten seconds.
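If you do go this route, an index on the expiration column keeps both the overnight DELETE and the application's filter cheap (the table and column names here are just the ones from the hypothetical example above):

CREATE INDEX IX_readings_expiration_date ON readings (expiration_date);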
A ten-second job is not the greatest idea in the world. If you can rework your application so it will function correctly with a ten-second, ten-minute, or twelve-hour job, you'll have a more resilient production system. At any rate if something goes wrong with the job when your system is very busy you'll have more than ten seconds to fix it.
Not a technical question as such, but I do wonder how other people deal with schedules in SQL Server Agent.
Personally, I like to create a bunch of schedules and then reuse them for various jobs.
As an example, I like to name my schedules in the following manner:
Daily - Every 15 seconds midnight to midnight
Daily - Every 15 seconds between 03:20 and 23:55
Daily - Every 22 minutes from midnight to midnight
Daily - Every 3 hours - starting at 03:30 to 23:59:59
Daily - Every 30 seconds from 00:00:05
It's not that stringent, but it helps me to understand the schedules a little better.
And then, I like to associate my jobs with existing schedules (the rule in the team is: DO NOT modify schedules).
So I do end up with a number of schedules that are linked to numerous jobs.
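In case it helps, this is roughly what that reuse looks like when scripted rather than done through SSMS (the schedule and job names below are just examples, using the standard msdb Agent procedures):

-- Create one shared schedule...
USE msdb;
GO
EXEC dbo.sp_add_schedule
    @schedule_name        = N'Daily - Every 22 minutes from midnight to midnight',
    @freq_type            = 4,        -- daily
    @freq_interval        = 1,
    @freq_subday_type     = 4,        -- units of minutes
    @freq_subday_interval = 22,
    @active_start_time    = 000000;   -- 00:00:00

-- ...and attach it to as many jobs as needed (example job names).
EXEC dbo.sp_attach_schedule @job_name = N'Purge staging tables',
                            @schedule_name = N'Daily - Every 22 minutes from midnight to midnight';
EXEC dbo.sp_attach_schedule @job_name = N'Refresh report cache',
                            @schedule_name = N'Daily - Every 22 minutes from midnight to midnight';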
While discussing this with a colleague and wondering what the correct, most efficient approach to this is, we tested things a little.
One behaviour I thought was surprising is as follows:
create a new job
during that process, create a new schedule and associate it to that job
run the job once (mine was "SELECT 1 as one") to be sure to be sure
then drop that SQLAgent job
I would have expected the schedule to remain, but as it turns out the newly created schedule is also dropped!
How do others feel about that? Is this correct behaviour?
And do you prefer to create a new schedule for every new job you create, or re-use schedules?
Hoping to hear interesting views,
Kindest,
B
I'm not sure this is entirely opinion based, but I also won't be surprised if it gets flagged as such.
In the shop where I had the most sway over job scheduling, we opted to create new schedules for each job. Sometimes just one, sometimes several. That way, if we were experiencing high load on any given server, we could tweak individual job schedules to spread out the load without impacting multiple jobs. If, for instance, one of six jobs with the same start time started to process exponentially more data, that one job could easily be shifted by X minutes so it wasn't competing for resources, or causing contention.
I'm currently on a POS project. Users require that this application work both online and offline, which means they need a local database. I decided to use SQL Server replication between each shop and head office. Each shop needs to install SQL Server Express and head office already has SQL Server Enterprise Edition. Replication will run every 30 minutes on a schedule, and I chose merge replication because data can change at both the shop and head office.
While doing the POC, I found this solution does not work properly: sometimes the job errors and I need to re-initialize it. It also takes a very long time, which is obviously unacceptable to the users.
I want to know, are there any solutions better than one that I'm doing now?
Update 1:
Constraints of the system are:
Almost all transactions can occur at both the shop and head office.
Some transactions need to work in real-time mode; that is, after a user saves data at their local shop, that data should also be updated at head office (if they're currently online).
Users can keep working even when their shop is disconnected from the head office database.
We estimate the amount of data at 2,000 rows at most each day.
Windows Server 2003 is the OS of the server at head office and Windows XP is the OS of all clients.
Update 2:
Currently there are about 15 clients, but this number will grow at a fairly slow rate.
The data size is about 100 to 200 rows per replication; I think it is no more than 5 MB.
Clients connect to the server over a leased-line connection at 128 kbps.
I'm in a situation where replication takes a very long time (about 55 minutes, when we only have 5 minutes or so), and most of the time I need to re-initialize the job to get it replicating again; if I don't re-initialize it, it can't replicate at all. In my POC I found that it always takes a very long time to replicate after a re-initialize, and the amount of time doesn't depend on the amount of data. By the way, re-initializing is the only solution I've found that works for my problem.
Given the above, I conclude that replication may not be suitable for my problem, and I think there may be another, better solution that can serve what I need in Update 1.
Sounds like you may need to roll your own bi-directional replication engine.
Part of the reason things take so long is that over such a narrow link (128 kbps), the two databases have to be made consistent (so they need to check all rows) before replication can start. As you can imagine, this can (and does) take a long time. Even the 5 MB you mention is around 40 megabits, which would take over five minutes to transfer over this link.
When writing your own engine, you need to decide what gets replicated (using timestamps to track when items changed), figure out conflict resolution (what happens if the same record changed in both places between replication runs), and more. This is not easy.
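To give a feel for the timestamp side only (conflict resolution is deliberately left out, and every object name here is made up), a rough sketch:

-- Each replicated table carries a last-modified timestamp (maintained by the
-- application or a trigger; the DEFAULT below only covers inserts).
ALTER TABLE dbo.SaleTransactions
    ADD last_modified DATETIME NOT NULL
        CONSTRAINT DF_SaleTransactions_last_modified DEFAULT (GETUTCDATE());

-- The sync engine keeps a watermark per table and only moves newer rows.
DECLARE @last_sync DATETIME;
SELECT @last_sync = last_sync_utc
FROM dbo.SyncWatermark
WHERE table_name = 'SaleTransactions';

SELECT *                              -- rows to ship to the other side
FROM dbo.SaleTransactions
WHERE last_modified > @last_sync;

-- Only after the transfer (and conflict handling) succeeds, advance the watermark.
UPDATE dbo.SyncWatermark
SET last_sync_utc = GETUTCDATE()
WHERE table_name = 'SaleTransactions';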
My suggestion is to use MS Access locally and keep pushing data to the server at a certain interval. Add an updated column to every table. When a record is added or updated, set the updated column. For deletions you need a separate table where you record the primary key value and table name. When synchronizing, fetch all local records whose updated flag is set, apply them (modify or insert) to the central server, and then clear the flag. Replay deletions using the local deleted table and you are done!
I assume that your central server is only for collecting data.
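Expressed in T-SQL terms (even though the local store could be Access), the push might look roughly like this; CENTRAL is a hypothetical linked server and all table and column names are made up:

-- Push changed rows to head office: update existing, insert missing.
UPDATE c
SET    c.Quantity = l.Quantity, c.Amount = l.Amount
FROM   CENTRAL.HeadOffice.dbo.Orders AS c
JOIN   dbo.Orders AS l ON l.OrderId = c.OrderId
WHERE  l.updated = 1;

INSERT INTO CENTRAL.HeadOffice.dbo.Orders (OrderId, Quantity, Amount)
SELECT l.OrderId, l.Quantity, l.Amount
FROM   dbo.Orders AS l
WHERE  l.updated = 1
  AND  NOT EXISTS (SELECT 1 FROM CENTRAL.HeadOffice.dbo.Orders AS c
                   WHERE c.OrderId = l.OrderId);

UPDATE dbo.Orders SET updated = 0 WHERE updated = 1;

-- Replay deletions recorded in the local deleted-rows table, then clear it.
-- (Simplified: assumes pk_value is stored in a form comparable to OrderId.)
DELETE c
FROM   CENTRAL.HeadOffice.dbo.Orders AS c
JOIN   dbo.DeletedRows AS d
       ON d.table_name = 'Orders' AND d.pk_value = c.OrderId;

DELETE FROM dbo.DeletedRows WHERE table_name = 'Orders';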
I currently do exactly what you describe using SQL Server Merge Replication configured for Web Synchronization. I have my agents run on a 1-minute schedule and have had success.
What kind of error messages are you seeing?
I have two SQL Server 2005 instances that are geographically separated. Important databases are replicated from the primary location to the secondary using transactional replication.
I'm looking for a way that I can monitor this replication and be alerted immediately if it fails.
We've had occasions in the past where the network connection between the two instances has gone down for a period of time. Because replication couldn't occur and we didn't know, the transaction log blew out and filled the disk causing an outage on the primary database as well.
My Google searching some time ago led to us monitoring the MSrepl_errors table and alerting when there were any entries, but this simply doesn't work. The last time replication failed (last night, hence the question), errors only hit that table when it was restarted.
Does anyone else monitor replication and how do you do it?
Just a little bit of extra information:
It seems that last night the problem was that the Log Reader Agent died and didn't start up again. I believe this agent is responsible for reading the transaction log and putting records in the distribution database so they can be replicated on the secondary site.
As this agent runs inside SQL Server, we can't simply make sure a process is running in Windows.
We have emails sent to us for Merge Replication failures. I have not used Transactional Replication but I imagine you can set up similar alerts.
The easiest way is to set it up through Replication Monitor.
Go to Replication Monitor and select a particular publication. Then select the Warnings and Agents tab and then configure the particular alert you want to use. In our case it is Replication: Agent Failure.
For this alert, we have the Response set up to Execute a Job that sends an email. The job can also do some work to include details of what failed, etc.
This works well enough for alerting us to the problem so that we can fix it right away.
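For reference, the job that the alert response executes is essentially just a Database Mail call; a minimal sketch might look like this (the profile name, recipient, and error query are examples, not a prescription):

-- T-SQL job step run by the 'Replication: Agent Failure' alert response.
EXEC msdb.dbo.sp_send_dbmail
    @profile_name = N'DBA Mail',               -- an existing Database Mail profile
    @recipients   = N'dba-team@example.com',
    @subject      = N'Replication agent failure',
    @body         = N'A replication agent failed. Check Replication Monitor for details.',
    @query        = N'SELECT TOP (20) time, error_text
                      FROM distribution.dbo.MSrepl_errors
                      ORDER BY time DESC';     -- recent errors, if any were logged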
You could run a regular check that data changes are taking place, though this could be complex depending on your application.
If you have some form of audit trail table that is very regularly updated (e.g. our main product has a base audit table that lists all actions that result in data being updated or deleted) then you could query that table on both servers and make sure the result you get back is the same. Something like:
SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*))
FROM audit_base
WHERE action_timestamp BETWEEN @time1 AND @time2
where @time1 and @time2 are round values chosen to allow for different delays in contacting the databases. For instance, if you are checking at ten past the hour you might check items from the start of the last hour to the start of this hour. You now have two small values that you can transmit somewhere and compare. If they are different then something has most likely gone wrong in the replication process - have whatever process does the check/comparison send you an email and an SMS so you know to check and fix any problem that needs attention.
By using SELECT CHECKSUM_AGG(BINARY_CHECKSUM(*)) the amount of data returned for each table is very small, so the bandwidth used by the checks will be insignificant. You just need to make sure your checks are not too expensive in the load they apply to the servers, and that you don't check data that might be part of open replication transactions and so might legitimately differ at that moment (hence checking the audit trail a few minutes back in time instead of right now in my example), otherwise you'll get too many false alarms.
Depending on your database structure the above might be impractical. For tables that are not insert-only within the timeframe of your check (unlike the audit trail above, which sees no updates or deletes), working out what can safely be compared while avoiding false alarms is likely to be both complex and expensive, if not actually impossible to do reliably.
You could manufacture a rolling insert-only table if you do not already have one, by having a small table (containing just an indexed timestamp column) to which you add one row regularly - this data serves no purpose other than to exist so you can check that inserts into the table are getting replicated. You can delete data older than your checking window, so the table shouldn't grow large. Testing only this one table does not prove that the other tables are replicating correctly, but finding an error in it would be a good "canary" check (if this table isn't updating in the replica, then the others probably aren't either).
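A minimal sketch of such a canary table and the statements around it (names are placeholders):

-- Tiny insert-only table whose sole purpose is to be replicated.
CREATE TABLE dbo.ReplicationCanary
(
    inserted_at DATETIME NOT NULL CONSTRAINT PK_ReplicationCanary PRIMARY KEY
);

-- A small job on the publisher adds a row regularly...
INSERT INTO dbo.ReplicationCanary (inserted_at) VALUES (GETUTCDATE());

-- ...and trims anything older than the checking window so the table stays small.
DELETE FROM dbo.ReplicationCanary
WHERE inserted_at < DATEADD(HOUR, -2, GETUTCDATE());

-- The monitor then checks, on the subscriber, that a recent row has arrived.
SELECT MAX(inserted_at) AS last_replicated_row
FROM dbo.ReplicationCanary;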
This sort of check has the advantage of being independent of the replication process - you are not waiting for the replication process to record exceptions in logs, you are instead proactively testing some of the actual data.
I have merge replication set up on a CRM system. Sales reps' data merges when they connect to the network (I think when SQL Server detects the notebooks are connected), and then they take the laptops away and merge again when they come back (there are about 6 laptops in total merging via 1 server).
This system seems fine when initially set up, but it almost grinds to a halt after about a month passes, with the merge job taking nearly 2 hours to run per user, even though the server is not struggling in any way.
If I delete the whole publication and recreate all the subscriptions it seems to work fine until about another month passes, then I am back to the same problem.
The database is poorly designed with a lack of primary keys/indexes etc, but the largest table only has about 3000 rows in it.
Does anyone know why this might be happening and if there is a risk of losing data when deleting and recreating the publication?
The problem was the metadata created by SQL Server replication: there is an overnight job that empties and refills a 3,000-row table, which causes replication to replicate all of those rows each day.
The subscriptions were set to never expire, which means the old metadata was never being deleted by SQL Server.
I have set the subscription period to 7 days now, in the hope that it will clean up the metadata after this period. I did some testing and proved that changes were not lost if a subscription expired, but any updates on the server took priority over the client.
I encountered the message "Waiting 60 second(s) before polling for further changes" recently on 2008 R2.
Replication Monitor showed the replication as "In progress", but only step 1 (Initialization) and step 2 (Schema changes and bulk inserts) were performed.
I was very puzzled as to why the other steps were not executed.
The reason was simple: it seems that merge replication requires the TCP/IP (and/or, I'm not sure, Named Pipes) protocol to be enabled.
No errors were reported.
Probably a similar problem (some sort of connection problem) became apparent in Ryan Stephens' case.