Below is my code:
SqlConnection.ClearPool(serverConn);
serverConn = new SqlConnection(Utility.ConnStr_SqlSync_Server);
SqlSyncStoreRestore databaseRestore = new SqlSyncStoreRestore(serverConn);
databaseRestore.CommandTimeout = 1000; // database is about 300 GB
databaseRestore.PerformPostRestoreFixup(); //line #31
Error message:
Unhandled Exception: Microsoft.Synchronization.Data.DbNotProvisionedException: The current operation could not be completed because the database is not provisioned for sync or you not have permissions to the sync configuration tables.
at Microsoft.Synchronization.Data.SqlServer.SqlManagementUtils.VerifyRuntimeAndSchemaVersionsMatch(SqlConnection connection, SqlTransaction trans, String objectPrefix, String objectSchema, Boolean throwWhenNotProvisioned)
at Microsoft.Synchronization.Data.SqlServer.SqlSyncStoreRestore.PerformPostRestoreFixup()
at FixSyncEnabledDbAfterBackup.Program.Main(String[] args) ..\Visual Studio 2010\Projects\FixSyncEnabledDbAfterBackup\FixSyncEnabledDbAfterBackup\Program.cs:line 31
And my question:
How do I tell whether this is a permissions problem or a provisioning problem?
I ran this code on two analogous systems with the same result. One system syncs perfectly, while the other logs the following warning in the trace log and stops:
System.ArgumentException: SQL Server Change Tracking is not enabled for table 'Users'
Change tracking is enabled; I have checked. With the PerformPostRestoreFixup() method I hope to recover the second system's database after it was switched to the simple recovery model and back to full (the probable cause of the sync problem, I think).
It was not a matter of enabling change tracking for the table, as the error message suggests. Instead, the View Change Tracking permission had to be granted to the account under which the sync process runs. This can be checked in the table's properties in SSMS.
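For example, a grant along these lines (a minimal sketch; dbo.Users is the table from the warning above, and sync_user is a hypothetical login for the process that syncs the data):

GRANT VIEW CHANGE TRACKING ON OBJECT::dbo.Users TO [sync_user];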
I have a website and on that website, certain reports are run using SQL via the PSQL ODBC Source. Any report where the SQL takes longer than 30 seconds to run seems to time out!
I have tried amending the statement_timeout settings within PostgreSQL and this hasn't helped. I have also attempted to set the statement timeout within the ODBC Data Source Administrator and that did not help either.
Can someone please advise as to why the statement timeouts would only affect the web sessions and nothing else?
The error shown in the web logs and PostgreSQL database logs is below:
16:33:16.06 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
I don't think the issue is the statement timeout setting in the PostgreSQL config file itself. The reason I say this is that I don't get the 30-second statement timeout when running queries in PGAdmin; I only get the timeout when I'm running reports on my website. The website connects to the PostgreSQL database via a PSQLODBC driver.
Nonetheless, I did try setting the statement timeout to 90 seconds within the PostgreSQL config file, and this change had no impact on the reports; they were still timing out after 30 seconds (the change was applied, as show statement_timeout; did show the updated value). Secondly, I tried editing the Connect Settings via the ODBC Data Source Administrator's Data Source Options, and I set the statement timeout there using the following command: SET STATEMENT_TIMEOUT TO 90000
The change was applied but had no impact again and the statement still kept timing out.
I have also tried altering the user statement timeout via running a query similar to the below:
Alter User "MM" set statement_timeout = 90000;
This again had no impact. The web session logs show the following:
21:36:32.46 Report SQL Failed: ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
21:36:32.46 System.Data.Odbc.OdbcException (0x80131937): ERROR [57014] ERROR: canceling statement due to statement timeout;
Error while executing the query
at System.Data.Odbc.OdbcConnection.HandleError(OdbcHandle hrHandle, RetCode retcode)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader, Object[] methodArguments, SQL_API odbcApiMethod)
at System.Data.Odbc.OdbcCommand.ExecuteReaderObject(CommandBehavior behavior, String method, Boolean needReader)
at System.Data.Odbc.OdbcCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.Odbc.OdbcCommand.ExecuteReader()
at Path.PathWebPage.SQLtoJSON(String strSQL, Boolean fColumns, Boolean fFields, String strODBC, Boolean fHasX, Boolean fHasY, Boolean fIsChart, String strUser, String strPass)
I have tried amending the statement_timeout settings within Postgres and this hasn't helped
Please show what you tried, and what error message (if any) you got when you tried it. Also, after changing the setting, did show statement_timeout; reflect the change you tried to make? What does select source from pg_settings where name = 'statement_timeout' show?
In the "stock" versions of PostgreSQL, a user is always able to change their session's setting for statement_timeout. Some custom compilations like those run by hosted providers might block such changes though (hopefully with a suitable error message).
There are several ways this could be getting set in a way that differs from the value specified in postgresql.conf. It could be set on a per-user basis or per-database basis using something like alter user "webuser" set statement_timeout=30000 (which will take effect next time that user logs on, or someone logs on to that database). It could be set in the connection string (although I don't know how that works in ODBC). Or if your app uses a centralized subroutine to establish the connection, that code could just execute a set statement_timeout=30000 SQL command on the connection before it hands it back to the caller.
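To see where the active value comes from, a check along these lines may help (pg_settings is a standard catalog view; its source column reports values such as 'user', 'database', 'session', or 'configuration file'):

SHOW statement_timeout;
SELECT name, setting, source
FROM pg_settings
WHERE name = 'statement_timeout';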
I am writing in C++ against a SQL Server database. I have an object called SQLTransaction that when created at the start of a code block, sends 'begin transaction' to SQL Server.
I then send one or more SQL statements to the server. If all goes well, I set a flag in the SQLTransaction object to let it know the set of commands went well. When the SQLTransaction object goes out of scope, it sends either 'commit transaction' or 'rollback transaction' to the server, depending on the state of the flag.
It looks something like this:
{
    TSQLTransaction SQLTran;    // constructor sends 'begin transaction';
                                // note: "TSQLTransaction SQLTran();" would
                                // declare a function, not create an object
    try
    {
        Send( SomeSQLCommand );
    }
    catch(EMSError &e)
    {
        InformOperator();
        return;                 // destructor sends 'rollback transaction'
    }
    SQLTran.commit();           // sets the flag; the destructor then sends
                                // 'commit transaction'
}
I had a SQL statement in one of these blocks that sent a bad command, and that command threw SQL error 8114:
Error converting data type varchar to numeric
I have since fixed that particular issue. What I don't understand is the fact that I was also receiving a second SQL error with the message
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
I can't find anything that tells me this transaction could not, or should not, be rolled back after the failure.
This exact same SQLTransaction object is used in many places in my application and always seemed to work correctly until now. This SQL error seems to be treated differently for some reason. Are there some errors that SQL Server automatically rolls back? I'd really like to understand what's happening here.
Thanks
There is a connection option, SET XACT_ABORT, that determines the fate of the current transaction(s) when a SQL statement throws an error. Basically, when it is set to OFF, the transaction (usually) survives and execution continues; when it is ON, all open transactions in the current connection are rolled back and the batch is terminated.
The option can be set:
at the connection level (note that different database access drivers may have different defaults for connection options);
at the SQL Server instance level, where a server-wide default value is configured.
Check whether any of these was changed recently. Also, if you capture a trace in SQL Profiler, the ExistingConnection event lists the current connection settings. You can always check the option's state there and rule it out if it's turned off. In that case, I would look closer at the trace; there might be additional commands sent to the server that aren't apparent from your client code.
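For illustration, a minimal sketch of the effect (run as two separate batches in SSMS; the failing CONVERT reproduces error 8114):

SET XACT_ABORT ON;
BEGIN TRANSACTION;
SELECT CONVERT(numeric(10, 2), 'abc'); -- error 8114: with XACT_ABORT ON the
                                       -- batch aborts and the transaction is
                                       -- rolled back automatically
GO
SELECT @@TRANCOUNT;   -- 0: no transaction remains open
ROLLBACK TRANSACTION; -- error 3903: The ROLLBACK TRANSACTION request has no
                      -- corresponding BEGIN TRANSACTION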
I am just starting to experiment with SQL Server Alerts. I set up an Alert on Errors/sec with a threshold of zero, thinking that I would get an email any time an error was written to the log. I got a lot of emails! I raised the threshold to notify me when it was over one per second, and I still get quite a few emails from time to time.
As an example, I get emails that contains something like this:
DESCRIPTION: The SQL Server performance counter 'Errors/sec' (instance '_Total') of object 'MyServerName:SQL Errors' is now above the threshold of 1.00 (the current value is 4.45).
Here is the command for the alert I am using:
EXEC msdb.dbo.sp_add_alert @name=N'SQL Errors',
    @message_id=0,
    @severity=0,
    @enabled=1,
    @delay_between_responses=0,
    @include_event_description_in=1,
    @notification_message=N'Check the ERRORLOG for details.',
    @category_name=N'[Uncategorized]',
    @performance_condition=N'MyServerName:SQL Errors|Errors/sec|_Total|>|0',
    @job_id=N'00000000-0000-0000-0000-000000000000'
When I look at the log, I don't find any errors. I do find informational messages (a backup completed, etc.) though. Is this alert really "Entries/sec" and not truly "Errors/sec" or am I looking in the wrong place (SSMS | Server | Management | SQL Server Logs | Current) for the actual errors?
Not all errors are logged. An insert might break a constraint, and an error will be raised to the client, but that doesn't mean it is written to the error log.
For example, if you execute the following T-SQL:
RAISERROR ('This is an error!', 16, 1) WITH LOG
the error will be written to the error log; omitting WITH LOG will just raise the error without logging it.
Errors have attributes you can filter on, such as message id and severity, but by the looks of things you are monitoring all of them. Severity might be what you need.
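For instance, a sketch of an alert that fires on severity 16 errors instead of the Errors/sec counter (the alert name here is illustrative):

EXEC msdb.dbo.sp_add_alert
    @name = N'Severity 16 errors',
    @severity = 16,
    @enabled = 1,
    @include_event_description_in = 1,
    @notification_message = N'Check the ERRORLOG for details.';

Note that severity-based alerts only fire for errors that are written to the Windows Application event log.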
I have read about, but never used, Alerts. Based on that, I believe the "SQL Alerts system" is driven by data written to the Windows Application event log, e.g.
Start
Programs
Administrative Tools
Event Viewer
and look under the "Application" entry. (Look at the rest of them while you're there, to see all those bugs and errors you never knew about. It's a lot like looking under a rock in the woods...)
I'm doing an integration on a community platform called Telligent. I'm using a 3rd-party add-on called BlogML to import blog posts from an XML file (in BlogML format) into my local Telligent site. The Telligent platform comes with many classes in their SDK so that I can programmatically add content, such as blog posts. E.g.
myWeblogService.AddPost(myNewPostObject);
The BlogML app I'm using essentially parses the XML and creates blog post objects then adds them to the site using code like the above sample line. After about 40 post imports I get a SQL error:
Exception Details: System.Data.SqlClient.SqlException:
String or binary data would be truncated.
The statement has been terminated.
I believe this error means that I'm trying to insert too much data into a db field that has a max size limit. Unfortunately, I cannot tell which field this is an issue for. I ran the SQL Server Profiler while doing the import but I cannot seem to see what stored procedure the error is occurring on. Is there another way to use the profiler or another tool to see exactly what stored procedure and even what field the error is being caused by? Are there any other tips to get more information about where specifically to look?
Oh the joys of 3rd-party tools...
You are correct in that the exception is due to trying to stuff too much data into a character/binary based field. Running a trace should definitely allow you to see which procedure/statement is throwing the exception if you are capturing the correct events; those you'd want to capture include:
SQL:BatchStarting
SQL:BatchCompleted
SQL:StmtStarting
SQL:StmtCompleted
RPC:Starting
RPC:Completed
SP:Starting
SP:Completed
SP:StmtStarting
SP:StmtCompleted
Exception
If you know for certain it is a stored procedure that includes the faulty code, you could do away with capturing the first four (the SQL:Batch and SQL:Stmt events). Be sure you capture all associated columns in the trace as well (this should be the default if you are running a trace using the Profiler tool). The Exception class will include the actual error in your trace, which should allow you to see the immediately preceding statement within the same SPID that threw the exception. You must include the starting events in addition to the completed events, as an exception will preclude the associated completed events from firing in the trace.
If you can filter your trace to a particular database, application, host name, etc. that will certainly make it easier to debug if you are on a busy server, however if you are on an idle server you may not need to bother with the filtering.
Assuming you are using SQL Server 2005+, the trace will include a column called EventSequence, which is basically an incrementing value ordered by the sequence in which events fire. Once you run the trace and capture the output, find the Exception event that fired (in Profiler the row will be shown in red), then simply find the most recent SP:StmtStarting or SQL:StmtStarting event for the same SPID that occurred before the Exception.
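If you save the trace to a file, a query along these lines can do the same search outside the Profiler UI (fn_trace_gettable is a built-in function; the file path and SPID are illustrative):

SELECT EventSequence, SPID, TextData, ObjectName
FROM fn_trace_gettable(N'C:\Traces\import.trc', DEFAULT)
WHERE SPID = 53 -- the SPID from the Exception row
ORDER BY EventSequence;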
Here is a screen shot of a profile I captured reproducing an event similar to yours:
You can see the exception line in red, and the highlighted line is the immediately preceding SP:StmtStarting event that fired prior to the exception for the same SPID. If you want to find which stored procedure this statement is part of, look at the values in the ObjectName and/or ObjectID columns.
You can also get this error through a simple mistake. For example, if you are trying to insert a string like:
String reqName = "Food Non veg /n";
the trailing "/n" (presumably meant as the newline escape "\n") is the culprit. Remove the "/n" from the string to get rid of this error.
I hope this will help someone.
I have transactional replication running between two databases. I fear they have fallen slightly out of sync, but I don't know which records are affected. If I knew, I could fix it manually on the subscriber side.
SQL Server is giving me this message:
The row was not found at the Subscriber when applying the replicated command. (Source: MSSQLServer, Error number: 20598)
I've looked around to try to find out what table, or even better what record is causing the issue, but I can't find that information anywhere.
The most detailed data I've found so far is:
Transaction sequence number: 0x0003BB0E000001DF000600000000, Command ID: 1
But how do I find the table and row from that? Any ideas?
This gives you the table the error is against
use distribution
go
select * from dbo.MSarticles
where article_id in (
select article_id from MSrepl_commands
where xact_seqno = 0x0003BB0E000001DF000600000000)
And this will give you the command (and the primary key, i.e. the row, that the command was executing against)
exec sp_browsereplcmds
    @xact_seqno_start = '0x0003BB0E000001DF000600000000',
    @xact_seqno_end = '0x0003BB0E000001DF000600000000'
I'll answer my own question with a workaround I ended up using.
Unfortunately, I could not figure out which table was causing the issue through the SQL Server replication interface (or the Event Log for that matter). It just didn't say.
So the next thing I thought of was, "What if I could get replication to continue even though there is an error?" And lo and behold, there is a way. In fact, it's easy. There is a special Distribution Agent profile called "Continue on data consistency errors." If you enable that, then these types of errors will just be logged and passed on by. Once it is through applying the transactions and potentially logging the errors (I only encountered two), then you can go back and use RedGate SQL Data Compare (or some other tool) to compare your two databases, make any corrections to the subscriber and then start replication running again.
Keep in mind, for this to work, your publication database will need to be "quiet" during the part of the process where you diff and fix the subscriber database. Luckily, I had that luxury in this case.
If your database is not prohibitively large, I would stop replication, re-snapshot and then re-start replication. This technet article describes the steps.
If it got out of sync due to a user accidentally changing data on the replica, I would set the necessary permissions to prevent this.
This replication article is worth reading.
Use this query to find out the article that is out of sync:
USE [distribution]
SELECT * FROM dbo.MSarticles
WHERE article_id IN (SELECT article_id FROM MSrepl_commands
    WHERE xact_seqno = 0x0003BB0E000001DF000600000000)
Of course, if you check the error when the replication fails, it also tells you which record is at fault, so you could extract that data from the core system and just insert it on the subscriber.
This is better than skipping errors, as SQL Data Compare will lock the table for the comparison, and if you have millions of rows this can take a long time to run.
Tris
Changing the profile to "Continue on data consistency errors" won't always work. It suppresses the errors, but the rows that caused them are simply skipped, so you end up without the full, correct data.
The following checks resolved my problem:
Check that all the replication SQL Agent jobs are running, and start any that are not (see the sketch below for listing them).
In my case, a job had been stopped a few hours earlier when a DBA killed its session because of a blocking issue. After a very short time, all data in the subscription was updated and there were no further errors in Replication Monitor. In my case, none of the queries above returned anything.
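A sketch for listing the replication agent jobs and whether they are enabled (uses the standard msdb tables; replication jobs live in categories named REPL-*):

SELECT j.name, j.enabled
FROM msdb.dbo.sysjobs AS j
JOIN msdb.dbo.syscategories AS c ON c.category_id = j.category_id
WHERE c.name LIKE N'REPL-%';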
This error usually occurs when a particular record does not exist on the subscriber, and an update or delete command for that same record was executed on the primary server and replicated to the subscriber.
As the record does not exist on the subscriber, replication throws a "row not found" error.
To resolve this error and get replication back to its normal running state:
We can check with the following query whether the request at the publisher was an update or a delete statement:
USE [distribution]
SELECT *
FROM msrepl_commands
WHERE publisher_database_id = 1
AND command_id = 1
AND xact_seqno = 0x00099979000038D6000100000000
We can get the article_id from the above query and pass it to the proc below:
EXEC sp_browsereplcmds
    @article_id = 813,
    @command_id = 1,
    @xact_seqno_start = '0x00099979000038D60001',
    @xact_seqno_end = '0x00099979000038D60001',
    @publisher_database_id = 1
The output will tell you whether it was an update statement or a delete statement.
In case of a delete statement:
The record can be deleted directly from msrepl_commands, so that replication won't keep retrying it:
DELETE FROM msrepl_commands
WHERE publisher_database_id = 1
AND command_id =1
AND xact_seqno = 0x00099979000038D6000100000000
In case of an update statement:
You need to insert that record manually from the publisher DB into the subscriber DB:
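For example, run on the subscriber, the manual copy might look like the sketch below (the table, columns, key value, and the [PublisherServer] linked server name are all illustrative):

INSERT INTO dbo.Users (UserID, UserName)
SELECT UserID, UserName
FROM [PublisherServer].[PublisherDB].dbo.Users -- pull the row from the publisher
WHERE UserID = 12345; -- the primary key reported by sp_browsereplcmds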