We are hosting one of our solutions, originally built on SQL Server 2008 R2, on a SQL Server 2008 instance (not R2). The database was created fine, but for some reason the Service Broker queues were created with:
POISON_MESSAGE_HANDLING(STATUS = OFF)
I have tried setting this to ON, but with no luck. We have always declared the queue like this:
CREATE QUEUE QueueName WITH STATUS=ON, ACTIVATION
(STATUS = ON, MAX_QUEUE_READERS = 1,
PROCEDURE_NAME = QueueProcedureName, EXECUTE AS OWNER);
Is there a way to create the queue as above with the defaults of R2?
EDIT - More Info:
This is the error message, which makes no sense as it works fine on 2008 R2.
GO
ALTER QUEUE [Store].[UpdateStoredPublishingSegmentUsersSendQueue]
WITH POISON_MESSAGE_HANDLING(STATUS = ON);
Msg 102, Level 15, State 1, Line 2
Incorrect syntax near 'POISON_MESSAGE_HANDLING'.
This is an issue with the version of SQL Server. POISON_MESSAGE_HANDLING is not supported in versions earlier than 2008 R2. Hope this helps!
DISCLAIMER: I haven't tried the following commands, as I am running on 2005, which doesn't support the POISON_MESSAGE_HANDLING option.
Have you tried the ALTER QUEUE command after executing the CREATE?
ALTER QUEUE <queue name> WITH
POISON_MESSAGE_HANDLING ( STATUS = ON )
Alternatively, try modifying your CREATE command like this:
CREATE QUEUE <queue name> WITH
STATUS=ON,
ACTIVATION (
STATUS = ON,
MAX_QUEUE_READERS = 1,
PROCEDURE_NAME = <activated sproc name>,
EXECUTE AS OWNER
),
POISON_MESSAGE_HANDLING ( STATUS = ON );
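If the same deployment script must run on both plain 2008 and R2, one possible workaround (a sketch I have not verified on every build; connection_string is a placeholder, and the queue and procedure names come from the question) is to branch on the engine version and append the option only where it is supported:

import pyodbc

# Placeholder connection string; autocommit so the DDL takes effect immediately.
conn = pyodbc.connect(connection_string, autocommit=True)
cur = conn.cursor()

# SERVERPROPERTY('ProductVersion') returns e.g. '10.0.x' for 2008
# and '10.50.x' for 2008 R2.
version = cur.execute(
    "SELECT CAST(SERVERPROPERTY('ProductVersion') AS nvarchar(128))"
).fetchone()[0]
major, minor = (int(p) for p in version.split('.')[:2])

ddl = """CREATE QUEUE QueueName WITH STATUS = ON,
ACTIVATION (STATUS = ON, MAX_QUEUE_READERS = 1,
            PROCEDURE_NAME = QueueProcedureName, EXECUTE AS OWNER)"""
if (major, minor) >= (10, 50):  # 10.50 = SQL Server 2008 R2
    ddl += ",\nPOISON_MESSAGE_HANDLING (STATUS = ON)"
cur.execute(ddl)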
After I triggered and refreshed the DAG task, it went from running to delayed to failed. The Airflow error log told me to check the error from SQL Server, and when I checked the logs on my SQL Server Docker container I got "Failed to start system task System Task". I'm not sure if I need to specify a schema, but the rest of the connection params are correct.
[entrypoint.sh]
"${AIRFLOW_CONN_MY_SRC_DB:=mssql+pyodbc://SA:P#SSW0RD#mssqlcsc380:1433/?driver=ODBC+Driver+17+for+SQL+Server}"
[dag.py]
import datetime as dt

from airflow import DAG
from airflow.utils.dates import days_ago
# Airflow 2.x provider package; older installs use airflow.operators.mssql_operator
from airflow.providers.microsoft.mssql.operators.mssql import MsSqlOperator

with DAG(
    'mssql_380_dag',
    start_date=days_ago(1),
    schedule_interval=None,
    catchup=False,
    default_args={
        'owner': 'me',
        'retries': 1,
        'retry_delay': dt.timedelta(minutes=5)
    }
) as dag:
    get_requests = MsSqlOperator(
        task_id='get_requests',
        mssql_conn_id='my_src_db',
        sql='select * from Request',
        dag=dag
    )
The issue was just that it couldn't find the table, so I specified the database, which fixed it, even though the database should have been recognized since I passed it in the connection string.
sql = 'use csc380db; select * from Request',
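For what it's worth, the connection string in [entrypoint.sh] never actually names a database: with mssql+pyodbc the database belongs in the URI path, just before the ?. A hedged alternative to the USE statement (assuming the database is called csc380db) would be:

# Assumed connection string: same credentials and host as above, with the
# database name in the URI path so every query runs against it without USE.
# The '#' in the password is percent-encoded because '#' delimits a URI fragment.
AIRFLOW_CONN_MY_SRC_DB = (
    "mssql+pyodbc://SA:P%23SSW0RD@mssqlcsc380:1433/csc380db"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)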
What I am using
Ubuntu 16.04
Python 3.6
FreeTDS, TDS Version 7.3
SQLAlchemy 1.2.5
Windows Server 2012
SQL Server 2008 Enterprise
My purpose
I write code in Python on an Ubuntu machine to insert data and execute a stored procedure on MS SQL Server 2008. I create an order for a customer. An order may have many main ingredients and toppings. When the order is finished, I run a stored procedure to process the data into user_order and employee_order.
The stored procedure
In the stored procedure, when data is selected from the source tables and processed, the transaction is rolled back if any error occurs.
My code snippet
import time

from sqlalchemy.orm import sessionmaker

def process():
    engine = get_engine()  # my method to get an engine from a connection string
    session_maker = sessionmaker(bind=engine.execution_options(isolation_level='SERIALIZABLE'))
    session = session_maker()
    ref = 'REF0000001'
    try:
        # Create order
        order = Order(id=1, ref=ref)
        # Add main ingredients
        main1 = Main(order=1, name='coffee')
        main2 = Main(order=1, name='milk')
        # Toppings
        topup1 = TopUp(order=1, name='cookies')
        topup2 = TopUp(order=1, name='chocolate')
        session.add(order)
        session.flush()
        session.add_all([main1, main2])
        session.flush()
        session.add_all([topup1, topup2])
        session.flush()
        session.commit()
    except:
        session.rollback()
        raise
    finally:
        session.close()
        del session

    time.sleep(1)
    session = session_maker()
    session.execute('EXEC finish_order %a' % ref)
    session.commit()
    session.close()
    del session
And the result is:
There is no error, but there is no data in user_order and employee_order even though the stored procedure finish_order is run.
But if I run the stored procedure again as a simple query in a terminal or SQL Server Management Studio, the data is imported into the destination tables.
Doubts
Is there any chance that the data has not finished inserting into the origin tables by the time the stored procedure is called?
Please help me with this case.
Thank you!
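Not an authoritative fix, but one way to rule out both the timing doubt and the %a string formatting is to call the procedure with a bound parameter on a fresh session and commit explicitly. A minimal sketch, reusing session_maker and ref from the snippet above:

from sqlalchemy import text

session = session_maker()
try:
    # Bound parameter instead of 'EXEC finish_order %a' % ref, so the driver
    # handles quoting; the commit makes the procedure's writes durable.
    session.execute(text("EXEC finish_order :ref"), {'ref': ref})
    session.commit()
finally:
    session.close()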
A really simple doubt; I guess it is a bug, or something I got wrong.
I have a database in Azure on the Standard:S0 tier, currently 178 MB, and I want to make a copy (in a procedure run from master) with the resulting database in the Basic pricing tier.
I thought of:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic')
With an unhappy result:
The database is created with pricing tier Standard:S0.
Then I tried:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( SERVICE_OBJECTIVE = 'Basic' )
or
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
With an even unhappier result:
ERROR:: Msg 40808, Level 16, State 1, The edition 'Standard' does not support the service objective 'Basic'.
I also tried:
CREATE DATABASE MyDB_2 AS COPY OF MyDB ( MAXSIZE = 500 MB, EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic' )
with the unhappiest result:
ERROR:: Msg 102, Level 15, State 1, Incorrect syntax near 'MAXSIZE'.
Am I doing something that isn't allowed?
But if you copy your database via the portal, you'll notice that the Basic tier is not available, with the message 'A database can only be copied within the same tier as the original database.' The behavior is documented here: 'You can select the same server or a different server, its service tier and performance level, or a different performance level within the same service tier (edition). After the copy is complete, the copy becomes a fully functional, independent database. At this point, you can upgrade or downgrade it to any edition. The logins, users, and permissions can be managed independently.'
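So, per that documentation, the route is to copy within the same tier and downgrade afterwards. A sketch of that, not a verified script: master_connection_string is a placeholder, and the polling loop is an assumption about how to wait for the asynchronous copy.

import time

import pyodbc

# Connect to the destination server's master database; CREATE DATABASE
# cannot run inside a user transaction, hence autocommit=True.
conn = pyodbc.connect(master_connection_string, autocommit=True)
cur = conn.cursor()

# Step 1: copy within the same tier, the only copy the service allows.
cur.execute("CREATE DATABASE MyDB_2 AS COPY OF MyDB")

# Step 2: the copy is asynchronous; the new database stays in the COPYING
# state until it is ready, so wait for it to come ONLINE.
while cur.execute("SELECT state_desc FROM sys.databases WHERE name = 'MyDB_2'"
                  ).fetchone()[0] != 'ONLINE':
    time.sleep(10)

# Step 3: downgrade the finished, independent copy to Basic.
cur.execute("ALTER DATABASE MyDB_2 MODIFY (EDITION = 'Basic', SERVICE_OBJECTIVE = 'Basic')")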
We have recently upgraded from SQL Server 2005 to SQL Server 2008 (R2, SP1). This upgrade included some publications, where all tables are published with a default conflict resolver based on the "later wins" principle. Its smart name is 'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver', and the corresponding dll file is ssrmax.dll.
As you all know, once a table is published with a conflict resolver, the same conflict resolver must be used in all later publications using this table. Fair enough, but when adding previously published tables to new publications, and specifying the very same conflict resolver for the table, we get an error message:
use [myDb]
exec sp_addmergearticle
@publication = N'myDb_Pub',
@article = N'Tbl_blablabla',
@source_owner = N'dbo',
@source_object = N'Tbl_blablabla',
@type = N'table',
@description = N'',
@creation_script = N'',
@pre_creation_cmd = N'drop',
@schema_option = 0x000000000C034FD1,
@identityrangemanagementoption = N'none',
@destination_owner = N'dbo',
@force_reinit_subscription = 1,
@column_tracking = N'false',
@article_resolver = N'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
@subset_filterclause = N'',
@resolver_info = N'ddmaj',
@vertical_partition = N'false',
@verify_resolver_signature = 0,
@allow_interactive_resolver = N'false',
@fast_multicol_updateproc = N'true',
@check_permissions = 0,
@subscriber_upload_options = 0,
@delete_tracking = N'true',
@compensate_for_errors = N'false',
@stream_blob_columns = N'false',
@partition_options = 0
GO
And this is the error we get:
The article '...' already exists in another publication with a different article resolver.
While trying to understand how the same conflict resolver is not considered by the machine as 'the same conflict resolver', I discovered that there were two conflict resolvers with the same name but different versions in the registry:
the 2005 version:
file ssrmax.dll,
version 2005.90.4035.0,
cls_id D604B4B5-686B-4304-9613-C4F82B527B10
the 2008 version:
file ssrmax.dll,
version 2009.100.2500.0,
cls_id 77209412-47CF-49AF-A347-DCF7EE481277
And I checked that our 2008 server considers the second one to be the 'available custom resolver' (I got this by running sp_enumcustomresolvers). The problem is that both references are available in the registry, so I guess that old publications refer to the 2005 version, while new publications try to refer to the 2008 version, which is indeed different from the previous one.
So the question is: how can I have the server consider only one of these 2 versions, and this (of course) without having to drop and recreate the existing publications (which would turn our life into hell for the next 2 weeks).
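For reference, the sp_enumcustomresolvers check mentioned above can also be scripted; a small sketch, where publisher_connection_string is a placeholder:

import pyodbc

# Lists the custom resolvers the server currently considers available,
# including their CLSIDs.
conn = pyodbc.connect(publisher_connection_string)
for row in conn.cursor().execute("EXEC sp_enumcustomresolvers"):
    print(row)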
Well .. so nobody got an answer. But I think I (finally) got it. Guess what... it is somewhere in the metamodel (as usual)!
When adding an item to the subscription, the new conflict resolver references to be used by the stored procedure come from the [distribution].[MSmerge_articleresolver] table
But, for existing subscriptions, the previous conflict resolver references are stored in the system tables of the publishing database, i.e. [sysmergearticles], [sysmergeextendedarticlesview], and [sysmergepartitioninfoview].
So we have on one side an item initially published with SQL Server 2005, where the publication references the 2005 conflict resolver, as per the publishing database metamodel. On the other side, the machine will attempt to add the same item to a new publication, this time with a default reference to the conflict resolver available in the distribution database, which is indeed different from the 2005 one.
To illustrate this, you can check the following:
USE distribution
go
SELECT article_resolver, resolver_clsid
FROM [MSmerge_articleresolver] WHERE article_resolver like '%Later Wins%'
GO
Then,
USE myPublicationDatabase
go
SELECT article_resolver, resolver_clsid
FROM [sysmergearticles] WHERE article_resolver like '%Later Wins%'
GO
SELECT article_resolver, resolver_clsid
FROM [sysmergeextendedarticlesview] WHERE article_resolver like '%Later Wins%'
GO
SELECT article_resolver, resolver_clsid
FROM [sysmergepartitioninfoview] WHERE article_resolver like '%Later Wins%'
GO
So it seems that I should update either the references in the distribution database or the references in the publication database. Let's give it a try!
Thanks, I had something similar on a re-publisher, where the subscriber article had a CLSID that made no sense on the server (I looked with Regedit) but that produced said error when trying to add the article to a publication.
I updated the resolver_clsid field of the sysmergearticles table for the subscribed article with the clsid it was trying to get:
declare @resolver_clsid nvarchar(50)
exec sys.sp_lookupcustomresolver N'Microsoft SQL Server DATETIME (Earlier Wins) Conflict Resolver', @resolver_clsid OUTPUT
select @resolver_clsid
and could then add the article
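Putting that end to end, here is a sketch of the fix. publisher_connection_string and the article name Tbl_blablabla are placeholders, and since this updates a replication system table (unsupported by Microsoft), test it on a copy first:

import pyodbc

conn = pyodbc.connect(publisher_connection_string, autocommit=True)
cur = conn.cursor()

# Ask the server which CLSID it maps the resolver name to today.
clsid = cur.execute("""
    SET NOCOUNT ON;
    DECLARE @resolver_clsid nvarchar(50);
    EXEC sys.sp_lookupcustomresolver
        N'Microsoft SQL Server DATETIME (Later Wins) Conflict Resolver',
        @resolver_clsid OUTPUT;
    SELECT @resolver_clsid;
""").fetchone()[0]

# Point the already-published article at that CLSID so the old and
# new publications agree on the resolver.
cur.execute("UPDATE sysmergearticles SET resolver_clsid = ? WHERE name = ?",
            clsid, 'Tbl_blablabla')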
I'm trying to automate my db restores during development, using TSQL on SQL Server 2008, using sqlalchemy with pyodbc as a transport.
The command I'm executing is:
"""CREATE DATABASE dbname
restore database dbname FROM DISK='C:\Backups\dbname.bak' WITH REPLACE,MOVE 'dbname_data' TO 'C:\Databases\dbname_data.mdf',MOVE 'dbname_log' TO 'C:\Databases\dbname_log.ldf'"""
Unfortunately, in SQL Server Management Studio, after the code has run, I see that the DB remains in the state "Restoring...".
If I restore through Management Studio, it works. If I use subprocess to call "sqlcmd", it works. pymssql has problems with authentication and doesn't even get that far.
What might be going wrong?
The BACKUP and RESTORE statements run asynchronously, so they have not finished by the time the code moves on to the next statement.
Using a while statement as described at http://ryepup.unwashedmeme.com/blog/2010/08/26/making-sql-server-backups-using-python-and-pyodbc/ solved this for me:
# setup your DB connection, cursor, etc
cur.execute('BACKUP DATABASE ? TO DISK=?',
['test', r'd:\temp\test.bak'])
while cur.nextset():
pass
I was unable to reproduce the problem when restoring directly from pyodbc (without sqlalchemy) by doing the following:
connection = pyodbc.connect(connection_string) # ensure autocommit is set to `True` in connection string
cursor = connection.cursor()
affected = cursor.execute("""CREATE DATABASE test
RESTORE DATABASE test FROM DISK = 'D:\\test.bak' WITH REPLACE, MOVE 'test_data' TO 'D:\\test_data.mdf', MOVE 'test_log' to 'D:\\test_log.ldf' """)
while cursor.nextset():
pass
Some questions that need clarification:
What is the code in use to do the restore using sqlalchemy?
What version of the SQL Server ODBC driver is in use?
Are there any messages in the SQL Server log related to the restore?
Thanks to geographika for the Cursor.nextset() example!
For SQL Alchemy users, and thanks to geographika for the answer: I ended up using the “raw” DBAPI connection from the connection pool.
It is exactly like geographika's solution, but with a few additional pieces:
import logging

import sqlalchemy as sa

logger = logging.getLogger(__name__)

driver = 'SQL+Server'
name = 'servername'
sql_engine_str = 'mssql+pyodbc://'\
    + name\
    + '/'\
    + 'master'\
    + '?driver='\
    + driver

engine = sa.create_engine(sql_engine_str, connect_args={'autocommit': True})
connection = engine.raw_connection()
try:
    cursor = connection.cursor()
    sql_cmd = """
    RESTORE DATABASE [test]
    FROM DISK = N'...\\test.bak'
    WITH FILE = 1,
    MOVE N'test'
    TO N'...\\test_Primary.mdf',
    MOVE N'test_log'
    TO N'...\\test_log.ldf',
    RECOVERY,
    NOUNLOAD,
    STATS = 5,
    REPLACE
    """
    cursor.execute(sql_cmd)
    while cursor.nextset():
        pass
except Exception as e:
    logger.error(str(e), exc_info=True)
Five things fixed my problem with identical symptoms.
1. Found that my test.bak file contained the wrong mdf and ldf files:
>>> cursor.execute(r"RESTORE FILELISTONLY FROM DISK = 'test.bak'").fetchall()
[(u'WRONGNAME', u'C:\\Program Files\\Microsoft SQL ...),
(u'WRONGNAME_log', u'C:\\Program Files\\Microsoft SQL ...)]
2. Created a new bak file and made sure to set the copy-only backup option.
3. Set the autocommit option for my connection:
connection = pyodbc.connect(connection_string, autocommit=True)
4. Used the connection.cursor only for a single RESTORE command and nothing else.
5. Corrected the test_data MOVE to test in my RESTORE command (courtesy of @beargle):
affected = cursor.execute("""RESTORE DATABASE test FROM DISK = 'test.bak' WITH REPLACE, MOVE 'test' TO 'C:\\test.mdf', MOVE 'test_log' to 'C:\\test_log.ldf' """)