I'm trying to (implicitly) create a temp table in SQL Server 2014 (12.0.4100.1), using the following code:
proc sql;
create table UNDEAD."##_28DaysLater"n as
select * from UNDEAD.inv_overrides;
UNDEAD is an OLEDB libref, and the code is running on SAS 9.3_M2 (Windows). The error I am getting is below:
ERROR: Cursor extended fetch error: IRowset::GetNextRows failed. : The
object is in a zombie state. An object may enter a zombie
state when either ITransaction::Commit or ITransaction::Abort is called, or when a storage object was created and not yet
released.
The full log (with sastrace) is below, executed in a fresh session of Enterprise Guide (5.1).
What is actually happening here? Is it possible to prevent this error by configuration, on the SAS or SQL Server side?
15 LIBNAME UNDEAD OLEDB
16 PROPERTIES=('Integrated Security'=SSPI 'Persist Security Info'=True 'initial catalog'=BDS)
17 DATASOURCE='Kernkraft400' PROVIDER=SQLNCLI11.1 SCHEMA=dbo connection=shared;
NOTE: Libref UNDEAD was successfully assigned as follows:
Engine: OLEDB
Physical Name: SQLNCLI11.1
18 OPTIONS SASTRACE=',,,d' SASTRACELOC=SASLOG NOSTSUFFIX;
19 proc sql;
20 create table UNDEAD."##_28DaysLater"n as
21 select * from UNDEAD.inv_overrides;
OLEDB_13: Prepared: on connection 3
SELECT * FROM "dbo"."inv_overrides"
OLEDB: AUTOCOMMIT turned ON for connection id 4
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 4
OLEDB: AUTOCOMMIT turned OFF for connection id 4
OLEDB: AUTOCOMMIT turned ON for connection id 4
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 4
NOTE: SAS variable labels, formats, and lengths are not written to DBMS tables.
OLEDB_14: Executed: on connection 3
SELECT * FROM "dbo"."inv_overrides"
OLEDB: AUTOCOMMIT turned ON for connection id 3
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 3
OLEDB_15: Executed: on connection 3
CREATE TABLE "dbo"."##_28DaysLater" ("TECH_FROM_DTTM" datetime2(3),"MSF_BK" varchar(400),"COLUMN_NM" varchar(32),"OVERRIDE_VALUE"
varchar(1000),"APPLY_IND" varchar(3),"TECH_TO_DTTM" datetime2(3))
OLEDB: AUTOCOMMIT turned OFF for connection id 3
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 3
OLEDB_16: Prepared: on connection 3
INSERT INTO "dbo"."##_28DaysLater" ("TECH_FROM_DTTM","MSF_BK","COLUMN_NM","OVERRIDE_VALUE","APPLY_IND","TECH_TO_DTTM") VALUES ( ?
, ? , ? , ? , ? , ? )
OLEDB_17: Executed: on connection 3
INSERT INTO "dbo"."##_28DaysLater" ("TECH_FROM_DTTM","MSF_BK","COLUMN_NM","OVERRIDE_VALUE","APPLY_IND","TECH_TO_DTTM") VALUES ( ?
, ? , ? , ? , ? , ? )
ERROR: Cursor extended fetch error: IRowset::GetNextRows failed. : The object is in a zombie state. An object may enter a zombie
state when either ITransaction::Commit or ITransaction::Abort is called, or when a storage object was created and not yet
released.
OLEDB: Performing ROLLBACK on connection 3
OLEDB: *-*-*-*-*-*-* ROLLBACK *-*-*-*-*-*-*
OLEDB: *-*-*-*-*-*-* ROLLBACK *-*-*-*-*-*-* on connection 3
NOTE: SUCCESSFUL INSERT of 1 ROWS
WARNING: File deletion failed for UNDEAD.'##_28DaysLater'n.DATA.
Staking the existence of a grave problem on the SQL Server side, I reincarnated the table via SASWORK and the log moaned no more:
data;
set UNDEAD.inv_overrides;
run;
proc sql;
create table UNDEAD."##_28DaysLater"n as
select * from &syslast;
NOTE: Table UNDEAD.'##_28DaysLater'n created, with 4 rows and 6
columns.
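As a side note, if you want to confirm from the SQL Server side that the global temp table really exists (it lives in tempdb and is dropped once the last session referencing it closes), a minimal check like the following works from SSMS; the table name is simply taken from the example above:

SELECT name, create_date
FROM tempdb.sys.tables
WHERE name = '##_28DaysLater';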
Related
I'm using MariaDB 10.6.8 with one master DB and two slave DBs, set up for replication.
When I execute an INSERT or UPDATE query without selecting a database first, replication doesn't seem to work. In other words, the master DB's data is changed but the slave DBs' data remains intact.
/* no database is selected */
MariaDB [(none)]> show master status \G
*************************** 1. row ***************************
File: maria-bin.000007
Position: 52259873
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.000 sec)
MariaDB [(none)]> UPDATE some_database.some_tables SET some_datetime_column = now() WHERE primary_key_column = 1;
Query OK, 1 row affected (0.002 sec)
Rows matched: 1 Changed: 1 Warnings: 0
MariaDB [(none)]> show master status \G
*************************** 1. row ***************************
File: maria-bin.000007
Position: 52260068
Binlog_Do_DB:
Binlog_Ignore_DB:
1 row in set (0.000 sec)
/* only the master database's record changes, even though the replication position has moved */
However, after selecting the database, replication works fine.
/* but, after selecting the database */
MariaDB [(none)]> USE some_database;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
MariaDB [some_database]> UPDATE some_tables SET some_datetime_column = now() WHERE primary_key_column = 1;
Query OK, 1 row affected (0.002 sec)
Rows matched: 1 Changed: 1 Warnings: 0
/* now both the master's and the slaves' records change */
Can anyone tell me what could be the cause of this situation?
Regardless of the binary log format (MIXED, STATEMENT, ROW), all DML commands are written to the binary log file as soon as the transaction is committed.
When using the ROW format, a TABLE_MAP event is logged first; it contains a unique ID plus the database and table names. The ROW_EVENT (Delete/Insert/Update) then refers to one or more table IDs to identify the tables used.
The STATEMENT format logs a query event, which contains the default database name, a timestamp and the SQL statement. If there is no default database, the statement itself will contain the database name.
Binlog dump example for the STATEMENT format (I removed the non-relevant parts, such as timestamps and user variables, from the output):
Without a default database:
#230210 4:42:41 server id 1 end_log_pos 474 CRC32 0x1fa4fa55 Query thread_id=5 exec_time=0 error_code=0 xid=0
insert into test.t1 values (1),(2)
/*!*/;
# at 474
#230210 4:42:41 server id 1 end_log_pos 505 CRC32 0xfecc5d48 Xid = 28
COMMIT/*!*/;
# at 505
With a default database:
#230210 4:44:35 server id 1 end_log_pos 639 CRC32 0xfc862172 Query thread_id=5 exec_time=0 error_code=0 xid=0
use `test`/*!*/;
insert into t1 values (1),(2)
/*!*/;
# at 639
#230210 4:44:35 server id 1 end_log_pos 670 CRC32 0xca70b57f Xid = 56
COMMIT/*!*/;
If a session doesn't use a default database on the source server, a statement may not be replicated when a binary log filter such as replicate_do_db is specified on the replica, since the replica doesn't parse the statement but only checks whether the default database name matches the filter.
To avoid inconsistent data on your replicas, I would recommend using the ROW format instead.
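If you want to stay on STATEMENT or MIXED logging, it is worth checking the replica's filter configuration and the master's current binlog format first; a minimal sketch (SET GLOBAL only affects new sessions):

/* on the replica: look at Replicate_Do_DB / Replicate_Ignore_DB in the output */
SHOW SLAVE STATUS \G

/* on the master: check and, if desired, switch the logging format */
SHOW VARIABLES LIKE 'binlog_format';
SET GLOBAL binlog_format = 'ROW';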
I've managed to perform XA transactions using the SQL Server OLEDB driver on Windows. Now I've ported the C++ application to Linux. On Linux, Microsoft provides the SQL Server 2019 ODBC driver, and since version 17.3 of this driver XA is reported to be supported.
Microsoft provides the following example that illustrates how to implement the xa_* functions:
Using XA Transactions
The example itself works, but using the code in another context doesn't. The call to SQLSetConnectAttr(..., SQL_ATTR_ENLIST_IN_XA, ...) for operation OP_START fails, and CheckRC() doesn't give me any useful information.
How can I get more information about a failing SQL_ATTR_ENLIST_IN_XA?
How does XA work with the SQL_ATTR_ENLIST_IN_XA approach compared to OLEDB?
Is it possible to change the isolation level in XA mode?
Share your experiences and details with us, please.
Strict XID data layout
The SQLSetConnectAttr(..., SQL_ATTR_ENLIST_IN_XA, ...) function is very sensitive regarding the XID. If the XID has a branch ID, then the branch ID must start at byte 64 of xid_t::data. Storing a global ID like "f9707929-a367-4e3a-9a80-3fbb3a23ab11" + branch ID "1234" directly in one sequence and identifying the string layout by xid_t::gtrid_length and xid_t::bqual_length will work with other DB APIs and IBM MQ, but it fails with SQL_ATTR_ENLIST_IN_XA in SQL Server.
To make the above sample XID work, the UUID has to be stored at the beginning of xid_t::data (bytes 0-35) and the branch ID has to be stored starting at byte 64 (bytes 64-67). The xid_t field gtrid_length has to be set to 36 and bqual_length to 4. I set the formatID to 1.
If the XID layout doesn't fit, SQL_ATTR_ENLIST_IN_XA with operation OP_START fails and SQLGetDiagRec() reports nothing about it.
Setting the isolation level
By default an XA transaction runs under the "Serializable" isolation level. Microsoft describes this isolation level as follows:
The highest level where transactions are completely isolated from one another. SQL Server keeps read and write locks acquired on selected data to be released at the end of the transaction. Range-locks are acquired when a SELECT operation uses a ranged WHERE clause, especially to avoid phantom reads.
On each call to xa_start the isolation level is set to "Serializable". Setting the isolation level using SQLSetConnectAttr(..., SQL_ATTR_TXN_ISOLATION, ...) right after connecting doesn't help; you have to call it after SQLSetConnectAttr(..., SQL_ATTR_ENLIST_IN_XA, OP_START, ...).
Doing so allows you to set the isolation level to, for instance, SQL_TXN_READ_COMMITTED. The database option READ_COMMITTED_SNAPSHOT is also considered: setting the isolation level to SQL_TXN_READ_COMMITTED while READ_COMMITTED_SNAPSHOT is enabled switches the effective isolation level to "Snapshot".
The command "DBCC useroptions" can be used to query the isolation level of the current session.
The following query is also useful for checking the isolation level and the status of active transactions:
SELECT tst.session_id, [database_name] = db_name(s.database_id)
, tat.transaction_begin_time
, transaction_duration_s = datediff(s, tat.transaction_begin_time, sysdatetime())
, transaction_type = CASE tat.transaction_type WHEN 1 THEN 'Read/write transaction'
WHEN 2 THEN 'Read-only transaction'
WHEN 3 THEN 'System transaction'
WHEN 4 THEN 'Distributed transaction' END
, input_buffer = ib.event_info, tat.transaction_uow
, transaction_state = CASE tat.transaction_state
WHEN 0 THEN 'The transaction has not been completely initialized yet.'
WHEN 1 THEN 'The transaction has been initialized but has not started.'
WHEN 2 THEN 'The transaction is active - has not been committed or rolled back.'
WHEN 3 THEN 'The transaction has ended. This is used for read-only transactions.'
WHEN 4 THEN 'The commit process has been initiated on the distributed transaction.'
WHEN 5 THEN 'The transaction is in a prepared state and waiting resolution.'
WHEN 6 THEN 'The transaction has been committed.'
WHEN 7 THEN 'The transaction is being rolled back.'
WHEN 8 THEN 'The transaction has been rolled back.' END
, trn_iso_level = CASE s.transaction_isolation_level
WHEN 0 THEN 'Unspecified'
WHEN 1 THEN 'ReadUncommitted'
WHEN 2 THEN 'ReadCommitted'
WHEN 3 THEN 'RepeatableRead'
WHEN 4 THEN 'Serializable'
WHEN 5 THEN 'Snapshot' END
, transaction_name = tat.name, request_status = r.status
, tst.is_user_transaction, tst.is_local
, session_open_transaction_count = tst.open_transaction_count
, s.host_name, s.program_name, s.client_interface_name, s.login_name, s.is_user_process
FROM sys.dm_tran_active_transactions tat
INNER JOIN sys.dm_tran_session_transactions tst on tat.transaction_id = tst.transaction_id
INNER JOIN Sys.dm_exec_sessions s on s.session_id = tst.session_id
LEFT OUTER JOIN sys.dm_exec_requests r on r.session_id = s.session_id
CROSS APPLY sys.dm_exec_input_buffer(s.session_id, null) AS ib;
The advantage of the SQL Server ODBC driver's SQL_ATTR_ENLIST_IN_XA
Implementing SQL Server XA with the OLEDB driver and the ITransactionJoin interface communicates directly with the local Distributed Transaction Coordinator (DTC) service. If SQL Server is running on another host, both the local DTC and the DTC on the SQL Server host are involved, and the DTC services must communicate over the network. RPC, dynamic port ranges, firewalls and security settings often make this very difficult to get working.
With the new ODBC SQL_ATTR_ENLIST_IN_XA interface, DTC-to-DTC communication is no longer needed. The application only needs a connection to the SQL Server database instance; on the SQL Server host the DTC service must be running and the "XA option" must be enabled in that DTC. The application that uses SQL_ATTR_ENLIST_IN_XA doesn't require a local DTC.
I'm using Robot Framework to connect to a Sybase DB and then DELETE and UPDATE rows in a table.
When the queries below are executed in Robot Framework as a single pass, they work fine.
Sybase DB Connection - Delete and Update for a single pass
connect To Database Using Custom Params    pyodbc    "Driver={Adaptive Server Enterprise}; server=<myserver>; port=<myport>;db=<mydb>;uid=<myuser>; pwd=<mypasswd>;"
# Run Select Query
@{selectQuery}    Query    select * from TABLE where FIELD1 = '1000'
Log Many    @{selectQuery}
Log    "Selected Query Executed"
# Run Delete Query
@{DeleteQuery}    Execute Sql String    set chained off ; Delete from TABLE where FIELD1 = '1000' AND FIELD2 = 'VALUE2' AND FIELD3 = 'VALUE3'
Log Many    @{DeleteQuery}
Log    "Delete Query Executed"
# Run Update Query
@{updateQuery}    Execute Sql String    set chained off ; UPDATE TABLE SET FIELD2 = 'VALUE2' where FIELD1 = '1001'
Log Many    @{updateQuery}
Log    "Update Query Executed"
Disconnect From Database
Whereas when a for loop is used as below:
Sybase DB Connection - Delete with for loop for multiple passes
connect To Database Using Custom Params    pyodbc    "Driver={Adaptive Server Enterprise}; server=<myserver>; port=<myport>;db=<mydb>;uid=<myuser>; pwd=<mypasswd>;"
# Run DELETE Query
:FOR    ${num}    IN RANGE    100
\    Execute Sql String    set chained off ; Delete from TABLE where FIELD1 = ${num} and FIELD2 = "${VALUE2[${num}]}" and FIELD3 = "${VALUE3[${num}]}"
\    Sleep    1
It fails with the below error:
[Sybase][ODBC Driver][Adaptive Server Enterprise]SET CHAINED command not allowed within multi-statement transaction.\n (226) (SQLExecDirectW);
[Sybase][ODBC Driver][Adaptive Server Enterprise]Stored procedure 'abc_sp' may be run only in unchained transaction mode. The 'SET CHAINED OFF' command will cause the current session to use unchained transaction mode.\n (7713)")
Using commit
The for loop below, with a commit before and after each delete, worked fine:
connect To Database Using Custom Params    pyodbc    "Driver={Adaptive Server Enterprise}; server=<myserver>; port=<myport>;db=<mydb>;uid=<myuser>; pwd=<mypasswd>;"
# Run DELETE Query
:FOR    ${num}    IN RANGE    100
\    Execute Sql String    commit
\    Execute Sql String    set chained off ; Delete from TABLE where FIELD1 = ${num} and FIELD2 = "${VALUE2[${num}]}" and FIELD3 = "${VALUE3[${num}]}"
\    Execute Sql String    commit
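If you want to confirm which transaction mode the session is actually in before the deletes run, Sybase ASE exposes global variables for this; a hedged sketch (assumes ASE, run via Execute Sql String or isql):

select @@tranchained   -- 0 = unchained (autocommit-style), 1 = chained transaction mode
select @@trancount     -- number of currently open (nested) transactions in this session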
I am trying to use GO to get R to pull a multipart query from a SQL Server database, but R keeps erroring out when I try this. Does anyone know a workaround to get RODBC to run multipart queries?
Example query:
query2 = "IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL
DROP TABLE #ATTTempTable
GO
SELECT
* INTO #ATTTempTable
FROM ETL.ATT.fact_responses fr
WHERE fr.ResponseDateTime > '2015-07-06'
"
channel <- odbcConnect("<host name>", uid="<uid>", pwd="<pwd>")
raw = sqlQuery(channel, query2)
close(channel)
and the result:
> raw
[1] "42000 102 [Microsoft][ODBC Driver 11 for SQL Server][SQL Server]Incorrect syntax near 'GO'."
[2] "[RODBC] ERROR: Could not SQLExecDirect 'IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL\n DROP TABLE #ATTTempTable\n\nGO\n\nSELECT\n\t* INTO #ATTTempTable\nFROM ETL.ATT.fact_responses fr\nWHERE fr.ResponseDateTime > '2015-07-06'\n'"
>
Because your query contains multiple lines with conditional logic, it resembles a stored procedure.
Simply save it as a stored procedure in SQL Server:
CREATE PROCEDURE sqlServerSp @ResponseDateTime nvarchar(10)
AS
BEGIN
    -- suppresses affected rows messages so RODBC returns a dataset
    SET NOCOUNT ON;

    IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL
        DROP TABLE #ATTTempTable;

    -- runs the make-table action query
    SELECT * INTO #ATTTempTable
    FROM ETL.ATT.fact_responses fr
    WHERE fr.ResponseDateTime > @ResponseDateTime;
END
And then run the stored procedure in R. You can even pass parameters like the date:
channel <- odbcConnect("<host name>", uid="<uid>", pwd="<pwd>")
raw = sqlQuery(channel, "EXEC sqlServerSp @ResponseDateTime='2015-07-06'")
close(channel)
You can't. See https://msdn.microsoft.com/en-us/library/ms188037.aspx
You have to divide your query into two statements and run them separately.
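Split at the GO, the two pieces from the question would be passed as separate sqlQuery() calls on the same channel (which works because #ATTTempTable is scoped to the connection); roughly:

-- first call: drop the temp table if it exists
IF OBJECT_ID('tempdb..#ATTTempTable') IS NOT NULL
    DROP TABLE #ATTTempTable;

-- second call: the make-table query
SELECT * INTO #ATTTempTable
FROM ETL.ATT.fact_responses fr
WHERE fr.ResponseDateTime > '2015-07-06';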
In the SQL Server Full-Text Indexing scheme I want to know if a table is in
start_change_tracking mode
update_index mode
start_change_tracking and start_background_updateindex modes
The problem is that I set my tables to "background update index" and then tell them to "start change tracking", but some months later they don't seem to be tracking changes.
How can I see the status of the "background updateindex" and "change tracking" flags?
Example:
sp_fulltext_table @tabname='DiaryEntry', @action='start_background_updateindex'
Server: Msg 15633, Level 16, State 1, Procedure sp_fulltext_table, Line 364
Full-text auto propagation is currently enabled for table 'DiaryEntry'.
sp_fulltext_table @tabname='Ticket', @action='start_background_updateindex'
Server: Msg 15633, Level 16, State 1, Procedure sp_fulltext_table, Line 364
Full-text auto propagation is currently enabled for table 'Ticket'.
Obviously a table has an indexing status; I just want to know it so I can display it to the user (i.e. me).
The other available API:
EXECUTE sp_help_fulltext_tables
only returns the tables that are in the catalog; it doesn't return their status.
TABLE_OWNER TABLE_NAME FULLTEXT_KEY_INDEX_NAME FULLTEXT_KEY_COLID FULLTEXT_INDEX_ACTIVE FULLTEXT_CATALOG_NAME
=========== ========== ======================= ================== ===================== =====================
dbo DiaryEntry PK_DiaryEntry_GUID 1 1 FrontlineFTCatalog
dbo Ticket PK__TICKET_TicketGUID 1 1 FrontlineFTCatalog
And I can get the PopulateStatus of an entire catalog:
SELECT FULLTEXTCATALOGPROPERTY('MyCatalog', 'PopulateStatus') AS PopulateStatus
which returns a status for the catalog:
0 = Idle
1 = Full population in progress
2 = Paused
3 = Throttled
4 = Recovering
5 = Shutdown
6 = Incremental population in progress
7 = Building index
8 = Disk is full. Paused.
9 = Change tracking
but not for a table.
SQL Server 2000 SP4
SELECT @@version
Microsoft SQL Server 2000 - 8.00.194 (Intel X86)
Aug 6 2000 00:57:48
Copyright (c) 1988-2000 Microsoft Corporation
Standard Edition on Windows NT 5.0 (Build 2195: Service Pack 4)
Regardless of any bug, I want to create a UI so I can easily see its status.
Christ. I had a whole nicely formatted answer; I was scrolling to hit save when IE crashed.
Short version:
OBJECTPROPERTY
TableFullTextPopulateStatus
TableFullTextBackgroundUpdateIndexOn
TableFullTextCatalogId
TableFullTextChangeTrackingOn
TableFullTextKeyColumn
TableHasActiveFulltextIndex
TableFullTextBackgroundUpdateIndexOn
1=TRUE
0=FALSE
TableFullTextPopulateStatus
0=No population
1=Full population
2=Incremental population
Full example:
SELECT
--indicates whether full-text change-tracking is enabled on the table (0, 1)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextChangeTrackingOn') AS TableFullTextChangeTrackingOn,
--indicate the population status of a full-text table (0=No population, 1=Full Population, 2=Incremental Population)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextPopulateStatus') AS TableFullTextPopulateStatus,
--indicates whether a table has full-text background update indexing (0, 1)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextBackgroundUpdateIndexOn') AS TableFullTextBackgroundUpdateIndexOn,
-- provides the full-text catalog ID in which the full-text index data for the table resides (0=table is not indexed)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextCatalogId') AS TableFullTextCatalogId,
--provides the column ID of the full-text unique key column (0=table is not indexed)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableFullTextKeyColumn') AS TableFullTextKeyColumn,
--indicates whether a table has an active full-text index (0, 1)
OBJECTPROPERTY(OBJECT_ID('DiaryEntry'), 'TableHasActiveFulltextIndex') AS TableHasActiveFulltextIndex
What version of SQL Server / service pack are you running? This used to be a bug in SQL Server 2000:
http://support.microsoft.com/kb/290212
Execute sp_fulltext_table in this sequence to temporarily fix the issue (the low disk space is likely the cause); example calls are sketched after the list.
stop_change_tracking
start_change_tracking
stop_background_updateindex
start_background_updateindex
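For the DiaryEntry table from the question, that sequence would look roughly like this (a sketch only; repeat it per table):

EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'stop_change_tracking'
EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'start_change_tracking'
EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'stop_background_updateindex'
EXEC sp_fulltext_table @tabname = 'DiaryEntry', @action = 'start_background_updateindex'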
OK, to monitor the status you need to look at this very handy resource on SQL Server FTI on MSSQL Tips; I think the script there will give you what you are looking for.