I'm currently running a DDL script using the Liquibase Java API. The whole script and the corresponding changeSet are executed successfully. However, after this execution Liquibase throws a LockException.
The error log is as follows:
21713 [main] DEBUG liquibase.ext.mssql.database.MSSQLDatabase - Executing Statement: ALTER
TABLE [dbo].[VALIDATIONEXECUTORS] CHECK CONSTRAINT [FK_MSTAPPTYPE_VLDTNEXCUTORS]
21713 [main] INFO liquibase.executor.jvm.JdbcExecutor - ALTER TABLE [dbo].[VALIDATIONEXECUTORS]
CHECK CONSTRAINT [FK_MSTAPPTYPE_VLDTNEXCUTORS]
21715 [main] DEBUG liquibase.executor.jvm.JdbcExecutor - 0 row(s) affected
21715 [main] DEBUG liquibase.ext.mssql.database.MSSQLDatabase - Executing Statement: COMMIT
21715 [main] INFO liquibase.executor.jvm.JdbcExecutor - COMMIT
21735 [main] DEBUG liquibase.executor.jvm.JdbcExecutor - -1 row(s) affected
21735 [main] INFO liquibase.changelog.ChangeSet - SQL in file
E:\\LQBASE\\LiquibaseDemo\\src\\main\\resources\\db\\changelog\\ddl\\DBSchema.sql executed
21737 [main] INFO liquibase.changelog.ChangeSet - ChangeSet
src/main/resources/db/changelog/ddl_changelog.xml::Create_DB::skini ran successfully in 18064ms
21738 [main] INFO liquibase.executor.jvm.JdbcExecutor - select schema_name()
21739 [main] INFO liquibase.executor.jvm.JdbcExecutor - SELECT MAX(ORDEREXECUTED) FROM
IND_DEV.DATABASECHANGELOG
21742 [main] INFO liquibase.executor.jvm.JdbcExecutor - select schema_name()
21744 [main] DEBUG liquibase.executor.jvm.JdbcExecutor - Release Database Lock
21745 [main] INFO liquibase.executor.jvm.JdbcExecutor - select schema_name()
21747 [main] DEBUG liquibase.executor.jvm.JdbcExecutor - UPDATE IND_DEV.DATABASECHANGELOGLOCK
SET LOCKED = 0, LOCKEDBY = NULL, LOCKGRANTED = NULL WHERE ID = 1
21749 [main] INFO liquibase.executor.jvm.JdbcExecutor - select schema_name()
21751 [main] INFO liquibase.lockservice.StandardLockService - Successfully released change log
lock
21752 [main] ERROR liquibase.Liquibase - Could not release lock
liquibase.exception.LockException: liquibase.exception.DatabaseException: Error executing SQL
UPDATE IND_DEV.DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKEDBY = NULL, LOCKGRANTED = NULL WHERE
ID = 1: Invalid object name 'IND_DEV.DATABASECHANGELOGLOCK'.
at liquibase.lockservice.StandardLockService.releaseLock(StandardLockService.java:357)
at liquibase.Liquibase.update(Liquibase.java:206)
at liquibase.Liquibase.update(Liquibase.java:179)
at liquibase.Liquibase.update(Liquibase.java:175)
at liquibase.Liquibase.update(Liquibase.java:168)
at
com.sk.liquibase.LiquibaseDemo.LiquibaseConfig.createManageIDDatabase(LiquibaseConfig.java:34)
at com.sk.liquibase.LiquibaseDemo.App.main(App.java:12)
Caused by: liquibase.exception.DatabaseException: Error executing SQL UPDATE
IND_DEV.DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKEDBY = NULL, LOCKGRANTED = NULL WHERE ID = 1:
Invalid object name 'IND_DEV.DATABASECHANGELOGLOCK'.
According to the error, IND_DEV (which is the DB username) is somehow being prefixed to the DATABASECHANGELOGLOCK table name. Does anyone have any idea what the issue could be?
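One way to verify where the Liquibase tables actually ended up is to ask the SQL Server catalog directly; this sketch uses only the standard sys.tables and sys.schemas views:
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.name IN ('DATABASECHANGELOG', 'DATABASECHANGELOGLOCK');
If the tables turn out to live under a schema other than IND_DEV (the default schema Liquibase resolved via select schema_name()), that mismatch would explain the Invalid object name error.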
Sometimes, if the updating application is stopped abruptly, the lock remains stuck, most likely because the killed Liquibase process never released it.
Then running
UPDATE DATABASECHANGELOGLOCK SET LOCKED=0, LOCKGRANTED=null, LOCKEDBY=null;
against the database helps.
Alternatively, you can simply drop the DATABASECHANGELOGLOCK table (or whatever change log lock table name you have configured); it will be recreated on the next run.
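Putting both options together, a minimal cleanup sketch, assuming the default table name (qualify it with the correct schema if your setup requires it):
-- Inspect the stuck lock first:
SELECT ID, LOCKED, LOCKGRANTED, LOCKEDBY FROM DATABASECHANGELOGLOCK;
-- Either clear the lock row...
UPDATE DATABASECHANGELOGLOCK SET LOCKED = 0, LOCKGRANTED = NULL, LOCKEDBY = NULL WHERE ID = 1;
-- ...or drop the table entirely; Liquibase recreates it on the next run.
DROP TABLE DATABASECHANGELOGLOCK;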
Related
I am running bulk transactions on a HANA database and getting the following error:
2018-01-15 10:23:33,865 ERROR c.c.t.Payment [tpcc-thread-5] UPDATE district SET d_ytd = d_ytd + 1601.0 WHERE d_w_id = 1 AND d_id = 1
com.sap.db.jdbc.exceptions.JDBCDriverException: SAP DBTech JDBC: [138]: transaction serialization failure: TrexUpdate failed on table 'SYSTEM:DISTRICT' with error: transaction order error, rc=4614
at com.sap.db.jdbc.exceptions.SQLExceptionSapDB._newInstance(SQLExceptionSapDB.java:193)
.......
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-01-15 10:23:34,139 ERROR c.c.t.Payment [tpcc-thread-5] Payment error
java.lang.Exception: Payment update transaction error
Does anybody have any idea what could be wrong?
I found out that this was a valid error. With isolation level REPEATABLE_READ or SERIALIZABLE, this problem can occur when, before one transaction ends, another transaction modifies the same data that the first transaction was working on.
To get rid of this error, we can use isolation level READ_COMMITTED, which takes a snapshot of the data at the beginning and uses that same data throughout the transaction. It also locks the rows it's working on so that other transactions cannot modify them.
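For example, the isolation level can be switched per session before running the updates; a minimal sketch in HANA SQL (from JDBC, the equivalent is Connection.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED)):
-- Applies to the current session only:
SET TRANSACTION ISOLATION LEVEL READ COMMITTED;
-- The update from the error log then runs under READ COMMITTED:
UPDATE district SET d_ytd = d_ytd + 1601.0 WHERE d_w_id = 1 AND d_id = 1;
COMMIT;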
These are the commands I am running:
bin/zookeeper-server-start etc/kafka/zookeeper.properties &
bin/kafka-server-start etc/kafka/server.properties &
bin/schema-registry-start etc/schema-registry/schema-registry.properties &
bin/connect-standalone etc/schema-registry/connect-avro-standalone.properties etc/kafka-connect-jdbc/quickstart-sqlserver.properties &
bin/kafka-avro-console-consumer --new-consumer --bootstrap-server localhost:9094 --topic test3-sqlserver-jdbc-ErrorLog --from-beginning
I am trying to connect to SQL Server using the Confluent Platform (Kafka Connect) and am facing the following issues:
When I try to connect to the default schema, i.e. dbo, the connection is established, but no data is fetched into the Kafka consumer. The connection details I am using are:
name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=ErrorLogID
topic.prefix=test3-sqlserver-jdbc-
table.whitelist=ErrorLog
schema.registry=dbo
When I try to connect to any other schema, the producer throws an error. The connection details I am using are:
name=test-sqlserver-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
connection.url=jdbc:sqlserver://********:1433;database=AdventureWorks2012;user=****;password=****
mode=incrementing
incrementing.column.name=AddressID
topic.prefix=test3-sqlserver-jdbc-
table.whitelist=Address
schema.registry=Person
Error:
INFO Source task WorkerSourceTask{id=test-sqlserver-jdbc-autoincrement-0} finished
initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:138)
[2017-03-07 17:55:47,041] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}:
com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'Address'.
(io.confluent.connect.jdbc.JdbcSourceTask:239)
[2017-03-07 17:55:52,124] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}: com.microsoft.sqlserver.jdbc.SQLServerException:
Invalid object name 'Address'. (io.confluent.connect.jdbc.JdbcSourceTask:239)
[2017-03-07 17:55:53,684] INFO Reflections took 9299 ms to scan
262 urls, producing 12112 keys and 79402 values
(org.reflections.Reflections:229)
[2017-03-07 17:55:57,181] ERROR Failed to run query for table
TimestampIncrementingTableQuerier{name='Address', query='null',
topicPrefix='test3-sqlserver-jdbc-', timestampColumn='null',
incrementingColumn='AddressID'}:
com.microsoft.sqlserver.jdbc.SQLServerException: Invalid object name 'Address'.
(io.confluent.connect.jdbc.JdbcSourceTask:239)
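For what it's worth, the log shows the querier using the bare table name ('Address'), which SQL Server resolves against the connecting login's default schema; the difference is easy to reproduce in plain T-SQL against AdventureWorks:
-- Fails when the login's default schema is dbo:
SELECT TOP 1 * FROM Address;          -- Invalid object name 'Address'
-- Works with an explicit schema qualifier:
SELECT TOP 1 * FROM Person.Address;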
I started to explore R on SQL Server 2016 but am running into errors. I resolved a few initial errors but can't get past this one:
exec sp_execute_external_script
@language =N'R',
@script=N'OutputDataSet<-InputDataSet',
@input_data_1 =N'select 1 as hello'
with result sets (([hello] int not null));
go
Error:
Msg 39021, Level 16, State 1, Line 1
Unable to launch runtime for 'R' script. Please check the configuration of the 'R' runtime.
Msg 39019, Level 16, State 1, Line 1
An external script error occurred:
Unable to launch the runtime. ErrorCode 0x80070490: 1168(Element not found.).
Msg 11536, Level 16, State 1, Line 1
EXECUTE statement failed because its WITH RESULT SETS clause specified 1 result set(s), but the statement only sent 0 result set(s) at run time.
I found answers suggesting that the working directory for R be set in Rlauncher.config, but there is no Rlauncher.config at the path below on my machine, and I am not sure why:
C:\Program Files\Microsoft SQL Server 2016\MSSQL13.SQL2016\MSSQL\Binn
When I check the error log I see the following errors:
2016-11-13 19:41:14.131 Security Context Manager is initialized successfully.
2016-11-13 19:41:14.132 Satellite Session Manager is initialized successfully.
2016-11-13 19:41:14.133 Launcher DLL RLauncher.dll not loaded! Error: 126
2016-11-13 19:41:14.133 Failed to load the launcher RLauncher.dll and check satellite version
2016-11-13 19:41:14.133 No Launcher dlls were registered!
Please help.
Please make sure you have installed R Services (In-Database) from the feature selection page of SQL Server Setup. Please note this is different from the R Server option. See here for details.
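If the feature is installed but the runtime still fails to launch, it is also worth confirming that external scripts are enabled and that the instance has been restarted since; a minimal check:
-- Enable external scripts (takes effect after a restart):
EXEC sp_configure 'external scripts enabled', 1;
RECONFIGURE WITH OVERRIDE;
-- After restarting SQL Server and the SQL Server Launchpad service, verify:
EXEC sp_execute_external_script
    @language = N'R',
    @script = N'OutputDataSet <- InputDataSet',
    @input_data_1 = N'SELECT 1 AS hello'
WITH RESULT SETS (([hello] INT NOT NULL));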
I'm trying to (implicitly) create a temp table in SQL Server 2014 (12.0.4100.1), using the following code:
proc sql;
create table UNDEAD."##_28DaysLater"n as
select * from UNDEAD.inv_overrides;
UNDEAD is an OLEDB libref, and the code is running on SAS 9.3_M2 (Windows). The error I am getting is below:
ERROR: Cursor extended fetch error: IRowset::GetNextRows failed. : The
object is in a zombie state. An object may enter a zombie
state when either ITransaction::Commit or ITransaction::Abort is called, or when a storage object was created and not yet
released.
The full log (with sastrace) is below, executed in a fresh session of Enterprise Guide (5.1).
What is actually happening here? Is it possible to prevent this error by configuration, on the SAS or SQL server side?
15 LIBNAME UNDEAD OLEDB
16 PROPERTIES=('Integrated Security'=SSPI 'Persist Security Info'=True 'initial catalog'=BDS)
17 DATASOURCE='Kernkraft400' PROVIDER=SQLNCLI11.1 SCHEMA=dbo connection=shared;
NOTE: Libref UNDEAD was successfully assigned as follows:
Engine: OLEDB
Physical Name: SQLNCLI11.1
18 OPTIONS SASTRACE=',,,d' SASTRACELOC=SASLOG NOSTSUFFIX;
19 proc sql;
20 create table UNDEAD."##_28DaysLater"n as
21 select * from UNDEAD.inv_overrides;
OLEDB_13: Prepared: on connection 3
SELECT * FROM "dbo"."inv_overrides"
OLEDB: AUTOCOMMIT turned ON for connection id 4
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 4
OLEDB: AUTOCOMMIT turned OFF for connection id 4
OLEDB: AUTOCOMMIT turned ON for connection id 4
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 4
NOTE: SAS variable labels, formats, and lengths are not written to DBMS tables.
OLEDB_14: Executed: on connection 3
SELECT * FROM "dbo"."inv_overrides"
OLEDB: AUTOCOMMIT turned ON for connection id 3
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 3
OLEDB_15: Executed: on connection 3
CREATE TABLE "dbo"."##_28DaysLater" ("TECH_FROM_DTTM" datetime2(3),"MSF_BK" varchar(400),"COLUMN_NM" varchar(32),"OVERRIDE_VALUE"
varchar(1000),"APPLY_IND" varchar(3),"TECH_TO_DTTM" datetime2(3))
OLEDB: AUTOCOMMIT turned OFF for connection id 3
OLEDB: *-*-*-*-*-*-* COMMIT *-*-*-*-*-*-* on connection 3
OLEDB_16: Prepared: on connection 3
INSERT INTO "dbo"."##_28DaysLater" ("TECH_FROM_DTTM","MSF_BK","COLUMN_NM","OVERRIDE_VALUE","APPLY_IND","TECH_TO_DTTM") VALUES ( ?
, ? , ? , ? , ? , ? )
OLEDB_17: Executed: on connection 3
INSERT INTO "dbo"."##_28DaysLater" ("TECH_FROM_DTTM","MSF_BK","COLUMN_NM","OVERRIDE_VALUE","APPLY_IND","TECH_TO_DTTM") VALUES ( ?
, ? , ? , ? , ? , ? )
ERROR: Cursor extended fetch error: IRowset::GetNextRows failed. : The object is in a zombie state. An object may enter a zombie
state when either ITransaction::Commit or ITransaction::Abort is called, or when a storage object was created and not yet
released.
OLEDB: Performing ROLLBACK on connection 3
OLEDB: *-*-*-*-*-*-* ROLLBACK *-*-*-*-*-*-*
OLEDB: *-*-*-*-*-*-* ROLLBACK *-*-*-*-*-*-* on connection 3
NOTE: SUCCESSFUL INSERT of 1 ROWS
WARNING: File deletion failed for UNDEAD.'##_28DaysLater'n.DATA.
Staking the existence of a grave problem on the SQL Server side, I reincarnated the table via SASWORK and the log moaned no more:
data;
set UNDEAD.inv_overrides;
run;
proc sql;
create table UNDEAD."##_28DaysLater"n as
select * from &syslast;
NOTE: Table UNDEAD.'##_28DaysLater'n created, with 4 rows and 6
columns.
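As background on why the libref-plus-## combination is touchy: names starting with ## are SQL Server global temporary tables, which are created in tempdb and dropped once the creating session closes (and no other session is still using them). For example, in plain T-SQL:
-- A global temp table is visible to all sessions but lives in tempdb:
CREATE TABLE ##_28DaysLater (i INT);
SELECT name FROM tempdb.sys.tables WHERE name = N'##_28DaysLater';
-- It vanishes once the creating connection closes and nothing else references it.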
I am trying to install OIF - Oracle Identity Federation as per the OBE http://www.oracle.com/webfolder/technetwork/tutorials/obe/fmw/oif/11g/r1/oif_install/oif_install.htm
I have installed Oracle 11gR2 11.2.0.3 with charset = AL32UTF8, db_block_size of 8K, and nls_length_semantics=CHAR, and created the needed database and listener.
Installed WebLogic 10.3.6.
Started the installation of OIM - Oracle Identity Management, choosing the install-and-configure option and the schema creation options.
Installation goes fine, but during configuration it fails. Below is the relevant part of the logs.
I have tried multiple times, only to fail again and again. Could someone kindly shed some light on what is going wrong here? Please let me know if you need more info on the setup...
File: ...//oraInventory/logs/install2013-05-30_01-18-31AM.out
ORA-01450: maximum key length (6398) exceeded
Percent Complete: 62
Repository Creation Utility: Create - Completion Summary
Database details:
Host Name : vccg-rh1.earth.com
Port : 1521
Service Name : OIAMDB
Connected As : sys
Prefix for (non-prefixable) Schema Owners : DEFAULT_PREFIX
RCU Logfile : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/rcu.log
RCU Checkpoint Object : /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/RCUCheckpointObj
Component schemas created:
Component Status Logfile
Oracle Internet Directory Failed /data/OIAM/installed_apps/fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
Repository Creation Utility - Create : Operation Completed
Repository Creation Utility - Dropping and Cleanup of the failed components
Repository Dropping and Cleanup of the failed components in progress.
Percent Complete: 93
Percent Complete: -117
Percent Complete: 100
RCUUtil createOIDRepository status = 2
java.lang.Exception: RCU OID Schema Creation Failed
at oracle.as.idm.install.config.IdMDirectoryServicesManager.doExecute(IdMDirectoryServicesManager.java:792)
at oracle.as.install.engine.modules.configuration.client.ConfigAction.execute(ConfigAction.java:375)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.run(TaskPerformer.java:88)
at oracle.as.install.engine.modules.configuration.action.TaskPerformer.startConfigAction(TaskPerformer.java:105)
at oracle.as.install.engine.modules.configuration.action.ActionRequest.perform(ActionRequest.java:15)
at oracle.as.install.engine.modules.configuration.action.RequestQueue.perform(RequestQueue.java:96)
at oracle.as.install.engine.modules.configuration.standard.StandardConfigActionManager.start(StandardConfigActionManager.java:186)
at oracle.as.install.engine.modules.configuration.boot.ConfigurationExtension.kickstart(ConfigurationExtension.java:81)
at oracle.as.install.engine.modules.configuration.ConfigurationModule.run(ConfigurationModule.java:86)
at java.lang.Thread.run(Thread.java:662)
File: ...///fmw/Oracle_IDM1_IDP33/rcu/log/oid.log
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
*
ERROR at line 1:
ORA-01450: maximum key length (6398) exceeded
Update:
I looked at the logs again and tracked which SQL statements were leading to the above error:
CREATE BIGFILE TABLESPACE "OLTS_CT_STORE" EXTENT MANAGEMENT LOCAL AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO DATAFILE '/data/OIAM/installed_apps/db/oradata/OIAMDB/gcats1_oid.dbf' SIZE 32M AUTOEXTEND ON NEXT 10240K MAXSIZE UNLIMITED;
CREATE TABLE ct_dn (
EntryID NUMBER NOT NULL,
RDN varchar2(1024) NOT NULL,
ParentDN varchar2(1024) NOT NULL)
ENABLE ROW MOVEMENT
TABLESPACE OLTS_CT_STORE MONITORING;
CREATE UNIQUE INDEX rp_dn on ct_dn (parentdn,rdn)
TABLESPACE OLTS_CT_STORE
PARALLEL COMPUTE STATISTICS;
I ran these statements from SQL*Plus and was able to create the index without issues, and as per the tablespace creation statement, autoextend is on. Yet when RCU (the Repository Creation Utility) runs to create the needed schemas, it fails with the same error as before. Any pointers?
Setting NLS_LENGTH_SEMANTICS=BYTE worked.
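That is consistent with the arithmetic: with NLS_LENGTH_SEMANTICS=CHAR and an AL32UTF8 character set, each VARCHAR2(1024) column can occupy up to 4 x 1024 = 4096 bytes, so the two-column key of rp_dn can reach 8192 bytes, above the 6398-byte limit reported for an 8K block; with BYTE semantics the key maxes out at 2048 bytes. A sketch of the change (the RCU run must execute while BYTE semantics are in effect, since the semantics are applied at DDL time):
-- Instance-wide, for sessions created afterwards:
ALTER SYSTEM SET NLS_LENGTH_SEMANTICS='BYTE' SCOPE=BOTH;
-- Or for a single session before running the DDL:
ALTER SESSION SET NLS_LENGTH_SEMANTICS=BYTE;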