When trying to set up a VoltDB target, SymmetricDS fails on schema creation (specifically on the table SYM_INCOMING_ERROR).
Is there a setting to change the schema without having to recompile?
[voltdb] - AbstractSymmetricEngine - An error occurred while starting SymmetricDS
org.jumpmind.db.sql.SqlException: General Provider Error (GRACEFUL_FAILURE): 'Error: Table SYM_INCOMING_ERROR has a maximum row size of 2403801 but the maximum supported row size is 2097152'
I would export the DDL SQL, fix the issue, and apply it using Flyway or Liquibase.
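For illustration only, here is a hypothetical excerpt of what the edited DDL might look like. The real column list and types come from the exported SYM_INCOMING_ERROR definition; the reduced widths below are placeholders chosen simply to keep the declared row size under VoltDB's 2,097,152-byte limit.
-- Hypothetical excerpt of the exported SYM_INCOMING_ERROR DDL.
-- The large data columns are shrunk so the declared row size stays
-- below VoltDB's 2 MB maximum; size them to what your rows actually need.
CREATE TABLE SYM_INCOMING_ERROR (
    BATCH_ID          BIGINT       NOT NULL,
    NODE_ID           VARCHAR(50)  NOT NULL,
    FAILED_ROW_NUMBER BIGINT       NOT NULL,
    TARGET_TABLE_NAME VARCHAR(255),
    ROW_DATA          VARCHAR(100000),  -- reduced from the exported size
    OLD_DATA          VARCHAR(100000),  -- reduced from the exported size
    RESOLVE_DATA      VARCHAR(100000),  -- reduced from the exported size
    CREATE_TIME       TIMESTAMP,
    PRIMARY KEY (BATCH_ID, NODE_ID, FAILED_ROW_NUMBER)
);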
Related
I am running the below command to create an index on a table of Oracle EBS.
create index mtl_index_N1 on MTL_MATERIAL_TRANSACTIONS(last_update_date);
But I am getting an error that does not seem related to the index. What is this issue? Is the error related to database indexes, or is it a SQL Developer issue?
I just tried setting up an Azure SQL Data Sync sync group using the sample Azure SQL database. I am syncing the dbo.BuildVersion table of the sample database.
I get the following error on the member database.
Database provisioning failed with the exception "Incorrect syntax near the keyword 'NOT'.Inner exception: SqlException ID: 39f49622-6a56-4a44-8e55-2a646f99a584, Error Code: -2146232060 - SqlError Number:156, Message: SQL error with code 156 For more information, provide tracing ID ‘679953bc-7dac-4490-89e9-ea6d145d0442’ to customer support."
How should I resolve this issue?
Thanks
I was able to resolve this issue by creating an empty table in the member database first and then running the sync.
Does the Sync service not create the table when it does not exist in the member database?
It does create it; however, under certain circumstances it fails to do so due to some issue with the schema itself. Therefore, the workaround, as you figured out, is to create the table manually.
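As a reference for that workaround, here is a minimal sketch of pre-creating the table in the member database. The column list is assumed from the AdventureWorksLT-style sample schema, so verify it against dbo.BuildVersion in the hub database before running; also keep in mind that Data Sync requires a primary key on every synced table.
-- Sketch only: verify column names/types against dbo.BuildVersion in the hub database.
CREATE TABLE dbo.BuildVersion (
    SystemInformationID tinyint IDENTITY(1,1) NOT NULL PRIMARY KEY,
    [Database Version]  nvarchar(25) NOT NULL,
    VersionDate         datetime NOT NULL,
    ModifiedDate        datetime NOT NULL
);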
We are using latest Spring Data JDBC milestone (1.1.0.M3) together with SQL Server.
Updating referenced aggregates (not the aggregate root itself) fails with:
com.microsoft.sqlserver.jdbc.SQLServerException: Cannot insert explicit value for identity column in table 'mytable' when IDENTITY_INSERT is set to OFF
Updating the aggregate root itself works OK.
Any ideas or suggestions? We are locked to using SQL Server.
Note that the above works with an H2 in-memory DB.
Spring Data JDBC doesn't support MS-SqlServer yet.
The currently blocking problem is that it doesn't allow insertion of IDs in columns that are declared as IDENTITY.
There is an issue for that: https://jira.spring.io/browse/DATAJDBC-278
You probably don't need the id on the referenced entity, though. If you remove it, the problem should go away.
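To make the SQL Server behaviour behind the error concrete, here is a small hypothetical table shaped like the one in the message. SQL Server only accepts an explicit value for an IDENTITY column while IDENTITY_INSERT is switched on for that table, which is the step Spring Data JDBC does not perform at this point.
-- Hypothetical table mirroring the error message.
CREATE TABLE mytable (
    id   BIGINT IDENTITY(1,1) PRIMARY KEY,
    name NVARCHAR(100)
);

-- Effectively what gets issued for the referenced entity; rejected
-- while IDENTITY_INSERT is OFF (the default):
INSERT INTO mytable (id, name) VALUES (42, 'example');

-- Only succeeds when explicitly enabled for the table:
SET IDENTITY_INSERT mytable ON;
INSERT INTO mytable (id, name) VALUES (42, 'example');
SET IDENTITY_INSERT mytable OFF;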
I am trying to convert and migrate an Oracle schema to MSSQL server. At the last step, migrating data, I get the error message:
The table [MYDATABASE].[MYSCHEMA].[MYTABLE] does not exist in target. You must first convert the table then load it into the database.
This error message appears for each table in my schema.
Can someone explain what is happening and what I need to do to get past this?
Did you try to migrate the data before doing the 'Synchronize with Database' operation?
If yes: this error message generally occurs when the target table doesn't exist in the SQL Server database. After converting the schema, you need to synchronize the tables with the database before migrating the data. To do this, right-click the SQL Server database in Metadata Explorer and choose "Synchronize with Database" from the menu.
Note: The table structure will not be created in the SQL Server database until you synchronize.
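If you want to confirm that the synchronize step actually created the objects, a quick check against the target SQL Server database is shown below (the table name is a placeholder for your own).
-- Run against the target SQL Server database after synchronizing.
SELECT TABLE_SCHEMA, TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME = 'MYTABLE';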
I'm very familiar with the process of exporting from Azure SQL V12 down to my dev box and then importing into my local SQL Server 2014 instance. I'm spinning up a new Win10 box and have installed the SQL Server 2016 CTP. I'm connecting to that same Azure instance and can operate against it -- and can export a .bacpac just as with 2014.
But when I try to import locally, I get:
Could not import package.
Warning SQL72012: The object [FOO33_Data] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Warning SQL72012: The object [FOO33_Log] exists in the target, but it will not be dropped even though you selected the 'Generate drop statements for objects that are in the target database but that are not in the source' check box.
Error SQL72014: .Net SqlClient Data Provider: Msg 547, Level 16, State 0, Line 3 The ALTER TABLE statement conflicted with the FOREIGN KEY constraint "FK_dbo.Address_dbo.User_idUser". The conflict occurred in database "FOO33", table "dbo.User", column 'idUser'.
Error SQL72045: Script execution error. The executed script:
PRINT N'Checking constraint: FK_dbo.Address_dbo.User_idUser [dbo].[Address]';
ALTER TABLE [dbo].[Address] WITH CHECK CHECK CONSTRAINT [FK_dbo.Address_dbo.User_idUser];
Since this question was also asked and answered on MSDN, I will share the answer here.
https://social.msdn.microsoft.com/Forums/azure/en-US/0b025206-5ea4-4ecb-b475-c7fabdb6df64/cannot-import-sql-azure-bacpac-to-2016-ctp?forum=ssdsgetstarted
Text from linked answer:
I suspect what's going wrong here is that the export operation was performed using a DB instance that was changing while the export was on-going. This can cause the exported table data to be inconsistent because, unlike SQL Server's physical backup/restore, exports do not guarantee transactional consistency. Instead, they're essentially performed by connecting to each table in the database in turn and running select *. When a foreign key relationship exists between two tables and the read table data is inconsistent, it results in an error during import after the data is written to the database and the import code attempts to re-enable the foreign key. We suggest using the database copy mechanism (create database copyDb as copy of originalDb), which guarantees a copy with transactional consistency, and then exporting from the non-changing database copy.
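As a concrete sketch of that suggestion (database names are placeholders), run the copy from the master database of the Azure SQL server and export the .bacpac from the copy once it has finished seeding.
-- Transactionally consistent copy; export the .bacpac from copyDb instead of the live database.
CREATE DATABASE copyDb AS COPY OF originalDb;

-- Optional: monitor copy progress from the master database.
SELECT * FROM sys.dm_database_copies;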