Why does Hibernate still look at the old table after recreating a column? - sql-server

Backend code = Java, Hibernate, Maven, hosted in AEM.
DB = SQL Server.
Existing table had column of type INT.
A backup of the original table containing that column was made with SELECT * INTO backupTable FROM originalTable.
A backup of the audit table (for the original table) containing that column was made with SELECT * INTO backupTable_AUDIT FROM originalTable_AUDIT.
The column type was changed from INT to VARCHAR(255) by dropping and recreating that column in originalTable.
The column type was changed from INT to VARCHAR(255) by dropping and recreating that column in originalTable_AUDIT.
All places in the code that used that column have been changed to accommodate VARCHAR.
BE Code was rebuilt.
Code was deployed.
When trying to run the app I get the error: "wrong column type encountered in column [mycolumn] in table [originalTable]; found [nvarchar (Types#NVARCHAR)], but expecting [int (Types#INTEGER)]"
After deleting backupTable_AUD, the error no longer occurs and everything works fine.
As far as I know, each table in the DB schema has an id. It seems the backend code/Hibernate was looking at the backup table's id?
Can somebody please explain why deleting the backup tables eliminated the error, and which step was missed during deployment/backup that would have avoided it?
Many thanks
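
A quick way to see which tables still carry a copy of that column, and with which type, is to query the catalog. A minimal diagnostic sketch, assuming SQL Server and the column name mycolumn from the error message (backup tables created with SELECT ... INTO keep the old INT definition):

-- List every table that still has a column named mycolumn, with its current type
SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME = 'mycolumn'
ORDER BY TABLE_SCHEMA, TABLE_NAME;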

Related

How to remove dirty data in YugabyteDB (PostgreSQL)

I tried to add a column to a table with the TablePlus GUI, but there was no response for a long time.
So I turned to the DB server, but got this error:
Maybe some inconsistent data was generated during the operation through TablePlus.
I am new to PostgreSQL and don't know what to do next.
-----updated------
I did some operations as #Dri372 suggested and made some progress.
The reason it failed for tables sys_role and s2 is that the tables are not empty; they have some records.
If I run SQL like this: create table s3 AS SELECT * FROM sys_role; alter table s3 add column project_code varchar(50); it succeeds.
Now how can I still work on the table sys_role?
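
Since the original ALTER TABLE through TablePlus simply hung, it may also be worth checking whether an abandoned session is still holding a lock on sys_role before retrying. A minimal sketch, assuming the standard PostgreSQL pg_stat_activity view is available through YugabyteDB's YSQL layer:

-- Sessions currently waiting on a lock (for example a stuck ALTER TABLE)
SELECT pid, state, wait_event_type, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';

-- If an abandoned session is found to be holding the lock, it can be terminated:
-- SELECT pg_terminate_backend(<pid>);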

SQL Server: Error converting data type varchar to numeric (Strange Behaviour)

I'm working on a legacy system using SQL Server in 2000 compatibility mode. There's a stored procedure that selects from a query into a virtual table.
When I run the query, I get the following error:
Error converting data type varchar to numeric
which initially tells me that something stringy is trying to make its way into a numeric column.
To debug, I created the virtual table as a physical table and started eliminating each column.
The culprit column is called accnum, which stores a bank account number and has a source data type of varchar(21); I'm trying to insert it into a numeric(16,0) column, which obviously could cause issues.
So I made the accnum column varchar(21) as well in the physical table I created and it imports 100%. I also added an additional column called accnum2 and made it numeric(16,0).
After the data is imported, I proceeded to update accnum2 to the value of accnum. Lo and behold, it updates without an error, yet it wouldn't work with an insert into...select query.
I have to work with the data types provided. Any ideas how I can get around this?
Can you try using a conversion in your insert statement, like this:
SELECT [accnum] = CASE ISNUMERIC(accnum)
                      WHEN 0 THEN NULL
                      ELSE CAST(accnum AS NUMERIC(16, 0))
                  END
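
Applied to the original load, the expression would sit inside the INSERT ... SELECT itself; the table names below are placeholders, and accnum2 stands for the NUMERIC(16,0) target column. Note that ISNUMERIC can return 1 for values (such as '1e5' or a lone currency symbol) that still fail a CAST to NUMERIC, so this only filters the obvious offenders:

-- dbo.PhysicalTable / dbo.SourceTable are placeholder names
INSERT INTO dbo.PhysicalTable (accnum2)
SELECT CASE ISNUMERIC(accnum)
           WHEN 0 THEN NULL
           ELSE CAST(accnum AS NUMERIC(16, 0))
       END
FROM dbo.SourceTable;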

Unable to copy data from external table to exact copy of external table

While building a test DB environment in a SQL Azure DB, I dynamically generate a new DB using CREATE TABLE scripts generated from an originating, prototype DB.
Some of these tables require data from the prototype DB so for each of these I create an external table (referencing the table in the prototype DB) and then run an INSERT INTO query which takes data from the external table and inserts it into the exact copy in the test DB.
The important point here is that both the new table and the external table are dynamically generated using a script which is built in the prototype DB; the new table and the external table should therefore be exact copies of that in the prototype DB.
However, for one of these tables, I ran into an exception
Large object column support is limited to only nvarchar(max) data type.
The table in question didn't have any large object types other than NVARCHAR(MAX) (such as TEXT), though it did have 10 NVARCHAR(MAX) columns. So I altered these columns to NVARCHAR(4000) and ran the process again.
I now encounter the following exception:
The data type of the column 'my_column_name' in the external table is different than the column's data type in the underlying standalone or sharded table present on the external source.
I've refreshed and checked the column type in the prototype DB, the external table, and the new table, and all of these show that the data type is NVARCHAR(4000).
So why does it tell me that the data type is different?
Is it a coincidence that the column was previously NVARCHAR (MAX)?
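
For reference, a minimal sketch of the pattern described above, with placeholder names (the database scoped credential and master key are assumed to exist already):

-- External data source pointing at the prototype DB (Azure SQL elastic query)
CREATE EXTERNAL DATA SOURCE PrototypeSrc
WITH (
    TYPE = RDBMS,
    LOCATION = 'myserver.database.windows.net',
    DATABASE_NAME = 'PrototypeDb',
    CREDENTIAL = PrototypeCred
);

-- The external table's column definitions must match the remote table exactly
CREATE EXTERNAL TABLE dbo.MyTable_ext (
    id INT NOT NULL,
    my_column_name NVARCHAR(4000) NULL
)
WITH (DATA_SOURCE = PrototypeSrc, SCHEMA_NAME = 'dbo', OBJECT_NAME = 'MyTable');

-- Copy the data from the external table into the exact copy in the test DB
INSERT INTO dbo.MyTable (id, my_column_name)
SELECT id, my_column_name
FROM dbo.MyTable_ext;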

Laravel migrations on SQL Server use the data type NCHAR; how can I force them to use CHAR instead?

I've created a very simple migration, that creates a table with a FK referencing a column on an existing table. The problem is that the migration creates a NCHAR datatype column, while the referenced column is of CHAR datatype, so the FK can't be created because of different datatypes columns.
Is there any way to enforce Laravel to use CHAR instead of NCHAR?
Thanks!
I've got a workaround for this issue; I got the idea from mikebronner's comment here: https://github.com/laravel/framework/issues/9636.
I've modified my migration to alter the 'cliente' column type, after it's been created, using raw SQL. This way I can override Laravel's default datatype of NCHAR when creating CHAR columns. The altered column can't have any constraints such as FK or PK before being modified. Hope this helps anyone else having this problem in the future.
The following code is inside my migration file, right after the code that creates the table itself, inside the up() function.
Schema::table('UsuariosWeb', function ($table) {
    // Override Laravel's default NCHAR by re-typing the column with raw SQL
    DB::statement("ALTER TABLE UsuariosWeb ALTER COLUMN cliente CHAR(6) NOT NULL;");

    // Keys can only be added once the column is already CHAR
    $table->primary('cliente');
    $table->foreign('cliente')->references('Cliente')->on('Clientes');
});

SQL Azure raise 40197 error (level 20, state 4, code 9002)

I have a table in a SQL Azure DB (S1, 250 GB limit) with 47,000,000 records (3.5 GB total). I tried to add a new calculated column, but after 1 hour of script execution I get: "The service has encountered an error processing your request. Please try again. Error code 9002". After several tries, I get the same result.
Script for simple table:
create table dbo.works (
    work_id int not null identity(1,1) constraint PK_WORKS primary key,
    client_id int null constraint FK_user_works_clients2 REFERENCES dbo.clients(client_id),
    login_id int not null constraint FK_user_works_logins2 REFERENCES dbo.logins(login_id),
    start_time datetime not null,
    end_time datetime not null,
    caption varchar(1000) null
)
Script for alter:
alter table user_works add delta_secs as datediff(second, start_time, end_time) PERSISTED
Error message:
9002 sql server (local) - error growing transactions log file.
But in Azure I cannot manage this parameter.
How can I change the structure of populated tables?
Azure SQL Database has a 2 GB transaction size limit, which you are running into. For schema changes like yours you can create a new table with the new schema and copy the data in batches into this new table.
That said, the limit has been removed in the latest service version, V12. You might want to consider upgrading to avoid having to implement a workaround.
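
A minimal sketch of the batched copy, assuming a new table dbo.works_new with the new schema (names and batch size are illustrative; wrap the loop with SET IDENTITY_INSERT dbo.works_new ON/OFF if work_id remains an IDENTITY column):

-- Copy in batches keyed on work_id so no single transaction grows the log past the limit
DECLARE @batch INT = 100000, @maxId INT = 0, @rows INT = 1;

WHILE @rows > 0
BEGIN
    INSERT INTO dbo.works_new (work_id, client_id, login_id, start_time, end_time, caption)
    SELECT TOP (@batch) work_id, client_id, login_id, start_time, end_time, caption
    FROM dbo.works
    WHERE work_id > @maxId
    ORDER BY work_id;

    SET @rows = @@ROWCOUNT;
    SELECT @maxId = ISNULL(MAX(work_id), 0) FROM dbo.works_new;
END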
Look at sys.database_files by connecting to the user database. If the log file's current size has reached its max size, then you have hit this limit. At that point you either have to kill the active transactions or move to a higher tier (if killing them is not possible because of the amount of data you are modifying in a single transaction).
You can also check this with:
DBCC SQLPERF(LOGSPACE);
A couple of ideas:
1) Try creating an empty column for delta_secs, then filling in the data separately. If this still results in transaction log errors, try updating part of the data at a time with a WHERE clause.
2) Don't add a column. Instead, add a view that exposes delta_secs as a calculated field (see the sketch below). Since this is a derived field, this is probably a better approach anyway.
https://msdn.microsoft.com/en-us/library/ms187956.aspx
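
A minimal sketch of the view approach, reusing the table from the CREATE script above:

-- Expose delta_secs as a derived field instead of storing it in the table
CREATE VIEW dbo.works_with_delta
AS
SELECT work_id, client_id, login_id, start_time, end_time, caption,
       DATEDIFF(second, start_time, end_time) AS delta_secs
FROM dbo.works;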
