I wanted to change a field in a database table to be a calculated field rather than being populated using a trigger.
I am using Entity Framework database-first, so I added this SQL to a migration to carry out the task:
EXECUTE('ALTER TABLE [dbo].[BaseAnsweredQuestions] DROP COLUMN [CalculatedDueDate]')
EXECUTE('ALTER TABLE [dbo].[BaseAnsweredQuestions] ADD
    [CalculatedDueDate] AS etc...')
This gave me a problem because a couple of indexes depend on this field, so I added SQL to drop the indexes before the table changes and recreate them afterwards.
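A sketch of what the full migration SQL looks like; the index name and the computed-column expression below are placeholders rather than my real ones:

EXECUTE('DROP INDEX [IX_BaseAnsweredQuestions_CalculatedDueDate] ON [dbo].[BaseAnsweredQuestions]')
EXECUTE('ALTER TABLE [dbo].[BaseAnsweredQuestions] DROP COLUMN [CalculatedDueDate]')
-- placeholder expression; the real computed-column definition goes here
EXECUTE('ALTER TABLE [dbo].[BaseAnsweredQuestions] ADD
    [CalculatedDueDate] AS DATEADD(DAY, 7, [CreatedDate])')
EXECUTE('CREATE INDEX [IX_BaseAnsweredQuestions_CalculatedDueDate] ON [dbo].[BaseAnsweredQuestions] ([CalculatedDueDate])')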
When I ran the migration on my "staging" database I got an error:
Transaction Log for database is full
And my application would not run at all. I ran a database backup and that appeared to clear the problem; however, I want to run the migration on production. How can I prevent the transaction log from filling up?
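I'm wondering whether something along these lines before the migration would help (the database name and backup path are placeholders, and this assumes the database is in the FULL recovery model):

-- check the current recovery model
SELECT name, recovery_model_desc FROM sys.databases WHERE name = 'MyStagingDb';

-- under FULL recovery, a log backup frees the inactive portion of the log for reuse
BACKUP LOG MyStagingDb TO DISK = 'D:\Backups\MyStagingDb.trn';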
I have a scheduled SSIS package that loads data overnight into the data warehouse. Before loading, it drops the entire database along with all of its tables. But now I have a situation where I don't want to drop one table, because I want to do an incremental load on it using a MERGE SQL statement. Because the package drops the entire database, I can't do that in the current scenario. If I change the drop database to a delete, I think I should be able to do the incremental load on the table I want. Are there any possible complications in doing that? Can you foresee any problems if I change drop database to delete database, or will I be missing something? Any thoughts highly appreciated. Thanks for your time.
As far as I know, with a delete you only delete the rows, whereas with a drop you delete the tables themselves, including the rows. If your logic works, you could do a delete of the database, then drop all tables except the one you want to keep.
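To illustrate the difference with a placeholder table:

DELETE FROM dbo.SomeTable;    -- removes the rows; the table and its schema remain
TRUNCATE TABLE dbo.SomeTable; -- also removes only the rows, but minimally logged
DROP TABLE dbo.SomeTable;     -- removes the table itself, rows and all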
A drop/delete of the database will remove all of the contents of the database. If the requirement is to retain a single table, you'll need to retain the schema and database that holds it as well.
If I'm understanding correctly, you're dropping the target database. Is this a STAGE database for the data warehouse? If so, you'll also have a TARGET (the main tables of the warehouse) that are loaded from STAGE. If this is the case, you should be able to run a MERGE statement from the newly STAGED table to the TARGET table.
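A minimal sketch of such a MERGE, with placeholder table and column names:

MERGE dbo.TargetTable AS t
USING stage.StageTable AS s
    ON t.Id = s.Id
WHEN MATCHED THEN
    UPDATE SET t.Name = s.Name, t.Amount = s.Amount
WHEN NOT MATCHED BY TARGET THEN
    INSERT (Id, Name, Amount) VALUES (s.Id, s.Name, s.Amount);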
After adding a table to replication, it appears in sysmergearticles with status 6, and the status doesn't change after a sync, so replication always drops and re-creates the table at the subscriber. This started happening suddenly, and I have other tables that work fine, but for several days I haven't been able to add a table to the publication without hitting this issue.
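For reference, this is how I check the article status in the publication database ('MyTable' is a placeholder for the article name):

SELECT name, status
FROM dbo.sysmergearticles
WHERE name = 'MyTable';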
I need to move my teams database changes from our development environment to our test environment.
I know that Visual Studio can diff two databases and output a script. But for tables to which we have added columns, it drops the table and re-creates it with the new columns.
It tries to keep the data, but that is not going to work: it will cause FK issues, and when I move this to production I will lose all the statistics on the table.
Is there a way to get it to script the table with an alter script? (So that it alters the table to add the new column?)
I see this happen when columns are added to the middle of a table. If you're doing that, don't.
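When the column is appended at the end instead, the diff tool can script a plain ALTER; a sketch with placeholder names:

-- appends the column at the end of the table; no drop/re-create required
ALTER TABLE dbo.Orders ADD Notes NVARCHAR(200) NULL;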
I am using merge replication on my server, so all tables now have a rowguid column. The models generated before this change work very well, but the new table I imported (using database-first) picked up the rowguid column, which makes it impossible to update. I deleted this column in Model.edmx and got this error:
Error 3023: Problem in mapping fragments starting at line 551: Column Location.rowguid in table Location must be mapped: It has no default value and is not nullable.
You can back up your database and then restore it on another computer without preserving the replication settings, which will remove all replication traces including the added rowguid columns; then you can generate your entities from the restored database.
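Something along these lines should work (the paths and logical file names are placeholders; the key point is omitting KEEP_REPLICATION on the restore):

BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak';

-- on the other computer; without KEEP_REPLICATION the replication settings are not preserved
RESTORE DATABASE MyDb
FROM DISK = 'D:\Backups\MyDb.bak'
WITH MOVE 'MyDb' TO 'D:\Data\MyDb.mdf',
     MOVE 'MyDb_log' TO 'D:\Data\MyDb_log.ldf';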
I have a table in SQL Server 2005 whose primary key is an identity column (increment 1), and I also have a default value set for one of the other columns.
When I open the table in SQL Server Management Studio and type a new record into the table, the inserted values are not displayed, and I get the following message on save:
However, if the table has either an identity column, or one or more columns with a default value specified, the inserted value(s) will be displayed in the table after a save, and can be edited.
I frequently create test data in SSMS this way, and this issue makes it cumbersome to do some of the things I would like to do.
Is there any way around this?
Right-click on it and choose Execute SQL; it should not display the error. It's just SQL Server's way of doing things, since it inserts the identity value later. You shouldn't add records that way in the first place.
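For example, inserting from a query window shows the generated values straight away (the table and column names here are placeholders):

INSERT INTO dbo.MyTable (Name) VALUES ('test row');

-- SCOPE_IDENTITY() returns the identity value generated by the insert above
SELECT * FROM dbo.MyTable WHERE Id = SCOPE_IDENTITY();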
You should not add records to a database that way! It can have unfortunate side effects (especially on large tables) as you have discovered.
Records for lookup tables should be added through rerunnable scripts, and those scripts should be in source control. That makes them easy to promote from dev to QA to staging to prod.
Test records should also be created in scripts (including scripts to remove the test records) so that you can run them in other environments, and so you can delete and recreate them if some process you are testing goes bad. These too should be in source control (as should all database changes, which also should not be done through the GUI). A rerunnable script can be as simple as the sketch below.
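Here the table and values are placeholders:

-- idempotent: inserts the row only if it is not already there, so it can run on every deploy
IF NOT EXISTS (SELECT 1 FROM dbo.StatusLookup WHERE StatusCode = 'OPEN')
    INSERT INTO dbo.StatusLookup (StatusCode, Description)
    VALUES ('OPEN', 'Open item');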