Consider the code below, from a unit test where I add a new Tag object to a pre-populated SQLite database.
@Test // Line 1
public void add() {
    Tag tagToAdd = new Tag("Tall");
    Tag addedTag = this.tagDao.add(tagToAdd);
    assertNotNull(addedTag);
    assertEquals(3L, addedTag.getId()); // Line 6
    assertEquals(tagToAdd.getTag(), addedTag.getTag());
    List<Tag> tags = this.tagDao.get();
    assertEquals(3, tags.size());
}
On line 6, I expect the ID of the Tag to be 3, because the field is an AUTOINCREMENT and the test is initialized with a database already containing 2 Tags. This works fine every time I run the test and the ID is always 3.
Now I am integrating Flyway into the project. Every time I run the test, the AUTOINCREMENT counter continues from where the previous run left off, so the Tag ID grows by 1 on every run and the test fails.
Any idea how I can get Flyway to always reset the database to a brand-new state, including the AUTOINCREMENT value? I could write a query to do it manually, but that is not maintainable.
What I have tried so far:
Integrated @FlywayTest, as this executes Flyway's clean task
Defined a FlywayMigrationStrategy bean, which calls flyway.clean()
Set spring.flyway.clean-on-validation-error to true in my application.properties (that said, there was no change in my SQL, so I am not sure this did anything)
-- Edit
My first migration script contains the following.
DROP TABLE IF EXISTS Tag;
CREATE TABLE Tag(
id INTEGER PRIMARY KEY AUTOINCREMENT,
tag VARCHAR(255) NOT NULL UNIQUE,
createdDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
modifiedDate TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
If I understood everything correctly: you have a database with a table that is created once and reused for every test run; you just delete the rows (without dropping the table) after the tests complete (or before the next run starts), and Flyway inserts two tags into this table every time you run the tests.
If that's right, you can just reset the sequence in SQLite. The seq column of sqlite_sequence stores the largest ROWID the table has handed out so far, so set it to 0 and the next row inserted into the emptied table will get id 1. You can do it by running the following query:
UPDATE `sqlite_sequence` SET `seq` = 0 WHERE `name` = 'tags_table_name';
Alternatively, you can delete the table's row from `sqlite_sequence` entirely; SQLite recreates the row on the next insert, which also restarts the counter at 1.
Yet another possibility is simply to drop your table after the tests and recreate it before the next run; since this database and table exist only for tests, that should work fine. This way the sequence counter starts from 1 each time. I would actually go this way unless you have a really good reason not to drop the table.
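If you want the reset automated rather than run by hand, Flyway's SQL callbacks are one option; a minimal sketch, assuming a file named afterMigrate.sql placed alongside your versioned migrations (db/migration by default):
-- afterMigrate.sql: Flyway runs this after every successful migrate
-- remove the table's counter row; SQLite recreates it on the next insert,
-- so the test fixtures start again from id 1
DELETE FROM sqlite_sequence WHERE name = 'Tag';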
I have to write a trigger for the tables I created: on insert or update, it must write a row to a separate log table recording what was inserted or updated.
The columns in the log table will be:
Done_process (will hold 'Update' or 'Insert')
Person (student number of the person affected)
Before (previous value for an update, blank for an insert)
After (new value for an update, new value for an insert)
This is my student_info table,
CREATE TABLE student_info (
school_id NUMBER,
id_no NUMBER NOT NULL UNIQUE,
name VARCHAR2(50) NOT NULL,
surname VARCHAR2(50) NOT NULL,
city VARCHAR2(50) NOT NULL,
birth_date DATE NOT NULL,
CONSTRAINT student_info_pk PRIMARY KEY(school_id )
);
CREATE TABLE og_log(
done_process VARCHAR2(30),
person VARCHAR2(30),
before VARCHAR2(30),
after VARCHAR2(30)
);
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.name,:new.name);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.name,:new.name);
END IF;
END;
/
When I try to run the code, I get the following error:
> Trigger OG_TRIGGER created.
>
> Error starting at line : 280 in command - ELSIF UPDATING THEN
> Error report - Unknown Command
>
> SP2-0552: Bind variable "NEW" not declared.
>
> 0 rows inserted.
>
> Error starting at line : 283 in command - END IF
> Error report - Unknown Command
>
> SP2-0044: For a list of known commands enter HELP and to leave enter EXIT.
>
> Error starting at line : 284 in command - END
> Error report - Unknown Command
I believe you are creating this trigger for learning purposes rather than for a real use case, because what the trigger does wouldn't make much sense otherwise.
The trigger you mentioned earlier does not compile due to syntax problems such as where v_id := 20201033.
A WHERE clause compares values, so you should use =, not :=, which is the assignment operator.
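For example (a hedged illustration; the value is taken from the snippet quoted above):
-- ':=' is PL/SQL assignment and is invalid in a WHERE clause;
-- a comparison uses '=':
SELECT name FROM student_info WHERE school_id = 20201033;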
Besides this problem, a few points still need attention:
Adopt an explicit naming convention for local variables. For example, you created a local variable v_id while a column of the same name exists in the student_info table. It is not a problem in this case, but it is good practice to make local variables distinct, say l_v_id.
You have used a SELECT statement inside the trigger, which can raise a NO_DATA_FOUND error; handle it either in an exception section (see the sketch after this list) or by using an aggregate function such as MAX(), which works if v_id is the primary key. I doubt you need this SELECT at all (you could combine old and new with something like COALESCE(:old.school_id, :new.school_id), if I understood you), but I leave it to you to decide and act accordingly.
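A sketch of the exception-section option (the l_name variable and the lookup are assumptions here, since the original SELECT was not shown):
DECLARE
  l_name student_info.name%TYPE;
BEGIN
  SELECT name INTO l_name FROM student_info WHERE school_id = 20201033;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    l_name := NULL; -- no matching row: fall back instead of raising
END;
/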
Taking the above points into account, the final code will be:
CREATE OR REPLACE TRIGGER og_trigger
BEFORE INSERT OR UPDATE OR DELETE ON student_info
REFERENCING OLD AS OLD NEW AS NEW
FOR EACH ROW
ENABLE
DECLARE
BEGIN
IF INSERTING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Insert',:new.school_id,:old.city,:new.city);
ELSIF UPDATING THEN
INSERT INTO og_log(done_process, person, before, after)
VALUES ('Update',:new.school_id,:old.city,:new.city);
END IF;
END;
/
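A quick smoke test (values invented) to confirm both branches write to the log:
INSERT INTO student_info VALUES (20201033, 1, 'John', 'Doe', 'Izmir', DATE '2000-01-01');
UPDATE student_info SET city = 'Ankara' WHERE school_id = 20201033;
SELECT * FROM og_log; -- expect one 'Insert' row and one 'Update' row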
A demo is available on db<>fiddle.
EDIT: possibly a tool issue
I now suspect the issue lies in how the tool (SQL Developer) is being used; there is one last thing I would like you to try.
Step 1:
Drop both tables by issuing drop commands:
drop table STUDENT_INFO;
drop table og_log;
Step 2:
Open another SQL worksheet using Alt+F10 and proceed as I have shown in the following image. Please try it and let me know.
I have a problem resolving cached updates when the delta includes fields that have a UNIQUE constraint in the database. I have a database with the following DDL schema (SQLite in-memory can be used to reproduce):
create table FOO
(
ID integer primary key,
DESC char(2) UNIQUE
);
The initial database table contains one record with ID = 1 and DESC = R1
Accessing this table with a TFDQuery (select * from FOO), if the following steps are performed, the generated delta will be applied correctly by ApplyUpdates:
Update record ID = 1 to DESC = R2
Append a new record ID = 2 with DESC = R1
Delta includes the following:
R2
R1
No error will be generated on ApplyUpdates: the first operation in the delta is the update and the second is the insert, and because record 1 now holds R2, the insert succeeds without violating the unique constraint in this transaction.
Now, performing the following steps generates exactly the same delta (look at the FDQuery.Delta property), but a UNIQUE constraint violation is raised.
Append a new temporary record ID = 2 with DESC = TT
Update the first record ID = 1 to DESC = R2
Update the temporary record 2 - TT to DESC = R1
Delta includes the following:
R2
R1
Note that FireDAC generates the same delta in both scenarios; this can be verified through the FDQuery's Delta property.
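In plain SQL terms, the order in which the delta is replayed appears to be what decides the outcome; a sketch ("DESC" quoted because it is a reserved word):
-- replay order that succeeds: the UPDATE frees 'R1' before it is reused
UPDATE FOO SET "DESC" = 'R2' WHERE ID = 1;
INSERT INTO FOO (ID, "DESC") VALUES (2, 'R1');
-- replay order that fails: 'R1' is still taken when the INSERT runs
INSERT INTO FOO (ID, "DESC") VALUES (2, 'R1'); -- UNIQUE constraint violation
UPDATE FOO SET "DESC" = 'R2' WHERE ID = 1;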
These steps can be used to reproduce the error:
File > New VCL Forms Application; drop a FDConnection and a FDQuery on the form; set the FDConnection to use the SQLite driver (in-memory database); drop two buttons on the form, one to reproduce the correct behavior and another to reproduce the error, as follows:
Button OK:
procedure TFrmMain.btnOkClick(Sender: TObject);
begin
// create the default database with a FOO table
con.Open();
con.ExecSQL('create table FOO' + '(ID integer primary key, DESC char(2) UNIQUE)');
// insert a default record
con.ExecSQL('insert into FOO values (1,''R1'')');
qry.CachedUpdates := true;
qry.Open('select * from FOO');
// update the first record to R2
qry.First();
qry.Edit();
qry.Fields[1].AsString := 'R2';
qry.Post();
// append the second record with R1
qry.Append();
qry.Fields[0].AsInteger := 2;
qry.Fields[1].AsString := 'R1';
qry.Post();
// apply will not generate a unique constraint violation
qry.ApplyUpdates();
end;
Button Error:
// create the default database with a FOO table
con.Open();
con.ExecSQL('create table FOO' + '(ID integer primary key, DESC char(2) UNIQUE)');
// insert a default record
con.ExecSQL('insert into FOO values (1,''R1'')');
qry.CachedUpdates := true;
qry.Open('select * from FOO');
// append a temporary record (TT)
qry.Append();
qry.Fields[0].AsInteger := 2;
qry.Fields[1].AsString := 'TT';
qry.Post();
// update R1 to R2
qry.First();
qry.Edit();
qry.Fields[1].AsString := 'R2';
qry.Post();
qry.Next();
// update TT to R1
qry.Edit();
qry.Fields[1].AsString := 'R1';
qry.Post();
// apply will generate a unique constraint violation
qry.ApplyUpdates();
Update: Since writing the original version of this answer, I have done some more investigation and am beginning to think that either there is a problem with ApplyUpdates, etc., in FireDAC's support for SQLite (in Seattle, at least), or we are not using the FD components correctly. It would need FireDAC's author (who is a contributor here) to say which it is.
Leaving aside the ApplyUpdates business for a moment, there are a number of other problems with your code: your dataset navigation makes assumptions about the ordering of the rows in qry and the numbering of its Fields.
The test case I have used is to start (before execution of the application) with the Foo table containing the single row
(1, 'R1')
Then I execute the following Delphi code while monitoring the contents of Foo with an external application (the SQLite Manager plug-in for Firefox). The code executes without an error being reported in the application, but notice that it does not call ApplyUpdates.
Con.Open();
Con.StartTransaction;
qry.Open('select * from FOO');
qry.InsertRecord([2, 'TT']);
assert(qry.Locate('ID', 1, []));
qry.Edit;
qry.FieldByName('DESC').AsString := 'R2';
qry.Post;
assert(qry.Locate('ID', 2, []));
qry.Edit;
qry.FieldByName('DESC').AsString := 'R1';
qry.Post;
Con.Commit;
qry.Close;
Con.Close;
The added row (ID = 2) is not visible to the external application until after Con.Close has executed, which I find puzzling. Once Con.Close has been called, the external application shows Foo as containing
(1, 'R2')
(2, 'R1')
However, I have been unable to avoid the constraint violation error if I call ApplyUpdates, regardless of any other changes I make to the code, including adding a call to ApplyUpdates after the first Post.
So, it seems to me that either the operation of ApplyUpdates is flawed or it is not being used correctly.
I mentioned FireDAC's author. His name is Dmitry Arefiev and he has answered a lot of FD qs on SO, though I haven't noticed him here in the past couple of months or so. You might try catching his attention by posting in EMBA's FireDAC NG forum, https://forums.embarcadero.com/forum.jspa?forumID=502.
I'm developing a website which currently has both a production and a test database.
The production database is hosted externally while the test database is hosted locally.
Whenever I make changes to my database I apply the changes through a migration.
After having added a new migration I run the update-database command on both my production and test database to keep them in sync.
I applied the migration just fine to my production database; however, when I want to apply the migration to my test database, I see that it attempts to apply ALL the previous migrations (and not just the new one):
Here is the output:
Applying explicit migrations: [201603230047093_Initial,
201603232305269_AddedBlobNameToImage,
201603242121190_RemovedSourceFromRealestateDbTable,
201603311617077_AddedSourceUrlId,
201604012033331_AddedIndexProfileAndFacebookNotifications,
201604012233271_RemovedTenantIndexProfile,
201604042359214_AddRealestateFilter].
Applying explicit migration: 201603230047093_Initial.
System.Data.SqlClient.SqlException (0x80131904): There is already an object named 'Cities' in the database.
Obviously it fails, since the current state of the database is at the second-latest migration. But why does it attempt to apply ALL the previous migrations?
Unlike the production database (which has had all the migrations applied one at a time), the test database was deleted and created at the previous migration so its migration history table only contains one row:
201604012239054_InitialCreate
(I assume InitialCreate is an auto-generated name for all the previous migrations combined.)
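You can check what the test database believes has been applied by querying the history table directly (a read-only sanity check):
SELECT [MigrationId], [ContextKey]
FROM [dbo].[__MigrationHistory]
ORDER BY [MigrationId];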
In summary:
Why is the test database trying to apply ALL the previous migrations instead of just the newly added?
EDIT:
When running COMMAND I get the following output script:
DECLARE @CurrentMigration [nvarchar](max)
IF object_id('[dbo].[__MigrationHistory]') IS NOT NULL
    SELECT @CurrentMigration =
        (SELECT TOP (1)
            [Project1].[MigrationId] AS [MigrationId]
         FROM ( SELECT
                [Extent1].[MigrationId] AS [MigrationId]
                FROM [dbo].[__MigrationHistory] AS [Extent1]
                WHERE [Extent1].[ContextKey] = N'Boligside.Migrations.Configuration'
              ) AS [Project1]
         ORDER BY [Project1].[MigrationId] DESC)
IF @CurrentMigration IS NULL
    SET @CurrentMigration = '0'
IF @CurrentMigration < '201603230047093_Initial'
(it proceeds to generate IF statements like this for each previous migration)
The current migration history table in my database looks like the following (note that the first row belongs to a logging framework, so it is not related):
One issue that can cause migrations to rerun is if your context key changes, which can happen during refactoring. There are a couple of ways to solve this:
1) Update the old records in __MigrationHistory with the new values:
UPDATE [dbo].[__MigrationHistory]
SET [ContextKey] = 'New_Namespace.Migrations.Configuration'
WHERE [ContextKey] = 'Old_Namespace.Migrations.Configuration'
2) You can hard code the old context key into the constructor of your migration Configuration class:
public Configuration()
{
    AutomaticMigrationsEnabled = false;
    this.ContextKey = "Old_Namespace.Migrations.Configuration";
}
Here is a good article on how migrations run under the hood: https://msdn.microsoft.com/en-US/data/dn481501?f=255&MSPPError=-2147217396
See also http://jameschambers.com/2014/02/changing-the-namespace-with-entity-framework-6-0-code-first-databases/
First of all, I have never attempted something like this in SSIS, and I am very new to SSIS package development.
I need to build a component in my package that will run through a table of data (say 80 rows) and set a field titled DisplayOrder to an auto-incremented number. The catch is that one of the records HAS to be set to 0, with the rest of the records getting the auto-incremented numbers.
As for code, I am not even sure what code to attach to this question, or even which screenshots.
I finally figured it out and there is no need for a loop.
Create a SQL Task to clear the linked Table.
Script Used
DELETE FROM [Currency].[ExchangeRates]
Create a SQL Task to clear the main table.
Script Used
DELETE FROM [Currency].[CurrencyList]
Load the values into the main table.
Actions Used
Load values from XML Source
Dump values to [ExchangeRates] Table
Create a SQL Task to load the Values from the main table to the linked table.
Script Used
INSERT INTO [Currency].[CurrencyList] (CurrencyCode, CurrencyName, ExchangeRateID, DisplayOrder)
SELECT [er].[TargetCurrency] AS [CurrencyCode],
       [er].[TargetName] AS [CurrencyName],
       [er].[ID] AS [ExchangeRateID],
       ROW_NUMBER() OVER (ORDER BY [er].[TargetName]) AS [DisplayOrder]
FROM [Currency].[ExchangeRates] AS [er]
ORDER BY [CurrencyName]
Create a SQL Task to load a new record to the main table for use as DisplayOrder 0.
Script Used
INSERT INTO [Currency].[ExchangeRates]
    ([Title], [Link], [Description], [PubDate], [BaseCurrency], [TargetCurrency], [TargetName], [ExchangeRate])
VALUES ('1 USD = 1 USD',
        'http://www.floatrates.com/usd/usd/',
        '1 U.S. Dollar = 1 U.S. Dollar',
        (SELECT TOP 1 [PubDate] FROM [Currency].[ExchangeRates]),
        'USD', 'USD', 'United States Dollar', '1')
Create a SQL Task to reference the newly created record from the main table.
Script Used
INSERT INTO [Currency].[CurrencyList] (CurrencyCode, CurrencyName, ExchangeRateID, DisplayOrder)
SELECT [er].[TargetCurrency] AS [CurrencyCode],
       [er].[TargetName] AS [CurrencyName],
       [er].[ID] AS [ExchangeRateID],
       0 AS [DisplayOrder]
FROM [Currency].[ExchangeRates] AS [er]
WHERE [er].[TargetCurrency] = 'USD'
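As a final sanity check (not part of the package; it assumes the tables above), the resulting order can be verified with:
SELECT CurrencyCode, CurrencyName, DisplayOrder
FROM [Currency].[CurrencyList]
ORDER BY DisplayOrder;
-- expect the USD row at DisplayOrder 0 and the rest numbered from 1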
I am trying to use Dapper to support data access in my server app.
My server app has another application that drops records into my database at a rate of 400 per minute.
My app pulls them out in batches, processes them, and then deletes them from the database.
Since data continues to flow into the database while I am processing, I don't have a good way to say delete from myTable where allProcessed = true.
However, I do know the PK values of the rows to delete, so I want to do a delete from myTable where Id in @listToDelete.
The problem is that if my server goes down for even 6 minutes, I have over 2,100 rows to delete.
Since Dapper takes my @listToDelete and turns each value into its own parameter, my call to delete fails once it exceeds SQL Server's limit of 2,100 parameters per request. (Causing my data purging to get even further behind.)
What is the best way to deal with this in Dapper?
NOTES:
I have looked at Table-Valued Parameters, but from what I can see they are not very performant. This piece of my architecture is the bottleneck of my system, and I need it to be very, very fast.
One option is to create a temp table on the server and then use the bulk load facility to upload all the IDs into that table at once. Then use a join, EXISTS or IN clause to delete only the records that you uploaded into your temp table.
Bulk loads are a well-optimized path in SQL Server and it should be very fast.
For example:
Execute the statement CREATE TABLE #RowsToDelete(ID INT PRIMARY KEY)
Use a bulk load to insert keys into #RowsToDelete
Execute DELETE FROM myTable where Id IN (SELECT ID FROM #RowsToDelete)
Execute DROP TABLE #RowsToDelete (the table will also be dropped automatically when you close the session)
(Assuming Dapper) code example:
conn.Open();
var columnName = "ID";
conn.Execute(string.Format("CREATE TABLE #{0}s({0} INT PRIMARY KEY)", columnName));
using (var bulkCopy = new SqlBulkCopy(conn))
{
bulkCopy.BatchSize = ids.Count;
bulkCopy.DestinationTableName = string.Format("#{0}s", columnName);
var table = new DataTable();
table.Columns.Add(columnName, typeof (int));
bulkCopy.ColumnMappings.Add(columnName, columnName);
foreach (var id in ids)
{
table.Rows.Add(id);
}
bulkCopy.WriteToServer(table);
}
//or do other things with your table instead of deleting here
conn.Execute(string.Format(@"DELETE FROM myTable where Id IN
    (SELECT {0} FROM #{0}s)", columnName));
conn.Execute(string.Format("DROP TABLE #{0}s", columnName));
To get this code working, I went over to the dark side.
Since Dapper turns my list into individual parameters, and SQL Server can't handle that many parameters (I had never needed even double-digit parameter counts before), I had to go with dynamic SQL.
So here was my solution:
string listOfIdsJoined = "("+String.Join(",", listOfIds.ToArray())+")";
connection.Execute("delete from myTable where Id in " + listOfIdsJoined);
Before everyone grabs their torches and pitchforks, let me explain.
This code runs on a server whose only input is a data feed from a Mainframe system.
The list I am dynamically creating is a list of longs/bigints.
The longs/bigints are from an Identity column.
I know constructing dynamic SQL is bad juju, but in this case, I just can't see how it leads to a security risk.
Dapper expects a list of objects with the parameter as a property, so in the above case a list of objects having Id as a property will work:
connection.Execute("delete from myTable where Id in (@Id)", listOfIds.AsEnumerable().Select(i=> new { Id = i }).ToList());
This will work, but note that passing a list to Execute runs the statement once per element, so this issues one DELETE per Id.