I have a number of integration tests which access the DB directly: they create test prerequisite objects, perform the tests, and then clean up afterwards. However, I wanted to try out the same approach in-memory.
I have just started using Effort in my project and it works very easily. However, I've hit a problem that I have been trying, but have been unable, to solve.
One of the tables that I need filled up with dummy data - as a test prerequisite - contains a computed column (nvarchar, not null). For the scope of the test I really don't care about that column's value - but even if I try to insert dummy data, my data is ignored and then I get hit with an error:
"Column 'x' cannot be null. Error code: GenericError"
In my tests I am using the same edmx file as is used by the actual code. This prevents me from constantly updating the edmx copy.
Is there a way in which I can force the test to update the edmx (at runtime) so that column is a nullable non-computed column? [overriding OnModelCreating] or is there way to insert a default value (anything goes for this column) to stop this error? [overriding SaveChanges]
I have currently tried the following:
Attaching the objects using .Attach() instead of .Add()
Setting the EntityState to Unchanged after adding
Forcing the value through Entry.OriginalValues [this fails since the entity is in the Added state]
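One of the workarounds raised in the question, overriding SaveChanges, could be sketched as below. This is a test-only sketch, not a confirmed fix: Entity_Name and x are placeholder names taken from the error above, and it assumes an EF 6-style DbContext.

```csharp
// Test-only DbContext override: give the "computed" column a dummy
// value before Effort validates the insert. Entity_Name and x are
// placeholder names for the affected entity and column.
public override int SaveChanges()
{
    foreach (var entry in ChangeTracker.Entries<Entity_Name>())
    {
        if (entry.State == EntityState.Added && entry.Entity.x == null)
        {
            entry.Entity.x = "dummy"; // anything goes for this column
        }
    }
    return base.SaveChanges();
}
```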
Edit:
I have tried overriding the OnModelCreating method, but to no avail since this is DB-First.
modelBuilder.Entity<Entity_Name>().Property(p => p.x).IsOptional().HasDatabaseGeneratedOption(System.ComponentModel.DataAnnotations.Schema.DatabaseGeneratedOption.None);
Open your EDMX file in XML editor, find your entity under the StorageModels,
and add to the column definition StoreGeneratedPattern="Computed".
But if you update, or delete and re-add, that table, you will lose this modification. You can, however, write a console app that updates your edmx file, adding StoreGeneratedPattern="Computed" where needed, and add that app to the pre-build events in Visual Studio.
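That pre-build rewrite only needs a few lines of script. Here is a minimal Python sketch, assuming an EDMX v3 file (the SSDL namespace URI below is the one used by EF 5/6; the function name and entity/column names are hypothetical):

```python
import xml.etree.ElementTree as ET

# SSDL namespace for EDMX v3 (EF 5/6); adjust for other schema versions.
SSDL_NS = "http://schemas.microsoft.com/ado/2009/11/edm/ssdl"

def mark_computed(edmx_xml: str, entity: str, column: str) -> str:
    """Add StoreGeneratedPattern="Computed" to one column in the
    StorageModels (SSDL) section of an .edmx document."""
    root = ET.fromstring(edmx_xml)
    # EntityType/Property elements live in the SSDL namespace.
    for entity_type in root.iter(f"{{{SSDL_NS}}}EntityType"):
        if entity_type.get("Name") != entity:
            continue
        for prop in entity_type.iter(f"{{{SSDL_NS}}}Property"):
            if prop.get("Name") == column:
                prop.set("StoreGeneratedPattern", "Computed")
    return ET.tostring(root, encoding="unicode")
```

In a real pre-build step you would read the .edmx from disk, run this over it, and write it back before the build picks it up.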
The cause of the problem was a bug in the Effort database. When a computed column is based on non-nullable columns, the computed column would also automatically become non-nullable, so the Effort database was expecting a non-null value. With the latest update the problem is resolved: you have to set the global EntityFrameworkEffortManager.UseDefaultForNotNullable flag to true.
See the issue on GitHub.
In my Visual Studio Database Solution, I have some objects, which I had to set Build action = None, due to several reasons (Problems in build caused by OPENQUERY,...).
Unfortunately the Schema Compare doesn't compare those elements. Whenever I do a compare with "source = development DB" and "target = solution", they are marked as new, and Schema Compare suggests adding those objects.
If I add those objects, the update doesn't recognize that they're already in the solution and adds the elements under a new name, [objectname]_1, with Build action = Build, which will of course cause problems during the next build.
Does anybody know an easy way around this problem? Other than working with pre-build and post-build command lines to disable the objects before building and re-enable them afterwards.
Thanks in advance
Edit: adding a minimal reproducible example, as requested by SebTHU in the comments.
Create a new, empty sandbox database.
In the database, run this script:
CREATE TABLE Table1 (PersonID INT NOT NULL, FullNam nvarchar(255) NOT NULL)
GO
CREATE TABLE Table1_New (PersonID INT NOT NULL, FullName nvarchar(255) NOT NULL)
GO
CREATE VIEW vwOriginalView AS SELECT PersonID, FullNam FROM Table1
GO
EXEC sp_rename 'Table1', 'ZZZTable1', 'OBJECT'
GO
EXEC sp_rename 'Table1_New', 'Table1', 'OBJECT'
GO
CREATE VIEW vwNewView AS SELECT PersonID, FullName FROM Table1
GO
This simulates an effective ALTER TABLE on Table1, but with the original table being retained as a renamed deprecated object. vwOriginalView now has an invalid reference, but we want to retain it (for the moment) as well; it would be renamed, but that's not necessary to demonstrate this problem.
In VS, create a new Database Project.
Run Schema Compare against the sandbox database. Press Update to add scripts for the 4 objects into the project. Keep the comparison window open.
There are now build errors (vwOriginalView has an invalid reference to column FullNam). To ignore this object, set its Build action to None. The errors disappear.
Press Compare on the comparison window again. vwOriginalView now appears as a "new" object in the DB, to be added to the project.
This is the problem. It would be nice to be reminded that the object does exist in the project, just with its Build action set to None; but with many (20-30) objects of this kind, Schema Compare becomes confusing.
What I need is either a way for Compare to treat "BuildAction=None" objects as existing objects in the project - ideally switchable as an option, so that these objects can be made clearly visible in Compare if needed; or a way to make deprecated objects (specifically, my choice of objects) not cause build errors - an alternative to "BuildAction=None".
I've tried SQL error suppression in VS, but for one thing it doesn't work, and for another, suppressing this kind of error globally would be a bad idea.
I have an MS-Access front end to an MSSQL Server back end. The existing, functional updates that the tool makes are applied through an MSSQL view, which is added to MS-Access as a linked table. There is a primary key defined for this linked 'table' (view).
The user sees a subset of records that match previously selected criteria, and uses comboboxes (unbound) to select the value of several fields that are then applied to all matching records using DoCmd.RunSQL with Me.Filter on the "After Update" Event.
Users have requested an additional piece of functionality.
I have:
Added the new column required to the underlying table referenced in the view
Added the column to be output in the view
Refreshed the linked table in MS-Access
Added the new field to the form that will be updating it, and modified the DoCmd.RunSQL statement to enact the UPDATE
When updating the new field via the form, I get the standard message "You are about to change x rows" where x is the appropriate number. Pressing OK gives no errors, but the table is not updated.
To debug, I attempted to change the record in the linked table (view) directly. Again no errors were thrown, and the row seems to be updated, but this is not reflected in SSMS, and on reloading the table in MS-Access the change is no longer present. I can change the values of columns other than the new one.
I also tested adding the underlying table as a linked table and I can edit the rows in MS-Access in this table.
(Update)
At @ErikvonAsmuth's suggestion below, I tried using Recordsets on the bound form instead of DoCmd.RunSQL. Again I could access the record, and rst.Update gave no error, but the change is not reflected in the database for the new field. As above, I can change a previously existing field using this method.
Seems my problem is independent of the update method.
(/Update)
I would appreciate any ideas for next steps to check.
I found the issue.
There was a trigger defined on the view (an INSTEAD OF trigger, since it is on a view) to handle saving to the table. I altered it in SSMS to add the new column and everything is working now.
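For reference, a view trigger of the kind described might look like the sketch below. All table, view, and column names here are hypothetical; the fix amounts to adding the new column to the UPDATE's SET list.

```sql
-- Hypothetical INSTEAD OF UPDATE trigger on the view. The fix for the
-- problem above is adding the new column (NewCol) to the SET list.
CREATE OR ALTER TRIGGER trg_vwPeople_Update ON dbo.vwPeople
INSTEAD OF UPDATE
AS
BEGIN
    UPDATE t
    SET t.ExistingCol = i.ExistingCol,
        t.NewCol      = i.NewCol   -- newly added column
    FROM dbo.People AS t
    JOIN inserted AS i ON i.PersonID = t.PersonID;
END;
```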
I found a hint when I tried to edit the field in SSMS and it wouldn't work there either: I was getting a "Row failed to retrieve on last operation" error.
That led me to a thread referencing triggers on Experts Exchange.
I will be going back and changing the DoCmd.RunSQL statements to use recordsets.
Note: this question is referencing an old EF Core issue. See this answer for a discussion relevant to EF Core 5.
I'm seeing conflicting information about this. LearnEntityFrameworkCore.com says you can write
.HasComputedColumnSql("GetUtcDate()");
The value of the column is generated by the database's GetUtcDate() method whenever the row is created or updated
However, in the technet documentation for computed columns on SQL Server it says:
Their values are recalculated every time they are referenced in a query.
Both pieces of documentation you referenced are correct; each is talking about a different overload. There are several overloads of this method. You can check them over here (search for "HasComputedColumnSql" in the browser).
If there are triggers in the database that define values for this column, you will use the parameterless HasComputedColumnSql() overload.
If you want a computed value on the property but don't want it persisted in the table, you will use the HasComputedColumnSql("SomeFunction()") overload.
If you want the value stored in the column, you will use the HasComputedColumnSql("SomeFunction()", stored: true) overload.
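For example, the stored variant could look like this (the entity and column names are illustrative, and the computed expression uses SQL Server syntax):

```csharp
// A persisted computed column concatenating two other columns
// (illustrative names; SQL Server syntax for the expression).
modelBuilder.Entity<Person>()
    .Property(p => p.FullName)
    .HasComputedColumnSql("[FirstName] + ' ' + [LastName]", stored: true);
```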
Good question. It's unclear from the EF Core documentation what type of computed column HasComputedColumnSql represents, probably because it's part of the relational extensions and is thus considered provider specific. But since it has no configuration options, I agree that the behavior should be specified.
A quick test with SQL Server shows that the created column is not physically stored (no PERSISTED option is used), and in fact it can't be, since GetUtcDate is not deterministic. Hence the LastModified example from the first link is incorrect: the column will be recalculated every time it is read, so it cannot serve the desired intent and should instead be created as a normal column and updated by code.
It looks like the intended usage in EF Core is to allow simple calculations based on other columns (like FullName), or to use a database-specific conversion function for data not stored in its natural type (for instance, numeric or date/time data stored as text), etc.
In my application I work with SQLite. On one of the tables inside the database I've implemented a trigger: after an insert on the table TAB, it updates a column named codecolumn, which depends on the ID primary key field.
In my code I create an object from a previously defined Peewee Model:
objfromModel = Model(params....)
After the execution of line:
objfromModel.save()
we hoped to get back not only the generated id field (objfromModel.id is in fact retrieved from the DB) but also the new codecolumn value generated by the trigger on insert. But objfromModel.codecolumn is None.
Question: is there a trick in Peewee to recover this new field generated in the database by the trigger?
Unfortunately SQLite does not support the concept of INSERT ... RETURNING. What you could do is a couple of things:
A. After creation, simply re-fetch the codecolumn, e.g. self.codecolumn = MyModel.select(MyModel.codecolumn).where(MyModel.id == self.id).scalar(convert=True). The use of scalar says "return just one value"; convert=True says "convert the underlying database type to a Python type", which is really only necessary if the database type is a date or datetime.
B. Create a post-insert trigger that calls a user-defined function. Register a handler for the user-defined function on your database instance, and have your callback receive the new codecolumn value and set it as a database attribute in the callback. Hopefully this makes sense?
C. Move the codecolumn trigger out of SQL and into Python, making it easier to know ahead-of-time what its value will be. This depends obviously on what that column contains.
Hope these ideas help.
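Option A, re-fetching after the insert, can be illustrated with the standard sqlite3 module alone, independent of Peewee. The table, trigger, and column names below are made up to mirror the question:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tab (id INTEGER PRIMARY KEY, name TEXT, codecolumn TEXT);
CREATE TRIGGER tab_code AFTER INSERT ON tab
BEGIN
    UPDATE tab SET codecolumn = 'C' || NEW.id WHERE id = NEW.id;
END;
""")

cur = conn.execute("INSERT INTO tab (name) VALUES (?)", ("demo",))
new_id = cur.lastrowid
# The INSERT alone does not hand back the trigger-generated value;
# re-fetch the row to pick up codecolumn, as in option A.
codecolumn = conn.execute(
    "SELECT codecolumn FROM tab WHERE id = ?", (new_id,)
).fetchone()[0]
print(codecolumn)  # → C1
```

In Peewee this re-fetch is exactly what the select(...).scalar(...) call in option A performs after save().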
I haven't had this problem until I first tried to manually add data to a database since my upgrade to WebMatrix 3, so I don't know if this is a bug or some kind of fault prevention.
I have defined a very simple table with the primary key as int and set it to not allow nulls, and be of the type IsIdentity so that the int value will automatically increment, as needed.
A pic of that is shown here:
Okay, seems simple enough, but when I try to manually add data to the table, it, as it should, does NOT allow me to modify the primary key value in any way (because it is automatic).
All I do is put a couple of string values into the type and location columns, and it tells me that it couldn't commit changes to the database because of an invalid value in the primary key field. It acts as though it is going to try to insert NULL as the value, but this should be overridden when it automatically adds the row; the user interface does not allow me to control or edit this value in any way.
A pic of this is shown here:
What is this? Some kind of bug? Is it a new rule that WebMatrix does not allow a developer to add values to the database manually? Do I have to write a query every time I want to add something to the database? Am I in the Twilight Zone? (Okay, sorry about the last one...)
Also, I've noticed that if I don't have IsIdentity set, I can edit the field, put a PERFECTLY VALID integer therein, and it still errors the same way. So I use ESC to back out my changes, then hit refresh, only to find that it did, indeed, add the row anyway :/ . So this interface seems buggy to begin with. In my scenario above (using IsIdentity), it does NOT add the row, unfortunately.
--------------------UPDATE--------------------------
I just recently downloaded a WebMatrix update, and it appears that they have fixed this! Yay! (till now I was just querying generic INSERT INTO statements and editing them manually from there).
I think the SQL CE tooling in WM3 is broken; I suggest you look at other tools for editing data. I can recommend the standalone SQL Server Compact Toolbox (disclosure: I am the author).