Does anyone know a way to set, through the mapping, the default value of a column, so that e.g. when I generate the DB from mappings, a DateTime column would have getdate() as its default value?
So far I have tried this (it looks exactly like what I need), but it doesn't work:
this.Map(x => x.LastPersistedOn, "DateModified")
.Access.Property()
.Default("getdate()");
I just tried setting some default values and it worked as expected. I am using a Fluent NHibernate build retrieved from Git on 24.05.2010, so updating your copy may solve your problem.
Mapping
public class SampleEntity
{
public virtual DateTime DateTimeProperty { get; set; }
}
With
public class SampleEntityMap
: ClassMap<SampleEntity>
{
public SampleEntityMap()
{
Map(x => x.DateTimeProperty, "DateTimeColumn")
.Access.Property() //actually not necessary
.Not.Nullable()
.Default("getDate()");
}
}
This will produce the following SQL (from the output to the console):
create table SampleEntity(
DateTimeColumn DATETIME default getDate() not null
)
--
Dom
The way to do this is to assign the current DateTime in code rather than using a default value in the database, and then treat it as a normal column. It seemed a bit strange to me at first, coming from a model-driven design background, but managing default values at the POCO level is the DDD way to do it.
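For example, a minimal sketch of a POCO-level default (reusing the SampleEntity shape from above; the constructor assignment is illustrative, not prescribed by NHibernate):

public class SampleEntity
{
    public SampleEntity()
    {
        // assign the default in code instead of relying on a DB default constraint
        DateTimeProperty = DateTime.Now;
    }

    public virtual DateTime DateTimeProperty { get; set; }
}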
Would be good to hear others' opinions too
Related
Using the Code First technique with EF6.
When I declare my DateTime properties like this:
Public Property StartDate As DateTime
I get this error:
System.Data.SqlClient.SqlException: The conversion of a datetime2 data type to a datetime data type resulted in an out-of-range value. The statement has been terminated.
When I change my declaration to this:
Public Property StartDate As Nullable(Of DateTime)
Everything is fine.
What I don't understand is why. I have seen in other SO posts that we need to use DateTime2 with SQL, but I don't know what that means. It's not a type in code. Is it something else I should be setting up?
Am I taking a shortcut using the nullable declaration or is that automatically changing to datetime2 during the EF processes?
This happens because a non-nullable .NET DateTime defaults to DateTime.MinValue (0001-01-01), which is outside the range SQL Server's datetime type accepts (1753-01-01 through 9999-12-31); a nullable property simply sends NULL instead. SQL Server's datetime2 covers the full .NET range, so one fix is to create a custom EF convention that instructs Entity Framework to map every DateTime property to datetime2.
Create the following convention class:
public class DateTime2Convention : Convention
{
public DateTime2Convention()
{
this.Properties<DateTime>()
.Configure(c => c.HasColumnType("datetime2"));
}
}
Then add the custom convention to the conventions collection by overriding the OnModelCreating method in your class that inherits from DbContext:
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
// existing code, if any
modelBuilder.Conventions.Add(new DateTime2Convention());
}
You might want to take a look at Entity Framework Custom Code First Conventions (EF6 onwards) for more info.
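Alternatively, if only a single property needs it, you can configure just that column in OnModelCreating rather than adding a model-wide convention. A minimal sketch (the Event entity name is hypothetical; StartDate is the property from the question):

public class Event
{
    public int Id { get; set; }
    public DateTime StartDate { get; set; }
}

protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    // Map only this one property to datetime2 instead of every DateTime in the model
    modelBuilder.Entity<Event>()
        .Property(e => e.StartDate)
        .HasColumnType("datetime2");
}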
I am trying spring-data-rest with spring-data-mongo, and a lot of things are working beautifully out of the box, including support for the eTag field.
The @EnableMongoAuditing annotation works very well too: when a document is created, the @CreatedDate and @LastModifiedDate fields are set.
The problem is that the @CreatedDate field is being set to null during updates. I found an unresolved issue, "Mongo Auditing: @CreatedDate field gets set to null on updates with Spring Data Rest", with a suggested workaround of using the @JsonIgnore annotation, which does not work for me.
There was also a similar question here which does not appear to be the same issue.
I am using version 1.10.1.RELEASE of spring-data-mongo and 2.6.1.RELEASE of spring-data-rest.
Is there a solution to this issue?
One solution is to tell Jackson to output the field to JSON when serializing the object, but never read the value when deserializing the object, using the access element of JsonProperty:
@Document
public class MyDocument {

    @JsonProperty(access = JsonProperty.Access.READ_ONLY)
    @CreatedDate
    private Instant createdDate;
}
Spring Data REST will still output the createdDate field to JSON, but it will never read from it, including when performing an update.
Note that this will affect the serialization of your document class throughout the entire application. Often this will not be an issue, but it would pose a problem if there are other places in the code that need to be able to deserialize the createdDate from JSON.
A created date makes sense only for immutable entities (which you are not going to update).
If the entity is updatable, I would use only a last-modified date instead.
For any other case, it probably makes sense to use some kind of history/audit log.
@Entity
// ...
public class MyEntity {
    // ...
    @CreatedDate
    private LocalDateTime createdAt; // modifiedAt

    @PreUpdate
    public void fixSpringDataRestNullDate() {
        createdAt = LocalDateTime.now();
    }
}
I am beginning to use Dapper and love it so far. However, as I venture further into complexity, I have run into a big issue with it. The fact that you can pass an entire custom object as a parameter is great. However, when I add another custom object as a property, it no longer works, as Dapper tries to map that object as a SQL parameter. Is there any way to have it ignore custom objects that are properties of the main object being passed through? Example below:
public class CarMaker
{
public string Name { get; set; }
public Car MyCar { get; set; }
}
Property Name maps fine, but property MyCar fails because it is a custom object. I will have to restructure my entire project if Dapper can't handle this, which... well, blows haha.
Dapper Extensions has a way to create custom maps, which allows you to ignore properties:
public class MyModelMapper : ClassMapper<MyModel>
{
    public MyModelMapper()
    {
        // use a custom schema
        Schema("not_dbo_schema");
        // use a custom primary key
        Map(x => x.ThePrimaryKey).Key(KeyType.Assigned);
        // map a property to a differently named database column
        Map(x => x.Foo).Column("Bar");
        // ignore this property entirely
        Map(x => x.SecretDataMan).Ignore();
        // optional: map all other columns
        AutoMap();
    }
}
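If the mapper is not picked up automatically, you can point Dapper Extensions at the assembly that contains your mappers. A sketch, assuming the library's static SetMappingAssemblies configuration method (verify against the version you use):

using System.Collections.Generic;
using System.Reflection;

// Register the assembly containing MyModelMapper so Dapper Extensions finds it
DapperExtensions.DapperExtensions.SetMappingAssemblies(
    new List<Assembly> { typeof(MyModelMapper).Assembly });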
Here is a link
There is a much simpler solution to this problem.
If the property MyCar is not in the database, and it probably is not, then simply remove the { get; set; } and the "property" becomes a field, which is automatically ignored by DapperExtensions. If you are actually storing this information in a database and it is a multi-valued property that is not serialized into JSON or a similar format, I think you are probably asking for complexity that you don't want. There is no SQL equivalent of the object Car, and the properties in your model must map to something that SQL recognizes.
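Applied to the example from the question, that would look like this (a minimal sketch):

public class CarMaker
{
    public string Name { get; set; } // property: mapped to a SQL parameter
    public Car MyCar;                // field: ignored by DapperExtensions
}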
UPDATE:
If "Car" is part of a table in your database, then you can read it into the CarMaker object using Dapper's QueryMultiple.
I use it in this fashion:
var reader = dbConnection.QueryMultiple("Request_s", param: new { id = id }, commandType: CommandType.StoredProcedure);
if (reader != null)
{
    // first result set: the parent Request
    result = reader.Read<Models.Request>().First();
    // second result set: the child Reviews
    result.reviews = reader.Read<Models.Review>();
}
The Request Class has a field as such:
public IEnumerable<Models.Review> reviews;
The stored procedure looks like this:
ALTER PROCEDURE [dbo].[Request_s]
(
    @id int = null
)
AS
BEGIN
    SELECT *
    FROM [biospecimen].requests as bn
    where bn.id = coalesce(@id, bn.id)
    order by bn.id desc;

    if @id is not null
    begin
        SELECT *
        FROM [biospecimen].reviews as bn
        where bn.request_id = @id;
    end
END
In the first read, Dapper ignores the field reviews, and in the second read, Dapper loads the information into the field. If a null set is returned, Dapper will load the field with a null set just like it will load the parent class with null contents.
The second select statement then reads the collection needed to complete the object, and Dapper stores the output as shown.
I have been implementing this in my Repository classes in situations where a target parent class has several child classes that are being displayed at the same time.
This prevents multiple trips to the database.
You can also use this approach when the target class is a child class and you need information about the parent class it is related to.
After reading the question in the attached link, I got a sense of how to set DateCreated and DateModified columns in Entity Framework and use them in my application. In the old SQL way, though, the trigger approach is more popular because it is more secure from a DBA's point of view.
So, any advice on which way is best practice? Should it be set in Entity Framework for the purpose of application integrity, or should I use a trigger, as it makes more sense from a data-security point of view? And is there a way to compose a trigger in Entity Framework? Thanks.
EF CodeFirst: Rails-style created and modified columns
BTW, even though it doesn't matter much, I am building this app using ASP.NET MVC C#.
Opinion: triggers are hidden behaviour; unless you go looking for them, you usually won't realise they are there. I also like to keep the DB as 'dumb' as possible when using EF, since I'm using EF precisely so my team won't need to maintain SQL code.
For my solution (mix of ASP.NET WebForms and MVC in C# with Business Logic in another project that also contains the DataContext):
I recently had a similar issue, and although my situation was more complex (Database First, so it required a custom TT file), the solution is mostly the same.
I created an interface:
public interface ITrackableEntity
{
DateTime CreatedDateTime { get; set; }
int CreatedUserID { get; set; }
DateTime ModifiedDateTime { get; set; }
int ModifiedUserID { get; set; }
}
Then I just implemented that interface on any entities I needed to (because my solution was DatabaseFirst, I updated the TT file to check if the table had those four columns, and if so added the interface to the output).
UPDATE: here are my changes to the TT file, where I updated the EntityClassOpening() method:
public string EntityClassOpening(EntityType entity)
{
var trackableEntityPropNames = new string[] { "CreatedUserID", "CreatedDateTime", "ModifiedUserID", "ModifiedDateTime" };
var propNames = entity.Properties.Select(p => p.Name);
var isTrackable = trackableEntityPropNames.All(s => propNames.Contains(s));
var inherits = new List<string>();
if (!String.IsNullOrEmpty(_typeMapper.GetTypeName(entity.BaseType)))
{
inherits.Add(_typeMapper.GetTypeName(entity.BaseType));
}
if (isTrackable)
{
inherits.Add("ITrackableEntity");
}
return string.Format(
CultureInfo.InvariantCulture,
"{0} {1}partial class {2}{3}",
Accessibility.ForType(entity),
_code.SpaceAfter(_code.AbstractOption(entity)),
_code.Escape(entity),
_code.StringBefore(" : ", String.Join(", ", inherits)));
}
The only thing left was to add the following to my partial DataContext class:
public override int SaveChanges()
{
// fix trackable entities
var trackables = ChangeTracker.Entries<ITrackableEntity>();
if (trackables != null)
{
// added
foreach (var item in trackables.Where(t => t.State == EntityState.Added))
{
item.Entity.CreatedDateTime = System.DateTime.Now;
item.Entity.CreatedUserID = _userID;
item.Entity.ModifiedDateTime = System.DateTime.Now;
item.Entity.ModifiedUserID = _userID;
}
// modified
foreach (var item in trackables.Where(t => t.State == EntityState.Modified))
{
item.Entity.ModifiedDateTime = System.DateTime.Now;
item.Entity.ModifiedUserID = _userID;
}
}
return base.SaveChanges();
}
Note that I saved the current user ID in a private field on the DataContext class each time I created it.
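A minimal sketch of that pattern (the context name and constructor are illustrative, not the original code):

using System.Data.Entity;

public partial class MyDataContext : DbContext
{
    private readonly int _userID;

    public MyDataContext(int userID)
    {
        // Capture the current user once per context instance so that
        // SaveChanges() can stamp CreatedUserID/ModifiedUserID
        _userID = userID;
    }
}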
As for DateCreated, I would just add a default constraint on that column set to SYSDATETIME() that takes effect when inserting a new row into the table.
For DateModified, personally, I would probably use triggers on those tables.
In my opinion, the trigger approach:
- makes it easier: I don't have to worry about remembering, every time I save an entity, to set that DateModified;
- makes it "safer", in that it will also apply the DateModified if someone finds a way around my application to modify data in the database directly (using e.g. Access or Excel).
Entity Framework 6 has interceptors, which can be used to set created and modified dates. I wrote an article on how to do it: http://marisks.net/2016/02/27/entity-framework-soft-delete-and-automatic-created-modified-dates/
I agree with marc_s - much safer to have the trigger(s) in the database. In my company's databases, I require each table to have Date_Modified and Date_Created columns, and I even have a utility function to automatically create the necessary triggers.
When using with Entity Framework, I found I needed to use the [DatabaseGenerated] annotation with my POCO classes:
[Column(TypeName = "datetime2")]
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime? Date_Modified { get; set; }
[Column(TypeName = "datetime2")]
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public DateTime? Date_Created { get; set; }
I was attempting to use stored procedure mapping on an entity, and EF was creating @Date_Modified and @Date_Created parameters on my insert/update sprocs, giving the error:
Procedure or function has too many arguments specified.
Most of the examples show using [NotMapped], which will allow select/insert to work, but then those fields will not show up when that entity is loaded!
Alternatively, you can just make sure any sprocs contain the @Date_Modified and @Date_Created parameters, but this goes against the design of using triggers in the first place.
How do I get this column to behave like a PERSISTED computed column in the database?
My current attempt (it loads all CompCol rows with null in the seed):
public class Call
{
public Call()
{
}
[Key]
public int Id { get; set; }
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public string CompCol
{
get
{
return "ABC-" + Convert.ToString(Id).PadLeft(5, '0');
}
protected set {}
}
}
The solution I found was to:
Make sure automatic migrations are turned off, so that VS generates a migration script (Fluent API code) for us to customise further instead of just running it. So, in the Configuration class:
public Configuration()
{
AutomaticMigrationsEnabled = false;
}
Add the field to the class and set it as computed like so; the setter is private because we obviously cannot write to a computed field:
[DatabaseGenerated(DatabaseGeneratedOption.Computed)]
public string BreakdownNo { get; private set; }
Then run add-migration [xyz-name] in the Package Manager Console to generate the migration code, which will appear under the Migrations folder with the given name.
Inside the migration, comment out the code in Up() and add custom SQL like so:
public override void Up()
{
//AddColumn("dbo.Calls", "BreakdownNo", c => c.String());
Sql("ALTER TABLE dbo.Calls ADD BreakdownNo AS ('BD'+RIGHT('00000'+ CAST(Id AS VARCHAR), 6))");
}
Run update-database in the PM Console and it should add the computed column properly.
FURTHER NOTES: If you get the formula wrong, you will have to revert the migration by running update-database -TargetMigration: [name of migration to go back to], then do another add-migration and amend your formula there, finishing off with update-database. There may be a better way, but this is what I found and used.
I have not, however, found a way to make the field persisted yet.
Why not call SQL like this:
public class demo
{
void demoMethod()
{
Model1 model = new Model1();//Model1 : DbContext
model.Database.ExecuteSqlCommand("alter table Results drop column Total; alter table Results add Total AS (Arabic + English + Math + Science)");
}
}
I ran into some trouble using the method proposed in the accepted answer. I offer an alternative solution that worked for me.
I encountered a failure when running this query:
oDb.LogEntries.SingleOrDefault(Function(LogEntry) LogEntry.LogTime = dDate)
The error message:
The 'MinutesOffline' property on 'LogEntry' could not be set to a 'System.Int32' value. You must set this property to a non-null value of type 'System.Single'.
As we can see, EF 6.2 is trying to write a value to the property. Whether this is due to EF internally attempting to write to the Private Set, I don't know. It almost seems like it. But the end result is what matters: the statement fails.
Instead of setting the column to DatabaseGeneratedOption.Computed, I ended up ignoring it altogether: Builder.Entity(Of LogEntry).Ignore(Function(LogEntry) LogEntry.MinutesOffline).
This enabled me to create a read-only property:
Public ReadOnly Property MinutesOffline As Single
Get
Return IIf(Me.Scale < 1, 5, 0)
End Get
End Property
It also has the added benefit that we don't have to comment out any lines in the generated migration.
We still have to make the custom Sql() call in Up():
ALTER TABLE [LogEntries] ADD [MinutesOffline] AS (IIF([Scale] < 1, 5, 0)) PERSISTED
...and the PERSISTED keyword does work here. This becomes a persisted computed column.
YMMV
--EDIT--
I found out why I was getting the casting error; it had nothing to do with migrations and everything to do with my code. I wasn't properly casting the computed column at creation time:
ALTER TABLE [LogEntries] ADD [MinutesOffline] AS (IIF([Scale] < 1, 5, 0)) PERSISTED
The proper syntax for doing so is this:
ALTER TABLE [LogEntries] ADD [MinutesOffline] AS (CAST((IIF([Scale] < 1, 5, 0)) AS REAL)) PERSISTED
Accordingly, I've reverted the Ignore() calls and switched everything back over to the method proposed in the accepted answer.
Hat tip to JotaBe for the assistance.