Dataset field DBNull -> int?

SQL Server int column whose value is sometimes NULL.
The DataAdapter fills the dataset OK and the data displays in a DataGridView OK.
But when I try to retrieve the data programmatically from the dataset, the generated field accessor throws a StrongTypingException:
[global::System.Diagnostics.DebuggerNonUserCodeAttribute()]
public int curr_reading {
    get {
        try {
            return ((int)(this[this.tableHistory.curr_readingColumn]));
        }
        catch (global::System.InvalidCastException e) {
            throw new global::System.Data.StrongTypingException("The value for column \'curr_reading\' in table \'History\' is DBNull.", e);
        }
    }
}
I got past this by checking for DBNull in the get accessor and returning null, but...
When the dataset structure is modified (I'm still developing), my changes are (unsurprisingly) gone.
What is the best way to handle this situation?
It seems I am stuck with dealing with it at the dataset level.
Is there some sort of attribute that can tell the auto code generator to leave the changes in place?

In the typed dataset designer there is the NullValue property.
By default its value is "Throw exception" (hence your generated code).
You can set it to the desired default value, e.g. 0.
Then the generated accessor returns 0 instead of throwing (different code is generated).
VS2008: this works directly in the dataset designer.
VS2005: the designer only allows it for strings, but you can edit the XSD directly and set the property msprop:nullValue="0"
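For illustration, this is roughly the accessor the designer regenerates once NullValue is set to 0 (a sketch based on the question's schema; the exact generated code may differ):
public int curr_reading {
    get {
        if (this.Iscurr_readingNull()) {
            return 0; // the NullValue default configured in the designer
        }
        return ((int)(this[this.tableHistory.curr_readingColumn]));
    }
}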

Leave the auto-generated code alone. There's no way to "intercept" it getting generated so any changes you do make are guaranteed to get blown away sooner or later.
.NET (well, at least the .NET 2.0 System.Data bits) will not convert DBNull into anything else. This sucks, but you can't do anything about it.
Write an extension method called ToNullable() or similar:
public static T? ToNullable<T>(this object x) where T : struct {
    if (x == DBNull.Value)
        return null; // DBNull becomes null
    else
        return (T)x;
}
Then you can do (where row is a DataRow):
int? thing = row["column"].ToNullable<int>();

The typed dataset row also generates a method you can use to test for null:
int curr_reading = row.Iscurr_readingNull()
    ? <default_value>
    : row.curr_reading;

If memory serves, you need to mark the row as being edited - using .BeginEdit() - then make your edits and commit them with .EndEdit(). You may want to do a little reading into these methods (they live on the DataRow) - my memory is a little hazy.
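A minimal sketch of that pattern on a plain DataRow (BeginEdit/EndEdit are standard System.Data members; the column name comes from the question):
row.BeginEdit();           // suspends validation and change events
row["curr_reading"] = 42;  // make the edits
row.EndEdit();             // commits the proposed values to the row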
Hope this helps at least a little bit.

if(row["curr_reading"] is DBNull){
}else{
row.curr_reading;
}


Change the ItemList in a ComboBox depending on another ComboBox's choice

First I must apologize because English isn't my mother language, but I'll try to be clear in what I'm asking.
I have a set of rows in a table view, and every row has different combo boxes per column. So the interaction between combo boxes must be per row: if I select Item 1 in combo box A1, the item list in combo box A2 should be updated.
My problem is that every combo box A2, B2, C2, etc. is being updated according to the choice in A1, and the same thing happens with the B1 and C1 combo boxes.
I need to update just A2 according to A1, B2 according to B1, and so on.
I set the combo boxes up via a cell factory, because I have to save the data behind them in a serializable object.
I hope this is clear.
Regards.
This is pretty much a pain...
From a TableCell, you can observe the TableRow via its tableRowProperty().
From the TableRow, you can observe the item in the row, via the table row's itemProperty().
And of course, from the item in the row, you can observe any properties defined in your model class, and update a list of items in the combo box accordingly.
The painful part is that any of these values can, and will at some point, change. So the things you need to observe keep changing, and you have to manage adding and removing listeners as this happens.
The Bindings.select method is supposed to help manage things like this, but as of JavaFX 8 it prints huge stack traces to the output as warnings whenever it encounters a null value, which it does frequently. So I recommend doing your own listener management until that is fixed. (For some reason, the JavaFX team doesn't seem to consider this a big deal, even though encountering null values in the path defined in a Bindings.select is explicitly supported in the API docs.)
Just to make it slightly more unpleasant, the getTableRow() method in TableCell<S,T> returns a TableRow, instead of the more obvious TableRow<S>. (There may be a reason for this I can't see, but, well...). So your code is additionally littered with casts.
I created an example that works: apologies for it being based on US geography, but I had much of the example already written. I really hope I'm missing something and that there are easier ways to do this: please feel free to suggest something if anyone has better ideas.
On last note: the EasyBind library may provide a simpler way to bind to the properties along an arbitrary path.
As @James_D's example no longer runs due to link rot, and I was dealing with this same issue, here's how I figured out how to create this effect.
View the full test case here.
I extend the built-in ComboBoxTableCell<S, T> to expose the necessary fields. The custom TableCell has a Supplier<S> tableValue = () -> (S) this.getTableRow().getItem(); used to access the applicable Data object. Additionally, I reflectively retrieve and store a reference to the cell's ComboBox. Because it is lazily instantiated in the superclass, I also have to set it via reflection before I can get it. Finally, since I'm creating it manually, I have to initialize the ComboBox as it would be in javafx.scene.control.cell.CellUtils.createComboBox.
It is important to expose these because, in the column's CellFactory, we finish initializing the ComboBoxCell: we create a new instance of our custom ComboBoxTableCell, and when the comboBox is shown for the first time (i.e. when we can be sure that a Data object is associated with the cell), we bind the ComboBox#itemsProperty to a Bindings.when chain returning the proper ObservableList for the case.
CellFactory:
column1.setCellFactory(c -> {
    TransparentComboBoxTableCell<Data, Enum> tcbtc = new TransparentComboBoxTableCell<>();
    tcbtc.comboBox.setOnShown(e -> {
        if (!tcbtc.comboBox.itemsProperty().isBound()) tcbtc.comboBox.itemsProperty().bind(
            Bindings.when(tcbtc.tableValue.get().base.isEqualTo(BASE.EVEN)).then(evens).otherwise(
                Bindings.when(tcbtc.tableValue.get().base.isEqualTo(BASE.ODD)).then(odds).otherwise(
                    FXCollections.emptyObservableList()
                ))
        );
    });
    return tcbtc;
});
custom ComboBoxTableCell:
public static class TransparentComboBoxTableCell<S, T> extends ComboBoxTableCell<S, T> {
    public TransparentComboBoxTableCell() {
        this(FXCollections.observableArrayList());
    }

    public TransparentComboBoxTableCell(ObservableList<T> startingItems) {
        super(startingItems);
        try {
            Field f = ComboBoxTableCell.class.getDeclaredField("comboBox");
            f.setAccessible(true);
            f.set(this, new ComboBox<>());
            comboBox = (ComboBox<T>) f.get(this);
            // Setup out of javafx.scene.control.cell.CellUtils.createComboBox
            // comboBox.converterProperty().bind(converter);
            comboBox.setMaxWidth(Double.MAX_VALUE);
            comboBox.getSelectionModel().selectedItemProperty().addListener((ov, oldValue, newValue) -> {
                if (this.isEditing()) {
                    this.commitEdit((T) newValue);
                }
            });
        } catch (NoSuchFieldException | SecurityException | IllegalArgumentException | IllegalAccessException ex) {
            Logger.getLogger(FXMLDocumentController.class.getName()).log(Level.SEVERE, null, ex);
            throw new Error("Error extracting 'comboBox' from ComboBoxTableCell", ex);
        }
        tableValue = () -> (S) this.getTableRow().getItem();
    }

    public final ComboBox<T> comboBox;
    public final Supplier<S> tableValue;
}

Entity Framework and WPF best practices

Is it ever a good idea to work directly with the context? For example, say I have a database of customers and a user can search them by name, display a list, choose one, then edit that customer's properties.
It seems I should use the context to get a list of customers (mapped to POCOs or CustomerViewModels) and then immediately close the context. Then, when the user selects one of the CustomerViewModels in the list the customer properties section of the UI populates.
Next they can change the name, type, website address, company size, etc. Upon hitting a save button, I then open a new context, use the ID from the CustomerViewModel to retrieve that customer record, and update each of its properties. Finally, I call SaveChanges() and close the context. This is a LOT OF WORK.
My question is: why not just work directly with the context, leaving it open throughout? I have read that using the same context with a long lifetime is very bad and will inevitably cause problems. My assumption is that if the application will only ever be used by ONE person, I can leave the context open and do everything with it. However, if there will be many users, I want to maintain a concise unit of work and thus open and close the context on a per-request basis.
Any suggestions? Thanks.
@PGallagher - Thanks for the thorough answer.
@Brice - your input is helpful as well.
However, @Manos D., the 'epitome of redundant code' comment concerns me a bit. Let me go through an example. Let's say I'm storing customers in a database, and one of my customer properties is CommunicationMethod.
[Flags]
public enum CommunicationMethod
{
    None = 0,
    Print = 1,
    Email = 2,
    Fax = 4
}
The UI for my manage-customers page in WPF will contain three check boxes under the customer communication method (Print, Email, Fax). I can't bind each checkbox to that enum directly; it doesn't make sense. Also, what if the user clicks that customer, then gets up and goes to lunch? The context sits there for hours, which is bad. Instead, this is my thought process.
The end user chooses a customer from the list. I new up a context, find that customer, and return a CustomerViewModel; then the context is closed (I've left repositories out for simplicity here).
using (MyContext ctx = new MyContext())
{
    CurrentCustomerVM = new CustomerViewModel(ctx.Customers.Find(customerId));
}
Now the user can check/uncheck the Print, Email and Fax boxes, as they are bound to three bool properties in the CustomerViewModel, which also has a Save() method. Here goes.
public class CustomerViewModel : ViewModelBase
{
    Customer _customer;

    public CustomerViewModel(Customer customer)
    {
        _customer = customer;
    }

    public bool CommunicateViaEmail
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Email); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Email)) return;
            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Email;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Email;
        }
    }

    public bool CommunicateViaFax
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Fax); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Fax)) return;
            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Fax;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Fax;
        }
    }

    public bool CommunicateViaPrint
    {
        get { return _customer.CommunicationMethod.HasFlag(CommunicationMethod.Print); }
        set
        {
            if (value == _customer.CommunicationMethod.HasFlag(CommunicationMethod.Print)) return;
            if (value)
                _customer.CommunicationMethod |= CommunicationMethod.Print;
            else
                _customer.CommunicationMethod &= ~CommunicationMethod.Print;
        }
    }

    public void Save()
    {
        using (MyContext ctx = new MyContext())
        {
            var toUpdate = ctx.Customers.Find(_customer.Id);
            toUpdate.CommunicationMethod = _customer.CommunicationMethod;
            ctx.SaveChanges();
        }
    }
}
Do you see anything wrong with this?
It is OK to use a long-running context; you just need to be aware of the implications.
A context represents a unit of work. Whenever you call SaveChanges, all the pending changes to the entities being tracked will be saved to the database. Because of this, you'll need to scope each context to what makes sense. For example, if you have a tab to manage customers and another to manage products, you might use one context for each so that when a users clicks save on the customer tab, all of the changes they made to products are not also saved.
Having a lot of entities tracked by a context could also slow down DetectChanges. One way to mitigate this is by using change tracking proxies.
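For example, a sketch of an entity that EF can wrap in a change tracking proxy (the requirements are a public, non-sealed class with virtual properties; the Customer shape here is assumed from the question):
public class Customer
{
    public virtual int Id { get; set; }
    public virtual string Name { get; set; }
    public virtual CommunicationMethod CommunicationMethod { get; set; }
}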
Since the time between loading an entity and saving that entity could be quite long, the chance of hitting an optimistic concurrency exception is greater than with short-lived contexts. These exceptions occur when an entity is changed externally between loading and saving it. Handling these exceptions is pretty straightforward, but it's still something to be aware of.
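A minimal sketch of handling such an exception with a DbContext (EF 4.1+), resolving it by letting the database win; ctx is assumed to be your context:
try
{
    ctx.SaveChanges();
}
catch (DbUpdateConcurrencyException ex) // System.Data.Entity.Infrastructure
{
    // The entity changed externally between load and save: refresh the
    // tracked values from the database and let the user retry.
    ex.Entries.Single().Reload();
}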
One cool thing you can do with long-lived contexts in WPF is bind to the DbSet.Local property (e.g. context.Customers.Local). This is an ObservableCollection that contains all of the tracked entities that are not marked for deletion.
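A hedged sketch of that binding in code-behind (MyContext and customersGrid are assumed names):
private MyContext _ctx;

private void OnLoaded(object sender, RoutedEventArgs e)
{
    _ctx = new MyContext();
    _ctx.Customers.Load(); // Load() is the System.Data.Entity extension method
    customersGrid.ItemsSource = _ctx.Customers.Local;
}

protected override void OnClosed(EventArgs e)
{
    _ctx.Dispose(); // dispose manually once the binding is no longer needed
    base.OnClosed(e);
}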
Hopefully this gives you a bit more information to help you decide which approach to take.
Microsoft Reference:
http://msdn.microsoft.com/en-gb/library/cc853327.aspx
They say:
Limit the scope of the ObjectContext. In most cases, you should create an ObjectContext instance within a using statement (Using…End Using in Visual Basic). This can increase performance by ensuring that the resources associated with the object context are disposed automatically when the code exits the statement block. However, when controls are bound to objects managed by the object context, the ObjectContext instance should be maintained as long as the binding is needed and disposed of manually.
For more information, see Managing Resources in Object Services (Entity Framework). http://msdn.microsoft.com/en-gb/library/bb896325.aspx
Which says:
In a long-running object context, you must ensure that the context is disposed when it is no longer required.
StackOverflow Reference:
This StackOverflow question also has some useful answers...
Entity Framework Best Practices In Business Logic?
Where a few have suggested that you promote your context to a higher level and reference it from there, thus keeping only one single context.
My ten pence worth:
Wrapping the context in a using statement allows the garbage collector to clean up the resources and prevents memory leaks.
Obviously in simple apps this isn't much of a problem; however, if you have multiple screens all using a lot of data, you could end up in trouble unless you are certain to dispose of your context correctly.
Hence I have employed a similar method to the one you have mentioned, where I've added an AddOrUpdate method to each of my repositories: I pass in my new or modified entity and update or add it depending upon whether it exists, as in the sketch below.
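A minimal C# sketch of that AddOrUpdate idea (the Customer set and Id key are assumed names; the original code is VB.NET):
public void AddOrUpdate(Customer entity)
{
    var existing = _context.Customers.Find(entity.Id);
    if (existing == null)
        _context.Customers.Add(entity); // new record
    else
        _context.Entry(existing).CurrentValues.SetValues(entity); // modified record
    _context.SaveChanges();
}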
Updating Entity Properties:
Regarding updating properties, however, I've used a simple function which uses reflection to copy all the properties from one entity to another:
Public Shared Function CopyProperties(Of sourceType As {Class, New}, targetType As {Class, New})(ByVal source As sourceType, ByVal target As targetType) As targetType
    Dim sourceProperties() As PropertyInfo = source.GetType().GetProperties()
    Dim targetProperties() As PropertyInfo = GetType(targetType).GetProperties()
    For Each sourceProp As PropertyInfo In sourceProperties
        For Each targetProp As PropertyInfo In targetProperties
            If sourceProp.Name <> targetProp.Name Then Continue For
            ' Only try to set the property when we can read the source and write the target.
            '
            ' *** Note: We detect entity types by checking whether the PropertyType is a
            ' collection or a member of the context namespace.
            If sourceProp.CanRead And targetProp.CanWrite Then
                ' We want to leave collections and entity references alone.
                If sourceProp.PropertyType.FullName.StartsWith("System.Collections") Or _
                   sourceProp.PropertyType.FullName.StartsWith("MyContextNameSpace.") Then
                    '
                    ' Do not copy
                    '
                Else
                    Try
                        targetProp.SetValue(target, sourceProp.GetValue(source, Nothing), Nothing)
                    Catch ex As Exception
                    End Try
                End If
            End If
            Exit For
        Next
    Next
    Return target
End Function
Where I do something like:
dbColour = Classes.clsHelpers.CopyProperties(Of Colour, Colour)(RecordToSave, dbColour)
This reduces the amount of code I need to write for each Repository of course!
The context is not permanently connected to the database. It is essentially an in-memory cache of records you have loaded from disk. It will only hit the database when you request a record it has not previously loaded, when you force it to refresh, or when you're saving your changes back to disk.
Opening a context, grabbing a record, closing the context, and then copying modified properties to an object from a brand new context is the epitome of redundant code. You are supposed to leave the original context alone and use it to call SaveChanges().
If you're looking to deal with concurrency issues, you should do a Google search about "handling concurrency" for your version of Entity Framework.
As an example I have found this.
Edit in response to comment:
So from what I understand, you need a subset of the columns of a record to be overridden with new values while the rest are unaffected? If so, yes, you'll need to manually update those few columns on a "new" object.
I was under the impression that you were talking about a form that reflects all the fields of the customer object and is meant to provide edit access to the entire customer record. In that case there's no point in using a new context and painstakingly copying all properties one by one, because the end result (all data overridden with form values regardless of age) will be the same.

Accessing both stored procedure output parameters AND the result set in Entity Framework?

Is there any way of accessing both a result set and output parameters from a stored procedure added in as a function import in an Entity Framework model?
I am finding that if I set the return type to "None", such that the designer-generated code ends up calling base.ExecuteFunction(...), I can access the output parameters fine after calling the function (but of course not the result set).
Conversely, if I set the return type in the designer to a collection of complex types, then the designer-generated code calls base.ExecuteFunction<T>(...) and the result set is returned as an ObjectResult<T>, but then the Value property of the ObjectParameter instances is null rather than containing the proper value that I can see being passed back in Profiler.
I speculate the second method is perhaps using a DataReader and not closing it. Is this a known issue? Any workarounds or alternative approaches?
Edit
My code currently looks like
public IEnumerable<FooBar> GetFooBars(
    int? param1,
    string param2,
    DateTime from,
    DateTime to,
    out DateTime? createdDate,
    out DateTime? deletedDate)
{
    var createdDateParam = new ObjectParameter("CreatedDate", typeof(DateTime));
    var deletedDateParam = new ObjectParameter("DeletedDate", typeof(DateTime));
    var fooBars = MyContext.GetFooBars(param1, param2, from, to, createdDateParam, deletedDateParam);
    createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value ? null : createdDateParam.Value);
    deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value ? null : deletedDateParam.Value);
    return fooBars;
}
According to this SO post, the sproc doesn't actually execute until you iterate the result set. I simulated your scenario, ran some tests, and confirmed this is the case. You didn't add a code sample, so I can't see exactly what you're doing, but as per your response below, try caching the result set in a list (e.g. Context.MyEntities.ToList()) and then check the value of the ObjectParameter.
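Applied to the code above, a sketch of that fix (GetFooBars and the parameter names come from the question):
var fooBars = MyContext.GetFooBars(param1, param2, from, to,
                                   createdDateParam, deletedDateParam)
                       .ToList(); // forces the sproc to execute now

// Only after enumeration are the output parameters populated:
createdDate = createdDateParam.Value as DateTime?; // DBNull yields null
deletedDate = deletedDateParam.Value as DateTime?;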

Entity Framework 4 - Trim Database Char(50) value for Name on legacy database

This should be simple, but I haven't found a way yet...
I have a legacy database with name fields that are stored as CHAR(50). When such a field is bound to a TextBox with a MaxLength of 50, you cannot insert anything, because the value is padded with trailing spaces to the full 50 characters.
How can I make EF trim these values, or at least map to RTRIM(Column)?
I've tried using value converters, but the round trip causes issues, with backspacing and spaces getting deleted between words.
Note that I only want to trim some fields, not all.
We are using SQL Server 2000 as the database. Soon to move to SQL 2008.
Thanks!
Entity Framework can only map directly to tables. You can also map to a view or a custom DB query, but in that case your entity becomes read-only unless you also map the Insert, Delete and Update operations to stored procedures.
I think the problem you describe is related to the ANSI_PADDING behavior. It can be turned off, but:
It is not recommended; in a future version of SQL Server it will be treated as an error.
It must be configured before you create a column.
So you must handle trimming in the application. You can, for example, modify the T4 templates (if you use them) to trim string properties. I'm not sure how it works with WPF, but you can probably inherit from TextBox and override the Text property to trim values.
Another way is handling the ObjectMaterialized event on the ObjectContext and manually trimming text properties, but it can slow down the execution of your queries.
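A rough sketch of that ObjectMaterialized approach, assuming an EF 4.1+ DbContext (the event itself lives on the underlying ObjectContext):
var objectContext = ((IObjectContextAdapter)context).ObjectContext;
objectContext.ObjectMaterialized += (sender, e) =>
{
    // Trim every readable/writable string property on the materialized entity.
    var stringProps = e.Entity.GetType().GetProperties()
        .Where(p => p.PropertyType == typeof(string) && p.CanRead && p.CanWrite);
    foreach (var prop in stringProps)
    {
        var value = (string)prop.GetValue(e.Entity, null);
        if (value != null)
            prop.SetValue(e.Entity, value.TrimEnd(), null);
    }
};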
There's no way to do this with EF and SQL Server that I have found. I solved it with an extension method on IEnumerable<T> that calls TrimEnd() on each string property:
public static IEnumerable<TEntity> Trim<TEntity>(this IEnumerable<TEntity> collection)
{
    Type type = typeof(TEntity);
    IEnumerable<PropertyDescriptor> properties = TypeDescriptor.GetProperties(type).Cast<PropertyDescriptor>()
        .Where(p => p.PropertyType == typeof(string));
    foreach (TEntity entity in collection)
    {
        foreach (PropertyDescriptor property in properties)
        {
            string value = (string)property.GetValue(entity);
            if (!String.IsNullOrEmpty(value))
            {
                value = value.TrimEnd();
                property.SetValue(entity, value);
            }
        }
    }
    return collection;
}
Just make sure you call it after EF has retrieved the entities from the database. For example, after ToList():
public IEnumerable<Country> FetchCountries()
{
    return _context.Set<Country>().ToList().Trim();
}
Have a look at the available attributes for your database connection string. I had a similar issue with the Sybase Advantage database and solved it with its TrimTrailingSpaces attribute. Your database may support something similar.
Data Source=\\serverx\volumex\path\db.add;User ID=user;Password=pass;ServerType=REMOTE;TrimTrailingSpaces=TRUE;
http://www.connectionstrings.com/

Nhibernate: How to find responsible Field for SqlDateTime overflow exception

I know the reason for the exception (SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.) is a non-nullable DateTime field in an entity, so NHibernate tries to save a DateTime value smaller than MSSQL accepts.
The problem is that there are far too many entities in the project to find the right DateTime field by hand.
The exception occurs after a SaveOrUpdate(), but it is not triggered by the entity I want to save; it comes from some other entity that was loaded in the current session and is now affected by the flush().
How can I find out which field is really responsible for the exception?
If you cast the exception to a SqlTypeException, that will expose the Data collection. Normally there is a single Key and a single Value in the collection. The value is the SQL that was attempted to be executed. By examining the DML you can then see what table was being acted upon. Hopefully that table is narrow enough to make determining the offending column trivial.
Here's some simple code I use to spit out the Key and Value of the exception.
catch (SqlTypeException e)
{
    foreach (var key in e.Data.Keys)
    {
        System.Console.Write("Key is " + key.ToString());
    }
    foreach (var value in e.Data.Values)
    {
        Console.WriteLine("Value is " + value.ToString());
    }
}
Have you tried forcing NHibernate to output the generated SQL and reviewing that for the rogue DateTime? It'd be easier if you were using something like NHProfiler (I don't work for them, just a satisfied customer), but really all that's doing for you is showing/isolating the SQL anyway, which you can do from the output window with a little extra effort. The tricky part is that if it's a really deep save there could potentially be a lot of SQL to read through, but chances are you'll be able to spot it pretty quickly.
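If you're not already logging SQL, a hedged sketch of enabling it programmatically via NHibernate's standard show_sql/format_sql properties:
var cfg = new NHibernate.Cfg.Configuration().Configure();
cfg.SetProperty(NHibernate.Cfg.Environment.ShowSql, "true");   // echo generated SQL to the output
cfg.SetProperty(NHibernate.Cfg.Environment.FormatSql, "true"); // pretty-print it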
You can create a class that implements both IPreUpdateEventListener and IPreInsertEventListener as follows:
public class InsertUpdateListener : IPreInsertEventListener, IPreUpdateEventListener {
    public bool OnPreInsert(PreInsertEvent @event) {
        CheckDateTimeWithinSqlRange(@event.Persister, @event.State);
        return false;
    }

    public bool OnPreUpdate(PreUpdateEvent @event) {
        CheckDateTimeWithinSqlRange(@event.Persister, @event.State);
        return false;
    }

    private static void CheckDateTimeWithinSqlRange(IEntityPersister persister, IReadOnlyList<object> state) {
        var rgnMin = System.Data.SqlTypes.SqlDateTime.MinValue.Value;
        // There is a small but relevant difference between DateTime.MaxValue and SqlDateTime.MaxValue.
        // DateTime.MaxValue is bigger than SqlDateTime.MaxValue but still within the valid range of
        // values for SQL Server. Therefore we test against DateTime.MaxValue and not against
        // SqlDateTime.MaxValue. [Manfred, 04jul2017]
        //var rgnMax = System.Data.SqlTypes.SqlDateTime.MaxValue.Value;
        var rgnMax = DateTime.MaxValue;
        for (var i = 0; i < state.Count; i++) {
            if (state[i] != null && state[i] is DateTime) {
                var value = (DateTime)state[i];
                if (value < rgnMin /*|| value > rgnMax*/) { // we don't check max as SQL Server is happy with DateTime.MaxValue [Manfred, 04jul2017]
                    throw new ArgumentOutOfRangeException(persister.PropertyNames[i], value,
                        $"Property '{persister.PropertyNames[i]}' for class '{persister.EntityName}' must be between {rgnMin:s} and {rgnMax:s} but was {value:s}");
                }
            }
        }
    }
}
You also need to register this event handler when you configure the session factory: add an instance to Configuration.EventListeners.PreUpdateEventListeners and to Configuration.EventListeners.PreInsertEventListeners, and then use that Configuration object when creating NHibernate's session factory, as sketched below.
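For illustration, a minimal registration sketch that keeps any listeners already configured:
var cfg = new NHibernate.Cfg.Configuration().Configure();
var listener = new InsertUpdateListener();
cfg.EventListeners.PreInsertEventListeners = cfg.EventListeners.PreInsertEventListeners
    .Concat(new IPreInsertEventListener[] { listener }).ToArray();
cfg.EventListeners.PreUpdateEventListeners = cfg.EventListeners.PreUpdateEventListeners
    .Concat(new IPreUpdateEventListener[] { listener }).ToArray();
var sessionFactory = cfg.BuildSessionFactory();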
What this does is this: Every time NHibernate inserts or updates an entity it will call OnPreInsert() or OnPreUpdate() respectively. Each of these methods in turn calls CheckDateTimeWithinSqlRange().
CheckDateTimeWithinSqlRange() iterates over all property values of the entity, i.e. the object, that is being saved. If a property value is not null, it then checks whether it is of type DateTime. If so, it checks that it is not less than SqlDateTime.MinValue.Value (note the additional .Value to avoid exceptions). There is no need to check against SqlDateTime.MaxValue.Value if you are using SQL Server 2012 or later, which will happily accept even DateTime.MaxValue, a few ticks greater than SqlDateTime.MaxValue.Value.
If the value is outside of the allowed range this code will then throw an ArgumentOutOfRangeException with an appropriate message that includes the names of the class (entity) and property causing the problem as well as the actual value that was passed in. The message is similar to the equivalent SqlServerException for the SqlDateTime overflow exception but will make it easier to pinpoint the problem.
A couple of things to consider. Obviously this does not come for free. You will incur a runtime overhead as this logic consumes CPU. Depending on your scenario this may not be a problem. If it is, you can also consider optimizing the code given in this example to make it faster. One option could perhaps be to use caching to avoid the loop for the same class. Another option could be to use it only in test and development environments. For production you could then rely that the rest of the system operates correctly and the values will always be within valid range.
Also, be aware that this code introduces a dependency on SQL Server. NHibernate is typically used to avoid dependencies like this. Other database servers that are supported by NHibernate may have a different range of allowed values for datetime. Again, there are options for resolving this as well, e.g. by using different boundaries depending on SQL dialect.
Happy coding!
