Hibernate and date comparisons

Let's say I have a due date and a reminder timespan. How do I find the entries where the due date is less than the current date plus the reminder with Hibernate 3.6 criteria queries? In other words, I want to find the Events for which the reminder should be shown. The reminder is a Long marking how long before the due date the reminder should be sent, in either days or milliseconds, whichever is easier.
To summarize, my entities are the following:
java.util.Date Event.DueDate
Long Event.Type.Reminder.Before // (in days or millis)
Examples
Today is 2012-06-11.
Included:
DueDate is 2012-06-15 and Before is 30 days.
Excluded:
DueDate is 2012-06-15 and Before is 1 day.

Ultimately this is just what ANSI SQL calls date/time arithmetic; specifically you are looking for INTERVAL datatype handling. Unfortunately, databases vary widely in their support for the INTERVAL datatype. I really want to support this in HQL (and possibly in criteria queries, although that relies on agreement in the JPA spec committee). The difficulty, like I said, is the varied (if any) support for intervals.
The best bet at the moment (through Hibernate 4.1) is to provide a custom function (org.hibernate.dialect.function.SQLFunction) registered with either the Dialect (search Google to see how this is done) or the "custom function registry" (org.hibernate.cfg.Configuration#addSqlFunction). You'd probably want it to render to your database-specific representation of date arithmetic with an interval.
Here is an example using the Oracle NUMTODSINTERVAL function:
// Imports shown for Hibernate 3.6; in 4.x, Mapping and SessionFactoryImplementor
// live under org.hibernate.engine.spi instead of org.hibernate.engine.
import java.util.List;

import org.hibernate.QueryException;
import org.hibernate.dialect.function.SQLFunction;
import org.hibernate.engine.Mapping;
import org.hibernate.engine.SessionFactoryImplementor;
import org.hibernate.type.TimestampType;
import org.hibernate.type.Type;

public class MySqlFunction implements SQLFunction
{
    public Type getReturnType(Type firstArgumentType,
                              Mapping mapping) throws QueryException
    {
        return TimestampType.INSTANCE;
    }

    public String render(Type firstArgumentType,
                         List arguments,
                         SessionFactoryImplementor factory) throws QueryException
    {
        // Arguments are already interpreted into sql variants...
        // (the list is raw, so cast the fragments to String)
        final String dueDateArg = (String) arguments.get( 0 );
        final String beforeArg = (String) arguments.get( 1 );

        // Again, using the Oracle-specific NUMTODSINTERVAL
        // function and using days as the unit...
        return dueDateArg + " + numtodsinterval(" + beforeArg + ", 'day')";
    }

    public boolean hasArguments() { return true; }

    public boolean hasParenthesesIfNoArguments() { return false; }
}
You would use this in HQL like:
select ...
from Event e
where current_date() between e.dueDate and
interval_date_calc( e.dueDate, e.before )
where 'interval_date_calc' is the name under which you registered your SQLFunction.
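For completeness, registering the function through the "custom function registry" mentioned above might look roughly like this (a sketch; 'interval_date_calc' is simply the example name used in the HQL above):

Configuration cfg = new Configuration();
// Make the custom function available to the query translator under
// the name used in the HQL snippet above
cfg.addSqlFunction( "interval_date_calc", new MySqlFunction() );
// ... add mappings and properties as usual ...
SessionFactory sessionFactory = cfg.buildSessionFactory();

Alternatively, you can register it in the constructor of a custom Dialect subclass via registerFunction(), which keeps the database-specific rendering and its registration in one place.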

Related

query by object value inside array on firebase firestore [duplicate]

This is my structure of the firestore database:
Expected result: to get all the jobs where the experience array contains an element whose lang value is "Swift".
So as per this, I should get the first two documents; the third document does not have a "Swift" experience.
Query jobs = db.collection("Jobs").whereArrayContains("experience.lang", "Swift");
jobs.get().addOnSuccessListener(new OnSuccessListener<QuerySnapshot>() {
    @Override
    public void onSuccess(QuerySnapshot queryDocumentSnapshots) {
        // Always the queryDocumentSnapshots size is 0
    }
});
I tried most of the answers but none worked out. Is there any way to query data in this structure? The docs only cover plain arrays, not arrays of custom objects.
It is actually possible to perform such a query with a database structure like yours. I have replicated your schema, and here are document1, document2, and document3.
Note that you cannot query using partial (incomplete) data. You are using only the lang property to query, which is not correct. You should use an object that contains both properties, lang and years.
Seeing your screenshot, at first glance the experience array is a list of HashMap objects. But here comes the nicest part: that list can simply be mapped into a list of custom objects. Let's try to map each object from the array to an object of type Experience. The model contains only two properties:
public class Experience {
    public String lang, years;

    public Experience() {}

    public Experience(String lang, String years) {
        this.lang = lang;
        this.years = years;
    }
}
I don't know how you named the class that represents a document, but I named it simply Job. To keep it simple, I have only used two properties:
public class Job {
    public String name;
    public List<Experience> experience;
    // Other properties

    public Job() {}
}
Now, to perform a search for all documents that contain in the array an object with the lang set to Swift, please follow the next steps. First, create a new object of the Experience class:
Experience firstExperience = new Experience("Swift", "1");
Now you can query like so:
CollectionReference jobsRef = rootRef.collection("Jobs");
jobsRef.whereArrayContains("experience", firstExperience).get().addOnCompleteListener(new OnCompleteListener<QuerySnapshot>() {
    @Override
    public void onComplete(@NonNull Task<QuerySnapshot> task) {
        if (task.isSuccessful()) {
            for (QueryDocumentSnapshot document : task.getResult()) {
                Job job = document.toObject(Job.class);
                Log.d(TAG, job.name);
            }
        } else {
            Log.d(TAG, task.getException().getMessage());
        }
    }
});
The result in the logcat will be the name of document1 and document2:
firstJob
secondJob
And this is because only those two documents contain in the array an object where the lang is set to Swift.
You can also achieve the same result when using a Map:
Map<String, Object> firstExperience = new HashMap<>();
firstExperience.put("lang", "Swift");
firstExperience.put("years", "1");
So there is no need to duplicate data in this use case. I have also written an article on the same topic:
How to map an array of objects from Cloud Firestore to a List of objects?
Edit:
In your approach it provides the result only if experience is "1" and lang is "Swift", right?
That's correct, it only searches for one element. However, if you need to query for more than that:
Experience firstExperience = new Experience("Swift", "1");
Experience secondExperience = new Experience("Swift", "4");
//Up to ten
We use another approach, which is actually very simple. I'm talking about Query's whereArrayContainsAny() method:
Creates and returns a new Query with the additional filter that documents must contain the specified field, the value must be an array, and that the array must contain at least one value from the provided list.
And in code it should look like this:
jobsRef.whereArrayContainsAny("experience", Arrays.asList(firstExperience, secondExperience)).get().addOnCompleteListener(new OnCompleteListener<QuerySnapshot>() {
    @Override
    public void onComplete(@NonNull Task<QuerySnapshot> task) {
        if (task.isSuccessful()) {
            for (QueryDocumentSnapshot document : task.getResult()) {
                Job job = document.toObject(Job.class);
                Log.d(TAG, job.name);
            }
        } else {
            Log.d(TAG, task.getException().getMessage());
        }
    }
});
The result in the logcat will be:
firstJob
secondJob
thirdJob
And this is because all three documents contain one or the other object.
The reason I mention duplicating data in a document is that documents have limits on how much data they can hold. According to the official documentation regarding usage and limits:
Maximum size for a document: 1 MiB (1,048,576 bytes)
As you can see, you are limited to 1 MiB of data in a single document, so storing duplicated data will only increase the chance of reaching that limit.
If I send null data for "experience" and "Swift" as "lang", will it be queried?
No, it will not work.
Edit2:
The whereArrayContainsAny() method works with a maximum of 10 objects. If you have 30, then you should save each query.get() of 10 objects into a Task object and then pass them one by one to Tasks' whenAllSuccess(Task... tasks).
You can also pass them directly as a list to the Tasks' whenAllSuccess() overload that takes a Collection of Tasks.
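A rough sketch of that batching (firstBatch, secondBatch and thirdBatch are hypothetical List<Experience> chunks of at most ten elements each, built as shown above):

List<Task<QuerySnapshot>> tasks = new ArrayList<>();
tasks.add(jobsRef.whereArrayContainsAny("experience", firstBatch).get());
tasks.add(jobsRef.whereArrayContainsAny("experience", secondBatch).get());
tasks.add(jobsRef.whereArrayContainsAny("experience", thirdBatch).get());
Tasks.whenAllSuccess(tasks).addOnSuccessListener(new OnSuccessListener<List<Object>>() {
    @Override
    public void onSuccess(List<Object> results) {
        // Each element is the QuerySnapshot produced by one of the batched queries
        for (Object result : results) {
            for (QueryDocumentSnapshot document : (QuerySnapshot) result) {
                Log.d(TAG, document.getString("name"));
            }
        }
    }
});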
With your current document structure, it's not possible to perform the query you want. Firestore does not allow queries for individual fields of objects in list fields.
What you would have to do is create an additional field in your document that is queryable. For example, you could create a list field with only the list of string languages that are part of the document. With this, you could use an array-contains query to find the documents where a language is mentioned at least once.
For the document shown in your screenshot, you would have a list field called "languages" with values ["Swift", "Kotlin"].
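A minimal sketch of that alternative, assuming you add a String array field named "languages" to each document when you write it:

CollectionReference jobsRef = db.collection("Jobs");
// Matches every document whose "languages" array contains "Swift" at least once
jobsRef.whereArrayContains("languages", "Swift").get()
        .addOnSuccessListener(new OnSuccessListener<QuerySnapshot>() {
            @Override
            public void onSuccess(QuerySnapshot queryDocumentSnapshots) {
                for (QueryDocumentSnapshot document : queryDocumentSnapshots) {
                    Log.d(TAG, document.getId());
                }
            }
        });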

adding new methods to LINQ to Entities

Is there any way to define the SQL conversion component for additional functions in LINQ to Entities?
For example:
myQuery.Where(entity => entity.Contains("foo", SearchFlags.All))
Ideally I am looking for something that doesn't require editing and building a new version of EntityFramework.dll directly. Is there any way to add extension methods to Entity Framework that can support SQL generation?
So far I have a template which would represent the method I need to replace for LINQ to Entities:
public static bool Contains(this object source, string searchTerms, SearchFlags flags)
{
    return true;
}
Of course this causes the error:
LINQ to Entities does not recognize the method 'Boolean
CONTAINS(System.Object, System.String, SearchFlags)' method, and this method
cannot be translated into a store expression.
To be clear, I don't want to do:
myQuery.AsEnumerable().Where(entity => entity.Contains("foo", SearchFlags.All))
Because I want to be able to execute code in SQL space and not return all the entities manually.
I also cannot use the .ToString() of the IQueryable and execute it manually because I need Entity Framework to populate the objects from several .Include joins.
I don't understand your question clearly. However, if your problem is that you can't use your own methods or other LINQ to Objects methods, just use .AsEnumerable() and do the rest of your work through LINQ to Objects instead of LINQ to Entities:
myQuery.AsEnumerable().Where(entity => entity.Contains("foo", SearchFlags.All))
And if you need to use your myQuery several times somewhere else, first load it into memory, then use it as many times as you want:
var myQuery = from d in context.myEntities
              select d;
myQuery.Load();
// ...
var myOtherQuery = from d in context.myEntities.Local
                   select d;
// Now any L2O method is supported...
I ended up doing the following (which works but is very far from perfect):
All my entities inherit from an IEntity which defines long Id { get; set; }.
I then added a redundant restriction, context.myEntities.Where(entity => entity.Id != 0); it is redundant since the identity starts at 1, but LINQ to Entities doesn't know that.
I then call .ToString() on the IQueryable after I have composed all my other queries; since it is of type DbQuery<Entity>, it returns the SQL command text, on which I do a simple replace with my query restriction.
In order to get all the .Include(...) calls to work I actually execute two different SQL commands. There is no prettier way to tap into this, because query execution plan caching causes issues otherwise (even when disabled).
As a result my code looks like this:
public IQueryable<IEntity> MyNewFunction(IQueryable<IEntity> myQueryable, string queryRestriction)
{
    string rawSQL = myQueryable.Select(entity => entity.Id).ToString().Replace("[Extent1].Id <> 0", queryRestriction);
    List<long> ids = // now execute rawSQL, get the list of ids;
    return myQueryable.Where(entity => ids.Contains(entity.Id));
}
In short, other than manually executing the SQL, or running a similar SQL command and appending the restriction to the existing command text, the only way to add your own methods to LINQ to Entities is to alter and build your own EntityFramework.dll from the EF6 source.

Displaying Mutable PostgreSQL Arrays in the NetBeans Master/Detail Sample Form using JPA 1.0

Some Background
I have a game database with a table called Games that has multiple attributes and one called Genres. The Genres attribute is defined as an integer[] in PostgreSQL. For the sake of simplicity, I'm not using any foreign key constraints, but essentially each integer in this array is a foreign key on the id attribute of the Genres table. This is my first time working with the NetBeans Master/Detail Sample Form and Java persistence, and it has been working great so far except for one thing: I get this error when the program tries to display a column that holds a one-dimensional integer array. In this example, the value is {1, 11}.
Exception Description: The object [{1,11}], of class [class org.postgresql.jdbc3.Jdbc3Array], from mapping [oracle.toplink.essentials.mappings.DirectToFieldMapping[genres-->final.public.games.genres]] with descriptor [RelationalDescriptor(finalproject.Games --> [DatabaseTable(final.public.games)])], could not be converted to [class [B].
Exception [TOPLINK-3002] (Oracle TopLink Essentials - 2.0.1 (Build b09d-fcs (12/06/2007))): oracle.toplink.essentials.exceptions.ConversionException
My Research
From what I've been able to read, it looks like PostgreSQL arrays need something special done to them before you can display and edit them in this template. By default, the sample form uses TopLink Essentials (JPA 1.0) as its persistence library, but I can also use Hibernate (JPA 1.0).
Here is the code that needs to be changed in some way. From the Games.java file:
@Entity
@Table(name = "games", catalog = "final", schema = "public")
@NamedQueries({
    // omitting named queries
    @NamedQuery(name = "Games.findByGenres", query = "SELECT g FROM Games g WHERE g.genres = :genres")
})
public class Games implements Serializable {
    @Transient
    private PropertyChangeSupport changeSupport = new PropertyChangeSupport(this);
    private static final long serialVersionUID = 1L;

    // omitting other attributes

    @Column(name = "genres")
    private Serializable genres;

    // omitting constructors and other getters/setters

    public Serializable getGenres() {
        return genres;
    }

    public void setGenres(Serializable genres) {
        Serializable oldGenres = this.genres;
        this.genres = genres;
        changeSupport.firePropertyChange("genres", oldGenres, genres);
    }
} // end class Games
Here are also some of the sites that might have the solution that I'm just not understanding:
https://forum.hibernate.org/viewtopic.php?t=946973
http://blog.xebia.com/2009/11/09/understanding-and-writing-hibernate-user-types/
// omitted hyperlink due to user restriction
Attempted Solutions
I'm able to get the data to display if I change the type of genres to String, but then it is immutable and I cannot edit it. This is what I changed to do that:
@Column(name = "genres")
private String genres;

public String getGenres() {
    return genres;
}

public void setGenres(String genres) {
    String oldGenres = this.genres;
    this.genres = genres;
    changeSupport.firePropertyChange("genres", oldGenres, genres);
}
I also attempted to create a UserType class for use with Hibernate (JPA 1.0), but had no idea what was going wrong there.
I also attempted to use @OneToMany and other annotations, but these aren't working, probably because I'm not using them properly.
What I'm Looking For
There has to be a simple way to get this data to display and make it editable, but since I'm completely new to persistence, I have no idea what to do.
The effort put into your question shows. Unfortunately, JPA does not currently support PostgreSQL arrays. The fundamental problem is that arrays are not frequently used in other databases, so heavy reliance on them is somewhat PostgreSQL-specific; you can therefore expect that general cross-database persistence APIs will not support them well, if at all. JPA is no exception.
I have been looking at writing my own persistence API in Java that would support arrays, but it hasn't happened yet, would be PostgreSQL-only when written, and would be based on a very different principle than JPA and friends.

Nhibernate: How to find responsible Field for SqlDateTime overflow exception

I know the reason for the exception (SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.) is a non-nullable DateTime field in an entity, so NHibernate tries to save a DateTime value smaller than MSSQL accepts.
The problem is that there are far too many entities in the project to find the right DateTime field by hand.
The exception occurs after a SaveOrUpdate(), but it is not triggered by the entity I want to save; it comes from some other entity which was loaded in the current session and is now affected by the flush().
How can I find out which field is actually responsible for the exception?
If you cast the exception to a SqlTypeException, that will expose the Data collection. Normally there is a single Key and a single Value in the collection; the value is the SQL that was attempted to be executed. By examining that DML you can see which table was being acted upon. Hopefully that table is narrow enough to make determining the offending column trivial.
Here's some simple code I use to spit out the Key and Value of the exception.
catch (SqlTypeException e)
{
    foreach (var key in e.Data.Keys)
    {
        System.Console.Write("Key is " + key.ToString());
    }
    foreach (var value in e.Data.Values)
    {
        Console.WriteLine("Value is " + value.ToString());
    }
}
Have you tried forcing NHibernate to output the generated SQL and reviewing that for the rogue DateTime? It'd be easier if you were using something like NHProfiler (I don't work for them, just a satisfied customer), but really all that does for you is show and isolate the SQL anyway, which you can do from the output window with a little extra effort. The trick is that if it's a really deep save there could be a lot of SQL to read through, but chances are you'll be able to spot it pretty quickly.
You can create a class that implements both IPreUpdateEventListener and IPreInsertEventListener as follows:
public class InsertUpdateListener : IPreInsertEventListener, IPreUpdateEventListener {
    public bool OnPreInsert(PreInsertEvent @event) {
        CheckDateTimeWithinSqlRange(@event.Persister, @event.State);
        return false;
    }

    public bool OnPreUpdate(PreUpdateEvent @event) {
        CheckDateTimeWithinSqlRange(@event.Persister, @event.State);
        return false;
    }

    private static void CheckDateTimeWithinSqlRange(IEntityPersister persister, IReadOnlyList<object> state) {
        var rgnMin = System.Data.SqlTypes.SqlDateTime.MinValue.Value;
        // There is a small but relevant difference between DateTime.MaxValue and SqlDateTime.MaxValue.
        // DateTime.MaxValue is bigger than SqlDateTime.MaxValue but still within the valid range of
        // values for SQL Server. Therefore we test against DateTime.MaxValue and not against
        // SqlDateTime.MaxValue. [Manfred, 04jul2017]
        //var rgnMax = System.Data.SqlTypes.SqlDateTime.MaxValue.Value;
        var rgnMax = DateTime.MaxValue;
        for (var i = 0; i < state.Count; i++) {
            if (state[i] != null
                && state[i] is DateTime) {
                var value = (DateTime)state[i];
                if (value < rgnMin /*|| value > rgnMax*/) { // we don't check max as SQL Server is happy with DateTime.MaxValue [Manfred, 04jul2017]
                    throw new ArgumentOutOfRangeException(persister.PropertyNames[i], value,
                        $"Property '{persister.PropertyNames[i]}' for class '{persister.EntityName}' must be between {rgnMin:s} and {rgnMax:s} but was {value:s}");
                }
            }
        }
    }
}
You also need to then register this event handler when you configure the session factory. Add an instance to Configuration.EventListeners.PreUpdateEventListeners and to Configuration.EventListeners.PreInsertEventListeners and then use the Configuration object when creating NHibernate's session factory.
What this does is this: Every time NHibernate inserts or updates an entity it will call OnPreInsert() or OnPreUpdate() respectively. Each of these methods in turn calls CheckDateTimeWithinSqlRange().
CheckDateTimeWithinSqlRange() iterates over all property values of the entity, i.e. the object, that is being saved. If a property value is not null, it then checks whether it is of type DateTime. If that is the case, it checks that it is not less than SqlDateTime.MinValue.Value (note the additional .Value to avoid exceptions). There is no need to check against SqlDateTime.MaxValue.Value if you are using SQL Server 2012 or later; it will happily accept even DateTime.MaxValue, which is a few ticks greater than SqlDateTime.MaxValue.Value.
If the value is outside of the allowed range this code will then throw an ArgumentOutOfRangeException with an appropriate message that includes the names of the class (entity) and property causing the problem as well as the actual value that was passed in. The message is similar to the equivalent SqlServerException for the SqlDateTime overflow exception but will make it easier to pinpoint the problem.
A couple of things to consider. Obviously this does not come for free. You will incur a runtime overhead as this logic consumes CPU. Depending on your scenario this may not be a problem. If it is, you can also consider optimizing the code given in this example to make it faster. One option could perhaps be to use caching to avoid the loop for the same class. Another option could be to use it only in test and development environments. For production you could then rely that the rest of the system operates correctly and the values will always be within valid range.
Also, be aware that this code introduces a dependency on SQL Server. NHibernate is typically used to avoid dependencies like this. Other database servers that are supported by NHibernate may have a different range of allowed values for datetime. Again, there are options for resolving this as well, e.g. by using different boundaries depending on SQL dialect.
Happy coding!

Designing a generic DB utility class

Time and again I find myself creating a database utility class which has multiple functions which all do almost the same thing but treat the result set slightly differently.
For example, consider a Java class which has many functions which all look like this:
public void doSomeDatabaseOperation() throws SQLException {
    Connection con = DriverManager.getConnection("jdbc:mydriver", "user", "pass");
    try {
        Statement stmt = con.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT whatever FROM table"); // query will be different each time
        while (rs.next()) {
            // handle result set - differently each time
        }
    } catch (Exception e) {
        // handle
    } finally {
        con.close();
    }
}
Now imagine a class with 20 of these functions.
As you can see, there's tons of boilerplate (opening a connection, the try-finally block), and the only things that change are the query and the way you handle the result set. This kind of code occurs in many languages (assuming you're not using an ORM).
How do you manage your DB utility classes so as to reduce code duplication? What does a typical DB utility class look like in your language/framework?
The way I did it in one of my projects was to follow what Spring does with its JDBC template and come up with a Query framework. Basically, create a common class which can take a select statement or PL/SQL call and bind parameters. If the query returns a result set, also pass a RowMapper. This RowMapper object will be called by the framework to convert each row into an object of any kind.
Example -
Query execute = new Query("{any select or pl/sql}",
        // Inputs and Outputs are for bind variables.
        new SQL.Inputs(Integer.class, ...),
        // Outputs is only meaningful for PL/SQL since the
        // ResultSetMetaData should be used to obtain queried columns.
        new SQL.Outputs(String.class));
If you want the rowmapper -
Query execute = new Query("{any select or pl/sql}",
        // Inputs and Outputs are for bind variables.
        new SQL.Inputs(Integer.class, ...),
        // Outputs is only meaningful for PL/SQL since the
        // ResultSetMetaData should be used to obtain queried columns.
        new SQL.Outputs(String.class), new RowMapper() {
            public Object mapRow(ResultSet rs, int rowNum) throws SQLException {
                Actor actor = new Actor();
                actor.setFirstName(rs.getString("first_name"));
                actor.setSurname(rs.getString("surname"));
                return actor;
            }
        });
Finally, a Row class is the output, which will hold the list of objects if you have passed the RowMapper -
for (Row r : execute.query(conn, id)) {
    // Handle the rows
}
You can go fancy and use Templates so that type safety is guaranteed.
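As an illustration of that type-safe direction, a minimal generic helper might look like this (QueryRunner and its nested RowMapper are illustrative names for a sketch, not the actual framework described above):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import javax.sql.DataSource;

public class QueryRunner {

    // Callback that turns one ResultSet row into a typed object
    public interface RowMapper<T> {
        T mapRow(ResultSet rs, int rowNum) throws SQLException;
    }

    private final DataSource dataSource;

    public QueryRunner(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Runs the query, binds the parameters and maps every row with the given mapper
    public <T> List<T> query(String sql, RowMapper<T> mapper, Object... binds) throws SQLException {
        List<T> results = new ArrayList<>();
        try (Connection con = dataSource.getConnection();
             PreparedStatement stmt = con.prepareStatement(sql)) {
            for (int i = 0; i < binds.length; i++) {
                stmt.setObject(i + 1, binds[i]);
            }
            try (ResultSet rs = stmt.executeQuery()) {
                int rowNum = 0;
                while (rs.next()) {
                    results.add(mapper.mapRow(rs, rowNum++));
                }
            }
        }
        return results;
    }
}

The caller gets back a List<T> of whatever the mapper produces, so the compiler enforces the row type end to end.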
Sounds like you could make use of a Template Method pattern here. That would allow you to define the common steps (and default implementations of them, where applicable) that all subclasses will take to perform the action. Then subclasses need only override the steps which differ: SQL query, DB-field-to-object-field mapping, etc.
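To make that concrete, a rough Template Method sketch in Java could look like this (class and method names are illustrative):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

// The base class fixes the invariant steps (statement, loop, cleanup);
// subclasses override only the query text and the per-row mapping.
public abstract class DatabaseOperation<T> {

    protected abstract String getQuery();

    protected abstract T handleRow(ResultSet rs) throws SQLException;

    public List<T> execute(Connection con) throws SQLException {
        List<T> results = new ArrayList<>();
        try (Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery(getQuery())) {
            while (rs.next()) {
                results.add(handleRow(rs));
            }
        }
        return results;
    }
}

A concrete subclass then only supplies the query string and the per-row mapping.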
When using .NET, the Data Access Application Block is in fairly widespread use to provide support for the following:
The [data access] application block was designed to achieve the following goals:
Encapsulate the logic used to perform the most common data access tasks.
Eliminate common coding errors, such as failing to close connections.
Relieve developers of the need to write duplicated code for common data access tasks.
Reduce the need for custom code.
Incorporate best practices for data access, as described in the .NET Data Access Architecture Guide.
Ensure that, as far as possible, the application block functions work with different types of databases.
Ensure that applications written for one type of database are, in terms of data access, the same as applications written for another type of database.
There are plenty of examples and tutorials of usage too: a google search will find msdn.microsoft, 4guysfromrolla.com, codersource.com and others.
