JCR SQL2 query with dynamic date comparison - jackrabbit

I need to query the JCR repository to find nodes where a date property (e.g. jcr:created) is more recent than a specific date.
Using SQL2, I do the check "jcr:created > date" like this (which works fine):
SELECT * FROM [nt:base] AS s WHERE s.[jcr:created] > CAST('2012-01-05T00:00:00.000Z' AS DATE)
Now the tricky part:
There's an additional property which declares a number of days which have to be added to the jcr:created date dynamically.
Let's say the property contains 5 (days) then the query should not check "jcr:created > date" but rather "(jcr:created + 5) > date". The next node containing the property value 10 should be checked by "(jcr:created + 10) > date".
Is there any intelligent / dynamic operand which could do that? As the property is node-specific, I cannot add it statically to the query; it has to be read from each node.

Jackrabbit doesn't currently support such dynamic constraints.
I believe the best solution for now is to run the query with a fixed date constraint and then explicitly filter the results yourself.
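For illustration, here is a minimal sketch of that post-filtering approach using the plain JCR API. It assumes an existing session and assumes the per-node offset lives in a property named days (an invented name); the fixed constraint is relaxed by the largest offset you expect (30 days here, also an assumption) so that no candidate node is missed:
import java.util.Calendar;
import javax.jcr.Node;
import javax.jcr.NodeIterator;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

// Query with a relaxed fixed date (2012-01-05 minus the maximum expected offset of 30 days)
QueryManager qm = session.getWorkspace().getQueryManager();
Query query = qm.createQuery(
    "SELECT * FROM [nt:base] AS s "
        + "WHERE s.[jcr:created] > CAST('2011-12-06T00:00:00.000Z' AS DATE)",
    Query.JCR_SQL2);

Calendar threshold = Calendar.getInstance();
threshold.clear();
threshold.set(2012, Calendar.JANUARY, 5);

NodeIterator nodes = query.execute().getNodes();
while (nodes.hasNext()) {
    Node node = nodes.nextNode();
    Calendar created = node.getProperty("jcr:created").getDate();
    long days = node.hasProperty("days") ? node.getProperty("days").getLong() : 0;
    created.add(Calendar.DAY_OF_MONTH, (int) days);   // jcr:created + n days
    if (created.after(threshold)) {
        // node satisfies "(jcr:created + days) > date"
    }
}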
An alternative solution would be to precompute the "jcr:created + extratime" value and store it in an additional property. Such computation could either be located in the code that creates/updates the nodes in the first place, or you could place it in an observation listener so it'll get triggered regardless of how the node is being modified.
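A rough sketch of the observation-listener variant, assuming the offset is stored in a days property and the precomputed date goes into an expiry property (both names are invented for the example):
import java.util.Calendar;
import javax.jcr.Node;
import javax.jcr.Session;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;

public class ExpiryListener implements EventListener {

    private final Session session;

    public ExpiryListener(Session session) {
        this.session = session;
    }

    @Override
    public void onEvent(EventIterator events) {
        try {
            while (events.hasNext()) {
                Event event = events.nextEvent();
                Node node = session.getNode(event.getPath());
                if (node.hasProperty("jcr:created") && node.hasProperty("days")) {
                    Calendar expiry = node.getProperty("jcr:created").getDate();
                    expiry.add(Calendar.DAY_OF_MONTH,
                            (int) node.getProperty("days").getLong());
                    node.setProperty("expiry", expiry);   // precomputed "jcr:created + days"
                }
            }
            session.save();
        } catch (Exception e) {
            // log/handle repository exceptions as appropriate
        }
    }
}
Register it for node-added events, e.g. session.getWorkspace().getObservationManager().addEventListener(listener, Event.NODE_ADDED, "/", true, null, null, false); the query can then compare the precomputed property directly: WHERE s.[expiry] > CAST('2012-01-05T00:00:00.000Z' AS DATE).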

I needed to find documents created in the last 12 hours.
I had a hard time figuring out how to get a valid date into the CAST function, so I'm pasting it for others who may need it.
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.TimeZone;

SimpleDateFormat dateFormat = new SimpleDateFormat("yyyy-MM-dd'T'HH:mm:ss.SSS'Z'");
dateFormat.setTimeZone(TimeZone.getTimeZone("UTC")); // the literal 'Z' implies UTC
Calendar cal = Calendar.getInstance();
cal.add(Calendar.HOUR, -12);
String queryString = "SELECT * FROM [nt:base] AS s WHERE "
    + "ISDESCENDANTNODE([/content/en/documents/]) "
    + "and s.[jcr:created] >= CAST('" + dateFormat.format(cal.getTime()) + "' AS DATE)";

I found the recipe here:
test.sql2.txt
It's a list of test queries. My query looks like:
SELECT * FROM [nt:base] where [jcr:created] > cast('+2012-01-01T00:00:00.000Z' as date)
Everything inside the cast string is required: +yyyy-MM-ddT00:00:00.000Z

Related

SQL Report Builder: Use aggregate function on ReportItem

I've entered the following expression for a given cell, which is essentially a dollar value divided by a quantity to get a cents per gallon value, labeled as Textbox41:
=ReportItems!Total_Gross_Profit2.Value / ReportItems!Gallon_Qty3.Value
What I was trying to do is use this expression for an AVG aggregation in another cell =avg(ReportItems!Textbox41.Value), but I'm getting an error:
The Value expression for the textrun
'Textbox79.Paragraphs[0].TextRuns[0]' uses an aggregate function on a
report item. Aggregate functions can be used only on report items
contained in page headers and footers.
Is there some limitation that does not allow aggregations on ReportItems? I also tried the following, which did not work either:
=AVG(ReportItems!Total_Gross_Profit2.Value / ReportItems!Gallon_Qty3.Value)
Where am I going wrong here?
Regarding your question:
Is there some limitation that does not allow aggregations on ReportItems?
You have your answer in the error message you provided.
As for the resolution, it's hard to give precise guidance with the information provided, but in general, start thinking in terms of dataset fields instead of report items. If you're operating from inside a matrix or table, and the values for Total_Gross_Profit2 and Gallon_Qty3 look something like this:
= ReportItems!ProfitsFromX.Value + ReportItems!ProfitsFromY.Value
= ReportItems!GallonQtyA.Value + ReportItems!GallonQtyB.Value
Point to the fields directly instead:
= Fields!ProfitsFromX.Value + Fields!ProfitsFromY.Value
= Fields!GallonQtyA.Value + Fields!GallonQtyB.Value
That way, when it comes to aggregation, it's clearer what to do:
= avg(
(Fields!ProfitsFromX.Value + Fields!ProfitsFromY.Value)
/ (Fields!GallonQtyA.Value + Fields!GallonQtyB.Value)
)
And if you find that cumbersome, you can create calculated fields on the dataset object, and reference those instead where appropriate.

MongoDB numeric index

I was wondering if it's possible to create a numeric count index where the first document would be 1 and the count would increase as new documents are inserted. If so, can it also be applied to documents imported via mongoimport? I have created an index via db.collection.createIndex( {index : 1} ) but it doesn't seem to be applying.
I would strongly recommend using ObjectId as your _id field. It works well as an identifier in distributed systems, and it is also based on the date it was created. MongoDB also indexes _id automatically.
Example using Morphia:
Date d = ...;
Query<MyClass> query = datastore.createQuery(MyClass.class);
query.field("_id").greaterThanOrEq(new ObjectId(d));
query.sort("_id");
query.limit(100);
List<MyClass> myDocs = query.asList();
This would fetch all documents created since date d in order of creation.
To load the next batch, change to:
query.field("_id").greaterThan(lastDoc.getId());
This will very efficiently load the next batch based on the ID of the last document from the previous batch.
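Putting the two together, a rough pagination loop could look like the sketch below. It reuses the same Morphia query calls shown above and assumes MyClass exposes its ObjectId through a getId() getter (a hypothetical accessor):
ObjectId lastSeen = new ObjectId(d);   // d is the Date from the example above
boolean firstBatch = true;
while (true) {
    Query<MyClass> query = datastore.createQuery(MyClass.class);
    if (firstBatch) {
        query.field("_id").greaterThanOrEq(lastSeen);   // include documents created exactly at d
    } else {
        query.field("_id").greaterThan(lastSeen);       // continue after the last document seen
    }
    query.sort("_id");
    query.limit(100);
    List<MyClass> batch = query.asList();
    if (batch.isEmpty()) {
        break;                                          // no more documents
    }
    // ... process the batch ...
    lastSeen = batch.get(batch.size() - 1).getId();
    firstBatch = false;
}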

TypeError: can't compare datetime.date to DateProperty

I am trying to query if a certain date belongs to a specific range of dates. Source code example:
billing_period_found = BillingPeriod.query(
    ndb.AND(
        transaction.date > BillingPeriod.start_date,
        transaction.date < BillingPeriod.end_date)
).get()
Data definition:
class Transaction(ndb.Model):
    date = ndb.DateProperty(required=False)

class BillingPeriod(ndb.Model):
    start_date = ndb.DateProperty(required=False)
    end_date = ndb.DateProperty(required=False)
Getting the following error:
TypeError: can't compare datetime.date to DateProperty
The error message does make sense, because datetime is different from DateProperty. However, as you can see, transaction.date is not defined as a datetime, so I don't see where this attempt to compare datetime to date is coming from. Anyway, if I figure out how to convert datetime to DateProperty, I guess it would fix the problem.
Any ideas on how to solve this?
Thanks!
The App Engine datastore does not allow queries with inequalities on multiple properties (not a limitation of ndb, but of the underlying datastore). Selecting date-range entities that contain a certain date is a typical example of a task that this makes impossible to achieve in a single query.
Check out Optimizing a inequality query in ndb over two properties for an example of this question and, in its answer, one suggestion that might work: query for (in your case) all BillingPeriod entities with end_date greater than the desired date, perhaps with a projection to fetch just their key and start_date; then, in your own application, keep only those with start_date less than the desired date (if you only want one of them, a next over the iterator will stop as soon as it finds one).
Edit: the issue above is problem #1 with this code; once solved, problem #2 arises: as clearly listed at https://cloud.google.com/appengine/docs/python/ndb/queries, the property in ndb queries must always be on the left of the comparison operator. So you can't write date < BillingPeriod.end_date, as that would put the property on the right; instead, write BillingPeriod.end_date > date.

Salesforce Junction Objects

To all Salesforce experts, I need some assistance. I have my contacts and a custom object named Programs. I created a junction object using two master-detail relationships with contacts and programs. I want to avoid relating the same contact to the same program twice. I tried triggers, but I couldn't write the test coverage needed to use them outside the sandbox.
I went back to basics and created a unique text field. I tried to use a default value, but EVERYTHING I write in that crap is wrong -_-. I tried Contact__r.Email & "-" & Program__r.Name but to no avail.
I tried workflow rules with a field update, but my field update NEVER runs (yes, I did activate the workflow rule), and I didn't know what to write in my rule's code.
The workflow firing condition could be simply a formula that says true. Alternatively use "every time record is inserted". It also depends whether your master-details are set once and that's it or they will be "reparentable" (option introduced in Summer '12 I think). Maybe post a screenshot / text description of your firing condition? Also - is your unique field set to "case sensitive"?
As for the formula to populate the unique field - something like Contact__c + ' ' + Program__c (or whatever the API names of your fields are) should be OK. Don't use Contact__r.Email etc as these don't have to be unique...
You'll have to somehow fill in the uniqueness criteria for all existing records (maybe that's why you claimed it doesn't work?). If you can use Apex for data fixes - something like this should get you started.
List<Junction__c> junctions = [SELECT Contact__c, Program__c
                               FROM Junction__c
                               WHERE Unique_Text_Field__c = null
                               LIMIT 10000];
for (Junction__c j : junctions) {
    String key = String.valueOf(j.Contact__c).left(15) + ' ' + String.valueOf(j.Program__c).left(15);
    j.Unique_Text_Field__c = key;
}
update junctions;
Keep rerunning it until it starts to show 0 rows processed. The Ids are cut down to 15 chars because in Apex you'd usually see the full 18-char Id, but workflows use the 15-char version.

Adding a projection to an NHibernate criteria stops it from performing default entity selection

I'm writing an NHibernate criteria that selects data supporting paging. I'm using the COUNT(*) OVER() expression from SQL Server 2005(+) to get hold of the total number of available rows, as suggested by Ayende Rahien. I need that number to be able to calculate how many pages there are in total. The beauty of this solution is that I don't need to execute a second query to get hold of the row count.
However, I can't seem to manage to write a working criteria (Ayende only provides an HQL query).
Here's an SQL query that shows what I want and it works just fine. Note that I intentionally left out the actual paging logic to focus on the problem:
SELECT Items.*, COUNT(*) OVER() AS rowcount
FROM Items
Here's the HQL:
select item, rowcount()
from Item item
Note that the rowcount() function is registered in a custom NHibernate dialect and resolves to COUNT(*) OVER() in SQL.
A requirement is that the query is expressed using a criteria. Unfortunately, I don't know how to get it right:
var query = Session
.CreateCriteria<Item>("item")
.SetProjection(
Projections.SqlFunction("rowcount", NHibernateUtil.Int32));
Whenever I add a projection, NHibernate doesn't select item (like it would without a projection), just the rowcount(), while I really need both. Also, I can't seem to project item as a whole, only its properties, and I really don't want to list all of them.
I hope someone has a solution to this. Thanks anyway.
I think it is not possible with Criteria; it has some limits.
You could get the id and load items in a subsequent query:
var query = Session
.CreateCriteria<Item>("item")
.SetProjection(Projections.ProjectionList()
.Add(Projections.SqlFunction("rowcount", NHibernateUtil.Int32))
.Add(Projections.Id()));
If you don't like it, use HQL; you can set the maximum number of results there too:
IList<Item> result = Session
    .CreateQuery("select item, rowcount() from Item item where ..." )
    .SetMaxResults(100)
    .List<Item>();
Use CreateMultiCriteria.
You can execute 2 simple statements with only one hit to the DB that way.
I am wondering why using Criteria is a requirement. Can't you use session.CreateSQLQuery? If you really must do it in one query, I would have suggested pulling back the Item objects and the count, like:
select {item.*}, count(*) over()
from Item {item}
...this way you can get back Item objects from your query, along with the count. If you experience a problem with Hibernate's caching, you can also configure the query spaces (entity/table caches) associated with a native query so that stale query cache entries will be cleared automatically.
If I understand your question properly, I have a solution. I struggled quite a bit with this same problem.
Let me quickly describe the problem I had, to make sure we're on the same page. My problem came down to paging. I want to display 10 records in the UI, but I also want to know the total number of records that matched the filter criteria. I wanted to accomplish this using the NH criteria API, but when adding a projection for row count, my query no longer worked, and I wouldn't get any results (I don't remember the specific error, but it sounds like what you're getting).
Here's my solution (copy & paste from my current production code). Note that "SessionError" is the name of the business entity I'm retrieving paged data for, according to 3 filter criterion: IsDev, IsRead, and IsResolved.
ICriteria crit = CurrentSession.CreateCriteria(typeof (SessionError))
    .Add(Restrictions.Eq("WebApp", this));
if (isDev.HasValue)
    crit.Add(Restrictions.Eq("IsDev", isDev.Value));
if (isRead.HasValue)
    crit.Add(Restrictions.Eq("IsRead", isRead.Value));
if (isResolved.HasValue)
    crit.Add(Restrictions.Eq("IsResolved", isResolved.Value));
// Order by most recent
crit.AddOrder(Order.Desc("DateCreated"));
// Copy the ICriteria query to get a row count as well
ICriteria critCount = CriteriaTransformer.Clone(crit)
    .SetProjection(Projections.RowCountInt64());
critCount.Orders.Clear();
// NOW add the paging vars to the original query
crit = crit
    .SetMaxResults(pageSize)
    .SetFirstResult(pageNum_oneBased * pageSize);
// Set up a multi criteria to get your data in a single trip to the database
IMultiCriteria multCrit = CurrentSession.CreateMultiCriteria()
    .Add(crit)
    .Add(critCount);
// Get the results
IList results = multCrit.List();
List<SessionError> sessionErrors = new List<SessionError>();
foreach (SessionError sessErr in ((IList)results[0]))
    sessionErrors.Add(sessErr);
numResults = (long)((IList)results[1])[0];
So I create my base criteria, with optional restrictions. Then I CLONE it, and add a row count projection to the CLONED criteria. Note that I clone it before I add the paging restrictions. Then I set up an IMultiCriteria to contain the original and cloned ICriteria objects, and use the IMultiCriteria to execute both of them. Now I have my paged data from the original ICriteria (and I only dragged the data I need across the wire), and also a raw count of how many actual records matched my criteria (useful for display or creating paging links, or whatever). This strategy has worked well for me. I hope this is helpful.
I would suggest investigating custom result transformer by calling SetResultTransformer() on your session.
Create a formula property in the class mapping:
<property name="TotalRecords" formula="count(*) over()" type="Int32" not-null="true"/>
IList<...> result = criteria.SetFirstResult(skip).SetMaxResults(take).List<...>();
totalRecords = (result != null && result.Count > 0) ? result[0].TotalRecords : 0;
return result;
