JCR, Jackrabbit: Building an XPath query to search by the year of a date and by subnode property values - jackrabbit

Greetings!
I am having a lot of trouble with the following question... I will try to be as clear as possible.
Currently I have a Jackrabbit JCR implementation running in our web application. Everything works fine, but a little (big) problem appears when trying to do some specific searches.
As a brief synopsis of the kind of data that is stored: we have three node classes, "Entry", which extends another node class named "BaseEntry", which in turn extends another called "BaseNode".
The Entry class represents a node in our JCR system and has a set of properties (mapped as attributes in the corresponding class); it also inherits the properties mapped in its superclasses.
I copy and paste the important parts of the class definitions and the properties of interest...
@Node(jcrType = "entry", extend = BaseEntry.class)
public class Entry extends BaseEntry {
    ... // nothing really important here
}

@Node(jcrType = "baseEntry", extend = BaseNode.class, isAbstract = true)
public abstract class BaseEntry extends BaseNode {

    @Collection(jcrType = "attachment",
                collectionConverter = NTCollectionConverterImpl.class)
    protected List<Attachment> attachments = new ArrayList<Attachment>();
    ...
}

@Node(jcrType = "baseNode", isAbstract = true)
public abstract class BaseNode {

    @Field(jcrName = "name", id = true)
    protected String name;

    @Field(jcrName = "creationDate")
    protected Date creationDate;
    ...
}
1) How can I write a predicate that selects only those nodes (entries) whose creationDate property falls in a specific year, ignoring the rest? The attribute is of type Date (in the class) and I guess the property is stored in an xs:dateTime format... I really do not know very well how a Date is actually matched in the underlying JCR system.
So far I have got to this...
There must be something like this: //element(*, entry)[getYear(@creationDate) == <year>]
I don't know whether <year> must be an integer, a string, ...
2) How can I write a predicate that selects only those nodes (entries) that contain attachments with a certain name?
Again, the Attachment class, the important part...
@Node(jcrType = "attachment", discriminator = true)
public class Attachment extends BaseNode implements Comparable<Attachment> {
    ...
}
So far I have got to this... it is working... but there must be a better way:
//element(*, entry) [jcr:contains(./*,'<nameOfInterest>')]
That's all, friends. I really apologize for any missing information that a reader may need to better understand the background of the matter; I guess this is all I can do. I am pretty new to Jackrabbit and JCR, and I have had to get my hands dirty with it without knowing very well what I am doing... and obviously it started to get very complicated...
Well, I hope some charitable soul can answer this and help, at least a little :D
Thanks in advance.
Greetings.
VĂ­ctor.

I'm not an expert, but I'll try to answer anyway:
Question 1
//element(*, entry)[getYear(@creationDate) == <year>]
I think you could use:
//element(*, entry)[
@creationDate >= '2001-01-01T00:00:00.0'
and @creationDate < '2002-01-01T00:00:00.0']
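For context, a minimal Java sketch of how such a range query could be run through the plain JCR API might look like this; the xs:dateTime cast, the entry node type name, and the session handling are assumptions about your setup, not something confirmed by the question:

import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class EntryYearSearch {

    // Returns the entry nodes whose creationDate falls inside the given year.
    public static NodeIterator findEntriesByYear(Session session, int year) throws RepositoryException {
        String xpath = "//element(*, entry)"
                + "[@creationDate >= xs:dateTime('" + year + "-01-01T00:00:00.000Z')"
                + " and @creationDate < xs:dateTime('" + (year + 1) + "-01-01T00:00:00.000Z')]";
        QueryManager qm = session.getWorkspace().getQueryManager();
        // Query.XPATH is deprecated in JCR 2.0 but still understood by Jackrabbit.
        Query query = qm.createQuery(xpath, Query.XPATH);
        return query.execute().getNodes();
    }
}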
Question 2
Select only those nodes (entries) that contain attachments with a certain name.
I only know the SQL-2 query, using equality on the node name. I'm not sure if this is what you are looking for:
select * from [nt:base] where name() = '<nameOfInterest>'
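If you are on JCR 2.0, a hedged Java sketch of narrowing that idea to attachment children of entry nodes could look like the following; the [entry] and [attachment] node type names and the ISCHILDNODE join are assumptions rather than something taken from the answer above, and if "name" means the mapped name property rather than the node name, compare child.[name] instead of NAME(child):

import javax.jcr.NodeIterator;
import javax.jcr.RepositoryException;
import javax.jcr.Session;
import javax.jcr.query.Query;
import javax.jcr.query.QueryManager;

public class AttachmentNameSearch {

    // Finds entry nodes that have a child attachment node with the given name.
    public static NodeIterator findEntriesWithAttachment(Session session, String name) throws RepositoryException {
        String sql2 = "SELECT parent.* FROM [entry] AS parent"
                + " INNER JOIN [attachment] AS child ON ISCHILDNODE(child, parent)"
                + " WHERE NAME(child) = $attachmentName";
        QueryManager qm = session.getWorkspace().getQueryManager();
        Query query = qm.createQuery(sql2, Query.JCR_SQL2);
        // Bind the name instead of concatenating it into the statement.
        query.bindValue("attachmentName", session.getValueFactory().createValue(name));
        return query.execute().getNodes();
    }
}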

Related

Create Class object through class name in Salesforce

I am new to Salesforce and want to code one requirement: I have an API name in a string variable, through which I want to create an object of that class.
For example, the object name is Account, which is stored in a string variable:
String var = 'Account';
Now I want to create an object of 'Account'. In Java this is possible with Class.forName('Account'), but a similar approach does not work in Salesforce Apex.
Can anyone please help me out with this?
Have a look at the Type class. The documentation for it isn't terribly intuitive; perhaps you'll benefit more from the article that introduced it: https://developer.salesforce.com/blogs/developer-relations/2012/05/dynamic-apex-class-instantiation-in-summer-12.html
I'm using this in managed package code which inserts Chatter posts (feed items) only if the org has Chatter enabled:
sObject fItem = (sObject)System.Type.forName('FeedItem').newInstance();
fItem.put('ParentId', UserInfo.getUserId());
fItem.put('Body', 'Bla bla bla');
insert fItem;
(If I hardcoded the FeedItem class and inserted it directly, it would mark my package as "requires Chatter to run".)
Alternative would be to build a JSON representation of your object (String with not only the type but also some field values). You could have a look at https://salesforce.stackexchange.com/questions/171926/how-to-deserialize-json-to-sobject
At the very least now you know what keywords & examples to search for :)
Edit:
Based on your code snippet - try something like this:
String typeName = 'List<Account>';
String content = '[{"attributes":{"type":"Account"},"Name":"Some account"},{"attributes":{"type":"Account"},"Name":"Another account"}]';
Type t = Type.forName(typeName);
List<sObject> parsed = (List<sObject>) JSON.deserialize(content, t);
System.debug(parsed);
System.debug(parsed[1].get('Name'));
// And if you need to write code "if it's accounts then do something special with them":
if (t == List<Account>.class) {
    List<Account> accs = (List<Account>) parsed;
    accs[1].BillingCountry = 'USA';
}

Dapper can't ignore nested objects for parameter?

I am beginning to use Dapper and love it so far. However, as I venture further into complexity, I have run into a big issue with it. The fact that you can pass an entire custom object as a parameter is great. However, when I add another custom object as a property, it no longer works, as Dapper tries to map the object as a SQL parameter. Is there any way to have it ignore custom objects that are properties of the main object being passed through? Example below:
public class CarMaker
{
    public string Name { get; set; }
    public Car MyCar { get; set; }
}
The property Name maps fine, but the property MyCar fails because it is a custom object. I will have to restructure my entire project if Dapper can't handle this, which... well, blows haha.
Dapper Extensions has a way to create custom maps, which allows you to ignore properties:
public class MyModelMapper : ClassMapper<MyModel>
{
    public MyModelMapper()
    {
        // use a custom schema
        Schema("not_dbo_schema");
        // have a custom primary key
        Map(x => x.ThePrimaryKey).Key(KeyType.Assigned);
        // use a different name property from the database column
        Map(x => x.Foo).Column("Bar");
        // ignore this property entirely
        Map(x => x.SecretDataMan).Ignore();
        // optional: map all other columns
        AutoMap();
    }
}
Here is a link
There is a much simpler solution to this problem.
If the property MyCar is not in the database, and it probably is not, then simply remove the {get;set;} and the "property" becomes a field, which is automatically ignored by DapperExtensions. If you are actually storing this information in a database, and it is a multi-valued property that is not serialized into JSON or a similar format, I think you are probably asking for complexity that you don't want. There is no SQL equivalent of the object "Car", and the properties in your model must map to something that SQL recognizes.
UPDATE:
If "Car" is part of a table in your database, then you can read it into the CarMaker object using Dapper's QueryMultiple.
I use it in this fashion:
var reader = dbConnection.QueryMultiple("Request_s", param: new { id = id }, commandType: CommandType.StoredProcedure);
if (reader != null)
{
    // Read<T>() returns an IEnumerable<T>, so take the first element with First()
    result = reader.Read<Models.Request>().First();
    result.reviews = reader.Read<Models.Review>();
}
The Request class has a field like this:
public IEnumerable<Models.Review> reviews;
The stored procedure looks like this:
ALTER PROCEDURE [dbo].[Request_s]
(
    @id int = null
)
AS
BEGIN
    SELECT *
    FROM [biospecimen].requests as bn
    WHERE bn.id = coalesce(@id, bn.id)
    ORDER BY bn.id desc;

    IF @id IS NOT NULL
    BEGIN
        SELECT *
        FROM [biospecimen].reviews as bn
        WHERE bn.request_id = @id;
    END
END
In the first read, Dapper ignores the field reviews, and in the second read, Dapper loads the information into the field. If a null set is returned, Dapper will load the field with a null set just like it will load the parent class with null contents.
The second select statement then reads the collection needed to complete the object, and Dapper stores the output as shown.
I have been implementing this in my Repository classes in situations where a target parent class has several child classes that are being displayed at the same time.
This prevents multiple trips to the database.
You can also use this approach when the target class is a child class and you need information about the parent class it is related to.

Design pattern to process different file types in OOP

I need to process a big set of files that at the moment are all loaded into memory in a List,
i.e.:
List<FileClass> Files;
Use:
Reader.Files; // List of files of the type
The File class has one attribute to match each FileInfo object property, i.e. Name, CreateDate, etc.
It also has a List of Lines of the type (LineNumber, Data).
Now I need to create the logic to interpret these files. They all have different interpretation logic, and each will be loaded onto its corresponding business object.
i.e:
Model model = new Model();
.emp => Process => Employee Class
.ord => Process => Order Class
model.AddObject(emp);
model.AddObject(ord);
My question is: what is the best design pattern for a problem of this sort?
All I can think of is... something like this:
public void ProcessFiles(List<FileClass> files)
{
    Model model = new Model();
    object obj = null;
    foreach (var file in files)
    {
        switch (Path.GetExtension(file.Name))
        {
            case ".emp":
                obj = BuildEmployee(file); // returns an Employee
                break;
            case ".ord":
                obj = BuildOrder(file); // returns an Order
                break;
        }
        model.AddObject(obj);
    }
}
Is there a better way to approach this?
This solution looks procedural to me; is there a better object-oriented approach to it?
Cheers
UPDATE:
I've come across a few options to solve this:
1) Use of partial classes for separation of concerns.
I have a data model that I don't want to mix with file processing, database use, etc. (Single Responsibility).
DATA MODEL:
public partial class Employee
{
    public int EmployeeID;
    public string FirstName;
    public string LastName;
    public decimal Salary;
}
INTERPRETER/FILE PARSER:
This partial class defines the logic to parse .emp files.
// This portion of the partial class separates the data model from file processing
public partial class Employee
{
    public void ProcessFile(string FileName)
    {
        // Do processing
    }
    ...
}
Interpreter object
public class Interpreter : IInterpreter
{
    public void Interpret(List<FileClass> files, Model model)
    {
        foreach (var file in files)
        {
            // Assumes Employee and Order both expose ProcessFile(string)
            dynamic obj = null;
            switch (Path.GetExtension(file.Name))
            {
                case ".emp":
                    obj = new Employee();
                    break;
                case ".ord":
                    obj = new Order(file);
                    break;
            }
            obj.ProcessFile(file.Name);
            model.AddObject(obj);
        }
    }
}
2) Perhaps using some sort of factory pattern...
The input is the file, together with its extension type.
This drives the type of object to be created (i.e. Employee, Order, anything) and also the logic to parse the file. Any ideas?
Well, it seems that you want to vary the processing behaviour based on the file type. Behaviour and Type being keywords. Does any behavioural pattern suit your requirement?
Or is it that the object creation is driven by the input file type? Then creation and type become important keywords.
You might want to take a look at strategy and factory method patterns.
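For instance, a rough sketch of the factory idea, keyed on the file extension, might look like this (written in Java for illustration; the registry, the builder signature and the use of the plain file name are assumptions, and the same shape works in C# with a Dictionary<string, Func<string, object>>):

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class EntryFactory {

    // Hypothetical registry: file extension -> builder that turns a file name into a business object.
    private final Map<String, Function<String, Object>> builders = new HashMap<>();

    public void register(String extension, Function<String, Object> builder) {
        builders.put(extension, builder);
    }

    public Object build(String fileName) {
        int dot = fileName.lastIndexOf('.');
        String extension = dot < 0 ? "" : fileName.substring(dot); // e.g. ".emp"
        Function<String, Object> builder = builders.get(extension);
        if (builder == null) {
            throw new IllegalArgumentException("No builder registered for " + fileName);
        }
        return builder.apply(fileName);
    }
}

Registering one builder per extension replaces the switch statement, so adding a new file type becomes a one-line change at the registration site, e.g. factory.register(".emp", name -> buildEmployee(name)).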
Here is something from the book Refactoring to Patterns:
The overuse of patterns tends to result from being patterns happy. We
are patterns happy when we become so enamored of patterns that we
simply must use them in our code. A patterns-happy programmer may work
hard to use patterns on a system just to get the experience of
implementing them or maybe to gain a reputation for writing really
good, complex code.
A programmer named Jason Tiscione, writing on SlashDot (see
http://developers.slashdot.org/comments.pl?sid=33602&cid=3636102),
perfectly caricatured patterns-happy code with the following version
of Hello World. ..... It is perhaps impossible to avoid being patterns
happy on the road to learning patterns. In fact, most of us learn by
making mistakes. I've been patterns happy on more than one occasion.
The true joy of patterns comes from using them wisely.

Google App Engine - JDODetachedFieldAccessException

I'm pretty new to JPA/JDO and the whole object DB world.
I have an entity with a set of strings; it looks a bit like this:
@Entity
public class Foo {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Key id;

    private Set<String> bars;

    public void setBars(Set<String> newBars) {
        if (this.bars == null)
            this.bars = new HashSet<String>();
        this.bars = newBars;
    }

    public Set<String> getBars() {
        return this.bars;
    }

    public void addBar(String bar) {
        if (this.bars == null)
            this.bars = new HashSet<String>();
        this.bars.add(bar);
    }
}
Now, in another part of the code, I'm trying to do something like this:
EntityManager em = EMF.get().createEntityManager();
Foo myFoo = em.find(Foo.class, fooKey);
em.getTransaction().begin();
myFoo.addBar(newBar);
em.merge(myFoo);
em.getTransaction().commit();
Where, of course, newBar is a String.
But, what I get is:
javax.jdo.JDODetachedFieldAccessException: You have just attempted to access field "bars" yet this field was not detached when you detached the object. Either dont access this field, or detach it when detaching the object.
I've searched for an answer, but I couldn't find one.
I've seen someone ask about a Set of strings, and he was told to add an @ElementCollection annotation.
I tried that, but I got an error about the String class metadata (I don't really understand what it means).
I would really appreciate some help on this thing, even a good reference to someone explaining this (in simple English).
OK,
So I found the answer in some blog.
So for anyone who's interested:
In order to use a Collection of simple data types (in JPA), a
@Basic
annotation should be added to the collection. So, from my example at the top, it should've been written:
@Basic
private Set<String> bars;
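Putting that together, a minimal sketch of the whole entity with the annotation in place might look like this (assuming the App Engine Key id from the question and the default GAE JPA setup):

import java.util.HashSet;
import java.util.Set;
import javax.persistence.Basic;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import com.google.appengine.api.datastore.Key;

@Entity
public class Foo {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Key id;

    // @Basic is what the answer above adds, so the collection of simple values is fetched with the entity.
    @Basic
    private Set<String> bars = new HashSet<String>();

    public Set<String> getBars() {
        return this.bars;
    }

    public void addBar(String bar) {
        this.bars.add(bar);
    }
}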
So you are using JPA, right? (I see EntityManager rather than JDO's PersistenceManager.) Since you are getting a JDO error, I suspect that your app isn't configured properly for JPA.
JPA docs: http://code.google.com/appengine/docs/java/datastore/jpa/overview.html
JDO docs: http://code.google.com/appengine/docs/java/datastore/jdo/overview.html
You need to pick one datastore wrapper and stick with it. The default new app with the Eclipse tools is configured for JDO, and it is a reasonable choice, but you'll have to change your annotations around a little bit.

How to map definitions stored in a DB to definitions in your source code?

I would like to get some ideas on how to use definitions that are stored in a DB (one used by your application) in the source code of the application.
Example:
Database
Cars
CarTypeDefinitions
A column of Cars links via a foreign key to a row in CarTypeDefinitions, and therefore defines the type of the car contained in Cars.
Cars contains entries like 'Aston Martin DB8', 'Range Rover' and 'Mercedes Actros'.
CarTypeDefinitions contains entries like 'Sports car', 'SUV' and 'Truck'.
Source code
Now I would like to be able to use these definitions in my source code as well. So somehow we need to create some kind of mapping between a row in the CarTypeDefinitions table and a (preferably) type-safe implementation in the source code.
One possible implementation
The first thing which comes to my mind (and I am especially looking for other solutions or feedback on this one) would be to create an Enum ECarTypeDefinitions.
public enum ECarTypeDefinitions
{
    SportsCar = 1,
    SUV = 2,
    Truck = 3
}
Now that we have a type-safe Enum we can use it e.g. like this:
public bool IsSportsCar(Car currentCar)
{
    return (currentCar.CarType == ECarTypeDefinitions.SportsCar);
}
The contents of that enum would be auto-generated from the contents of the CarTypeDefinitions table (by adding two additional columns for the name of the enum constant and its integer value).
This would also work the other way, e.g. generate the content of the CarTypeDefinitions DB table from the ECarTypeDefinitions Enum.
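For illustration, here is a hedged sketch of what such a generated, type-safe mapping could look like, written in Java (where an enum can carry the row's integer key directly; in C# you would keep the explicit values as above and cast the stored int back to ECarTypeDefinitions). The fromId helper and the constant names are assumptions about what the generator would emit:

import java.util.HashMap;
import java.util.Map;

// Hypothetical generated enum: one constant per row of the CarTypeDefinitions table.
public enum ECarTypeDefinitions {
    SPORTS_CAR(1),
    SUV(2),
    TRUCK(3);

    private static final Map<Integer, ECarTypeDefinitions> BY_ID = new HashMap<>();

    static {
        for (ECarTypeDefinitions type : values()) {
            BY_ID.put(type.id, type);
        }
    }

    private final int id;

    ECarTypeDefinitions(int id) {
        this.id = id;
    }

    public int getId() {
        return id;
    }

    // Resolves the foreign-key value read from the Cars table back to the enum constant.
    public static ECarTypeDefinitions fromId(int id) {
        ECarTypeDefinitions type = BY_ID.get(id);
        if (type == null) {
            throw new IllegalArgumentException("Unknown CarTypeDefinitions id: " + id);
        }
        return type;
    }
}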
I'm keen to hear about other ways how to tackle this problem. How have you dealt with this?
I do it the way you have suggested. Some others prefer to combine all constants into a "Lookup" table. You can look at an example of some of the pros and cons here: http://weblogs.foxite.com/andykramek/archive/2009/05/10/8419.aspx
Edit: Here's a thought that may help spark further ideas from you.
Create a class for each Car Type:
public class SportsCarType
{
}
Now add an attribute to CarTypeDefinition:
public class CarTypeDefinition
{
...
public string typeName;
}
Populate the new attribute (for each type you have) using typeof:
...
carTypeDefinition.typeName = typeof(SportsCarType).Name;
...
Finally your new IsSportsCar method:
public bool IsSportsCar(Car currentCar)
{
    return (currentCar.CarType.typeName == typeof(SportsCarType).Name);
}
I'm not familiar with Entity Framework, so perhaps it has a way to allow this kind of thing to be done more cleanly. I'm also a little rusty on C#.
