Hi all.
I am new to Camel.
I have a problem; here is the scenario.
The file has three fields in its body:
(seq, date, charge no.)
So I generated a POJO class for this data format:
import java.util.Date;

// POJO generated for the file body fields (seq, date, charge no.)
public class TestDataformat {
    int seq;
    Date date;
    String chargeNo;
}
I built the Camel application around this class.
Some days later, the data format changed: a name field was added.
So we had to modify the POJO class and rebuild the application.
But we don't want to rebuild the application and regenerate the POJO class every time the data format changes.
If we could use an XSD to create the POJO class dynamically at runtime, could we handle this without an additional build?
Is this assumption feasible, or is there another way to do this?
Thank you.
A simple solution to this kind of problem is to keep a Map for the extra fields; that way you won't have to keep updating the POJO. These links might be useful:
How to map unknown JSON properties with Jackson
Jackson deserialize extra fields as map
Hope it helps.
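A minimal sketch of that Map approach with Jackson, assuming the payload is unmarshalled as JSON (the extra map and its accessor names are illustrative, not from the original post):

import java.util.Date;
import java.util.HashMap;
import java.util.Map;
import com.fasterxml.jackson.annotation.JsonAnyGetter;
import com.fasterxml.jackson.annotation.JsonAnySetter;

public class TestDataformat {
    int seq;
    Date date;
    String chargeNo;

    // Any property not declared above (e.g. a later "name" field) is collected here,
    // so the class does not need to be regenerated when the format grows.
    private final Map<String, Object> extra = new HashMap<>();

    @JsonAnySetter
    public void setExtra(String key, Object value) {
        extra.put(key, value);
    }

    @JsonAnyGetter
    public Map<String, Object> getExtra() {
        return extra;
    }
}

With this in place, a new field in the incoming data simply shows up as an entry in the map instead of breaking deserialization.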
I am currently using Shopizer as a sort of headless CMS, leveraging its out-of-the-box admin pages and REST API for the content. There is a page for editing products in the admin system, but I would like to add and/or remove specific fields. Making changes to the base code seems to be the most obvious solution, but it is taking me a significant amount of time to implement successfully.
Is there some sort of config file or initialization process to customize the fields for creating categories and products using Shopizer's admin page? If that approach is not possible, what is the best practice for this scenario?
If you need to add fields, the easiest way is to add them to the model objects under
com.salesmanager.core.model.*
Example of an annotated field:
@Column(name = "IP_ADDRESS")
private String ipAddress;
Once you restart your instance, the new field will be available.
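As a rough illustration, an added field would sit inside one of those model classes like this (the class name and accessors here are placeholders, not Shopizer's actual code):

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical model class showing where such a field would live.
@Entity
public class CustomerAudit {

    @Id
    private Long id;

    @Column(name = "IP_ADDRESS")
    private String ipAddress;

    public String getIpAddress() {
        return ipAddress;
    }

    public void setIpAddress(String ipAddress) {
        this.ipAddress = ipAddress;
    }
}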
My model originally consisted of the following:
class Aggregate {
private SomeClassWithFields property;
}
Now I decided to introduce inheritance to SomeClassWithFields. This results in:
class Aggregate {
private AbstractBaseClass property;
}
The collection already contains a lot of documents. These documents do not contain a _class property in the database, since they were stored before the inheritance was introduced.
Is there a way to tell Spring Data MongoDB to use SomeClassWithFields as the default implementation of AbstractBaseClass if no _class property is present?
The other solution would be to add the _class property to all existing documents with a script, but this would take some time since we have a lot of documents.
I solved it by using an AbstractMongoEventListener.
The AbstractMongoEventListener has an onAfterLoad method, which I used to set the default _class value if none was present. :) This method is called before Spring maps the DBObject to my domain model, so it works.
Do note that I also needed to let Spring Data MongoDB know the mappingBasePackage so that it can read an Aggregate before writing one. This can be done by implementing the getMappingBasePackage method of the PreconfiguredAbstractMongoConfiguration class.
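A minimal sketch of such a listener, assuming a 1.x-era Spring Data MongoDB where the loaded document is exposed as a DBObject (the exact hook signature varies by version, and the listener name and field handling here are illustrative):

import com.mongodb.DBObject;
import org.springframework.data.mongodb.core.mapping.event.AbstractMongoEventListener;
import org.springframework.data.mongodb.core.mapping.event.AfterLoadEvent;

// Register this as a Spring bean so the mapping framework picks it up.
public class DefaultClassTypeListener extends AbstractMongoEventListener<Object> {

    @Override
    public void onAfterLoad(AfterLoadEvent<Object> event) {
        DBObject dbo = event.getDBObject();
        // Documents written before the inheritance change carry no _class marker on the
        // polymorphic field; fall back to the original concrete type in that case.
        // (Assumes the polymorphic value is stored under the "property" field from the question.)
        DBObject property = (DBObject) dbo.get("property");
        if (property != null && !property.containsField("_class")) {
            property.put("_class", SomeClassWithFields.class.getName());
        }
    }
}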
We are trying to implement a two-dimensional SolrCloud cluster where the first dimension is the collection and the second is the shard. The collection should be determined at runtime based on document properties.
I can see that this functionality is supported by SolrJ: CloudSolrClient has methods that accept a collection name, such as add(String collection, SolrInputDocument doc), so I registered a @Bean CloudSolrClient("zookeeper.host"). But apparently that isn't enough, because the methods in SolrTemplate, which Spring Data Solr uses, don't accept a collection name.
Since SolrTemplate uses SolrClient under the hood, I tried to work around this problem by extending SolrTemplate and overriding the saveBean and saveBeans methods to delegate to CloudSolrClient#add(String collection, SolrInputDocument doc) and CloudSolrClient#add(String collection, Collection<SolrInputDocument> docs). It worked fine until I needed to do the same for queries. SolrTemplate#executeSolrQuery is package-private and final, so I can't override it. And here I am stuck!
To summarise my question: is there a way to specify a collection name in Spring Data Solr at runtime?
I would greatly appreciate any help!
Regards,
Eugeny
My problem was a bit different, but I also had an issue with the collection name in queries, and in my case adding @SolrDocument(solrCoreName = "core_to_which_model_class_belong") to the model class solved the problem.
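For illustration, this is roughly what the annotated model class looks like (only the annotation usage comes from the answer; the class and field names are placeholders, and in older Spring Data Solr versions the attribute is solrCoreName):

import org.apache.solr.client.solrj.beans.Field;
import org.springframework.data.solr.core.mapping.SolrDocument;

// Binds this document type to a specific core/collection so Spring Data Solr
// routes its operations there instead of the template's default.
@SolrDocument(solrCoreName = "core_to_which_model_class_belong")
public class ProductDocument {

    @Field("id")
    private String id;

    @Field("name")
    private String name;
}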
This question is related to Spring Data MongoDB model classes without annotations.
I have a situation where I need to store my domain classes in either an RDBMS or a NoSQL store. Say, for example, my domain classes are User, Feature, PaymentRequest, Order, OrderLine, OrderHeader, etc.
I cannot use any annotations on my domain classes, for various reasons.
The application team will specify which persistence store they would like to use. They might configure it to store in MongoDB, MySQL, Oracle, etc.
My requirement is that when storing in MongoDB, say using spring-data-mongodb, I want to leverage DBRefs for the associated objects in my domain objects.
How can I achieve this with spring-data-mongodb without using annotations in my model classes?
class Role {
    String id;
    String roleName;
}
class User {
String id;
String firstName;
String lastName;
List<Role> userRoles;
}
When I save User object I want to ensure that in MongoDB Role objects are stored as DBRefs instead of actual Role object graph.
My question is: without using annotations in my User and Role classes, how can I achieve this?
I searched the user forums and could not find a way; that's why I'm posting my question here.
Thanks,
Kishore Veleti A.V.K.
Not sure if you ever figured this out, but you can use AspectJ to create an ITD (inter-type declaration) to weave the annotations into the class without having to modify the original code.
For example, to turn your userRoles into a DBRef, you just need this aspect:
import org.springframework.data.mongodb.core.mapping.DBRef;

// Adds @DBRef to the User.userRoles field without touching the User source.
privileged aspect User_Mongo {
    declare @field: * User.userRoles : @DBRef;
}
This simply adds the @DBRef annotation to any field within User named userRoles. You can look at the AspectJ documentation for more information on field patterns and ITDs.
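For comparison, the woven result is equivalent to having written the annotation directly on the field (shown here only to illustrate the effect; the original classes stay annotation-free):

import java.util.List;
import org.springframework.data.mongodb.core.mapping.DBRef;

class User {
    String id;
    String firstName;
    String lastName;

    // With the aspect applied, userRoles is persisted as a list of DBRefs
    // pointing at documents in the Role collection, not as embedded objects.
    @DBRef
    List<Role> userRoles;
}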
I have a simple entity class in a WPF application that essentially looks like this:
public class Customer : MyBaseEntityClass
{
private IList<Order> _Orders;
public virtual IList<Order> Orders
{
get { return this._Orders; }
set {this._Orders = new ObservableCollection<Order>(value);}
}
}
I'm also using the Fluent automapper in an offline utility to create an NHibernate config file, which is then loaded at runtime. This all works fine, but there's an obvious performance hit due to the fact that I'm not passing the original collection back to NHibernate, so I'm trying to add a convention to get NHibernate to create the collection for me:
public class ObservableListConvention : ICollectionConvention
{
public void Apply(ICollectionInstance instance)
{
Type collectionType =
typeof(uNhAddIns.WPF.Collections.Types.ObservableListType<>)
.MakeGenericType(instance.ChildType);
instance.CollectionType(collectionType);
}
}
As you can see, I'm using one of the uNhAddIns collections, which I understand is supposed to support both the convention and change notification, but for some reason doing this seems to break lazy loading. If I load a Customer record like this...
var result = this.Session.Get<Customer>(id);
...then the Orders field does get assigned an instance of type PersistentObservableGenericList but its EntityId and EntityName fields are null, and attempting to expand the orders results in the dreaded "illegal access to loading collection" message.
Can anyone tell me what I'm doing wrong and/or what I need to do to get this to work? Am I correct in assuming that the original proxy object (which normally contains the Customer ID needed to lazy-load the Orders member) is being replaced by the uNhAddIns collection item, which isn't tracking the correct object?
UPDATE: I have created a test project demonstrating this issue; it doesn't reference the uNhAddIns project directly, but the collection classes have been added manually. It should be pretty straightforward how it works, but basically it creates a database from the domain, adds a record with a child list, and then tries to load it back into another session using the collection class as the implementation for the child list. An assert is thrown because lazy loading fails.
I FINALLY figured out the answer to this myself... the problem was due to my use of ObservableListType. In NHibernate semantics, a list is an ordered collection of entities; if you want to use something for IList, then you want an unordered collection, i.e. a bag.
The Eureka moment for me came after reading the answer to another StackOverflow question about this topic.