The title is a bit of a teaser, and it is of course my fault if it does not work as it should.
I want to perform a data transfer from an RDBMS to Solr and MongoDB.
To do that, I have to complete the following steps (for example):
Get customers ids to transfer
Get customers details
Get customers invoices
Get customers payments
Then, aggregate and save to MongoDB and Solr for indexing.
Here is my code, but I cannot get it to work:
from("seda:initial-data-transfer")
.setProperty("recipientList", simple("direct:details,direct:invoices,direct:payments"))
.setProperty("afterAggregate", simple("direct:mongodb,direct:solr"))
.setBody(constant("{{query.initial-data-transfer.ids}}"))
.to(jdbc)
.process(new RowSetIdsProcessor())
.split().tokenize(",", 1000) // ~200k ids - group by 1000 ids
.to("direct:customers-ids");
from("direct:customers-ids")
.recipientList(exchangeProperty("recipientList").tokenize(","))
// ? .aggregationStrategy(new CustomerAggregationStrategy()).parallelProcessing()
.aggregate(header("CamelCorrelationId"), new CustomerAggregationStrategy())
.completionPredicate(new CustomerAggregationPredicate()) // true if details + invoices + payments, etc ....
// maybe a timeOut here ?
.process(businessDataServiceProcessor)
.recipientList(exchangeProperty("afterAggregate").tokenize(","));
from("direct:details")
.setHeader("query", constant("{{query.details}}"))
.bean(SqlTransform.class,"detailsQuery").to(jdbc)
.process(new DetailsProcessor());
from("direct:invoices")
.setHeader("query", constant("{{query.invoices}}"))
.bean(SqlTransform.class,"invoicessQuery").to(jdbc)
.process(new InvoicesProcessor());
I do not understand how the AggregationStrategy works.
Sometimes I can process 2 or 3 batches of 1000 ids and save them to MongoDB and Solr, but after that, all exchanges arriving in the AggregationStrategy are empty...
I have tried a lot of things, but each time the aggregation fails.
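For context, the contract Camel expects from an AggregationStrategy, shown as a minimal generic sketch not tied to the routes above: aggregate() is invoked once per incoming exchange of a correlation group, oldExchange is null on the very first call, and the exchange you return becomes the accumulator passed to the next call.

import org.apache.camel.Exchange;
import org.apache.camel.processor.aggregate.AggregationStrategy; // Camel 2.x package; org.apache.camel.AggregationStrategy in 3.x

// Illustrative only: concatenates String bodies as they arrive.
public class AppendBodiesStrategy implements AggregationStrategy {
    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        if (oldExchange == null) {
            // First exchange of the correlation group: nothing to merge yet.
            return newExchange;
        }
        String merged = oldExchange.getIn().getBody(String.class)
                + "," + newExchange.getIn().getBody(String.class);
        oldExchange.getIn().setBody(merged);
        return oldExchange; // released downstream once the group completes
    }
}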
Thanks for your help.
Update:
Here is part of the CustomerAggregationStrategy:
public class CustomerAggregationStrategy implements AggregationStrategy {

    @Override
    public Exchange aggregate(Exchange oldExchange, Exchange newExchange) {
        Message newIn = newExchange.getIn();
        CustomerDataCollector collector = null;
        if (oldExchange == null) {
            int completionSize = newExchange.getProperty("completionSize", Integer.class);
            collector = new CustomerDataCollector(completionSize);
            CollectData(collector, newIn, newExchange);
            newIn.setBody(collector);
            return newExchange;
        }
        collector = oldExchange.getIn().getBody(CustomerDataCollector.class);
        CollectData(collector, newIn, newExchange);
        return oldExchange;
    }

    private void CollectData(CustomerDataCollector collector, Message message, Exchange exchange) {
        String recipientListEndpoint = (String) exchange.getProperty(Exchange.RECIPIENT_LIST_ENDPOINT);
        switch (recipientListEndpoint) {
            case "direct://details":
                collector.setDetails(message.getBody(Map.class));
                break;
            case "direct://invoices":
                collector.setInvoices(message.getBody(Map.class));
                break;
            case "direct://payments":
                collector.setPayments(message.getBody(Map.class));
                break;
        }
    }
}
Update:
I can log this in the CustomerAggregationStrategy:
String camelCorrelationId = (String)exchange.getProperty(Exchange.CORRELATION_ID);
[t-AggregateTask] .i.c.a.CustomerAggregationStrategy : CustomerAggregationStrategy.CollectData : direct://details ID-UC-0172-50578-1484523575668-0-5
[t-AggregateTask] .i.c.a.CustomerAggregationStrategy : CustomerAggregationStrategy.CollectData : direct://invoices ID-UC-0172-50578-1484523575668-0-5
[t-AggregateTask] .i.c.a.CustomerAggregationStrategy : CustomerAggregationStrategy.CollectData : direct://payments ID-UC-0172-50578-1484523575668-0-5
The values of the CamelCorrelationId are the same, as expected.
I think the CamelCorrelationId is correct, isn't it?
OK, it is better now.
After the tokenizer, I set the property CustomCorrelationId like this:
.split().tokenize(",", 1000)
    .setProperty("CustomCorrelationId", header("breadcrumbId"))
    .to("direct:customers-ids")
And I aggregate on this value like this:
from("direct:customers-ids")
.recipientList(exchangeProperty("recipientList").tokenize(","))
from("direct:details")
.setHeader("query", constant("{{query.details}}"))
.bean(SqlTransform.class,"detailsQuery").to(jdbc)
.process(new DetailsProcessor())
.to("direct:aggregate");
...
from("direct:aggregate").routeId("aggregate")
.log("route : ${routeId}")
.aggregate(property("CustomCorrelationId"), new CustomAggregationStrategy())
.completionPredicate(new CustomerAggregationPredicate())
.process(businessDataServiceProcessor)
.recipientList(exchangeProperty("afterAggregate").tokenize(","));
This works fine now and the data is correctly aggregated. Thanks for your help.
You pointed me in the right direction.
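One possible hardening of the final route, picking up the earlier "maybe a timeout here?" remark: the aggregate DSL also accepts a completionTimeout, so a group that never satisfies the predicate (say, because one recipient route failed) is eventually flushed instead of being held forever. A sketch with an assumed 60-second value:

from("direct:aggregate").routeId("aggregate")
    .aggregate(property("CustomCorrelationId"), new CustomerAggregationStrategy())
        .completionPredicate(new CustomerAggregationPredicate())
        .completionTimeout(60000) // assumed value; tune to the slowest recipient route
    .process(businessDataServiceProcessor)
    .recipientList(exchangeProperty("afterAggregate").tokenize(","));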
My app was running smoothly, but now I am getting an error in Kapt Debug Kotlin. I have updated the versions of the dependencies in the Gradle file but am still facing the issue. How can it be resolved? I saw somewhere to check your Room database, DAO, and data class, but I am still not able to figure out what the issue is.
The error points to this file.
ROOM DATABASE
@Database(entities = [Transaction::class], version = 1, exportSchema = false)
abstract class MoneyDatabase : RoomDatabase() {

    abstract fun transactionListDao(): transactionDetailDao

    companion object {
        // Singleton prevents multiple instances of the database opening at the same time.
        @Volatile
        private var INSTANCE: MoneyDatabase? = null

        fun getDatabase(context: Context): MoneyDatabase {
            // If INSTANCE is not null, return it; if it is, create the database.
            return INSTANCE ?: synchronized(this) {
                val instance = Room.databaseBuilder(
                    context.applicationContext,
                    MoneyDatabase::class.java,
                    "transaction_database"
                ).build()
                INSTANCE = instance
                // return instance
                instance
            }
        }
    }
}
DAO
@Dao
interface transactionDetailDao {

    @Insert(onConflict = OnConflictStrategy.IGNORE)
    suspend fun insert(transaction: Transaction)

    @Delete
    suspend fun delete(transaction: Transaction)

    @Update
    suspend fun update(transaction: Transaction)

    @Query("SELECT * FROM transaction_table ORDER BY id ASC")
    fun getalltransaction(): LiveData<List<Transaction>>
}
DATA CLASS
enum class Transaction_type {
    Cash, debit, Credit
}

enum class Type {
    Income, Expense
}

@Entity(tableName = "transaction_table")
data class Transaction(
    val name: String,
    val amount: Float,
    val day: Int,
    val month: Int,
    val year: Int,
    val comment: String,
    val datePicker: String,
    val transaction_type: String,
    val category: String,
    val recurring_from: String,
    val recurring_to: String
) {
    @PrimaryKey(autoGenerate = true) var id: Long = 0
}
The error is resolved. I was using Kotlin version 1.6.0 and downgraded it to 1.4.32. As far as I understood, the latest version of Kotlin does not work well with Room and coroutines.
I believe that your issue is due to an incorrect class inadvertently being used, a likely culprit being Transaction, as that is also a Room class.
Perhaps in transactionDetailDao (although it might be elsewhere).
See if you have import androidx.room.Transaction (or any other import with Transaction).
If so, delete or comment out the import.
As an example, with the import androidx.room.Transaction present the build fails with the Kapt error; with the import commented out, it compiles.
I imported the project from GitHub and had a play; the issue definitely appears to be with coroutines. I commented out the suspend modifiers in the Dao:
@Dao
interface transactionDetailDao {

    @Insert(onConflict = OnConflictStrategy.IGNORE)
    /*suspend*/ fun insert(transaction: Transaction)

    @Delete
    /*suspend*/ fun delete(transaction: Transaction)

    @Update
    /*suspend*/ fun update(transaction: Transaction)

    @Query("SELECT * FROM transaction_table ORDER BY id ASC")
    fun getalltransaction(): LiveData<List<Transaction>>
}
It then compiled OK, ran, and I had a play with it.
I have created a method to update the records on a case.
@RestResource(urlMapping = '/FieldCases/*')
global with sharing class RestCaseController {
    @HttpPatch
    global static String caseUpdate(String caseId, String caseStatus, String caseNote) {
        Case companyCase = [SELECT Id, Subject, Status, Description FROM Case WHERE Id = :caseId];
        companyCase.Status = caseStatus;
        companyCase.Description += caseNote;
        update companyCase;
        return 'Updated';
    }
}
and in Workbench I am using
/services/apexrest/FieldCases
{"caseId" : "0037F00000bQYIjQAO",
"caseStatus" : "Working",
"caseNote" : "updating from the work bench"}
but I am getting the below error:
HTTP Method 'PATCH' not allowed. Allowed are POST,DELETE,GET,HEAD
It worked alright for me. Here's the request I made in Workbench (screenshot omitted).
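For readers whose HTTP tooling cannot issue PATCH directly, a plain Java 11+ java.net.http client can; a minimal sketch, where the host, session token, and record id are placeholders:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class PatchCaseExample {
    public static void main(String[] args) throws Exception {
        String json = "{\"caseId\":\"0037F00000bQYIjQAO\","
                + "\"caseStatus\":\"Working\","
                + "\"caseNote\":\"updating from the work bench\"}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://yourInstance.my.salesforce.com/services/apexrest/FieldCases")) // placeholder host
                .header("Authorization", "Bearer <session-token>") // placeholder token
                .header("Content-Type", "application/json")
                .method("PATCH", HttpRequest.BodyPublishers.ofString(json)) // HttpClient accepts arbitrary verbs
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}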
I want to return a List of "Posts" from an endpoint, with optional pagination.
I need 100 results per query.
The code I have written is as follows; it doesn't seem to work.
I am referring to an example in the Objectify wiki.
Another option I know of is using query.offset(100);
but I read somewhere that this just loads the entire table and then ignores the first 100 entries, which is not optimal.
I guess this must be a common use case, and an optimal solution should be available.
public CollectionResponse<Post> getPosts(@Nullable @Named("cursor") String cursor, User auth) throws OAuthRequestException {
    if (auth != null) {
        Query<Post> query = ofy().load().type(Post.class).filter("isReviewed", true).order("-timeStamp").limit(100);
        if (cursor != null) {
            query.startAt(Cursor.fromWebSafeString(cursor));
            log.info("Cursor received :" + Cursor.fromWebSafeString(cursor));
        } else {
            log.info("Cursor received : null");
        }
        QueryResultIterator<Post> iterator = query.iterator();
        for (int i = 1; i <= 100; i++) {
            if (iterator.hasNext()) iterator.next();
            else break;
        }
        log.info("Cursor generated :" + iterator.getCursor());
        return CollectionResponse.<Post>builder().setItems(query.list()).setNextPageToken(iterator.getCursor().toWebSafeString()).build();
    } else throw new OAuthRequestException("Login please.");
}
This is code using offsets, which seems to work fine.
@ApiMethod(
    name = "getPosts",
    httpMethod = ApiMethod.HttpMethod.GET
)
public CollectionResponse<Post> getPosts(@Nullable @Named("offset") Integer offset, User auth) throws OAuthRequestException {
    if (auth != null) {
        if (offset == null) offset = 0;
        Query<Post> query = ofy().load().type(Post.class).filter("isReviewed", true).order("-timeStamp").offset(offset).limit(LIMIT);
        log.info("Offset received :" + offset);
        log.info("Offset generated :" + (LIMIT + offset));
        return CollectionResponse.<Post>builder().setItems(query.list()).setNextPageToken(String.valueOf(LIMIT + offset)).build();
    } else throw new OAuthRequestException("Login please.");
}
Be sure to assign the query:
query = query.startAt(cursor);
Objectify's API uses a functional style. startAt() does not mutate the object.
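Applied to the method in the question, the fix is a one-line change; a sketch of just the affected fragment (only the reassignment differs from the original):

Query<Post> query = ofy().load().type(Post.class)
        .filter("isReviewed", true)
        .order("-timeStamp")
        .limit(100);
if (cursor != null) {
    // startAt() returns a new Query; keep the returned instance.
    query = query.startAt(Cursor.fromWebSafeString(cursor));
}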
Try the following:
Remove your for loop -- I'm not sure why it is there. Just iterate through the results and build out the list of items that you want to send back; stick to the iterator rather than forcing it through 100 items in a loop.
Then, once you have iterated through it, use the cursor taken from the iterator as the value of the next-page token, as in the sketch below.
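A sketch of that shape, reusing the names from the question (and getCursor(), as in the question's code); only the list copy is new:

QueryResultIterator<Post> iterator = query.iterator();
List<Post> items = new ArrayList<>();
while (iterator.hasNext()) {
    items.add(iterator.next()); // limit(100) already caps the page size
}
return CollectionResponse.<Post>builder()
        .setItems(items)
        .setNextPageToken(iterator.getCursor().toWebSafeString())
        .build();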
public static List<FieldOption> getFieldOptionListOfField(PersistenceManager pm, long fieldId) throws NoSuchFieldOptionException {
    Query query = pm.newQuery(FieldOption.class);
    try {
        query.setFilter("this.fieldId == fieldId");
        query.declareParameters("long fieldId");
        query.setOrdering("orderId ascending");
        List<FieldOption> fieldOptions = (List<FieldOption>) query.execute(fieldId);
        logger.debug("fieldOptions = " + fieldOptions);
        return fieldOptions;
    } finally {
        query.closeAll();
    }
}
After execution of the execute method, fieldOptions holds values, but after closeAll() the list becomes empty. Can you please suggest why this happens?
The "List" returned after you end a transaction and close a PM is not a real List, but instead a lazy load list, that cannot lazy load once it has no connection to the datastore. Easiest option is put (copy) the query results into your own List before closing the txn/PM.
I want to get the list of all fields (i.e. field names) sorted by the number of times they occur in the Solr index: most frequently occurring field, second most frequently occurring field, and so on.
Alternatively, getting all fields in the index and the number of times they occur would also be sufficient.
How do I accomplish this, either with a single Solr query or through the Solr/Lucene Java API?
The set of fields is not fixed and ranges in the hundreds. Almost all fields are dynamic, except for id and perhaps a couple more.
As stated in Solr: Retrieve field names from a solr index?, you can do this by using the LukeRequestHandler.
To do so you need to enable the requestHandler in your solrconfig.xml
<requestHandler name="/admin/luke" class="org.apache.solr.handler.admin.LukeRequestHandler" />
and call it
http://solr:8983/solr/admin/luke?numTerms=0
If you want to get the fields sorted by something, you will have to do that on your own. I would suggest using SolrJ if you are in a Java environment.
Fetching the fields using SolrJ:
@Test
public void lukeRequest() throws SolrServerException, IOException {
    SolrServer solrServer = new HttpSolrServer("http://solr:8983/solr");
    LukeRequest lukeRequest = new LukeRequest();
    lukeRequest.setNumTerms(1);
    LukeResponse lukeResponse = lukeRequest.process(solrServer);
    List<FieldInfo> sorted = new ArrayList<FieldInfo>(lukeResponse.getFieldInfo().values());
    Collections.sort(sorted, new FieldInfoComparator());
    for (FieldInfo infoEntry : sorted) {
        System.out.println("name: " + infoEntry.getName());
        System.out.println("docs: " + infoEntry.getDocs());
    }
}
The comparator used in the example:
public class FieldInfoComparator implements Comparator<FieldInfo> {

    @Override
    public int compare(FieldInfo fieldInfo1, FieldInfo fieldInfo2) {
        if (fieldInfo1.getDocs() > fieldInfo2.getDocs()) {
            return -1;
        }
        if (fieldInfo1.getDocs() < fieldInfo2.getDocs()) {
            return 1;
        }
        return fieldInfo1.getName().compareTo(fieldInfo2.getName());
    }
}