Storing the Cursor for App Engine Pagination - google-app-engine

I'm trying to implement pagination using App Engine's RPC and GWT (it's an App Engine connected project).
How can I pass both the query results and the web-safe cursor object from the RPC to the GWT client?
I've seen examples using a servlet, but I want to know how to do it without a servlet.
I've considered caching the cursor on the server using memcache, but I'm not sure whether that's appropriate or what should be used as the key (a session identifier, I would assume, but I'm not sure how those are handled on App Engine).
Links to example projects would be fantastic, I've been unable to find any.

OK, so the best way to do this is to store the cursor as a string on the client.
To do this you create a transportable wrapper class that holds the results list and the cursor string, so you can pass it back to the client via RequestFactory. That means creating a normal POJO and then a proxy for it.
Here's what the code looks like for the POJO:
public class OrganizationResultsWrapper {

    public List<Organization> list;
    public String webSafeCursorString;

    public List<Organization> getList() {
        return list;
    }

    public void setList(List<Organization> list) {
        this.list = list;
    }

    public String getWebSafeCursorString() {
        return this.webSafeCursorString;
    }

    public void setWebSafeCursorString(String webSafeCursorString) {
        this.webSafeCursorString = webSafeCursorString;
    }
}
For the proxy:
@ProxyFor(OrganizationResultsWrapper.class)
public interface OrganizationResultsWrapperProxy extends ValueProxy {
    List<OrganizationProxy> getList();
    void setList(List<OrganizationProxy> list);
    String getWebSafeCursorString();
    void setWebSafeCursorString(String webSafeCursorString);
}
Set up your service and RequestFactory to use the POJO and proxy respectively:
// service class method
@ServiceMethod
public OrganizationResultsWrapper getOrganizations(String webSafeCursorString) {
    return dao.getOrganizations(webSafeCursorString);
}
// request factory method
Request<OrganizationResultsWrapperProxy> getOrganizations(String webSafeCursorString);
Then make sure to run the RPC wizard so that your validation process runs; otherwise you'll get a request context error on the server.
Here's the implementation in my data access class:
public OrganizationResultsWrapper getOrganizations(String webSafeCursorString) {
    List<Organization> list = new ArrayList<Organization>();
    OrganizationResultsWrapper resultsWrapper = new OrganizationResultsWrapper();
    Query<Organization> query = ofy().load().type(Organization.class).limit(50);
    if (webSafeCursorString != null) {
        query = query.startAt(Cursor.fromWebSafeString(webSafeCursorString));
    }
    QueryResultIterator<Organization> iterator = query.iterator();
    while (iterator.hasNext()) {
        list.add(iterator.next());
    }
    resultsWrapper.setList(list);
    resultsWrapper.setWebSafeCursorString(iterator.getCursor().toWebSafeString());
    return resultsWrapper;
}
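On the client side, the wrapper proxy means the web-safe cursor string can simply be kept in a field and handed back on the next call. Here is a minimal sketch of that usage; the request context name (OrganizationRequest), the requestFactory field, and the display(...) helper are illustrative names, not taken from the project above:
// Client-side sketch: hold the cursor string between calls and pass it back for the next page.
// OrganizationRequest, requestFactory and display(...) are hypothetical names for this example.
private String webSafeCursorString = null;

void loadNextPage() {
    OrganizationRequest context = requestFactory.organizationRequest();
    context.getOrganizations(webSafeCursorString).fire(
            new Receiver<OrganizationResultsWrapperProxy>() {
                @Override
                public void onSuccess(OrganizationResultsWrapperProxy response) {
                    webSafeCursorString = response.getWebSafeCursorString(); // remember where to resume
                    display(response.getList());
                }
            });
}
Passing null on the first call matches the null check in the data access method above, so the first page starts at the beginning of the query.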

A second option would be to save the webSafeCursorString in memcache, as you already mentioned.
My idea looks like this:
The client always sends a request like getMyObjects(Object... myParams, int maxResults, String clientPaginationString); the clientPaginationString is created uniquely as shown below.
The server receives the request and looks in memcache for a webSafeCursorString under the key clientPaginationString.
If the server finds nothing, it creates the query, saves the webSafeCursorString into memcache with clientPaginationString as the key, and returns the results.
If the server finds the webSafeCursorString, it restarts the query with it and returns the results.
The problems are how to clean the memcache and how to find a unique clientPaginationString.
A unique clientPaginationString could be the current user ID + the params of the current query + a timestamp. That should work just fine.
I really can't think of an easy way to clean the memcache; however, I think we may not have to clean it at all. We could store all the webSafeCursorStrings together with timestamp+params+userId in a WebSafeCursor class that contains a map, keep that whole class in memcache, and clean it once in a while (entries whose timestamp is older than ...).
One improvement I can think of is to save the webSafeCursorString in memcache under a key that is created on the server (userSessionId + serviceName + serviceMethodName + params); a sketch of this follows below. What matters is that the client says whether it wants a fresh query (the memcache entry is overwritten) or the next page of results (the webSafeCursorString is fetched from memcache). A reload of the page should still work; a second tab in the browser would be a problem, I think.
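A rough sketch of that server-side cache, using the App Engine Memcache Java API; the CursorCache class, the key layout, and the one-hour expiration are illustrative choices, not a finished design:
import com.google.appengine.api.memcache.Expiration;
import com.google.appengine.api.memcache.MemcacheService;
import com.google.appengine.api.memcache.MemcacheServiceFactory;

// Sketch: keep the web-safe cursor string in memcache, keyed per user, service method and params.
public class CursorCache {

    private final MemcacheService memcache = MemcacheServiceFactory.getMemcacheService();

    private String buildKey(String userId, String serviceMethod, String params) {
        // each distinct query gets its own cursor entry
        return "cursor:" + userId + ":" + serviceMethod + ":" + params;
    }

    public String getCursor(String userId, String serviceMethod, String params) {
        return (String) memcache.get(buildKey(userId, serviceMethod, params));
    }

    public void putCursor(String userId, String serviceMethod, String params, String webSafeCursorString) {
        // let entries expire after an hour so stale pagination state cleans itself up
        memcache.put(buildKey(userId, serviceMethod, params), webSafeCursorString,
                Expiration.byDeltaSeconds(3600));
    }

    public void startNewQuery(String userId, String serviceMethod, String params) {
        // called when the client explicitly asks for a fresh query instead of the next page
        memcache.delete(buildKey(userId, serviceMethod, params));
    }
}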
What would you say?

Related

OptaPlanner: timetable resuming generates a wrong solution, even if the generation starts from where it stopped

I am using Java Spring Boot and OptaPlanner to generate a timetable with almost 20 constraints. At the initial generation, everything works fine: the score shown by the OptaPlanner logging messages matches the solution received. But when I want to resume the generation, the solution contains a lot of problems (the constraints are not respected anymore), even though the generation starts from where it stopped and continues initializing or finding a best solution.
My project is divided into two microservices: one that communicates with the UI and keeps the database, and the other receives data from the first when a request for starting/resuming the generation is done and generates the schedule using OptaPlanner. I use the same request for starting/resuming the generation.
This is how my project works: the UI makes the requests for starting, resuming, stopping the generation and getting the timetable. These requests are handled by the first microservice, which uses WebClient to send new requests to the second microservice. Here, the timetable will be generated after asking for some data from the database.
Here is the method for starting/resuming the generation from the second microservice:
@PostMapping("startSolver")
public ResponseEntity<?> startSolver(@PathVariable String organizationId) {
    try {
        SolverConfig solverConfig = SolverConfig.createFromXmlResource("solver/timeTableSolverConfig.xml");
        SolverFactory<TimeTable> solverFactory = new DefaultSolverFactory<>(solverConfig);
        this.solverManager = SolverManager.create(solverFactory);
        this.solverManager.solveAndListen(TimeTableService.SINGLETON_TIME_TABLE_ID,
                id -> timeTableService.findById(id, UUID.fromString(organizationId)),
                timeTable -> timeTableService.updateModifiedLessons(timeTable, organizationId));
        return new ResponseEntity<>("Solving has successfully started", HttpStatus.OK);
    } catch (OptaPlannerException exception) {
        System.out.println("OptaPlanner exception - " + exception.getMessage());
        return utils.generateResponse(exception.getMessage(), HttpStatus.CONFLICT);
    }
}
-> The findById(...) method makes a request to the first microservice, expecting to receive all the data needed by the constraints for generation (lists of planning entities, planning variables, and all other useful data):
public TimeTable findById(Long id, UUID organizationId) {
    SolverDataDTO solverDataDTO = webClient.get()
            .uri("http://localhost:8080/smart-planner/org/{organizationId}/optaplanner-solver/getSolverData",
                    organizationId)
            .retrieve()
            .onStatus(HttpStatus::isError, error -> {
                LOGGER.error(extractExceptionMessage("findById.fetchFails", "findById()"));
                return Mono.error(new OptaPlannerException(
                        extractExceptionMessage("findById.fetchFails", "")));
            })
            .bodyToMono(SolverDataDTO.class)
            .block();
    TimeTable timeTable = new TimeTable();
    // populating all lists from TimeTable with the ones received in solverDataDTO
    return timeTable;
}
-> The updateModifiedLessons(...) method sends to the first microservice the list of all generated planning entities with the corresponding planning variables assigned:
public void updateModifiedLessons(TimeTable timeTable, String organizationId) {
    List<ScheduleSlot> slots = new ArrayList<>(timeTable.getScheduleSlotList());
    List<SolverScheduleSlotDTO> solverScheduleSlotDTOs =
            scheduleSlotConverter.convertModelsToSolverDTOs(slots);
    String executionMessage = webClient.post()
            .uri("http://localhost:8080/smart-planner/org/{organizationId}/optaplanner-solver/saveTimeTable",
                    organizationId)
            .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
            .body(Mono.just(solverScheduleSlotDTOs), SolverScheduleSlotDTO.class)
            .retrieve()
            .onStatus(HttpStatus::isError, error -> {
                LOGGER.error(extractExceptionMessage("saveSlots.savingFails", "updateModifiedLessons()"));
                return Mono.error(new OptaPlannerException(
                        extractExceptionMessage("saveSlots.savingFails", "")));
            })
            .bodyToMono(String.class)
            .block();
}
I would probably start by making sure that the solution you save to the DB after the first run of startSolver() is the same (in terms of Java equality), including the assignments of planning variables to values, as the solution you retrieve via findById() at the beginning of the second run.
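For example, a check along these lines could verify that the slots round-trip unchanged between the save and the reload; the getter names (getId(), getTimeslot()) are guesses based on the model in the question, so treat this purely as a sketch:
// Sketch only: compare the solution persisted after the first solve with the one
// findById() reloads before resuming. getId() and getTimeslot() are assumed names;
// use whatever identifies a slot and its planning variables in the real model.
private void checkRoundTrip(TimeTable saved, TimeTable reloaded) {
    for (ScheduleSlot before : saved.getScheduleSlotList()) {
        ScheduleSlot after = reloaded.getScheduleSlotList().stream()
                .filter(s -> s.getId().equals(before.getId()))
                .findFirst().orElse(null);
        if (after == null || !Objects.equals(before.getTimeslot(), after.getTimeslot())) {
            // a mismatch means the resumed solve starts from a different solution than
            // the one OptaPlanner reported at the end of the first run
            System.out.println("Slot " + before.getId() + " did not survive the round trip");
        }
    }
}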

Getting to grips with the QuickBooks Hello World app, all working OK, but I have what I think is an easy question

In application.properties I need to set the OAuth2 keys:
OAuth2AppClientId=AB............................AN
OAuth2AppClientSecret=br................................u8
OAuth2AppRedirectUri=http://localhost:8085/oauth2redirect
Initially I put the keys in "" quotes, assuming they should be treated as strings, but to get it working I had to remove them. Can someone explain what's happening with
OAuth2AppClientId=AB............................AN when I build the app,
and how do I find out more about OAuth2AppClientId?
A Google search is probably the place to start here. Here's a great resource about what a Client ID and Client Secret are:
https://www.oauth.com/oauth2-servers/client-registration/client-id-secret/
I quote:
The client_id is a public identifier for apps.
The client_secret is a secret known only to the application and the authorization server.
Intuit also has a ton of documentation on OAuth2, and how to implement it. You should read it:
https://developer.intuit.com/app/developer/qbo/docs/develop/authentication-and-authorization/oauth-2.0
In summary, the Client ID is how Intuit identifies that it's your app trying to connect to QuickBooks. Nothing is "happening" to the string when you build/compile the app - it's just a normal string. But when your app authenticates against QuickBooks Online, it sends the Client ID to QuickBooks so that QuickBooks knows it's your app trying to authorize a connection to QuickBooks, and not some other app.
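As for the quotes: a .properties file treats everything after the = as a literal value, so surrounding quotes are not stripped - they become part of the string, and the client ID then no longer matches the one Intuit issued. A tiny sketch with plain java.util.Properties (Spring's Environment reads the file the same way for this purpose):
import java.io.StringReader;
import java.util.Properties;

public class PropertiesQuotesDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.load(new StringReader("OAuth2AppClientId=\"AB...AN\"\n"));
        // Prints "AB...AN" with the quotes included - not the client id Intuit issued
        System.out.println(props.getProperty("OAuth2AppClientId"));
    }
}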
If you want to see how the code loads this, it is just a property being read inside the application's OAuth2PlatformClientFactory:
@Service
@PropertySource(value = "classpath:/application.properties", ignoreResourceNotFound = true)
public class OAuth2PlatformClientFactory {

    @Autowired
    org.springframework.core.env.Environment env;

    OAuth2PlatformClient client;
    OAuth2Config oauth2Config;

    @PostConstruct
    public void init() {
        oauth2Config = new OAuth2Config.OAuth2ConfigBuilder(env.getProperty("OAuth2AppClientId"), env.getProperty("OAuth2AppClientSecret")) // set client id, secret
                .callDiscoveryAPI(Environment.SANDBOX) // call discovery API to populate urls
                .buildConfig();
        client = new OAuth2PlatformClient(oauth2Config);
    }

    public OAuth2PlatformClient getOAuth2PlatformClient() {
        return client;
    }

    public OAuth2Config getOAuth2Config() {
        return oauth2Config;
    }

    public String getPropertyValue(String propertyName) {
        return env.getProperty(propertyName);
    }
}
https://github.com/IntuitDeveloper/OAuth2-JavaWithSDK/blob/master/src/main/java/com/intuit/developer/sampleapp/oauth2/client/OAuth2PlatformClientFactory.java

How to know when operations on the Google App Engine datastore are complete

I'm executing the method Datastore.delete(key) from my GWT web application, and the AsyncCallback's onSuccess() method gets called. Then I refresh http://localhost:8888/_ah/admin immediately, and the entity I intended to delete still exists. Similarly, if I refresh my GWT web application immediately, the item I intended to delete still shows on the web page. Note that onSuccess() had been called.
So, how can I know when the entity has actually been deleted?
public void deleteALocation(int removedIndex, String symbol) {
    if (Window.confirm("Sure ?")) {
        System.out.println("XXXXXX " + symbol);
        loCalservice.deletoALocation(symbol, callback_delete_location);
    }
}

public AsyncCallback<String> callback_delete_location = new AsyncCallback<String>() {
    public void onFailure(Throwable caught) {
        Window.alert(caught.getMessage());
    }

    public void onSuccess(String result) {
        int removedIndex = ArryList_Location.indexOf(result);
        ArryList_Location.remove(removedIndex);
        LocationTable.removeRow(removedIndex + 1);
        //Window.alert(result+"!!!");
    }
};
Server:
public String deletoALocation(String name) {
    Transaction tx = Datastore.beginTransaction();
    Key key = Datastore.createKey(Location.class, name);
    Datastore.delete(tx, key);
    tx.commit();
    return name;
}
Sorry, I'm not good at English :-)
According to the docs
Returns the Key object (if one model instance is given) or a list of Key objects (if a list of instances is given) that correspond with the stored model instances.
If you need an example of a working delete function, this might help. Line 108
class DeletePost(BaseHandler):
    def get(self, post_id):
        iden = int(post_id)
        post = db.get(db.Key.from_path('Posts', iden))
        db.delete(post)
        return webapp2.redirect('/')
How do you check the existence of the entity? Via a query?
Queries on HRD are eventually consistent, meaning that if you add/delete/change an entity then immediately query for it you might not see the changes. The reason for this is that when you write (or delete) an entity, GAE asynchronously updates the index and entity in several phases. Since this takes some time it might happen that you don't see the changes immediately.
The linked article discusses ways to mitigate this limitation.
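One way to confirm the delete without waiting for the indexes to catch up is a lookup by key, which is strongly consistent. Here is a small sketch using the low-level datastore API (the question's code uses Slim3, but the idea is the same; the kind name "Location" is taken from the code above):
import com.google.appengine.api.datastore.DatastoreService;
import com.google.appengine.api.datastore.DatastoreServiceFactory;
import com.google.appengine.api.datastore.EntityNotFoundException;
import com.google.appengine.api.datastore.Key;
import com.google.appengine.api.datastore.KeyFactory;

public boolean locationStillExists(String name) {
    DatastoreService datastore = DatastoreServiceFactory.getDatastoreService();
    Key key = KeyFactory.createKey("Location", name);
    try {
        datastore.get(key);   // a get-by-key reflects the delete immediately
        return true;
    } catch (EntityNotFoundException e) {
        return false;         // the delete has definitely been applied
    }
}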

How to force a WCF data service to refresh entities?

I'm using a WCF Data Service to access an MSSQL database. If the client requests data (e.g. from the table "Projects"), I build my cache like this:
var collection = new ObservableCollection<Project>();
foreach (var project in this.Entities.Project)
{
    collection.Add(project);
}
return collection;
If I want to refresh the list I just call
collection.Clear();
and call the above method again. If I edit a project and refresh the list as described above it works fine, but if I change the data on one client instance and refresh the list on another one, the service doesn't load the changed project.
How can I force the DataService to reload a whole entity set (e.g. "Projects") even if, from the service's point of view, nothing has changed?
Possible solution:
public partial class Entities
{
    public void RefreshProject(Project pr)
    {
        this.Detach(pr);
        pr = this.Project.Where(p => p.Id == pr.Id).Single();
    }
}
Usage:
entities.RefreshProject(project);

NHibernate Performance Optimization | Suggestions invited!

I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a small summary of my application architecture.
I have a Windows service which is listening to a messaging bus. On receiving a message, the service creates an object, one of whose properties is the received XML snippet, and saves the message to the DB (using NHibernate). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen.
While the UI does a refresh, it retrieves the XML and deserializes it, from which the object's properties are derived and bound to the screen.
For example, assume an XML XXX is received by the service: it deserializes the XML, creates the book object, and saves it to the DB, where a property/column SCHEMA contains the XML snippet.
The UI, when refreshed, searches all book objects by ID and creates the book objects out of the saved XML (yes, the XML is the constructor param).
Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent within the DB is negligible, while the time spent creating the entities is disproportionately huge (10 ms : 1990 ms). I guess it's due to the fairly large size of the XML snippet and its deserialization.
My question is: how can I improve the performance? I dispose of sessions after every refresh and am not using lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by some downstream system, or maybe only one of them has been updated. Can I implement some sort of caching mechanism in this case?
Thanks in advance for any suggestions.
Regards,
-Mike
The entire list of 50 books could be saved in a singleton class meant for caching, like a cache manager. You could also use, say, an enterprise library cache, but I would suggest an in-memory cache. If a book gets added you could update the cache. The cache would hold the entire XML, so no deserialization would happen. Also, you could update the DB in an asynchronous thread and reduce the time.
Here is the pseudo code.
On the service, whenever I receive a message:
public void OnMessage(string message)
{
    // deserializes the message
    DeserializedObject schema = deserializationFactory.Deserialize(message);
    var book = new Book(schema, message);
    // saves the book using a new session
    repository.Save(book);
}
The book object:
public class Book
{
    public DeserializedObject Schema{get;set;}
    private string xml;
    public string Xml{get{return xml;}}

    public Book(DeserializedObject schema, string xml):this(schema)
    {
        this.xml = xml;
    }

    public Book(DeserializedObject schema):this()
    {
        this.Schema = schema;
    }

    public virtual XmlDocument XmlSchema
    {
        get
        {
            var doc = new XmlDocument();
            if (Schema != null)
            {
                var serializer = new XmlSerializer(typeof(DeserializedObject));
                var stream = new MemoryStream();
                serializer.Serialize(stream, Schema);
                stream.Position = 0;
                doc.Load(stream);
            }
            return doc;
        }
    }

    public virtual string SerializedSchema
    {
        get { return XmlSchema.OuterXml; }
        set
        {
            if (value != null)
                Schema = value.Deserialize<DeserializedObject>();
        }
    }

    public string Author
    {
        get{return Schema.Author;}
    }
}
Now the mapping for Book (uses FNH):
public class BookMap : ClassMap<Book>
{
    public BookMap()
    {
        LazyLoad();
        Table("Books");
        IdGenerator.Instance.GenerateId(this, "book_id_seq", book => book.Id);
        Map(book => book.SerializedSchema, "SERIALIZED_SCHEMA")
            .CustomSqlType("Clob")
            .CustomType("StringClob");
    }
}
On the UI:
public void OnRefresh()
{
    // In reality the call to the DB runs on a background worker and the records are bound to the grid after a context switch.
    // GetByCriteria creates a new session every time a refresh happens.
    datagrid.DataContext = repository.GetByCriteria(ICriterion allBooksforToday);
}
The important thing to note here is that the Book type is shared between the service and the UI. However, only the service can write to the DB, whereas the UI can update the trade object (basically the XML) and send it over the messaging bus (again as XML). The service, once it receives it, updates the DB.
The xml size will be approximately 20 KB, so that would mean that if I'm loading say 50 books I'll be loading close to an MB of data.
Thanks,-Mike
