I'm trying to build a chat solution on App Engine for my Android app.
I decided that instead of saving every message sent to a topic in a separate entity like ChatMessage, I could store the messages in a list of strings inside the Topic entity, like this:
@Entity
public class Topic {
    @Id
    public String id;
    public List<Long> users = new ArrayList<Long>(2);
    public Long lastChangeTime;
    public LinkedList<String> messages = new LinkedList<String>();
}
I came up with this because storing the topic id with every message is usually more data than the message string itself. :S
What I don't know is: can this list be strongly consistent?
This is how I add a new message to a Topic:
// 2. get the topic if it exists, or create a new one if not
Topic topic = ofy().load().key(Key.create(Topic.class, topicId)).now();
if (topic == null) {
    topic = new Topic(senderId, recipientId);
}
// 3. add message: this method appends the new string to the topic
//    and updates the last change time
topic.addMessage(senderId, recipientId, send.message);
// 4. save the topic & the new message
ofy().save().entity(topic).now();
So if two users send a message at the same time, can it happen that the first user loads the Topic and adds his message, while the second user has already loaded the Topic (without the first user's message) and adds his own new message? The first user saves the Topic first, but can the second user's save then overwrite the first user's? Or what happens?
If that can happen, how can I avoid it, bearing in mind that this is a high-write-rate entity, so I need more than 1 write/sec?
Thanks, and best regards.
What I don't know is: can this list be strongly consistent?
Consistency is determined by entity groups and queries, not properties. A lookup by key is always strongly consistent, no matter what the entity contains.
So if two users send a message at the same time, can it happen that the first user loads the Topic and adds his message, while the second user has already loaded the Topic (without the first user's message) and adds his own new message? The first user saves the Topic first, but can the second user's save then overwrite the first user's? Or what happens?
You would need to do this inside a transaction. If a ConcurrentModificationException is thrown inside the transaction (your example scenario), Objectify will retry for you.
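With Objectify that looks roughly like this (a minimal sketch reusing your own load/add/save code, not tested):
Topic result = ofy().transact(new Work<Topic>() {
    public Topic run() {
        // the whole read-modify-write runs in one transaction, so Objectify
        // retries the work if a concurrent save raises ConcurrentModificationException
        Topic topic = ofy().load().key(Key.create(Topic.class, topicId)).now();
        if (topic == null) {
            topic = new Topic(senderId, recipientId);
        }
        topic.addMessage(senderId, recipientId, send.message);
        ofy().save().entity(topic).now();
        return topic;
    }
});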
But, to avoid the contention, you will need to change your data model. You could have a Message class and a Topic, like this:
@Entity
public class Topic {
    @Id
    String id;
    List<Long> users = new ArrayList<Long>(2);
    Long lastChangeTime;
}
And a Message referencing one or more topics (I'm making assumptions here):
@Entity
public class Message {
    @Id
    Long id;
    Long lastChangeTime;
    @Index
    Ref<Topic> topic;
}
The @Index annotation on the topic will allow you to query for Messages by topic. You could change the Ref<Topic> to a List of the same if your messages can be in multiple topics.
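For example (a sketch, assuming the classes above):
// load all messages for one topic via the indexed Ref<Topic> property
List<Message> messages = ofy().load().type(Message.class)
        .filter("topic", Key.create(Topic.class, topicId))
        .list();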
I am using Java Spring Boot and OptaPlanner to generate a timetable with almost 20 constraints. At the initial generation everything works fine: the score shown by the OptaPlanner logging messages matches the solution received. But when I resume the generation, the solution contains a lot of problems (the constraints are no longer respected), although the generation starts from where it stopped and continues initializing or finding a best solution.
My project is divided into two microservices: one that communicates with the UI and keeps the database, and the other receives data from the first when a request for starting/resuming the generation is done and generates the schedule using OptaPlanner. I use the same request for starting/resuming the generation.
This is how my project works: the UI makes the requests for starting, resuming, stopping the generation and getting the timetable. These requests are handled by the first microservice, which uses WebClient to send new requests to the second microservice. Here, the timetable will be generated after asking for some data from the database.
Here is the method for starting/resuming the generation from the second microservice:
@PostMapping("startSolver")
public ResponseEntity<?> startSolver(@PathVariable String organizationId) {
    try {
        SolverConfig solverConfig = SolverConfig.createFromXmlResource("solver/timeTableSolverConfig.xml");
        SolverFactory<TimeTable> solverFactory = new DefaultSolverFactory<>(solverConfig);
        this.solverManager = SolverManager.create(solverFactory);
        this.solverManager.solveAndListen(TimeTableService.SINGLETON_TIME_TABLE_ID,
                id -> timeTableService.findById(id, UUID.fromString(organizationId)),
                timeTable -> timeTableService.updateModifiedLessons(timeTable, organizationId));
        return new ResponseEntity<>("Solving has successfully started", HttpStatus.OK);
    } catch (OptaPlannerException exception) {
        System.out.println("OptaPlanner exception - " + exception.getMessage());
        return utils.generateResponse(exception.getMessage(), HttpStatus.CONFLICT);
    }
}
-> The findById(...) method makes a request to the first microservice, expecting to receive all the data needed by the constraints for generation (lists of planning entities, planning variables and all other useful data):
public TimeTable findById(Long id, UUID organizationId) {
    SolverDataDTO solverDataDTO = webClient.get()
            .uri("http://localhost:8080/smart-planner/org/{organizationId}/optaplanner-solver/getSolverData",
                    organizationId)
            .retrieve()
            .onStatus(HttpStatus::isError, error -> {
                LOGGER.error(extractExceptionMessage("findById.fetchFails", "findById()"));
                return Mono.error(new OptaPlannerException(
                        extractExceptionMessage("findById.fetchFails", "")));
            })
            .bodyToMono(SolverDataDTO.class)
            .block();
    TimeTable timeTable = new TimeTable();
    // ... populate all lists in TimeTable from the ones received in solverDataDTO ...
    return timeTable;
}
-> The updateModifiedLessons(...) method sends the first microservice the list of all generated planning entities with the corresponding planning variables assigned:
public void updateModifiedLessons(TimeTable timeTable, String organizationId) {
    List<ScheduleSlot> slots = new ArrayList<>(timeTable.getScheduleSlotList());
    List<SolverScheduleSlotDTO> solverScheduleSlotDTOs =
            scheduleSlotConverter.convertModelsToSolverDTOs(slots);
    String executionMessage = webClient.post()
            .uri("http://localhost:8080/smart-planner/org/{organizationId}/optaplanner-solver/saveTimeTable",
                    organizationId)
            .header(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
            .body(Mono.just(solverScheduleSlotDTOs), SolverScheduleSlotDTO.class)
            .retrieve()
            .onStatus(HttpStatus::isError, error -> {
                LOGGER.error(extractExceptionMessage("saveSlots.savingFails", "updateModifiedLessons()"));
                return Mono.error(new OptaPlannerException(
                        extractExceptionMessage("saveSlots.savingFails", "")));
            })
            .bodyToMono(String.class)
            .block();
}
I would probably start by making sure that the solution you save to the DB after the first run of startSolver() is the same (in terms of Java equality), including the assignments of planning variables to values, as the solution you retrieve via findById() at the beginning of the second run.
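For example, a rough diff along these lines (getter names on ScheduleSlot, such as getId() and getTimeslot(), are assumptions about your domain model):
// compare the solution saved after run 1 with what findById() returns before run 2
Map<Long, ScheduleSlot> savedById = savedTimeTable.getScheduleSlotList().stream()
        .collect(Collectors.toMap(ScheduleSlot::getId, s -> s));
for (ScheduleSlot reloaded : reloadedTimeTable.getScheduleSlotList()) {
    ScheduleSlot saved = savedById.get(reloaded.getId());
    if (saved == null || !Objects.equals(saved.getTimeslot(), reloaded.getTimeslot())) {
        // this slot lost or changed its assignment somewhere between save and reload
        System.out.println("Mismatch for slot " + reloaded.getId());
    }
}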
We have a requirement to create a kind of user session. Our front end is React, the backend is a .NET Core 6 API, and the db is Postgres.
When one user clicks the delete button for an item, they should not be allowed to delete it while another user is already using that item and performing actions on it.
Can you suggest an approach, or any kind of service that is available, to achieve this?
I would say don't make it too complicated. A simple approach could be to add the properties 'BeingEditedByUserId' and 'ExclusiveEditLockEnd' (datetime) to the entity and check these when performing any action on it. When an action is performed on the entity, the user id is assigned and a timeslot (for example 10 minutes) is granted to this user. If any other user tries to perform an action, you block them. Once the timeslot has expired, anyone can edit again.
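A minimal sketch of the claim step (written as Java/JDBC here rather than .NET; the items table and column names are assumptions derived from the property names above):
// the UPDATE succeeds only if the item is unlocked or the previous lock has
// expired, so the check-and-claim is atomic in the database
String sql = "UPDATE items SET being_edited_by_user_id = ?, "
           + "exclusive_edit_lock_end = now() + interval '10 minutes' "
           + "WHERE id = ? AND (being_edited_by_user_id IS NULL "
           + "OR exclusive_edit_lock_end < now())";
try (PreparedStatement ps = connection.prepareStatement(sql)) {
    ps.setLong(1, userId);
    ps.setLong(2, itemId);
    boolean claimed = ps.executeUpdate() == 1; // 0 rows => someone else holds the lock
}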
I have had to do something similar with Java (also backed by a Postgres db).
There are some pitfalls to avoid with a custom lock implementation, like forgetting to unlock when finished. There is no guarantee that a client makes a 'goodbye, unlock the table' call when they finish editing a page; they could simply close the browser tab, or have a power outage... Here is what I decided to do:
Decide if the lock should be implemented in the API or the DB:
Is this a distributed/scalable application? Does it run as just a single instance or multiple? If multiple, then you cannot (as easily) implement an API lock (you could use something like a shared cache, but that might be more trouble than it is worth).
Is there a record in the DB that could be used as a lock, guaranteed to exist for each editable item in the DB? I would assume so, but if the app is backed by multiple DBs maybe not.
API locking is fairly easy; you just need to handle thread safety, as most (if not all) REST/SOAP implementations are heavily multithreaded.
If you implement it in the DB, consider looking into a 'Row Level Lock', which allows you to request a lock on a specific row in the DB and use it as a write lock; a sketch follows.
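In Postgres that can look like this from Java (a sketch; the items table is an assumption):
// SELECT ... FOR UPDATE NOWAIT takes a row-level lock and fails immediately
// if another transaction already holds it, instead of blocking
connection.setAutoCommit(false);
try (PreparedStatement ps = connection.prepareStatement(
        "SELECT id FROM items WHERE id = ? FOR UPDATE NOWAIT")) {
    ps.setLong(1, itemId);
    ps.executeQuery();      // throws SQLException if the row is locked elsewhere
    // ... do the protected work ...
    connection.commit();    // commit (or rollback) releases the row lock
} catch (SQLException rowBusy) {
    connection.rollback();  // another transaction holds the lock
}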
If you want to implement in the API, consider something like this:
class LockManager
{
    private static readonly object writeLock = new();

    // the `object` key is whatever you want to use as the ID of the resource being
    // locked, probably a UUID/GUID but could be a String too
    // the `holder` is an ID of the person/system that owns the lock
    private readonly Dictionary<object, _lock> locks = new Dictionary<object, _lock>();

    public _lock acquireLock(object id, String holder)
    {
        _lock lok = new _lock();
        lok.id = id;
        lok.holder = holder;
        lock (writeLock)
        {
            if (locks.ContainsKey(id))
            {
                if (locks[id].release <= DateTime.Now)
                {
                    // the previous lock has expired, so it can be reclaimed
                    locks.Remove(id);
                }
                else
                {
                    throw new InvalidOperationException("Resource is already locked, lock held by: " + locks[id].holder);
                }
            }
            lok.allocated = DateTime.Now;
            lok.release = lok.allocated.AddMinutes(5);
            locks.Add(id, lok); // record the new lock so later callers see it
        }
        return lok;
    }

    public void releaseLock(object id)
    {
        lock (writeLock)
        {
            locks.Remove(id);
        }
    }

    // called by .js code to renew the lock via ajax call if the user is determined to be active
    public void extendLock(object id)
    {
        lock (writeLock)
        {
            if (locks.ContainsKey(id))
            {
                locks[id].release = DateTime.Now.AddMinutes(5);
            }
        }
    }
}

class _lock
{
    public object id;
    public String holder;
    public DateTime allocated;
    public DateTime release;
}
This is what I did because it does not depend on the DB or the client, and it was easy to implement. It also does not require configuring lock timeouts or cleanup tasks to release items with expired locks, since expired locks are reclaimed in the acquire step.
I execute the Datastore.delete(key) method from my GWT web application, and the AsyncCallback's onSuccess() method gets called. When I then refresh http://localhost:8888/_ah/admin immediately, the entity I intended to delete still exists. Similarly, when I refresh my GWT web application immediately, the item I intended to delete still shows on the web page. Note that onSuccess() has been called.
So, how can I know when the entity has actually been deleted?
public void deleteALocation(int removedIndex, String symbol) {
    if (Window.confirm("Sure ?")) {
        System.out.println("XXXXXX " + symbol);
        loCalservice.deletoALocation(symbol, callback_delete_location);
    }
}

public AsyncCallback<String> callback_delete_location = new AsyncCallback<String>() {
    public void onFailure(Throwable caught) {
        Window.alert(caught.getMessage());
    }

    public void onSuccess(String result) {
        int removedIndex = ArryList_Location.indexOf(result);
        ArryList_Location.remove(removedIndex);
        LocationTable.removeRow(removedIndex + 1);
        //Window.alert(result+"!!!");
    }
};
Server:
public String deletoALocation(String name) {
    Transaction tx = Datastore.beginTransaction();
    Key key = Datastore.createKey(Location.class, name);
    Datastore.delete(tx, key);
    tx.commit();
    return name;
}
Sorry, I'm not good at English :-)
According to the docs
Returns the Key object (if one model instance is given) or a list of Key objects (if a list of instances is given) that correspond with the stored model instances.
If you need an example of a working delete function, this might help. Line 108
class DeletePost(BaseHandler):
    def get(self, post_id):
        iden = int(post_id)
        post = db.get(db.Key.from_path('Posts', iden))
        db.delete(post)
        return webapp2.redirect('/')
How do you check the existence of the entity? Via a query?
Queries on the HRD are eventually consistent: if you add/delete/change an entity and then immediately query for it, you might not see the changes. The reason for this is that when you write (or delete) an entity, GAE updates the index and the entity asynchronously, in several phases. Since this takes some time, it might happen that you don't see the changes immediately.
The linked article discusses ways to mitigate this limitation.
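One common mitigation: check existence with a get by key instead of a query, since key lookups are strongly consistent (a sketch using the low-level datastore API):
// a get by key reflects the latest committed write, so a committed delete
// is visible immediately, unlike an eventually consistent query
DatastoreService ds = DatastoreServiceFactory.getDatastoreService();
try {
    ds.get(KeyFactory.createKey("Location", name));
    // entity still exists
} catch (EntityNotFoundException e) {
    // entity has been deleted
}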
I am struggling with handling sessions in GAE. I am trying to store two classes and a string in the session. Although it runs fine in the DEV environment, in production one of the classes and the string are not being persisted in the session. The class that is not getting saved as a session attribute is as follows:
@PersistenceCapable(detachable = "true")
public class Agent implements Serializable {
    @PrimaryKey
    @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
    private Long id;

    @Persistent private String name; // Name of the Agency

    @Element(dependent = "true")
    private List<Contact> contacts = new ArrayList<Contact>();

    @Element(dependent = "true")
    private List<Agency> agencies = new ArrayList<Agency>();

    @Persistent private List<Long> subAgents = new ArrayList<Long>();
    @Persistent private Date createdOn = new Date();
}
I would like to mention again that it works fine in the DEV environment, but in production I get the values as null. As you can see, I have made the class implement Serializable. But I don't think that is the problem, because I am also setting one more attribute as a simple string, and that is failing too (I get the attribute value as null). The session itself is created, as I can see it at the backend, and there is one more class that is persisted in the session correctly.
Anybody have suggestions? Thanks in advance.
Your problem is probably related to either:
GAE often serializes sessions almost immediately; the dev environment doesn't. So all objects in your object graph must implement Serializable.
BUT EVEN MORE LIKELY is that after you modify a session variable, you must do something like req.getSession().setAttribute(myKey, myObj) - the container WILL NOT see changes in your object and automatically write them back to the session... so the session attributes will keep whatever value they had when they were last set.
Problem #2 above cost me countless time and pain until I tripped over it (via a lengthy process of elimination).
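In other words, something like this (a sketch; the "agent" key and the setName(...) setter are made up for illustration):
// mutate the object, then set it again so the container serializes the new state
HttpSession session = req.getSession();
Agent agent = (Agent) session.getAttribute("agent");
agent.setName("New name");
session.setAttribute("agent", agent); // without this, GAE keeps the old serialized copy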
Have you enabled sessions in your configuration file?
http://code.google.com/intl/en/appengine/docs/java/config/appconfig.html#Enabling_Sessions
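Sessions are off by default on GAE; they are enabled in appengine-web.xml:
<!-- appengine-web.xml -->
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
    <sessions-enabled>true</sessions-enabled>
</appengine-web-app>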
Making the classes Agency and Contact Serializable solves the problem. That means each and every object (nested or otherwise) inside a session attribute must be serializable.
I'm facing an issue with NHibernate performance; can you please suggest some optimizations? Below is a short summary of my application architecture.
I have a Windows service which listens to a messaging bus. On receiving a message, the service creates an object, one of whose properties is the received xml snippet, and saves the message to the DB (using NH). There is a WPF UI with a read-only connection to the DB, and on refresh the UI displays the objects on the screen.
When the UI refreshes, it retrieves the xml and deserializes it, from which the object's properties are derived and bound to the screen.
For example, assume an xml XXX is received by the service: it deserializes the xml, creates the book object and saves it to the DB, where a property/column SCHEMA contains the xml snippet.
The UI, when refreshed, searches all book objects by ID and creates the book objects from the saved xml (yes, the xml is the constructor param).
Now my issue is that the refresh takes more than 2 minutes to display, say, 50 book objects. I analyzed it using the NHibernate profiler and found that the time spent in the DB is negligible, but the time spent creating the entities is proportionally huge (10 ms : 1990 ms). I guess it's due to the fairly large xml snippet and its deserialization.
My question is: how can I improve the performance? I dispose of sessions after every refresh and am not using lazy loading (please note that the time spent in the DB is negligible). On every refresh it's possible that all objects have been updated by downstream systems, or maybe just one of them. Can I implement some sort of caching mechanism in this case?
Thanks in advance for any suggestions.
Regards,
-Mike
The entire list of 50 books could be kept in a singleton class acting as a cache manager. You could also use, say, the Enterprise Library cache, but I would suggest an in-memory cache. If a book gets added, you update the cache. The cache would hold the raw xml, so no deserialization would happen on refresh. You could also update the DB in an asynchronous thread to reduce the time.
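A minimal sketch of the cache-manager idea (written in Java here; a singleton around a ConcurrentDictionary gives the same shape in .NET, and all names are illustrative only):
// in-memory singleton cache keyed by book id, holding the raw xml so the UI
// can refresh without deserializing
public final class BookXmlCache {
    private static final BookXmlCache INSTANCE = new BookXmlCache();
    private final ConcurrentMap<Long, String> xmlById = new ConcurrentHashMap<Long, String>();

    private BookXmlCache() { }

    public static BookXmlCache instance() { return INSTANCE; }

    public void put(Long bookId, String xml) { xmlById.put(bookId, xml); }

    public String get(Long bookId) { return xmlById.get(bookId); } // null => load from the DB
}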
Here is the pseudo code
On the service, whenever I receive a message
public void OnMessage(string message)
{
    // deserialize the message
    DeserializedObject schema = deserializationFactory.Deserialize(message);
    var book = new Book(schema, message);
    // saves the book using a new session
    repository.Save(book);
}
The book object:
public class Book
{
    public virtual long Id { get; set; } // id property referenced by the mapping below

    public DeserializedObject Schema { get; set; }

    private string xml;
    public string Xml { get { return xml; } }

    protected Book() { } // parameterless constructor, required by NHibernate and by the :this() call below

    public Book(DeserializedObject schema, string xml) : this(schema)
    {
        this.xml = xml;
    }

    public Book(DeserializedObject schema) : this()
    {
        this.Schema = schema;
    }

    public virtual XmlDocument XmlSchema
    {
        get
        {
            var doc = new XmlDocument();
            if (Schema != null)
            {
                var serializer = new XmlSerializer(typeof(DeserializedObject));
                var stream = new MemoryStream();
                serializer.Serialize(stream, Schema);
                stream.Position = 0;
                doc.Load(stream);
            }
            return doc;
        }
    }

    public virtual string SerializedSchema
    {
        get { return XmlSchema.OuterXml; }
        set
        {
            if (value != null)
                Schema = value.Deserialize<DeserializedObject>();
        }
    }

    public string Author
    {
        get { return Schema.Author; }
    }
}
Now the mapping for Book (uses FNH):
public class BookMap : ClassMap<Book>
{
    public BookMap()
    {
        LazyLoad();
        Table("Books");
        IdGenerator.Instance.GenerateId(this, "book_id_seq", book => book.Id);
        Map(book => book.SerializedSchema, "SERIALIZED_SCHEMA")
            .CustomSqlType("Clob")
            .CustomType("StringClob");
    }
}
On the UI:
public void OnRefresh()
{
    // In reality the call to the DB runs on a background worker and the records
    // are bound to the grid after a context switch.
    // GetByCriteria creates a new session every time a refresh happens.
    datagrid.DataContext = repository.GetByCriteria(allBooksforToday);
}
The important thing to note here is that the Book type is shared between the service and the UI. However, only the service can write to the DB, whereas the UI can update the trade object (basically the xml) and send it over the messaging bus (again as xml). The service, on receiving it, updates the DB.
The xml size is approximately 20 KB, so loading say 50 books means loading close to 1 MB of data.
Thanks, -Mike