I want to keep count of some kind of achievements for users in a community based website.
The idea is to give achievements for logging in 5 days in a row, or once every weekend for an entire month.
I'm also going to give achievements for reaching 100 posts, but that one is easy to determine. The time-based examples I just gave are a little harder, I think.
How can I make some kind of generic system to keep count of these metrics per user? Or will I end up with a big table with fields such as "every_weekend_for_month" and "5_days_in_a_row", where once those integers reach 4 and 5 the achievement has been earned? But then, for both fields, I also have to keep track of the last weekend/day.
You will need to track all data that is (even partially) required to get the achievement.
For the achievements around logging in, you need to track each login once per day, having a table like:
user_id | login
1 | 2013-07-20
1 | 2013-07-19
1 | 2013-07-16
2 | 2013-07-20
...
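A minimal table definition for this (a MySQL-flavoured sketch; names assumed to match the example above) could be:
CREATE TABLE tracking_user_login (
    user_id INT NOT NULL,
    login   DATE NOT NULL,           -- calendar day of the login
    PRIMARY KEY (user_id, login)     -- at most one tracked login per user per day
);
The composite primary key is what guarantees each login day is only stored once per user.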
Whenever the tracking event is triggered, you also check for the achievements.
event onLogin {
    // get the last 4 logins before the current (today's) login
    statement = (
        SELECT login FROM tracking_user_login
        WHERE user_id = :user_id
        ORDER BY login DESC
        LIMIT 1, 4      -- skip today's login, take the 4 before it
    );
    statement.execute();

    // did the user even log in at least 4 times before today?
    if (statement.rowCount == 4) {
        date lastLogin = todaysLogin;
        int consecutiveLogins = 1;

        // walk backwards through the previous days
        foreach (row in statement) {
            if (row.login == (lastLogin - 1day)) {
                consecutiveLogins++;               // one more consecutive day
                lastLogin = (lastLogin - 1day);    // prepare the next comparison
            } else {
                // streak interrupted, ignore the rest
                break;
            }
        }

        // enough to achieve something?
        if (consecutiveLogins >= 5) {
            user.addAchievement('5 CONSECUTIVE LOGINS');
        }
    }
}
You can basically add all achievements around login in this event.
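For example, the "at least once every weekend for an entire month" achievement mentioned in the question could be checked in the same event with a query along these lines (a MySQL-flavoured sketch; the month boundaries and user id are placeholders, and YEARWEEK(login, 1) is used so that a Saturday and the following Sunday count as the same weekend):
SELECT COUNT(DISTINCT YEARWEEK(login, 1)) AS weekends_with_login
FROM tracking_user_login
WHERE user_id = :user_id
  AND login BETWEEN '2013-07-01' AND '2013-07-31'   -- the month to check
  AND DAYOFWEEK(login) IN (1, 7);                   -- 1 = Sunday, 7 = Saturday
If the count equals the number of weekends that month has (4 or 5), the achievement is earned.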
You could track all logins and use that data to derive the achievements, but storing and searching individual logins can be slow. If you have a very large user base and lots of logins, it may not be trivial to do these counts on every login.
If you want it to be faster, you could track the last login date for a particular achievement and then increment a counter, which sounds like what you were thinking.
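A sketch of that counter approach (MySQL-flavoured; the table and column names are assumptions): keep one progress row per user per achievement, and on each login either extend or reset the streak.
CREATE TABLE achievement_progress (
    user_id     INT NOT NULL,
    achievement VARCHAR(50) NOT NULL,    -- e.g. 'consecutive_logins'
    last_login  DATE NOT NULL,
    counter     INT NOT NULL DEFAULT 1,
    PRIMARY KEY (user_id, achievement)
);

-- On login: extend the streak if the previous login was yesterday,
-- restart it otherwise, and change nothing if the user already logged in today.
INSERT INTO achievement_progress (user_id, achievement, last_login, counter)
VALUES (:user_id, 'consecutive_logins', CURDATE(), 1)
ON DUPLICATE KEY UPDATE
    counter    = IF(last_login = CURDATE(), counter,
                    IF(last_login = CURDATE() - INTERVAL 1 DAY, counter + 1, 1)),
    last_login = CURDATE();
Once counter reaches 5, the achievement can be granted without scanning the full login history.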
Related
I am using Firebase purely to integrate a simple ticket buying system. I would think this is a very common scenario, and I'm wondering what the solutions are. I have an issue with the write rate limit: it means I can't keep the stock count updated. Because of Firebase's limit of roughly one sustained write per second to a single document and the way transactions work, the transactions keep timing out when there is a large burst of ticket purchases at one point in time.
For example:
Let's say we have a simple ticket document like this
{
    name: "Taylor Bieber Concert",
    stock: 100,
    price: 1000
}
I use a Firestore transaction server side that does roughly the following:
await db.runTransaction(async (t) => {
    const ticket = (await t.get(ticketRef)).data();            // get the data of the ticketRef doc
    if (ticket.stock <= 0) return;                             // check the stock is more than 0
    // FieldValue comes from the firebase-admin Firestore SDK
    t.update(ticketRef, { stock: FieldValue.increment(-1) });  // update the document and remove 1 stock
});
The transaction and functionality all work; however, if I get 20-100 people trying to buy a ticket as it is released, it seems to go into contention and a bunch of the requests time out...
Is there a way to avoid these timeouts? Some sort of queue or something?
I have tried using transactions server-side in Firebase Functions to update the stock value; when many people try to purchase the product simultaneously, it leads to the majority of the transactions being locked out / aborted with code 10.
I have a table called Trip where users can create trips, and on the edit screen I have a button that users can click to create Legs for the selected trip. My question is: how do I make a field in the TripLegs domain auto-increment?
So let's say the user creates four trip legs; the stop number field (the one I want to auto-populate) would be
1
2
3
4
If the user then goes back and deletes trip leg 2, how do I change the stop numbers of the remaining three legs to
1
2
3
instead of
1
3
4
I have to agree with Vahid's comment on the original question. My preferred approach would be to dynamically set that value as a transient value on the domain after sorting the collection by some criteria.
If you wanted to maintain the same sort order every time based on which Leg was created sequentially, you could add a 'Created_Date' column to your table and sort based on that value when you get the list.
Alternatively, if you would really like to store the stop_number value (which there are plenty of reasons why you might want to) you might want to consider the following approach:
Override the default Delete action for TripLegs:
class TripLegController {
    def delete = {
        def tripLegId = params.tripLegId as Long
        def tripLeg = TripLeg.get(tripLegId)
        // fetch the trip's other legs, ordered by their current stop number
        def otherLegs = TripLeg.findAllByTripAndIdNotEqual(tripLeg.trip, tripLegId,
                [sort: 'stopNumber', order: 'asc'])
        tripLeg.delete(failOnError: true)
        // renumber the remaining legs so the stop numbers stay contiguous
        def stopNum = 1
        otherLegs.each { leg ->
            leg.stopNumber = stopNum
            leg.save()
            stopNum++
        }
    }
}
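If you would rather do the renumbering directly in the database after the delete, a MySQL-style user-variable statement roughly like this could achieve the same result (the table and column names here are assumptions, and :tripId is a placeholder):
SET @n := 0;
UPDATE trip_leg
SET stop_number = (@n := @n + 1)   -- hand out 1, 2, 3, ... in order
WHERE trip_id = :tripId
ORDER BY stop_number;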
I have a system where people can pick some stocks and it values their portfolios, but I'm having trouble doing this efficiently on a daily basis because I'm creating entries for days that don't have any changes (think of it as measuring the values while keeping version control, so I can track changes to the way the portfolio is designed).
Here's an example (each day's portfolio, with stock name and weight):
Day1:
ibm = 10%
microsoft = 50%
google = 40%
Day5:
ibm = 20%
microsoft = 20%
google = 40%
cisco = 20%
I can measure the value of the portfolio on day 1 and understand that I need to measure it again on day 5 (when it changed), but how do I measure days 2-4 without recreating day 1's entry in the database?
My approach right now (which I don't like) is to create a temp entry in my database whenever someone changes the portfolio; then at the end of the day, when I calculate the values, I use the temp entry if there is one, otherwise I create a new entry (for days 2-4) using the previous day's data. The issue is that, since the data often doesn't change, I'm creating entries that are basically duplicates. The catch is: my stock data is all daily. I also thought of taking the portfolio and, if it hasn't been updated in 3 days, finding the returns of the last 3 days for each stock, but I wasn't sure if there was a better solution.
Any ideas? I think this is a straightforward problem, but I just can't see an efficient way of doing it.
Note: in finance terms it's called creating a NAV, and most firms do it the inefficient way I'm doing it, but that's because the process was created about 50 years ago and hasn't changed. I think this problem is very similar to version control, but I can't seem to come up with a solution.
In storage terms it makes most sense to just store:
UserId - StockId1 - 23% - 2012-06-25
UserId - StockId2 - 11% - 2012-06-26
UserId - StockId1 - 20% - 2012-06-30
So you can see that stock 1 went down on the 30th. Now, if you want to know the StockId1 percentage on the 28th, you just select:
SELECT *
FROM stocks
WHERE user_id = 1                      -- column names assumed from the rows above
  AND stock_id = 1
  AND datecolumn <= '2012-06-28'
ORDER BY datecolumn DESC
LIMIT 0,1
If it gives nothing back, you did not hold it yet; otherwise you get the last known position back.
BTW, if you need, for example, a graph of stock 1, you could left join against a table full of dates. Then you can fill in the gaps easily.
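A sketch of that join (assuming a calendar table with one row per date; column names are made up to match the rows above):
SELECT c.day,
       s.percentage           -- NULL on days without a stored position
FROM calendar c
LEFT JOIN stocks s
       ON s.datecolumn = c.day
      AND s.user_id = 1
      AND s.stock_id = 1
WHERE c.day BETWEEN '2012-06-25' AND '2012-06-30'
ORDER BY c.day;
Days without a stored position come back as NULL and can then be filled forward with the last known value, which is what the query from the post linked below does.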
Found this post here for example:
SET @n := NULL;                       -- make sure the variable starts out empty
UPDATE mytable
SET number = (@n := COALESCE(number, @n))
ORDER BY date;
SQL QUERY replace NULL value in a row with a value from the previous known value
Since GAE moved to the new pricing model at the start of last week, I have been wrestling with exceeding my quota of Datastore read and write operations. I'm not sure whether Google counts all column updates made in a single write as one write operation, or whether every column update is counted as a separate write.
If the latter is true, could I get around this by having one update function that updates the 6 columns passed in the parameters, or will I still get charged for 6 updates?
Here is my existing code, used to update a player's score (rating) and the other details at the same time. At the moment I always populate name, email, rating, won, played and achievements with values from the client. One solution may be to only send these from the client side when they have changed value.
Long key = Long.valueOf(updateIdStr);
System.out.println("Key to update: " + key);
PlayerPersistentData ppd = null;
try {
    ppd = pm.getObjectById(PlayerPersistentData.class, key);
    // for all of these, make sure we actually got a value via
    // the query variables
    if (name != null && name.length() > 0) {
        ppd.setName(name);
    }
    if (ratingStr != null && ratingStr.length() > 0) {
        ppd.setRating(rating);
    }
    if (playedStr != null && playedStr.length() > 0) {
        ppd.setPlayed(played);
    }
    if (wonStr != null && wonStr.length() > 0) {
        ppd.setWon(won);
    }
    if (encryptedAchievements != null
            && encryptedAchievements.length() > 0) {
        ppd.setAchievements(achievements);
    }
    if (email != null && email.length() > 0) {
        ppd.setEmail(email);
    }
    resp.getWriter().print(key);
} catch (JDOObjectNotFoundException e) {
    resp.getWriter().print(-1);
}
}
The number of writes you are charged for depends on your entity. In general, you are charged for 1 write for the entity, and 1 write for each index update. Each indexed property is included in the ascending and descending single-property indexes, so there's a minimum of 2 writes per indexed entity, plus any writes for composite (user-defined) indexes.
When updating an existing entity, you're charged for the diff of the old indexes and the new ones. So if you modify one property, you'll be charged for the entity write, plus 4 writes per property (deleting the old value and inserting the new one) for the built-in indexes, and likewise for any composite indexes.
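As a concrete illustration of the per-operation model described above (assuming the player entity has only the built-in indexes and no composite ones): updating just the rating property costs 1 write for the entity plus 4 index writes (delete and insert for both the ascending and descending index on rating), i.e. 5 writes; sending the other five properties unchanged in the same call adds nothing, but actually changing all six indexed properties would cost 1 + 6 x 4 = 25 writes.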
Note the change in pricing structure going into effect July 1st, 2016, from per-operation to per-entity charging. This changes how you think about writing efficiently (cost-wise) to Datastore.
New Cloud Datastore Pricing Starting July 1st, 2016
On July 1, 2016, Google Cloud Datastore pricing will change from
charging per operation to charging per entity. This much simpler
pricing means it will cost significantly less to use the full power of
Google Cloud Datastore.
For example, in the current pricing, writing a new entity with 1
indexed property would cost 4 write operations. In the new pricing, it
would cost only 1 entity write. Similarly, deleting this entity in the
current pricing would cost 4 write operations, but in the new pricing
it would cost only 1 entity delete.
I've got a requirement that I believe must occur very frequently around the world. I have two records that are linked together, and whenever a change is made to them a new pair of records is to be created that retains the same link.
The requirement I'm working on has to do with the insurance industry which requires me to deactivate the current insurance policies and re-activate them in a new row in order to show the history of changes made to the insurance policies. When they are re-created they still need to be linked together.
An example of how this process is intended to work from the view of rows in a database:
Insurance Id, Insurance Type, Master Insurance Id, Status
1, Auto Insurance, null, Active
2, Wind Screen Insurance, 1, Active
Note in the above how the link between these policies is denoted by the Master Insurance Id of the second row pointing to the Insurance Id of the first row.
In the code I am writing I am processing each of the policies one at a time so after the first step I have the following:
1, Auto Insurance, null, Inactive
2, Wind Screen Insurance, 1, Active
3, Auto Insurance, null, Active
When I process the second policy I get the following:
1, Auto Insurance, null, Inactive
2, Wind Screen Insurance, 1, Inactive
3, Auto Insurance, null, Active
4, Wind Screen Insurance, 1, Active //needs to be 3 not 1
You'll notice that when I create the new Wind Screen Insurance, since we copy the old row, we end up with its Master Insurance Id pointing to the inactive row.
In order to get around this, I have to keep track of the master insurance id of the previous policy that was processed, which has led to the following code:
int masterInsuranceId = -1;
foreach(Policy policy in policyList)
{
    //copy the old policy so the new policy has
    //the same details as the old one
    Policy newPolicy = policyManager.Copy(policy);

    //if the new policy is not a master policy, point it at the
    //new master id recorded on the previous iteration
    if(newPolicy.MasterInsuranceId.HasValue)
    {
        newPolicy.MasterInsuranceId = masterInsuranceId;
    }

    //save the details of the new policy
    policyManager.SavePolicy(newPolicy);

    //record the master id so we can update the master id
    //reference on the next policy
    if(newPolicy.MasterInsuranceId == null)
    {
        masterInsuranceId = newPolicy.Id;
    }
    else
    {
        masterInsuranceId = -1;
    }

    //inactivate the current policy
    policy.Status = Inactive;
    policyManager.UpdatePolicy(policy);
}
Does anyone know how this can be simplified? What is the best way to ensure two records will remain linked to each other even as a history of the changes is recorded for each change made to the record?
What sort of database schema are you using? Usually, this is where the relationship should be stored, and I think this should be handled at the data level rather than in code.
Here's a very simplified recommendation
insurance (<insurance_id>, name, description)
insurance_item (<item_id>, <insurance_id>, name, description)
insurance_item_details (<item_id>, <policy_id>, when_changed)
insurance has a one-to-many relationship with insurance_item. An insurance_item has a one-to-many relationship with insurance_item_details. Each row in insurance_item_details represents a change in the policy.
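In DDL terms the recommendation might look roughly like this (a sketch; the column types and constraints are assumptions):
CREATE TABLE insurance (
    insurance_id INT PRIMARY KEY,
    name         VARCHAR(100),
    description  TEXT
);

CREATE TABLE insurance_item (
    item_id      INT PRIMARY KEY,
    insurance_id INT NOT NULL,
    name         VARCHAR(100),
    description  TEXT,
    FOREIGN KEY (insurance_id) REFERENCES insurance (insurance_id)
);

-- one row per change, instead of copying whole policy rows
CREATE TABLE insurance_item_details (
    item_id      INT NOT NULL,
    policy_id    INT NOT NULL,
    when_changed TIMESTAMP NOT NULL,
    FOREIGN KEY (item_id) REFERENCES insurance_item (item_id)
);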
This way, SQL can quickly retrieve the latest two items:
SELECT *
FROM insurance_item_details, insurance_item, insurance
WHERE insurance_item_details.item_id = insurance_item.item_id
  AND insurance_item.insurance_id = insurance.insurance_id
ORDER BY when_changed DESC
LIMIT 2
Or you can even retrieve the history.
(The SQL has not been tried)
So the idea is that you don't duplicate insurance_item -- you have another table to store the elements that would change, and slap a timestamp on it to represent that change as a relationship.
I'm not a SQL guru (unfortunately), but all you need to do is insert into the insurance_item_details table instead of making copies. From the way it looks, making copies as in your original example seems to violate 2NF, I think.
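For instance, recording a change would then be a single insert along these lines (the values are placeholders):
-- add a new detail row for item 2 instead of copying the whole policy row
INSERT INTO insurance_item_details (item_id, policy_id, when_changed)
VALUES (2, 1, NOW());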
If you had a bad code design and you needed to make a change, would you refactor? Then why would you not consider refactoring a bad database design? This is something that is more easily handled in the database through good design.
If you are working in the insurance industry, which is data intensive, and you are not strong in database design and query skills, I would suggest you make it a priority to become so.
Thanks to everyone who has provided answers. Unfortunately, due to conditions at work I'm unable to implement database changes and am stuck trying to make do with a coding solution.
After spending some time over the weekend on this problem I have come up with a solution that I believe simplifies the code a bit even though it is still nowhere near perfect.
I extracted the functionality out into a new method and pass in the master policy that I want the new policy linked to.
Policy Convert(Policy policy, Policy masterPolicy)
{
    Policy newPolicy = policyManager.Copy(policy);

    //link the policy to its master policy
    if(masterPolicy != null)
    {
        newPolicy.MasterPolicyId = masterPolicy.Id;
    }
    policyManager.SavePolicy(newPolicy);

    //inactivate the current policy
    policy.Status = Inactive;
    policyManager.UpdatePolicy(policy);

    return newPolicy;
}
This allows me to then loop through all the policies and pass in the policy that needs to be linked, as long as the policies are sorted in the correct order... which in my case is by start date and then by master policy id.
Policy newPolicy = null;
foreach(Policy policy in policyList)
{
    Policy masterPolicy = policy.MasterPolicyId.HasValue ? newPolicy : null;
    newPolicy = Convert(policy, masterPolicy);
}
When all is said and done, it isn't all that much less code but I believe it is much more understandable and it allows individual policies to be converted.