I am trying to create a forum app using Django as a backend and React as a front end. I want to find out how many people have visited a post created by a user, so that I can store the count as views and list the posts according to popularity.
I am just a student and I have no experience with live websites, so I'm wondering: is it okay to just record the view in the componentDidMount lifecycle method? I'm afraid the same user would then be counted once for every visit, and the post creator could inflate his post's popularity just by repeatedly refreshing his own post.
I would suggest you implement this on the backend, not the front end. I don't know Django well, but there should be some way to know whether that particular post is getting requested. At that point, you'll want to increment the counter for that post.
The problem of course is determining which "views" count as real views. Was it the poster? Was it a robot? Was it a spider? Was it a scraper? Was it the same person who is not the poster viewing it many times?
This last part is not an easy thing to implement, and it would probably take some trial and error before finding the right conditions to get your metrics "right".
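In Django terms, a minimal sketch of the backend approach might look something like this (assuming a Post model with an author and an integer views field; the session check is only a naive guard against repeat counting and won't stop bots or users who clear cookies):

```python
# views.py -- a minimal sketch; Post with `author` and `views` fields is assumed.
from django.db.models import F
from django.http import JsonResponse
from django.shortcuts import get_object_or_404

from .models import Post  # hypothetical app layout


def post_detail(request, post_id):
    post = get_object_or_404(Post, pk=post_id)

    viewed = request.session.setdefault("viewed_posts", [])
    if post_id not in viewed and request.user != post.author:
        # F() makes the increment happen in the database, avoiding race conditions.
        Post.objects.filter(pk=post_id).update(views=F("views") + 1)
        request.session["viewed_posts"] = viewed + [post_id]

    # Note: post.views here may lag the increment by one; refresh if that matters.
    return JsonResponse({"id": post.id, "title": post.title, "views": post.views})
```

Sorting by popularity is then just `Post.objects.order_by("-views")` on the backend, so the front end never has to be trusted with the counting.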
As @Mike suggests above, there are many analytics packages which use sophisticated algorithms to determine "realness", and you may be able to use this data. My understanding is that you want to actually apply the data to sorting and UI for your app, not just view it on the dashboard of your analytics tool. I've never tried to look for one that exposes an API so you can discover what you're interested in programmatically, but they all probably allow you to download structured data about your traffic. The problem with the latter approach is that it creates a delay and a manual step (always something to try to avoid, imo).
I recently read a tweet that suggested that if one wants to avoid headaches in the future of an app, the user table should hold only authentication information, with a user profile table for everything else. That is, if you have bikes and peaches in the system, they should be linked to the user that owns them via the user profile id. The tweet was not clear on what the consequences of skipping the user profile would be. Are there maintainability/scalability repercussions to not following this, especially in a large web app?
Well, don't take it as a dogma, though it isn't completely worthless. Dependency is a problem: if you have a lot of different data that represent a particular user, you'll change the underlying database often. If everything is stored in a single table, you might find yourself doing the repetitive monkey job of "making it work" with your types/ORM and whatever else is involved in the DB <-> RUNTIME communication.
It is all about splitting a complicated task into smaller, less complex subtasks: auth is a self-standing task - one of the most important - and it definitely deserves some dedicated space. However, your app might not be that big, or not that concerned with users, and then it won't be very helpful to split the data into multiple tables. You must develop a deep sense of purpose and measure when it comes down to software design.
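To make the tweet's suggestion concrete, a Django-flavoured sketch (all model and field names are mine, not from the tweet) could look like:

```python
# models.py -- a rough sketch of the "auth table + profile table" split, using
# Django's built-in User only for authentication. Bike and Peach are just the
# placeholder domain objects from the question.
from django.conf import settings
from django.db import models


class UserProfile(models.Model):
    user = models.OneToOneField(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    display_name = models.CharField(max_length=100, blank=True)
    bio = models.TextField(blank=True)


class Bike(models.Model):
    # Domain objects hang off the profile, not the auth user.
    owner = models.ForeignKey(UserProfile, on_delete=models.CASCADE, related_name="bikes")
    model_name = models.CharField(max_length=100)


class Peach(models.Model):
    owner = models.ForeignKey(UserProfile, on_delete=models.CASCADE, related_name="peaches")
    weight_grams = models.IntegerField()
```

The auth table then stays small and stable, and profile-related schema changes don't touch it.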
I have been working on a social app. I'll first explain the problems, and then summarize in the questions below.
In the network, there would be channels, and users. Users can subscribe to these channels, and to other users. This way, we have two sources from which posts can be generated.
Now, we can simply keep one Activity model where we record all the actions, their kind, and what they affect, whether they come from channels or from users, and refer to these while creating a feed for each user.
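For illustration, a schema-agnostic sketch of such an Activity record might look like this (the field names are my guesses, and the actual storage - Datastore, SQL, whatever - is left open):

```python
# A sketch of a single Activity record; field names are assumptions, not from the question.
from dataclasses import dataclass
from datetime import datetime


@dataclass
class Activity:
    actor_id: str        # the user or channel that generated the activity
    kind: str            # "post", "comment", "vote", ...
    target_id: str       # the post or comment it affects
    created: datetime    # when it happened
    score: float = 0.0   # filled in later by the ranking job
```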
I found a solution given in a talk by Brett Slatkin which basically suggests using ListProperty to link each post with each subscriber. But Guido suggests not to use lists if there are going to be more than 1000 elements. So if a channel has more than 1000 subscribers, this will probably run into problems. Even if this were to work --
I want to rank the posts based on popularity (number of votes, comments) and apply some time decay function, much like Reddit. To do so, I will have to keep the Activity in memory, and filter and order it by rank for each user. I'll also need to do this periodically, since new activities will keep occurring and old activities will gain or lose value.
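For reference, one well-known time-decay function is Reddit's "hot" ranking; a small generic sketch follows (the constants are Reddit's, not something tuned for this app):

```python
# A generic sketch of Reddit's "hot" ranking. The 45000-second constant means
# roughly 12.5 hours of age costs one order of magnitude of score.
from datetime import datetime, timezone
from math import log10

EPOCH = datetime(2005, 12, 8, tzinfo=timezone.utc)  # arbitrary fixed reference point


def hot(upvotes: int, downvotes: int, created: datetime) -> float:
    # `created` is expected to be timezone-aware.
    score = upvotes - downvotes
    order = log10(max(abs(score), 1))
    sign = 1 if score > 0 else -1 if score < 0 else 0
    seconds = (created - EPOCH).total_seconds()
    return round(sign * order + seconds / 45000, 7)


# Usage: recompute periodically, store the result on each post/activity,
# then order each feed by this value.
```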
The challenge is to keep the data in memory (for processing the feed as well as to keep things fast). I will have to store a copy of each user's feed to persistent storage, but if the order of posts is going to keep changing, how do I keep track of that in the database?
Also: I have kept my options open -- I will move to AWS if I have to.
To summarize:
Is there a better solution to keep track of subscribers without using Lists? Using something like PostID > SubscriberID in one entity would be very, very expensive and inefficient.
If there's any cost-effective and fast solution to the problem above, how do I deal with the next challenge -- which is to generate a personalized feed? (memory issues - unknown size of memcache)
If I can generate a personalized feed (which will be dynamic and will keep changing), how do I keep it in the database?
I have gone through several articles and I can probably solve the first two problems with AWS, but I am trying to stay away from manual scaling work. If there is no other way, I am willing to move to AWS. Even if I move to AWS, I can't think of a solution to the third problem.
Any thoughts, directions, resources would be helpful! Thanks!
I am working on a group project and we are having a discussion about whether to calculate data that we want from an existing database and store it in a new database to query later, or calculate the data from the existing database every time we need to use it. I was wondering what the pros and cons may be for either implementation. Is there any advice you could give?
Edit: Here is a more elaborate explanation. We have a large database that has a lot of information being submitted to it daily. We are building a system to track certain points of data. For example, we are getting the count of how many times a user does something that is entered in the database. Using this example (our actual idea is a bit more complex), we are discussing two methods of getting the count of actions per user. The first method is to create a database that stores the users and their action count, and query this database every time we need the action count. The second method would be to query the large database and count the actions per user every time we need them. I hope this helps explain. Thoughts?
Edit 2: Two more things that may be useful to point out are 1: I only have read access to the large database, and 2: my ultimate goal is to display this information on a web page for end users.
This is a generic question about optimization by caching. The following was my answer to essentially the same question. Even though that question provided a bunch of different details, none of them were specific enough to merit a non-generic answer either:
The more you want to calculate at query time, the more you want views, calculated columns and stored or user routines. The more you want to calculate at normalized base update time, the more you want cascades and triggers. The more you want to calculate at some other (scheduled or ad hoc) time, the more you use snapshots aka materialized views and updated denormalized bases. You can combine these. Any time the database is accessed it can be enabled by and restricted by stored routines or other api.
Until you can show that they are inadequate, views and calculated columns are the simplest.
The whole idea of a DBMS is to store a representation of your application state as the database (which normalization reduces the redundancy of) and then you query and let the DBMS implement and optimize calculation of the answer. You haven't presented a reason for not doing that in the most straightforward way possible.
Always make sure an application is reading its own personal ("external") database that is a view of "the" ("conceptual") database, so that when you change the implementation of the former (plus the rest of some combined interface) by the latter (plus the rest of some combined mechanisms) your applications do not have to change ("logical independence"). Here the applications are your users' and your trackers'.
Ultimately you must instrument and guesstimate. When it is worth it, you start caching - preferably as much as possible in terms of high-level notions like views and snapshots, and as little as possible in non-DBMS code. One of the benefits of the relational model is that it is easy to describe a straightforward relational interface in terms of another straightforward relational interface. You protect your applications from change by offering an interface that hides secrets of implementation, or which of a family of interfaces is the current one.
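Since the asker only has read access to the large database, the "snapshot / updated denormalized base" flavour of this might look roughly like the sketch below; every table and column name is invented, and sqlite3 merely stands in for whatever engines are actually involved:

```python
# A minimal sketch of the snapshot approach: periodically recompute per-user action
# counts from the large read-only database and store them in a small summary table
# that the web page can query cheaply.
import sqlite3


def refresh_action_counts(big_db: sqlite3.Connection, summary_db: sqlite3.Connection) -> None:
    rows = big_db.execute(
        "SELECT user_id, COUNT(*) FROM actions GROUP BY user_id"
    ).fetchall()
    summary_db.execute("DELETE FROM user_action_counts")
    summary_db.executemany(
        "INSERT INTO user_action_counts (user_id, action_count) VALUES (?, ?)", rows
    )
    summary_db.commit()


# Run this on a schedule (cron, Celery beat, etc.); the web page then reads
# user_action_counts instead of re-scanning the large actions table on every request.
```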
It doesn't matter if you're building an eshop or any other application which uses session to store some data between requests.
If you don't want to annoy the user by requiring him to register, you need to allow him to do certain tasks anonymously where possible (the user really has to have a reason to register).
There comes a problem - if the user then decides to log in with his existing profile, he may already have some data in his "anonymous" session.
What are the best practices of merging these data? I'm guessing the application should merge it automatically where possible or let the user decide where not possible.
But what I'm asking about more is whether there are any resources about how to do this magic effectively in the database (where the session data are usually stored).
I have two basic solutions in my mind:
To keep anonymous session data and just add another "relation" saying what's actually used where and how it's merged
To physically merge these data
We could say that the first solution will probably be more efficient, because the information about the relation will probably mean less data than the data about the user. But it will also mean more effort when reading the data (as we first need to read the relation to get to the actual user data).
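For illustration, a rough Django-style sketch of the first option might look like this (every name here is invented):

```python
# A rough sketch of option 1: keep the anonymous session data where it is and
# simply record which user it now belongs to.
from django.conf import settings
from django.db import models


class AnonymousSessionData(models.Model):
    session_key = models.CharField(max_length=40, db_index=True)
    payload = models.JSONField(default=dict)   # cart items, preferences, ...
    created = models.DateTimeField(auto_now_add=True)


class SessionClaim(models.Model):
    """Links anonymous data to the user who later logged in."""
    session_data = models.ForeignKey(AnonymousSessionData, on_delete=models.CASCADE)
    user = models.ForeignKey(settings.AUTH_USER_MODEL, on_delete=models.CASCADE)
    merged = models.BooleanField(default=False)  # set once physically merged, if ever
```

Reading the user's data then means joining through SessionClaim, which is exactly the extra read cost mentioned above.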
Are there any articles/resources for designing data structures for this particular use case (anonymous + user data)?
An excellent question that any app developer using user data should ask, and, sadly, very few do :(
In fact, there are two completely independent questions here:
Q1 - At what stage require user to sign in/up?
Q2 - Data concurrency and conflict resolution (see below).
And here is some analysis of each question. Please excuse my extra passion, coming from my own "frustrated user" experience. :)
Q1 is a pure usability question. To which the answer is actually obvious:
Avoid or delay forcing the user to sign in as much as possible!
Even the need to save state is not enough of a reason by itself. If I, as a user, am not interested in saving that state, then don't force me to sign in! Please!
The only reason for you (as the website) to justify forcing me to sign in is when I (as the user) want to save my data for later use. Here I speak as a user who has wasted time signing up only to find the site useless. If you want to get rid of too many users, that is the right way. In any other case - please, delay it as much as possible!
Why do so many sites completely disregard such an obvious rule? The possible reasons I can see:
R1- developer friendly vs user friendly. Yes, it is developer friendly to require sign in right away, so we don't need to bother with concurrency (Q2). So we can save developer costs, time etc. But every saving comes at a cost! Which in this case is called User Experience. Which is not necessarily where you would like to look for saving. Especially, since the solution should not be that hard (see below).
R2 - The designer or manager making the decision is an "indoor enthusiast" :) She lives a happy life surrounded by super-fast computers with a super-fast internet connection and can't imagine that signing up can be that hard for any user. So why is it such a big deal? So many reasons:
It breaks the application flow. Sites living in the previous century still replace the whole screen with a sometimes rather lengthy form. Some forms are badly designed, some have erratic instructions, some simply don't work. Some have submit buttons that are, for some reason, disabled in the browser used.
Some form designers have the genius idea of locking certain fields with a barely noticeable change of colour. Then don't show me the field if you don't want me to fill it in!
If the site is serious about the user's data, it must request an Email and must verify it! Why? How else shall I get back to a user who forgot all other credentials? Why verify? What if the user mistyped the email? If I don't verify it, the next time the user tries to recover her password with her correct email, the recovery fails and all data are lost! Obvious, yet there are still sites out there not doing it. Then I need to wait till the verification email is received and click on a, hopefully, well-formatted and uniquely identifiable link that does not break in my browser, nor get some funny characters due to broken encoding detection, making the whole link unusable.
The internet connection can be slow or broken, making every additional step a piece of pain. Even with good connection, it happens here and there that page suddenly takes much longer to load. Also the email may not arrive right away. Then impatient user starts furiously clicking the "resend verification" link. In which case 90% of sites resend their link with new token but also disable all previous tokens. Then several emails arrive in unpredictable order and poor user has to guess in vain, which one (and only one) is still valid. Now why those sites find it so hard to keep several tokens active, just for this case, is beyond my understanding.
Finally, there is still this hard-to-unlearn habit of sites insisting on the so-called "username". So now, besides my email, I have to think hard to come up with a unique username, different from any previous user's! Thank you so much for making it sweet and easy! My own way of dealing with it is to use my email as the username. Sadly, there are still sites not accepting that! And what if some fun type used my email as his username? Not so unrealistic if your email is bill@gates.com. But why not simply use Email and Password to avoid all this mess?
Here some possible guidelines to relieve user's pain:
Only force me to sign in/up if you absolutely need to, and give me a chance to choose not to!
Make it one page form, so I know what I am up to and, needless to say, use as few input fields as possible. Ideally only Email and Password (possibly twice), no Username!
Show your sign-in form as a small window on top of your page without reloading, and allow me to get rid of it with a single click away from that window. Don't force me to look for a "close" button or, even worse, an icon I could confuse for something else!
Account for the user clicking the back/forward and reload buttons. Don't clear the form upon reload! Don't put in a clear button at all - it is too easy to click by accident. The data you are asking me to fill in should not be so long in the first place that I could not re-enter it without the need of "assistance" to clear it.
Now to question Q2. Here we have the well-known problem of conflict resolution that occurs any time two sets of data need to be merged: for instance, the anonymous data and the registered user data, but also whenever two users modify the same data, or the same user modifies it from different devices at different times, or locally stored data conflict with server data, and so on.
But whatever the source is, the problem is always the same. We have two pieces of data, say two objects $obj1 and $obj2, and we need to produce a single merged object $obj3. The logic can be as simple as the rule that the server's object always wins, or that the last modified object always wins, or that the last modified object keys always win, or any more complicated logic. This really depends on the nature of your application. But in each case, all you need to do is write your logic function that takes $obj1 and $obj2 as arguments and returns $obj3.
A solution that will possibly work in many cases is to store timestamp on each object attribute (key) and let the latest changed key win at the moment of synchronisation. That accounts e.g. for the situation when the same user modifies different attributes when being anonymous from different devices.
Imagine I had modified keys A and B on device AA yesterday, then logged in today from device BB to enter another B and saved it to the server, then switched back to my device AA, where I am anonymous, to enter yet another A without changing the old B from yesterday, and then realised I want to log in and synchronise. Then my local B is obviously old and should clearly not overwrite the value of B that I changed more recently on device BB. In this seemingly complicated case, the above solution works seamlessly and effectively. In contrast, putting the timestamp only on whole objects would be wrong.
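A small sketch of that per-attribute last-write-wins idea (the shape of the data is my own assumption; a real app would persist the timestamps alongside each key):

```python
# Per-attribute last-write-wins merge: each key carries the timestamp of its last
# change, and merging keeps the more recently changed value per key.
from typing import Any, Dict, Tuple

Versioned = Dict[str, Tuple[Any, float]]  # key -> (value, last_modified_unix_time)


def merge(obj1: Versioned, obj2: Versioned) -> Versioned:
    merged: Versioned = dict(obj1)
    for key, (value, ts) in obj2.items():
        if key not in merged or ts >= merged[key][1]:
            merged[key] = (value, ts)
    return merged


# The device AA / device BB scenario: the newer A (local) and the newer B (server) both win.
local = {"A": ("new A", 1700000300.0), "B": ("old B", 1699900000.0)}
server = {"A": ("old A", 1699900000.0), "B": ("new B", 1700000000.0)}
assert merge(local, server) == {"A": ("new A", 1700000300.0), "B": ("new B", 1700000000.0)}
```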
Now in some cases, it could make sense to keep both objects, and, e.g. distinguish them by adding extra properties, like in case 1 suggested in Radek's question. For instance, Dropbox adds something like "conflicted copy by user X" to the end of the file. Like in Dropbox case, this is sensible in case of collaboration apps, where users like to have some version control.
However, in those cases, you as developer simply save two copies and let the users deal with that problem.
If on the other hand, you have to write a complicated logic based on user's data, having two different copies hanging around can be a nightmare. In that case, I would split data into two groups (e.g. create two objects out of one). The first group has data representing the state of the application as a whole, that is important to be unique. For that data I would use the conflict resolution as above or similar. Then the second group is user-specific, where I would store both data as two separate entries in the database, properly mark them (like Dropbox does), and then let users deal with the list of two (or more) entries of their project.
Finally, if that additional complication of database management makes the developer uneasy, and since Radek asked to give a resource reference, I want to "kill two flies with one shot" by mentioning the blog entry StackMob offline Sync, whose solution provides both database and user management functionality and so relieves the developer from that pain. Surely there is a lot more info to be found when searching for data concurrence, conflict resolution and the likes.
To conclude, I have to add the obligatory disclaimer, that all written here are merely my own thoughts and suggestions, that everyone should use at own risk and don't hold me responsible if you suddenly get too many happy users making your system crash :)
As I am myself working on an app, where I am implementing all those aspects, I am certainly very interested to hear other opinions and what else folks have to say on the subject.
From my experience - both as a user of sites that require a login, and as a developer working with logged in users - I don't think I've ever seen a site behave this way.
The common pattern is to let a user be anonymous and the first time they do something that would require saving state, they are prompted to login. Then the previous action is remembered and the user can continue. For example, if they try to add something to their shopping cart, they are prompted to login and then after login, the item is in their cart.
I suppose some places would allow you to fill a cart and then login at which point the cart is associated with a concrete user.
I would create a SessionUser object that has the state of the site interaction and one field called UserId that is used to retrieve other things like name, address, etc.
With anonymous users, I would create the SessionUser object with an empty reference for UserId. This means we can't resolve a name or an address, but we can still save state. The actions they are performing, the pages they're viewing, etc.
Once they login we don't have to merge two objects, we just populate the UserId field in SessionUser and now we can traverse an object graph to get name, email, address or whatever else.
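A minimal sketch of that idea (names mirror the answer; everything else is assumed):

```python
# SessionUser: state lives on the session object with an optional UserId, so
# "login" is just filling in that reference rather than merging two records.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SessionUser:
    session_id: str
    user_id: Optional[int] = None           # stays None while anonymous
    cart_items: List[int] = field(default_factory=list)
    pages_viewed: List[str] = field(default_factory=list)

    def attach_user(self, user_id: int) -> None:
        # On login, no merge is needed: we simply point the session at the user,
        # and name/email/address are resolved through user_id when required.
        self.user_id = user_id
```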