Building my own `like` system - database

I am thinking of building my own like system as a way of learning various technologies, including JavaScript, and to gain a better understanding of authentication and XSS.
Use-case:
A unique ID is generated along with a small piece of JavaScript code, for embedding in any website
When a unique user presses this like button, +1 is added to the 'score' of that UID
On the user's profile, display what they have liked
I'm unsure where to start... how would I go about building this system?

Building such a system is easy. Securing it properly, making it flow nicely, and making it easy to integrate is the hardest part. You can ignore those latter three for now. I would start with a JS file that searches the DOM for any element with the ID "mylike" (for example) and inserts a button onto that page. Once a user clicks the button, it simply does an AJAX POST back to your server containing information like the page title and page URL. I think it may be best for your backend to generate the ID, perhaps with an algorithm based on the title + URL.
To know which user liked what, I'd suggest a persistent cookie holding a session variable that links to a user in your back end. Simply read the cookie with JavaScript and send it along with the AJAX request.
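A minimal sketch of that flow might look like the following; the endpoint URL, element ID and cookie name here are placeholders, not something from the original answer:

```javascript
// Hypothetical embed script: drop this file into any page that contains
// <div id="mylike"></div>. The endpoint and cookie name are assumptions.
(function () {
  var container = document.getElementById('mylike');
  if (!container) return;

  var button = document.createElement('button');
  button.textContent = 'Like';

  button.addEventListener('click', function () {
    // Read the (assumed) session cookie so the backend can link the like to a user.
    var match = document.cookie.match(/(?:^|;\s*)session_id=([^;]+)/);

    // Post the page title and URL; the backend can derive the like ID from these.
    fetch('https://likes.example.com/api/like', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({
        title: document.title,
        url: window.location.href,
        session: match ? match[1] : null
      })
    });
  });

  container.appendChild(button);
})();
```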

Related

Fetching data every time page is reloaded

I am creating an eCommerce-type website with React and Laravel, connected through REST APIs. Every time a user visits www.mywebsite.com/products, a REST API is called to get all the products, and it takes about a second to load them. If the user leaves the page and comes back, the same request is made and the same products are loaded again.
My question is: What's the best approach for this situation? Maybe I should somehow fetch the products only once, store them inside localStorage and get them using Context? On the other hand, most of the websites I visit seem to load products instantly, so maybe even the REST API is the wrong approach for this kind of website?
I would suggest checking your DB queries and investigating why this request takes so long. Have a look at the Laravel Telescope package: https://laravel.com/docs/9.x/telescope
In the Requests tab you can follow your requests and see which queries are the most expensive ones. My guess is that you're lazy loading some relations on each product.
Next, I would probably cache the products (except maybe the stock information). Use the built-in caching feature of Laravel or maybe a third-party package like this one
Only then would I go for client-side optimizations. Here you could consider a state management library like Akita.
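If you do get to the client-side step, here is a very rough sketch of the localStorage idea from the question itself; the endpoint, cache key and `setProducts` callback are made-up names (the callback could be a React state setter or a Context updater), and a real app would also need some form of cache invalidation:

```javascript
// Rough sketch: show cached products immediately, then refresh in the background.
const CACHE_KEY = 'products-cache'; // hypothetical key

async function loadProducts(setProducts) {
  const cached = localStorage.getItem(CACHE_KEY);
  if (cached) {
    setProducts(JSON.parse(cached)); // render stale data instantly
  }

  const response = await fetch('/api/products'); // hypothetical endpoint
  const products = await response.json();

  localStorage.setItem(CACHE_KEY, JSON.stringify(products));
  setProducts(products); // swap in fresh data once it arrives
}
```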

Is it done with local storage? Or data binding in React JS?

I am new to React and am developing a React application where I want the user to be able to save their selected values on the frontend, and later, by clicking a button, see which values they selected previously. Now the question is how to provide this functionality. I have read that it could be done either with local storage or with the data-binding concept in React, but I have no clue which one is best to implement in my scenario.
Let me explain to you with the help of a diagram.
(The first image, with Indicator Generation) This is the main page of my React application. Under the "Question & Indicator" section there is a thing called Associated Indicator; these are the values which the user has selected on the frontend side.
(The second image, with detailed user controls) This is the page where all the user controls are defined. Here the user can select values, and when they click "Associate" the selection is associated and shown under the Associated Indicators section. From there, when the user clicks any of the associated indicators, it shows all the values they selected on the front end.
Cheers.
Persisting data in a web app is not related to React, or to any other front-end technology specifically.
Data binding is a concept about how to update your views when certain data changes; it is not related at all to how data should be saved, but rather to how modern front-end technologies like React and Angular render your components.
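To make that distinction concrete, here is a tiny, invented React example: the view updates through data binding, but nothing is persisted, so the value is lost on a page refresh.

```javascript
import React, { useState } from 'react';

// Data binding: typing in the input updates the state, and React re-renders
// the view automatically. Nothing is saved anywhere; refresh the page and
// the value is gone, which is why a separate storage mechanism is needed.
function IndicatorInput() {
  const [indicator, setIndicator] = useState('');

  return (
    <div>
      <input value={indicator} onChange={(e) => setIndicator(e.target.value)} />
      <p>Currently selected: {indicator}</p>
    </div>
  );
}

export default IndicatorInput;
```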
You can save data in a web app using different methods; each one is suitable for certain use cases, so you should use the one that suits you.
For example you can use any of the following:
Cookies: the old way of saving data on the front end. They can stay indefinitely, but can also be deleted if the user decides to delete them or clear their browser data. Cookies get sent to the server with every request the browser makes, and they have size limitations.
IndexedDB: this one is literally like a database inside the browser, really fast when querying data, and it has bigger size limits than both Cookies and LocalStorage. However, it is a really low-level API; I wouldn't recommend using it unless you really need its features. Data can also stay indefinitely, and can also be deleted by the user.
LocalStorage: a newer browser API to persist data, much easier to handle than Cookies and IndexedDB. Data can stay indefinitely, but can also be deleted by the user. It has bigger size limits than Cookies, but smaller than IndexedDB.
SessionStorage: data is saved per tab and cleared when the page session ends.
Finally, if you want data that persists forever and that the user can't delete, you have to build your own solution in the backend.
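As a quick illustration of the LocalStorage option above, saving and reloading the selected indicators could look like this (the key and function names are just examples):

```javascript
// Save the user's selected values when they click "Associate".
function saveSelection(selectedIndicators) {
  localStorage.setItem('associatedIndicators', JSON.stringify(selectedIndicators));
}

// Read them back later, e.g. when the user clicks an associated indicator.
function loadSelection() {
  const stored = localStorage.getItem('associatedIndicators');
  return stored ? JSON.parse(stored) : [];
}
```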

SPA - When to use Location Based or Internal State?

Hopefully this is not too opinionated but I am wondering if there are best practices regarding location-based SPAs and Internal based SPAs.
Internal based SPAs - track state internally
Location-based SPAs - URL location / sessions, etc.
In one part of my site, if a user pastes in the URL, the search results will show.
However, I'm not sure whether I should be doing it for areas like the admin section.
For instance, I allow users to add inventory like this:
admin -> add new Inventory -> choose center -> choose subcategory -> add inventory.
This is pretty much the flow. However, if I made it location-based, then on the "add inventory page" I would have to set the
company
center
subcategory
which would require AJAX requests to get all the data, and basically on every page I would have to do that data setup. It just seems like a lot of work for every page to be fully set up when it is reached from a URL.
I am already using things like react-router for my routing, but at the end of the day I would have to make sure that everything is always set up so that the page can basically run standalone.
So maybe in some situations it would be better to somehow just redirect users back to the root of everything instead?
I would recommend using React Context to solve your problem.
Once authorized, you can set the Provider value to the user or their permissions; then in each componentDidMount or render() (or whatever lifecycle hook you choose) you can check the user's permissions and allow functionality based on that.
Context values persist across routing, so you won't have to worry about updating them all the time (although you probably should if your user has a timed session).
So it is entirely possible, and probably more effective, to use a mix of both internal and location-based state management.
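A bare-bones sketch of that Context setup might look like the following; the component names and permission strings are invented, and it uses hooks rather than lifecycle methods just to keep the example short:

```javascript
import React, { createContext, useContext } from 'react';

// Holds the authenticated user and their permissions for the whole app.
const AuthContext = createContext({ user: null, permissions: [] });

function App({ user }) {
  return (
    <AuthContext.Provider value={{ user, permissions: user.permissions }}>
      <AddInventoryPage />
    </AuthContext.Provider>
  );
}

// Any routed page can check permissions without setting everything up again.
function AddInventoryPage() {
  const { permissions } = useContext(AuthContext);

  if (!permissions.includes('inventory:add')) {
    return <p>You are not allowed to add inventory.</p>;
  }
  return <p>Add inventory form goes here…</p>;
}
```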

URL handling in a Hypermedia (HATEOAS) driven AngularJS application

We are looking for some advice on handling URLs (and the state related to each URL) in a web application backed by a HATEOAS REST API, more specifically on
how to avoid having the web application URLs coupled with the REST API URLs
how to handle multiple resources in a single view
But let me first provide some more context:
We are building an Angular web application on top of a REST layer with Hypermedia constraint. (Note: I prefer simply using the term 'Hypermedia (constraint)' over HATEOAS).
As dictated by the Hypermedia constraint, the available actions and links in the application at any point in time are provided by the REST API. So the web application should not contain any hardcoded urls of the REST API, except for the 'root' (assuming that concept really exists in a REST API).
On the other hand, each page in the web application needs to be bookmarkable. So we cannot create a black-box application (with a single url and all state changes handled in the SPA without changing the URL). This means the web application also has its URL space, which needs somehow to be mapped to the REST API URL space. Which is already a conflict with the Hypermedia idea.
In the Angular application we use UI Router for handling application state. Here is how we got it working:
We only define states, no URLS
We defined a $urlRouterProvider.otherwise handler that will map the current web application URL to the corresponding REST API URL, retrieve the representation of the resource that corresponds with that REST URL, and pass it to the controller (in $stateParams).
The controller can then use the data (and links and actions) in the representation, just like it would if it would have made the REST call itself (or through a service)
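A stripped-down sketch of that approach could look like the following; the state name, the restClient service, and the URL-mapping rule are placeholders for whatever the real application uses:

```javascript
angular.module('app').config(function ($stateProvider, $urlRouterProvider) {
  // States are defined without URLs; the representation arrives as a parameter
  // and is therefore available to the controller via $stateParams.
  $stateProvider.state('resource', {
    params: { representation: null },
    templateUrl: 'resource.html',
    controller: 'ResourceController'
  });

  // Every URL the router does not recognise is mapped to the REST API.
  $urlRouterProvider.otherwise(function ($injector, $location) {
    var $state = $injector.get('$state');
    var restClient = $injector.get('restClient'); // hypothetical service wrapping $http

    // Translate the web app URL into the corresponding API URL, fetch the
    // representation, and hand it to the state.
    restClient.get('/api' + $location.path()).then(function (representation) {
      $state.go('resource', { representation: representation });
    });
  });
});
```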
So far so good (or not really) because there are some downsides on this approach:
The Web application URLs are mapped to the REST API URLs, so both URL spaces are coupled, which conflicts with one of the basic assumptions of using Hypermedia constraint: we cannot change the REST API URLs without having to change the web application.
In the $urlRouterProvider.otherwise handler we retrieve the representation of the current web app URL. But in some cases we have two resources in a single view (using UI Router nested states): for example a list of items and a detail of a single item. But there is only a single URL, so only the representation of the item detail is retrieved and the list of items remains empty.
So we would love to hear some suggestions on how we could improve our approach to handling the two URL spaces. Is there a better way to make the REST API dictate the (available) behaviour of the web application and still have bookmarkable URLs in the web application? Because right now we have a kind of hybrid approach that does not feel completely right.
Thanks in advance.
Regards,
Luc
That's a tough setup. Roughly, you want bookmarks into your API, and RESTful systems somewhat discourage bookmarks.
One possible solution is a "bookmark service" that returns bookmark (bit.ly-like) URLs for the resource currently being presented. These are guaranteed to be forwards compatible because, even as you change the canonical URL structure, the bookmark service can always translate the bit.ly-like URL into the canonical URL. Sounds complicated, but we see this all the time and we call them SEO URLs, e.g. /product-name/ maps to /products/ today, but may map to /catalog/old-products/ tomorrow.
How you match that up to a UI that shows two resources, the first being a list of summary-like resources and the second being a specific resource, gets really tricky. I would expect such a page to contain the state of what it's displaying in its URL (probably in the fragment). Since it's [likely] the controller that processes such commands, it probably needs both (the list resource and the expanded resource) as input. I bet the URL would look something like:
list=http://path/to/list/results&expand=http://self/link/of/path
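With a fragment like that, the controller would pull both resource URLs out and fetch each one; a rough sketch (parameter names taken from the URL above, and in practice the values would need to be URL-encoded):

```javascript
// Parse "list=...&expand=..." out of the URL fragment and load both resources.
function loadFromFragment($http, $scope) {
  var params = new URLSearchParams(window.location.hash.slice(1));
  var listUrl = params.get('list');     // URL of the list resource
  var expandUrl = params.get('expand'); // self link of the expanded resource

  if (listUrl) {
    $http.get(listUrl).then(function (res) { $scope.items = res.data; });
  }
  if (expandUrl) {
    $http.get(expandUrl).then(function (res) { $scope.detail = res.data; });
  }
}
```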
The last part is making sure that keeps working going forwards. Again, this is the bookmark problem. What I might suggest, if you don't want to build a bookmark service, is that since you want such bookmarks, you need to transition people to the new URLs. When a request is made to http://path/to/list/results and you want to switch that over, you should 301 redirect it to the new canonical URL, and the app should update the bookmark. Such a redirect can include a &flag=deprecate_message param to trigger a message in the UI that the client's bookmark is old and should be replaced. Alternatively, the response can be internally forwarded, with the deprecation flag and the canonical (or latest) link included in the response to the old URL. This allows a phased transition.
In summary: I have yet to see HATEOAS be a cure-all for backwards and forwards compatibility, but it's much better than the existing techniques. That said, you must still make decisions in v1 of your API about how you want your users to move to v2.

Single Server request per page vs SPA Application

I had the idea of making an SPA using AngularJS and then just sending AJAX updates to the server when I need to.
My initial idea was to make the client application fly, but if I have to do an AJAX round trip to the server, I think the time would be approximately the same as requesting a single web page.
Requesting a page just involves more bytes of data; it's not like I'm requesting 20 resources as in this article: https://community.compuwareapm.com/community/display/PUB/Best+Practices+on+Network+Requests+and+Roundtrips
I would be requesting a page or resource per request.
So in the end, even if I create my client-side application as an SPA using AngularJS, these requests (which would have to be synchronous and show a "please wait" message until they return, since I don't want the user to take more actions before I make sure their request passes validation and is processed correctly) would take some time and make the user wait, about the same time as requesting a full page.
I think SPA pages would be very useful if I had something like a wizard in my app with multiple pages/steps that submits its results to the server at the end, which I don't.
Also found this article:
https://help.optimizely.com/hc/en-us/articles/203326524-AngularJS-Backbone-js-and-other-Single-Page-Applications
One of the biggest advantages of Single Page Apps is that they reduce data transfer. As a result, pages after the initial loading usually can be displayed faster and seem more interactive.
But I don't believe this last quote is really true.
Am I right, or is there a way that I'm not seeing to build an application that would look like it's executing locally?
I know people will start saying "it depends on what you want", but let's focus on this scenario where there are no wizards.
Whatever you said is right. But most of the frameworks you might pick (Angular, Backbone) are going to cache the HTML templates in the browser, so rendering will be pretty fast compared to normal applications. Traditional apps have to fetch the HTML from the server for each request, which is time-consuming.
Hope this helps you!!!
If you want to go through that synchronous server-side validation step for each page request, then there is probably no big advantage to using AngularJS.
If you are requesting a page and then manipulating that page's contents once it's loaded, you might want to consider AngularJS. A good example would be requesting a page that displays a list of items. Now let's say we want to search that list or order it in different ways. Rather than using AJAX to call the server to filter the list and then re-render it, it could be much faster to use AngularJS to filter and re-render the list without making any further requests to the server.
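For example, filtering an already loaded list entirely in the browser might look like this (a minimal sketch; the controller and property names are invented):

```javascript
angular.module('app').controller('ItemListController', function ($scope) {
  $scope.items = [];  // filled once, from the initial page load or a single AJAX call
  $scope.query = '';  // bound to a search input with ng-model

  // Re-filter the already loaded list without another server request.
  $scope.filteredItems = function () {
    var q = $scope.query.toLowerCase();
    return $scope.items.filter(function (item) {
      return item.name.toLowerCase().indexOf(q) !== -1;
    });
  };
});
```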

Resources