I'm a beginner AngularJS user. At the moment I'm pulling in hard-coded JSON, since the backend and server data aren't ready yet. It seems that in order to pull data, for instance with the very common ng-repeat, I need to know the database structure (as the returned JSON will mirror that structure, right?).
So while I can code independently of the back end, am I correct in assuming that I must know the database structure? For instance, I might want to pull user comment data. If comments live in their own table, I might do ng-repeat='comment in comments' and filter each comment entry for the specific user. Whereas if comments only exist within a user table, it would be ng-repeat='comment in user[0].comments'. I imagine the former is the correct approach, but I honestly have never learned about proper database structure, and it seems like something you must know in order to implement AngularJS properly.
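To make that concrete, here is a minimal hard-coded sketch of the two shapes I have in mind (all names made up):

    // Minimal hard-coded data for the two shapes (all names made up).
    angular.module('demo', []).controller('CommentsCtrl', function ($scope) {
      // Shape 1: comments in their own collection, tagged with a user id.
      // View: <li ng-repeat="comment in comments | filter:{userId: 1}">{{comment.text}}</li>
      $scope.comments = [
        { userId: 1, text: 'First comment' },
        { userId: 2, text: 'A comment by someone else' }
      ];

      // Shape 2: comments nested inside each user.
      // View: <li ng-repeat="comment in users[0].comments">{{comment.text}}</li>
      $scope.users = [
        { id: 1, name: 'Alice', comments: [{ text: 'First comment' }] }
      ];
    });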
Any help is appreciated. I really want to make sure I approach things properly. Thanks!
I don't think you need to (or should) know the database structure. AngularJS is an MVC framework, and a basic principle of that architecture is the separation of concerns. Simply put: don't mix things. More specifically, you're talking about the communication between two systems: a local one (the browser running AngularJS) and a remote one (a server that may or may not be the same one that served the Angular files to the client).
For example, your view should not be accessing your database (if you were working with, say, PHP, you should not have things like mysql_query(...) in a view).
You should also design components to be loosely coupled: make them as independent as possible. Unit tests help you think that way, and AngularJS is particularly unit-test-friendly with Karma. Following this principle, what if you used the Twitter API to show tweets in your AngularJS application? You don't need to know anything about Twitter's internals; there is an API that serves the JSON in a format you can use.
Your backend should provide this (for example, with a façade controller), and you should agree with the backend team what data will be available.
Instead of making your design depend on the database structure, make the backend API depend on your requirements. That way you'll have two loosely coupled systems, and the backend team can change whatever they want without affecting you, for example the DBMS or the structure of the tables.
If you want to pull comments, you might have a remote call ($http or ngResource) in a service or a controller that gets all the comments for a specific user (or for a few users, since you might want to minimize the number of remote calls). The server responds with JSON that represents this (and probably some more things that will be needed soon, like profile picture URLs, user IDs, etc.). Then you put the data you want to expose to the view (a subset of what you fetched from the server) on $scope.
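Something along these lines, for example (a rough sketch; the endpoint and field names are made up and would be whatever you agree on with the backend team):

    var app = angular.module('app', []);

    app.factory('commentService', function ($http) {
      return {
        // e.g. GET /api/users/42/comments returning a JSON array of comments
        getForUser: function (userId) {
          return $http.get('/api/users/' + userId + '/comments')
            .then(function (response) { return response.data; });
        }
      };
    });

    app.controller('CommentsCtrl', function ($scope, commentService) {
      commentService.getForUser(42).then(function (comments) {
        // expose to the view only the subset it actually needs
        $scope.comments = comments.map(function (c) {
          return { text: c.text, author: c.authorName };
        });
      });
    });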
I am working on an app which touches sensitive information, like money.
We have some calculators, and we want to prefill the values with whatever the user entered last. Apart from improving the UX, we don't actually need those values. But we cannot store them in web storage or a cookie for security reasons.
We have
a JS frontend,
an API Gateway backend that is supposed to be "stupid", so it only handles authentication and sends messages to the corresponding services
some services that actually care about the business logic
These possibilities come to mind, and I cannot decide which one I should go with (and, more importantly, why):
Add a catch-all table in the backend that implements cookie-like functionality
Add a specific table to the service it fits best
Use a key-value store in the backend (I don't know much about this; a coworker put it out there)
As I read your requirements, it seems that this is a kind of defaulting that includes some business logic (stupid or smart). Personally, I see defaulting as part of the business logic, and on that basis it belongs to the service that cares about this functionality.
Add a catch-all table in the backend that implements cookie-like functionality
This sounds like a generic solution to a pretty generic requirement. What do you want to achieve with it?
Add a specific table to the service it fits best
Sounds reasonable, especially because you put it where it belongs. Does it have to be a table? Why not calculate or copy the values at runtime?
Use a key-value store in the backend (I don't know much about this; a coworker put it out there)
This is more of a technology decision, but first you need a design decision.
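To make the second option a bit more concrete, here is a hypothetical sketch (the Node/Express stack, route names and in-memory store are only illustrations, not your actual setup):

    const express = require('express');
    const app = express();
    app.use(express.json());

    // Stand-in for a small table or keyed store inside the calculator service:
    // user id -> the values that user entered last.
    const lastInputs = new Map();

    // The calculator service owns the defaulting rule, so it decides what to hand back.
    app.get('/users/:userId/calculator-defaults', (req, res) => {
      res.json(lastInputs.get(req.params.userId) || {});
    });

    app.put('/users/:userId/calculator-defaults', (req, res) => {
      lastInputs.set(req.params.userId, req.body); // a real table or store would go here
      res.sendStatus(204);
    });

    app.listen(3000);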
I am just wondering how good this approach to project architecture is:
1) You have N services that do X stuff, but with one constraint: they don't have their own databases and they cannot access any database directly.
2) For that, I have a DB service which can access the database and perform any action against it.
So the workflow is like this: if any service needs something from a database, it asks the database service for the records.
How good is this kind of architecture? Am I going to run into serious bottlenecks?
Rather than put your entire database behind a single service and single interface, think about providing separate services for different parts of your dataset according to interfaces driven by your high-level business rules and data model (e.g. user account data service, orders data service, audit log data service). That way you can mock/scale/deploy these independent parts differently according to need and more easily change the backend storage if required later (e.g. archived order retrieval from different db). Also because the data managed by a service is of a particular type, certain decisions can be made independently for each service (e.g. caching policy - config-type data could be cached, active orders data probably not).
Initially you can implement all of these interfaces in a single service and then separate later, but the key to this approach is getting the interfaces abstracted and segregated cleanly.
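As an illustration only (the names, schema and injected db clients are made up), the interfaces might be segregated like this even while one service implements all of them:

    // Each data area gets its own narrow interface.
    // db / archiveDb are whatever query clients your stack provides.
    function createUserAccountData(db) {
      return {
        getAccount: (userId) =>
          db.query('SELECT * FROM accounts WHERE id = ?', [userId])
      };
    }

    function createOrdersData(db, archiveDb) {
      return {
        getActiveOrders: (userId) =>
          db.query('SELECT * FROM orders WHERE user_id = ? AND active = 1', [userId]),
        // archived orders could later come from a completely different store
        getArchivedOrder: (orderId) =>
          archiveDb.query('SELECT * FROM archived_orders WHERE id = ?', [orderId])
      };
    }

    function createAuditLogData(db) {
      return {
        append: (entry) =>
          db.query('INSERT INTO audit_log (entry) VALUES (?)', [JSON.stringify(entry)])
      };
    }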
This is a pretty typical architecture. It's a good idea to write your service's data access code against an abstraction so that you can unit test with a mocked version of your data.
At the least, it's a good idea to consolidate your data access code in one place so that you can make changes to it easily.
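For example, a small framework-agnostic sketch (names made up) of service code written against an abstraction, with a unit test handing it a fake:

    // Production code depends only on the ordersData abstraction.
    function createOrderService(ordersData) {
      return {
        totalSpentBy: function (userId) {
          return ordersData.getOrdersFor(userId).then(function (orders) {
            return orders.reduce(function (sum, o) { return sum + o.amount; }, 0);
          });
        }
      };
    }

    // In a unit test, pass a fake instead of the real data access code.
    const fakeOrdersData = {
      getOrdersFor: function () {
        return Promise.resolve([{ amount: 10 }, { amount: 5 }]);
      }
    };

    createOrderService(fakeOrdersData).totalSpentBy(1).then(function (total) {
      console.log(total); // 15
    });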
I am building a mobile app using AngularJS and PhoneGap. The app allows the user to access a large number of data items, which ship with the app in the form of a number of .json files.
One use case is that a user can favorite any of those data items.
Currently, I store the ids of the items that have been favorited in localStorage. It works, and it's great and very simple.
But now I would like to create an online backend for the app. By this I mean that the ids of the favorited items should also be stored on a server somewhere, in some form of database.
Now my question is:
How do I best do this?
How do I keep the localStorage data and online-backend data in sync?
In particular, the user might not have an internet connection at the time when he favorites a data item. Additionally, if the user favorites x data items in a row, I would need to make x update calls to the server DB, which clearly isn't great.
So, how do people do it?
Does Angular have anything build-in for this?
Is there any plugin?
Any other framework?
This very much seems like a common problem that must have a well-known solution?
I think you've almost got the entire solution. All you need to do is periodically send the JSON out to a service that puts it in a database: on app start, load the data from the service if it's available, otherwise use what's in localStorage; then update the server copy on a timer and on app close, whenever you're connected. The service itself can be written in whatever floats your boat (I generally prefer PHP, but Python, Java, Ruby, Perl all work). If you're concerned with merging synchronization changes, you'll need timestamps on the data in localStorage and in the database to make the right call on what should be inserted versus what should be updated.
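A rough sketch of what that could look like with AngularJS on the client (the endpoint names, data shape and "newer copy wins" merge rule are all assumptions you'd adjust):

    angular.module('app', []).factory('favoritesSync', function ($http) {
      var KEY = 'favorites';

      function readLocal() {
        return JSON.parse(localStorage.getItem(KEY) || '{"updatedAt":0,"ids":[]}');
      }

      return {
        // Favoriting only touches localStorage, so it works offline.
        favorite: function (id) {
          var local = readLocal();
          if (local.ids.indexOf(id) === -1) { local.ids.push(id); }
          local.updatedAt = Date.now();
          localStorage.setItem(KEY, JSON.stringify(local));
        },
        // Call this on app start, on a timer, and when the app closes.
        sync: function () {
          var local = readLocal();
          return $http.get('/api/favorites').then(function (response) {
            var remote = response.data; // expected shape: { updatedAt: ..., ids: [...] }
            if (remote.updatedAt > local.updatedAt) {
              localStorage.setItem(KEY, JSON.stringify(remote)); // newer copy wins
            } else {
              return $http.put('/api/favorites', local); // one call, not one per favorite
            }
          });
        }
      };
    });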
I don't think there's a one-size-fits-all solution to the problem. Someone may have crafted a library that handles the different potential scenarios, but its configuration might end up as complicated as just writing the logic yourself.
For my new project I'm planning to serve JSON data from a text file rather than fetching it from the database. My idea is to save a JSON file on the server whenever the admin creates a new entry in the database.
As there is no security concern, will this approach make user access to the data faster, or shall I go with the usual database queries?
JSON is typically used as a way to format the data for the purpose of transporting it somewhere. Databases are typically used for storing data.
What you've described may be perfectly sensible, but you really need to say a little bit more about your project before the community can comment on your approach.
What's the pattern of access? Is it always read-only for the user, editable only by site administrator for example?
You shouldn't worry about performance early on. Worry more about ease of development, maintenance and reliability, you can always optimise afterwards.
You may want to look at http://www.mongodb.org/. MongoDB is a document-oriented store that works with JSON-style documents.
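For instance, a minimal mongo shell sketch (collection and field names are made up):

    // Each entry is stored as a JSON-style document; no separate export step is needed.
    db.entries.insert({ title: "New entry", body: "Entry text", createdAt: new Date() });
    db.entries.find({ title: "New entry" });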
JSON in combination with jQuery is a great option for fast, smooth page updates, but ultimately it will still come down to the same database query.
Just make sure your query is efficient. Use a stored proc.
JSON is just the way the data is sent from the server (a web controller in MVC, or code-behind in C#) to the client (jQuery or JavaScript).
Ultimately the database will be queried the same way.
You should stick with the classic method (a database), because with files you'll face many problems with concurrency and with having too many files to handle.
I think you should go with the usual database queries.
If you use JSON files you'll have to keep them in sync with the DB (which means extra work) and you may face I/O problems (if your site is very busy).
I am currently working within a custom framework that uses the database to set up a Page Object containing the Module, View, Controller, etc. information that a Front Controller uses to handle routing and the like within an MVC (obviously) pattern.
The original reason for handling the pages within the database was that we needed to be able to create new landing pages on the fly from within an admin interface, and we also needed to create onLoad and onUnload events to which other dynamic objects could be attached.
However, reading this post yesterday made me wonder whether we shouldn't move this handling out of the database and make it all file-structure and code driven, like other frameworks, so that pages can be tested without the database being a component.
I am currently deciding whether to scrap the custom framework and go with one of the standard frameworks and extend it (which is the most likely outcome right now). If we do, should we extend that framework to handle page requests through the database like we do now, or should we simply go with whatever routing/handling mechanism comes with the framework?
Usually I'm pretty lenient on what I will allow to go on in a "toy" application, but I think there are some bad habits that should be avoided no matter what. Databases are powerful tools, with reasonably powerful languages via stored procedures for doing whatever you need to have done... but they really should be used for storing and scaling access to data, and enforcing your low level data consistency rules.
Putting business logic in the data layer was common years ago, but separation of concerns really does help with the maintainability of an application over its lifespan.
Note that there is nothing wrong with using a database to store page templates instead of the file system. The line between the two will blur even further in the future, and I have one system where all of the templates live in the database because of problems with the low-budget hosting and how dynamically generated content needed to be saved. As long as your framework can pull a template just as easily from a file or from a field and process it, it isn't going to matter much either way.
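As an illustration only (a JavaScript/Node stand-in with made-up names and schema, regardless of what your framework is actually written in), the point is that both stores expose the same call, so the rest of the framework doesn't care where a template lives:

    const fs = require('fs');
    const path = require('path');

    function fileTemplateStore(baseDir) {
      return {
        get: (name) => fs.promises.readFile(path.join(baseDir, name + '.html'), 'utf8')
      };
    }

    function dbTemplateStore(db) { // db is whatever query client your stack provides
      return {
        get: (name) =>
          db.query('SELECT body FROM templates WHERE name = ?', [name])
            .then((rows) => rows[0].body)
      };
    }

    // The framework is configured with one store or the other and simply calls
    // store.get('landing-page') when it needs markup to render.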
On the other hand, the post from yesterday was about generating the UI-layer elements directly from the database (at least, that's how I read it), not merely about an unusual storage location for templates. That is a deeper concern for the reasons mentioned... the database becomes locked to web apps, and only web apps.
And on the third hand, never take other people's advice too much to heart if you have a system that works well and is easy to extend. Every use-case is slightly different. If your maintainability isn't suffering and it serves the business need, it is good enough.