I provide a form for users to upload their own data. I submit the form via AJAX and then parse the data to create numerous models (one per row in the uploaded CSV).
Now I want to create those models in a predefined collection.
I can use add, which takes an array of models, but unfortunately it does not send anything to the server. I know I can iterate and call .create for each model, but if I have 10k models that would mean 10k calls, which sounds unreasonable. Did I miss anything?
The other way is to accept multiple models at the server via a plain .ajax call and then add them manually to the collection for UI rendering.
Looking for the best route. Thanks.
Backbone and REST simply do not cover all real-world use cases, such as your bulk-create example. Nor do they have an official pattern for bulk delete, which is also extremely common. I am baffled as to why they refuse to address these extremely common use cases, but in any case, you're left to your own good judgement here. I would suggest adding a bulkSave or import method to your collection. That should send an AJAX POST request with your CSV form data to the server; the server should save the info and, if all goes well, return a JSON array of the newly created models. Your collection should take that JSON array from the POST response and pass it to reset (and parse as well if you need special parsing).
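A minimal sketch of such a method, assuming the collection lives at /items and the server exposes a hypothetical /items/import endpoint that answers with the created records:

    var Items = Backbone.Collection.extend({
        url: '/items',

        // Bulk create: POST all parsed rows in one request. The server is
        // expected to reply with a JSON array of the newly created records.
        bulkSave: function (records) {
            var self = this;
            return Backbone.ajax({
                url: this.url + '/import', // placeholder endpoint
                type: 'POST',
                contentType: 'application/json',
                data: JSON.stringify(records)
            }).done(function (created) {
                self.reset(created, { parse: true });
            });
        }
    });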
Definitely don't do a POST request for each model (row in your CSV), especially if you expect 10K models. To be clear, though, that pattern wouldn't be completely terrible for a few dozen models if your UI shows real-time progress and error handling on a per-record basis ("23 of 65 saved", for example).
I like the pragmatic approach of @Peter Lyons, but another idea could be to transform your non-REST functionality into REST functionality.
What you want is to create a bunch of Models at once. REST doesn't allow creating multiple resources in one request; what REST likes is creating one resource at a time.
No problem: we create a new resource called Bulk, with its own url and its own POST verb. The attributes of this Model are the array of Models you want to create.
With this approach you can also handle future needs like modifying or removing multiple Models at once.
Now you just need to figure out how to associate the array of Models with this new Model and how to make the Bulk.toJSON method respond properly.
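A rough sketch of that idea; the /bulk URL and the models attribute name are assumptions:

    var Bulk = Backbone.Model.extend({
        url: '/bulk',

        // Serialize the wrapped models instead of the Bulk wrapper itself,
        // so save() sends the plain array as the request body.
        toJSON: function () {
            return this.get('models').map(function (model) {
                return model.toJSON();
            });
        }
    });

    // Usage: wrap the parsed rows and save them with a single POST to /bulk.
    var bulk = new Bulk({ models: parsedModels });
    bulk.save();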
Related
Good evening.
I'm pretty new to MongoDB and I'm planning to make an app that will work with NoSQL (MongoDB).
The scope of the app is pretty simple:
Register a profile
Request an item from a shopper
Fulfill it and send a payment notice
If I were doing this with SQL, I would create a User table, a Request Item table, and a Payment table.
In order to learn something, I would like to make it with NoSQL, and I chose Mongo.
I could create 3 collections, put each kind of document in its own collection, and run a search every time I need to.
OR, and this is the question, COULD I create a collection for EVERY user, and inside each user's collection put every interaction of that very same user?
So if I need to search User10's orders and payments, I would look only inside the User10 collection and search every item he/she requested.
But on the other hand, how much would it hurt me if I need to search all orders in a specific timeframe? It should be slower than SQL, I suppose.
Is this an acceptable way to do it, are there drawbacks I haven't seen yet, or is it discouraged in favor of another approach?
The backend will be written in Java, while the app (for... reasons) will be written in Xamarin.Forms.
While this is possible, I would personally recommend against it, as it's considered an anti-pattern; you should read this article about this very topic.
I would personally ask myself: what advantages of this approach am I hoping to gain? If quick queries at the user level are what you seek, that should not be a problem with sufficient indexes (on user_id and on the timeframe field).
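For example, in the mongo shell, with a single orders collection (the collection and field names here are placeholders):

    // per-user queries, newest first
    db.orders.createIndex({ user_id: 1, created_at: -1 })

    // timeframe queries across all users
    db.orders.createIndex({ created_at: -1 })

    db.orders.find({ user_id: 10 }).sort({ created_at: -1 })
    db.orders.find({ created_at: { $gte: ISODate("2020-01-01"), $lt: ISODate("2020-02-01") } })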
There are other standard solutions built to deal with scale, like collection sharding. From my personal experience, MongoDB deals with scale very well. It sounds like this is a personal project to learn from, which probably means you'll never really reach hyper-scale; the first barrier you'll probably encounter is hardware.
I'm reaching out to gain perspective on possible solutions to this problem. I'll be using Angular and Rails, but really this problem is a bit more abstract and doesn't need to be answered in the context of these frameworks.
What are the best practices for managing complex nested SQL associations on the front-end?
Let's say you have posts and comments and comments are nested under posts. You send your posts to the front-end as JSON with comments nested under them. Now you can display them listed under each post, great. But then questions arise:
What if you want to display recent comments as well? Your comment service would need to have comments in a normalized collection or gain access to them in a fashion that allows them to be sorted by date.
Does this mean you make a separate API call for comments sorted by date? This would duplicate comments on the front-end and require you to update them in two places instead of one (one copy under the posts and one in the comments list, assuming comments can be edited or updated).
Do you implement some kind of front-end data normalization? This meaning you have a caching layer that holds the nested data and then you distribute the individual resources to their corresponding service?
What if you have data that has varying levels of nesting? Continuing with the posts and comments example. What if your comments can be replied to up until a level of 10?
How does this affect your data model if you've made separate API calls for posts and comments?
How does this affect your caching layer if you choose that approach?
What if we're not just talking about posts? What if you can comment on photos and other resources?
How does this affect the two options for data-modeling patterns above?
Breaking from the example, what if we were talking about recursive relationships between friended users?
My initial thoughts and hypothetical solution
My initial thought is to attack this with a caching layer and normalize the data such that (a sketch follows the list below):
The caching layer handles any normalization necessary
The caching layer holds ONE canonical representation of each record
The services communicate with the caching layer to perform CRUD actions
The services generally don't care nor do they need to know how nested/complex the data model is, by the time the data reaches the services it is normalized
Recursive relationships would need to be finite at some point, you can't just continue nesting forever.
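Here is a rough sketch of that normalized cache in plain JavaScript (all names are invented for illustration):

    var cache = {
        posts: {},    // id -> post record; a post stores comment ids, not copies
        comments: {}  // id -> comment record; ONE canonical copy of each
    };

    function storePost(rawPost) {
        cache.posts[rawPost.id] = Object.assign({}, rawPost, {
            comments: rawPost.comments.map(function (c) {
                cache.comments[c.id] = c; // canonical copy lives here
                return c.id;              // the post keeps only a reference
            })
        });
    }

    // "Recent comments" and "comments under a post" now read the same
    // records, so an update in one place is visible everywhere.
    function recentComments(n) {
        return Object.keys(cache.comments)
            .map(function (id) { return cache.comments[id]; })
            .sort(function (a, b) {
                return new Date(b.created_at) - new Date(a.created_at);
            })
            .slice(0, n);
    }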
This all of course sounds great, but I see lots of potential pitfalls and wish to gain perspective. I'm finding it difficult to separate the abstract best practices from the concrete solutions to specific data models. I'm very interested to know how others have solved this problem and how they would go about solving it.
Thanks!
I assume you will use RESTful APIs. Mind you, I don't know Rails, but I will suggest some general practices that you might consider.
Let's say you have one page that shows 10 posts and their 10 latest comments sorted by date; make that response possible in one API call.
There is another page that shows only 5 posts and no comments; use the same API endpoint.
Make this possible with some query parameters.
Try to optimize your response as much as you can.
You can have multiple response types in one endpoint, in any programming language; when we're talking about APIs, that's how I do the job.
If a query takes a long time and runs several times, then of course you need to cache, but serving 10 posts per API call doesn't call for caching. It should not be hard on the database.
For the nesting problem you can have a mechanism to make it possible, i.e.:
To fetch 10 posts and all of their comments, I can send a query parameter saying that I want to include all comments of each post,
like bar.com/api/v1/posts?include=comments
If I need only some customized data for the comments, I should be able to implement a custom include,
like bar.com/api/v1/posts?include=recent_comments
Your API layer should first match against your custom includes; if none is found, fall back to the relations of the resource.
For deeper references, like comments.publisher or recent_comments.publisher, your API layer needs to know which resource it is currently working on. You won't need this for a normal include, but custom includes should declare which model/resource they point to; that way it is possible to create an endless chain.
I don't know Rails, but you can easily make this pattern possible if you have a powerful ORM/ODM.
Sometimes you need to do some filtering; the same approach goes for this job too.
You can have a filter query parameter and implement some custom filters,
i.e.
bar.com/api/v1/posts?include=recent_comments&filters=favorites
Or forget about all of the above and do something like this:
bar.com/api/v1/posts?transformation=PageA
this will return the 10 latest posts with their 10 latest comments
bar.com/api/v1/posts?transformation=PageB
this will return only the 10 latest posts
bar.com/api/v1/posts?transformation=PageC
this will return the 10 latest posts with all of their comments
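To make the include idea concrete, here is a minimal sketch in Express; Post, Comment, the customIncludes map, and the helper methods (findLatest, findRecentForPost, loadRelation) are all invented for illustration, not part of any real library:

    var express = require('express');
    var app = express();

    // Custom includes: each one knows how to attach extra data to a post.
    // Anything not listed here falls through to a plain ORM relation.
    var customIncludes = {
        recent_comments: function (post) {
            return Comment.findRecentForPost(post.id, 10) // hypothetical helper
                .then(function (comments) { post.recent_comments = comments; });
        }
    };

    app.get('/api/v1/posts', function (req, res) {
        Post.findLatest(10).then(function (posts) { // hypothetical helper
            var includes = (req.query.include || '').split(',').filter(Boolean);
            var work = [];
            posts.forEach(function (post) {
                includes.forEach(function (name) {
                    if (customIncludes[name]) {
                        work.push(customIncludes[name](post)); // custom include wins
                    } else {
                        work.push(post.loadRelation(name).then(function (rel) {
                            post[name] = rel; // fall back to the resource's relation
                        }));
                    }
                });
            });
            Promise.all(work).then(function () { res.json(posts); });
        });
    });

    app.listen(3000);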
I'm quite new to Backbone and thinking about collections vs. models. Let's say I have to make a JSON call to an endpoint which only returns 2-3 properties in one single object. Is it then really necessary to use a collection for that one single model?
Or can I just make the call directly from a model and then use it in my view? I mean, does the model have the same functionality as a collection, i.e. can you load, fetch, parse, etc.?
Yes, you can populate a model from the server with a model.fetch call. To do this you'll have to set the url for your model.
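A minimal sketch; the /api/settings endpoint is just a placeholder:

    var Settings = Backbone.Model.extend({
        url: '/api/settings'
    });

    var settings = new Settings();
    settings.fetch({
        success: function (model) {
            console.log(model.toJSON()); // the 2-3 properties from the server
        }
    });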
In Backbone, Models represent entities for your application domain while Collections are a way to group models by type.
Collections are basically helpers when dealing with multiple instances of a given Model. They've got functions to sort, filter or iterate (and some more from Underscore.js) and also have several functions to deal with Model creation such as fetch, create, etc.
Since they help you deal with multiple models, Collections have a url attribute which their models use to build URLs when communicating with the server individually.
So, if you just have info for one entity you would use a Model (e.g. http://host.com/entity/3). If you have a URL for several entities, you may use a Collection (e.g. http://host.com/entities). However, keep in mind that you may use Collections just for grouping and easier handling, even when you don't have any URL for them.
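A sketch of both setups (paths are placeholders):

    var Entity = Backbone.Model.extend({
        urlRoot: '/entities' // a single entity lives at /entities/:id
    });

    var Entities = Backbone.Collection.extend({
        model: Entity,
        url: '/entities' // models in the collection derive their URLs from this
    });

    new Entity({ id: 3 }).fetch(); // GET /entities/3
    new Entities().fetch();        // GET /entities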
I'm a beginner AngularJS user. I've been trying to pull hard-coded JSON for now (the backend and server data aren't ready). It seems that in order to pull data, for instance when using the very common ng-repeat, I need to know the database structure (as the rendered JSON will mirror that structure, right?).
So while I can code independently of the backend, am I correct in my assumption that I must know the database structure? For instance, I might want to pull user comment data. This could live in its own table, and I might do ng-repeat='comment in comments' and filter for the specific user within each comment entry. Whereas if comments only exist within a user table, it would be ng-repeat='comment in user[0].comments'. I would imagine the former is the correct approach, but I honestly have never learned about proper database structure. It seems to be something you must know in order to properly implement AngularJS, though.
Any help is appreciated. I really want to make sure I approach things properly. Thanks!
I don't think you need to (or should) know the database structure. AngularJS is an MVC framework, and a basic principle in this architecture is the separation of concerns. Simply put: do not mix stuff. More specifically, you're talking about the communication between two systems: a local one (the browser running AngularJS) and a remote one (a server that might, or might not, be the same one that served the Angular files to the client).
For example, your view should not be accessing your database (if you were working with, say, PHP, you should not have things like mysql_query(...) in a view).
You should also design components to be loosely coupled: make them as independent as possible. Unit tests help you think that way, and AngularJS is particularly unit-test-friendly with Karma. Following this principle, what if you used the Twitter API to show tweets in your AngularJS application? You don't need to know about the internals of Twitter; there is an API that serves JSON in a format that you can use.
Your backend should provide this (for example, with a façade controller), and you should agree with the backend team what data will be available.
Instead of making your design depend on the database structure, make the backend API depend on your requirements. This way you'll have two loosely coupled systems, and the backend team can do whatever they want without affecting you, for example changing the DBMS or the structure of the tables.
If you want to pull comments, you might have a remote call ($http or ngResource) in a service or a controller that gets all the comments for a specific user (or for a few users, because you might want to minimize the number of remote calls). The server responds with JSON that represents this (and probably some more things that will be needed soon, like profile picture URLs, user ids, etc.). Then you put the data you want to expose to a view (a subset of what you fetched from the server) in $scope.
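A minimal sketch of that shape; the /api/users/:id/comments URL and the response fields are placeholders for whatever you agree on with the backend team:

    var app = angular.module('app', []);

    app.factory('CommentService', function ($http) {
        return {
            getForUser: function (userId) {
                return $http.get('/api/users/' + userId + '/comments')
                    .then(function (response) {
                        return response.data; // e.g. [{ id: 1, text: '...', authorPictureUrl: '...' }]
                    });
            }
        };
    });

    app.controller('CommentsCtrl', function ($scope, CommentService) {
        // Expose to the view only the subset it actually needs.
        CommentService.getForUser(42).then(function (comments) {
            $scope.comments = comments;
        });
    });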
I know that by default any models you define in models.py become database tables.
I am trying to define models that won't be tables. They need to store dynamic data (that we get and configure from APIs) every time a user searches for something. This data needs to be assembled, and then discarded when the user is finished.
Previously I was using database tables for this. It allowed me to do things like Trips.objects.all() in any view and pass that to any template, since it all came from one data source. I've heard you can simply not save the model instantiation, and then it doesn't hit the database, but I need to access this data (assembled in one view) in multiple other views, to manipulate and display it. If I don't save I can't access it; if I do save, it's in a database (which would have concurrency issues with multiple users).
I don't really want to pass around a dictionary/list, and I'm not even sure how I would do that if I had to.
Ideas?
Thanks!
Another option may be to use:
    class Meta:
        managed = False
to prevent Django from creating a database table.
https://docs.djangoproject.com/en/2.2/ref/models/options/#managed
Just sounds like a regular Class to me.
You can put it into models.py if you like, just don't subclass django.db.models.Model. Or you can put it in any Python file imported into the scope of wherever you want to use it.
Perhaps use middleware to instantiate it when a request comes in and discard it when the request is finished. One access strategy might be to attach it to the request object itself, but YMMV.
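A rough sketch of that middleware idea; Trip and the request.trips attribute are hypothetical names, not anything Django provides:

    # models.py (or anywhere): a plain class, no Model subclass
    class Trip:
        def __init__(self, origin, destination):
            self.origin = origin
            self.destination = destination

    # middleware.py -- add 'middleware.TripMiddleware' to MIDDLEWARE in settings
    class TripMiddleware:
        def __init__(self, get_response):
            self.get_response = get_response

        def __call__(self, request):
            # Assembled per request; garbage-collected when the request ends.
            request.trips = []
            return self.get_response(request)

Any view handling the request can then read from and append to request.trips.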
Unlike SQLAlchemy, Django's ORM does not support querying a model without a database backend.
Your choices are limited to using an SQLite in-memory database, or using third-party applications like dqms, which provide a pure in-memory backend for Django's ORM.
Use Django's cache framework to store data and share it between views.
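A minimal sketch with Django's cache framework; keying on the session key (my assumption, to keep users separate) and assemble_trips_from_apis are invented for illustration:

    from django.core.cache import cache
    from django.shortcuts import render

    def search(request):
        trips = assemble_trips_from_apis(request.GET)  # hypothetical helper
        # Assumes a session already exists, so session_key is set.
        cache.set('trips:%s' % request.session.session_key, trips, timeout=600)
        return render(request, 'search_results.html', {'trips': trips})

    def details(request):
        # Any other view can pick the same data up again.
        trips = cache.get('trips:%s' % request.session.session_key, [])
        return render(request, 'details.html', {'trips': trips})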
Try to use database- or file-based sessions.
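Sessions give you similar per-user scoping with even less setup; note that with the default serializer the stored data must be JSON-serializable:

    from django.http import JsonResponse

    def search(request):
        # Plain dicts instead of model instances, so they serialize cleanly.
        request.session['trips'] = [{'origin': 'NYC', 'destination': 'BOS'}]
        return JsonResponse({'saved': True})

    def details(request):
        trips = request.session.get('trips', [])
        return JsonResponse({'trips': trips})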
You need caching, which will store your data in memory and run as a separate application.
With Django, you can use various caching backends, such as Memcached, a database backend, Redis, etc.
Since you want some basic query and sorting capability, I would recommend Redis. Redis has high performance (though not higher than Memcached) and supports data structures (strings/hashes/lists/sets/sorted sets).
Redis will not replace the database, but it fits well as a key-value data model, where you have to design the key so you can query the data efficiently, since Redis supports querying on keys only.
For example, say user 'john.doe' has a piece of data key1 = val1.
The key would be: john.doe:data:key1
Now I can query all the data for this user with redis.keys("john.doe:data:*").
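A small sketch with the redis-py client, mirroring that key layout (note that KEYS walks the whole keyspace, so prefer SCAN for large datasets):

    import redis

    r = redis.StrictRedis(host='localhost', port=6379, db=0)

    r.set('john.doe:data:key1', 'val1')
    r.set('john.doe:data:key2', 'val2')

    # Fetch everything stored for this user by key pattern.
    for key in r.keys('john.doe:data:*'):
        print(key, r.get(key))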
Redis Commands are available at http://redis.io/commands
Django Redis Cache Backend : https://github.com/sebleier/django-redis-cache/
My bet would be MongoDB or any other NoSQL store; persisting and deleting data is incredibly fast, and you can use django-nonrel (MongoDB) for that.
http://django-mongodb.org/