In Google App Engine, you can do routing in two places: in your app.yaml, you can send requests for different URLs off to different scripts, and inside a script, when you work with a WSGI app, you can again do routing and send different URLs off to different handlers. Is there an advantage to doing your routing in either of these places?
Generally the best approach is to use app.yaml for 'application level' routing - defining paths for static content, utilities like mapreduce, and your main application - and doing the routing for your app from within a single request handler. This avoids the overhead of defining multiple request handlers for each part of your app, while still preserving isolation for distinct components such as external utilities.
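As a rough sketch, an app.yaml following that advice might look like this (the static and mapreduce entries are illustrative, not taken from a real project):

handlers:
- url: /static
  static_dir: static
- url: /mapreduce(/.*)?
  script: mapreduce.main.app
- url: /.*
  script: main.app  # everything else goes to one WSGI app that routes internally

All of the per-page routing then lives inside main.app, so adding or moving a page never requires touching app.yaml.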
You have to use both. Do the high-level routing in app.yaml and the more fine-grained routing in the WSGI app. The important thing is that you get a good structure for what is routed in each place. I cannot see any argument that one is superior to the other.
We are looking for some advice on handling URLs (and the state related to each URL) in a web application backed by a HATEOAS REST API, more specifically on
how to avoid having the web application URLs coupled with the REST API URLs
how to handle multiple resources in a single view
But let me first provide some more context:
We are building an Angular web application on top of a REST layer with Hypermedia constraint. (Note: I prefer simply using the term 'Hypermedia (constraint)' over HATEOAS).
As dictated by the Hypermedia constraint, the available actions and links in the application at any point in time are provided by the REST API. So the web application should not contain any hardcoded URLs of the REST API, except for the 'root' (assuming that concept really exists in a REST API).
On the other hand, each page in the web application needs to be bookmarkable. So we cannot create a black-box application (with a single URL and all state changes handled in the SPA without changing the URL). This means the web application also has its own URL space, which somehow needs to be mapped to the REST API URL space. That is already a conflict with the Hypermedia idea.
In the Angular application we use UI Router for handling application state. Here is how we got it working:
We only define states, no URLs
We defined a $urlRouterProvider.otherwise handler that maps the current web application URL to the corresponding REST API URL, retrieves the representation of the resource behind that REST URL, and passes it to the controller (in $stateParams), as sketched below.
The controller can then use the data (and links and actions) in the representation, just as it would if it had made the REST call itself (or through a service)
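For illustration, a stripped-down version of our otherwise handler looks roughly like this (the state name, template and the path-based mapping to the REST API are simplified placeholders for our actual implementation):

angular.module('app').config(function ($stateProvider, $urlRouterProvider) {
  // States carry no url property; the representation travels in a non-URL param.
  $stateProvider.state('resource', {
    params: { representation: null },
    templateUrl: 'resource.html',
    controller: 'ResourceCtrl'
  });

  $urlRouterProvider.otherwise(function ($injector, $location) {
    var $http = $injector.get('$http');
    var $state = $injector.get('$state');
    // Simplified mapping from the web app URL space to the REST API URL space.
    var apiUrl = '/api' + $location.path();
    $http.get(apiUrl).then(function (response) {
      // The controller picks the representation up from $stateParams.
      $state.go('resource', { representation: response.data });
    });
  });
});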
So far so good (or not really), because there are some downsides to this approach:
The web application URLs are mapped to the REST API URLs, so both URL spaces are coupled. That conflicts with one of the basic motivations for using the Hypermedia constraint: we cannot change the REST API URLs without also having to change the web application.
In the $urlRouterProvider.otherwise handler we retrieve the representation behind the current web app URL. But in some cases we have two resources in a single view (using UI Router nested states): for example, a list of items plus the detail of a single item. There is only a single URL, though, so only the representation of the item detail is retrieved and the list of items remains empty.
So we would love to hear some suggestions on how we could improve our approach to handling the two URL spaces. Is there a better way to make the REST API dictate the (available) behaviour of the web application and still have bookmarkable URLs in the web application? Because now we have some kind of hybrid approach that does not feel completely right.
Thanks in advance.
Regards,
Luc
That's a tough setup. Roughly, you want bookmarks into your API, and RESTful systems somewhat discourage bookmarks.
One possible solution is a "bookmark service" that returns bookmark (bit.ly-like) URLs for the resource currently being presented, which are guaranteed to be forwards-compatible: as you change the canonical URL structure, the bookmark service can always translate the bit.ly-like URL into the canonical URL. Sounds complicated, but we see this all the time and we call them SEO URLs, e.g. /product-name/ maps to /products/ today, but may be /catalog/old-products/ tomorrow.
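A sketch of that bookmark-service idea in Express (the token table, routes and URLs are invented for illustration; in practice the mapping would live in a datastore, not in memory):

var express = require('express');
var app = express();

// Stable bookmark tokens mapped to whatever the canonical URL is today.
var bookmarks = {
  abc123: '/catalog/old-products/42' // yesterday this was /products/42
};

app.get('/b/:token', function (req, res) {
  var canonical = bookmarks[req.params.token];
  if (!canonical) return res.sendStatus(404);
  // The short URL never changes; only this lookup does.
  res.redirect(301, canonical);
});

app.listen(3000);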
How you match that up with a UI that shows two resources, the first being a list of summary-like resources and the second a specific resource, gets really tricky. I would expect such a page to encode the state of what it's displaying in its URL (probably in the fragment). Since it's [likely] the controller that processes such commands, it probably needs both (the list resource and the expanded resource) as input. I bet the URL would look something like:
list=http://path/to/list/results&expand=http://self/link/of/path
So the last part is to make sure that keeps working going forward. Again, this is the bookmark problem. If you don't want to build a bookmark service, what I may suggest is that, given you want such bookmarks, you need to transition people to the new URLs. When a request is made to http://path/to/list/results and you want to switch that over, you should 301-redirect them to the new canonical URL, and the app should update the bookmark. Such a redirect can include the &flag=deprecate_message param to trigger a message in the UI that the client's bookmark is old and should be replaced. Alternatively, the response can be internally forwarded, with the deprecation flag and the canonical (or latest) link included in the response to the old URL. This allows a phased transition.
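Sketched in Express (the paths and flag name follow the example above; the response shape is made up):

var express = require('express');
var app = express();

// Old URL: permanently redirect, carrying the deprecation flag along.
app.get('/path/to/list/results', function (req, res) {
  res.redirect(301, '/new/path/to/list/results?flag=deprecate_message');
});

// New canonical URL: the flag tells the UI to prompt the user
// to replace the stale bookmark.
app.get('/new/path/to/list/results', function (req, res) {
  var staleBookmark = req.query.flag === 'deprecate_message';
  res.json({ items: [], staleBookmark: staleBookmark });
});

app.listen(3000);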
In summary: I have yet to see HATEOAS be a cure-all for backwards and forwards compatibility, but it's much better than the existing techniques. That said, you must still make decisions in v1 of your API about how you want your users to move to v2.
This is not a question I've really seen answered on SO. Most of the tutorials on building RESTful APIs using Node/Express focus on a single technology stack. What my team is having trouble with is building a single Node API that serves multiple stacks, specifically both Angular and Scala, but with perhaps more to come later.
In most of the examples I've seen with Angular (for instance), the routing code that Angular uses to set up its MVC goes right into the "app.js", which it seems would not scale at all to other platforms. My suspicion is that the "trick" is to break the routing out into separate files in order to set up multiple routing schemas using route separation, and then set up routes like /angular/foo or /scala/bar, etc., on top of which these platforms can be built.
I'm not looking for a "best" way, I'm looking for specific high-level examples of how this problem can be solved, with direct correlation to the node/express stack at the base of the architecture.
From your comments, it appears that there was a bit of confusion: you were trying to fudge Scala and Angular together in one app, and you wanted each one to have its own routing endpoints.
Generally speaking, Angular applications are typically SPAs. They don't change page (at least not in the traditional page-per-request sense), and all routing is handled on the client side. They just consume RESTful APIs in the background. The RESTful API is typically a JSON, language-independent endpoint. Angular apps are then served from a static file server; this can be Node but could also be Nginx (or similar).
Now, some Angular applications will have all of the routes for their endpoints rewritten from the server side. In other words, you may see an Angular application redirect all requests to its app.js so the client side can handle the routing. This is useful in case someone refreshes the page in an Angular application, for example. This is only necessary, however, if you're using the HTML5 history API; hashbangs don't need this rewriting.
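That rewrite is usually just a catch-all on the static file server. A minimal sketch in Express (the file layout is assumed):

var express = require('express');
var path = require('path');
var app = express();

// Serve the Angular app's static assets.
app.use(express.static(path.join(__dirname, 'public')));

// Any other path falls through to index.html so the client-side
// router can take over (only needed with the HTML5 history API).
app.get('*', function (req, res) {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(8080);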
Scala and Angular don't need to have different endpoints for data - only for their file serving. The REST endpoints could be exactly the same as long as they output a format both languages understand (typically JSON).
I don't know why you would want two different APIs for different stacks. Why aren't you able to use one API for both Angular and Scala?
But if you want to split your API across multiple files, I suggest using express.Router: http://expressjs.com/4x/api.html#router
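A minimal sketch of that split (file and mount-point names are arbitrary):

// routes/items.js
var express = require('express');
var router = express.Router();

router.get('/', function (req, res) {
  res.json([{ id: 1, name: 'example' }]); // placeholder payload
});

module.exports = router;

// app.js
var express = require('express');
var app = express();

// Both Angular and Scala clients hit the same mounted router.
app.use('/api/items', require('./routes/items'));

app.listen(3000);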
I'm new to the MEAN stack and I have some doubts about Angular routes. Why should I recreate on the client side the routes already defined in the backend with Express; what are the benefits? Is this the only way Angular works? I saw some examples with Jade where it wasn't necessary to recreate the routes on the client side, which made things simpler.
Thanks!
Disclaimer: I haven't specifically used Angular myself, but I have used Backbone.js for the same purpose, and the same arguments apply.
There are many use cases where it makes sense to define routes on the client side rather than the server side. For instance, I do a lot of work with PhoneGap using Backbone, where the architecture is generally a REST API on the back end, and the data gets used to render the pages on the client side. This approach has the advantage of reducing the amount of data sent over the network, generally making the app quicker. Client-side routing also preserves browser history, compared to just updating the existing content via AJAX.
Ultimately, it's something you have to consider on a case-by-case basis. For something that's very dynamic, building it as a single page web app with client-side routing may make sense. For a more traditional web app, such as a blog or ecommerce site, you're probably better off defining routes on the server side.
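To make the contrast concrete, here is a minimal client-side router in Backbone (route names, the endpoint and the views are invented):

var AppRouter = Backbone.Router.extend({
  routes: {
    '': 'home',              // e.g. http://example.com/#
    'posts/:id': 'showPost'  // e.g. http://example.com/#posts/42
  },
  home: function () {
    // render the home view on the client
  },
  showPost: function (id) {
    // Only data crosses the network; the page is rendered client-side.
    var post = new Backbone.Model({ id: id });
    post.urlRoot = '/api/posts'; // hypothetical REST endpoint
    post.fetch({
      success: function (model) {
        // render a detail view from model.toJSON()
      }
    });
  }
});

new AppRouter();
Backbone.history.start(); // hash-based routing; {pushState: true} where supported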
I want to use Symfony2 as the back end to create a REST API and use AngularJS as the front end. The two are completely separate, i.e. Symfony2 will not render anything; it'll just send JSON data to AngularJS.
I'm not sure on how to configure my web server (nginx).
The Symfony documentation gives the configuration, but it's intended for a site that only uses Symfony, so everything outside of the /web/ folder is not accessible.
I can see several possibilities:
Create two different directories (e.g. /path/frontend and /path/backend) and a corresponding website for each. I would then have two different addresses to access the front end and the back end (e.g. http://myfrontend.com and http://mybackend.com). The problem I see is that I probably won't be able to make AJAX calls directly from AngularJS (cross-domain requests are blocked by the same-origin policy unless CORS is set up).
Create two different directories (e.g. /website/frontend and /website/backend) and only one website. I would then probably access the front end and back end with something like http://example.com/frontend and http://example.com/backend. I'm not sure how to configure the web server, though (there is an issue with root /website/backend/web).
Put the AngularJS directory inside the web folder of Symfony, but then I'd also need to change the configuration so that nginx doesn't only serve app.php, app_dev.php and config.php.
Put the AngularJS directory in the src folder of Symfony and have Symfony handle the routing. I don't know if this would interfere with AngularJS's own routing. Also, I will probably have a few other PHP scripts that should be accessible, so I'd need to route them through Symfony as well.
What would you suggest, and why? Maybe I'm missing something obvious?
I guess you could accomplish your task using any of those methods. It comes down to how you want to structure your application and what its objectives are. For large-scale projects, the first method (keeping the API separate from the AngularJS front end) would serve you well. Twitter really made that software model big.
So I would suggest going with method one. All you would have to do is specify an Nginx header that allows cross-domain access. The Access-Control-Allow-Origin header is sent by the server being accessed, so it goes in the server block of your Symfony back end (backendsymfony.com), naming the front-end origin that is allowed to call it:
add_header Access-Control-Allow-Origin http://frontendangular.com;
This way, every time your front-end app makes a request to the API, Nginx tells the browser that it is safe for a page served from your front-end domain to read the response.
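On the Angular side, calls can then target the back-end domain directly. A minimal sketch, reusing the domain names from the example above and assuming a hypothetical /api/items endpoint:

angular.module('myApp', []).factory('ApiClient', function ($http) {
  var BASE = 'http://backendsymfony.com/api'; // hypothetical API root

  return {
    // A plain GET is a 'simple' CORS request and only needs the
    // Access-Control-Allow-Origin header shown above.
    getItems: function () {
      return $http.get(BASE + '/items').then(function (response) {
        return response.data;
      });
    }
  };
});

Note that non-simple requests (e.g. a POST with a JSON body, or custom headers) also trigger a preflight OPTIONS request, so the back end then needs to answer OPTIONS with Access-Control-Allow-Methods and Access-Control-Allow-Headers as well.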
These are two frameworks that both have powerful routing capabilities, and it looks like you are going for the best of both worlds. There are many pros and cons to any setup, so I'll list a few that come to mind:
Angular routing/templating is great, but it will leave you with SEO and meta issues to solve. It's probably better to manage your major pages with Symfony2 and any routing within each page with Angular. That would allow you to still make dynamic pages without compromising your meta and SEO control. CORS headers seem flexible but are probably not necessary: I would just put all calls to the REST API under http://www.thesite.com/api, or if I needed a separate setup, something like https://api.thesite.com; nginx can route or proxy_pass this without leaving the domain.
Locating partials gets a little wonky, but that's probably fine for a large application. Just note that you will probably need to build paths from the JS location object, like [host]/[path]/web/bundles/someBundle/public/blah... Or you can set up a '/partials' path in nginx.
Twig and Angular templates may end up a confusing mix, as they both use {{foo}}. This alone would make me reconsider mixing the two, and I might look to go with a front-end server instead, like Node with EJS, where I could also benefit from streaming transfer of the data sent from the API.
You can get around it easily enough, but it's still concerning:
angular.module('myApp', []).config(function ($interpolateProvider) {
  // Use [[ ]] for Angular interpolation so it no longer clashes with Twig's {{ }}
  $interpolateProvider.startSymbol('[[').endSymbol(']]');
});
You do get the benefit of serving Angular partials as Symfony Twig templates, which can be good or bad depending on how you see that flexibility being used. I have seen people build examples of forms pre-filled with Symfony data, but they are just undermining the power of Angular's binding.
Don't get me wrong, I actually do really like the idea of the two harmonizing.
Just food for thought - cheers
I'm currently building a single page app using backbone.js
In order to keep all application pages accessible and crawlable, I made sure that the server side can also render the pages when they are accessed directly.
The problem is as follows:
When pushState is not available, Backbone initiates the router using the current URL (e.g. if I access http://example.com/example, the router will build the hash fragment on top of that URL)
So:
Is there any way of handling this (besides redirecting the user)?
If you redirect as soon as the JS loads (using pushState feature detection), you still have the problem of URLs not containing hash signs.
More generally, is there a better approach to designing this kind of application?
Thanks!
I think the evolving consensus is pushState or nothing (i.e. degrade to web 1.0 and drop hash-bang routing altogether) if SEO-friendly browsing matters to you.
One of the reasons I don't use Backbone.js and just use PJAX is that pushState and DOM rendering times are so good that you can be single-page with very little JS, and hash-bang routing has always been rather hackish.
Thus an option is to not use Backbone's router at all, let something like PJAX (or DJAX or similar) do the routing work, and let Backbone just do the inner-page event/rendering work (i.e. validating forms, modal windows, etc.).
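As a rough sketch of that division of labour, using the jquery-pjax plugin (the selectors, container and form view are made up):

// Backbone handles only in-page behaviour, e.g. a hypothetical form view.
var FormView = Backbone.View.extend({
  events: { 'submit form': 'validate' },
  validate: function (e) {
    // client-side validation here
  }
});

// PJAX does the routing: it fetches pages over XHR, swaps the
// container's HTML and updates the URL via pushState.
$(document).pjax('a[data-pjax]', '#main');

// Re-attach Backbone views whenever PJAX swaps in new content.
$(document).on('pjax:end', function () {
  new FormView({ el: '#main' });
});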