I'm pretty new to AngularJS and server configuration, and this is a problem I haven't found a satisfactory solution to so far.
I would like to use HTML5 URLs on a website (without hashbangs), so that I can use addresses like "mydomain/contact" to navigate (I'll stick with the "contact" example for simplicity).
To do that, from what I've found so far, one should do two things:
Enable HTML5 on the client side
Enable HTML5 mode in the app/app.js file (also adding the dependency)
$locationProvider.html5Mode(true);
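For context, the whole client-side step can be sketched like this. It's a minimal sketch: the module name myApp and the use of ngRoute are assumptions, and html5Mode also needs a `<base href="/">` tag in index.html unless requireBase is disabled.

```javascript
// Minimal sketch of the client-side setup (module name "myApp" is an assumption).
// The config function is written standalone so it can be read in isolation;
// in a real app it would be passed to angular.module('myApp', ['ngRoute']).config(...).
function enableHtml5Mode($locationProvider) {
  // Switch from hashbang URLs (mydomain/#/contact) to HTML5 pushState URLs
  // (mydomain/contact). By default this requires <base href="/"> in index.html.
  $locationProvider.html5Mode(true);
}
```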
This makes it possible to click on links and get the proper URL. Still, it doesn't let someone access an HTML5 URL directly. To get to the "contact" page, I still can't directly access "mydomain/contact" (we get a 404), and I know that makes sense. To solve this, something still has to be implemented server-side.
Server-side config
Configure the server to respond with the right file, i.e., it should respond the same way to "mydomain/#/contact" and "mydomain/contact".
The last item is where I'm stuck. I've found many answers like "you should configure your server" (they assume the reader already knows how to do this; I don't), but I can't find a complete example of how to do that or where to put any needed files.
I'm using AngularJS 1.6.x and npm 3.10.9 (npm start). My question is: is there any complete example of how to fully use HTML5 URLs with AngularJS?
The only problem is that Angular can't handle requests it never receives. You need some catch-all so that all routes (/contact etc.) are passed to your main index file.
When you say .htaccess I assume Apache. Even so, I'd still put nginx in front of Apache, since it's lightweight and easy to configure (at least compared to the Apache behemoth). Sorry, I know that's a very opinionated answer. Anyway, with nginx the entire config could look like this:
# usually this file goes in /etc/nginx/conf.d/anyfilename.conf
# but it might vary with os/distro.
server {
    listen 80;
    root /var/www/myapp;

    location / {
        # Note the leading slash on /index.html: without it, nginx looks for
        # "index.html" relative to the requested path instead of the root.
        try_files $uri $uri/ /index.html;
    }

    # And if you want to pass some route to Apache:
    location /apache {
        proxy_pass http://127.0.0.1:81; # Apache listening on port 81.
    }
}
I'm sure the same can be achieved with Apache alone, but I couldn't tell you how. Perhaps this can be of help: htaccess redirect for Angular routes
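For reference, a commonly seen .htaccess sketch for the Apache case (untested here; it assumes mod_rewrite is enabled and the app lives at the document root):

```apacheconf
# .htaccess in the app's document root (assumes mod_rewrite is enabled)
RewriteEngine On
# Serve files and directories that actually exist as-is...
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
# ...and send everything else to the Angular entry point.
RewriteRule ^ index.html [L]
```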
There are so many silly toolpacks and utilities I've wasted time learning in my life, but nginx is the one tool I'll never regret picking up.
Good day, everyone!
There is an "official" Next.js example demonstrating a possible implementation of SSR-Caching.
upd: I should've been more precise. I scrolled through the commit history of the example, and it turned out the technique had been drastically different up until a recent commit by #leerob. Take a look at the current version and compare it with what it used to be before that commit. Apparently, the older version was meant to work with a custom node.js server, whereas the current one takes advantage of the built-in Next.js methods. In this question I refer specifically to the version by #leerob.
Here's the "raison d'être", as stated in one of its commits:
React Server Side rendering is very costly and takes a lot of server's CPU power for that. One of the best solutions for this problem is cache already rendered pages.
How exactly is it supposed to be used? Does it work in conjunction with CDN, does it also work with proxy-caching, like Nginx?
My question is this:
Is there a right way to configure the SSR-Caching technique to work with Nginx's caching?
I'd come up with a somewhat workable solution, but later realized that it doesn't behave as I expected.
Here's the deal:
When there is a CACHE HIT, that is to say, when Nginx sends a previously rendered page from its cache to the client, the app gets reloaded on the client side. There are a couple of problems with that:
it breaks 'the flow' of the SPA (rendering loses its continuity because of the page reload);
it makes the client load unnecessary data with each request;
the reload causes the app to lose its state.
N.B.: you never get this behavior when using SSR (getServerSideProps) only (no proxy-caching).
Here is a schematic example of my current workflow:
The client asks for a resource.
Is it Cached?
YES: Nginx returns a cached page along with the accompanying scripts (for that purpose you have to proxy_pass to '/_next/static')
NO: Nginx proxies the request down to Next.js; the page is rendered, cached at the Nginx level, and finally reaches the client
Example of Nginx config:
location /products {
    ### these lines are crucial
    proxy_pass http://next:3000;
    proxy_cache my_cache;
    ###
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;
    proxy_cache_background_update on;
    proxy_ignore_headers "Set-Cookie";
    add_header X-Cache-Status $upstream_cache_status;
    proxy_hide_header X-Powered-By;
}
### static assets
location /_next/static {
    proxy_pass http://next:3000;
}
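One detail the config above relies on: the my_cache zone referenced by proxy_cache has to be declared once at the http level with proxy_cache_path. The path and sizes below are illustrative:

```nginx
# Goes in the http {} block of nginx.conf; path and sizes are illustrative.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m use_temp_path=off;
```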
The caching itself works, but how do you make it work properly, without reloading the page?
Are there any workarounds? Or is the described behavior inevitable?
Any help would be much appreciated
I've run into a pretty interesting issue.
I'm developing an Angular application and it doesn't work in IE9. I've figured out where the problem is.
The nginx server that serves the application in some cases sends different index.html files (using the try_files directive) based on the requested URL (using location/if conditions); note that the different index.html files contain different Angular applications. This URL-based handling doesn't work for IE9, because IE9 doesn't support the History API: Angular falls back to # hashes, and hash fragments are never sent to the server at all. How can this be fixed? Could you suggest a good way to serve different index.html files for different URL hash fragments in the IE9 case?
I want to use Symfony2 as the back end to create a REST API and use AngularJS as the front end. The two are completely separated, i.e., Symfony2 will not render anything; it'll just send JSON data to AngularJS.
I'm not sure on how to configure my web server (nginx).
The Symfony documentation gives a configuration, but it's intended for a site that only uses Symfony, so everything outside of the /web/ folder is not accessible.
I can see several possibilities:
Create 2 different directories (e.g. /path/frontend and /path/backend) and a corresponding website for each. I would then have 2 different addresses to access the front end and the back end (e.g. http://myfrontend.com and http://mybackend.com). The problem I see is that I probably won't be able to make AJAX calls directly from AngularJS.
Create 2 different directories (e.g. /website/frontend and /website/backend) and only one website. I would then probably need to access the front end and back end with something like http://example.com/frontend and http://example.com/backend. I'm not sure how to configure the web server, though (there's an issue with root /website/backend/web).
Put the AngularJS directory inside Symfony's web folder, but then I'd also need to change the configuration so that nginx doesn't only serve app.php, app_dev.php and config.php.
Put the AngularJS directory in Symfony's src folder and have Symfony handle the routing. I don't know whether that will interfere with AngularJS' own routing. Also, I will probably have a few other PHP files that should be accessible, so I'd need to route them through Symfony as well.
What would you suggest, and why? Maybe I'm missing something obvious?
I guess you could accomplish your task using any of those methods. It comes down to how you want to structure your application and what its objectives are. For large-scale projects the first method (keeping the API separate from the AngularJS app) would serve you well. Twitter really made that software model big.
So I would suggest going with method one. All you have to do is add an Nginx header in your server block that allows cross-domain access. Note that the Access-Control-Allow-Origin header is sent by the server being called and names the origin allowed to call it, so it goes in your backendsymfony.com (API) server block:
add_header Access-Control-Allow-Origin http://frontendangular.com;
This way, every time your front-end app makes a request to the API, Nginx tells the browser that it is safe for pages from your front-end domain to access it.
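A slightly fuller sketch of the CORS setup on the API side (the domain names are the question's examples; methods and headers listed are illustrative and should be trimmed to what the API actually needs):

```nginx
# On the API vhost (e.g. backendsymfony.com).
location / {
    add_header Access-Control-Allow-Origin http://frontendangular.com;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
    add_header Access-Control-Allow-Headers "Content-Type, Authorization";

    # Answer CORS preflight requests without hitting PHP.
    if ($request_method = OPTIONS) {
        return 204;
    }

    # ... fastcgi_pass / proxy_pass to Symfony goes here ...
}
```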
These are two frameworks that both have powerful routing capabilities, and it looks like you are going for the best of both worlds. There are many pros and cons to any setup, so I'll list a few that come to mind:
Angular routing/templating is great, but it will leave you with SEO and meta issues to solve. It's probably better to manage your major pages with Symfony2 and any routing within each page with Angular. That would still let you build dynamic pages without compromising your meta and SEO control. Access control seems flexible but probably isn't necessary; I would just put all calls to the REST API under http://www.thesite.com/api, and if I needed another setup, something like https://api.thesite.com. Nginx can route or proxy_pass this without leaving the domain.
Locating partials gets a little wonky, but that's probably fine for a large application. Just note that you will probably need to search the JS location object for [host]/[path]/web/bundles/someBundle/public/blah.... Or you can set up a '/partials' path in nginx.
Twig and Angular templates may end up a confusing mix, as they both use {{foo}}. This alone would make me reconsider mixing the two, and I might look at a front-end server instead, like Node with EJS, where I could also benefit from streaming the data sent from the API.
You can get around it easily enough with the following, but it's still concerning:
angular.module('myApp', []).config(function($interpolateProvider) {
    $interpolateProvider.startSymbol('[[').endSymbol(']]');
});
You do get the benefit of serving Angular partials as Symfony Twig templates, which can be good or bad depending on how you see that flexibility being used. I have seen people build examples of forms that pre-fill values with Symfony data, but they are just undermining the power of Angular's binding.
Don't get me wrong, I actually do really like the idea of the two harmonizing.
Just food for thought - cheers
First time using Backbone's pushState and I'm not getting it 100% working, so help would be appreciated. My needs are:
Must work with deeply nested URLs, both when navigating to them from the default route and
with direct linking (or a page refresh) to deeply nested URLs
Must be able to call my PHP backend API (using the Slim Framework) properly with Backbone sync.
I was unable to get all 3 of these things working, although with Nginx rewrites I could achieve #1 and #2.
To achieve #1 I did the standard
location / {
    root html;
    index index.html index.htm index.php;
    try_files $uri $uri/ /index.html;
}
falling back to index.html, which is well documented.
However, this does not work for #2. If I go directly to a nested URL like www.example.com/store/item123/subitem345, I get errors because my require.js files can't be loaded; they are being looked for at www.example.com/store/item123/. Naturally this is not right.
I could make #2 work with some rewrite rules that remove the unwanted part of the URL (the /store/item123 part). Is this correct? And if so, is there a universal rewrite to make this work?
I could never get #3 to work fully. Whenever I was in a nested route (e.g. store/someitem123/subitem345), Backbone would append the intermediate parts of the URL to the backend API call, which naturally gave a 404. So instead of the needed /php/api/args/, I would get store/someitem123/php/api/args.
There must be a way to either override Backbone's sync function or use an Nginx rewrite to remove the intermediate parts that aren't needed (the store/someitem123 part in my example).
For reference, I have to have this block in the Nginx configuration to make the backend calls work at all. But currently they only work at routes that don't have deeply nested URLs.
location /php/ {
    try_files $uri $uri/ /php/chs_rest.php?$args;
}
Looks to me like the crux of the issue is route URL rewriting. Based on the Slim documentation for nginx, your conf file is incorrect. See the nginx section here: http://docs.slimframework.com/#Route-URL-Rewriting.
EDIT: Updated to address OP comment below.
In your specific situation, the best recommendation I can make is to split the Backbone.js application and the Slim application into separate apps.
The popular Backbone Wine Cellar tutorial's sample application is an excellent example of this. The Slim portion of the app is a few years old, but it would make an excellent reference for building a current app.
The benefits of splitting the app are numerous, perhaps the largest of which is the fact that each app will be much closer to standard Backbone and Slim applications. The resources at your disposal for learning and problem solving would expand greatly, as blog posts and documentation and SO questions would apply directly to your applications. Future maintenance and continued development will be much easier.
I'm confident that the effort to split the applications would have an extremely high return on investment.
In my Backbone application I have a main view that shows previews of posts. When the user clicks on a post, the post is expanded in an overlay and the URL is changed to reflect that post. While the post is expanded the user may do something that triggers a call to the server that needs to happen at the root context. The problem is that when the post is expanded, the server call that needs to happen at the root context happens from the post context instead. Here's the order of operations:
Page is loaded with main view url: http://localhost:8080/my-web-app/
User clicks post, overlay is shown, url updated to: http://localhost:8080/my-web-app/posts/1
User clicks something that triggers a call to the server. The URL used is http://localhost:8080/my-web-app/posts/1/load, which is wrong.
In the example above, the load operation needs to happen from the root context: http://localhost:8080/my-web-app/load
I've tried changing the url property for my models, collections, etc. to include a leading /, but this removes the "/my-web-app/" context (the url becomes http://localhost:8080/load), which is necessary in my test environment. This would work fine in a production environment, of course.
To get around this, I have set the Backbone.history root option to be "/my-web-app/" and have overridden every url property to be as follows:
url: function() {
    return (Backbone.history.options.root != undefined ? Backbone.history.options.root : "") + "load";
}
While this approach works, it is a pain in the ass to override every url function like this... not to mention, it feels hacky. It is also totally unnecessary code for a production environment. Is there a more elegant way to manage this so that it works in both test and production environments?
Thanks!
Application routing shouldn't differ between the dev environment and production. That will always bring trouble at some point.
Assuming you're using an Apache server on your localhost, you can create a virtual host of your choice and make /my-web-app/ available at /.
First, add a domain name to your /etc/hosts file and point it to 127.0.0.1, like this:
127.0.0.1 mywebapphost
and then add a virtual host to your Apache vhosts.conf
<VirtualHost *:80>
    # absolute directory of your webapp (Apache does not allow trailing comments on directive lines)
    DocumentRoot "/Users/someone/Sites/my-web-app/"
    ServerName mywebapphost
</VirtualHost>
And you're done! Your webapp is available at http://mywebapphost (the vhost listens on port 80) and all routing is identical to your production environment.
I had the same problem, and after stressing about it for a while I realized that you can use * as a prefix on your routes to eliminate the problem with the web context:
*posts/1/load
The only downside is that route resolution needs to do more work, but given that it happens client side, it should be negligible.
While it may be true that your routes shouldn't differ between dev, QA, and production, I think it's fair to say that a lot of web applications are written to be context agnostic, so the client side stuff should follow suit imho.
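Not actual Backbone internals, but the splat trick can be illustrated with the kind of regex such a route compiles down to (the function name and pattern are illustrative; Backbone's own compilation is more general):

```javascript
// Illustrative only: a route like "*context/posts/:id/load" behaves roughly like
// the regex below, so any web-app context prefix is swallowed by the leading splat.
function matchLoadRoute(fragment) {
  var re = /^(.*?)\/?posts\/(\d+)\/load$/; // "*context" -> (.*?), ":id" -> (\d+) here
  var m = fragment.match(re);
  return m ? { context: m[1], id: m[2] } : null;
}
```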