Structuring a Rails and Angular app

I'd like to build a new single-page app using Rails 4 and Angular, with Capistrano for deployment.
I want the whole front end to be a static app on Amazon S3, but I'm open to other suggestions.
What's important to me is a fast development process and the ability to scale up easily.
I was wondering which structure would be best:
1. Keep all assets in app/assets and set the Bower path to the vendor directory. That way I can use Rails precompile methods and enjoy Rails HTML tags for index.html, but I'm not sure it will be easy to upload it to S3 and keep it separated.
2. Keep all assets, including Bower components, in a public/app directory, which keeps it a completely separate application, but then I need to use Grunt or some other tool for precompiling assets.
Any other ideas?

From my experience, I found this approach to work really well:
API app (Rails/Sinatra/Grape/Node/whatever) serves only JSON APIs. It deploys to a server, say api.yourapp.com, and serves Access-Control (CORS) headers.
Static web app: generated with Yeoman as an AngularJS/Gulp/Bower app. It deploys to S3 using a gulp AWS deployment module.
There's no real reason to have both views and APIs in the same app or built with the same technology (as in Rails).
Now there are issues:
S3 doesn't play nicely with Angular's HTML5-mode URLs, so a pure S3 website isn't an option.
Facebook doesn't read OpenGraph tags that aren't in the source of the page.
I couldn't figure out the state of Google/SEO for Angular apps; I didn't see the content in the search results.
So as a solution I introduced another web server app. It can be based on anything - pure Rack, Node, etc. I chose Rack.
Solutions to the problems:
The web server app was hosted on www.yourapp.com and proxied (and cached) requests to S3. It supported all URLs (html5Mode) by simply proxying them to index.html.
OpenGraph meta tags: the API had an endpoint that takes a URL or an object ID and returns meta-tag information. The web server issues a request to the API once per URL (caching the response) and injects the tags into the served index.html.
SEO: used prerender as Rack middleware to render pages on the server.
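To make the proxy layer concrete, here's a minimal sketch of that web server in Node/Express rather than Rack (same idea, different stack); the S3 bucket URL and the /meta API endpoint are hypothetical names, not anything from the original setup:

```js
// Minimal Node/Express sketch of the web server app (the original used Rack;
// same idea). Bucket URL and /meta endpoint are hypothetical.
const express = require('express');
const fetch = require('node-fetch'); // v2; or use the global fetch on Node 18+

const S3_SITE = 'https://yourapp-frontend.s3.amazonaws.com'; // placeholder
const API = 'https://api.yourapp.com';                       // placeholder
const app = express();
const metaCache = new Map(); // one meta-tag lookup per URL, cached

// Static assets: proxy straight through to S3.
app.get(/\.(js|css|map|png|jpg|svg|ico)$/, async (req, res) => {
  const upstream = await fetch(S3_SITE + req.path);
  res.type(upstream.headers.get('content-type') || 'text/plain');
  res.send(await upstream.buffer());
});

// Any other URL is an html5Mode route: serve index.html with injected OG tags.
app.get('*', async (req, res) => {
  if (!metaCache.has(req.path)) {
    const r = await fetch(`${API}/meta?url=${encodeURIComponent(req.path)}`);
    metaCache.set(req.path, await r.text()); // e.g. '<meta property="og:title" ...>'
  }
  const index = await (await fetch(`${S3_SITE}/index.html`)).text();
  res.send(index.replace('</head>', metaCache.get(req.path) + '</head>'));
});

app.listen(3000);
```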
As a bonus:
Most apps today have a landing page/marketing site plus the actual app, and sometimes it's better to maintain these separately. The web server knows, according to a cookie, which app to present on www.yourapp.com - the actual app or the marketing site. On sign-in, set a cookie on the client side and voila.
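A rough sketch of that cookie switch, again in Express; the cookie name and file paths are invented for illustration:

```js
// Hypothetical cookie switch: cookie name and file paths are invented.
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  const signedIn = (req.headers.cookie || '').includes('signed_in=1');
  // Signed-in users get the app shell; everyone else gets the marketing site.
  res.sendFile(signedIn ? 'app/index.html' : 'marketing/index.html', { root: __dirname });
});

app.listen(3000);
```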

First, I think there's a bit of confusion here; let me try to clear it up.
There are a couple of ways of achieving this.
Pure client -> API
When you have a static application, there's no need to go through the Rails asset pipeline; there are far better ways to manage assets when you are using the tooling for client-side applications.
For example, your client application will be an Angular application and you will manage assets with a combination of Bower (dependencies) and Grunt (build and distribution).
There's no point in deploying to S3 with Capistrano; if it's a pure static application, you can use the aws CLI to just upload your content.
I'd go through a CDN as well. Something like Fastly works really well over Amazon S3.
I have a Rake task that uploads to S3 and then clears the cache on Fastly (if I need to).
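That Rake task is Ruby, but a Node sketch of the same upload-then-purge flow might look like this (assumes Node 18+ for the global fetch; the bucket name, service ID, and env var names are placeholders - check Fastly's API docs for the purge endpoint details):

```js
// deploy.js - push the built static app to S3, then purge the Fastly cache.
// Assumes Node 18+ (global fetch) and the aws CLI configured locally.
// Bucket name, service ID and env var names are placeholders.
const { execSync } = require('child_process');

execSync('aws s3 sync ./dist s3://yourapp-frontend --delete', { stdio: 'inherit' });

// Fastly has a purge-all endpoint per service; see their API docs for details.
fetch(`https://api.fastly.com/service/${process.env.FASTLY_SERVICE_ID}/purge_all`, {
  method: 'POST',
  headers: { 'Fastly-Key': process.env.FASTLY_API_KEY },
}).then((r) => console.log('Fastly purge status:', r.status));
```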
As for your Rails application, it would act as an API and should not have any assets.
Combined
If you have a combined application, some of the actions are served by the server (Rails) and just invoke some client-side code (Angular).
If this is the case, I would go through the Rails asset pipeline and keep everything per Rails best practices, with precompilation before deploy, etc.
It's one of those questions where "it depends" is the answer really, it all depends on what you want to achieve.
When I have a client application, I try to have a pure client and have the server only as an API, with no assets at all, this way, I separate the concerns.
EDIT 9/9/15
I'd have to say that as long as you can, I'd keep the apps separate.
It's not always possible, especially with more complex apps.
Most apps I have seen in recent months have kept the client-side and server-side code separate; I have seen less use of rails and more use of rails-api because of that (some have even ditched Rails completely for thinner solutions).

Related

Should I develop a separate express server, or handle all API calls in my next.js app?

My website will perform CRUD operations and will work with MongoDB and Firebase storage+auth.
What are the reasons / advantages to developing a separate Express server instead of integrating everything in my next.js app?
As far as I have seen, it can all be done in my next.js app, but I still see many projects working with a separate server.
It depends on what your app does and how you are hosting it.
Running Next.js on a standard server makes little difference whether you use Next.js's /api routes or Express.
However, if you are hosting on serverless (e.g. Vercel), I would recommend a separate Express server if you have a lot of CRUD operations, because the cold start of serverless functions is a really bad user experience.
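For comparison, the same trivial endpoint both ways - as a Next.js API route (a serverless function on Vercel, subject to cold starts) and on a standalone Express server that stays warm:

```js
// pages/api/dogs.js - a Next.js API route; on Vercel this runs as a
// serverless function, so it pays the cold-start cost.
export default function handler(req, res) {
  res.status(200).json({ dogs: [] });
}
```

```js
// server.js - the same endpoint on a separate, always-warm Express server.
const express = require('express');
const app = express();

app.get('/api/dogs', (req, res) => res.json({ dogs: [] }));

app.listen(4000);
```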
Build and Deployment
Next.js - If you want to edit something on the backend and push the change, it requires you to build the entire JS app, and depending on how big your app is, that can take a lot of time (especially with a lot of statically generated pages).
Express - If you run Express separately, you can build and deploy the front end and the back end separately. It saves time, and you can also organise your frontend/backend code better.
Choice of deployment
I have the choice of using Vercel to host my frontend, with statically generated pages and some server-side-rendered pages (automatic scaling, caching, CDN, etc.), and hosting my backend on a separate cluster of servers.
PS: I moved from a single Next.js app to Next.js + Express.
I can think of a few reasons why one would run a server different from the one Next.js provides:
Familiarity with Express, Koa, etc. (although next-connect helps with this)
There is an already existing API in PHP, Express, Flask, etc.
It is really based on what you want to do; the extra interactions with MongoDB & Firebase would be the same with both technologies. Unless you want to isolate the respective concerns separately, I don't see any harm in doing everything together in Next,
given that the point of using Next.js, as I understand it, is to utilise server-side rendering.
I've been using Next.js with TypeScript for quite a while now and, as of now, I have found one reason not to include Express in my project. And that reason is Vercel.
Since I use Vercel for continuous deployment of my projects, and Vercel does not support any custom server (as per their docs), I refrain from using Express or any other custom server.
I didn't face any problems performing CRUD operations with MongoDB; I can't say about Firebase.
In the Next.js docs, I found these points to consider:
A custom server can not be deployed on Vercel, the platform Next.js was made for.
A custom server will remove important performance optimizations, like serverless functions and Automatic Static Optimization.
But at the end of the day, it's a personal decision whether to use a custom server or not; it may depend on the specific use case you are looking at.
Personally, I try to keep it to just Next.js, but if I have to manage real-time data with Socket.io, I set up a separate server, because serverless functions can do everything else except WebSockets.
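A minimal sketch of what that separate real-time server can look like with Socket.io; the allowed origin is a made-up example:

```js
// Minimal standalone Socket.io server; the allowed origin is a made-up example.
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer, {
  cors: { origin: 'https://your-nextjs-app.vercel.app' },
});

io.on('connection', (socket) => {
  // Naive broadcast: relay every message to all connected clients.
  socket.on('message', (msg) => io.emit('message', msg));
});

httpServer.listen(4000);
```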

Caching a React/Meteor webpage

I want to speed up the initial load of a React/Meteor web page.
One of several ideas is to cache data. So far so good...
I tried this with service workers. It only worked for me for the "/public/" folder, but I also want to cache data from e.g. "/client/" to cache more of the app.
Is it possible to cache more data from other folders?
I did pretty much the same as described under "Step 1 - Add a service worker" here:
https://dev.to/jankapunkt/transform-any-meteor-app-into-a-pwa-4k44
UPDATES:
We are using this web page only on an intranet without an internet connection.
Things with React and Meteor don't really work that way.
It is expected to deliver a JS bundle of up to 1MB to your client; a medium-size app should be looking at 400-500KB of gzipped bundle size.
Don't use the public folder for assets; put everything in a CDN with edge caching, like AWS CloudFront (store in S3 and expose via CloudFront), or any other storage.
In your CDN you can set the expiry and cache-control (max-age), and this is used by the client (browser) for caching assets.
Deliver your JS and CSS bundles from a CDN.
Use code splitting extensively, ideally at the routing level (see the sketch after this list).
Whenever possible, use async-loaded libraries for maps, players, etc. instead of NPM packages (which get built into your bundle).
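Here's what route-level code splitting can look like in a React/Meteor app, using Meteor's dynamic import() support with React.lazy (the module paths are examples):

```jsx
// Route-level code splitting in a React/Meteor app: each page becomes its own
// chunk, fetched on demand via Meteor's dynamic import(). Paths are examples.
import React, { lazy, Suspense } from 'react';

const Home = lazy(() => import('/imports/ui/Home'));
const About = lazy(() => import('/imports/ui/About'));

export const App = ({ route }) => (
  <Suspense fallback={<div>Loading...</div>}>
    {route === '/about' ? <About /> : <Home />}
  </Suspense>
);
```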
With a PWA setup you will cache your bundle files, not your public folder. The tutorial you followed for the PWA is incomplete and useless: it only looks at how to get a green badge in the audit, and does nothing of any use.
One more thing: your Meteor bundle size impacts the traffic on your Meteor server. This is why you are better off delivering the bundle and all assets from CDNs.
Caching more with service workers only leads to flicker, inconsistencies between tabs and browsers, and errors.

With AngularJS-based single-page apps hosted on premises, how do you connect to AWS cloud servers?

Maybe this is a really basic question, but how do you architect your system so that your single-page application is hosted on premises under some hostname, say mydogs.com, while your application services code (and database) are hosted in the cloud? For example, let's say you spin up an Amazon EC2 Container Service cluster using Docker, running a NodeJS server. The hostnames will all be of the form ec2_some_id.amazon.com. What system sits in front of the Amazon EC2 instance that my angularjs app connects to? What architecture facilitates this type of app, especially with AWS-based services?
One of the important aspects of setting up the web application and the backend is to serve both from a single domain, avoiding cross-origin requests (CORS). To do this, you can use AWS CloudFront as a proxy, where routing happens based on URL paths.
For example, you can point the root domain to index.html while routing /api/* requests to the backend endpoint running in EC2.
It's also important for your Angular application to use full URL paths. One challenge with these is that for routes such as /home, /about, etc., the browser will request a page from the backend for that particular path. Since it's a single-page application, you won't have server pages for /home, /about, etc. This is where you can set up error pages in CloudFront so that all not-found routes are also forwarded to index.html (which serves the AngularJS app).
The only thing you need to take care of is CORS on whatever server hosts your backend in AWS.
More Doc on CORS:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS
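If you do end up serving the frontend and backend from different domains instead of behind a single CloudFront distribution, a hand-rolled version of those headers in Express looks roughly like this (the cors npm package does the same thing more robustly; the allowed origin is an example):

```js
// Hand-rolled CORS headers on the Express backend (the `cors` npm package
// does the same more robustly). The allowed origin is an example.
const express = require('express');
const app = express();

app.use((req, res, next) => {
  res.set('Access-Control-Allow-Origin', 'https://mydogs.com');
  res.set('Access-Control-Allow-Methods', 'GET,POST,PUT,DELETE');
  res.set('Access-Control-Allow-Headers', 'Content-Type,Authorization');
  if (req.method === 'OPTIONS') return res.sendStatus(204); // answer preflight
  next();
});
```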
Hope it helps.
A good approach is to have two separate instances: one instance to serve your API (Application Programming Interface) and another to serve your SPA (Single Page Application).
For the API server you may want a more robust service, because it's the one that will suffer the most, receiving tons of requests from all client instances, so it needs more performance, bandwidth, etc. In addition, you probably want your API server to be scalable when needed (depending on the load over it); maybe not, but it's something to keep in mind if your application is supposed to grow fast. So you may invest a little bit more in this one.
The SPA server, on the other hand, will only serve static resources (if you're not using server-side rendering), so it is supposed to be cheaper (if not free). Furthermore, all it does is serve the application resources once; the application actually runs on the client, and most files end up cached by the browser. So you don't need to invest much in this one.
Anyhow, your question about which service fits this type of application best can't really be answered, because it depends on things the question doesn't define; you'll have to find the one that suits your requirements in terms of how clients will consume your application: how many requests, downloads, or how much storage your app needs.
Amazon EC2 instance types

How to use NodeJS to combat social sharing and search engines issues when using single-page frameworks like AngularJS

I read an article about social-sharing issues in AngularJS and how to combat them by using Apache as a proxy.
The solution is usable for small websites. But if a web app has 20+ different pages, I would have to URL-rewrite and create static files for all of them. Moreover, a different stack is added to the app by using PHP and Apache.
Can we use NodeJS as the proxy and rewrite the URLs, and what's the approach?
Is there a way to minimize static-file creation?
Is there a way to remove the proxy, URL rewriting, and static files altogether? For example, inside our NodeJS app we could check the user agent; if it is the Facebook bot, Twitter, or the like, we use the request module to download our page and return the raw HTML to them. Is that a plausible solution?
Normally when someone shares a URL on a social network, that social network requests the page to generate a preview/thumbnail (aka "scraping").
Most likely those scrapers won't run JavaScript, so they need a static HTML version of the page.
The same applies to search engines (even though Google and others are starting to support JavaScript sites).
Here's a good approach for an SPA to still support scrapers:
use history.pushState in Angular to get virtual URLs when navigating through your app (i.e. URLs without a #)
server-side (Node.js or anything), detect whether a request comes from a user or a bot (e.g. check the User-Agent using this lib: https://www.npmjs.com/package/is-bot )
if the request URL has a file extension, it's probably a static-resource request (images, .css, .js); proxy it to get the static file
if the request URL is a page (i.e. not a static resource) and comes from a real user, always serve your index.html that loads your Angular app (pro tip: keep this file cached in memory)
if the request URL is a page and comes from a bot, serve a pre-rendered version of the requested URL (bots won't run JavaScript). This is the hard part (side note: ReactJS makes this problem much simpler). You can use a service like https://prerender.io/ ; they take care of loading your Angular app and saving each page as HTML (if you're curious, they use a headless/virtual browser in memory, PhantomJS, simulating what a real user would get). You can then request and proxy those prerendered pages to bot requests (like social-network scrapers). If you want, it's also possible to run a prerender instance on your own servers.
All this server-side process I described is implemented in this express.js middleware by prerender:
https://github.com/prerender/prerender-node/blob/master/index.js
(even if you don't like prerender, you can use that code as implementation guide)
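Wiring that middleware into an Express server is short; here's a sketch following prerender-node's documented usage (the dist directory and token env var are placeholders):

```js
// Express server using the prerender-node middleware linked above: bot
// requests get proxied to the prerender service, real users get index.html.
// The dist directory and token env var are placeholders.
const express = require('express');
const prerender = require('prerender-node');

const app = express();
app.use(prerender.set('prerenderToken', process.env.PRERENDER_TOKEN));
app.use(express.static('dist')); // your built Angular app

// html5Mode catch-all: every page URL serves the app shell.
app.get('*', (req, res) => res.sendFile('dist/index.html', { root: __dirname }));

app.listen(3000);
```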
Alternatively, here's an implementation example using only nginx:
https://gist.github.com/thoop/8165802

How can I leverage CDN with my AngularJS app on ASP.NET MVC?

I have an MVC project which really just serves an Angular application. This will be a public application and I expect it to get a lot of traffic, so I'm trying to use a CDN to keep requests to my servers light.
I've found many articles on how to get Angular itself from a CDN, but what I'm looking for is for MY files (css, html, js, media) to be served from a CDN. So for example, if I have a directive, its template would need to be served from the CDN, but I can't hard-code the URL because the template may not have been uploaded to the test or prod environments yet.
Update: The MVC application only serves the initial layout and home pages. Once my Angular script loads, MVC isn't really used anymore. The back end is a separate Web API application.
You might be asking the wrong question, but you're on the right track for how to optimize for high traffic.
First, learn how to serve the Angular application as static files. On a past project, I had two projects in my Visual Studio solution: one was a Web API back end, and the other a web project that just served static content. You can configure Visual Studio to run both projects at the same time on different ports of localhost. This doesn't answer your question, but setting up dev like this will get you a step closer.
Next, once you know your project's front end and back end are decoupled at the server level, make sure you're not going to get any CORS issues from cross-domain requests. If you're going to serve your back end from a subdomain, CORS may be an issue. Look into that; you'll probably need to add some code to your MVC app to solve it.
Now that you've got all that figured out, find out how to use a CDN service. There are many offerings, with the specifics of how to deploy content and SSL certificates varying for each. Amazon Web Services, Microsoft Azure, and many others offer CDN services, but before you even look at that stuff, make sure you can actually serve your back end from a different port on your dev machine and it still works.
I was trying to think of a good solution to this recently...
In local testing (grunt serve), I use the current non-minified .js/.css files, but when creating a build (grunt build), I'd want to point those files to the CDN in my Angular project.
I'm thinking that in your grunt (gulp/webpack/etc.) build, which creates the minified local files, you could add a task to prepend your CDN URL to certain urls/files via
https://github.com/callumlocke/grunt-cdnify or https://github.com/tactivos/grunt-cdn (while still using your non-minified files for grunt serve).
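A rough Gruntfile sketch of that idea - rewrite asset URLs to the CDN only during the build, leaving grunt serve on local files; the option names follow grunt-cdnify's README and the CDN host is made up, so double-check against the plugin docs:

```js
// Gruntfile.js - rough sketch: rewrite asset URLs to the CDN during `grunt
// build` only, so `grunt serve` keeps using local non-minified files.
// Option names follow grunt-cdnify's README; the CDN host is made up.
module.exports = function (grunt) {
  grunt.initConfig({
    cdnify: {
      dist: {
        options: { base: 'https://dxxxxxxxx.cloudfront.net/' },
        files: [{ expand: true, cwd: 'dist', src: '**/*.{html,css}', dest: 'dist' }],
      },
    },
  });

  grunt.loadNpmTasks('grunt-cdnify');
  grunt.registerTask('build', ['cdnify']); // append to your existing build chain
};
```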
It would be nice if uploading to the CDN and invalidating the files were a grunt/gulp/webpack plugin; then you could automate it as part of your prod builds.
Presumably your build system (Jenkins etc.) uploads your app to wherever it lives,
and then a grunt/gulp plugin like https://github.com/keycdn/grunt-keycdn could invalidate your files, if you use that CDN service.
I am guessing your build system could also do this work for you via its own CDN upload/invalidation plugins.
I know this answer is mentioning grunt specifically, but there should be similar plugins for gulp/webpack/etc as well.
