When to use url-loader with your webpack config? - reactjs

There are many SO questions on using the url-loader, but I'm wondering what exactly it does? The README is a little limited.
The url loader works like the file loader, but can return a Data Url if the file is smaller than a limit.
The limit can be specified with a query parameter. (Defaults to no limit.)
If the file is greater than the limit, the file-loader is used and all query parameters are passed to it.
source: https://github.com/webpack/url-loader
I sort of understand the README but I'm not sure when I need that functionality. I noticed a bunch of webpack configs that don't use it, and seem to still work. Is this something you need when running a server vs the webpack-dev-server?
What exactly does it do and when should I use it?
thanks!

To expand on my comment:
It base64-encodes the image directly into the CSS or HTML if it is below the limit.
This method allows you to save on HTTP requests, which can improve your page speed. It is mostly used for icons or other small image files. Once you get into bigger image files, you aren't gaining much, if any, performance by base64-encoding them.
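For illustration, a minimal webpack rule using url-loader might look like the following (webpack 4-style syntax; the 8 KiB limit, test pattern, and file naming are just example values, not recommendations):
module.exports = {
  module: {
    rules: [
      {
        test: /\.(png|jpe?g|gif|svg)$/,
        use: [{
          loader: 'url-loader',
          options: {
            limit: 8192,                  // files up to 8 KiB are inlined as base64 data URLs
            name: '[name].[hash:8].[ext]' // larger files fall back to file-loader under this name
          }
        }]
      }
    ]
  }
};
Anything above the limit is emitted as a separate file and referenced by URL, exactly as file-loader alone would do.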

Related

How can I edit compressed JS files

I have had a developer create a website or app in React. This is already on a webserver and does what it should do. Now I want to develop the frontend myself, which would be no problem if I knew how to edit the code.
On the server I have an index.html, some stuff like a favicon, and a folder. This folder contains the folders "css", "js" & "media", and I don't understand their content. In the folder "css" are, for example, the files "main.12345.chunk.css" and "main.12345.chunk.css.map". Both look very cryptic.
Now I found out after some research that this is probably a compressed representation. Possibly compressed with Webpack?
But how can I edit these files in a meaningful way and understand what was coded there in the first place? Normally I would just download the file to be changed with Filezilla and edit it with an editor or Visual Studio code, but in this case I have no idea.
Those "cryptic" files are probably minified. Minification is a process where the original code is minified using several approaches, making it much smaller in size and also sometimes better performing. This is done by Webpack with a build process.
Those files are not meant to be develop with (or even read for that matter). Their sole purpose is to be optimized and be run in a production environment. It's very hard or even impossible to understand those, you would basically have to reverse-engineer them to understand what's going on. Many websites actually use minification for this additional bonus of protection of their application logic, because minimization basically obfuscates client side code. For example, the WhatsApp web client written in React is heavily obfuscated, in order to not allow anyone to write a WhatsApp client (there are efforts for this particular example, but it takes lots of time).
TL;DR: You have to get the original source files in order to edit them.
But how can I edit these files in a meaningful way and understand what was coded there in the first place?
They really are not designed for editing.
Edit the original source code to the application, then run its build script and deploy the output from it.
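For context, a hedged sketch of the kind of webpack configuration that produces such minified, hash-named bundles (paths and names are illustrative; a Create React App project, if that is what your developer used, hides an equivalent config behind its build script):
const path = require('path');

module.exports = {
  mode: 'production',                           // enables minification (Terser) out of the box
  entry: './src/index.js',
  output: {
    path: path.resolve(__dirname, 'build/static/js'),
    filename: '[name].[chunkhash:8].chunk.js'   // yields names like main.12345678.chunk.js
  },
  devtool: 'source-map'                         // emits the .map files seen next to each bundle
};
The point is that these outputs are regenerated from the source on every build, which is why editing them directly is a dead end.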

Caching best practice for AngularJS dynamic sites?

I'm working on an AngularJS project that will have considerable traffic.
While in development I stumbled upon the issue of partials being cached and not updated on different actions. Sure, I can get rid of this using .run with $templateCache.removeAll(), for example, but I want to make sure this is actually a good idea.
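Roughly along these lines (the $routeChangeStart hook and module name are just an illustrative way to wire it up):
angular.module('app').run(['$rootScope', '$templateCache',
  function ($rootScope, $templateCache) {
    $rootScope.$on('$routeChangeStart', function () {
      $templateCache.removeAll(); // throw away every cached partial on navigation
    });
  }
]);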
Some partials are updated dynamically (user input, or automatically in intervals), while some are static or updated very infrequently.
What would be the best approach to caching in this case?
With non-Angular sites I prefer to keep responsibilities cleanly split, for example:
1. Cache headers are set at the app level
2. nginx just serves the site itself
3. Varnish does full-page caching plus a CDN for static assets, or the CDN does full-page caching (depending on the client/project), and so on
Key point: every part has its own distinct responsibility.
With this project I can use Varnish and a CDN for static assets in a multi-server setup. There is also the possibility of re-using Varnish for load balancing, i.e. I may have Varnish in front of several web nodes. I have some flexibility in terms of infrastructure.
Could you please share your thoughts on the optimal setup?
In particular: is it still worth caching partials?
If yes, what would be the best place to set Cache-Control headers?
And what would be the best way to flush their cache, especially if I need to flush only a sub-selection?
Thank you!!
D.
Posting the answer myself, as I cannot select rob's reply as the answer. My solution is based on his suggestion, which helped me a lot to move forward:
During the production build:
I'm using "gulp-rev-easy" gulp module for revving the previously
concatenated and uglified css & js files.
However, no existing revving modules could provide me with functionality I needed: replacing specific strings in an index.html template file with my own paths to CDN both for CSS and JS scripts. So I had to add my own function/override reveasy a little.
As per rob's I've started using templatecache and its works great in conjunction with revving.
Images are just going through a S3 -> CDN pair.
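A rough sketch of those build steps, for reference. gulp-angular-templatecache bakes the partials into $templateCache, and gulp-rev is shown here only as a stand-in for gulp-rev-easy (whose CDN-path overrides were custom anyway); all paths and names are illustrative:
var gulp = require('gulp');
var templateCache = require('gulp-angular-templatecache');
var rev = require('gulp-rev');

// 1. Turn all partials into a single templates.js that pre-fills $templateCache
gulp.task('templates', function () {
  return gulp.src('app/partials/**/*.html')
    .pipe(templateCache('templates.js', { module: 'app', root: 'partials/' }))
    .pipe(gulp.dest('dist/js'));
});

// 2. Rev the concatenated/uglified assets so every release gets new file names
gulp.task('rev', ['templates'], function () {
  return gulp.src(['dist/js/*.js', 'dist/css/*.css'], { base: 'dist' })
    .pipe(rev())                 // app.js -> app-d41d8cd9.js, etc.
    .pipe(gulp.dest('dist'))
    .pipe(rev.manifest())        // mapping later used to rewrite index.html / CDN URLs
    .pipe(gulp.dest('dist'));
});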
Some quick notes:
Using revving to "break" the cache turned out to be the fastest and easiest way to deal with caching, as CloudFront takes approx. 20 minutes to invalidate a path; other CDNs are faster, but you still have to issue the request, etc.
For dynamically updated images that keep the same name, like avatar images a user can update via their profile area, I suggest adding a timestamp or some random string to the file name. While adding a random URL path sounds easier, you won't be able to cache it.
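A tiny illustration of that file-naming idea (the function and its userId parameter are placeholders, not part of any particular library):
function avatarFileName(userId) {
  // new name on every upload, so cached copies at the CDN never go stale
  return 'avatar-' + userId + '-' + Date.now() + '.jpg';
}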
Hope this helps,
Dennis

How can I gather lots of files of one filetype?

I'm trying to fuzz some tools, but I need a huge number of .zip or .jpg files for that. I've tried crawlers like Webripper, but it's not very effective (or I'm doing it wrong). Is there a better way to get lots of different files?
OK, on the off chance that someone else might need something like this:
In the end I used Webripper, and instead of generating links to Google/Bing results with the "filetype" parameter I just set some upload/freeware pages as the targeted rip job with the maximum link depth.
Webripper might crash sometimes and it takes quite some time, but it works, more or less.
A probably better solution would be to use the Google API (e.g. the C# Search API). Then extract the clean links from the results and download them asynchronously. Using the direct result links most likely won't work, because Google will block them after some files ("unusual data transfer").
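A minimal Node.js sketch of that "extract the clean links, then download them asynchronously" step; the urls array stands in for whatever the search API actually returns, and the target directory is illustrative:
const https = require('https');
const fs = require('fs');
const path = require('path');

const urls = [
  'https://example.com/samples/a.zip',
  'https://example.com/samples/b.zip'
];

fs.mkdirSync('downloads', { recursive: true });

urls.forEach(function (url) {
  const dest = path.join('downloads', path.basename(url));
  https.get(url, function (res) {
    res.pipe(fs.createWriteStream(dest)); // stream each response straight to disk
  }).on('error', function (err) {
    console.error('Failed to fetch ' + url + ': ' + err.message);
  });
});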

Is it good to put each directive in a separate file in Angular?

I have about 10 directives and they are pretty complicated. Today I use only one file, directives.js.
Is there a performance penalty if I put each directive in a separate file for better maintainability?
Thanks,
JavaScript itself doesn't care where the code comes from. But JavaScript code has to be loaded by the browser. Making 10 HTTP requests to load 10 files is obviously slower than making 1 HTTP request to load the equivalent code.
But that's not a good reason to put everything in a single file. You should make one file for each component to keep the code maintainable and easy to navigate. The build procedure of your application should then concatenate and minify the JavaScript files into a single file for production, so that the actual application loads just one file.
Grunt and Gulp are two good build tools to do that, and much more.
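A hedged sketch of such a concatenate-and-minify step using gulp with gulp-concat and gulp-uglify (the source and output paths are illustrative):
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src('app/directives/**/*.js') // one file per directive in development
    .pipe(concat('app.min.js'))             // combined into a single file...
    .pipe(uglify())                         // ...and minified for production
    .pipe(gulp.dest('dist/js'));
});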
Yes, there is a performance penalty for the client if it has to load every file individually. There are, however, build and server-side techniques to mitigate this, such as ASP.NET's script bundling, Grunt builds, and many more, which bundle several JavaScript files into one file for the client.
Yes, you should put your directives in separate files. This will degrade performance if used as is; however, by using build tools like Grunt you can concatenate and minify the whole of your app into a single JS file.

How do I load an alternative font for PIL ImageFont on App Engine?

I have successfully ported some Python code to App Engine that uses PIL's ImageFont and ImageDraw to generate a dynamic image. The only remaining problem is that the original code loads a TrueType font using a call like this:
titlefont = ImageFont.truetype("Verdana Bold.ttf", titlefontsize)
I can't just upload the font file and access it directly in GAE (at least I don't think I can?!). I guess it might be possible somehow to dump font data in a datastore blob, load that and feed it into PIL, but this seems less than elegant, and quite wasteful if everybody who uses PIL for image generation does the same thing. Currently I'm stuck with ImageFont.load_default() though, which gives pretty horrendous looking results.
Is there some clever way of working with alternative fonts in GAE PIL? Some additional API I'm missing that will return usable font objects?
Any file in your application's directory will be uploaded along with your application when you deploy it.
So yes, you should be able to "just" access any file you need by keeping it in or under your application directory, moving it there if necessary.
If you want to serve those files, that's something different. https://developers.google.com/appengine/docs/python/gettingstarted/staticfiles
But try including your .ttf file where your app can locate it and it should just work.
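A minimal sketch of that, assuming the .ttf file is deployed next to the module that uses it (the font file name comes from the question; titlefontsize and the path-building are illustrative):
import os
from PIL import ImageFont

titlefontsize = 24  # placeholder; use whatever size the original code passes in
FONT_PATH = os.path.join(os.path.dirname(__file__), "Verdana Bold.ttf")
titlefont = ImageFont.truetype(FONT_PATH, titlefontsize)  # same call as before, just with an explicit path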
