serving files to angular app from file system on different server - angularjs

I have one server running the Angular application and another for the Web API.
I have a mechanism to upload photos and save their paths. For example, I store this path in my database:
http://localhost:37020/App_Data/Tmp/FileUploads/248/tag2.png
So, when I want to display those images, I use
<img ng-src="{{path}}">
but I get
GET http://localhost:37020/App_Data/Tmp/FileUploads/248/tag2.png 404 (Not Found)
HTTP Error 404.8 - Not Found: The request filtering module is configured to deny a path in the URL that contains a hiddenSegment section.
What I have done in my Web API is set up a route like:
routeTemplate: "App_Data/Tmp/FileUploads/{listingId}/{file}"
but the controller doesn't seem to pick up the request.

The problem was that I was using the App_Data folder, which is one of ASP.NET's protected hidden segments, so IIS request filtering refuses to serve its contents directly.
Changing to a different folder worked just fine. I didn't even have to create a controller or action to serve the files.
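For anyone hitting the same wall, a minimal sketch of the working setup (the folder name, API route, module name, and scope property are illustrative assumptions, not the original code):

// Fetch the listing and expose the stored photo URL, which now points
// outside the protected App_Data folder,
// e.g. http://localhost:37020/Uploads/248/tag2.png
angular.module('app').controller('ListingCtrl', function ($scope, $http) {
    $http.get('/api/listings/248').then(function (response) {
        $scope.path = response.data.photoPath;
    });
});

// In the template, ng-src needs the interpolation braces:
// <img ng-src="{{path}}">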

Related

How to set up APIs in a React Native template

I have a React Native chat app template which I can't figure out how to set up; I don't know if I am supposed to get an API key from CometChat or use Firebase. The template has been preconfigured, I think, and by adding the Firebase credentials in the .env file the login and register pages work, but the chat section is blank.
This is what the documentation says:
API Integration
The Doot React template ships with a fake-backend setup already. You will find all files related to API integration in the src/API folder.
The src/apiCore.ts file contains the axios setup used to call server APIs, including the get, put, post, patch, and delete methods, interceptors, and token-setting methods.
The src/index.ts file contains all modules' API call functions, which are exported from each module's individual .ts file, e.g. contacts.ts, bookmarks.ts, etc.
The src/urls.ts file contains all modules' API URLs.
Note: we have added the fake backend just for user interactions. If you are working with a real API integration, there is no need for it; you can delete src/helpers/fakeBackend.ts and remove the code related to fakeBackend from app.tsx. You then just need to update the API URLs of the related modules in the src/apiCore/urls file, that's it!
The chat interface after login: (screenshot)
App documentation: (screenshot)
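Going by that description, wiring the template to a real backend means editing the URL constants and dropping the fake backend. A hypothetical sketch (the constant names are assumptions; check the actual exports in your template's src/urls.ts):

// src/urls.ts (hypothetical contents)
export const API_URL = "https://your-chat-api.example.com";
export const GET_CONTACTS = "/contacts";
export const GET_BOOKMARKS = "/bookmarks";

// app.tsx: remove the fake-backend bootstrap, i.e. delete the import
// of src/helpers/fakeBackend.ts and the call that initializes it.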

React App url parameter with S3 and CloudFront

My apologies if the information I have provided is vague, as I am not so experienced with AWS and React.
I have a React Application being deployed on S3 and CloudFront as per what is suggested in the following link.
Use S3 and CloudFront to host Static Single Page Apps (SPAs) with HTTPs and www-redirects
So most things are working fine: I have 403 and 404 errors being redirected to index.html. However, the issue comes in where I have query parameters in my URL, e.g. https://example.com/example?sample=123. When I enter the URL in my browser, the query string gets removed, and the end result is https://example.com/example. I have read some articles about forwarding query parameters, but it's not working for me.
AWS Documentation - Query String Parameters
Hope I will be able to get some advice here. Thanks in advance.
The request for example?sample=123 is redirected to example because S3 sees example?sample=123 as a path (a folder named example?sample=123) and throws a 404, as there is no such folder.
As you have mentioned, you have configured 404 -> index.html, so the browser then goes back to example, which is very likely the default page of your React app.
Overall it looks like your query string is being cleared; in fact it is lost during the redirection.
The solution includes three parts:
React
You can follow these two great tutorials, one for Next.js and one for CRA (Create React App).
The way it works is to detect #! in the path, then keep and restore the query string after the redirection, as in the sketch below.
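A minimal sketch of that client-side half (assuming the S3 rule described below rewrites /example?sample=123 to /#!/example?sample=123):

// Run this before the router boots: if S3 bounced us to a hashbang
// URL, restore the original path and query string
var hash = window.location.hash;
if (hash.indexOf('#!') === 0) {
    var pathAndQuery = hash.slice(2); // "/example?sample=123"
    window.history.replaceState(null, '', pathAndQuery);
}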
S3
As covered in the two links above, you have to set the redirection rules of the S3 bucket to add a #!/ prefix before the path on a 403 or 404; this is what lets React work out which part of the URL is the query string. You can configure it under Properties -> Static website hosting -> Redirection rules - optional. You also need to set index.html as the Index document and enable static website hosting with the correct permissions configured.
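For reference, the same rule expressed with the AWS SDK for JavaScript v3 (bucket name, region, and hostname are placeholders; the console's XML editor works equally well). Note that S3 matches one error code per rule, so covering 403 as well means adding a second entry:

const { S3Client, PutBucketWebsiteCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({ region: 'us-east-1' }); // region assumed

// Redirect 404s to /#!/<original-key> so the client-side snippet
// above can restore the real path and query string
s3.send(new PutBucketWebsiteCommand({
    Bucket: 'your-spa-bucket', // placeholder
    WebsiteConfiguration: {
        IndexDocument: { Suffix: 'index.html' },
        RoutingRules: [{
            Condition: { HttpErrorCodeReturnedEquals: '404' },
            Redirect: {
                Protocol: 'https',
                HostName: 'example.com', // your domain
                ReplaceKeyPrefixWith: '#!/'
            }
        }]
    }
})).then(function () { console.log('Website configuration updated'); });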
CloudFront
In General, set Default Root Object to index.html; make sure you don't write it as /index.html.
In Origin, set Origin domain to the S3 static website hosting URL (http://[bucket-name].s3-website.[region].amazonaws.com); do not choose the bucket itself.
In Behavior, set Viewer protocol policy to Redirect HTTP to HTTPS, and set Origin request policy - optional to AllViewer so that all query strings are passed through.
Hope it helps.

How to integrate Nodejs running on AWS EC2 and static Angularjs content on AWS S3?

I have a Node.js and AngularJS application running on a single AWS EC2 instance. The code structure of the application is as follows:
Code structure (screenshot)
Currently, when a user makes the first call to the application, the call is handled by the Node.js Express framework and directed to the index.html page within the Angular static directory, using the following code in the server/server.js file:
res.sendFile(path.resolve(__dirname + '/../web/dist/index.html'));
From index.html, various static components (JS, HTML, CSS etc) are invoked and all angularjs controllers / services make a call back to nodejs API for fulfilling user request.
This arrangement works fine on a single instance. Now I want to move the Angular static content (i.e. the web directory) to AWS S3. In doing so, do I have to change the following code in server.js to:
res.sendFile(path.resolve('**AWS S3 URL** +/web/dist/index.html'));
since my static files have now moved to AWS S3? Also, in doing so I would have to modify all Angular controllers and services to use an absolute path for the Node.js API calls. This means making lots of changes and introducing deployment configuration (which we will have to do anyway at a later date, for flexibility).
Another approach we can take is to move index.html from web to the server folder, making the following change in server/server.js:
res.sendFile(path.resolve(__dirname + '/../server/index.html'));
and then add the following code in server.js to handle redirects for static content from Node.js to S3, something like this:
app.get('/assets/*', function (req, res) {
    // Bounce every static-asset request over to the S3 bucket
    var requestURL = req.url;
    var redirectedURL = <<**Remote S3 base URL**>> + requestURL;
    res.redirect(redirectedURL);
});
However, the concern is that this creates lots of redirects, and I wonder whether that is good design. I have read that search engines do not like applications with lots of redirects (HTTP 301, 302) and rank those pages lower. Hence I am trying to find out the best practice for deploying Node.js on AWS EC2 with static AngularJS content on S3. Any advice will be highly appreciated.
You can access your content directly from S3 by putting the S3 URLs of your files into your index.html.
If you want to serve index.html itself from S3 as well, then you only have to redirect the index.html request to S3; everything else the browser can load directly from S3.
If you really want to serve public content fast, I would suggest CloudFront: you can serve all your public content directly from CloudFront without it ever touching your Node server, while all API calls still come to your Node server. But yes, it depends on the architecture of your application.
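A minimal sketch of that split (the CDN_BASE variable and the routes are illustrative assumptions):

var express = require('express');
var app = express();
// Assumed environment variable pointing at the CloudFront distribution
var CDN_BASE = process.env.CDN_BASE || 'https://dxxxxxxxx.cloudfront.net';

// API calls are still handled by this Node server
app.get('/api/ping', function (req, res) { res.json({ ok: true }); });

// Static assets take one permanent redirect to the CDN; browsers
// cache the 301, so repeat visits skip the Node hop entirely
app.get('/assets/*', function (req, res) {
    res.redirect(301, CDN_BASE + req.url);
});

app.listen(3000);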

Sending files to Kloudless saver from clientside

I'm currently using the Dropbox client JS script to push zip files to a folder (in test, a couple of KB; in production, a couple of hundred MB). There currently isn't a server/back end, so I am posting from an ArrayBuffer, not a server URL.
var zip = new JSZip();
zip.file("test.txt", "Hello World\n");
var content = zip.generate({ type: "arraybuffer" });
// ... code to pick a dropbox folder ...
client.writeFile(url + "/" + fileName, content, function (error) {
    // ... handle the result ...
});
This all works fine: the client is able to write the binary file (which Dropbox's own Saver is unfortunately unable to do). I'm trying to see whether Kloudless can do the same, since I also need to support Google, Box, etc. at some point. The documentation for https://github.com/kloudless/file-explorer/ says the saver takes the files as an array of URLs:
explorer({
    // ...
    files: [{
        "url": "http://<your image url>",
        "name": "filename.extension"
    }]
});
It doesn't seem to like local file references created with URL.createObjectURL(blob), so I'm guessing the API is telling the remote services to pull the files rather than pushing their data.
You are correct that the Kloudless API backend servers stream the file from the URL to the final destination in whichever cloud service you would like the file to be uploaded to (e.g. a folder in a Dropbox account).
If the files are present only on the client-side, I would recommend using the Kloudless Chooser to instead prompt the user to choose a folder to save the files in, and then handle uploading the file data to that destination via the client-side manually.
To do this, refer to this example configuration: https://jsfiddle.net/PB565/139/embedded/
I have set retrieve_tokens to true so that my client-side JavaScript will receive not just the metadata of the folder the user chooses to upload the data to but also the Bearer token to use to gain access to the user's account. This allows the client-side JavaScript to then make upload or multipart upload requests to Kloudless to upload the file data to that folder. The advantage of multipart uploads is that an error uploading one chunk wouldn't require the whole upload to be retried.
Be sure to add the domain you are hosting the File Explorer on to your Kloudless App's Trusted Domains (on the App Details page) so that it can in fact receive the Bearer token in the response JS callback. In my JSFiddle example, I would have to add 'fiddle.jshell.net' to my app's list of Trusted Domains to be able to receive the Bearer token to perform further requests from the client side to the Kloudless API.
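A rough sketch of that flow, based on the fiddle's configuration (the app ID, the payload field names, and the upload endpoint are assumptions drawn from Kloudless's documentation of that era; verify against the current API reference):

// Assumes the file-explorer script tag has loaded window.Kloudless,
// and `content` is the ArrayBuffer produced by JSZip above
var explorer = window.Kloudless.explorer({
    app_id: 'YOUR_APP_ID',   // placeholder
    retrieve_tokens: true,   // also hand back a Bearer token
    types: ['folders']
});

explorer.on('success', function (folders) {
    var folder = folders[0];
    // Assumed v1 upload endpoint and token shape
    fetch('https://api.kloudless.com/v1/accounts/' + folder.account +
          '/storage/files/', {
        method: 'POST',
        headers: {
            'Authorization': 'Bearer ' + folder.bearer_token.key,
            'X-Kloudless-Metadata': JSON.stringify({
                name: 'test.zip',
                parent_id: folder.id
            })
        },
        body: content
    });
});

explorer.choose();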

Pass environment variable through to Angular app

I'd like to configure my gulp webserver task to pass an environment variable into the Angular app. Each team member has his own VM running the API, and the variable would tell the Angular app the base API URL. I'm trying to eliminate the need for every team member to remember to edit the config file after every TFS update.
I thought of simply setting a response header via middleware, but javascript cannot see response headers for the current page - only those of XHR responses.
So I tried initializing the config service by performing a HEAD request against the web root, but this requires resolving an $http promise, which in turn requires adding a resolve to the route config to ensure it completes before anything tries to use the service.
I also tried injecting a cookie via middleware and reading it with the $cookies service, but Internet Explorer apparently doesn't treat 'localhost' as a valid domain name for cookies and does not read them.
So what other ways are there to allow an environment variable (or other form of local config) to be passed into the angular app?
We have solved this problem in several ways for different situations:
1. Don't use a base URL at all; use only relative paths like "/folder/resource".
2. Use an HttpHandler that resolves files with a custom extension, e.g. ".xyz". We have a file named "config.xyz", which is just a JavaScript file containing something like:
{
    baseUrl: [BASE_URL]
}
When the handler is asked for this file, it reads the file, performs the replacements, and serves the resulting content.
3. Use a fake name for the server that serves the API, e.g. thisismylocalfake, and ask developers to configure their hosts file in system32 accordingly.
4. Have a gulp task that, when you compile the application, takes a config.js file and uses the machine name to replace a tag, as in option 2.
I ended up adding middleware to gulp-webserver to intercept the request for our config service file. Then I used gulp-replace and gulp-respond to inject the URL from the environment variable directly into the stream. No files edited, technically, and it works without any dev-specific code in the project.
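For illustration, a minimal sketch of that middleware idea (the config path, module name, and environment variable are assumptions; the original answer spliced the value in with gulp-replace and gulp-respond instead of building the string by hand):

var gulp = require('gulp');
var webserver = require('gulp-webserver');

gulp.task('serve', function () {
    return gulp.src('dist').pipe(webserver({
        middleware: function (req, res, next) {
            // Answer the app's config request from the environment
            // instead of serving a checked-in file
            if (req.url === '/scripts/config.js') {
                res.setHeader('Content-Type', 'application/javascript');
                res.end('angular.module("app.config", [])' +
                        '.constant("API_URL", "' +
                        (process.env.API_URL || 'http://localhost:8080') +
                        '");');
            } else {
                next();
            }
        }
    }));
});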
