It's driving me crazy, any help would be much appreciated!
To set up my bucket in S3 I followed http://www.cheynewallace.com/uploading-to-s3-with-angularjs/
Following that post, I made the following "improvements": I extended the policy with a wildcard and granted more rights
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectTorrent",
"s3:GetObjectVersion",
"s3:GetObjectVersionAcl",
"s3:GetObjectVersionTorrent",
"s3:PutObject",
"s3:PutObjectAcl",
"s3:PutObjectVersionAcl"
],
"Resource": [
"arn:aws:s3:::photos-eu/*"
]
}
]
}
and added <ExposeHeader>ETag</ExposeHeader> to the CORS settings of the bucket.
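For reference, the bucket CORS configuration then looks roughly like this (a sketch, not my exact file; the allowed origin, methods and exposed headers are taken from the response headers shown further down, the AllowedHeader wildcard is an assumption):
<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <ExposeHeader>ETag</ExposeHeader>
        <ExposeHeader>x-amz-meta-custom-header</ExposeHeader>
    </CORSRule>
</CORSConfiguration>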
My Angular service using the aws-sdk then looks like this:
/// <reference path="../../../typings/tsd.d.ts" />
module Services {
export interface IS3UploadService {
upload(imgName:string, imgData:string):ng.IPromise<{}>;
}
export class S3UploadService implements IS3UploadService {
static $inject = ['$q'];
private bucket:AWS.S3;
constructor(private $q:ng.IQService) {
var credentials = new AWS.Credentials("myAccessKeyId", "mySecretAccessKey");
AWS.config.update(credentials);
AWS.config.region = "eu-west-1";
this.bucket = new AWS.S3({params: {Bucket: 'peterparker-photos-eu', maxRetries: 10, region: "eu-west-1"}});
}
upload(imgName:string, imgData:string):ng.IPromise<{}> {
var deferred = this.$q.defer();
var params:AWS.s3.PutObjectRequest = {
Bucket: "peterparker-photos-eu",
Key: imgName,
Body: imgData,
ContentType: "image/jpeg",
ContentEncoding: "Base64"
};
this.bucket.putObject(params, (err:any, data:any) => {
if (err) {
console.error("->" + JSON.stringify(err));
deferred.reject(err);
} else {
console.info(data);
deferred.resolve(data);
}
});
return deferred.promise;
}
}
}
angular.module('App')
.service('S3UploadService', Services.S3UploadService);
For my test purposes, I pass in imgData an image encoded as Base64, something like "/9j/4AAQSkZJRgABAgAAZABkA...." (of course a valid image, converted with http://base64-image.de).
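The call itself looks roughly like this (just a sketch; the controller wiring and variable names are assumptions, the timestamp file name matches the request URL below):
// somewhere in a controller with S3UploadService injected (hypothetical names)
this.s3UploadService.upload(Date.now() + '.jpg', base64ImgData)
    .then((data:any) => console.info('Uploaded', data))
    .catch((err:any) => console.error('Upload failed', err));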
And as result, each time I try, I've got following error
{"line":25,"column":24996,"sourceURL":"http://localhost:8100/lib/aws-sdk/dist/aws-sdk.min.js","message":"The request signature we calculated does not match the signature you provided. Check your key and signing method.","code":"SignatureDoesNotMatch","region":null,"time":"2016-06-08T15:12:09.945Z","requestId":null,"statusCode":403,"retryable":false,"retryDelay":60.59883770067245}
So much fun...
Update headers:
General
Request URL:https://peterparker-photos-eu.s3-eu-west-1.amazonaws.com/1465408512724.jpg
Request Method:PUT
Status Code:403 Forbidden
Remote Address:54.231.131.16:443
Response headers
Access-Control-Allow-Methods:HEAD, GET, PUT, POST, DELETE
Access-Control-Allow-Origin:*
Access-Control-Expose-Headers:ETag, x-amz-meta-custom-header
Connection:close
Content-Type:application/xml
Date:Wed, 08 Jun 2016 17:55:20 GMT
Server:AmazonS3
Transfer-Encoding:chunked
Vary:Origin, Access-Control-Request-Headers, Access-Control-Request-Method
x-amz-id-...
x-amz-request-id:...
Request Headers
Accept:*/*
Accept-Encoding:gzip, deflate, sdch, br
Accept-Language:fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4,de;q=0.2
Authorization:AWS ...
Connection:keep-alive
Content-Encoding:Base64
Content-Length:38780
Content-MD5:...
Content-Type:image/jpeg; charset=UTF-8
Host:peterparker-photos-eu.s3-eu-west-1.amazonaws.com
Origin:http://localhost:8100
Referer:http://localhost:8100/?ionicplatform=ios
User-Agent:Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.79 Safari/537.36
X-Amz-Date:Wed, 08 Jun 2016 17:55:20 GMT
X-Amz-User-Agent:aws-sdk-js/2.3.18
Request payload
Img base64 code
Update
Even when trying to upload non-Base64 content, it finishes with the same error:
var paramsHtml:AWS.s3.PutObjectRequest = {
Bucket: "peterparker-photos-eu",
Key: "HelloWorld.html",
Body: "The Body",
ContentType: "text/html"
};
Update #2
I moved to a solution where a signed URL is generated by my Node.js server, as described in the following question, and still got the same error as a result... but at least I tried ;)
upload file from angularjs directly to amazon s3 using signed url
Freak, I finally found the solution, or at least a solution.
After migrating my client-side aws-sdk solution to one where the server generates a signed URL, I was still facing the same error. Long story short, I fixed the problem by setting the Content-Type on both sides of the exchange.
My code, in case someone faces the same problem one day:
Server Node.js
var AWS = require('aws-sdk');
AWS.config.update({accessKeyId: "myKey", secretAccessKey: "mySecret"});
AWS.config.region = 'eu-west-1';
app.post('/api/images', securityPolicy.authorise, function (req, res) {
var s3 = new AWS.S3();
var imgName = req.body.imgName;
var contentType = req.body.contentType;
// Expires in seconds
var params = {Bucket: 'photos-eu', Key: imgName, Expires: 600, ContentType: contentType};
s3.getSignedUrl('putObject', params, function (err, url) {
if (err) {
res.status(500).json({
error: "Presigned S3 url for putObject can't be created. " + JSON.stringify(err)
});
} else {
res.json({url: url});
}
});
});
Client (Angular):
First, of course, there is the part that calls the Node server, an obvious POST to my server (see the sketch below).
And then the second part, which processes the signed URL:
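Roughly like this (a sketch: the method name and error handling are assumptions, authentication omitted, but it POSTs to the /api/images endpoint shown above and reads back the url field it returns):
private getSignedUrl(imgName:string, contentType:string):ng.IPromise<string> {
    var deferred = this.$q.defer<string>();
    // Ask the Node endpoint above for a pre-signed putObject URL
    this.$http.post('/api/images', {imgName: imgName, contentType: contentType})
        .then((response:any) => {
            deferred.resolve(response.data.url);
        }, (response:any) => {
            deferred.reject(response);
        });
    return deferred.promise;
}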
private uploadToS3(preSignedUrl:string, imgData:string):ng.IPromise<{}> {
var deferred = this.$q.defer();
// Post image to S3
this.$http({
method: 'PUT',
url: preSignedUrl,
headers: {'Content-Type': 'image/jpeg'},
data: imgData
})
.then((response:any) => {
console.log("Image uploaded to S3" + JSON.stringify(response));
deferred.resolve();
}, (response:any) => {
console.log("Error Presigned URL" + JSON.stringify(response));
deferred.reject(response);
});
return deferred.promise;
}
Related
TL;DR: How do I actually change a request header in http-proxy-middleware?
To get around some CORS errors, I set up a local proxy using the http-proxy-middleware module. In addition to setting the mode of my request to "no-cors", I need to change an additional header, "Content-Type". However, this does not seem to be working. In fact, I cannot even change the response headers on a request redirected through my proxy. For local requests (fetching pages etc.) I am able to change the response headers, but even then I am unable to change the request headers.
This is my setupProxy.js:
const { createProxyMiddleware } = require("http-proxy-middleware");
module.exports = function (app) {
app.use((req, res, next) => {
req.header("Content-Type", "application/json");
res.header("Access-Control-Allow-Origin", "*");
next();
});
function onProxyReq(proxyReq, req, res) {
console.log("test 1");
proxyReq.setHeader("Content-Type", "application/json");
req.header("Content-Type", "application/json");
}
app.use(
"/api",
createProxyMiddleware({
target: "https://my-domain.com/",
changeOrigin: true,
onProxyReg: { onProxyReq },
// secure: true,
// on: {
// proxyReq: requestInterceptor(async (buffer, proxyReq, req, res) => {
// console.log("test 2");
// }),
// },
logger: console,
})
);
};
And this is the code that sends the request:
try {
let requestOptions: RequestInit = {
method: "POST",
mode: "no-cors",
headers: {
accept: "application/json",
"Content-Type": "application/json",
},
body: JSON.stringify({
email: { username },
password: { password },
}),
};
fetch("https://localhost:3000/api/path/to/login/api", requestOptions)
.then(async function (response) {
console.log(response);
if (!response.ok) {
setError("Error code: " + response.status.toString());
}
return response.json();
})
.then(function (response) {
console.log(response);
});
} catch (e) {
console.log(e);
}
I'm getting an error back from the API itself (the CORS avoidance is working):
Content type 'text/plain;charset=UTF-8' not supported
And indeed, when I use the Chrome inspector to look at the request, the request header is set to "text/plain;charset=UTF-8". I tried setting the response header content type to "text/plain" but even that remains untouched. But how can this be after routing the request through my proxy?
EDIT:
Ok so I found out part of the problem. Setting the mode to "no-cors" in my fetch request alters the headers. But this still doesn't explain why my proxy can't edit the request headers. When I remove the "no-cors" mode but copy the headers it produced, the server is giving me error 400 (bad request). This means it is not receiving the same request as before, but this baffles me since I copied all the headers manually.
EDIT2:
Actually, I found out that when I remove mode: "no-cors" and set the "Sec-Fetch-Mode" header to "no-cors" manually, it is still set to "cors" in the actual request!
EDIT3:
I tried sending my request through ReqBin and it works there :)! So at least we know my request is correct.
I found out that changing the "content-type" header in cors mode is simply not allowed. The solution is to first send a preflight request with the options. When this has been accepted, you can send the actual request.
You can send the request through ReqBin; it will take the necessary steps to complete the request successfully. It will even generate code to reproduce the request for you.
var url = "https://thedomain.com/path/to/api";
var xhr = new XMLHttpRequest();
xhr.open("POST", url);
xhr.setRequestHeader("Content-Type", "application/json");
xhr.onreadystatechange = function () {
if (xhr.readyState === 4) {
console.log(xhr.status);
console.log(xhr.responseText);
}
};
var data_ = '{"email": "*************", "password": "******"}';
xhr.send(data_);
And this works! :)
I am attempting to upload publicly viewable photos browser-side to an S3 bucket. I am using a server to authenticate my request and give the browser a signed URL. My PUT statement works and my S3 bucket gets a new object added to it. However, the metadata comes in with "Content-Type":
application/json;charset=UTF-8, even though I am setting it to 'image/png' in my code. Additionally, even if I change the metadata to 'image/png', when I go to the URL to view my image I see a long string of text, which is the data that represents the image, something like ["data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAASQAAAEjCAYAAACb/HxUA...], and it continues for around 26,000 characters.
Why is the page displaying text and not an image?
Here is my browser code for the request:
axios.get('/picture', {params: { filename: `${this.props.user.id}_${Date.now()}`}})
.then(response =>{
var signedUrl = response.data;
var headers= {
'ACL': 'public-read',
'Content-Type': this.state.imageType,
};
return axios.put(signedUrl, this.state.imgSrc, headers);
})
The first axios.get is a request to get the signed url.
Here is the server code that creates the signed URL:
const s3 = new aws.S3({
region: 'us-east-2',
accessKeyId: process.env.ACCESS_KEY_ID,
secretAccessKey: process.env.SECRET_ACCESS_KEY,
});
router.get('/', (req,res)=>{
let params = {
Bucket: 'beerphotos',
Key: req.query.filename,
//Body : req.query.picture[0],
ACL: 'public-read',
ContentType: 'image/png',
Expires: 60,
}
s3.getSignedUrl('putObject', params, function(err, data) {
if (err) {
console.log('Error Getting Signed URL', err);
return err;
} else {
console.log('This is the data', data)
res.send(data);
}
})
})
I have a REST API developed with Play Framework/Java and a front end developed in AngularJS.
I am trying to call a POST method from the Angular client to the server using the following code:
$scope.login = function () {
console.log('login called');
var loginURL = 'http://localhost:9000/login';
var loginInfo = {
'email': $scope.email,
'password': $scope.password
};
$http({
url: loginURL,
method: 'POST',
data: loginInfo,
headers: { 'Content-Type': 'application/json' }
}).then(function (response) {
console.log('SUCCESS: ' + JSON.stringify(response));
$scope.greeting = response.status;
}, function (response) {
console.log('ERROR: ' + JSON.stringify(response));
});
}
This is the code at my server:
public Result doLogin() {
ObjectNode result = Json.newObject();
result.put("status", "success");
return ok(result).withHeader("Access-Control-Allow-Origin", "*");
}
And this is the application conf file:
#allow all hosts.
play.filter.hosts {
allowed = ["."]
}
#allow CORS requests.
play.filters.cors {
allowedOrigins = ["*"]
}
Yet even after enabling CORS, I am getting an error in the console in both Firefox and Google Chrome:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at http://localhost:9000/login. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).
ERROR: {"data":null,"status":-1,"config":{"method":"POST","transformRequest":[null],"transformResponse":[null],"jsonpCallbackParam":"callback","url":"http://localhost:9000/login","data":{"email":"xxx","password":"xxx"},"headers":{"Content-Type":"application/json","Accept":"application/json, text/plain, /"}},"statusText":""}
I do know that the server is sending the correct response and the correct header because when I do the POST from Postman, I can see the response and also the headers containing {"Access-Control-Allow-Origin", "*"} in Postman.
So then, what could be the problem? Is there something I am missing from the Client side?
The difference between a Postman request and a browser request is that the browser sends an OPTIONS request before the actual POST/GET request.
To be able to accept the OPTIONS request, add allowedHttpMethods = ["GET", "POST", "OPTIONS"] to your Play Framework CORS configuration, as sketched below.
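In the application conf that would look roughly like this (a sketch, keeping the allowedOrigins you already have):
play.filters.cors {
  allowedOrigins = ["*"]
  allowedHttpMethods = ["GET", "POST", "OPTIONS"]
}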
For details, follow this link:
Play Framework 2.3 - CORS Headers
This causes a problem when accessing a CORS resource from a framework (like AngularJS): it becomes difficult for the framework to figure out what the OPTIONS request was for and act on it properly.
To fix your problem you will need to analyze how the OPTIONS request goes out, how it is being interpreted, and how to get past it. But in general, I suggest using the built-in fetch for this, which supports promises and so can be chained easily with AngularJS code.
So your code will look something like this:
$scope.login = function () {
console.log('login called');
var loginURL = 'http://localhost:9000/login';
var loginInfo = {
'email': $scope.email,
'password': $scope.password
};
fetch(loginURL, {
method: 'post',
headers: {
"Content-type": "application/json"
},
body: loginInfo
}).then(function (response) {
console.log('SUCCESS: ' + JSON.stringify(response));
$scope.greeting = response.status;
}, function (response) {
console.log('ERROR: ' + JSON.stringify(response));
});
}
I'm trying to PUT a video file to my bucket using a pre-signed URL in Angular 4.
Node:
let s3 = new AWS.S3();
s3.config.update({
accessKeyId: process.env.VIDEO_ACCESS_KEY,
secretAccessKey: process.env.VIDEO_SECRET_KEY
})
let videoId = await Video.createVideo()
let params = {
ACL: "public-read",
Bucket: process.env.BUCKET_NAME,
ContentType: 'video/mp4',
Expires: 100,
Key: req.jwt.username+"/"+videoId,
}
return s3.getSignedUrl('putObject', params, function (err, url) {
if(!err) {
console.log(url);
res.status(200);
res.json({
url: url,
reference: `${process.env.BUCKET_NAME}/${req.jwt.username}/${videoId}`,
acl: params.ACL,
bucket: params.Bucket,
key: params.Key,
contentType: params.ContentType,
});
} else {
console.log(err);
res.status(400);
res.json({
message: "Something went wrong"
})
}
});
This successfully generates a URL for me, and I try to use it in my POST request on the front end.
Angular:
this.auth.fileUpload().subscribe((result) => {
console.log(result["key"], result["acl"], result["bucket"], result["contentType"])
if(!result["message"]) {
let formData = new FormData();
formData.append('file', file.files[0]);
const httpOptions = {
headers: new HttpHeaders({
"Key": result["key"],
"ACL": result["acl"],
"Bucket": result["bucket"],
"Content-Type": result["contentType"],
})
};
this.http.post(result["url"], formData, httpOptions ).subscribe((response) => {
console.log("response");
console.log(response);
let reference = `https://s3.amazonaws.com/${result["reference"]}`
this.auth.makeVideo(result["reference"]).subscribe((result) => {
console.log(result);
});
}, (error) => {
console.log("error");
console.log(error);
})
But this generates an error.
SignatureDoesNotMatch
The request signature we calculated does not match the signature you provided. Check your key and signing method
Here's the URL that I generate
https://MY_BUCKET_HERE.s3.amazonaws.com/admin/87f314f1-9f2e-462e-84ff-25cba958ac50?AWSAccessKeyId=MY_ACCESS_KEY_HERE&Content-Type=video%2Fmp4&Expires=1520368428&Signature=Ks0wfzGyXmBTiAxGkHNgcYblpX8%3D&x-amz-acl=public-read
I'm pretty sure I'm just making a simple mistake, but I can't figure it out for the life of me. Do I need to do something with my headers? Do I need to change the way I read the file for the post? I've gotten it to work with a public bucket with FormData and a simple post request with no headers, but now that I'm working with Policies and a private bucket, my understanding is much less. What am I doing wrong?
If you generate a pre-signed URL for PutObject then you should use the HTTP PUT method to upload your file to that pre-signed URL. The POST method won't work (it's designed for browser uploads).
Also, don't supply HTTP headers when you invoke the PUT. They should be supplied when generating the pre-signed URL, but not when using the pre-signed URL.
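A minimal sketch of the corrected client call, assuming the same result object returned by the server above and that the browser infers the file's video/mp4 type from the File itself:
// PUT the raw File to the pre-signed URL: no FormData, no custom headers
this.http.put(result["url"], file.files[0], { responseType: 'text' })
  .subscribe((response) => {
    console.log("uploaded", response);
    this.auth.makeVideo(result["reference"]).subscribe((r) => console.log(r));
  }, (error) => {
    console.log("upload error", error);
  });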
I have an AngularJS app that connects to a server via an API, and I'm using token authentication. When I use Postman to get the token, it works perfectly, but when I use AngularJS with the same headers and parameters I get error 400.
When I checked both requests with Fiddler, I found that the request from AngularJS is missing the Access-Control-Allow-Origin: * header.
How can I fix this?
Here is the service used to get the token:
AuthenticationApi.Login = function (loginData) {
//POST's Object
var data = "grant_type=password&username=" + loginData.userName + "&password=" + loginData.password;
var deferred = $q.defer();
//the data will be sent the data as string not JSON object.
$http.post('http://localhost:53194/Token', data, { headers: { 'Content-Type': 'application/x-www-form-urlencoded' } })
.then(function (response) {
console.log(response);
localStorageService.set('authorizationData',
{
token: response.access_token,
userName: loginData.userName
});
Authentication.isAuth = true;
Authentication.userName = loginData.userName;
console.log(Authentication);
deferred.resolve(response);
},
function (err, status) {
logout();
deferred.reject(err);
});
return deferred.promise;
};
For the API server, I've enabled CORS:
public void Configuration(IAppBuilder app)
{
ConfigureOAuth(app);
HttpConfiguration config = new HttpConfiguration();
WebApiConfig.Register(config);
app.UseCors(Microsoft.Owin.Cors.CorsOptions.AllowAll);
app.UseWebApi(config);
}
I found the problem and fixed it.
In the API server, I have this code:
var cors = new EnableCorsAttribute("*", "*", "*");
cors.PreflightMaxAge = 60;
config.EnableCors(cors);
The problem was in the PreflightMaxAge; I just commented it out... and it worked!
If the problem is still not solved, try IE or Firefox; don't use Chrome, because it is not CORS enabled.