I am trying to create a React app where a user can upload a file to an S3 bucket.
I have a React component that successfully gets the file from the user, and a button that calls a function where I send the file to S3.
I am using react-aws-s3 because this is the only direct S3 functionality I need, so I didn't want to install the whole aws-sdk package and bloat my application.
Now I have followed a few different blogs/instructions on how to do this (like this one) but haven't been able to get the file to upload.
My upload code looks like this (and I will be moving the access keys to env variables):
const S3_BUCKET = ...
const REGION = ...
const ACCESS_KEY = ...
const SECRET_ACCESS_KEY = ...
const config = {
  bucketName: S3_BUCKET,
  region: REGION,
  accessKeyId: ACCESS_KEY,
  secretAccessKey: SECRET_ACCESS_KEY,
};

const ReactS3Client = new S3(config);
// the name of the file uploaded is used to upload it to S3
ReactS3Client.uploadFile(datasheet, datasheet.name)
  .then((data) => console.log(data.location))
  .catch((err) => console.error(err));
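As an aside on the env-variable point: a minimal sketch of how the keys could be read from environment variables instead, assuming a Create React App setup (the variable names below are placeholders). Note that anything bundled into the client is still visible in the browser, so this keeps the keys out of the repo, not out of the client:

// .env (not committed to source control):
// REACT_APP_S3_BUCKET=..., REACT_APP_S3_REGION=..., etc.
const config = {
  bucketName: process.env.REACT_APP_S3_BUCKET,
  region: process.env.REACT_APP_S3_REGION,
  accessKeyId: process.env.REACT_APP_S3_ACCESS_KEY,
  secretAccessKey: process.env.REACT_APP_S3_SECRET_ACCESS_KEY,
};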
I have enabled public access, added the bucket policy to this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicListGet",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:List*",
        "s3:Get*"
      ],
      "Resource": [
        "arn:aws:s3::: my bucket name",
        "arn:aws:s3::: my bucket name/*"
      ]
    }
  ]
}
and the cors policy to this:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "HEAD",
      "GET",
      "PUT",
      "POST",
      "DELETE"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [
      "x-amz-server-side-encryption",
      "x-amz-request-id",
      "x-amz-id-2",
      "ETag"
    ]
  }
]
But when I try and upload the file I get a 400 bad request response.
body: (...)
bodyUsed: false
headers: Headers {}
ok: false
redirected: false
status: 400
statusText: "Bad Request"
type: "cors"
url: ...
[[Prototype]]: Response
It says type: "cors", but I have CORS fully enabled, right? What am I missing here?
When I followed the tutorial that you pasted, I managed to upload the file to S3. It works with your CORS policy as well.
Here are a couple of things to check:
Check the network tab and the actual response from S3. This will give you more information on the actual problem you are having.
Your bucket policy allows only List and Get, but in the tutorial it is s3:*, so your user must have permission to upload files to this bucket (see the example policy after the snippet below).
Double-check the upload logic; in the demo it looks like this:
const handleFileInput = (e) => {
  setSelectedFile(e.target.files[0]);
}

const uploadFile = async (file) => {
  const ReactS3Client = new S3(config);
  // the name of the file uploaded is used to upload it to S3
  ReactS3Client
    .uploadFile(file, file.name)
    .then(data => console.log(data.location))
    .catch(err => console.error(err))
}

return <div>
  <div>React S3 File Upload</div>
  <input type="file" onChange={handleFileInput}/>
  <br></br>
  <button onClick={() => uploadFile(selectedFile)}> Upload to S3</button>
</div>
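As an illustration only, a minimal sketch of a policy statement granting upload permission (the bucket name is a placeholder, not from the question). Attached to the IAM user whose access keys you are using it needs no Principal; if you add it to the bucket policy instead, include a Principal as in your existing policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowUpload",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}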
Related
I am trying to upload a file to S3 using the AWS SDK in a React application. However, I bumped into a CORS error, and even after configuring the CORS policy for my bucket the error still persists.
My CORS policy for the bucket is as follows:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "PUT",
      "POST",
      "DELETE"
    ],
    "AllowedOrigins": [
      "http://localhost:3000"
    ],
    "ExposeHeaders": []
  }
]
And this is my code to upload my blob:
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3({
  accessKeyId: "access key id",
  secretAccessKey: "secret access key",
});

export const uploadToS3 = (fileContent: Blob, fileName: string, bucket: string) => {
  console.log('attempting to upload to s3')
  const params = {
    Bucket: bucket,
    Key: fileName,
    Body: fileContent
  }
  s3.upload(params, function (err: any, data: any) {
    if (err) {
      console.log(err);
    }
    if (data) {
      console.log(data);
    }
  });
}
And this is the console output.
Access to XMLHttpRequest at 'https://.s3.amazonaws.com/testing.png' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.
I wasn't exactly sure what happened, but I deleted the bucket and created an identical one (as far as I can tell), and now I get a different error message, saying
The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'ap-southeast-2'
Hence I had to include the region in my S3 instantiation.
const s3 = new AWS.S3({
  accessKeyId: "access key id",
  secretAccessKey: "secret access key",
  region: "bucket region" // <-------------- was missing
});
Now I can put objects in my bucket as intended. I will try to recreate the error and see what the difference was between my first and second bucket.
I am developing a React frontend with an Amplify backend. In the frontend I am uploading an avatar image to an S3 bucket using Storage.put and storing the signed URL on the "user" object. The Amplify storage is set up with read/write/delete permissions for the owner and read permissions for guest users. The files are uploaded successfully into the S3 bucket under the public folder.
Now the weird thing: everything works for the first 10-20 minutes. Then it suddenly throws 403 Forbidden when accessing the image in my frontend.
Any ideas how to fix this?
I tried updating the bucket policy with the following setup:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}
Frontend code for uploading image to s3:
async function uploadImage(
  uri?: File,
  filename?: string
): Promise<string | null> {
  if (uri === null || !filename) return null;
  try {
    const result = await Storage.put(filename, uri, {
      level: "public",
      contentType: "image/*",
    });
    console.log("STORAGE PUT SUCCESS: ", result.key);
    const signedURL = await Storage.get(result.key); // get key from Storage.list
    console.log("STORAGE GET SUCCESS: ", signedURL);
    return signedURL;
  } catch (err) {
    console.log("STORAGE PUT ERROR: ", err);
    return null;
  }
}
I'm using axios to upload an audio file to an AWS S3 bucket.
The workflow is: React => AWS API Gateway => Lambda.
Here is the backend Lambda code that generates the S3 presigned URL:
PutObjectRequest putObjectRequest = PutObjectRequest.builder()
        .bucket(AUDIO_S3_BUCKET)
        .key(objectKey)
        .contentType("audio/mpeg")
        .build();

PutObjectPresignRequest putObjectPresignRequest = PutObjectPresignRequest.builder()
        .signatureDuration(Duration.ofMinutes(10))
        .putObjectRequest(putObjectRequest)
        .build();

PresignedPutObjectRequest presignedPutObjectRequest = s3Presigner.presignPutObject(putObjectPresignRequest);

AwsProxyResponse awsProxyResponse = new AwsProxyResponse();
awsProxyResponse.setStatusCode(HttpStatus.SC_OK);
awsProxyResponse.setBody(
        GetS3PresignedUrlResponse.builder()
                .s3PresignedUrl(presignedPutObjectRequest.url().toString())
                .build().toString());
return awsProxyResponse;
Here is the Java code that sets the bucket's CORS configuration:
private void setBucketCorsSettings(@NonNull final String bucketName) {
    s3Client.putBucketCors(PutBucketCorsRequest.builder()
            .bucket(bucketName)
            .corsConfiguration(CORSConfiguration.builder()
                    .corsRules(CORSRule.builder()
                            .allowedHeaders("*")
                            .allowedMethods("GET", "PUT", "POST")
                            .allowedOrigins("*") // TODO: Replace with domain name
                            .exposeHeaders("ETag")
                            .maxAgeSeconds(3600)
                            .build())
                    .build())
            .build());
    log.info("Set bucket CORS settings successfully for bucketName={}.", bucketName);
}
In my frontend, here is the part that tries to upload the file:
const uploadFile = (s3PresignedUrl: string, file: File) => {
  let formData = new FormData();
  formData.append("file", file);
  formData.append('Content-Type', file.type);
  const config = {
    headers: {
      "Content-Type": 'multipart/form-data; boundary=---daba-boundary---'
      //"Content-Type": file.type,
    },
    onUploadProgress: (progressEvent: { loaded: any; total: any; }) => {
      const { loaded, total } = progressEvent;
      let percent = Math.floor((loaded * 100) / total);
      if (percent < 100) {
        setUploadPercentage(percent);
      }
    },
    cancelToken: new axios.CancelToken(
      cancel => (cancelFileUpload.current = cancel)
    )
  };
  axios(
    {
      method: 'post',
      url: s3PresignedUrl,
      data: formData,
      headers: {
        "Content-Type": 'multipart/form-data; boundary=---daba-boundary---'
      }
    }
  )
    .then(res => {
      console.log(res);
      setUploadPercentage(100);
      setTimeout(() => {
        setUploadPercentage(0);
      }, 1000);
    })
    .catch(err => {
      console.log(err);
      if (axios.isCancel(err)) {
        alert(err.message);
      }
      setUploadPercentage(0);
    });
};
However, when I try to upload the file, it returns a 403 error.
If I use fetch instead of axios, it works, like this:
export async function putToS3(presignedUrl: string, fileObject: any) {
  const requestOptions = {
    method: "PUT",
    headers: {
      "Content-Type": fileObject.type,
    },
    body: fileObject,
  };
  //console.log(presignedUrl);
  const response = await fetch(presignedUrl, requestOptions);
  //console.log(response);
  return await response;
}

putToS3(getPresignedUrlResponse['s3PresignedUrl'], values.selectdFile).then(
  (putToS3Response) => {
    console.log(putToS3Response);
    Toast("Success!!", "File has been uploaded.", "success");
  }
);
It seems to me that the only difference between these two is that: when using fetch the request's Content-Type header is Content-Type: audio/mpeg, but when using axios it is Content-Type: multipart/form-data; boundary=----WebKitFormBoundaryClLJS3r5Xetv3rN7 .
How can I make it work with axios? I'm switching to axios for its ability to monitor request progress as I want to show an upload progress bar.
I followed this blog and not sure what I missed: https://bobbyhadz.com/blog/aws-s3-presigned-url-react
You are using POST in your axios. Should be PUT instead.
Also, I think the content type has to match the one specified when requesting the pre-signed URL, which is audio/mpeg as you rightly pointed out.
Correspondingly, your data should be just file, instead of formData.
axios(
  {
    method: 'put',
    url: s3PresignedUrl,
    data: file,
    headers: {
      "Content-Type": 'audio/mpeg'
    }
  }
)
...
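Putting those pieces together with the progress tracking from the question, a minimal sketch (assuming the same s3PresignedUrl, file and setUploadPercentage as above) could be:

const uploadWithProgress = (s3PresignedUrl, file) => {
  return axios.put(s3PresignedUrl, file, {
    headers: {
      "Content-Type": "audio/mpeg" // must match the type the URL was signed with
    },
    onUploadProgress: (progressEvent) => {
      const percent = Math.floor((progressEvent.loaded * 100) / progressEvent.total);
      setUploadPercentage(percent);
    }
  });
};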
You didn't mark any answers as accepted so I guess you didn't solve it.
For any future viewers out there: the reason you are getting the 403 Forbidden error is that the Content-Type on your server and client side do not match. I'm assuming you set up the AWS policies correctly.
Your code in the backend should look like this:
const presignedPUTURL = s3.getSignedUrl("putObject", {
  Bucket: "bucket-name",
  Key: String(Date.now()),
  Expires: 100,
  ContentType: "image/png", // important
});
and in the front-end (assuming you are using axios):
const file = e.target.files[0]
const result = await axios.put(url, file, {
  withCredentials: true,
  headers: { "Content-Type": "image/png" },
});
In practice, you would normally send the file type in the request body when asking for the pre-signed URL, and then on the axios side use file.type to get the type of the uploaded file.
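A minimal sketch of that flow (the /presigned-url endpoint and the response shape are made-up placeholders for illustration):

// inside an async change/submit handler
const file = e.target.files[0];

// 1. ask the backend to sign a URL for this exact content type
const { data } = await axios.post("/presigned-url", { contentType: file.type });

// 2. upload with the same content type the URL was signed for
await axios.put(data.presignedPUTURL, file, {
  headers: { "Content-Type": file.type },
});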
Check your Lambda execution role. It may be the culprit. Perhaps it does not grant enough permissions to allow PUTting files into your bucket.
URL signing is a delegation of power on behalf of the signer, restricted to a specified object and action... Signing does not magically grant full read/write permissions on S3, even on the specific object related to the presigned URL.
The "user" who generates the signature requires sufficient permissions to allow the actions you want to delegate through that presigned URL. In this case, this is the execution role of your Lambda function.
You can add the AmazonS3FullAccess managed policy to the execution role and see if it solves your situation. This change took me out of a blocked situation after days of struggle. Afterwards, before going to production, restrict that policy to the specific bucket you want to allow uploads into (least privilege principle).
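For instance, a minimal sketch of a scoped statement for the execution role could look like this (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
    }
  ]
}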
If you develop using SAM local emulation, those execution roles seem not to be taken into account as long as you run your functions locally; the signed links work in that context even without S3 permissions.
I'm working on an AWS S3 photo upload from a React client and I'm experiencing the following error:
TypeError: Cannot read property 'byteLength' of undefined
I'm assuming there's a flaw in the upload object, but I believe there might be something wrong with the S3/Cognito configuration, because I receive the same error when I invoke s3.listObjects. I'm following these docs - https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album-full.html
Any thoughts?
uploadPhoto() {
  const files = document.getElementById("photoUpload").files;
  if (!files.length) {
    return alert("Please choose a file to upload first.");
  }
  const file = files[0];
  const fileName = file.name;
  const albumPhotosKey = encodeURIComponent('screenshots') + "/";
  const photoKey = albumPhotosKey + fileName;

  // Use S3 ManagedUpload class as it supports multipart uploads
  const upload = new AWS.S3.ManagedUpload({
    params: {
      Bucket: <Bucket Name>,
      Key: fileName,
      Body: file
    }
  });

  const promise = upload.promise();
  promise.then(
    function(data) {
      alert("Successfully uploaded photo.");
      console.log('UPLOAD: ', data)
    },
    function(err) {
      console.log('ERROR: ', err)
      // return alert("There was an error uploading your photo: ", err.message);
    }
  );
}
I got this error in a React Native app. I was able to fix it by turning off my Dev Tools network inspector.
I've run the example as described in the docs and it works, so I can't reproduce your error. Here are a couple of things to check:
Try passing your access key and secret into your config object (I don't recommend this for security reasons, but it might indicate where the issue is):
AWSService.config.update({
  region: region,
  accessKeyId: accessKeyId,
  secretAccessKey: secretAccessKey,
  credentials: new AWSService.CognitoIdentityCredentials({
    IdentityPoolId: identityPoolId
  })
});
Confirm that your Cognito Identity Pool ID has an IAM policy attached with the following:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:DeleteObject",
"s3:GetObject",
"s3:ListBucket",
"s3:PutObject"
],
"Resource": [
"arn:aws:s3:::BUCKET_NAME",
"arn:aws:s3:::BUCKET_NAME/*"
]
}
]
}
Just in case: check that your credentials are configured and passed correctly, and that you have the privileges for the resource in IAM.
I had the error above because I forgot to pass the access credentials. It went away once I put them in.
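A minimal sketch of what "passing the credentials" can look like with the v2 SDK (the env variable names and region here are placeholders; in a real app the values would come from Cognito or your environment configuration rather than being hard-coded):

import * as AWS from 'aws-sdk';

const s3 = new AWS.S3({
  region: 'us-east-1', // your bucket's region
  accessKeyId: process.env.REACT_APP_ACCESS_KEY_ID,
  secretAccessKey: process.env.REACT_APP_SECRET_ACCESS_KEY,
});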
I have a Lambda function that handles reading data from a file (stored in an S3 bucket) as well as inserting data into a DynamoDB table. This Lambda function is exposed as a REST endpoint using API Gateway. The function accepts GET requests as well as POST requests. I'm making the GET/POST requests from my React project using the axios and aws4 (for signing) libraries. The GET request reads data from a file stored in S3 and works just fine. The POST request inserts data into the DynamoDB table. However, it doesn't work, and AWS returns an InvalidSignatureException error as the response. This is an excerpt of my code:
createAWSSignedRequest(postData) {
  let request = {};
  if (postData) {
    request = {
      host: process.env.AWS_HOST,
      method: 'POST',
      url: process.env.AWS_URL,
      path: process.env.AWS_PATH,
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(postData)
    }
  } else {
    request = {
      host: process.env.AWS_HOST,
      method: 'GET',
      url: process.env.AWS_URL,
      path: process.env.AWS_PATH
    }
  }
  let signedRequest = aws4.sign(request, {
    secretAccessKey: process.env.AWS_SECRET_KEY,
    accessKeyId: process.env.AWS_ACCESS_KEY
  });
  return signedRequest;
}
This is how the GET request is made:
let signedRequest = this.createAWSSignedRequest('GET');
axios(signedRequest)
  .then(response => {
  })
  .catch((error) => {
    console.log("error", error);
  });
This is how the POST request is made:
const data = {
  uuid: "916b7d90-0137-11e8-94e6-116965754e23", // just a mock value
  date: "22/jan/2018",
  user_response: [
    {
      question: "this is quesiton1",
      choice: "user selected A"
    },
    {
      question: "this is quesiton2",
      choice: "user selected b"
    },
    {
      question: "this is quesiton3",
      choice: "user selected C"
    }
  ]
};

let signedRequest = this.createAWSSignedRequest(data);
axios(signedRequest)
  .then(response => {
    ......
  })
  .catch((error) => {
    console.log("error", error);
  });
As you can see, the code for the GET and POST requests is exactly the same (except for the payload and method type). I'm signing with the same secret access key and access key ID for both requests. I'm not sure why one request results in "InvalidSignatureException" when the other doesn't. Can anyone shed some light on this issue for me?
Thanks
After having a discussion with the aws4 lib developer, I figured out what I did wrong. aws4 uses the "body" attribute of the request as the payload when computing the signature. However, axios uses the "data" attribute as the payload. My mistake was setting only one of them. When I set just "data", the payload was present in the request and the content-length was computed correctly, but the signature was incorrect since the payload was not taken into account when computing it. When I set just "body", the payload was not present in the request at all, because axios does not use the "body" attribute for the payload. The solution is to set both attributes to the payload. I hope this helps anyone who is having the same issue I had.
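A minimal sketch of that fix, based on the createAWSSignedRequest above (the env variable names are the ones from the question):

const payload = JSON.stringify(postData);
const request = {
  host: process.env.AWS_HOST,
  method: 'POST',
  url: process.env.AWS_URL,
  path: process.env.AWS_PATH,
  headers: { 'Content-Type': 'application/json' },
  body: payload // aws4 reads "body" when computing the signature
};
let signedRequest = aws4.sign(request, {
  secretAccessKey: process.env.AWS_SECRET_KEY,
  accessKeyId: process.env.AWS_ACCESS_KEY
});
signedRequest.data = payload; // axios sends "data" as the request payload
axios(signedRequest)
  .then(response => { /* ... */ })
  .catch(error => console.log("error", error));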
If you use the AWS Amplify library, it has a module called API that should fit your use case, and it will perform SigV4 signing for you with either authenticated or unauthenticated roles. The Auth category uses Cognito as the default implementation. For instance:
npm install aws-amplify --save
Then import and configure the lib:
import Amplify, { API } from 'aws-amplify';
Amplify.configure({
  Auth: {
    identityPoolId: 'XX-XXXX-X:XXXXXXXX-XXXX-1234-abcd-1234567890ab',
    region: 'XX-XXXX-X'
  },
  API: {
    endpoints: [
      {
        name: "APIName",
        endpoint: "https://invokeURI.amazonaws.com"
      }
    ]
  }
});
Then for your API Gateway endpoint calling a Lambda:
let apiName = 'MyApiName';
let path = '/path';
let options = {
  headers: {...} // OPTIONAL
}

API.get(apiName, path, options).then(response => {
  // Add your code here
});
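Since the failing request in your case was the POST that writes to DynamoDB, the equivalent signed call would use API.post with the payload in the init object's body. A sketch only; the path and payload here are placeholders:

let postOptions = {
  body: {
    uuid: "916b7d90-0137-11e8-94e6-116965754e23",
    user_response: [ /* ... */ ]
  }
};

API.post(apiName, path, postOptions).then(response => {
  // SigV4 signing is handled for you
});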
More info here: https://github.com/aws/aws-amplify