CORS while uploading to S3 in React-Vite application

I am trying to upload a file to S3 using the AWS SDK in a React application. However, I ran into a CORS error, and even after configuring the CORS policy for my bucket the error still persists.
My CORS policy for the bucket is as follows:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "http://localhost:3000"
        ],
        "ExposeHeaders": []
    }
]
And this is my code to upload my blob:
import * as AWS from 'aws-sdk';

const s3 = new AWS.S3({
  accessKeyId: "access key id",
  secretAccessKey: "secret access key",
});

export const uploadToS3 = (fileContent: Blob, fileName: string, bucket: string) => {
  console.log('attempting to upload to s3');
  const params = {
    Bucket: bucket,
    Key: fileName,
    Body: fileContent
  };
  s3.upload(params, function (err: any, data: any) {
    if (err) {
      console.log(err);
    }
    if (data) {
      console.log(data);
    }
  });
}
And this is the console output.
Access to XMLHttpRequest at 'https://.s3.amazonaws.com/testing.png' from origin 'http://localhost:3000' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource.

I'm not exactly sure what happened, but I deleted the bucket and created an identical one (as far as I can tell), and now I am getting a different error message, saying
The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'ap-southeast-2'
Hence I had to include the region when instantiating the S3 client.
const s3 = new AWS.S3({
  accessKeyId: "access key id",
  secretAccessKey: "secret access key",
  region: "bucket region" // <-------------- was missing
});
Now I can put objects in my bucket as intended. I will try to recreate the error and see what the difference was between my first and second bucket.
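For completeness, here is a rough sketch of how the uploadToS3 helper above might be wired to a file input. The handler name and bucket name are illustrative placeholders, not part of the original post:

// Sketch: calling uploadToS3 from a file input change handler
// (the bucket name below is a placeholder).
const handleFileChange = (event: React.ChangeEvent<HTMLInputElement>) => {
  const file = event.target.files?.[0];
  if (!file) return;
  // A File is a Blob, so it can be passed straight to uploadToS3.
  uploadToS3(file, file.name, "my-example-bucket");
};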

Related

File/image stored in S3 bucket throws 403 Forbidden after 10-20 min when fetching the image again

I am developing a React frontend with an Amplify backend. In the frontend I upload an avatar image to an S3 bucket using Storage.put and store the signed URL on the "user" object. Amplify Storage is set up with read/write/delete for the owner and read permission for guest users. The files are uploaded successfully into the S3 bucket under the public folder.
Now the weird thing: everything works for the first 10-20 minutes. Then it suddenly throws 403 Forbidden when accessing the image in my frontend.
Any ideas how to fix this?
I tried updating the bucket policy with the following setup:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::YOUR-BUCKET-NAME/*"
        }
    ]
}
Frontend code for uploading the image to S3:
async function uploadImage(
  uri?: File,
  filename?: string
): Promise<string | null> {
  if (!uri || !filename) return null;
  try {
    const result = await Storage.put(filename, uri, {
      level: "public",
      contentType: "image/*",
    });
    console.log("STORAGE PUT SUCCESS: ", result.key);
    const signedURL = await Storage.get(result.key); // get key from Storage.list
    console.log("STORAGE GET SUCCESS: ", signedURL);
    return signedURL;
  } catch (err) {
    console.log("STORAGE PUT ERROR: ", err);
    return null;
  }
}
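One possibility worth ruling out, and this is an assumption on my part rather than something confirmed in the question: the pre-signed URL returned by Storage.get expires (by default after roughly 15 minutes), which would match the 10-20 minute window. A sketch of storing only the S3 key and re-signing on demand:

// Sketch (assumption): persist the key on the user object instead of the signed URL,
// and generate a fresh signed URL whenever the avatar is rendered.
import { Storage } from "aws-amplify";

async function getAvatarUrl(key: string): Promise<string> {
  // "expires" is in seconds; 3600 is an illustrative value.
  const url = await Storage.get(key, { level: "public", expires: 3600 });
  return url as string;
}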

I always get a CORS error when sending a DELETE request to a GCS resumable signed URL

I've tried to use a "resumable signed URL" when uploading a file.
This is the CORS config on the bucket:
[
    {
        "maxAgeSeconds": 3600,
        "method": ["GET", "PATCH", "DELETE", "OPTIONS", "POST", "PUT"],
        "origin": ["*"],
        "responseHeader": ["*"]
    }
]
Backend (BE):
const options: GetSignedUrlConfig = {
  version: 'v4',
  action: 'resumable',
  expires: Date.now() + 60 * 60 * 1000, // 60 minutes
  contentType: 'text/plain',
};
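For context, these options would typically be passed to getSignedUrl on a bucket file; a sketch of what that could look like with @google-cloud/storage (the bucket and object names are placeholders, not taken from the question):

// Sketch: generating the resumable signed URL on the backend.
import { Storage, GetSignedUrlConfig } from '@google-cloud/storage';

const storage = new Storage();

async function createResumableSignedUrl(): Promise<string> {
  const options: GetSignedUrlConfig = {
    version: 'v4',
    action: 'resumable',
    expires: Date.now() + 60 * 60 * 1000, // 60 minutes
    contentType: 'text/plain',
  };
  const [url] = await storage
    .bucket('my-bucket')    // placeholder bucket name
    .file('my-object')      // placeholder object name
    .getSignedUrl(options);
  return url;
}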
Frontend (FE):
const res1 = await axios.post(
  signedUrl.url,
  {},
  {
    headers: {
      'Access-Control-Allow-Origin': '*',
      'Content-Type': 'text/plain',
      'x-goog-resumable': 'start',
    },
  }
);
const res2 = await axios.put(res1.headers.location, file);
Uploading the file works successfully.
The problem is when I send a 'DELETE' request to that URL while the upload is in progress. The code below is what I used.
Google Docs Link
await axios.delete(resumable_signed_url_session_uri, {
  headers: {
    'Content-Length': '0'
  }
});
What I expected to happen is that the upload would be cancelled, but instead I got a CORS error. To be more specific, in the Console panel of Chrome, I saw an error like this:
'xhr.js?78a4:193 Refused to set unsafe header "content-length"'.
After this error, I got a CORS error:
Access to XMLHttpRequest at 'https://storage.googleapis.com/%5Bbucket%5D/test/test.mp3? ... &upload_id=ADPycdvPhSounQyXYnfOeyiXu-reeZf2j2ghdrXzHcUkSNzoFmmTa3k8Mutis_hhXJjEiMbP6TtzSbjuXzXSClvHUrqdNUlvCJiy' from origin 'http://localhost:3001' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
Where is it wrong? Any help would be appreciated.
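One thing the error message itself points at, offered here as an observation rather than a confirmed fix: browsers treat Content-Length as a forbidden request header that scripts may not set, and the refused header is what precedes the CORS failure. A sketch of sending the cancel request without it:

// Sketch (assumption): omit the Content-Length header and let the browser
// set it automatically for the empty DELETE body.
await axios.delete(resumable_signed_url_session_uri);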

React upload CSV file to S3 using react-aws-s3

I am trying to create a React app where a user can upload a file to an S3 bucket.
I have a React component that can successfully get the file from the user, and a button that calls a function where I send the file to S3.
I am using react-aws-s3 because this is the only direct S3 functionality I need, so I didn't want to install the whole aws-sdk package and bloat my application.
Now, I have followed a few different blogs/instructions on how to do this (like this one) but haven't been able to get the file to upload.
My upload code looks like this (I will be moving the access keys to env variables):
const S3_BUCKET = ...
const REGION = ...
const ACCESS_KEY = ...
const SECRET_ACCESS_KEY = ...

const config = {
  bucketName: S3_BUCKET,
  region: REGION,
  accessKeyId: ACCESS_KEY,
  secretAccessKey: SECRET_ACCESS_KEY,
};

const ReactS3Client = new S3(config);

// the name of the file uploaded is used to upload it to S3
ReactS3Client.uploadFile(datasheet, datasheet.name)
  .then((data) => console.log(data.location))
  .catch((err) => console.error(err));
I have enabled public access and set the bucket policy to this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicListGet",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:List*",
                "s3:Get*"
            ],
            "Resource": [
                "arn:aws:s3:::my bucket name",
                "arn:aws:s3:::my bucket name/*"
            ]
        }
    ]
}
and the CORS policy to this:
[
    {
        "AllowedHeaders": [
            "*"
        ],
        "AllowedMethods": [
            "HEAD",
            "GET",
            "PUT",
            "POST",
            "DELETE"
        ],
        "AllowedOrigins": [
            "*"
        ],
        "ExposeHeaders": [
            "x-amz-server-side-encryption",
            "x-amz-request-id",
            "x-amz-id-2",
            "ETag"
        ]
    }
]
But when I try to upload the file I get a 400 Bad Request response.
body: (...)
bodyUsed: false
headers: Headers {}
ok: false
redirected: false
status: 400
statusText: "Bad Request"
type: "cors"
url: ...
[[Prototype]]: Response
It says type: "cors", but I have CORS fully enabled, right? What am I missing here?
When I followed the tutorial that you pasted, I managed to upload the file to S3. It works with your CORS policy as well.
Here are a couple of things to check:
Check the network tab and the actual response from S3. This will give you more information on the actual problem that you are having.
Your bucket policy allows only List and Get, but in the tutorial it's s3:*, so your user must have permission to upload files to this bucket (see the policy sketch after the demo code below).
Double-check the upload logic; in the demo it's like this:
const handleFileInput = (e) => {
  setSelectedFile(e.target.files[0]);
}

const uploadFile = async (file) => {
  const ReactS3Client = new S3(config);
  // the name of the file uploaded is used to upload it to S3
  ReactS3Client
    .uploadFile(file, file.name)
    .then(data => console.log(data.location))
    .catch(err => console.error(err))
}

return <div>
  <div>React S3 File Upload</div>
  <input type="file" onChange={handleFileInput}/>
  <br></br>
  <button onClick={() => uploadFile(selectedFile)}> Upload to S3</button>
</div>
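As a reference point for the permission check above, a minimal IAM policy for the user whose keys are in config could look something like this (a sketch only; the bucket name is a placeholder):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-bucket-name/*"
        }
    ]
}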

TypeError: Cannot read property 'byteLength' of undefined - AWS S3 Upload

I'm working on an AWS S3 photo upload from a React client and I'm experiencing the following error:
TypeError: Cannot read property 'byteLength' of undefined
I'm assuming there's a flaw in the upload object, but I believe there might be something wrong with the S3/Cognito configuration, because I receive the same error when I invoke s3.listObjects. I'm following these docs - https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/s3-example-photo-album-full.html
Any thoughts?
uploadPhoto() {
  const files = document.getElementById("photoUpload").files;
  if (!files.length) {
    return alert("Please choose a file to upload first.");
  }
  const file = files[0];
  const fileName = file.name;
  const albumPhotosKey = encodeURIComponent('screenshots') + "/";
  const photoKey = albumPhotosKey + fileName;

  // Use S3 ManagedUpload class as it supports multipart uploads
  const upload = new AWS.S3.ManagedUpload({
    params: {
      Bucket: <Bucket Name>,
      Key: fileName,
      Body: file
    }
  });

  const promise = upload.promise();
  promise.then(
    function(data) {
      alert("Successfully uploaded photo.");
      console.log('UPLOAD: ', data)
    },
    function(err) {
      console.log('ERROR: ', err)
      // return alert("There was an error uploading your photo: ", err.message);
    }
  );
}
I got this error in a React Native app. I was able to fix it by turning off my Dev Tools network inspector.
I've run the example as described in the docs and it works, so I can't reproduce your error. Here are a couple of things to check:
Try passing your access key and secret into your config object (although I don't recommend this for security reasons, it might indicate where the issue is):
AWSService.config.update({
  region: region,
  accessKeyId: accessKeyId,
  secretAccessKey: secretAccessKey,
  credentials: new AWSService.CognitoIdentityCredentials({
    IdentityPoolId: identityPoolId
  })
});
Confirm that your Cognito Identity Pool ID has an IAM policy attached with the following:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:DeleteObject",
                "s3:GetObject",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::BUCKET_NAME",
                "arn:aws:s3:::BUCKET_NAME/*"
            ]
        }
    ]
}
Just in case: check that your credentials are configured and passed correctly, and that you have the privileges for the resource in IAM.
I had the error above because I forgot to pass the access credentials. It went away once I put them in.
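For reference, a minimal sketch of the kind of configuration being described, using the AWS SDK for JavaScript v2 (the region and identity pool ID below are placeholders, not from the original post):

// Sketch: configure the region and Cognito credentials before making any S3 calls.
import * as AWS from 'aws-sdk';

AWS.config.update({
  region: 'us-east-1', // placeholder region
  credentials: new AWS.CognitoIdentityCredentials({
    IdentityPoolId: 'us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx', // placeholder
  }),
});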

InvalidSignatureException from POST request

I have a Lambda function that handles reading data from a file (stored inside an S3 bucket) as well as inserting data into a DynamoDB table. This Lambda function is exposed as a REST endpoint using API Gateway. The function accepts GET requests as well as POST requests. I'm making GET/POST requests from my React project using the axios and aws4 (for signing) libraries. The GET request reads data from a file stored in S3 and works just fine. The POST request is for inserting data into the DynamoDB table. However, it doesn't work and AWS returns an InvalidSignatureException error as the response. This is an excerpt of my code:
createAWSSignedRequest(postData) {
  let request = {};
  if (postData) {
    request = {
      host: process.env.AWS_HOST,
      method: 'POST',
      url: process.env.AWS_URL,
      path: process.env.AWS_PATH,
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(postData)
    }
  } else {
    request = {
      host: process.env.AWS_HOST,
      method: 'GET',
      url: process.env.AWS_URL,
      path: process.env.AWS_PATH
    }
  }
  let signedRequest = aws4.sign(request, {
    secretAccessKey: process.env.AWS_SECRET_KEY,
    accessKeyId: process.env.AWS_ACCESS_KEY
  });
  return signedRequest;
}
This is how the GET request is made:
let signedRequest = this.createAWSSignedRequest('GET');
axios(signedRequest)
  .then(response => {
  })
  .catch((error) => {
    console.log("error", error);
  });
This is how the POST request is made:
const data = {
  uuid: "916b7d90-0137-11e8-94e6-116965754e23", // just a mock value
  date: "22/jan/2018",
  user_response: [
    {
      question: "this is question1",
      choice: "user selected A"
    },
    {
      question: "this is question2",
      choice: "user selected b"
    },
    {
      question: "this is question3",
      choice: "user selected C"
    }
  ]
};

let signedRequest = this.createAWSSignedRequest(data);
axios(signedRequest)
  .then(response => {
    ......
  })
  .catch((error) => {
    console.log("error", error);
  });
As you can see, the code for the GET and POST requests is exactly the same (except for the payload and method type). I'm signing with the same secret access key and access key ID for both requests. I'm not sure why one request results in "InvalidSignatureException" when the other doesn't. Can anyone shed some light on this issue for me?
Thanks
After having a discussion with the aws4 lib developer, I figured out what I did wrong. aws4 uses the "body" attribute as the payload when computing the signature. However, Axios uses the "data" attribute as the payload. My mistake was setting only one of them. When I set just "data", the payload was present in the request and Content-Length was computed correctly, but the signature was wrong because the payload wasn't taken into account when computing it. When I set just "body", the payload was not present in the request because Axios does not use the "body" attribute for the payload. The solution is to set both attributes to the payload. I hope this helps anyone who is having the same issue.
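In other words, the request object ends up carrying the payload twice, once for aws4 and once for Axios. A rough sketch of what that looks like with the same environment variables as above (this is a reconstruction of the described fix, not the answerer's exact code):

// Sketch: give aws4 the payload via "body" (for signing) and Axios via "data" (for sending).
const payload = JSON.stringify(postData);

const request = {
  host: process.env.AWS_HOST,
  method: 'POST',
  url: process.env.AWS_URL,
  path: process.env.AWS_PATH,
  headers: { 'Content-Type': 'application/json' },
  body: payload, // used by aws4.sign to compute the signature
  data: payload  // used by axios as the actual request payload
};

const signedRequest = aws4.sign(request, {
  secretAccessKey: process.env.AWS_SECRET_KEY,
  accessKeyId: process.env.AWS_ACCESS_KEY
});

axios(signedRequest)
  .then(response => { /* ... */ })
  .catch(error => console.log("error", error));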
If you use the AWS Amplify library, it has a module called API which should fit your use case, and it will perform SigV4 signing for you with either authenticated or unauthenticated roles. The Auth category uses Cognito as the default implementation. For instance:
npm install aws-amplify --save
Then import and configure the lib:
import Amplify, { API } from 'aws-amplify';

Amplify.configure({
  Auth: {
    identityPoolId: 'XX-XXXX-X:XXXXXXXX-XXXX-1234-abcd-1234567890ab',
    region: 'XX-XXXX-X'
  },
  API: {
    endpoints: [
      {
        name: "APIName",
        endpoint: "https://invokeURI.amazonaws.com"
      }
    ]
  }
});
Then for your API Gateway endpoint calling a Lambda:
let apiName = 'MyApiName';
let path = '/path';
let options = {
  headers: {...} // OPTIONAL
}
API.get(apiName, path, options).then(response => {
  // Add your code here
});
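Since the failing request in the question is a POST, the equivalent call would look roughly like this (a sketch; API.post accepts an init object whose body carries the JSON payload):

// Sketch: POSTing the question's payload through Amplify's API module.
const postOptions = {
  body: data, // the same object the question builds for the DynamoDB insert
  headers: {} // OPTIONAL
};

API.post(apiName, path, postOptions).then(response => {
  // Add your code here
});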
More info here: https://github.com/aws/aws-amplify
