I'm using the latest version of aws-amplify (3.3.7).
This is the code:
const s3Upload = async (file, id) => {
    const filename = file.name.replace(/\s/g, '')
    await Storage.vault.put(filename, file, {
        contentType: file.type
    })
}
When I try to upload small files (a 1 MB zip file) it works fine, but when I try to upload bigger files, e.g. a 6 MB zip file, I get this error:
AWSS3ProviderManagedUpload - error happened while finishing the upload. Cancelling the multipart upload Error: Request failed with status code 40
Has anyone faced this problem?
I've been having the same issue: at around 5 MB, Amplify switches to a multipart upload and it fails with the same error.
The fix for me was https://github.com/aws-amplify/amplify-js/issues/61.
I added ETag to ExposeHeaders in the S3 bucket's CORS settings, and uploading a 10 MB file worked fine after that.
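For reference, the bucket's CORS configuration ended up looking roughly like this (JSON format from the S3 console; the allowed origins and methods here are only placeholders, the important part is ETag under ExposeHeaders):
[
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["GET", "PUT", "POST", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": ["ETag"],
        "MaxAgeSeconds": 3000
    }
]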
I am facing a problem with an image upload: manually I can upload the image, but with Selenium I always get an Error 500 on the image upload. (The error is on the website's server, not in Selenium itself.)
I am uploading the image from the Properties folder in the project, but it also happens when I upload it from Windows Explorer.
I am also using headless mode.
This is the error I get:
Exception
Type: imageupload.uploaderror
Message: Error uploading file
This is my code:
// resolve the image from the classpath and pass its absolute path to the file input
File file = new File(getClass().getClassLoader().getResource(BildName).getFile());
String imagePath = file.getAbsolutePath();
eHinzufuegen.sendKeys(imagePath);
The company I work at wants to serve multiple React websites from a single S3 bucket and CloudFront distribution, as there is a limit to the number of buckets AWS allows. The base file structure we want has three folders at the root of the bucket (i.e. dev, prod, and sandbox), and each contains its corresponding React project.
From the CloudFront distribution, I forward the Host header to know which environment and folder should be served. I catch that in a Lambda@Edge function and route to the corresponding folder, which works up to that point. The Lambda@Edge function gets the index.html file correctly but cannot serve all the other files in the folder. This results in a blank white screen and a couple of errors in the console: The stylesheet was not loaded because its MIME type, "text/html", is not "text/css", and Loaded file has not a valid JavaScript MIME type.
The Lambda@Edge function I use is the following:
'use strict';
exports.handler = (event, context, callback) => {
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var cf = event.Records[0].cf;
    var request = event.Records[0].cf.request;
    const origin = request.origin;
    const domain = request.headers.host[0].value;
    if (domain.includes("dev")) {
        request.uri = "/dev-test/index.html";
    } else if (domain.includes("sandbox")) {
        request.uri = "/sandbox-test/index.html";
    } else {
        request.uri = "/prod-test/index.html";
    }
    request.headers.host[0].value = origin.s3.domainName;
    // Return to CloudFront
    return callback(null, request);
};
That function extracts the domain from the Host header, sets the URI accordingly, and changes the Host header to the S3 domain name to avoid a permissions error. I have also tried changing the S3 path as well as the URI, and changing only the path instead of the URI, but nothing seems to work. I do not know whether I should be setting or changing something else in the request, or whether I should change the settings/permissions in my S3 bucket and/or CloudFront distribution.
By the way, I am testing this with a basic React app; I only ran npx create-react-app my-app, then built it and uploaded everything in the build folder to the S3 bucket folders.
Thank you very much in advance for any help in this matter.
Are all of your requests coming back with a 200 OK status and the correct content, and you just see a blank screen with those console errors? If that's the case, the issue is that you need to set the MIME type on the CSS file in order for your browser to apply it. Some browsers, like Chrome, won't apply a stylesheet that isn't served with the text/css Content-Type header.
You have three ways to set it:
Add it as Content-Type metadata to each CSS file in your S3 bucket.
Use a Lambda@Edge origin-response function to dynamically set the Content-Type header based on the file extension (a sketch follows below).
Create a new cache behavior with a path pattern of *.css and attach a Response Headers Policy that sets Content-Type: text/css as a custom header with Origin Override checked (this is free to use and will perform faster than Lambda@Edge, as it's a native feature).
Option 1 is the simplest if you add it as part of your upload workflow. Otherwise, I would recommend option 3.
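For reference, a rough sketch of option 2, assuming the function is attached to the origin response event (the extension-to-MIME map below is only an illustration, extend it as needed):
'use strict';
// Map file extensions to the Content-Type the browser expects
const MIME_TYPES = {
    '.css': 'text/css',
    '.js': 'application/javascript',
    '.html': 'text/html',
    '.json': 'application/json',
    '.svg': 'image/svg+xml'
};
exports.handler = (event, context, callback) => {
    const { request, response } = event.Records[0].cf;
    // Look up the requested file's extension and override Content-Type if we know it
    const extension = (request.uri.match(/\.[^./]+$/) || [''])[0].toLowerCase();
    const mimeType = MIME_TYPES[extension];
    if (mimeType) {
        response.headers['content-type'] = [{ key: 'Content-Type', value: mimeType }];
    }
    return callback(null, response);
};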
I have an Angular 1.x application using the popular ng-file-upload to make a request to the Rackspace OpenCloud library for uploading files to a CDN.
In a nutshell, the script below uploads a file and sends it to the backend of the application, which in turn sends a request via OpenCloud to the Rackspace container and returns the public URL. This all works fine in Postman without issues; when I implement it in my Angular application I run into issues with CORS (see upload code below).
// frontend Angular app
Upload.upload({
    url : 'https://asite.com/users/request',
    data : {
        __token : $localStorage.token.value,
        fileToUpload : file
    }
// additional code below not shown for clarity
In the console log I see the following (I have changed the actual URL below for security purposes):
Failed to load https://xxxxxxxxxxxxxx-2426685a61633353dfd5b28cdbf2b449.ssl.cf3.rackcdn.com/5a7d6de352a949.48129019.pdf: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'http://domain.local' is therefore not allowed access. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled.
How do I prevent this CORS error so it loads the file from the CDN?
Fixed: the issue was actually on the PHP backend (in case it helps somebody else).
$object = $container->DataObject();
$object->Create(array(
    'name'          => $filename,
    'content_type'  => $mime,
    'extra_headers' => array('Access-Control-Allow-Origin' => '*')
), $file);
I needed to add the Access-Control-Allow-Origin header in the extra_headers array when creating the file with the OpenCloud SDK.
I am trying to upload an image to an API via an HTTP POST multipart form. I only have the URI of the image (the image is stored on the mobile device, Android or iOS).
I am stuck trying to get the File to send.
I tried
File myFile = new File(uri.getPath());
But I get this error with File:
services.js:2217 Uncaught SyntaxError: Unexpected identifier
I tried to get the file with
window.resolveLocalFileSystemURL(item.file_uri, function (file) {
    fd.append("documento", file);
});
(fd is the FormData)
With this I don't get an error, but it sends an empty file. The POST request itself works, though, as I said, there is no new file on the server.
I have to send the file inside the FormData under the param "documento".
The URI is like this:
file:///data/data/com.ionicframework.bemywallet910223/files/ifXaEIMG_20160318_225137.jpg
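In case it is relevant, resolveLocalFileSystemURL hands back a FileEntry rather than a File, so a minimal sketch of reading the underlying File before appending it (this assumes the Cordova File plugin and is only an illustration) would be:
window.resolveLocalFileSystemURL(item.file_uri, function (entry) {
    // entry is a FileEntry; entry.file() gives the actual File object
    entry.file(function (file) {
        fd.append("documento", file, file.name);
        // send the POST request with fd here, inside the callback
    }, function (err) {
        console.log("could not read file", err);
    });
});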
Thank you all for the help!
In my serverless app, I need to generate a PDF dynamically and then upload the generated PDF to an AWS S3 bucket. Since the serverless endpoint only accepts JSON requests, I send my HTML string to it; it generates the PDF and saves it on the local machine (in /tmp). That part I can do, but my problem is that I then need to upload the newly generated PDF to S3. My code is given below:
Angular Code:
$scope.generate_pdf = function() {
    var html = angular.element('html').html();
    var service = API.getService(); // send the html string as json for generating the pdf
    service.downloadPdf({}, { html : html },
        function(res) {
            console.log("res : ", res); // res.url: /tmp/dynamica-generate.pdf
            // pdf is saved into the '/tmp' folder
            // now I want to upload the pdf from '/tmp' to aws-s3
        }, function(err) {
            console.log("err : ", err);
        });
};
How can I solve this problem? Thanks in advance.
If you want to upload files from your local machine to an S3 bucket, install the AWS CLI on your local machine. After that is done, add a cron job for this command:
aws s3 sync /tmp s3://mybucket
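For example, a crontab entry that runs the sync every five minutes could look like this (the schedule and bucket name are placeholders):
*/5 * * * * aws s3 sync /tmp s3://mybucket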
You should be able to use the AWS JavaScript S3 SDK to integrate the upload into your AngularJS code.
You can follow this blog post and the GitHub code.
It also mentions setting up the credentials using a public IAM account. Depending on how you serve the files to your clients, you might also look into pre-signed URLs.
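As a starting point, here is a minimal sketch of uploading the generated file from /tmp on the backend with the AWS SDK for JavaScript (v2); the bucket name and key below are placeholders:
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();
// Stream the locally generated PDF from /tmp into the bucket
function uploadPdf(localPath, key) {
    return s3.upload({
        Bucket: 'my-pdf-bucket',                 // placeholder bucket name
        Key: key,                                // e.g. 'pdfs/dynamica-generate.pdf'
        Body: fs.createReadStream(localPath),
        ContentType: 'application/pdf'
    }).promise().then(function (result) {
        return result.Location;                  // URL of the uploaded object
    });
}
// uploadPdf('/tmp/dynamica-generate.pdf', 'pdfs/dynamica-generate.pdf')
//     .then(function (url) { console.log('uploaded to', url); });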