I am integrating CKEditor in a React project, and I am using an AWS S3 bucket to store the images added in the text editor. Upload works fine. The problem is that deleting an image in the editor does not delete it from the S3 bucket, which leaves a lot of unwanted images there. I therefore need to delete an image from S3 when it is no longer present in the editor.
How can I do that?
I have the link to the image on the React side, returned as the response of the upload.
You need the bucket name and the key of the file in order to delete it from AWS S3:
const AWS = require('aws-sdk');

const deleteS3Object = async (key, BUCKET_NAME) => {
  return new Promise((resolve, reject) => {
    try {
      const s3bucket = new AWS.S3({
        accessKeyId: IAM_USER_KEY,
        secretAccessKey: IAM_USER_SECRET,
        Bucket: BUCKET_NAME,
      });
      const params = { Bucket: BUCKET_NAME, Key: key };
      s3bucket.deleteObject(params, function (err, data) {
        if (err) reject(err); // an error occurred
        else resolve(data); // successful response
      });
    } catch (e) {
      reject(e);
    }
  });
};
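Since the React side only has the image URL returned by the upload, you first need to derive the object key from that URL. A minimal sketch, assuming a standard virtual-hosted S3 URL (https://<bucket>.s3.<region>.amazonaws.com/<key>); adjust the parsing if your upload response uses a different format:

// Hypothetical helper: extract the S3 object key from the uploaded image URL.
// Assumes the key is everything after the host, possibly URL-encoded.
const keyFromUrl = (imageUrl) => {
  const { pathname } = new URL(imageUrl);
  return decodeURIComponent(pathname.slice(1)); // drop the leading "/"
};

// Usage: when an image disappears from the editor content, delete it from S3.
// await deleteS3Object(keyFromUrl(removedImageUrl), BUCKET_NAME);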
EDIT: I've updated the CORS config, but it's still showing the same error.
I have a TinyMCE rich-text editor on my page. When you drop an image into the editor, some functions upload it to Firebase Storage and then swap the src in the editor with the URL fetched from Firebase. It mostly works, but the image is displayed as a broken-link icon.
When I check the link, it turns out the image is downloaded when the link is clicked. I added a metadata property on upload, but now it just shows a tiny box.
Here is the code where the image dropped into the editor is uploaded to Firebase Storage:
const imagesUploadHandler = async (blobInfo, success, failure) => {
  try {
    const file = blobInfo.blob();
    const storageRef = ref(storage, file.name);
    const metadata = {
      contentType: 'image/jpeg',
    };
    await uploadBytes(storageRef, file, metadata);
    const url = await getDownloadURL(storageRef);
    console.log(url);
    return url;
  } catch (error) {
    // Call the failure callback with the error message
    console.log(error.message);
    failure(error.message);
  }
};
Originally, I didn't include the contentType metadata, and it was uploading as application/octet-stream, which I assume is why it prompts you to save the image.
Image link: https://firebasestorage.googleapis.com/v0/b/cloudnoise-news.appspot.com/o/ref.jpg?alt=media&token=1edc90e7-1668-4a06-92a3-965ce275798b
Currently it's displaying a tiny box instead of the image.
Some things I checked:
- Firebase Storage rules are in test mode, so anyone should be able to read and write.
- I tried different MIME types, but it either shows the tiny box or shows "undefined".
- The files upload successfully, and the src swap in the TinyMCE editor works fine.
Any idea why this is happening?
You need to set the contentType metadata from the file itself:
const metadata = {
  contentType: file.type,
};
This should ensure that the correct content type is set when the image is uploaded to Firebase Storage.
If this does not resolve the issue, you may need to check that the URL returned from getDownloadURL is valid and points to the correct image. You can try opening the URL in a new browser tab to verify that the image is accessible.
I fixed it by creating a Blob object from the file data and uploading that Blob instead of the file directly.
const imagesUploadHandler = async (blobInfo, success, failure) => {
  try {
    const file = blobInfo.blob();
    const storageRef = ref(storage, file.name);
    const metadata = {
      contentType: file.type,
    };
    // Create a new Blob object with the file data (new Blob() is synchronous)
    const blob2 = new Blob([file], { type: file.type });
    // Upload the Blob to Firebase Storage
    await uploadBytes(storageRef, blob2, metadata);
    const url = await getDownloadURL(storageRef);
    console.log(url);
    return url;
  } catch (error) {
    // Call the failure callback with the error message
    failure(error.message);
  }
};
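One caveat worth checking (an assumption, since the thread doesn't state the TinyMCE version): in TinyMCE 5, images_upload_handler receives (blobInfo, success, failure) callbacks, while in TinyMCE 6 it receives (blobInfo, progress) and must return a Promise that resolves to the image URL. A v6-style sketch of the same handler:

// TinyMCE 6 style: an async function returns a Promise, which satisfies
// the v6 contract; rejections surface as upload failures in the editor.
const imagesUploadHandler = async (blobInfo, progress) => {
  const file = blobInfo.blob();
  const storageRef = ref(storage, file.name);
  const blob = new Blob([file], { type: file.type });
  await uploadBytes(storageRef, blob, { contentType: file.type });
  return getDownloadURL(storageRef); // resolves to the image URL
};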
After updating React Native to the latest version, 0.63.2, uploading an image to an S3 bucket via XHR returns the error "Stream Closed". Image upload was working fine with version 0.61.5.
The code:
uploadProfile({ variables: { filetype: mime } }).then(
  ({ data: { uploadUserProfile } }) => {
    const { presignedUrl, url } = uploadUserProfile;
    console.log('presignedUrl', { presignedUrl, url });
    // uploading to s3 bucket
    const xhr = new XMLHttpRequest();
    xhr.open('PUT', presignedUrl);
    xhr.onreadystatechange = async function () {
      if (xhr.readyState === XMLHttpRequest.DONE) {
        if (xhr.status === 200) {
          updateAccount({
            variables: {
              data: {
                profile: url,
              },
            },
          });
        } else {
          if (/Request has expired/g.test(xhr.response))
            Toast({ message: 'slow network connection' });
          else {
            console.log({
              response: xhr.response,
              responseText: xhr.responseText,
              status: xhr.status,
            });
            Toast({ message: 'internal server error' });
            await report({
              error: {
                response: xhr.response,
                responseText: xhr.responseText,
                status: xhr.status,
              },
            }); // reporting error
          }
        }
      }
    };
    xhr.setRequestHeader('Content-Type', mime);
    xhr.send({ uri: path, type: mime });
    setLoading(false);
  },
);
When the user wants to upload a profile image, the app first sends a request to the server, gets back the pre-signed URL, and then uploads from the client side. This is how the app was working.
I upgraded Flipper to version 0.51.2 and it worked for me.
Go to android/gradle.properties and add this line:
FLIPPER_VERSION=0.52.1
You should have the following lines in your android/app/build.gradle:
dependencies {
  // ....
  debugImplementation("com.facebook.flipper:flipper:${FLIPPER_VERSION}") {
    exclude group:'com.facebook.fbjni'
  }
  debugImplementation("com.facebook.flipper:flipper-network-plugin:${FLIPPER_VERSION}") {
    exclude group:'com.facebook.flipper'
  }
  debugImplementation("com.facebook.flipper:flipper-fresco-plugin:${FLIPPER_VERSION}") {
    exclude group:'com.facebook.flipper'
  }
  // ...
}
Upgrading the Flipper version solved the issue for me. If it doesn't solve it for you, try the following solution.
For whoever is still struggling with this issue: it's happening because of the Flipper network plugin.
I disabled it and things work just fine.
My workaround is commenting out line 43:
38  NetworkFlipperPlugin networkFlipperPlugin = new NetworkFlipperPlugin();
39  NetworkingModule.setCustomClientBuilder(
40      new NetworkingModule.CustomClientBuilder() {
41        @Override
42        public void apply(OkHttpClient.Builder builder) {
43          // builder.addNetworkInterceptor(new FlipperOkhttpInterceptor(networkFlipperPlugin));
44        }
45      });
46  client.addPlugin(networkFlipperPlugin);
in this file: android/app/src/debug/java/com/maxyride/app/drivers/ReactNativeFlipper.java
I found this fix in a linked answer.
I'm making a React app where I let the user upload a PDF file, which I then convert to a JPG, store locally, and upload to AWS S3. However, the uploaded JPG does not behave like an image file once it reaches S3. I want to upload a local image file to AWS S3 without having to use <input type="file" />.
I tried fs.readFile() with s3.upload(), but the uploaded file does not behave like an image. I also tried multer-s3, but that requires the user to manually select the image file via an <input type="file" />, which I want to avoid.
Here is how I'm uploading the JPG file:
const aws = require("aws-sdk");
const multer = require("multer");
const fs = require("fs");
const path = require("path");
const convertPdf = require("pdf-poppler");
const router = require("express").Router();

const s3 = new aws.S3({
  accessKeyId: "<key>",
  secretAccessKey: "<secret>",
  Bucket: "<bucketName>"
});

// SAVING PDF TO DISK STORAGE
const storage = multer.diskStorage({
  destination: function(req, file, cb) {
    cb(null, path.join(__dirname, "uploads/"));
  },
  filename: function(req, file, cb) {
    // let pdfName = "samplePDF";
    // req.body.file = pdfName;
    cb(null, file.originalname);
  }
});

const upload = multer({
  storage: storage
});

router.post("/pdf", upload.single("pdf"), (req, res, next) => {
  const uploadPath = req.file.path;
  const imagePath = req.file.destination + req.file.originalname.slice(0, -4) + "-1.jpg";
  const opts = {
    format: "jpg",
    out_dir: req.file.destination,
    out_prefix: path.basename(req.file.path).slice(0, -4),
    page: null
  };
  // CONVERTING PDF TO JPG
  convertPdf.convert(uploadPath, opts).then(() => {
    // UPLOADING FILE (BUT NOT READABLE AS AN IMAGE)
    fs.readFile(imagePath, (err, data) => {
      if (err) throw err;
      const params = {
        Bucket: "<bucketName>",
        Key: path.basename(imagePath),
        Body: data
      };
      s3.upload(params, (s3Error, data) => {
        if (s3Error) throw s3Error;
        console.log(`File uploaded successfully at ${data.Location}`);
        res.json({
          image: data.key,
          location: data.Location
        });
      });
    });
  });
});
I expected the uploaded file to be usable as an image, but it isn't, which is the problem. Is there any way to upload a local image file to AWS S3 without requiring an input field?
EDIT: It turns out S3 makes uploaded files private by default, which is why the file could not be read. The issue is resolved when I make the file public.
The uploaded file was an image file; it just didn't have the right permissions. I added the ACL: "public-read" param and now the file displays as expected.
Updated code:
fs.readFile(imagePath, (err, data) => {
  if (err) throw err;
  const params = {
    Bucket: "flyingfishcattle",
    Key: path.basename(imagePath),
    Body: data,
    ACL: "public-read"
  };
  s3.upload(params, (s3Error, data) => {
    if (s3Error) throw s3Error;
    console.log(`File uploaded successfully at ${data.Location}`);
    res.json({
      image: data.key,
      location: data.Location
    });
  });
});
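If you'd rather not make objects public, a presigned GET URL is an alternative. A minimal sketch using the same AWS SDK v2 client as above (the one-hour expiry is an arbitrary choice):

// Generate a temporary, signed read URL instead of making the object public.
const signedGetUrl = s3.getSignedUrl("getObject", {
  Bucket: "flyingfishcattle",
  Key: path.basename(imagePath),
  Expires: 3600 // seconds
});
console.log(signedGetUrl); // hand this URL to the client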
I'm getting an unexpected 403 when trying to upload a file to S3. The weird part is that I accomplished this before using the Java AWS SDK to generate the presigned URL. I am now using the Python AWS SDK to generate the presigned URL, and as far as I can tell I am doing the exact same thing.
Here is my Java code that works with no problem:
public UploadSignedRequest getUploadSignedRequest() {
    AmazonS3 s3Client = getS3Client();

    // Set the pre-signed URL to expire after one hour.
    Date expiration = DateUtil.getSignedUrlExpirationDate();

    // Generate the pre-signed URL.
    String objectKey = UUID.randomUUID().toString();
    GeneratePresignedUrlRequest generatePresignedUrlRequest =
        new GeneratePresignedUrlRequest(BUCKET_NAME, objectKey)
            .withMethod(HttpMethod.PUT)
            .withExpiration(expiration);

    String s3FilePath = String.format("%s/%s/%s", S3_URL, BUCKET_NAME, objectKey);
    URL signedRequest = s3Client.generatePresignedUrl(generatePresignedUrlRequest);

    return new UploadSignedRequest(signedRequest, s3FilePath, objectKey);
}
Here is the successful client code:
var config = {
  onUploadProgress: function (progressEvent) {
    var percentCompleted = Math.round((progressEvent.loaded * 100) / progressEvent.total);
    updateProgressFunc(percentCompleted);
  }
};

axios
  .put(signedRequest.signedRequestUrl, videoFile, config)
  .then(res => {
    console.log(res);
    console.log(res.status);
    // save video metadata in db
    dispatch(saveVideoMetadata(video));
  })
  .catch(function (error) {
    console.log(error);
  });
Now, here is my attempt to accomplish essentially the same thing (image files instead of video files) using the Python AWS SDK:
def getS3UploadRequest(uuid):
    return S3.generate_presigned_url(
        ClientMethod='put_object',
        Params={
            'Bucket': BUCKET,
            'Key': uuid
        }
    )
Client code where I get the 403:
axios
  .put(signedRequest, attachedFile)
  .then(res => {
    console.log("successfully uploaded file to s3");
    console.log(res);
    // dispatch(createProjectTaskComment(newComment, projectTaskId, userId, isFreelancer));
  });
When I try to use the presigned URL in Postman, I get the following response back:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>SignatureDoesNotMatch</Code>
  <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
  <AWSAccessKeyId>gibberish</AWSAccessKeyId>
  <StringToSign>PUT
Thanks for the help!!!
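A frequent cause of SignatureDoesNotMatch on presigned PUTs (an assumption here, not a confirmed diagnosis for this case): axios adds a Content-Type header to the PUT, and if that header wasn't included when the URL was signed, the signature no longer matches. On the boto3 side that means adding 'ContentType' to Params in generate_presigned_url; the client must then send exactly that header. A sketch of the client side, assuming the URL was presigned with ContentType 'image/jpeg':

// Assumes the server presigned the URL with ContentType 'image/jpeg',
// e.g. boto3 Params={'Bucket': BUCKET, 'Key': uuid, 'ContentType': 'image/jpeg'}.
const config = {
  headers: { "Content-Type": "image/jpeg" } // must match the presigned ContentType exactly
};

axios
  .put(signedRequest, attachedFile, config)
  .then(res => console.log("successfully uploaded file to s3", res.status))
  .catch(error => console.log(error));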
I have a React app where users can post images. When a user posts an image, my SQL table gets updated: the imgSrc column is set to the name of the file, for example mysite_1536290516498.jpg.
How can the code below be modified to serve images from AWS instead of destFile: ${root}/dist/posts/${filename}?
My server code to post an image and update the database is:
const app = require('express').Router(),
  db = require('../../../config/db'),
  Post = require('../../../config/Post'),
  User = require('../../../config/User'),
  root = process.cwd(),
  upload = require('multer')({
    dest: `${root}/dist/temp/`,
  }),
  { ProcessImage, DeleteAllOfFolder } = require('handy-image-processor')

// POST [REQ = DESC, FILTER, LOCATION, TYPE, GROUP, IMAGE(FILE)]
app.post('/post-it', upload.single('image'), async (req, res) => {
  try {
    let { id } = req.session,
      { desc, filter, location, type, group } = req.body,
      filename = `mysite_${new Date().getTime()}.jpg`,
      obj = {
        srcFile: req.file.path,
        destFile: `${root}/dist/posts/${filename}`,
      },
      insert = {
        user: id,
        description: desc,
        imgSrc: filename,
        filter,
        location,
        type,
        group_id: group,
        post_time: new Date().getTime(),
      }

    await ProcessImage(obj)
    DeleteAllOfFolder(`${root}/dist/temp/`)

    let { insertId } = await db.query('INSERT INTO posts SET ?', insert),
      fullname = await User.getWhat('fullname', id)

    await db.toHashtag(desc, id, insertId)
    await User.mentionUsers(desc, id, insertId, 'post')

    res.json({
      success: true,
      mssg: 'Posted!!',
      post_id: insertId,
      fullname,
      filename,
    })
  } catch (error) {
    db.catchError(error, res)
  }
})
You can do this at the HTTP-server level, where you have access to the .htaccess or httpd.conf file of an Apache HTTP server: redirect all requests for /dist/posts/ to the AWS path.
Before that, make sure you have the same directory structure in the AWS S3 bucket, so that when a client requests /dist/posts/<image>, it gets served from http://aws/dist/posts/ (your AWS path).
This is essentially the pattern a CDN uses; searching for "CDN" will turn up more background.
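Since the server code in the question is already an Express router, the redirect can also live in the app itself. A minimal sketch, assuming hypothetical <bucket> and <region> placeholders and that the S3 keys mirror dist/posts/<filename>:

// Hypothetical bucket/region placeholders: substitute your own values.
const S3_BASE = 'https://<bucket>.s3.<region>.amazonaws.com/dist/posts'

// Redirect image requests to S3 instead of serving them from local disk.
app.get('/dist/posts/:filename', (req, res) => {
  res.redirect(`${S3_BASE}/${encodeURIComponent(req.params.filename)}`)
})

With this in place, the imgSrc filenames stored in the database don't need to change; only the route that serves them does.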