React JS and Firebase: Cannot upload image

I'm trying to create a signup form for my website. I'm storing the user information in MongoDB and their images on Firebase. When the Signup button is pressed I get "POST https://firebasestorage.googleapis.com/v0/b/app/o?name=image.jpg 403" in the browser's console.
I have seen several tutorials on how to upload files to Firebase Storage using React.js, and this is what I tried:
import { getStorage, ref, uploadBytesResumable, getDownloadURL } from 'firebase/storage';

const [file, setfile] = useState(null);

const handleClick = (e) => {
  e.preventDefault();
  const fileName = new Date().getTime() + file?.name;
  const storage = getStorage(app);
  const storageRef = ref(storage, fileName);
  const uploadTask = uploadBytesResumable(storageRef, file);
  uploadTask.on(
    'state_changed',
    null, // progress handler (unused)
    (error) => console.error(error),
    () => {
      // Handle successful uploads on complete
      // For instance, get the download URL: https://firebasestorage.googleapis.com/...
      getDownloadURL(uploadTask.snapshot.ref).then((downloadURL) => {
        // signup() is the Node.js signup endpoint
        signup(dispatch, { email, password, username, phonenumber, profile: downloadURL });
      });
    }
  );
};
Image field:
<input type="file" name="file" id="file" onChange={(e)=>setfile(e.target.files[0])} />
If you need to see any other part of the code, please let me know.

Based on the comments under your question, you get a 403 error because your Security Rules prevent any user from uploading a file (allow read, write: if false;). This is also a point listed by @eugenemusebe in his answer.
You need to adapt your Security Rules so that the desired users have the correct write access.
For example, to require that a user is authenticated before uploading an image:
service firebase.storage {
  // The {bucket} wildcard indicates we match files in all Cloud Storage buckets
  match /b/{bucket}/o {
    // Match filename
    match /filename {
      allow read: if <condition>;
      allow write: if request.auth != null;
    }
  }
}
More details in the Security Rules documentation.
Note that for testing (and for confirming that this is the problem) you can temporarily open write access with:
service firebase.storage {
  // The {bucket} wildcard indicates we match files in all Cloud Storage buckets
  match /b/{bucket}/o {
    // Match filename
    match /filename {
      allow read: if <condition>;
      allow write: if true;
    }
  }
}
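If you go with the authenticated-user rule, keep in mind that the client has to be signed in before it calls uploadBytesResumable, otherwise request.auth is null and the 403 remains. Below is a minimal sketch of that check with the v9 modular SDK; it assumes Firebase Authentication is enabled for the project and that anonymous sign-in is acceptable for your case (swap in your real sign-in flow if it is not):
import { getAuth, onAuthStateChanged, signInAnonymously } from 'firebase/auth';

// Ensure there is a signed-in user before starting the upload,
// so the rule "allow write: if request.auth != null;" passes.
const auth = getAuth(app);
onAuthStateChanged(auth, (user) => {
  if (!user) {
    // Illustrative fallback: anonymous auth just to satisfy the rule.
    signInAnonymously(auth).catch((err) => console.error(err));
  }
});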

Related

Firebase Storage not displaying image properly (shows a small box)

EDIT: I've updated the CORS config but it's still showing the same error.
I have a TinyMCE RTE on my page, and when you drop an image into the editor, I have some functions that upload it to Firebase Storage, then swap out the src in the text editor with the URL fetched from Firebase. It works kind of OK, but it's being displayed as a broken-link image icon.
When I check the link, it's because originally it downloads the image when the link is clicked. I added a metadata property when it uploads, but now it's just showing a tiny box.
Here is the code where the image dropped into the editor is uploaded to Firebase Storage:
const imagesUploadHandler = async (blobInfo, success, failure) => {
  try {
    const file = blobInfo.blob();
    const storageRef = ref(storage, file.name);
    const metadata = {
      contentType: 'image/jpeg',
    };
    await uploadBytes(storageRef, file, metadata);
    const url = await getDownloadURL(storageRef);
    console.log(url);
    return url;
  } catch (error) {
    // Call the failure callback with the error message
    failure(error.message);
  }
};
Originally, I didn't include the contentType metadata, and it was just uploading as application/octet-stream, which I assume is why it prompts you to save the image.
Image link: https://firebasestorage.googleapis.com/v0/b/cloudnoise-news.appspot.com/o/ref.jpg?alt=media&token=1edc90e7-1668-4a06-92a3-965ce275798b
Currently it's displaying the tiny box described above.
Some things I checked:
The Firebase Storage rules are in test mode, so anyone should be able to read and write.
I tried sticking in different MIME types, but it either shows the tiny box or it shows "undefined".
The files upload successfully and the "swap" in the TinyMCE editor is also all good.
Any idea why this is happening?
You need to set the contentType in the metadata from the file itself:
const metadata = {
  contentType: file.type,
};
This should ensure that the correct content type is set when the image is uploaded to Firebase Storage.
If this does not resolve the issue, you may need to check that the URL returned from getDownloadURL is valid and points to the correct image. You can try opening the URL in a new browser tab to verify that the image is accessible.
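If you want to confirm what actually got stored, you can also read the object's metadata back after the upload. A small sketch with the v9 modular SDK, to be run inside an async function (storageRef is assumed to be the same reference used in the upload code above):
import { getMetadata } from 'firebase/storage';

// Read back the stored metadata to verify the content type.
const stored = await getMetadata(storageRef);
console.log(stored.contentType); // should be e.g. "image/png", not "application/octet-stream"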
I fixed it by adding a blob: I created a Blob object with the file data, then made it upload the Blob object instead of the file directly.
const imagesUploadHandler = async (blobInfo, success, failure) => {
  try {
    const file = blobInfo.blob();
    const storageRef = ref(storage, file.name);
    const metadata = {
      contentType: file.type,
    };
    // Create a new Blob object with the file data
    const blob2 = new Blob([file], { type: file.type });
    // Upload the Blob to Firebase Storage
    await uploadBytes(storageRef, blob2, metadata);
    const url = await getDownloadURL(storageRef);
    console.log(url);
    return url;
  } catch (error) {
    // Call the failure callback with the error message
    failure(error.message);
  }
};

Uppy/Shrine: How to retrieve presigned url for video after successful upload (using AWS S3)

I'm using Uppy for file uploads in React, with a Rails API using Shrine.
I'm trying to show a preview for an uploaded video before submitting a form. It's important to emphasize that this is specifically for a video upload, not an image. So the 'thumbnail:generated' event will not apply here.
I can't seem to find any events that Uppy provides that return a cached video preview (like thumbnail:generated does), or anything that passes back a presigned URL for the uploaded file (less expected, obviously), so the only option I see is constructing the URL manually. Here's what I'm currently trying for that (irrelevant code removed for brevity):
import React, { useEffect, useState } from 'react'
import AwsS3 from '@uppy/aws-s3'
import Uppy from '@uppy/core'
import axios from 'axios'
import { DragDrop } from '@uppy/react'
import { API_BASE } from '../../../api'

const constructParams = (metadata) => ([
  `?X-Amz-Algorithm=${metadata['x-amz-algorithm']}`,
  `&X-Amz-Credential=${metadata['x-amz-credential']}`,
  `&X-Amz-Date=${metadata['x-amz-date']}`,
  '&X-Amz-Expires=900',
  '&X-Amz-SignedHeaders=host',
  `&X-Amz-Signature=${metadata['x-amz-signature']}`,
].join('').replaceAll('/', '%2F'))

const MediaUploader = () => {
  const [videoSrc, setVideoSrc] = useState('')

  const uppy = new Uppy({
    meta: { type: 'content' },
    restrictions: {
      maxNumberOfFiles: 1
    },
    autoProceed: true,
  })

  const getPresigned = async (id, type) => {
    const response = await axios.get(`${API_BASE}/s3/params?filename=${id}&type=${type}`)
    const { fields, url } = response.data
    const params = constructParams(fields)
    const presignedUrl = `${url}/${fields.key}${params}`
    console.log('presignedUrl from Shrine request data: ', presignedUrl)
    setVideoSrc(presignedUrl)
  }

  useEffect(() => {
    uppy
      .use(AwsS3, {
        id: `AwsS3:${Math.random()}`,
        companionUrl: API_BASE,
      })
    uppy.on('upload-success', (file, _response) => {
      const { type, meta } = file
      // First attempt to construct presigned URL here
      const url = 'https://my-s3-bucket.s3.us-west-1.amazonaws.com'
      const params = constructParams(meta)
      const presignedUrl = `${url}/${meta.key}${params}`
      console.log('presignedUrl from upload-success data: ', presignedUrl)
      // Second attempt to construct presigned URL here
      const id = meta.key.split(`${process.env.REACT_APP_ENV}/cache/`)[1]
      getPresigned(id, type)
    })
  }, [uppy])

  return (
    <div className="MediaUploader">
      <div className="Uppy__preview__wrapper">
        <video
          src={videoSrc || ''}
          className="Uppy__preview"
          controls
        />
      </div>
      {(!videoSrc || videoSrc === '') && (
        <DragDrop
          uppy={uppy}
          className="UploadForm"
          locale={{
            strings: {
              dropHereOr: 'Drop here or %{browse}',
              browse: 'browse',
            },
          }}
        />
      )}
    </div>
  )
}

export default MediaUploader
Both URLs here come back with a SignatureDoesNotMatch error from AWS.
The manual construction of the URL comes mainly from constructParams. I have two different implementations of this: the first takes the metadata directly from the uploaded file data in the 'upload-success' event and simply concatenates a string to build the URL. The second uses getPresigned, which makes a request to my API, which points to a generated Shrine path that should return data for a presigned URL. API_BASE simply points to my Rails API. More info on the generated Shrine route here.
It's worth noting that everything works perfectly with the upload process that passes through Shrine, and after submitting the form I'm able to get a presigned URL for the video and play it without issue on the site. So I have no reason to believe Shrine is returning incorrectly signed URLs.
I've compared the two presigned URLs I'm manually generating in the form with the URL returned from Shrine after uploading. All three are identical in structure but have different signatures. Here are those three URLs:
presignedUrl from upload-success data:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/41b229fb17cbf21925d2cd907a59be25.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132613Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=97aefd1ac7f3d42abd2c48fe3ad50b542742ad0717a51528c35f1159bfb15609
presignedUrl from Shrine request data:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/023592fb14c63a45f02c1ad89a49e5fd.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132619Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=7171ac72f7db2b8871668f76d96d275aa6c53f71b683bcb6766ac972e549c2b3
presigned url displayed on site after form submission:
https://my-s3-bucket.s3.us-west-1.amazonaws.com/development/cache/41b229fb17cbf21925d2cd907a59be25.mp4?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAW63AYCMFA4374OLC%2F20221210%2Fus-west-1%2Fs3%2Faws4_request&X-Amz-Date=20221210T132734Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host&X-Amz-Signature=9ecc98501866f9c5bd460369a7c2ce93901f94c19afa28144e0f99137cdc2aaf
The first two URLs come back with SignatureDoesNotMatch, while the third URL properly plays the video.
I'm aware the first and third URLs have the same file name, while the second URL does not. I'm not sure what to make of that, but its relevance is secondary to me, since that approach was more of a last-ditch effort anyway.
I'm not at all attached to the current way I'm doing things. It's just the only solution I could come up with, due to lack of options. If there's a better way of going about this, I'm very open to suggestions.
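As an aside, and only as a hedged sketch of an alternative rather than the approach described above: if the goal is just a local preview before the form is submitted, the file Uppy already holds in memory can be previewed with an object URL, without any presigned URL at all. The file-added event and the file.data Blob are standard Uppy; everything else here is illustrative:
// Preview the selected video locally instead of fetching it back from S3.
uppy.on('file-added', (file) => {
  const localUrl = URL.createObjectURL(file.data) // file.data is the underlying Blob/File
  setVideoSrc(localUrl)
})

// Remember to release the object URL later, e.g. after submit or on unmount:
// URL.revokeObjectURL(localUrl)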

Save values to a .env variable in ReactJS

I've been trying to create a .env variable that is initially empty, but after the login process stores the data for further use; unfortunately, I haven't been able to do so.
Before I put my code example, I would like some suggestions!
Yes, in the login process I'm using session storage to store the user token. So, would it be good practice to store the user data in a .env file and later access it, or should I just call the getToken function every time I need the token to verify that the user is logged in?
login.js:
const getToken = () => {
  const tokenString = sessionStorage.getItem('token');
  const userToken = JSON.parse(tokenString);
  return userToken?.token;
}

const saveToken = (userData) => {
  sessionStorage.setItem('token', JSON.stringify(userData));
  setToken(userData);
}
I tried different techniques to make it work, but I just couldn't get the data from the .env file.
I watched many different YouTube videos and did exactly what they did, but it was all in vain.
I checked multiple times whether there was any typo or bug in my code; there was no error. I was getting the token after a successful login, and by default it was returning null. I was storing the token only when the user logged in successfully, so that no garbage value gets inserted.
Here's my logic:
const handleSubmit = async function (e) {
  e.preventDefault();
  const response = await loginUser(user);
  if (response.status === 200) {
    setToken(response.data);
    process.env.REACT_APP_USER_TOKEN = response.data;
    navigate("/");
  } else {
    console.error(response);
  }
}
.env files are used to store sensitive API keys or secrets, which can only be read by the code when needed.
Storing user data in a .env file is not the right way. If your user data should not be easily available in the frontend, try encryption and store the encryption key in a .env file or on the backend.
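Also note that in Create React App, REACT_APP_* variables are read from .env and inlined into the bundle at build time, so assigning to process.env at runtime (as in the handleSubmit above) does not persist anywhere. Reusing the sessionStorage helper from the question is the simpler pattern; a minimal sketch (getToken is the helper already shown above, and the Authorization header format is only an illustration):
// Reuse the existing sessionStorage-based helper instead of a .env variable.
// A runtime write to process.env.REACT_APP_USER_TOKEN is lost on reload,
// because CRA bakes env values into the bundle at build time.
const authHeaders = () => {
  const token = getToken(); // returns undefined when not logged in
  return token ? { Authorization: `Bearer ${token}` } : {};
};

// Example: attach the token to an API call when needed.
// fetch('/api/profile', { headers: authHeaders() });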

'EPERM: operation not permitted' when using multer

I'm attempting to pass a video file from my front end (using React and Axios) and upload it to YouTube from my backend using Express (following this tutorial: https://youtu.be/xhiWEpU-h-A). The file gets submitted as a FormData object from my front end.
Here's my backend
const oAuth = youtube.authenticate({
  type: 'oauth',
  client_id: credentials.web.client_id,
  client_secret: credentials.web.client_secret,
  // Redirect uris has not been set up, may cause errors
  redirect_url: credentials.web.redirect_uris[0]
})

const storage = multer.diskStorage({
  destination: '/',
  filename(req, file, cb) {
    const newFileName = `${uuid()}-${file.originalname}`
    cb(null, newFileName);
  }
})

const uploadVideoFile = multer({ storage: storage }).single("videoFile");

app.post('/api/uploadVideo', uploadVideoFile, (req, res) => {
  console.log("upload video endpoint established")
  console.log(`file was set to: ${req.file}`)
  if (req.file)
    console.log('we found a file')
  else
    console.log('no file?')
  console.log(`title: ${title} description: ${description}`)
  return;
  // ignore the stuff below here
Here's the function that submits the FormData on the front end, as well as the state variables:
const uploadVideo = () => {
  //uploadVideo(Credentials, [{}])
  const videoData = new FormData()
  videoData.append("videoFile", videoFile)
  videoData.append("title", videoTitle)
  videoData.append("description", videoDescription)
  console.log(videoData)
  Axios.post("http://localhost:3001/api/uploadVideo", videoData).then((response) => {
    console.log(response.data)
  })
}

const [videoTitle, setVideoTitle] = useState('')
const [videoMode, setVideoMode] = useState(true)
const [videoFile, setVideoFile] = useState(null)
And here's the form element that accepts the video file:
<Form.Control type="file" accept="video/mp4" disabled={!videoMode} onChange={(e)=>{setVideoFile(e.target.files[0])}}>
</Form.Control>
When I attempt to run the function, I get the following error message on my server:
Error: EPERM: operation not permitted, open 'C:\fb434fe2-e46c-4d8c-8f41-7b807f1b92a7-Column test - Google Chrome 2022-01-12 17-27-21.mp4'
The error occurs before the post request can be resolved.
I was hoping Multer would be able to locate the file I passed to it, but I simply get a permission error.
I managed to solve the error by setting up Multer again and using the default settings from the documentation
const upload = multer({ dest: "uploads/" }); // uploads to the "uploads" folder on my server

// post request
app.post('/api/uploadVideo', upload.single("videoFile"), (req, res) => {
  const { title, description } = req.body;
  console.log("upload video endpoint established")
  console.log(`file destination was set to: ${req.file.destination}`)
  //return
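For what it's worth, the EPERM most likely came from the original diskStorage destination of '/', which resolves to the root of the drive (hence the C:\ path in the error), where Node is normally not allowed to write. If you would rather keep the custom filename logic than switch to the defaults, here is a hedged sketch that only swaps the destination for a writable folder (the uploads/ path mirrors the fix above and is assumed to exist or be creatable):
const fs = require('fs');
const multer = require('multer');
const { v4: uuid } = require('uuid');

// Make sure the target folder exists before Multer tries to write into it.
fs.mkdirSync('uploads/', { recursive: true });

const storage = multer.diskStorage({
  destination: 'uploads/',                      // writable folder instead of '/'
  filename(req, file, cb) {
    cb(null, `${uuid()}-${file.originalname}`); // keep the unique filename scheme
  },
});

const uploadVideoFile = multer({ storage }).single('videoFile');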

Error: User credentials required in Google Cloud Print API

I'm trying to use the Google Cloud Print (GCP) API, but I can't make it work.
Maybe I've misunderstood the workflow, because this is the first time I'm using a Google API; please help me understand how to make it work.
Initial considerations:
I'm implementing this in ReactJS, but that is incidental: the logic needed to make GCP work is independent of the technology, so you can also just help me understand the workflow.
What exactly I want:
For my first test, I want to get all the information about my printer.
What I did:
I created a project in: https://console.developers.google.com
Inside the created project, I created a credential:
create credentials -> OAuth client ID
I chose Application type: Web, and also configured the source and redirect restrictions to my localhost.
Manually, in https://www.google.com/cloudprint, I added my printer; I made a test printing a PDF and it was OK.
I created a project in ReactJS to get the information about the printer I added.
Component:
Explanation:
I'm using the component react-google-login to easily obtain the user's access token: https://github.com/anthonyjgrove/react-google-login
This component only obtains the access token and saves it in localStorage, in a variable called googleToken, and it draws a button to call a function to obtain the information about the printer.
code:
import React, { Component } from 'react'
import GoogleLogin from 'react-google-login';
import { connect } from 'react-redux'
import { getPrinters } from '../actions/settings'

class Setting extends Component {
  responseGoogle(response) {
    const accessToken = response.accessToken
    localStorage.setItem('googleToken', accessToken)
  }

  render() {
    return (
      <div>
        <GoogleLogin
          clientId="CLIENT_ID_REMOVED_INTENTIONALLY.apps.googleusercontent.com"
          buttonText="Login"
          onSuccess={this.responseGoogle}
          onFailure={this.responseGoogle}
        />
        <button
          onClick={() => {
            this.props.getPrinters()
          }}
        >test printer</button>
      </div>
    )
  }
}

const mapStateToProps = state => {
  return {
    state: state
  }
}

const mapDispatchToProps = dispatch => {
  return {
    getPrinters() {
      dispatch(getPrinters())
    }
  }
}

export default connect(
  mapStateToProps,
  mapDispatchToProps
)(Setting)
Action (function) to get the printer information:
Explanation:
I'm passing the parameter printerid to get information about that printer.
In the Authorization header I'm using 'OAuth ...' because the documentation says so (second paragraph): https://developers.google.com/cloud-print/docs/appInterfaces
I added the next two headers because I tried solutions such as:
Google Cloud Print API: User credentials required
Google Cloud Print User credentials required
code:
import axios from 'axios'

axios.defaults.headers.common['Authorization'] = 'OAuth ' + localStorage.getItem('googleToken')
axios.defaults.headers.common['scope'] = 'https://www.googleapis.com/auth/cloudprint'
axios.defaults.headers.common['X-CloudPrint-Proxy'] = 'printingTest'

const getPrinters = () => {
  return () => {
    return axios.get('https://www.google.com/cloudprint/printer', {
      params: {
        printeid: 'PRINTER_ID_REMOVED_INTENTIONALLY'
      }
    })
      .then(response => {
        console.log('response of google cloud print')
        console.log(response)
      })
  }
}

export { getPrinters }
Error:
After everything explained above, I got the following error:
User credentials required
Error 403
Note:
I'm using a CORS plugin, as recommended in:
Chrome extensions for silent print?
because initially I had a CORS error.
Any suggestion or recommendation would be very useful, thanks.
I've resolved my problem. My main problem with "User credentials required" was that I was using the incorrect access token, because I was obtaining the access token incorrectly.
I'm going to explain my whole solution because there are few code examples for this API.
Solutions:
The steps described were OK until the fourth step, where I used the external component react-google-login to try to get the access token; instead, I used the googleapis module: Link Github googleapis
Also, to avoid the CORS problem (and not use the CORS Chrome plugin), I moved the requests to the Google API to the server side (Node.js).
I also had a problem in the frontend when I tried to generate a popup to ask for printer permission (CORS problems again); my solution was to use this very simple module for authentication: Link Github oauth-open
General scheme:
Explanation:
Assume I have all the data described in my question post (up to the third step).
Authentication:
The next step is getting a URL and using it so the user can authenticate.
As I said before, I used the oauth-open module in the frontend to generate the popup, and this module only needs the URL. To get the URL, the backend exposes the endpoint /googleurl, where I used the generateAuthUrl method of the googleapis module to generate the URL.
After that, in the frontend I take the authentication code (returned by oauth-open) and send it to my endpoint /googletoken, where I process the authentication code to generate the access token, refresh token, and expiration date with the getToken method of the googleapis module. Finally, these data are stored in the database.
Print:
For printing, the frontend sends whatever data needs to go to the printer to my endpoint /print.
In the backend endpoint, my logic was the following:
Recover the tokens and the expiration date from the database; using the expiration date, check whether the token has expired. If it has already expired, get another token and replace the old access token (and expiration date) with the new ones; to obtain this new data you only need to call the refreshAccessToken method of the googleapis module. Note: the refresh token never expires.
Once the access token is up to date, use it to send the data to the printer via the Google route (.../submit).
Code:
All the following code lives in a single file.
Some things such as validation, static variables, error handling, etc., have been removed for clarity.
Route to get the authentication URL:
const express = require('express');
const google = require('googleapis');
const router = express.Router();
var OAuth2 = google.auth.OAuth2;

const redirect_url = 'http://localhost:3001/setting'; // Your redirect URL

var oauth2Client = new OAuth2(
  'CLIENT ID',     // Replace it with your client id
  'CLIENT SECRET', // Replace it with your client secret
  redirect_url
);

var url = oauth2Client.generateAuthUrl({
  access_type: 'offline',
  scope: 'https://www.googleapis.com/auth/cloudprint'
});

router.get('/googleurl', (req, res) => {
  return res.status(200).send({
    result: { googleURLToken: url }
  });
});
To get the tokens using the authentication code and save them in the database:
const Setting = require('../models/setting'); // My model (Mongoose)

router.post('/googletoken', (req, res) => {
  oauth2Client.getToken(req.body.code, function (err, tokens) {
    oauth2Client.credentials = tokens;
    // If a refresh token exists, save it,
    // because the refresh token is only returned once! IMPORTANT
    if (tokens.hasOwnProperty('refresh_token')) {
      let setting = new Setting();
      setting.refreshTokenGoogle = tokens.refresh_token;
      setting.expirationTokenGoogle = tokens.expiry_date;
      setting.tokenGoogle = tokens.access_token;
      setting.save()
        .then((settingCreated) => {
          return res.status(200).send({
            message: 'OK'
          });
        })
    }
  });
});
To print
const axios = require('axios');
const moment = require('moment');

router.post('/print', async (req, res) => {
  const tickeProperties = {
    'version': '1.0',
    'print': {
      'vendor_ticket_item': [],
      'color': { 'type': 'STANDARD_MONOCHROME' },
      'copies': { 'copies': 1 }
    }
  };
  const accessToken = await getTokenGoogleUpdated();
  axios.get(
    'https://www.google.com/cloudprint/submit',
    {
      params: {
        printerid: printerID, // Replace by your printer ID
        title: 'title printer',
        ticket: tickeProperties,
        content: 'print this text of example!!!',
        contentType: 'text/plain'
      },
      headers: {
        'Authorization': 'Bearer ' + accessToken
      }
    }
  )
    .then(response => {
      return res.status(200).send({
        result: response.data
      });
    })
  }
);
async function getTokenGoogleUpdated() {
  return await Setting.find({})
    .then(async setting => {
      const refreshTokenGoogle = setting[0].refreshTokenGoogle;
      const expirationTokenGoogle = setting[0].expirationTokenGoogle;
      const tokenGoogle = setting[0].tokenGoogle;
      const dateToday = new Date();
      // 1 minute forward to avoid exact time
      const dateTodayPlus1Minute = moment(dateToday).add(1, 'm').toDate();
      const dateExpiration = new Date(expirationTokenGoogle);
      // Case date expiration, get new token
      if (dateExpiration < dateTodayPlus1Minute) {
        console.log('Updating access token');
        oauth2Client.credentials['refresh_token'] = refreshTokenGoogle;
        return await oauth2Client.refreshAccessToken(async function (err, tokens) {
          // Save new token and new expiration
          setting[0].expirationTokenGoogle = tokens.expiry_date;
          setting[0].tokenGoogle = tokens.access_token;
          await setting[0].save();
          return tokens.access_token;
        });
      } else {
        console.log('Using old access token');
        return tokenGoogle;
      }
    })
    .catch(err => {
      console.log(err);
    });
}
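To round this out, the frontend side of the /print step is not shown above; a minimal sketch of how it could be called (the endpoint name follows the description above, but the base URL and the request body field are assumptions, since the /print handler shown here hard-codes its content):
import axios from 'axios'

// Frontend call: ask the backend to refresh the token if needed and submit the job to GCP.
const printText = async (text) => {
  // '/print' is relative here; prefix it with wherever your Express server is reachable.
  const response = await axios.post('/print', { content: text });
  console.log(response.data.result);
};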
I hope this helps you use Google Cloud Print without wasting as much time as I did.
The important part is the scope https://www.googleapis.com/auth/cloudprint, which is not obvious and took me a day to figure out.
