ReactJS: Resize image before upload

In my ReactJS project, I need to resize an image before uploading it.
I am using the react-image-file-resizer library, which has a simple example, but it is not working for me.
I have tried the following, but it shows me a blank result. What am I doing wrong?
var imageURI = '';
const resizedImg = await Resizer.imageFileResizer(
  fileList.fileList[0].originFileObj,
  300,
  300,
  'JPEG',
  100,
  0,
  uri => {
    imageURI = uri;
    console.log(uri); // this logs the correct result, but I need it outside of this callback
  },
  'blob'
);
console.log(resizedImg);
console.log(imageURI);
// upload new image
...uploading image here..
If I call imgRef.put(uri); inside the uri callback, the image upload works, but I need to do that outside of that function.
How can I get the result into the imageURI variable and reuse it later?

First, wrap this resizer:
const resizeFile = (file) =>
  new Promise((resolve) => {
    Resizer.imageFileResizer(file, 300, 300, 'JPEG', 100, 0,
      (uri) => {
        resolve(uri);
      }, 'base64');
  });
And then use it in your async function:
const onChange = async (event) => {
  const file = event.target.files[0];
  const image = await resizeFile(file);
  console.log(image);
};
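If you need a Blob for upload (as in the original snippet) rather than a base64 string, the same promise wrapper works with the 'blob' output type that the question already uses; a minimal sketch:

const resizeFileToBlob = (file) =>
  new Promise((resolve) => {
    // Same call as above, but the last argument asks the library for a Blob,
    // which can go straight into FormData or a storage SDK's put().
    Resizer.imageFileResizer(file, 300, 300, 'JPEG', 100, 0,
      (blob) => resolve(blob),
      'blob');
  });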

OK, I figured it out using the compress.js library.
import Compress from 'compress.js';

// compress.js exposes a class; create one instance to reuse
const compress = new Compress();

async function resizeImageFn(file) {
  const resizedImage = await compress.compress([file], {
    size: 2, // the max size in MB, defaults to 2MB
    quality: 1, // the quality of the image, max is 1
    maxWidth: 300, // the max width of the output image, defaults to 1920px
    maxHeight: 300, // the max height of the output image, defaults to 1920px
    resize: true, // defaults to true, set false if you do not want to resize the image width and height
  });
  const img = resizedImage[0];
  const base64str = img.data;
  const imgExt = img.ext;
  const resizedFile = Compress.convertBase64ToFile(base64str, imgExt);
  return resizedFile;
}
It returns a File that can be uploaded to the server.
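A minimal usage sketch, where uploadToServer is a hypothetical helper standing in for your own upload code:

const onFileChange = async (event) => {
  const file = event.target.files[0];
  const resized = await resizeImageFn(file);
  // e.g. append to FormData and POST it; uploadToServer is hypothetical
  await uploadToServer(resized);
};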

Image resize in the browser should be pain-free, but it is not. You can use a package, but they are often poorly written and poorly maintained.
For that reason, I wrote my own code using several JavaScript APIs: FileReader, Image, canvas, and context. However, this code produces resizes with some pixelation. If you want even higher-quality resizes, I would recommend the Pica package, which uses web workers.
JavaScript
const uploadImage = (event) => {
  const [imageFile] = event.target.files;
  const { type: mimeType } = imageFile;

  const fileReader = new FileReader();
  fileReader.readAsDataURL(imageFile);
  fileReader.onload = (fileReaderEvent) => {
    const imageAsBase64 = fileReaderEvent.target.result;
    const image = document.createElement("img");
    // Wait for the image to decode before drawing it to the canvas,
    // otherwise its width/height are 0 and the canvas stays blank.
    image.onload = () => {
      const imageResizeWidth = 100;
      // if (image.width <= imageResizeWidth) {
      //   return;
      // }
      const canvas = document.createElement('canvas');
      canvas.width = imageResizeWidth;
      canvas.height = ~~(image.height * (imageResizeWidth / image.width));
      const context = canvas.getContext('2d', { alpha: false });
      // if (!context) {
      //   return;
      // }
      context.drawImage(image, 0, 0, canvas.width, canvas.height);
      // const resizedImageBinary = canvas.toBlob();
      const resizedImageAsBase64 = canvas.toDataURL(mimeType);
    };
    image.src = imageAsBase64;
  };
};
HTML
<form>
  <input type="file" accept="image/jpeg"
    onchange="uploadImage(event)"/>
</form>
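If the server expects binary data rather than a base64 string, the commented-out toBlob path can replace the toDataURL call inside the onload handler above; a minimal sketch, with a hypothetical /upload endpoint:

// canvas.toBlob is asynchronous and yields a Blob, which is what
// FormData/multipart uploads expect (canvas and mimeType are in scope
// inside the onload handler above).
canvas.toBlob((blob) => {
  const formData = new FormData();
  formData.append('image', blob, 'resized.jpg');
  // fetch('/upload', { method: 'POST', body: formData }); // hypothetical endpoint
}, mimeType);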

The library you are using will not resize the image for file upload.
It returns the new image's base64 URI or Blob. The URI can be used as the
source of an <img> component.

Related

How to change the photo width and height in React Native Vision Camera, or crop it?

I am simply trying to get a square image and lower the size of the image as much as possible while keeping any text in the image clear.
This is my attempt, but it does not work.
const capturePhoto = async () => {
  const options = {
    qualityPrioritization: 'speed',
    skipMetadata: true,
    flash: 'off',
    width: 500,
    height: 500,
  };
  if (camera.current !== null) {
    const photo = await camera.current.takePhoto(options);
    const {height, width, path} = photo;
    console.log(
      `The photo has a height of ${height} and a width of ${width}.`,
    );
  }
};
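For what it's worth, takePhoto's options don't resize the output. One approach (an assumption, not from this thread) is to post-process the saved photo, for example with @react-native-community/image-editor; a hedged sketch:

import ImageEditor from '@react-native-community/image-editor';

// Hedged sketch: crop the captured photo to a centered square, then scale it
// down to 500x500. The package choice and the options shown are assumptions.
const squareAndShrink = async (photo) => {
  const side = Math.min(photo.width, photo.height);
  return ImageEditor.cropImage(`file://${photo.path}`, {
    offset: { x: (photo.width - side) / 2, y: (photo.height - side) / 2 },
    size: { width: side, height: side },
    displaySize: { width: 500, height: 500 }, // shrink while keeping text legible
    resizeMode: 'cover',
  });
};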

How to use custom fonts in #react-pdf/pdfkit

Can anyone explain how to add custom fonts and use them with #react-pdf/pdfkit in React.js?
I tried to register a custom font like this:
import utils from "./utils";

const SVGtoPDF = require("svg-to-pdfkit");
const PDFDocument = require("#react-pdf/pdfkit").default;
const { Base64Encode } = require("base64-stream");

const generatePDF = async (width, height, canvas, fonts) => {
  const canvasSvg = canvas.toSVG();
  return await new Promise((resolve, reject) => {
    try {
      var doc = new PDFDocument({
        bufferPages: true,
        size: [width * 0.75, height * 0.75],
      });
      utils.getFont(fonts).forEach(async (font) => {
        doc.registerFont(font.family, font.path, font.family);
      });
      doc.font("Pacifico").text("Hello Message");
      let finalString = "";
      const stream = doc.pipe(new Base64Encode());
      SVGtoPDF(doc, canvasSvg, 0, 0);
      stream.on("data", (chunk) => (finalString += chunk));
      stream.on("end", () => {
        resolve(finalString);
      });
      doc.end();
    } catch (err) {
      reject(err);
    }
  });
};

export default generatePDF;
The getFont() function returns an array of objects with props like:
{
  path: "#fonts/Pacifico-Regular.ttf",
  family: "Pacifico",
  weight: "regular",
  style: "normal",
}
I have added #fonts as an alias for the path where I have saved the .ttf fonts.
So technically, the following is happening:
doc.registerFont("Pacifico", "src/fonts/Pacifico-Regular.ttf");
And I tried to use the font like this:
doc.font("Pacifico").text("Hello Message");
But I got the following error:
Unhandled Rejection (Error): fontkit.openSync unavailable for browser build
Is there any other method to add a font and use it with #react-pdf/pdfkit?
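One direction worth trying (an assumption, not a confirmed fix): the error comes from fontkit refusing to open a file path in the browser, so fetch the font bytes yourself and pass them to registerFont instead of a path:

// Hedged sketch: load the .ttf over HTTP instead of from a path.
// The URL is hypothetical, and registerFont accepting raw bytes
// (a Buffer/Uint8Array) is an assumption about the browser build.
const response = await fetch("/fonts/Pacifico-Regular.ttf");
const fontBytes = new Uint8Array(await response.arrayBuffer());
doc.registerFont("Pacifico", fontBytes);
doc.font("Pacifico").text("Hello Message");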

Merge created PDF with existing local PDF in ReactJS

I create a PDF in ReactJS using react-pdf/renderer and download it using file-saver.
Here is my code that creates the PDF and downloads it:
const LazyDownloadPDFButton = (number, date, totalHours, formattedHours) => (
  <Button
    className={classes.download}
    onClick={async () => {
      const doc = <InvoicePDF number={number} date={date} totalHours={totalHours} formattedHours={formattedHours} />
      const asPdf = pdf()
      asPdf.updateContainer(doc)
      const blob = await asPdf.toBlob()
      saveAs(blob, `PDF${number}.pdf`)
    }}>
    Download
  </Button>
)
where InvoicePDF is a separate component that renders the PDF pages with the necessary arguments, as shown on the react-pdf/renderer documentation page.
Before downloading the actual PDF, I have to merge it with another existing PDF that will be chosen from the computer drive. For that I have the following code snippet:
fileRef = useRef()

<Button onClick={() => fileRef.current.click()}>
  Upload file
  <input
    ref={fileRef}
    type='file'
    style={{ display: 'none' }}
  />
</Button>
This returns me the details of the file.
I tried updateContainer with this selected file, but there are errors.
How should this new file be merged with the InvoicePDF that is created?
In the meantime, I tried to create my final blob from ArrayBuffers like this:
This is the function that concatenates the created PDF with the selected PDF, and it returns the correct total length.
function concatArrayBuffers(buffer1, buffer2) {
  if (!buffer1) {
    return buffer2;
  } else if (!buffer2) {
    return buffer1;
  }
  var tmp = new Uint8Array(buffer1.byteLength + buffer2.byteLength);
  tmp.set(new Uint8Array(buffer1), 0);
  tmp.set(new Uint8Array(buffer2), buffer1.byteLength);
  return tmp.buffer;
};
My method now has a finalBlob that is created from the ArrayBuffers, but the problem is that the resulting PDF always contains just the content of the second ArrayBuffer (either the selected PDF or the created PDF). That is expected: concatenating the raw bytes of two PDFs does not produce a valid merged document, because a PDF viewer follows a single cross-reference table and ignores the other file's bytes.
const LazyDownloadPDFButton = (number, date, totalHours, formattedHours) => (
  <Button
    className={classes.download}
    onClick={async () => {
      const doc = <InvoicePDF number={number} date={date} totalHours={totalHours} formattedHours={formattedHours} />
      const asPdf = pdf()
      asPdf.updateContainer(doc)
      const initialBlob = await new Blob([fileRef.current.files[0]], { type: 'application/pdf' }).arrayBuffer()
      const blob = await (await asPdf.toBlob()).arrayBuffer()
      const finalArrayBuffer = concatArrayBuffers(initialBlob, blob)
      const finalBlob = new Blob([finalArrayBuffer], { type: 'application/pdf' })
      saveAs(finalBlob, `PDF${number}.pdf`)
    }}
  >
    Download
  </Button>
)
Just A Simple Solution Made By me...
https://github.com/ManasMadan/pdf-actions
https://www.npmjs.com/package/pdf-actions
import { createPDF, pdfArrayToBlob, mergePDF } from "pdf-actions";
import { saveAs } from "file-saver"; // to actually save the result to disk

// Async function to merge PDF files uploaded using the input tag in HTML
const mergePDFHandler = async (files) => {
  // Converting the File object array to a PDF document array
  const pdfDocs = await Promise.all(
    files.map((file) => createPDF.PDFDocumentFromFile(file))
  );
  // Merging the PDF documents into a single PDFDocument
  const mergedPDFDocument = await mergePDF(pdfDocs);
  // Converting the merged document to a Uint8Array
  const mergedPdfFile = await mergedPDFDocument.save();
  // Converting the Uint8Array to a Blob and saving the file to disk
  const pdfBlob = pdfArrayToBlob(mergedPdfFile);
  saveAs(pdfBlob, "merged.pdf");
};
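A minimal sketch of wiring this handler to a file input (the multiple attribute and the inline handler are assumptions, not part of the original answer):

<input
  type="file"
  accept="application/pdf"
  multiple
  onChange={(e) => mergePDFHandler(Array.from(e.target.files))}
/>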
Solution
After some research and a lot of failed tries, I came to an answer on how to merge the PDFs in the correct order and, as a bonus, add an image (in my case a signature) on every page of the final PDF.
This is the final code:
import { PDFDocument } from 'pdf-lib'
import { saveAs } from 'file-saver'

function base64toBlob(base64Data, contentType) {
  contentType = contentType || '';
  var sliceSize = 1024;
  var byteCharacters = atob(base64Data);
  var bytesLength = byteCharacters.length;
  var slicesCount = Math.ceil(bytesLength / sliceSize);
  var byteArrays = new Array(slicesCount);
  for (var sliceIndex = 0; sliceIndex < slicesCount; ++sliceIndex) {
    var begin = sliceIndex * sliceSize;
    var end = Math.min(begin + sliceSize, bytesLength);
    var bytes = new Array(end - begin);
    for (var offset = begin, i = 0; offset < end; ++i, ++offset) {
      bytes[i] = byteCharacters[offset].charCodeAt(0);
    }
    byteArrays[sliceIndex] = new Uint8Array(bytes);
  }
  return new Blob(byteArrays, { type: contentType });
}

async function mergeBetweenPDF(pdfFileList, number) {
  const doc = await PDFDocument.create()
  // Pick the signature image for the current user
  // (FirstImage/SecondImage are local assets, selectedUser is component state)
  const getUserSignature = () => {
    switch (selectedUser.id) {
      case 1:
        return FirstImage
      case 2:
        return SecondImage
      default:
        return null
    }
  }
  const pngURL = getUserSignature()
  const pngImageBytes = pngURL ? await fetch(pngURL).then((res) => res.arrayBuffer()) : null
  const pngImage = pngURL ? await doc.embedPng(pngImageBytes) : null
  const pngDims = pngURL ? pngImage.scale(0.5) : null
  const initialPDF = await PDFDocument.load(pdfFileList[0])
  const appendixPDF = await PDFDocument.load(pdfFileList[1])
  const initialPDFPages = await doc.copyPages(initialPDF, initialPDF.getPageIndices())
  for (const page of initialPDFPages) {
    if (pngURL) {
      page.drawImage(pngImage, {
        x: page.getWidth() / 2 - pngDims.width / 2 + 75,
        y: page.getHeight() / 2 - pngDims.height,
        width: pngDims.width,
        height: pngDims.height,
      });
    }
    doc.addPage(page)
  }
  const appendixPDFPages = await doc.copyPages(appendixPDF, appendixPDF.getPageIndices())
  for (const page of appendixPDFPages) {
    if (pngURL) {
      page.drawImage(pngImage, {
        x: page.getWidth() / 2 - pngDims.width / 2 + 75,
        y: page.getHeight() / 2 - pngDims.height,
        width: pngDims.width,
        height: pngDims.height,
      });
    }
    doc.addPage(page)
  }
  const base64 = await doc.saveAsBase64()
  const bufferArray = base64toBlob(base64, 'application/pdf')
  const blob = new Blob([bufferArray], { type: 'application/pdf' })
  saveAs(blob, `Appendix${number}.pdf`)
}
const LazyDownloadPDFButton = (number, date, totalHours, formattedHours) => (
  <Button
    className={classes.download}
    onClick={async () => {
      const doc = <InvoicePDF number={number} date={date} totalHours={totalHours} formattedHours={formattedHours} />
      const asPdf = pdf()
      asPdf.updateContainer(doc)
      let initialBlob = await new Blob([fileRef.current.files[0]], { type: 'application/pdf' }).arrayBuffer()
      let appendixBlob = await (await asPdf.toBlob()).arrayBuffer()
      mergeBetweenPDF([initialBlob, appendixBlob], number)
    }}
  >
    Download
  </Button>
)
So LazyDownloadPDFButton is my button that takes the respective parameters to create the final PDF. InvoicePDF is my created PDF with the parameters; initialBlob is the PDF that I upload on my page, which needs to come first in the merged PDF; and appendixBlob is the created PDF that will be attached after initialBlob.
In mergeBetweenPDF I am using the pdf-lib library to build the final document: I embed the image, load the two PDFs that are passed in, loop over their pages, draw the image on every page, and then add every page to the final doc, which is downloaded.
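As an aside, pdf-lib's save() already resolves to a Uint8Array, so the saveAsBase64/base64toBlob roundtrip above could be skipped; a minimal sketch:

// doc.save() resolves to a Uint8Array that the Blob constructor accepts directly.
const pdfBytes = await doc.save();
const blob = new Blob([pdfBytes], { type: 'application/pdf' });
saveAs(blob, `Appendix${number}.pdf`);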
Hope one day this will help someone.

JSSIP: How to switch from an audio call to a video call

I am new to JSSIP. I need to switch an audio call to a video call during an ongoing call.
const session = userAgent.call(destinationNumber, {
  mediaConstraints: {
    audio: true,
    video: false
  },
  pcConfig: {
    iceServers: [{ urls: Config.STUN_SERVER }]
  }
});
This is how I initiate an audio call. How can I switch to a video call in the middle of the call?
You can use a trick: initiate the call with video, but instead of a real video track, supply a dummy "silent" video track:
function createSilentVideoTrack() {
  const canvas = document.createElement("canvas");
  canvas.width = 50;
  canvas.height = 30;
  canvas.getContext("2d").fillRect(0, 0, canvas.width, canvas.height);
  animateCanvas(canvas);
  const stream = canvas.captureStream(1);
  const tracks = stream.getTracks();
  const videoTrack = tracks[0];
  return videoTrack;
}

// captureStream only emits frames when the canvas repaints, so keep redrawing it.
// (This helper was not defined in the original answer; a minimal version might look like this.)
function animateCanvas(canvas) {
  const context = canvas.getContext("2d");
  setInterval(() => context.fillRect(0, 0, canvas.width, canvas.height), 1000);
}
And when you need to enable video, you just replace the dummy video track with the real one:
// `constraints` should request video, e.g. { video: true }
navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
  // getUserMedia resolves to a MediaStream; take its video track
  const track = stream.getVideoTracks()[0];
  connection.getSenders()
    .filter((sender) => sender.track !== null && sender.track.kind === "video")
    .forEach((sender) => sender.replaceTrack(track));
});
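If you would rather start audio-only and add video mid-call instead of using the dummy-track trick, JsSIP's RTCSession also exposes renegotiate() to announce the new track to the remote side; a hedged sketch (the options shown are assumptions):

// Hedged sketch: after adding or replacing the video track on the underlying
// RTCPeerConnection (session.connection), ask JsSIP to renegotiate the SDP.
session.renegotiate({ useUpdate: false }, () =>
  console.log('renegotiation completed')
);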

How to use .obj files in expo three.js without AR module? [React Native Expo]

My problem is how to use .mtl and .obj files with expo-three, but I don't want to use AR; I only want a simple canvas/View with the object rotating.
The code below is the kind of thing I want, but I need it to load my .obj file instead of creating a cube.
import { View as GraphicsView } from 'expo-graphics';
import ExpoTHREE, { THREE } from 'expo-three';
import React from 'react';
import Assets from './Assets.js';
import ThreeStage from './ThreeStage.js';

export default class App extends React.Component {
  componentWillMount() {
    THREE.suppressExpoWarnings();
  }

  render() {
    return (
      <GraphicsView
        onContextCreate={this.onContextCreate}
        onRender={this.onRender}
      />
    );
  }

  async setupModels() {
    await super.setupModels();
    const model = Assets.models.obj.ninja;
    const SCALE = 2.436143; // from original model
    const BIAS = -0.428408; // from original model
    const object = await ExpoTHREE.loadObjAsync({
      asset: require('ninja.obj'),
    });
    const materialStandard = new THREE.MeshStandardMaterial({
      color: 0xffffff,
      metalness: 0.5,
      roughness: 0.6,
      displacementScale: SCALE,
      displacementBias: BIAS,
      normalScale: new THREE.Vector2(1, -1),
      //flatShading: true,
      side: THREE.DoubleSide,
    });
    const geometry = object.children[0].geometry;
    geometry.attributes.uv2 = geometry.attributes.uv;
    geometry.center();
    const mesh = new THREE.Mesh(geometry, materialStandard);
    mesh.scale.multiplyScalar(0.25);
    ExpoTHREE.utils.scaleLongestSideToSize(mesh, 1);
    ExpoTHREE.utils.alignMesh(mesh, { y: 1 });
    this.scene.add(mesh);
    this.mesh = mesh;
  }

  onRender(delta) {
    super.onRender(delta);
    this.mesh.rotation.y += 0.5 * delta;
  }
}
My Assets.js file, which contains the path to my 3D model (.obj):
export default {
  obj: {
    "museu.obj": require('../Conteudos_AV/museu1.obj'),
  }
};
And my ThreeStage.js file, which App.js imports:
import ExpoTHREE, { THREE } from 'expo-three';

class ThreeStage {
  constructor() {
    this.onRender = this.onRender.bind(this);
    this.setupControls = this.setupControls.bind(this);
    this.onResize = this.onResize.bind(this);
    this.setupCamera = this.setupCamera.bind(this);
    this.setupScene = this.setupScene.bind(this);
  }

  onContextCreate = async ({
    gl,
    canvas,
    width,
    height,
    scale: pixelRatio,
  }) => {
    this.gl = gl;
    this.canvas = canvas;
    this.width = width;
    this.height = height;
    this.pixelRatio = pixelRatio;
    await this.setupAsync();
  };

  setupAsync = async () => {
    const { gl, canvas, width, height, pixelRatio } = this;
    await this.setupRenderer({ gl, canvas, width, height, pixelRatio });
    await this.setupScene();
    await this.setupCamera({ width, height });
    await this.setupLights();
    await this.setupModels();
    await this.setupControls();
  };

  setupControls() {
    new THREE.OrbitControls(this.camera);
  }

  setupRenderer = props => {
    this.renderer = new ExpoTHREE.Renderer(props);
    this.renderer.capabilities.maxVertexUniforms = 52502;
  };

  setupCamera({ width, height }) {
    this.camera = new THREE.PerspectiveCamera(50, width / height, 0.1, 10000);
    this.camera.position.set(0, 6, 12);
    this.camera.lookAt(0, 0, 0);
  }

  setupScene() {
    this.scene = new THREE.Scene();
    this.scene.background = new THREE.Color(0x999999);
    this.scene.fog = new THREE.FogExp2(0xcccccc, 0.002);
    this.scene.add(new THREE.GridHelper(50, 50, 0xffffff, 0x555555));
  }

  setupLights = () => {
    const directionalLightA = new THREE.DirectionalLight(0xffffff);
    directionalLightA.position.set(1, 1, 1);
    this.scene.add(directionalLightA);
    const directionalLightB = new THREE.DirectionalLight(0xffeedd);
    directionalLightB.position.set(-1, -1, -1);
    this.scene.add(directionalLightB);
    const ambientLight = new THREE.AmbientLight(0x222222);
    this.scene.add(ambientLight);
  };

  async setupModels() {}

  onResize({ width, height, scale }) {
    this.camera.aspect = width / height;
    this.camera.updateProjectionMatrix();
    this.renderer.setPixelRatio(scale);
    this.renderer.setSize(width, height);
    this.width = width;
    this.height = height;
    this.pixelRatio = scale;
  }

  onRender(delta) {
    this.renderer.render(this.scene, this.camera);
  }
}

export default ThreeStage;
It looks like the provided code creates a ThreeStage class that is imported but never used by the class containing the Expo GraphicsView.
The examples provided in the expo-three repo use a somewhat esoteric structure, since each is meant to be served through a react-navigation app with a centralized asset library and abstracted components. That is a lot of extra overhead for a simple application that just needs to display a model on the screen. Here is a pared-down version:
import React from 'react';
import ExpoTHREE, { THREE } from 'expo-three';
import { GraphicsView } from 'expo-graphics';

export default class App extends React.Component {
  componentDidMount() {
    THREE.suppressExpoWarnings();
  }

  render() {
    return (
      <GraphicsView
        onContextCreate={this.onContextCreate}
        onRender={this.onRender}
        onResize={this.onResize}
      />
    );
  }

  // When our context is built we can start coding 3D things.
  onContextCreate = async ({ gl, pixelRatio, width, height }) => {
    // Create a 3D renderer
    this.renderer = new ExpoTHREE.Renderer({
      gl,
      pixelRatio,
      width,
      height,
    });
    // We will add all of our meshes to this scene.
    this.scene = new THREE.Scene();
    this.scene.background = new THREE.Color(0xbebebe);
    this.camera = new THREE.PerspectiveCamera(45, width / height, 1, 1000);
    this.camera.position.set(3, 3, 3);
    this.camera.lookAt(0, 0, 0);
    this.scene.add(new THREE.AmbientLight(0xffffff));
    await this.loadModel();
  };

  loadModel = async () => {
    const obj = {
      "museu.obj": require('../Conteudos_AV/museu1.obj')
    };
    const model = await ExpoTHREE.loadAsync(
      obj['museu.obj'],
      null,
      obj
    );
    // this ensures the model will be small enough to be viewed properly
    ExpoTHREE.utils.scaleLongestSideToSize(model, 1);
    this.scene.add(model);
  };

  // When the phone rotates, or the view changes size, this method will be called.
  onResize = ({ x, y, scale, width, height }) => {
    // Let's stop the function if we haven't set up our scene yet
    if (!this.renderer) {
      return;
    }
    this.camera.aspect = width / height;
    this.camera.updateProjectionMatrix();
    this.renderer.setPixelRatio(scale);
    this.renderer.setSize(width, height);
  };

  // Called every frame.
  onRender = delta => {
    // Finally render the scene with the Camera
    this.renderer.render(this.scene, this.camera);
  };
}
I adapted this code from one of Evan's expo snack examples, which are a little easier to follow since they don't have a lot of the overhead of the whole example app. You can find more on his expo snack page here: https://expo.io/snacks/#bacon.
This code should render your object file, but may run into issues if your .obj relies on additional material or texture files. If that is the case, you'll need to add them to the loadModel function like this:
const obj = {
  "museu.obj": require('../Conteudos_AV/museu1.obj'),
  "museu.mtl": require('../Conteudos_AV/museu1.mtl'),
  "museu.png": require('../Conteudos_AV/museu1.png'),
};
const model = await ExpoTHREE.loadAsync(
  [obj['museu.obj'], obj['museu.mtl']],
  null,
  obj
);
I'd recommend taking a look at expo snacks that use expo-three rather than the example app when you're getting started, as it can be a bit of a quagmire to work through all the intricacies of the example.
I don't have a device handy at the moment to test, but let me know if you have any issues with the above code and I can troubleshoot when I'm back beside a phone and laptop together.
