I am developing a course platform using ReactJS. When a user finishes a course, they can download a PDF certificate.
I need a version of the same file as an image (PNG or JPG), but I haven't found a way to do that. Can someone help me?
To generate the PDF certificate I'm using the React-PDF library.
This is my code to generate the PDF file:
<PDFDownloadLink
  document={
    <Certificate course={course} name={name} date={today()} />
  }
  fileName="somename.pdf"
>
  {({ blob, url, loading, error }) => {
    return loading ? 'Loading document...' : 'Download now!';
  }}
</PDFDownloadLink>
I created a helper function, convertPdfToImages, which takes the PDF file and returns an array of images encoded in base64, using the pdfjs package:
npm install pdfjs-dist -S
const PDFJS = require("pdfjs-dist/webpack");

const readFileData = (file) => {
  return new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = (e) => {
      resolve(e.target.result);
    };
    reader.onerror = (err) => {
      reject(err);
    };
    reader.readAsDataURL(file);
  });
};

//param: file -> the input file (e.g. event.target.files[0])
//return: images -> an array of images encoded in base64
const convertPdfToImages = async (file) => {
  const images = [];
  const data = await readFileData(file);
  const pdf = await PDFJS.getDocument(data).promise;
  const canvas = document.createElement("canvas");
  for (let i = 0; i < pdf.numPages; i++) {
    const page = await pdf.getPage(i + 1);
    const viewport = page.getViewport({ scale: 1 });
    const context = canvas.getContext("2d");
    canvas.height = viewport.height;
    canvas.width = viewport.width;
    await page.render({ canvasContext: context, viewport: viewport }).promise;
    images.push(canvas.toDataURL()); // push, not append: JS arrays have no append()
  }
  canvas.remove();
  return images;
};
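For completeness, a minimal usage sketch (my own addition, not part of the original post): FileReader accepts a Blob as well as a File, so the blob exposed by PDFDownloadLink's render prop, or a file picked from an input element, can both be handed to the helper.
// Hypothetical usage with a file input; the blob from PDFDownloadLink's render prop works the same way.
const onFileChange = async (event) => {
  const images = await convertPdfToImages(event.target.files[0]);
  // images[0] is a base64-encoded data URL (PNG by default) of the first page
  images.forEach((dataUrl, i) => console.log(`page ${i + 1}:`, dataUrl));
};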
Please use this library:
https://www.npmjs.com/package/react-pdf-to-image
It is pretty straightforward. It returns the list of images (each page in the PDF as one image):
import React from 'react';
import { PDFtoIMG } from 'react-pdf-to-image';
import file from './pdf-sample.pdf';

const App = () => (
  <div>
    <PDFtoIMG file={file}>
      {({ pages }) => {
        if (!pages.length) return 'Loading...';
        return pages.map((page, index) =>
          <img key={index} src={page} />
        );
      }}
    </PDFtoIMG>
  </div>
);

export default App;
If you want to download each PDF page as an image instead of rendering a component, follow the code below:
import PDFJS from 'pdfjs-dist/webpack';
This is the dependency library for react-pdf-to-image. Then read the PDF file (I'm giving base64 as input):
PDFJS.getDocument(blob).promise.then(pdf => {
  const pages = [];
  this.pdf = pdf;
  for (let i = 0; i < this.pdf.numPages; i++) {
    this.getPage(i + 1).then(result => {
      // the result is the base64 version of the image
    });
  }
})
After loading the document, render each page to an image with the getPage method below:
getPage = (num) => {
  return new Promise((resolve, reject) => {
    this.pdf.getPage(num).then(page => {
      const scale = 1.5; // pass scale as a number, not a string
      const viewport = page.getViewport({
        scale: scale
      });
      const canvas = document.createElement('canvas');
      const canvasContext = canvas.getContext('2d');
      canvas.height = viewport.height || viewport.viewBox[3]; /* fallback in case viewport.height is NaN */
      canvas.width = viewport.width || viewport.viewBox[2]; /* fallback in case viewport.width is NaN */
      page.render({
        canvasContext, viewport
      }).promise.then((res) => {
        resolve(canvas.toDataURL());
      });
    });
  });
}
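If the goal is to save those images rather than just hold the data URLs, a small helper can trigger a browser download. This is a minimal sketch of my own; the function name is made up:
const downloadImage = (dataUrl, fileName) => {
  const link = document.createElement('a');
  link.href = dataUrl;
  link.download = fileName; // e.g. "certificate-page-1.png"
  link.click();
};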
Related
I have a React app that displays a toolbar and 2 canvases built in Three.js. I would like to take a screenshot of the entire app.
I already tried niklasvh/html2canvas:
const element = document.getElementsByClassName("contentContainer-5")[0] as HTMLElement;
const url = html2canvas(element)
    .then((canvas) => {
        return canvas.toDataURL();
    });
const screenshotObject = {
    url: url,
    width: 128,
    height: 64,
};
return screenshotObject;
and an HTML5 Blob approach:
takeScreenshot() {
    const screenshot = document.documentElement
        .cloneNode(true) as Element;
    const blob = new Blob([screenshot.outerHTML], {
        type: 'text/html'
    });
    return blob;
}

generate() {
    window.URL = window.URL || window.webkitURL;
    window.open(window.URL
        .createObjectURL(this.takeScreenshot()));
}
In the first case, the screenshot's URL is very long but the image is empty.
In the second case, the HTML and CSS are snapshotted perfectly, but the canvases are empty.
.cloneNode() only copies the DOM nodes; a canvas's drawing buffer is not part of the DOM, so cloned canvases come out blank. A workaround is to patch cloneNode so it also copies the pixels (as done here):
const takeScreenshot = () => {
    const _cloneNodeFn = HTMLCanvasElement.prototype.cloneNode;
    HTMLCanvasElement.prototype.cloneNode = function () {
        const customCloneNodeFn = _cloneNodeFn.apply(this, arguments);
        if (this.getContext("2d")) {
            // copy the original canvas's pixels into the clone
            customCloneNodeFn.getContext("2d").drawImage(this, 0, 0);
        }
        return customCloneNodeFn;
    };
};

// apply the patch before cloning
takeScreenshot();

const _canvas = document.querySelector("canvas");
_canvas.getContext("2d").fillRect(20, 20, 20, 20);
for (let i = 0; i < 20; i++) {
    document.body.appendChild(_canvas.cloneNode());
}
Note: cloneNode has to be called on the canvas directly, and not the parent node.
To take a screenshot of a Three.js scene using JavaScript, you can create a new canvas with the original canvas's dimensions, draw the rendered image onto its context, convert it to a blob, and then attach it to an invisible <a> tag that is clicked programmatically to download the screenshot.
I created the following barebones code that takes screenshots of scenes:
export const screenshot = async (idx) => {
    const element = document.getElementById('existing-canvas-id');
    const canvas = document.createElement('canvas'); // createElement takes a tag name, not an id
    canvas.width = element.width;
    canvas.height = element.height;
    const context = canvas.getContext('2d');

    // copy the WebGL canvas into the 2D canvas via an Image
    const url = element.toDataURL();
    const img = new Image();
    await new Promise((resolve) => {
        img.onload = resolve;
        img.src = url;
    });
    context.drawImage(img, 0, 0);

    // toBlob is callback-based, so wrap it in a promise
    const blob = await new Promise((resolve) => canvas.toBlob(resolve));
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = `${idx}.png`;
    link.dispatchEvent(new MouseEvent('click'));
};
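A usage note (my own addition, hedged): WebGL drawing buffers are typically cleared after compositing, so toDataURL on a Three.js canvas can come back blank unless the capture happens right after a render call or the renderer was created with preserveDrawingBuffer enabled.
// Hypothetical usage; renderer, scene and camera are assumed to exist in your app.
// If the capture is still blank, create the renderer with:
// const renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
renderer.render(scene, camera);
screenshot(0); // downloads "0.png"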
I have a custom-trained tensorflow.js graph model (the link can be found in the getModel method) that recognizes images of fly agarics. However, the prediction output (the result of model.executeAsync()) never changes, which is evident from the console.log in the renderPredictions method.
I would greatly appreciate any help.
webcamView.tsx
import { useEffect, useRef, useState } from "react";
import * as tf from '@tensorflow/tfjs-core';
import { loadGraphModel } from "@tensorflow/tfjs-converter";
import '@tensorflow/tfjs-backend-cpu';
export default function WebcamView() {
  const videoWidth = 640;
  const videoHeight = 500;

  const videoRef = useRef<HTMLVideoElement>(null);
  const [loading, setLoading] = useState<boolean>(true);
  let model: any = undefined;

  const getModel = async () => {
    model = await loadGraphModel('https://raw.githubusercontent.com/AlanChen4/FungEye/main/data/web_model_old/model.json');
    return model;
  };

  const startVideo = async () => {
    navigator.mediaDevices.getUserMedia({
      video: {
        facingMode: 'environment',
      }
    }).then(stream => {
      if (videoRef.current !== null) {
        const video = videoRef.current;
        video.srcObject = stream;
        video.onloadedmetadata = () => {
          video.play();
          video.addEventListener('loadeddata', processVideoInput);
        }
      }
    })
  };

  const renderPredictions = (predictions: any) => {
    const predictionBoxes = predictions[6].dataSync();
    const predictionClasses = predictions[2].dataSync();
    const predictionScores = predictions[4].dataSync();

    // this always prints the same scores
    console.log(predictionScores);
  };

  const processVideoInput = () => {
    // classify the frame in the video stream and then repeat to process the next frame
    if (videoRef.current !== null) {
      classifyVideoInput(videoRef.current).then(() => {
        window.requestAnimationFrame(processVideoInput);
      })
    }
  }

  const classifyVideoInput = async (videoImage: HTMLVideoElement) => {
    // get next video frame
    await tf.nextFrame();

    // convert the video frame to a tensor
    const tfImage = tf.browser.fromPixels(videoImage);
    // resize the image to match the detection size
    const smallerImage = tf.image.resizeBilinear(tfImage, [videoHeight, videoWidth]);
    // convert the resized image to the format expected by the model
    const resizedImage = tf.cast(smallerImage, 'int32');
    let tf4d_ = tf.tensor4d(Array.from(resizedImage.dataSync()), [1, videoHeight, videoWidth, 3]);
    const tf4d = tf.cast(tf4d_, 'int32');

    // generate predictions from the model
    let predictions = await model.executeAsync(tf4d);
    renderPredictions(predictions);

    tfImage.dispose();
    smallerImage.dispose();
    resizedImage.dispose();
    tf4d.dispose();
  };

  useEffect(() => {
    const start = async () => {
      await getModel();
      await startVideo();
      setLoading(false);
    };
    start();
  }, []);

  return (
    <div className="p-8">
      <div className="flex justify-center p-5">
        <div id="videoView" style={{ cursor: 'pointer' }}>
          {loading
            ? <p className="text-center text-semibold p-5">Loading Model...</p>
            : <video ref={videoRef} playsInline autoPlay muted width={videoWidth} height={videoHeight}></video>
          }
        </div>
      </div>
    </div>
  );
}
I have checked to make sure that video processing only occurs after the video has loaded. I've gone through numerous threads and posts but cannot seem to find the cause of this issue. I am using tensorflow.js 3.1.0.
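For reference, one way to narrow this down (a hedged debugging sketch of my own, not from the post) is to log a quick checksum of each input tensor inside classifyVideoInput, just before executeAsync, to confirm that the frames fed to the model actually differ between calls:
// If this number never changes either, the problem is in the input pipeline
// rather than in the model itself.
const checksum = tf.sum(tf4d).dataSync()[0];
console.log('input checksum:', checksum);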
I added options to the existing toolbar in Gutenberg. I have a lot of options, so I thought they could be registered automatically. The tasks are simple: each one has to call OpenAI based on the selected block, but every call should use the latest text. I am not a frontend developer, so I'm having trouble understanding where I made a mistake and how it should be improved.
Component:
const AiBlockEdit = (props) => {
    const [registered, _] = useState(() => {
        let localRegistered = [];
        FORMAT_LIST.forEach((format) => {
            if (!localRegistered.includes(format)) {
                let result = register(new OpenAI(globalOpenAi), wrappProps(format, props));
                localRegistered.push(result);
            }
        });
        return localRegistered;
    });

    return (
        <>
            <BlockControls>
                <ToolbarDropdownMenu
                    icon={ AI }
                    label="Select a direction"
                    controls={ registered }
                />
            </BlockControls>
        </>
    );
};
Logic:
export const wrappProps = (item, props) => {
    return {
        ...item,
        props: props
    }
}

export function register(api, item) {
    const { value, onChange } = item.props;

    // Build message
    let promptFormatType = new BuildPrompt()
        .addText(item.input)
        .addText(" ")
        .addText(value.text)
        .build();

    // Prepare openai settings, use default
    let requestParams = new OpenAIParams().setPrompt(promptFormatType).build();
    let request = new item.Query(requestParams);

    // Build settings
    return new SettingsFormatType()
        .setIcon(item.icon)
        .setTitle(item.title)
        .setOnclick(() => {
            api.send(request, (response) => {
                let text = response.data.choices[0].text;
                onChange(insert(value, value.text + text));
                // let xd = create({
                //     text: response.data.choices[0].text,
                // })
                // onChange(item.fn(value, xd));
            });
        }).build();
}
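For what it's worth, a hedged sketch of one possible direction (my own reading, not part of the post): the controls are built once inside the useState initializer, so the value that register closes over never updates after the first render. Rebuilding the controls whenever the value changes would let each click read the current text, for example:
import { useMemo } from '@wordpress/element';

const AiBlockEdit = (props) => {
    // Rebuild the dropdown controls whenever the value changes so the prompt
    // is based on the text at click time, not the text captured on mount.
    const registered = useMemo(
        () => FORMAT_LIST.map((format) => register(new OpenAI(globalOpenAi), wrappProps(format, props))),
        [props.value] // assumed dependency; use whatever prop actually carries the block text
    );
    // ...render <BlockControls> with controls={registered} as before
};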
I need help on this matter. I am trying to extract the keypoints from BodyPix and save them to a JSON file. I can see the keypoints in the console whenever I inspect it in the browser. I would also like to store the keypoints in an array or a variable so that I can use them throughout the code. Thank you very much. Here's the code:
function App() {
    const bodyPixProperties = {
        architecture: 'MobileNetV1',
        outputStride: 16,
        multiplier: 0.75,
        quantBytes: 4
    };

    const runBodysegment = async () => {
        const net = await bodyPix.load(bodyPixProperties);
        console.log("BodyPix model loaded.");
        // Loop and detect hands
        setInterval(() => {
            detect(net);
        }, 100);
    };

    const detect = async (net) => {
        // Check data is available
        if (
            ...
            const person = await net.segmentPersonParts(video, {
                flipHorizontal: false,
                internalResolution: 'medium',
                segmentationThreshold: 0.7
            });
            console.log(person);

            // const coloredPartImage = bodyPix.toMask(person);
            const coloredPartImage = bodyPix.toColoredPartMask(person);
            const opacity = 0.7;
            const flipHorizontal = false;
            const maskBlurAmount = 0;
            const canvas = canvasRef.current;

            bodyPix.drawMask(
                canvas,
                video,
                coloredPartImage,
                opacity,
                maskBlurAmount,
                flipHorizontal
            );
        }
    };

    runBodysegment();
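For reference, a hedged sketch (my own addition, not from the post): the keypoints live on the allPoses field of the segmentation result, so they can be collected into a variable inside detect and saved as JSON, for example:
// Inside detect(), after segmentPersonParts resolves:
const keypoints = person.allPoses.flatMap(pose => pose.keypoints);
// store `keypoints` in state or a ref to reuse it elsewhere in the component

// Hypothetical helper to download the collected keypoints as a JSON file:
const saveKeypoints = (keypoints) => {
    const blob = new Blob([JSON.stringify(keypoints, null, 2)], { type: 'application/json' });
    const link = document.createElement('a');
    link.href = URL.createObjectURL(blob);
    link.download = 'keypoints.json';
    link.click();
};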
I am using Clarifai's API to detect faces in an image. It was working fine and I deployed it to GitHub Pages. After some time it stopped working and started giving me status code 400 and status code 10020 in the network tab, although I am using the correct image format that Clarifai expects, which is base64. At the same time, my app uses Clarifai's apparel detection model, which works perfectly fine.
Below is the relevant code:
import React from 'react';
import Clarifai from 'clarifai';
import { connect } from 'react-redux';
import { setFaceBoundary, setApparelBoundary, numberOfFaces, setBoundingBox, setApparelsInfo, setWithSpinner } from '../../redux/box/box.actions';
import { setImageDimensions } from '../../redux/image/image.actions.js';

import './models-options.styles.css';

const app = new Clarifai.App({
    apiKey: 'MY_API_KEY'
});

const ModelsOptions = ({ setFaceBoundary, setApparelBoundary, fileProperties, numberOfFaces, setBoundingBox, setApparelsInfo, setWithSpinner, setImageDimensions }) => {

    const calculateApparel = (data) => {
        const conceptsArray = data.outputs[0].data.regions.map(concepts => concepts.data.concepts);
        setApparelsInfo(conceptsArray);

        const outputs = data.outputs[0].data.regions.map(apparels => apparels.region_info.bounding_box);
        console.log(outputs);
        setBoundingBox(outputs);

        const image = document.getElementById("inputImage");
        console.log('image dimensions', image.naturalWidth, image.naturalHeight);
        const width = image.naturalWidth;
        const height = image.naturalHeight;

        const apparelsLocation = outputs.map(apparel => {
            return {
                leftCol: apparel.left_col * width,
                topRow: apparel.top_row * height,
                rightCol: width - apparel.right_col * width,
                bottomRow: height - apparel.bottom_row * height
            }
        });
        return apparelsLocation;
    }

    const calculateFace = (data) => {
        const faceNumber = data.outputs[0].data.regions.length;
        numberOfFaces(faceNumber);

        const outputs = data.outputs[0].data.regions.map((faces) => faces.region_info.bounding_box);
        setBoundingBox(outputs);

        const image = document.getElementById("inputImage");
        const width = image.clientWidth;
        const height = image.clientHeight;

        const faceCoordinates = outputs.map((face) => {
            return {
                leftCol: face.left_col * width,
                topRow: face.top_row * height,
                rightCol: width - face.right_col * width,
                bottomRow: height - face.bottom_row * height,
            }
        });
        return faceCoordinates;
    }

    const detectFace = () => {
        setWithSpinner(true);
        app.models.predict(Clarifai.FACE_DETECT_MODEL, { base64: fileProperties }).then(
            (response) => {
                setFaceBoundary(calculateFace(response));
                setWithSpinner(false);
            },
            (err) => {
                console.log('There was an error', err);
            }
        );
        setApparelsInfo({});
        setApparelBoundary({});
    }

    const detectApparels = () => {
        setWithSpinner(true);
        app.models.predict('72c523807f93e18b431676fb9a58e6ad', { base64: fileProperties }).then(
            (response) => {
                console.log('response at the models', response);
                setApparelBoundary(calculateApparel(response));
                setWithSpinner(false);
            },
            (err) => {
                console.log('There was an error', err);
            }
        );
        setFaceBoundary({});
        numberOfFaces(0);
    }

    return (
        <div className="models-button">
            <button onClick={detectFace}>Detect Face</button>
            <button onClick={detectApparels}>Detect Apparels</button>
        </div>
    );
};

const mapStateToProps = ({ image: { fileProperties } }) => ({
    fileProperties
});

const mapDispatchToProps = dispatch => ({
    setFaceBoundary: (facePosition) => dispatch(setFaceBoundary(facePosition)),
    setApparelBoundary: (apparelPosition) => dispatch(setApparelBoundary(apparelPosition)),
    numberOfFaces: (number) => dispatch(numberOfFaces(number)),
    setApparelsInfo: (number) => dispatch(setApparelsInfo(number)),
    setBoundingBox: (bounding) => dispatch(setBoundingBox(bounding)),
    setWithSpinner: (spinner) => dispatch(setWithSpinner(spinner)),
    setImageDimensions: (dimensions) => dispatch(setImageDimensions(dimensions)),
});

export default connect(mapStateToProps, mapDispatchToProps)(ModelsOptions);
Here is a link to the web app in case it helps: https://abdullahgumi.github.io/smart-box/
Any idea on how to solve this would be much appreciated. Thanks.
It looks like there was an internal issue that should now be resolved. The web app you've linked to is now working.
There is also a status page for the models, the Clarifai Model Status Page, which might be helpful, although in this case it unfortunately was not reflecting the status of that model accurately.