How to use PDF.js with React?

I would like to parse a PDF file in a React app. The PDF will be provided through an HTML input.
I used pdf-parse - a wrapper around pdf.js in Node - without any problem. But when it comes to React, I only receive this error:
MissingPDFException {message: 'Missing PDF "http://localhost:3000/myfile.pdf".', name: 'MissingPDFException'}
I upload the file like this:
export default function Home() {
  const [data, setData] = useState();

  const handleFile = (e) => {
    const file = e.target.files[0];
    const fileReader = new FileReader();
    fileReader.onload = (d) => {
      setData(new Uint32Array(d.target.result));
    };
  };

  return (
    <>
      <h1>hello!</h1>
      <input
        type="file"
        accept="application/pdf"
        placeholder="insert PDF here"
        onChange={(e) => handleFile(e)}
      />
      <PDFViewer pdfFile={data} />
    </>
  );
}
And the file is supposed to be read here:
import * as PDFJS from "pdfjs-dist/build/pdf";
import * as pdfjsWorker from "pdfjs-dist/build/pdf.worker.entry";

window.PDFJS = PDFJS;

export default function PDFViewer({ pdfFile }) {
  PDFJS.GlobalWorkerOptions.workerSrc = pdfjsWorker;

  const getPDFDoc = useCallback(async () => {
    const doc = await PDFJS.getDocument(pdfFile);
    doc.promise.then(
      (loadedPdf) => {
        setPdfRef(loadedPdf);
      },
      function (reason) {
        console.error(reason);
      }
    );
  }, []);

  useEffect(() => {
    getPDFDoc();
  }, [getPDFDoc]);
It doesn't seem to work at all. I have a custom config with webpack, TypeScript and SWC-loader. I have read all the related Stack Overflow threads.
How do I properly parse a PDF with PDF.js in React? If there is a better library, I'm open to any suggestions. My goal is not to display the PDF, but to get its content.

Your component only runs getPDFDoc on mount because pdfFile is missing from the useCallback dependencies. When the file changes, the component probably doesn't even notice: the effect won't re-run, since getPDFDoc stays referentially stable when it shouldn't be.
Try
import { useCallback, useEffect, useState } from "react";
import * as PDFJS from "pdfjs-dist/build/pdf";
import * as pdfjsWorker from "pdfjs-dist/build/pdf.worker.entry";

window.PDFJS = PDFJS;

export default function PDFViewer({ pdfFile }) {
  PDFJS.GlobalWorkerOptions.workerSrc = pdfjsWorker;
  // state for the loaded document (referenced as setPdfRef below)
  const [pdfRef, setPdfRef] = useState();

  const getPDFDoc = useCallback(async () => {
    if (!pdfFile) return;
    const doc = await PDFJS.getDocument(pdfFile);
    doc.promise.then(
      (loadedPdf) => {
        setPdfRef(loadedPdf);
      },
      function (reason) {
        console.error(reason);
      }
    );
  }, [pdfFile]);

  useEffect(() => {
    getPDFDoc();
  }, [getPDFDoc]);
}
I think the reason for the weird "myfile.pdf" error is that on the first run pdfFile is undefined, and that URL is probably some internal library default. So I also added a guard to do nothing while it isn't set.
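Since the stated goal is to get the PDF's content rather than display it, here is a minimal sketch of text extraction with pdfjs-dist once a document has loaded. It assumes the file was read with FileReader.readAsArrayBuffer and wrapped in a Uint8Array (pdf.js expects raw bytes); the extractText helper is illustrative, not part of the code above.

// Hypothetical helper: walk every page and concatenate its text items.
async function extractText(data) {
  const pdf = await PDFJS.getDocument({ data }).promise;
  let text = "";
  for (let pageNumber = 1; pageNumber <= pdf.numPages; pageNumber++) {
    const page = await pdf.getPage(pageNumber);
    const content = await page.getTextContent();
    // content.items are positioned text chunks; join their strings.
    text += content.items.map((item) => item.str).join(" ") + "\n";
  }
  return text;
}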


Get value from promise using React/Typescript [duplicate]

I was given a snippet of a class named GithubService. It has a method getProfile that returns a promise, which apparently contains an object I need to access in my page component Github.
GithubService.ts
class GithubService {
  getProfile(login: string): Promise<GithubProfile> {
    return fetch(`https://api.github.com/users/${login}`)
      .then(res => res.json())
      .then(({ avatar_url, name, login }) => ({
        avatar: avatar_url as string,
        name: name as string,
        login: login as string,
      }));
  }
}

export type GithubProfile = {
  avatar: string;
  name: string;
  login: string;
};

export const githubSerive = new GithubService();
The page component should look something like this:
import { githubSerive } from '~/app/github/github.service';
export const Github = () => {
  let name = 'Joshua';
  const profile = Promise.resolve(githubSerive.getProfile(name));

  return (
    <div className={styles.github}>
      <p>
        {/* something like {profile.name} */}
      </p>
    </div>
  );
};
I'm pretty sure the Promise.resolve() method is out of place, but I really can't understand how to get a GithubProfile object out of the promise and into the profile variable.
I've seen many tutorials that explicitly declare promise handlers and set the return value for every outcome of a promise, but I can't change the source code.
As you are using React, consider making use of the useState and useEffect hooks.
Your code could then look like below (here's a working sandbox as well; I 'mocked' the GitHub service to return a profile after 1s):
export default function Github() {
  const [profile, setProfile] = useState();

  useEffect(() => {
    let name = "Joshua";
    const init = async () => {
      const _profile = await githubService.getProfile(name);
      setProfile(_profile);
    };
    init();
  }, []);

  return (
    <>
      {profile ? (
        <div>
          <p>{`Avatar: ${profile.avatar}`}</p>
          <p>{`name: ${profile.name}`}</p>
          <p>{`login: ${profile.login}`}</p>
        </div>
      ) : (
        <p>loading...</p>
      )}
    </>
  );
}
You should wait for the promise to be resolved, either by using async/await or by chaining .then:
const profile = await githubSerive.getProfile(name);
const profile = githubSerive.getProfile(name).then(data => data);
A solution would be:
import { githubSerive } from '~/app/github/github.service';

export async function Github() {
  let name = 'Joshua';
  const profile = await githubSerive.getProfile(name);

  return (
    <div className={styles.github}>
      <p>
        {profile.name}
      </p>
    </div>
  );
}
But if you are using React, things would be a little different (since you have tagged reactjs in the question):
import { githubSerive } from '~/app/github/github.service';
import * as React from "react";

export const Github = () => {
  let name = 'Joshua';
  const [profile, setProfile] = React.useState();

  React.useEffect(() => {
    (async () => {
      const profileData = await githubSerive.getProfile(name);
      setProfile(profileData);
    })();
  }, []);

  return (
    <div className={styles.github}>
      <p>
        {profile?.name}
      </p>
    </div>
  );
};

Argument of type 'string' is not assignable to parameter of type 'Blob'

So I am having this problem.
import React, { useState } from "react";

const ImageInput: React.FC = () => {
  const [image, setImage] = useState("");

  let reader = new FileReader();
  reader.readAsDataURL(image);

  const handleUpload = (e: React.ChangeEvent<HTMLInputElement>) => {
    const imageValue = e.target.value;
    setImage(imageValue);
  };

  return (
    <input onChange={handleUpload} type="file" multiple={false} value={image} />
  );
};

export default ImageInput;
I am using React with TypeScript and I'm trying to make an image upload component, but it is giving me this error. I've tried researching "Blobs" but no joy. I am getting the error on the readAsDataURL call.
The reader is defined and executed on each render of the component. This seems off; it should rather happen inside a handler function.
As TypeScript already complained, readAsDataURL should not be given a string: per the documentation, the input is a File or a Blob, which can be retrieved directly from the input element. (Source: MDN)
For reference: "[..] a blob is a representation of immutable data, which can be read as text or binary" (Source: MDN)
Based on the above two links your component could look like this (note: image is currently unused):
import React, { useState, createRef } from "react";

const ImageInput: React.FC = () => {
  // create a ref to keep a reference to a DOM element, in this case the input
  const imagesUpload = createRef<HTMLInputElement>();
  const [image, setImage] = useState<string | ArrayBuffer | null>(null);

  const handleUpload = (e: React.ChangeEvent<HTMLInputElement>) => {
    // get the file which was uploaded
    const file = imagesUpload?.current?.files?.[0];
    if (file) {
      const reader = new FileReader();
      reader.addEventListener("load", function () {
        // convert image file to base64 string
        setImage(reader.result);
      }, false);
      reader.readAsDataURL(file);
    }
  };

  return (
    <input ref={imagesUpload} onChange={handleUpload} type="file" multiple={false} />
  );
};

export default ImageInput;
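Since the answer notes that image is currently unused, a possible follow-up (not part of the original answer) is to render the stored data URL below the input, extending the return of the component above:

// Hypothetical extension of the return above: preview the uploaded image.
return (
  <>
    <input ref={imagesUpload} onChange={handleUpload} type="file" multiple={false} />
    {typeof image === "string" && <img src={image} alt="preview" />}
  </>
);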

How to properly implement automatic @lexical/react editor focus on initialization/first render?

I know there is an AutoFocusPlugin in @lexical/react, but I can't seem to get it to work properly on initial render.
Take the following code (which seems to match the current implementation of AutoFocusPlugin.js) - sandbox here:
import React, { FC, useLayoutEffect } from "react";
import { useLexicalComposerContext } from "@lexical/react/LexicalComposerContext";
import LexicalComposer from "@lexical/react/LexicalComposer";
import LexicalPlainTextPlugin from "@lexical/react/LexicalPlainTextPlugin";
import LexicalContentEditable from "@lexical/react/LexicalContentEditable";

const AutofocusPlugin = () => {
  const [editor] = useLexicalComposerContext();

  useLayoutEffect(() => {
    editor.focus();
  }, [editor]);

  return null;
};

export const MyEditor = () => {
  return (
    <LexicalComposer initialConfig={{ onError: () => null }}>
      <LexicalPlainTextPlugin
        contentEditable={<LexicalContentEditable />}
        placeholder={null}
      />
      <AutofocusPlugin />
    </LexicalComposer>
  );
};
I would expect the editor to initialize focused, but it does not.
Deferring the focus call to the async stack seems to solve this inside the sandbox:
useLayoutEffect(() => {
  setTimeout(() => editor.focus(), 0);
}, [editor]);
but does not reliably work in Cypress/Storybook for me.
So what am I doing wrong?
As of lexical version 0.6.0, the lexical editor will not process the callbackFn in editor.focus() if there are no contents in the editor. See line 839 of
https://github.com/facebook/lexical/blob/main/packages/lexical/src/LexicalEditor.ts:
focus(callbackFn?: () => void, options: EditorFocusOptions = {}): void {...}
What I did was add a paragraph node if the root is empty on the initial editor state. Then the AutoFocusPlugin will work without issue:
const initialConfig = {
  ...
  onError: error => {
    throw error;
  },
  editorState: editor => {
    // For autoFocus to work, the editor needs to have a node present in the root.
    const root = $getRoot();
    if (root.isEmpty()) {
      const paragraph = $createParagraphNode();
      root.append(paragraph);
    }
  },
};
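Putting the answer's initialConfig together with the editor from the question gives something like the sketch below. $getRoot and $createParagraphNode come from the lexical package; exact export shapes vary between lexical versions, so treat this as an outline rather than copy-paste code.

import LexicalComposer from "@lexical/react/LexicalComposer";
import LexicalPlainTextPlugin from "@lexical/react/LexicalPlainTextPlugin";
import LexicalContentEditable from "@lexical/react/LexicalContentEditable";
import { $getRoot, $createParagraphNode } from "lexical";

const initialConfig = {
  onError: (error) => {
    throw error;
  },
  editorState: () => {
    // Seed an empty paragraph so editor.focus() has a node to select.
    const root = $getRoot();
    if (root.isEmpty()) {
      root.append($createParagraphNode());
    }
  },
};

export const MyEditor = () => (
  <LexicalComposer initialConfig={initialConfig}>
    <LexicalPlainTextPlugin
      contentEditable={<LexicalContentEditable />}
      placeholder={null}
    />
    {/* the question's AutofocusPlugin, or the stock AutoFocusPlugin from @lexical/react */}
    <AutofocusPlugin />
  </LexicalComposer>
);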

Error using FFmpeg.wasm for audio files in react: "ffmpeg.FS('readFile', 'output.mp3') error. Check if the path exists"

I'm currently building a browser-based audio editor and I'm using ffmpeg.wasm (a pure WebAssembly/JavaScript port of FFmpeg) to do it.
I'm using this excellent example, which allows you to upload a video file and convert it into a GIF:
import React, { useState, useEffect } from 'react';
import './App.css';
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

const ffmpeg = createFFmpeg({ log: true });

function App() {
  const [ready, setReady] = useState(false);
  const [video, setVideo] = useState();
  const [gif, setGif] = useState();

  const load = async () => {
    await ffmpeg.load();
    setReady(true);
  }

  useEffect(() => {
    load();
  }, [])

  const convertToGif = async () => {
    // Write the file to memory
    ffmpeg.FS('writeFile', 'test.mp4', await fetchFile(video));
    // Run the FFMpeg command
    await ffmpeg.run('-i', 'test.mp4', '-t', '2.5', '-ss', '2.0', '-f', 'gif', 'out.gif');
    // Read the result
    const data = ffmpeg.FS('readFile', 'out.gif');
    // Create a URL
    const url = URL.createObjectURL(new Blob([data.buffer], { type: 'image/gif' }));
    setGif(url)
  }

  return ready ? (
    <div className="App">
      { video && <video
        controls
        width="250"
        src={URL.createObjectURL(video)}>
      </video>}
      <input type="file" onChange={(e) => setVideo(e.target.files?.item(0))} />
      <h3>Result</h3>
      <button onClick={convertToGif}>Convert</button>
      { gif && <img src={gif} width="250" />}
    </div>
  )
  :
  (
    <p>Loading...</p>
  );
}

export default App;
I've modified the above code to take an mp3 file recorded in the browser (recorded using the npm package 'mic-recorder-to-mp3' and passed to this component as a blobURL in the global state) and do something to it using ffmpeg.wasm:
import React, { useContext, useState, useEffect } from 'react';
import Context from '../../store/Context';
import Toolbar from '../Toolbar/Toolbar';
import AudioTranscript from './AudioTranscript';
import { createFFmpeg, fetchFile } from '@ffmpeg/ffmpeg';

//Create ffmpeg instance and set 'log' to true so we can see everything
//it does in the console
const ffmpeg = createFFmpeg({ log: true });

const AudioEditor = () => {
  //Setup Global State and get most recent recording
  const { globalState } = useContext(Context);
  const { blobURL } = globalState;

  //ready flag for when ffmpeg is loaded
  const [ready, setReady] = useState(false);
  const [outputFileURL, setOutputFileURL] = useState('');

  //Load FFmpeg asynchronously and set ready when it's ready
  const load = async () => {
    await ffmpeg.load();
    setReady(true);
  }

  //Use UseEffect to run the 'load' function on mount
  useEffect(() => {
    load();
  }, []);

  const ffmpegTest = async () => {
    //must first write file to memory as test.mp3
    ffmpeg.FS('writeFile', 'test.mp3', await fetchFile(blobURL));
    //Run the FFmpeg command
    //in this case, trim file size down to 1.5s and save to memory as output.mp3
    ffmpeg.run('-i', 'test.mp3', '-t', '1.5', 'output.mp3');
    //Read the result from memory
    const data = ffmpeg.FS('readFile', 'output.mp3');
    //Create URL so it can be used in the browser
    const url = URL.createObjectURL(new Blob([data.buffer], { type: 'audio/mp3' }));
    setOutputFileURL(url);
  }

  return ready ? (
    <div>
      <AudioTranscript />
      <Toolbar />
      <button onClick={ffmpegTest}>
        Edit
      </button>
      {outputFileURL &&
        <audio
          controls="controls"
          src={outputFileURL || ""}
        />
      }
    </div>
  ) : (
    <div>
      Loading...
    </div>
  )
}

export default AudioEditor;
This code returns the error from the question title ("ffmpeg.FS('readFile', 'output.mp3') error. Check if the path exists") when I press the edit button to call the ffmpegTest function.
I've experimented, and when I tweak the culprit line of code to:
const data = ffmpeg.FS('readFile', 'test.mp3');
the function runs without error, simply returning the input file. So I assume there must be something wrong with the ffmpeg.run() line not storing 'output.mp3' in memory? I can't for the life of me figure out what's going on... any help would be appreciated!
Fixed it...
Turns out I needed to put an 'await' before ffmpeg.run(). Without that statement, the next line:
const data = ffmpeg.FS('readFile', 'output.mp3');
runs before output.mp3 is produced and stored in memory.
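For reference, the corrected handler is the one from the question with the missing await added:

const ffmpegTest = async () => {
  // Write the recording into ffmpeg.wasm's in-memory filesystem.
  ffmpeg.FS('writeFile', 'test.mp3', await fetchFile(blobURL));
  // Wait for the trim to finish before reading the result.
  await ffmpeg.run('-i', 'test.mp3', '-t', '1.5', 'output.mp3');
  const data = ffmpeg.FS('readFile', 'output.mp3');
  const url = URL.createObjectURL(new Blob([data.buffer], { type: 'audio/mp3' }));
  setOutputFileURL(url);
};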

Intermediate data from MediaRecorder getting lost when using React hooks

I'm working on a higher order component that will provide the ability to capture media using the MediaRecorder API. However, when I try to use the captured video (in the form of a Blob passed to createObjectURL) I am getting an error ERR_REQUEST_RANGE_NOT_SATISFIABLE. When I console log the Blob that is passed to the wrapped component, it has a length of 0. I have included my code at the end of this post.
In order to diagnose the problem, I tried the following tests:
Console logging newChunks in handleDataAvailable logs the correct value (i.e. [Blob]).
I added React.useEffect(() => console.log(chunks), [chunks]); in order to see if chunks is actually getting updated. This also results in the correct value being logged (i.e. [Blob]).
I added React.useEffect(() => console.log(captured), [captured]); in order to see if captured is getting updated. This results in a Blob of size 0 being logged.
In handleStop, I console log chunks and the blob created by combining the chunks. That results in an empty array and a blob with size 0 respectively.
This leads me to believe that handleDataAvailable is correctly adding each chunk to the chunks array, but somehow the array is being emptied by the time that handleStop gets run.
Does anyone see what might be causing that to happen?
Code:
import React from 'react';
import { getUserMedia, getConstraints } from '../../utils/general';

const withMediaCapture = (WrappedComponent, recordingType, facingMode, deviceID) => {
  const constraints = getConstraints(recordingType, facingMode, deviceID);
  const type = recordingType === 'audio'
    ? 'audio/ogg; codecs=opus'
    : 'video/webm; codecs=vp9';

  return props => {
    const [mediaStream, setMediaStream] = React.useState(undefined);
    const [mediaRecorder, setMediaRecorder] = React.useState(undefined);
    const [isRecording, setIsRecording] = React.useState(false);
    const [captured, setCaptured] = React.useState(undefined);
    const [chunks, setChunks] = React.useState([]);

    // On mount, get the mediaStream:
    const setupStream = () => {
      getUserMedia(constraints)
        .then(setMediaStream)
        .catch(error => {/* TODO: Handle error */});
    };
    React.useEffect(setupStream, []);

    // Once we have gotten the mediaStream, get the mediaRecorder:
    const handleDataAvailable = ({ data }) => {
      const newChunks = [...chunks, data];
      setChunks(newChunks);
    };

    const handleStop = foo => {
      const blob = new Blob(chunks, { type });
      setCaptured(blob);
      setChunks([]);
    }

    const getMediaRecorder = () => {
      mediaStream && setMediaRecorder(Object.assign(
        new MediaRecorder(mediaStream),
        {
          ondataavailable: handleDataAvailable,
          onstop: handleStop,
        },
      ));
    }
    React.useEffect(getMediaRecorder, [mediaStream]);

    const toggleRecording = () => {
      isRecording
        ? mediaRecorder.stop()
        : mediaRecorder.start();
      setIsRecording(!isRecording);
    };

    return <WrappedComponent {...{ preview: mediaStream, captured, toggleRecording, isRecording, ...props }} />;
  };
};

const VideoCaptureDemo = ({ preview, captured, toggleRecording, isRecording }) => {
  const previewRef = React.useRef(null);
  const capturedRef = React.useRef(null);

  const setupPreview = () => {
    previewRef.current.srcObject = preview;
  };
  React.useEffect(setupPreview, [preview]);

  const setupCaptured = () => {
    const url = captured && window.URL.createObjectURL(captured);
    capturedRef.current.src = url;
  };
  React.useEffect(setupCaptured, [captured]);

  return (
    <div>
      <video ref={previewRef} autoPlay={true} muted={true} />
      <video ref={capturedRef} controls />
      <button onClick={toggleRecording}>
        {isRecording ? 'Stop Recording' : 'Start Recording'}
      </button>
    </div>
  );
};

export default withMediaCapture(VideoCaptureDemo, 'videoAndAudio');
handleStop and handleDataAvailable are both closing over the initial, empty chunks array. If handleDataAvailable is called more than once, earlier chunks will be lost, and handleStop will always create a Blob from the empty chunks array. Re-renders caused by setChunks will cause new versions of the handle methods to be created, but the MediaRecorder will still be using the versions from when the MediaRecorder was created.
You could fix handleDataAvailable by using a functional update, but to fix handleStop I think you would be best off switching to useReducer (with the reducer managing both chunks and captured), so that you can just dispatch an action and the reducer has access to the current chunks and can create the Blob appropriately.
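A minimal sketch of that useReducer approach, with illustrative reducer and action names (not from the original code):

const initialState = { chunks: [], captured: undefined };

const recorderReducer = (state, action) => {
  switch (action.type) {
    case 'chunk':
      // Append each chunk as it arrives; no stale closed-over array involved.
      return { ...state, chunks: [...state.chunks, action.data] };
    case 'stop':
      // Build the Blob from whatever chunks the reducer currently holds.
      return { chunks: [], captured: new Blob(state.chunks, { type: action.mimeType }) };
    default:
      return state;
  }
};

// Inside the component returned by withMediaCapture:
const [{ captured }, dispatch] = React.useReducer(recorderReducer, initialState);

const handleDataAvailable = ({ data }) => dispatch({ type: 'chunk', data });
const handleStop = () => dispatch({ type: 'stop', mimeType: type });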
