How to create and update a text file using React.js?

I am trying to save a variable's data into a text file and update the file every time the variable changes. I found solutions for Node.js and vanilla JavaScript, but I cannot find one that works in React.js.
Specifically, I am trying to store a Facebook long-lived access token in a text file so I can reuse it later. When I import 'fs' and call createFile and appendFile, I get an error saying the method doesn't exist.
Please help me out. Here is the code:
window.FB.getLoginStatus((resp) => {
  if (resp.status === 'connected') {
    const accessToken = resp.authResponse.accessToken;
    try {
      axios.get(`https://graph.facebook.com/oauth/access_token?client_id=CLIENT_id&client_secret=CLIENT_SECRET&grant_type=fb_exchange_token&fb_exchange_token=${accessToken}`)
        .then((response) => {
          console.log("Long Live Access Token " + response.data.access_token + " expires in " + response.data.expires_in);
          let longLiveAccessToken = response.data.access_token;
          let expiresIn = response.data.expires_in;
        })
        .catch((error) => {
          console.log(error);
        });
    }
    catch (e) {
      console.log(e.description);
    }
  }
});

React is a frontend library. It's meant to be executed in the browser, which for security reasons does not have access to the file system. You can make React render on the server, but the example code you're showing is clearly frontend code: it uses the window object. It doesn't even include anything React-related at first sight; it mainly consists of an Ajax call to Facebook made via the Axios library.
So your remaining options are basically these:
Create a text file and let the user download it.
Save the file content in local storage for later access from the same browser.
Save the contents in online storage (which could also be localhost).
Could you say which of these methods would fit your needs, so I can explain it further with sample code if needed?
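In the meantime, here's a rough sketch of what options 1 and 2 could look like in the browser (the key name fbLongLivedToken and the helper names are made up for this example):
// Sketch only: keep the token in the browser's localStorage
// (the key name "fbLongLivedToken" is made up for this example).
function saveToken(token, expiresIn) {
  localStorage.setItem(
    'fbLongLivedToken',
    JSON.stringify({ token, expiresAt: Date.now() + expiresIn * 1000 })
  );
}

function loadToken() {
  const raw = localStorage.getItem('fbLongLivedToken');
  return raw ? JSON.parse(raw) : null;
}

// Option 1: let the user download the token as a plain text file.
function downloadTokenAsFile(token) {
  const blob = new Blob([token], { type: 'text/plain' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = 'token.txt';
  a.click();
  URL.revokeObjectURL(url);
}
Bear in mind that anything in localStorage is readable by any script running on the page, so it only suits tokens you're comfortable exposing to the client.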

Related

React Load Binary File / URL scheme "file" is not supported

Background
I built an app that converts files from type A to type B (a binary format). I want to import a dummy file of type B and use it to fill in the data of a type A file. The dummy always stays the same. The app has no backend. I want to share the HTML, so anything that requires turning off browser security etc. isn't an option.
Problem
At the moment, I load the file the way I found here, but this only works with a backend server:
Requesting blob images and transforming to base64 with fetch API
import dummy from '../templates/Grid2.shp';

let hex = await fetch(dummy)
  .then( response => response.blob() )
  .then( blob => new Promise( callback => {
    let reader = new FileReader();
    reader.onload = function() {
      const serumShp = atob(this.result.substring(37)); // 37 strips the base64 info data:...
      callback(binaryToHex(serumShp));
    };
    reader.readAsDataURL(blob);
  }) );
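(binaryToHex is not shown in the question; purely for context, a minimal sketch of what such a helper might look like:)
// Hypothetical helper, not part of the original code: converts a binary
// string (e.g. the output of atob) into a hex string.
function binaryToHex(binaryString) {
  let hex = '';
  for (let i = 0; i < binaryString.length; i++) {
    hex += binaryString.charCodeAt(i).toString(16).padStart(2, '0');
  }
  return hex;
}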
It works in development but not in the built app, since the browser then requests the file from the filesystem.
I found a solution using a file loader, but it also throws an error:
Using file-loader to load binary file in react
import/no-webpack-loader-syntax
Also, I don't see any configuration files for Webpack; as far as I can tell I would need to eject, which is also not recommended.
Question:
How can I import binary files into my app without a backend server/any changes, etc.?
Sorry, I cannot help beyond pointing out that there is a general discussion in CRA about supporting a more elegant way of importing binary/raw data. Sadly there doesn't seem to be much progress; the proposal is from 2018.

Pdf Tron error " Exception error: Pdf error not found" (React App)

I'm trying to embed PDFTron in my React application. I receive this error when I click on the tab that filters for the relevant PDF file.
const handleFilteredDocs = (id) => {
  const filteredDoc = props.location.documents && props.location.documents.filter(doc => {
    return doc.controlId === id
  })
  setFileteredDoc(filteredDoc)
  setPdfPath(filteredDoc[0].filePath)
  WebViewer(
    {
      path: 'lib',
      initialDoc: `lib/pdf/${pdfPath}`,
      extension: "pdf"
    },
    viewer.current,
  ).then((instance) => {
    const { docViewer, Annotations } = instance;
    const annotManager = docViewer.getAnnotationManager();
    docViewer.on('documentLoaded', () => {
      const rectangleAnnot = new Annotations.RectangleAnnotation();
      rectangleAnnot.PageNumber = 1;
      // values are in page coordinates with (0, 0) in the top left
      rectangleAnnot.X = 100;
      rectangleAnnot.Y = 150;
      rectangleAnnot.Width = 200;
      rectangleAnnot.Height = 50;
      rectangleAnnot.Author = annotManager.getCurrentUser();
      annotManager.addAnnotation(rectangleAnnot);
      // need to draw the annotation otherwise it won't show up until the page is refreshed
      annotManager.redrawAnnotation(rectangleAnnot);
    });
  });
}
I think this is because the ref component doesn't receive the pdfPath state in time, which then throws the error. I tried placing a separate button to load the PDF once pdfPath was correctly updated, and in that case it worked. What can I do to make it render correctly here?
This is the error I get from the console:
(index)              Value
UI version           "7.3.0"
Core version         "7.3.0"
Build                "Mi8yMi8yMDIxfDZmZmNhOTdmMQ=="
WebViewer Server     false
Full API             false
Object
CoreControls.js:189 Could not use incremental download for url /lib/pdf/. Reason: The file is not linearized.
CoreControls.js:189
{message: "The file is not linearized."}
CoreControls.js:189 There may be some degradation of performance. Your server has not been configured to serve .gz. and .br. files with the expected Content-Encoding. See http://www.pdftron.com/kb_content_encoding for instructions on how to resolve this.
81150ece-4c18-41b0-b551-b92f332bd17f:1
81150ece-4c18-41b0-b551-b92f332bd17f:1 PDFNet is running in demo mode.
81150ece-4c18-41b0-b551-b92f332bd17f:1 Permission: read
CoreControls.js:922 Uncaught (in promise)
{message: "Exception: ↵ Message: PDF header not found. The f… Function : SkipHeader↵ Linenumber : 1139↵", type: "InvalidPDF"}
Thank you guys for any help I will get on this!
The value of "pdfPath" isn't set to "filteredDoc[0].filePath" yet when "WebViewer()" runs right after you call "setPdfPath" (it will still be the initial state until the next render). One thing you can do is pass a callback when setting state, so that "WebViewer()" is called only after "pdfPath" has been updated:
https://reactjs.org/docs/react-component.html#setstate
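Since the question uses hooks rather than a class component, the same idea can be expressed by reacting to the state change in an effect; a rough sketch reusing the question's names:
// Sketch: only construct WebViewer once pdfPath has actually been updated.
const handleFilteredDocs = (id) => {
  const filteredDoc = props.location.documents.filter(doc => doc.controlId === id);
  setFileteredDoc(filteredDoc);
  setPdfPath(filteredDoc[0].filePath); // triggers the effect below on the next render
};

useEffect(() => {
  if (!pdfPath) return; // skip the initial render
  WebViewer(
    { path: 'lib', initialDoc: `lib/pdf/${pdfPath}`, extension: 'pdf' },
    viewer.current,
  ).then((instance) => {
    // ...same documentLoaded / annotation code as in the question
  });
}, [pdfPath]);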
Also there is a guide on how to add PDFtron to a React project in the following link
https://www.pdftron.com/documentation/web/get-started/react/
One thing to note is that it does the following:
useEffect(() => {
  // will only run once
  WebViewer()
}, [])
By doing the above, "WebViewer" is only initialized once. It might be a good idea to do something similar and use "loadDocument" (https://www.pdftron.com/documentation/web/guides/best-practices/#loading-documents-with-loaddocument) when switching between documents, instead of reinitializing WebViewer each time the state changes.
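A rough sketch of that pattern, assuming WebViewer 7.x where the returned instance exposes loadDocument as described in the linked guide:
// Sketch: initialize WebViewer once, then swap documents as pdfPath changes.
const instanceRef = useRef(null);

useEffect(() => {
  WebViewer({ path: 'lib' }, viewer.current).then((instance) => {
    instanceRef.current = instance;
  });
}, []); // runs once

useEffect(() => {
  if (instanceRef.current && pdfPath) {
    instanceRef.current.loadDocument(`lib/pdf/${pdfPath}`, { extension: 'pdf' });
  }
}, [pdfPath]); // runs whenever a new document is selected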

Corrupt video uploads when chunking MediaRecorder to Google Cloud platform

I am currently using a React-hook-powered component to record my screen and subsequently upload the recording to Google Cloud Storage. However, when it finishes, the file created in Google Cloud Storage appears to be corrupt.
This is the gist of the code within my React component, where useMediaRecorder is from here: https://github.com/wmik/use-media-recorder -
let {
  error,
  status,
  mediaBlob,
  stopRecording,
  getMediaStream,
  startRecording,
  liveStream,
} = useMediaRecorder({
  onCancelScreenShare: () => {
    stopRecording();
  },
  onDataAvailable: (chunk) => {
    // do the uploading here:
    onChunk(chunk);
  },
  recordScreen: true,
  blobOptions: { type: "video/webm;codecs=vp8,opus" },
  mediaStreamConstraints: { audio: audioEnabled, video: true },
});
As data becomes available through this hook, it calls onChunk(chunk), passing a binary Blob through to that method. To perform the upload, I tie in with this section of code:
const onChunk = (binaryData) => {
  var formData = new FormData();
  formData.append("data", binaryData);
  let customerApi = new CustomerVideoApi();
  customerApi.uploadRecording(
    videoUUID,
    formData,
    (res) => {},
    (err) => {}
  );
};
customerApi.uploadRecording looks like this (using axios).
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
The HTTP request succeeds, and all is well with the world. The server-side upload code is based on Laravel:
// this is inside the controller.
public function index( Request $request )
{
    // Set file attributes.
    $filepath = '/public/chunks/';
    $file = $request->file('data');
    $filename = $uuid . ".webm";

    // streamupload
    File::streamUpload($filepath, $filename, $file, true);

    return response()->json(['uploaded' => true, 'uuid' => $uuid]);
}
// there's a service provider used to create a new macro on the File:: object, providing the facility for handling the stream appropriately:
public function boot()
{
    File::macro('streamUpload', function ($path, $fileName, $file, $overWrite = true) {
        $resource = fopen($file->getRealPath(), 'r+');

        $storageClient = new StorageClient([
            'projectId'   => 'myprjectid',
            'keyFilePath' => '/my/path/to/servicejson.json',
        ]);

        $bucket = $storageClient->bucket('mybucket');
        $adapter = new GoogleStorageAdapter($storageClient, $bucket);
        $filesystem = new Filesystem($adapter);

        return $overWrite
            ? $filesystem->putStream($fileName, $resource)
            : $filesystem->writeStream($fileName, $resource);
    });
}
So to reiterate:
1) The React app chunks out blobs.
2) The server side determines whether it should create or append in Google Cloud Storage.
3) The server side succeeds.
4) The video inside Google Cloud Storage is corrupted.
However, the video file inside the Google Cloud container is corrupted and won't play. I'm unsure exactly why, but my guesses so far are:
Some sort of dodgy MIME type problem. Different browsers seem to handle the codec/filetype from the MediaRecorder differently: Chrome seems to produce x-matroska (.mkv?), Firefox something different again. Ideally I would have a .webm container; notice how I set the file name server side, and it isn't coming from the client. Should it? I'm unsure how to force the MediaRecorder to use a specific mimeType. I thought the blobOptions option should do it, but changing the extension and MIME type seems to have little to no impact on the corruption (see the MediaRecorder sketch further down).
Some sort of problem during upload where the HTTP requests don't execute and finish in order, e.g.
1 onDataAvailable completes second
2 onDataAvailable completes first
3 onDataAvailable completes third
I've sort of ruled this out because I think the chunks should be small enough.
Some sort of problem with Google Cloud Storage APIs that I'm using, perhaps in the wrong way? Does the cloud platform support streaming, and does this library send the correct params to do so?
Some sort of problem with how I'm uploading - should the axios headers be multipart formdata, or something else?
This is the package I'm using for the Server side: https://github.com/Superbalist/flysystem-google-cloud-storage
Can anyone shed any light on how to achieve this goal of streaming into Google Cloud without the video from the MediaRecorder being corrupted? Hopefully there's enough detail here in the question to help figure it out. The problem, as illustrated, isn't getting the file as far as Google Cloud, but rather the resulting file being unplayable in any video player.
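Regarding the MIME type guess above, a minimal sketch of pinning the recorder to an explicitly supported container using the standard MediaRecorder API directly (not the use-media-recorder hook; stream is assumed to be the MediaStream being recorded):
// Sketch: pick a container/codec the current browser actually supports.
const preferredTypes = [
  'video/webm;codecs=vp8,opus',
  'video/webm',
  'video/x-matroska;codecs=avc1', // what Chrome may otherwise fall back to
];
const mimeType = preferredTypes.find((t) => MediaRecorder.isTypeSupported(t));

const recorder = new MediaRecorder(stream, { mimeType });
recorder.ondataavailable = (e) => onChunk(e.data); // e.data is a Blob of type mimeType
recorder.start(5000); // emit a chunk roughly every 5 seconds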
Update
I've now ordered my chunks client side and queued them properly before letting them reach the server. No difference to the output. As some have suggested, a single-blob upload request works fine.
I also tried the resumable config param (from reading the source code it seems like chunks need to be a certain size before Google recognises them as a resumable upload):
$filesystem = new Filesystem($adapter, [
    'resumable' => true
]);
Not sure how https://cloud.google.com/storage/docs/performing-resumable-uploads is implemented within the libraries I'm using (or within the Google Cloud APIs themselves, if at all). Do I need to implement that myself? Documentation is light on Google's part.
Short version:
The first thing you should do is buffer the whole video locally and send a single payload to the server and on to Google Cloud. This will validate that your code is actually correct for a small video. Once you can verify this, you can move on to handling multi-chunk uploads.
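A rough sketch of that single-payload approach, buffering chunks in memory and uploading once when recording stops (reusing onChunk, videoUUID and CustomerVideoApi from the question; onStop is a hypothetical callback for when recording ends):
// Sketch: buffer every chunk locally, upload one Blob at the end.
const chunks = [];

const onChunk = (binaryData) => {
  chunks.push(binaryData); // no network call per chunk
};

const onStop = () => {
  const fullVideo = new Blob(chunks, { type: 'video/webm' });
  const formData = new FormData();
  formData.append('data', fullVideo, `${videoUUID}.webm`);
  new CustomerVideoApi().uploadRecording(videoUUID, formData, () => {}, () => {});
};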
Longer version:
For starters, you aren't passing the uuid with the request, even though it's used server-side to build the filename:
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
Next, you can't trust how chunking will work; I think you verified this behaviour with the out-of-order chunk logging. You need to assume on your server that you will get chunks out of order and handle them correctly.
Each chunk you get on the server needs to be put in the right place; you can't just "writeStream", you need to write the explicit binary block. Specifically, on every request specify the byte range. From the Google docs:
curl -i -X PUT --data-binary @CHUNK_LOCATION \
  -H "Content-Length: CHUNK_SIZE" \
  -H "Content-Range: bytes CHUNK_FIRST_BYTE-CHUNK_LAST_BYTE/TOTAL_OBJECT_SIZE" \
  "SESSION_URI"
CHUNK_LOCATION is the local path to the chunk that you're currently uploading.
CHUNK_SIZE is the number of bytes you're uploading in the current request, for example 524288.
CHUNK_FIRST_BYTE is the starting byte in the overall object that the chunk you're uploading contains.
CHUNK_LAST_BYTE is the ending byte in the overall object that the chunk you're uploading contains.
TOTAL_OBJECT_SIZE is the total size of the object you are uploading.
SESSION_URI is the value returned in the Location header when you initiated the resumable upload.
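For that to work, the client has to tell the server where each chunk belongs. A minimal sketch of tracking byte offsets per chunk on the client; the chunkStart/chunkEnd form fields are made up for this example, and the server would translate them into the Content-Range header above:
// Sketch: send each chunk together with its byte range.
let bytesSent = 0;

const onChunk = (binaryData) => {
  const chunkStart = bytesSent;
  const chunkEnd = bytesSent + binaryData.size - 1;
  bytesSent += binaryData.size;

  const formData = new FormData();
  formData.append('data', binaryData);
  formData.append('chunkStart', String(chunkStart)); // hypothetical fields the server
  formData.append('chunkEnd', String(chunkEnd));     // uses to build Content-Range
  new CustomerVideoApi().uploadRecording(videoUUID, formData, () => {}, () => {});
};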
Try to eliminate as many variables as possible and pinpoint where exactly the file is getting corrupted.
Since you are using a React (JS) -> Laravel (PHP) -> Google Cloud path, the first thing I would suggest is to test each step separately:
React -> Laravel - save the file on your server and check if it's corrupted at this point
Laravel -> Google Cloud - load a file from the server filesystem, upload it to the cloud, and see if it gets corrupted
I don't have experience with Google cloud, but I did something very similar with AWS and found that their video uploading service was extremely picky about the requests (including order of headers that were sent).
Try to compare the specs on the service you are using with your input, make the smallest possible thing that works and start adding variables until you get to the final state.
Also, I don't see any kind of data ordering in your code.
If your chunks are sent close to each other, which is highly likely with streaming, there is a chance that they will arrive in a different order than originally sent. If you just append them to a file without any control over the ordering, the file will indeed get corrupted. I'm not sure whether, for webm, that would cause just parts of the video to be broken or the entire thing to die.
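One simple way to enforce ordering on the client is a promise queue, so each chunk's upload only starts after the previous one has finished; a sketch reusing the question's names:
// Sketch: serialize chunk uploads so they reach the server in recording order.
let uploadQueue = Promise.resolve();

const onChunk = (binaryData) => {
  uploadQueue = uploadQueue.then(() => {
    const formData = new FormData();
    formData.append('data', binaryData);
    return new Promise((resolve) =>
      new CustomerVideoApi().uploadRecording(videoUUID, formData, resolve, resolve)
    );
  });
};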

Upload images to Azure blob from front end (React)

The front end lets people upload their photos, so initially I was sending the base64 to the server and working with it there, but there is a firewall problem which blocks requests containing base64. As an alternative solution I am trying to upload the image to an Azure blob, get the file name, and then send that to the server for processing, where I generate a SAS token for blob validation and processing.
This works perfectly fine when I work locally; the front end connects with @azure/storage-blob and uploadBrowserData() when I send the arrayBuffer as the param:
// assumes accountName, sasString, containerName and anonymousCredential
// are defined elsewhere in the module
import { BlobServiceClient } from '@azure/storage-blob';

export const uploadSelfieToBlob = async arrayBuffer => {
  try {
    const blobURL = `https://${accountName}.blob.core.windows.net${sasString}`;
    const blobServiceClient = new BlobServiceClient(blobURL, anonymousCredential);
    const containerClient = blobServiceClient.getContainerClient(containerName);

    let randomString = Math.random().toString(36).substring(7);
    const blobName = `${randomString}_${new Date().getTime()}.jpg`;
    const blockBlobClient = containerClient.getBlockBlobClient(blobName);
    const uploadBlobResponse = await blockBlobClient.uploadBrowserData(arrayBuffer);

    return { blobName, blobId: uploadBlobResponse.requestId };
  } catch (error) {
    console.log('error when uploading to blob', error);
    throw new Error('Error Uploading the selfie to blob');
  }
};
When I deploy, this is not working. The front end is deployed in the East US 2 location, and the local development location is different.
I thought the sasString generated for anonymous access had a time zone component, so I generated two different ones, one for local and one for the hosted server, with the same location selected.
Failed to send request to https://xxxx.blob.core.windows.net/contanainer-name/26pcie_1582087489288.jpg?sv=2019-02-02&ss=b&srt=c&sp=rwdlac&se=2023-09-11T07:57:29Z&st=2020-02-18T00:57:29Z&spr=https&sig=9IWhXo5i%2B951%2F8%2BTDqIY5MRXbumQasOnY4%2Bju%2BqF3gw%3D
What am I missing? Any lead would be helpful, thanks.
First, as mentioned in the comments, there was an issue with the CORS settings, which caused the initial error.
AuthorizationResourceTypeMismatch: This request is not authorized to perform this operation using this resource type.
RequestId: 7ec96c83-101e-0001-4ef1-e63864000000
Time: 2020-02-19T06:57:31.2867563Z
I looked up this error code here and then closely looked at your SAS URL.
One thing I noticed in your SAS URL is that you have set the signed resource type (srt) to c (container) while trying to upload a blob. If you look at the description of the kinds of operations you can do with srt=c here, you will notice that blob-related operations are not supported.
In order to perform blob-related operations (like blob upload), you need to set the signed resource type value to o (for object).
Please regenerate your SAS token and include the signed resource type object (you can include container and/or service as well), and then your request should work. So essentially the srt in your SAS URL should be something like srt=o, srt=co or srt=sco.
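For reference, a minimal sketch of generating such an account SAS server-side (Node) with @azure/storage-blob; the account name/key are placeholders and the permission and expiry values are only examples:
// Sketch: account SAS whose resource types include "object" (and "container").
const {
  StorageSharedKeyCredential,
  generateAccountSASQueryParameters,
  AccountSASPermissions,
  AccountSASServices,
  AccountSASResourceTypes,
} = require('@azure/storage-blob');

const credential = new StorageSharedKeyCredential(accountName, accountKey);

const sasString = '?' + generateAccountSASQueryParameters(
  {
    expiresOn: new Date(Date.now() + 60 * 60 * 1000), // 1 hour, example only
    permissions: AccountSASPermissions.parse('rwc'),
    services: AccountSASServices.parse('b').toString(),
    resourceTypes: AccountSASResourceTypes.parse('co').toString(), // container + object
  },
  credential
).toString();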
I couldn't see anything wrong with the code you posted, but I have been using a different method to upload files to Azure Blob Storage from React; the method is exactly the same as in this blog article, which works perfectly for me:
https://medium.com/@stuarttottle/upload-to-azure-blob-storage-with-react-34f37805fdfc

load pdf from redux store

I am trying to download a server-side-generated PDF file on the client; I fetch it with axios, save it in Redux, and use FileSaver to download it.
const getTicketPdf = ({ userID, ticketID }) =>
  requestApi(`/users/${userID}/tickets/${ticketID}/pdf`, {
    method: 'get',
  });
requestApi sets all necessary headers so that I can download the file.
The data is then stored in Redux like this:
data: "%PDF-1.4\n3 0 obj\n<</Type /Page\n/Parent 1 0 R\n/MediaBox [0 0 595.00 842.00]\n/Resources 2 0 R\n/Contents 4 0 R>>\nendobj\n4 0 obj\n<</Filter /FlateDecode /Length 64>>\nstream\nx�3R��2�35W(�*T0P�R0T(\u0007�Y#�\u000e��#Q…"
I call it in render with:
<div>
  <button onClick={ () => this.getPdf(ticket) }>PDF</button>
</div>

getPdf = ticket => {
  const blob = new Blob([ticket]);
  FileSaver.saveAs(blob, 'Ticket.pdf');
}
I am always getting the following error:
TypeError: Cannot read property 'saveAs' of undefined
I also tried to set
responseType: 'blob'
but this doesn't help either.
The next thing I tested was the react-pdf library, where I managed to display the PDF in a component, but I can't print it. The user should only have to save it and then print it locally (or at least see it in a separate tab as a PDF, which I tried with window.open() and a base64-encoded string).
How can I download a server side generated PDF otherwise? Are there any better ways?
Unfortunately I have to set HTTP Headers in order to get that file.
Thanks in advance.
The error stems from the fact that there is no FileSaver object (or rather, it's non-standard).
It seems to be polyfilled by this third-party library: https://github.com/eligrey/FileSaver.js
The error you are seeing is caused by a reference to an undefined variable FileSaver - I guess that you are using FileSaver.js, and need to fix the import. You should also bear in mind that FileSaver is deprecated in favour of the download attribute. See this answer for details on how to use it.
Either way, in the interests of keeping your store light, you should save a reference to the PDF in your Redux store, rather than the string itself.
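A minimal sketch of that fix, assuming the file-saver npm package and fetching the PDF as a binary blob rather than keeping the raw string around:
// Sketch: fetch the PDF as a Blob and hand it to file-saver.
import { saveAs } from 'file-saver';
import axios from 'axios';

const downloadTicketPdf = async ({ userID, ticketID }) => {
  const response = await axios.get(`/users/${userID}/tickets/${ticketID}/pdf`, {
    responseType: 'blob', // keep the data binary instead of a mangled string
  });
  saveAs(new Blob([response.data], { type: 'application/pdf' }), 'Ticket.pdf');
};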
