I can use ffmpeg in JS, but how can I use this code in React?
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path
const ffmpeg = require('fluent-ffmpeg')
ffmpeg.setFfmpegPath(ffmpegPath)
ffmpeg('video.mp4')
  .setStartTime('00:00:03')
  .setDuration('10')
  .output('video_out.mp4')
  .on('end', function(err) {
    if (!err) { console.log('conversion Done') }
  })
  .on('error', function(err) {
    console.log('error: ', err)
  })
  .run()
If I understand correctly, you want to modify a video file before uploading it?
I'm afraid this is pretty hard to do in the browser. Browsers don't have direct access to the computer's local file system and cannot freely read from or write to disk.
The code you have included is meant for a Node environment. A hint is the use of the require function on lines 1 & 2, which Node provides natively.
My proposed solution would be:
The user uploads the original (full-length) video to your server through the React app.
The server processes the file (using a Node environment and the code you have copied) and removes the first three seconds; a sketch follows this list. A tutorial on how to receive video uploads in Node.js can be found here: froala nodejs video upload
The server saves the new file and deletes the original.
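A minimal sketch of step 2, assuming an Express server and that some upload middleware has already written the file to uploads/original.mp4 (the route and file paths are placeholders):

const express = require('express')
const fs = require('fs')
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path
const ffmpeg = require('fluent-ffmpeg')

ffmpeg.setFfmpegPath(ffmpegPath)

const app = express()

// Assumes some upload middleware has already written the raw upload to uploads/original.mp4
app.post('/trim', (req, res) => {
  ffmpeg('uploads/original.mp4')
    .setStartTime('00:00:03')                 // skip the first three seconds
    .output('uploads/video_out.mp4')
    .on('end', () => {
      fs.unlinkSync('uploads/original.mp4')   // delete the original
      res.json({ done: true })
    })
    .on('error', (err) => res.status(500).json({ error: err.message }))
    .run()
})

app.listen(3000)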
I hope this helps a bit.
Background
I built an app which converts files from type A to type B (a binary file). I want to import and use a dummy file of type B to fill in the data of a type A file. The dummy always stays the same. The app has no backend. I want to share the HTML, so anything that requires turning off browser security etc. isn't an option.
Problem
At the moment I load the file as described here, but this works only with a backend server:
Requesting blob images and transforming to base64 with fetch API
import dummy from '../templates/Grid2.shp';

let hex = await fetch(dummy)
  .then( response => response.blob() )
  .then( blob => new Promise( callback => {
    let reader = new FileReader();
    reader.onload = function() {
      const serumShp = atob(this.result.substring(37)); // 37 strips the base64 info "data:..."
      callback(binaryToHex(serumShp));
    };
    reader.readAsDataURL(blob);
  }) );
It works in development but not in the built app, because the browser then requests the file from the filesystem.
I found a solution using a file loader, but that solution also throws an error:
Using file-loader to load binary file in react
import/no-webpack-loader-syntax
Also, I don't see any configuration files for webpack. As far as I can tell I would need to eject to get them, which is also not recommended.
Question:
How can I import binary files into my app without a backend server/any changes, etc.?
Sorry, I cannot help beyond pointing out that there is a general discussion in CRA about supporting a more elegant way of importing binary/raw data. Sadly there doesn't seem to be much progress; the proposal is from 2018.
I am currently using a React-hook-powered component to record my screen and subsequently upload it to Google Cloud Storage. However, when it finishes, the file created inside Google Cloud appears to be corrupt.
This is the gist of the code within my React component, where useMediaRecorder is from here: https://github.com/wmik/use-media-recorder -
let {
  error,
  status,
  mediaBlob,
  stopRecording,
  getMediaStream,
  startRecording,
  liveStream,
} = useMediaRecorder({
  onCancelScreenShare: () => {
    stopRecording();
  },
  onDataAvailable: (chunk) => {
    // do the uploading here:
    onChunk(chunk);
  },
  recordScreen: true,
  blobOptions: { type: "video/webm;codecs=vp8,opus" },
  mediaStreamConstraints: { audio: audioEnabled, video: true },
});
As data becomes available through this hook, it calls onChunk(chunk), passing a binary Blob to that method. To perform the upload, I tie in with this section of code:
const onChunk = (binaryData) => {
  var formData = new FormData();
  formData.append("data", binaryData);
  let customerApi = new CustomerVideoApi();
  customerApi.uploadRecording(
    videoUUID,
    formData,
    (res) => {},
    (err) => {}
  );
};
customerApi.uploadRecording looks like this (using axios).
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
The HTTP request succeeds, and all is well with the world. The server-side code for the upload is based on Laravel:
// this is inside the controller.
public function index( Request $request )
{
    // Set file attributes.
    $filepath = '/public/chunks/';
    $file = $request->file('data');
    $filename = $uuid . ".webm";

    // streamupload
    File::streamUpload($filepath, $filename, $file, true);

    return response()->json(['uploaded' => true, 'uuid' => $uuid]);
}
// there's a service provider used to create a new macro on the File:: facade, providing the facility for appropriately handling the stream:
public function boot()
{
    File::macro('streamUpload', function($path, $fileName, $file, $overWrite = true) {
        $resource = fopen($file->getRealPath(), 'r+');

        $storageClient = new StorageClient([
            'projectId'   => 'myprjectid',
            'keyFilePath' => '/my/path/to/servicejson.json',
        ]);
        $bucket = $storageClient->bucket('mybucket');
        $adapter = new GoogleStorageAdapter($storageClient, $bucket);
        $filesystem = new Filesystem($adapter);

        return $overWrite
            ? $filesystem->putStream($fileName, $resource)
            : $filesystem->writeStream($fileName, $resource);
    });
}
So to reiterate:
1) The React app chunks out blobs.
2) The server side determines whether it should create or append in Google Cloud Storage.
3) The server side succeeds.
4) The video inside the Google Cloud platform is corrupted.
However, the video file inside the Google Cloud container is corrupted and won't play. I'm unsure exactly why, but my guesses so far:
Some sort of dodgy MIME type problem. Different browsers seem to handle the codec/filetype from the MediaRecorder differently: e.g. Chrome seems to produce x-matroska (.mkv?), Firefox something different again. Ideally I would have a .webm container. Notice how I set the file name server side, and it isn't coming from the client; should it? I'm unsure how to force the MediaRecorder to use a specific mimeType. I thought the blobOptions option should do it, but changing the extension and MIME type seems to have little to no impact on the corruption occurring. (See the sketch after this list for checking supported recording types.)
Some sort of problem during upload where an HTTP request doesn't execute and finish in order - e.g.
chunk 1's upload completes second
chunk 2's upload completes first
chunk 3's upload completes third
I've sort of ruled this out because I think the chunks should be small enough.
Some sort of problem with Google Cloud Storage APIs that I'm using, perhaps in the wrong way? Does the cloud platform support streaming, and does this library send the correct params to do so?
Some sort of problem with how I'm uploading - should the axios headers be multipart formdata, or something else?
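Regarding the first guess, this is roughly how a supported recording type can be checked and forced with the plain MediaRecorder API (a sketch; I'm not sure whether the hook exposes an equivalent option, and stream here stands for the screen-capture MediaStream):

// Pick the first container/codec combination this browser can actually record.
const candidates = [
  'video/webm;codecs=vp9,opus',
  'video/webm;codecs=vp8,opus',
  'video/webm',
];
const mimeType = candidates.find((t) => MediaRecorder.isTypeSupported(t));

// stream stands for the screen-capture MediaStream (e.g. from getDisplayMedia)
const recorder = new MediaRecorder(stream, mimeType ? { mimeType } : {});
recorder.ondataavailable = (e) => console.log(e.data.type, e.data.size);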
This is the package I'm using for the Server side: https://github.com/Superbalist/flysystem-google-cloud-storage
Can anyone shed any light on how to achieve this goal of streaming up into Google Cloud without the video from the MediaRecorder being corrupted? Hopefully there's enough detail here in the question to help figure it out. The problem as illustrated isn't getting the file as far as Google Cloud, but rather the resulting file being unplayable in any video format.
Update
I've ordered my chunks client side now, and queued them properly before letting them reach the server. No difference to the output. As some have suggested - a single blob upload request works fine.
I tried using the resumable config param (from reading the source code it seems like chunks need to be a certain size before Google recognises them as a resumable upload):
$filesystem = new Filesystem($adapter, [
    'resumable' => true
]);
I'm not sure how https://cloud.google.com/storage/docs/performing-resumable-uploads is implemented within the libraries I'm using (or within the Google Cloud APIs themselves, if at all). Do I need to implement that myself? Documentation is light on Google's part.
Short version:
The first thing you should do is buffer the whole video locally and send a single payload to the server and on to Google Cloud. This will validate that your code is actually correct for a small video. Once you can verify this, you can move on to handling multi-chunk uploads.
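In terms of the code in the question, that would look roughly like this (a sketch; it assumes mediaBlob from the hook holds the complete recording once recording has stopped, and reuses videoUUID and CustomerVideoApi from the question):

// Once recording has stopped, mediaBlob should hold the complete recording,
// so it can be sent in a single request instead of chunk by chunk.
const uploadWholeRecording = () => {
  const formData = new FormData();
  // the third argument gives the part an explicit .webm filename
  formData.append("data", mediaBlob, videoUUID + ".webm");
  new CustomerVideoApi().uploadRecording(videoUUID, formData, (res) => {}, (err) => {});
};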
Longer version:
For starters, you aren't passing the uuid with the request, even though it's used on the server:
const uploadRecording = (uuid, data, fn, fnErr) => {
  axios
    .post(endpoint + "/stream/upload", data, {
      headers: {
        "Content-Type": "multipart/form-data",
      },
    })
    .then(function (response) {
      fn(response);
    })
    .catch(function (error) {
      fnErr(error.response);
    });
};
Next, you can't trust how chunking will work; I think you verified this behaviour with the out-of-order results of your chunk logging. You need to assume that your server will receive chunks out of order and handle them correctly.
Each chunk you get on the server needs to be put in the right place; you can't just writeStream, you need to write to the explicit binary block. Specifically, specify the byte range on every request. From the Google docs:
curl -i -X PUT --data-binary @CHUNK_LOCATION \
  -H "Content-Length: CHUNK_SIZE" \
  -H "Content-Range: bytes CHUNK_FIRST_BYTE-CHUNK_LAST_BYTE/TOTAL_OBJECT_SIZE" \
  "SESSION_URI"
CHUNK_LOCATION is the local path to the chunk that you're currently uploading.
CHUNK_SIZE is the number of bytes you're uploading in the current request, for example 524288.
CHUNK_FIRST_BYTE is the starting byte in the overall object that the chunk you're uploading contains.
CHUNK_LAST_BYTE is the ending byte in the overall object that the chunk you're uploading contains.
TOTAL_OBJECT_SIZE is the total size of the object you are uploading.
SESSION_URI is the value returned in the Location header when you initiated the resumable upload.
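As an illustration of that protocol (not of the Flysystem adapter's API), here is a rough Node sketch of the raw resumable-upload calls, assuming an OAuth ACCESS_TOKEN, BUCKET and OBJECT_NAME are available:

const axios = require('axios');

// 1. Start a resumable upload session. ACCESS_TOKEN, BUCKET and OBJECT_NAME are
//    placeholders; the Flysystem adapter would normally handle auth for you.
async function startSession() {
  const res = await axios.post(
    `https://storage.googleapis.com/upload/storage/v1/b/${BUCKET}/o?uploadType=resumable&name=${OBJECT_NAME}`,
    null,
    { headers: { Authorization: `Bearer ${ACCESS_TOKEN}`, 'X-Upload-Content-Type': 'video/webm' } }
  );
  return res.headers.location; // the SESSION_URI from the docs above
}

// 2. Upload one chunk (a Buffer) at an explicit byte offset. Every chunk except
//    the last must be a multiple of 256 KiB; use '*' as the total while it is unknown.
async function uploadChunk(sessionUri, chunk, firstByte, totalSize) {
  const lastByte = firstByte + chunk.length - 1;
  await axios.put(sessionUri, chunk, {
    headers: {
      'Content-Length': chunk.length,
      'Content-Range': `bytes ${firstByte}-${lastByte}/${totalSize || '*'}`,
    },
    // Google answers 308 while it still expects more chunks
    validateStatus: (s) => s === 200 || s === 201 || s === 308,
  });
}

The 308 responses include a Range header telling you how many bytes Google has actually persisted, which is useful for retries.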
Try to eliminate as many variables as possible and pinpoint where exactly the file is getting corrupted.
Since you are using a React (JS) -> Laravel (PHP) -> Google Cloud path, the first thing I would suggest is to test each step separately:
React -> Laravel: save the file on your server and check whether it's corrupted at this point.
Laravel -> Google Cloud: load a file from the server filesystem, upload it to the cloud, and see whether it gets corrupted.
I don't have experience with Google Cloud, but I did something very similar with AWS and found that their video uploading service was extremely picky about the requests (including the order of the headers that were sent).
Try to compare the specs of the service you are using with your input, build the smallest possible thing that works, and start adding variables until you get to the final state.
Also, I don't see any kind of data ordering in your code.
If your chunks are sent close to each other, which is highly likely with streaming, there is a chance that they will arrive in a different order than they were sent. If you just append them to a file without any control over the ordering, the file will indeed get corrupted. I'm not sure whether, for webm, that would break just parts of the video or kill the entire thing.
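If you do stay with chunked uploads, one simple way to make ordering explicit is to number each chunk on the client so the server can write it at the right position. A sketch based on the onChunk from the question (the index and offset fields are assumptions; the current Laravel code does not read them):

let chunkIndex = 0;
let byteOffset = 0;

const onChunk = (binaryData) => {
  const formData = new FormData();
  formData.append("data", binaryData);
  formData.append("index", chunkIndex++);   // explicit ordering of this chunk
  formData.append("offset", byteOffset);    // byte position inside the final file
  byteOffset += binaryData.size;
  new CustomerVideoApi().uploadRecording(videoUUID, formData, (res) => {}, (err) => {});
};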
I am trying to save a variable's data into a text file and update the file every time the variable changes. I found solutions in Node.js and vanilla JavaScript, but I cannot find a solution for React.js.
Actually, I am trying to store a Facebook long-lived access token in a text file that I would like to use in the future, but when I try importing 'fs' and calling its createFile and appendFile methods I get an error saying the method doesn't exist.
Please help me out. Here is the code:
window.FB.getLoginStatus((resp) => {
  if (resp.status === 'connected') {
    const accessToken = resp.authResponse.accessToken;
    try {
      axios.get(`https://graph.facebook.com/oauth/access_token?client_id=CLIENT_id&client_secret=CLIENT_SECRET&grant_type=fb_exchange_token&fb_exchange_token=${accessToken}`)
        .then((response) => {
          console.log("Long Live Access Token " + response.data.access_token + " expires in " + response.data.expires_in);
          let longLiveAccessToken = response.data.access_token;
          let expiresIn = response.data.expires_in;
        })
        .catch((error) => {
          console.log(error);
        });
    }
    catch (e) {
      console.log(e.description);
    }
  }
});
React is a frontend library. It's meant to be executed in the browser, which for security reasons does not have access to the file system. You can make React render on the server, but the example code you're showing is clearly frontend code; it uses the window object. It doesn't even include anything React-related at first sight: it mainly consists of an Ajax call to Facebook made via the Axios library.
So your remaining options are basically these:
Create a text file and let the user download it.
Save the file content in local storage for later access from the same browser.
Save the contents in online storage (which could also be localhost).
Could you specify whether any of these methods would fit your needs, so I can explain further with sample code if necessary?
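For instance, options 1 and 2 can be done entirely in the browser, roughly like this (a sketch; longLiveAccessToken comes from your snippet):

// Option 1: offer the token as a downloadable text file, generated in the browser.
const blob = new Blob([longLiveAccessToken], { type: 'text/plain' });
const url = URL.createObjectURL(blob);
const a = document.createElement('a');
a.href = url;
a.download = 'access_token.txt';
a.click();
URL.revokeObjectURL(url);

// Option 2: keep it in the browser's local storage for later sessions.
localStorage.setItem('fbLongLiveAccessToken', longLiveAccessToken);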
I am trying to build a simple photo upload app on Ionic (Cordova). I am using the cordovaImagePicker plugin to have the user select images from the mobile device. This plugin returns an array of paths on the device.
For handling the upload part I am using jquery-file-upload (mostly because that is what I used for the browser version and I am doing all kinds of processing for which I have the code ready). The problem, however, is that jquery-file-upload expects to work with an <input type="file"> element, which produces a JavaScript File object containing all kinds of metadata.
So in order to get the cordovaImagePicker to work with jquery-file-upload, I figure I have to convert the filepath to a File object. Below I am using the cordova file plugin to achieve this:
$cordovaImagePicker.getPictures($scope.pickOptions).then(function(filelist) {
  $.each(filelist, function (index, filepath) {
    $window.resolveLocalFileSystemURL(filepath, function(fileEntry) {
      fileEntry.file(function(file) {
        var reader = new FileReader();
        reader.onloadend = function(e) {
          var fileObj = new File([this.result], "filename.jpg", { type: "image/jpeg" });
          // send filelist from cordovaImagePicker to jquery-fileupload as if through file input
          $('#fileupload').fileupload('send', { files: fileObj });
        };
        reader.readAsArrayBuffer(file);
      }, function(e) { $scope.errorHandler(e); });
    }, function(e) { $scope.errorHandler(e); });
  });
}, function(error) {
  // error getting photos
  console.log('Error selecting images through $cordovaImagePicker');
});
So first of all, this is not really working correctly; apparently I am doing something wrong, since, for example, the type attribute ends up being another object that contains the type attribute with the correct values (and other such weird issues). I would be happy if someone could point out what I am doing wrong.
Surely there must be something (cordova plugin?) that I am not aware of that does this conversion for me (including for example adding a thumbnail)? Alternatively, maybe there is something that can easily make jquery-file-upload work with filepaths? I couldn't find anything so far...
However, it feels like I am trying too hard to connect two components that were just not built to work together (File objects vs. filepaths), and maybe I should just rewrite the processing and use the Cordova file transfer plugin?
I ended up rewriting the uploader with cordova-file-transfer, which works like a charm. I wasted more time trying to work around it than it took to rewrite it from scratch.
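For reference, the upload with cordova-plugin-file-transfer looks roughly like this (a sketch; the endpoint URL and field names are placeholders):

var options = new FileUploadOptions();
options.fileKey = 'file';                  // form field name expected by the server
options.fileName = filepath.substring(filepath.lastIndexOf('/') + 1);
options.mimeType = 'image/jpeg';

var ft = new FileTransfer();
ft.upload(
  filepath,
  encodeURI('https://example.com/upload'), // placeholder endpoint
  function (result) { console.log('Uploaded ' + result.bytesSent + ' bytes'); },
  function (error) { console.log('Upload error: ' + error.code); },
  options
);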
I am working on a MEAN.js application generated using https://github.com/DaftMonk/generator-angular-fullstack. I am trying to generate a .pdf file using PhantomJS and download it to the browser.
The issue is that the downloaded .pdf file always shows blank pages, regardless of the number of pages. The original file on the server is not corrupt. When I investigated further, I found that the downloaded file is always much larger than the original file on disk. Also, this issue happens only with .pdf files; other file types work fine.
I've tried several methods like res.redirect('http://localhost:9000/assets/exports/receipt.pdf');, res.download('client\\assets\\exports\\receipt.pdf'),
var fileSystem = require('fs');
var stat = fileSystem.statSync('client\\assets\\exports\\receipt.pdf');

res.writeHead(200, {
  'Content-Type': 'application/pdf',
  'Content-Length': stat.size
});

var readStream = fileSystem.createReadStream('client\\assets\\exports\\receipt.pdf');
return readStream.pipe(res);
and I've even tried https://github.com/expressjs/serve-static, with no change in the result.
I am new to nodejs. What is the best way to download a .pdf file to the browser?
Update:
I am running this on a Windows 8.1 64-bit computer.
I had corruption when serving static pdfs too. I tried everything suggested above. Then I found this:
https://github.com/intesso/connect-livereload/issues/39
In essence the usually excellent connect-livereload (package ~0.4.0) was corrupting the pdf.
So just get it to ignore pdfs via:
app.use(require('connect-livereload')({ignore: ['.pdf']}));
now this works:
app.use('/pdf', express.static(path.join(config.root, 'content/files')));
...great relief.
Here is a clean way to serve a file from Express, using an attachment header to make sure the file is downloaded:
var path = require('path');
var mime = require('mime');
var fs = require('fs');

app.get('/download', function(req, res) {
  // Here do whatever you need to get your file
  var filename = path.basename(file);
  var mimetype = mime.lookup(file);

  res.setHeader('Content-disposition', 'attachment; filename=' + filename);
  res.setHeader('Content-type', mimetype);

  var filestream = fs.createReadStream(file);
  filestream.pipe(res);
});
There are a couple of ways to do this:
If the file is a static one, like a brochure or readme, then you can tell Express that the folder has static files (and they should be available directly) and keep the file there. This is done using the static middleware:
app.use(express.static(pathtofile));
Here is the link: http://expressjs.com/starter/static-files.html
Now you can directly open the file using the url from the browser like:
window.open('http://localhost:9000/assets/exports/receipt.pdf');
or
res.redirect('http://localhost:9000/assets/exports/receipt.pdf');
should be working.
The second way is to read the file; the data will come as a buffer. Actually, it should be recognised if you send it directly, but you can try converting it to a base64 encoding using:
var base64String = buf.toString('base64');
then set the content type:
res.writeHead(200, {
  'Content-Type': 'application/pdf',
  'Content-Length': stat.size
});
and send the data as response.
I will try to put an example of this.
EDIT: You don't even need to encode it. You may still try that, but I was able to make it work without encoding it.
Plus, you also do not need to set the headers; Express does it for you. The following is a snippet of API code written to get the PDF in case it is not public/static. You need an API to serve the PDF:
router.get('/viz.pdf', function(req, res) {
  require('fs').readFile('viz.pdf', function(err, data) {
    res.send(data);
  });
});
Lastly, note that the URL for getting the PDF has a .pdf extension; this is so the browser recognises that the incoming file is a PDF. Otherwise it will save the file without any extension.
Usually, if you are using Phantom to generate a PDF, the file will be written to disk and you have to supply the path and a callback to the render function.
router.get('/pdf', function(req, res) {
  // phantom initialization and generation logic
  // supposing you have the generation code above
  page.render(filePath, function (err) {
    var filename = 'myFile.pdf';
    res.setHeader('Content-type', "application/pdf");
    fs.readFile(filePath, function (err, data) {
      // if the file was read into the buffer without errors you can delete it to save space
      if (err) throw err;
      fs.unlink(filePath);
      // send the file contents
      res.send(data);
    });
  });
});
I don't have experience with the frameworks that you have mentioned, but I would recommend using a tool like Fiddler to see what is going on. For example, you may not need to add a Content-Length header, since you are streaming and your framework does chunked transfer encoding, etc.