I am developing a web page where a user will select two audio files, and the app will join them into a single output audio file. I am using Node.js on the back end and AngularJS on the client side. How can I achieve this? I went through many libraries, but nothing suits it.
I'm looking at a similar use case at the moment. The libraries aren't great, as most need a big program to be installed in your server environment. Examples are:
sox-audio: should be fine as long as you don't need iterative concatenation (a variable number of files), but it requires an installation of SoX.
audio-concat: a wrapper for ffmpeg, but it also needs an installation of ffmpeg.
Or, if you don't need the output audio to be seekable, you could simply do it with streams. Concept:
var fs = require('fs');
var writeStream = fs.createWriteStream('outputAudio.mp3'); // Or whatever you want to call it

// inputFiles should be an array of the paths to the audio files you want to stitch
function recursiveStreamWriter(inputFiles) {
  if (inputFiles.length === 0) {
    console.log('Done!');
    writeStream.end(); // close the output file once every input has been appended
    return;
  }
  let nextFile = inputFiles.shift();
  var readStream = fs.createReadStream(nextFile);
  // {end: false} keeps the write stream open for the next file
  readStream.pipe(writeStream, { end: false });
  readStream.on('end', () => {
    console.log('Finished streaming an audio file');
    recursiveStreamWriter(inputFiles);
  });
}
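For example, calling it with two hypothetical input paths (the file names are just placeholders):

// The files are appended to outputAudio.mp3 in array order
recursiveStreamWriter(['intro.mp3', 'outro.mp3']);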
This works and the transitions are fine, but audio players struggle to seek within the output. A decent full example of the recursive stream method can be found here.
As an example, I am combining an iPhone ringtone with a Nokia ringtone by piping the read streams into a write stream, one after the other.
const fs = require('fs');

// Create two reader streams
let readerstream1 = fs.createReadStream('iphone-6.mp3');
let readerstream2 = fs.createReadStream('Nokia.mp3');

// Create a write stream
let writestream = fs.createWriteStream('NewRingtone.mp3');

// Pipe one after the other; {end: false} stops the first pipe from closing the
// write stream, and the second file is only piped once the first has ended
readerstream1.pipe(writestream, { end: false });
readerstream1.on('end', () => {
  readerstream2.pipe(writestream);
});
I have a TensorFlow.js script/app that runs in Node.js using tfjs-node and the Universal Sentence Encoder (USE).
Each time the script runs, it downloads the 525 MB USE model file.
Is there any way to load the Universal Sentence Encoder model file from the local file system, to avoid downloading such a large file every time I need to run the Node.js TensorFlow script?
I've noted several similar model-loading examples, but none that work with the Universal Sentence Encoder, as it does not appear to have the same type of functionality. Below is a stripped-down example of a functioning script that downloads the 525 MB file every time it executes.
Any help or recommendations would be appreciated.
const tf = require('@tensorflow/tfjs-node');
const use = require('@tensorflow-models/universal-sentence-encoder');

// No form of Universal Sentence Encoder loader appears to be present
let model = tf.loadGraphModel('file:///Users/ray/Documents/tf_js_model_save_load2/models/model.json');

use.load().then(model => {
  const sentences = [
    'Hello.',
    'How are you?'
  ];
  model.embed(sentences).then(embeddings => {
    embeddings.print(true /* verbose */);
  });
});
I've tried several recommendations that appear to work for other models, but not for the Universal Sentence Encoder, such as:
const tf = require('@tensorflow/tfjs');
const tfnode = require('@tensorflow/tfjs-node');

async function loadModel() {
  const handler = tfnode.io.fileSystem('tfjs_model/model.json');
  const model = await tf.loadLayersModel(handler);
  console.log("Model loaded");
}
loadModel();
It's not a model issue per se; it's a module issue.
The model can be loaded any way you want, but the @tensorflow-models/universal-sentence-encoder module implements only one specific internal way of loading the actual model data.
Specifically, it internally uses tf.util.fetch.
The solution? Use some library (or write your own) to register a global fetch handler that knows how to handle file:// prefixes; if a global fetch handler exists, tf.util.fetch will simply use it.
Hint: https://gist.github.com/joshua-gould/58e1b114a67127273eef239ec0af8989
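For illustration, a minimal sketch of that idea, assuming node-fetch (v2) as the network fallback; the handler itself is an assumption, not part of the module's API:

const fs = require('fs');
const nodeFetch = require('node-fetch'); // assumed dependency (v2 API exposes Response)

// Register a global fetch handler that serves file:// URLs from the local
// file system and falls back to the network for everything else.
global.fetch = async (url, init) => {
  if (typeof url === 'string' && url.startsWith('file://')) {
    const filePath = url.replace('file://', '');
    return new nodeFetch.Response(fs.readFileSync(filePath));
  }
  return nodeFetch(url, init);
};

With such a handler registered, any file:// URL the module tries to fetch resolves from disk; how you point the loader at a locally saved copy of the model and vocab depends on the module version, and the linked gist shows one complete way to wire it up.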
I can use ffmpeg in JS, but how can I use this code in React?
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegPath);

ffmpeg('video.mp4')
  .setStartTime('00:00:03')
  .setDuration('10')
  .output('video_out.mp4')
  .on('end', function() {
    console.log('conversion done');
  })
  .on('error', function(err) {
    console.log('error: ', err);
  })
  .run();
If I understand correctly, you want to modify a video file before uploading it?
I'm afraid this is pretty hard to do in the browser. Browsers usually don't have easy access to the local file system of the computer and have trouble reading and writing to disk.
The code you have included is meant for a Node environment. A hint is the use of the require function on lines 1 and 2, which Node provides natively.
My proposed solution would be:
1. The user uploads the original (full-length) video to your server through a React app.
2. The server processes the file (using a Node environment and the code that you have copied) and removes the first three seconds; a rough sketch follows after this list. A tutorial on how to receive video uploads in Node.js can be found here: froala nodejs video upload
3. The server saves the new file and deletes the original.
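Here is what step 2 could look like on the server, assuming an Express app with multer receiving the upload; the route, field name, and paths are illustrative, not taken from your code:

const express = require('express');
const multer = require('multer');
const fs = require('fs');
const ffmpegPath = require('@ffmpeg-installer/ffmpeg').path;
const ffmpeg = require('fluent-ffmpeg');
ffmpeg.setFfmpegPath(ffmpegPath);

const app = express();
const upload = multer({ dest: 'uploads/' }); // temporary storage for the raw upload

app.post('/upload', upload.single('video'), (req, res) => {
  const trimmedPath = req.file.path + '_trimmed.mp4';
  ffmpeg(req.file.path)
    .setStartTime('00:00:03')          // drop the first three seconds
    .output(trimmedPath)
    .on('end', () => {
      fs.unlinkSync(req.file.path);    // delete the original upload
      res.json({ file: trimmedPath });
    })
    .on('error', err => res.status(500).send(err.message))
    .run();
});

app.listen(3000);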
I hope this helps a bit.
Context: I am attempting to automate the inspection of EPS files to detect a list of attributes, such as whether the file contains locked layers, embedded bitmap images, etc.
So far we have found that some of these things can be detected by inspecting the raw EPS file data and its accompanying metadata (similar to the information returned by ImageMagick). However, it seems that in files created by Illustrator 9 and above, the vast majority of this information is encoded within the "AI9_DataStream" portion of the file. This data is ASCII85-encoded and compressed.
We have had some success getting at this data by using https://github.com/huandu/node-ascii85 to decode and Node's zlib library to decompress/unzip (our project is written in Node/JavaScript). However, it seems that in roughly half of our test cases/files the unzipping portion fails, throwing Z_DATA_ERROR / "incorrect data check".
Our method responsible for trying to decode:
import zlib from 'zlib';
import ascii85 from 'ascii85'; // https://github.com/huandu/node-ascii85

export const decode = eps =>
  new Promise((resolve, reject) => {
    const lineDelimiters = /\r\n%|\r%|\n%/g;
    const internal = eps.match(
      /(%AI9_DataStream)([\s\S]*?)(AI9_PrivateDataEnd)/
    );
    const hasDataStream = internal && internal.length >= 2;
    if (!hasDataStream) {
      resolve('');
      return;
    }
    const encoded = internal[2].replace(lineDelimiters, '');
    const decoded = ascii85.decode(encoded);
    try {
      zlib.unzip(decoded, (err, buffer) => {
        // Some files crash this process; for now we need to allow it
        if (err) resolve('');
        else resolve(buffer.toString('utf8'));
      });
    } catch (err) {
      reject(err);
    }
  });
I am wondering if anyone out there has had any experience with this issue, has some insight into what might be causing it, or knows of an alternative avenue to explore for reliably decoding this data. Information on this topic seems a bit sparse, so really anything that could get us going in the right direction would be very much appreciated.
Note: the buffers produced by the ASCII85 decoding all have the same 78 9c header, which should indicate standard zlib compression (and the data does in fact decompress into parseable output without error about half the time).
Apparently we were misreading something about the ascii85 encoding. There is a ~> delimiter at the end of the encoded block that needs to be omitted from the string before decoding and subsequent unzipping.
So instead of:
/(%AI9_DataStream)([\s\S]*?)(AI9_PrivateDataEnd)/
Use:
/(%AI9_DataStream)([\s\S]*?)(~>)/
And you can get to the correct encoded/compressed data. So far this has produced human-readable / regexable data in all of our current test cases, so unless we are thrown another curve, that seems to be the answer.
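For clarity, this is how the change slots into the decode method shown earlier; only the match changes, and the rest of the function stays the same:

// Capture up to (but not including) the "~>" end-of-data delimiter
const internal = eps.match(/(%AI9_DataStream)([\s\S]*?)(~>)/);
const encoded = internal[2].replace(lineDelimiters, '');
const decoded = ascii85.decode(encoded); // now decompresses cleanly via zlib.unzip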
The only reliable method for getting content from PostScript is to run it through a PostScript interpreter, because PostScript is a programming language.
If you stick to a specific workflow with well understood input, then you may have some success in simple parsing, but that's about the only likely scenario which will work.
Note that EPS files don't have 'layers' and certainly don't have 'locked' layers.
You haven't actually pointed to a working example, but I suspect the content of the AI9_DataStream is not relevant to the EPS. It's probably a means for Illustrator to include its own native file format inside the EPS file without it affecting a PostScript interpreter. This is how it works with AI-produced PDF files.
This means that when you reopen the EPS file with Adobe Illustrator, it ignores the EPS and uses the embedded native file, which magically grants you the ability to edit the file, including features like layers, which cannot be represented in the EPS.
I have an entirely browser-based (i.e. no backend) application which analyzes XML data in files which average about 250MB each. The actual parsing and analysis happens in a web worker, which is fed data in 64KB chunks by a FileReader instance, and this is all quite performant.
I have a request from the client to expand this application so that it can generate a .zip file containing the original input file and the results of the analysis, and allow the user to save that file to her local machine. Generating a .zip file in memory with those contents isn't a problem. The problem lies in transferring that much data from the web worker which generates it back to the main browser thread, so that it can be saved; attempting to do this invariably provokes a crash or out-of-memory exception. (I've tried transferring strings all at once and a chunk at a time, and I've tried using an ArrayBuffer as a transferable object to avoid copying. All fail in the same fashion.)
Unfortunately, I don't know any way to invoke a file save operation directly from a worker thread. I know several methods of doing so from the main browser thread, but all of them require either the ability to create DOM nodes (which worker threads of course can't do), or the use of interfaces (e.g. msSaveBlob, saveAs) which no browser seems to expose to a worker thread. I've spent a while looking for possibilities on the web, but found nothing usable; FileWriterSync looked good, but only Chrome supports it, and I need to target IE and Firefox as well.
Is there a method I've overlooked for saving files directly from a web worker? If so, what is it? Or am I just out of luck here?
tl;dr demo
You don't need to copy the entire file back to the main thread at all. You don't even need to transfer it, in fact. First, a recap.
This is how to create a Blob from a typed array:
// Some arbitrary binary data
const mydata = new Uint16Array([1,2,3,4,5]);
// mydata vs. mydata.buffer does not seem to make any difference
const blob = new Blob([mydata], {type: "application/octet-stream"});
You can create an object URL, which is a browser-managed reference to the original Blob, accessible as a URL. I have done this with huge files without seeing any performance impact:
const url = URL.createObjectURL(blob);
This is how I typically download URLs:
const link = document.createElement("a");
link.download = "data.bin";
link.href = url;
link.appendChild(new Text("Download data"));
link.addEventListener("click", function() {
  this.parentNode.removeChild(this);
  // remember to free the object URL, but wait until the download is handled
  setTimeout(() => { URL.revokeObjectURL(url); }, 500);
});
document.body.appendChild(link);
You can trigger the download automatically by invoking a click event on that link. I prefer to let the user decide when to download.
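If you do want it to start automatically, the sketch is a single extra line after the link has been added to the document:

// Programmatically trigger the download instead of waiting for the user
link.click();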
So, all together:
worker.js
// Some arbitrary binary data
const mydata = new Uint16Array([1,2,3,4,5]);

self.onmessage = function(e) {
  console.log("Message: ", e.data);
  switch(e.data.name) {
    case "make-download":
      const blob = new Blob([mydata.buffer], {type: "application/octet-stream"});
      const url = URL.createObjectURL(blob);
      self.postMessage({name: "download-link", link: url});
      break;
    default:
      console.error("Unknown message:", e.data.name);
  }
}
main.js
var worker = new Worker("worker.js");

worker.addEventListener("message", function(e) {
  switch(e.data.name) {
    case "download-link": {
      if (e.data.error) {
        console.error("Download error: ", e.data.error);
      }
      else {
        const link = document.createElement("a");
        link.download = "data.bin";
        link.href = e.data.link;
        link.appendChild(new Text("Download data"));
        link.addEventListener("click", function() {
          this.parentNode.removeChild(this);
          // remember to free the object URL, but wait until the download is handled
          setTimeout(() => { URL.revokeObjectURL(e.data.link); }, 500);
        });
        document.body.appendChild(link);
      }
      break;
    }
    default:
      console.error("Unknown message:", e.data.name);
  }
});

function requestDownload() {
  worker.postMessage({name: "make-download"});
}
When I click Download in my demo, I can see the expected data in my hex editor.
Looks just fine :)
I am trying to build a simple photo upload app on Ionic (Cordova). I am using the cordovaImagePicker plugin to have the user select images from the mobile device. This plugin returns an array of paths on the device.
For handling the upload part I am using jquery-file-upload (mostly because that is what I used for the browser version, and I have code ready for all kinds of processing). The problem, however, is that jquery-file-upload expects to work with an input element (<input type="file">), which creates a JavaScript File object containing all kinds of metadata.
So in order to get the cordovaImagePicker to work with jquery-file-upload, I figure I have to convert the filepath to a File object. Below I am using the cordova file plugin to achieve this:
$cordovaImagePicker.getPictures($scope.pickOptions).then(function(filelist) {
  $.each(filelist, function(index, filepath) {
    $window.resolveLocalFileSystemURL(filepath, function(fileEntry) {
      fileEntry.file(function(file) {
        var reader = new FileReader();
        reader.onloadend = function(e) {
          fileObj = new File([this.result], "filename.jpg", {type: "image/jpeg"});
          // send filelist from cordovaImagePicker to jquery-fileupload as if through file input
          $('#fileupload').fileupload('send', {files: fileObj});
        };
        reader.readAsArrayBuffer(file);
      }, function(e) { $scope.errorHandler(e); });
    }, function(e) { $scope.errorHandler(e); });
  });
}, function(error) {
  // error getting photos
  console.log('Error selecting images through $cordovaImagePicker');
});
So first of all, this is not really working correctly; apparently I am doing something wrong, since, for example, the type attribute ends up being another object that contains the type attribute with the correct values (and other such weird issues). I would be happy if someone could point out what I am doing wrong.
Surely there must be something (a Cordova plugin?) that I am not aware of that does this conversion for me (including, for example, adding a thumbnail)? Alternatively, maybe there is something that can easily make jquery-file-upload work with file paths? I couldn't find anything so far...
However, it feels like I am trying too hard to force together two components that were just not built to work with each other (File objects vs. file paths), and maybe I should just rewrite the processing and use the cordova file transfer plugin?
I ended up rewriting the uploader with cordova-file-transfer, which works like a charm. I wasted more time trying to work around it than it took to rewrite it from scratch.
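For anyone landing on this later, a rough sketch of the cordova-file-transfer approach with ngCordova; the server URL and upload options below are illustrative, not my actual values:

$cordovaImagePicker.getPictures($scope.pickOptions).then(function(filelist) {
  filelist.forEach(function(filepath) {
    var options = {
      fileKey: 'file',
      fileName: filepath.substring(filepath.lastIndexOf('/') + 1),
      mimeType: 'image/jpeg'
    };
    // Upload straight from the device path; no File object conversion needed
    $cordovaFileTransfer.upload('https://example.com/upload', filepath, options)
      .then(function(result) {
        console.log('Upload finished', result);
      }, function(err) {
        console.log('Upload error', err);
      });
  });
});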