How to switch from camera recording to on-screen recording using the RecordRTC API?

I am a newbie to audio/video recording. The script works well for my webcam and audio recorder. However, I would like to know how to implement it similarly to the available Chrome extension, so that I can record a tab and all the audio involved. Thanks in advance. I am currently using v5.4.0.

Here is an open-source Chrome extension that supports tab, full-screen, and application-window recording:
https://github.com/muaz-khan/Chrome-Extensions/tree/master/screen-recording
You can use the tabCapture API to capture a MediaStream object, then record the resulting stream using the MediaRecorder API or RecordRTC.
RecordRTC can record the following kinds of streams:
Stream captured from a webcam or microphone
Stream captured using tabCapture API
Stream captured using desktopCapture API
Stream captured from <canvas> or <video> elements using captureStream API
Streams generated by WebAudio API
e.g.
var capturedStream = videoElement.captureStream();
var recorder = RecordRTC(capturedStream, {
    type: 'video'
});
Or:
var recorder = RecordRTC(tabCaptureStream, {
    type: 'video'
});
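The tabCaptureStream above would come from the chrome.tabCapture API, which is only available inside a Chrome extension (not an ordinary page). A minimal sketch, assuming the extension's manifest declares the tabCapture permission:
// Runs in an extension's background/event page, not in a normal web page.
chrome.tabCapture.capture({ audio: true, video: true }, function(stream) {
    if (!stream) {
        console.error(chrome.runtime.lastError);
        return;
    }
    var recorder = RecordRTC(stream, { type: 'video' });
    recorder.startRecording();
});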
Simply make sure that you're getting a MediaStream object from one of the APIs above; you can then use RecordRTC to record that stream.
Regarding "replacing video track with secondary camera track or screen track", you can use addTrack, removeTrack as well as replaceTrack methods. However I'm not sure if MediaRecorder API can record replaced track:
// using Firefox
theStreamYouAreRecording.replaceTrack( screenTrack );
// using Chrome or Firefox
theStreamYouAreRecording.addTrack ( screenTrack );
So you must either record camera or screen. Do not replace tracks.
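If the goal is simply to switch from recording the camera to recording the screen from an ordinary page (no extension), newer browsers expose navigator.mediaDevices.getDisplayMedia. A minimal sketch; the 'playback' element id and the 10-second timeout are only placeholders for illustration:
async function recordScreen() {
    // Ask the user to pick a screen, window, or tab (newer browsers only).
    var stream = await navigator.mediaDevices.getDisplayMedia({ video: true, audio: true });
    var recorder = RecordRTC(stream, { type: 'video' });
    recorder.startRecording();

    // Stop after 10 seconds, just for the sake of the example.
    setTimeout(function() {
        recorder.stopRecording(function() {
            var blob = recorder.getBlob();
            document.getElementById('playback').src = URL.createObjectURL(blob);
            // Release the capture so the browser's "sharing" indicator goes away.
            stream.getTracks().forEach(function(track) { track.stop(); });
        });
    }, 10000);
}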

Related


How to display IP camera feed from an RTSP url onto reactjs app page?

I want to display the live footage of an IP camera on a web page built using ReactJS.
I found some solutions on the internet, but they rely on an HTTP URL. However, my camera has a username and password, and I don't know how to embed them into an HTTP URL.
I have a functioning RTSP URL with the username/password.
I would like to have a video element inside the react app like this:
render() {
  return (
    <div>
      <video
        ....
      />
    </div>
  );
}
My functioning RTSP URL looks like: rtsp://username:password@172.16.3.18:554
Your solution should consist of two parts: a Node.js application that reads the stream from the RTSP source, and, on the client side, a canvas that receives that stream from the Node.js application.
Think of it as a "proxy".
On Server:
const Stream = require('node-rtsp-stream');
const stream = new Stream({
  name: 'name',
  streamUrl: 'rtsp://username:password@172.16.3.18:554',
  wsPort: 9999,
  ffmpegOptions: { // options: ffmpeg flags
    '-stats': '', // an option with no necessary value uses a blank string
    '-r': 30 // options with required values specify the value after the key
  }
});
On Client:
const client = new WebSocket('ws://NODEJS_SERVER_IP:9999');
const player = new jsmpeg(client, {
  canvas: canvas // canvas should be a <canvas> DOM element
});
There is a good npm you can use that does that:
https://www.npmjs.com/package/node-rtsp-stream
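To embed this in the React app from the question, the client-side piece can be wrapped in a component. A rough sketch, assuming the jsmpeg player script is available as a global and the node-rtsp-stream server above is listening on port 9999 (the component name is illustrative):
import React from 'react';

class CameraFeed extends React.Component {
  componentDidMount() {
    // Connect to the node-rtsp-stream WebSocket and attach the jsmpeg player
    // to the canvas rendered below.
    this.client = new WebSocket('ws://NODEJS_SERVER_IP:9999');
    this.player = new jsmpeg(this.client, { canvas: this.canvas });
  }

  componentWillUnmount() {
    // Close the socket so the feed stops when the component unmounts.
    this.client.close();
  }

  render() {
    return <canvas ref={(el) => { this.canvas = el; }} />;
  }
}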
I think you need a special media player. Have you tried hls.js? You can include the library, build your own component, and pass the stream link to it so it can play.
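For instance, a rough sketch of such a component, assuming the RTSP feed has already been remuxed to an HLS playlist (e.g. by ffmpeg or a media server) and is reachable at the hlsUrl prop:
import React from 'react';
import Hls from 'hls.js';

class HlsPlayer extends React.Component {
  componentDidMount() {
    if (Hls.isSupported()) {
      // Attach hls.js to the <video> element and feed it the playlist URL.
      this.hls = new Hls();
      this.hls.loadSource(this.props.hlsUrl);
      this.hls.attachMedia(this.video);
    } else {
      // Safari can play HLS natively.
      this.video.src = this.props.hlsUrl;
    }
  }

  componentWillUnmount() {
    if (this.hls) this.hls.destroy();
  }

  render() {
    return <video ref={(el) => { this.video = el; }} autoPlay controls />;
  }
}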

I'm concerned about memory leak from RecordRTC url object

Using the RecordRTC library, I'm adding webcam video recording, replay, and saving functionality to my React web application. Coming from native application development, I'm always concerned about potential memory leaks, which can often be diagnosed by checking system memory or by a lagging UI. In a web application, what diagnostics can you perform to see whether a JS object is being created and released properly, without leaks?
My concern appeared when I began integrating the replay functionality shown below. The requestUserMedia method instantiates the webcam stream when the React component mounts, and the src state gets assigned the URL of the video stream. Afterwards, any time the stop button is clicked, a new URL representing a WebM file of the recorded video is created and assigned to the same src state. The streaming and replay functionality works as planned, but I'm concerned that repeatedly creating and replaying videos, each of which creates a new URL wrapping a WebM blob, will simply leak memory until the browser is refreshed.
Are there any browser-level checks I could run to diagnose this? Or is this something I shouldn't be concerned about at all in the web application world?
requestUserMedia() {
  captureUserMedia((stream) => {
    // Live preview: wrap the webcam stream in an object URL and store it in state.
    this.setState({ src: window.URL.createObjectURL(stream) });
  });
}

handleRecord() {
  if (!this.state.record) {
    captureUserMedia((stream) => {
      var recorder = RecordRTC(stream, {
        type: 'video'
      });
      recorder.startRecording();
      // Keep a reference to the recorder (via setState rather than mutating state directly).
      this.setState({ recordVideo: recorder });
    });
  } else {
    var recorder = this.state.recordVideo;
    recorder.stopRecording(() => {
      // Wrap the recorded blob in a new object URL for replay.
      var blob = recorder.getBlob();
      var url = window.URL.createObjectURL(blob);
      this.setState({ src: url });
    });
  }
  let newRecordState = !this.state.record;
  this.setState({
    record: newRecordState
  });
}
Setting a video's src to a string created with URL.createObjectURL(stream) has been deprecated for exactly that reason; set video.srcObject = stream instead.
For the second createObjectURL (the recorded blob), use URL.revokeObjectURL to revoke the previous URL before creating a new one.
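A minimal sketch of both suggestions applied to the code above; the this.video ref is an assumption for illustration and is not part of the original component:
// 1. Live preview: attach the stream directly instead of creating a blob URL.
captureUserMedia((stream) => {
  this.video.srcObject = stream;
});

// 2. Recorded clips: revoke the previous object URL before storing a new one.
recorder.stopRecording(() => {
  if (this.state.src) {
    window.URL.revokeObjectURL(this.state.src);
  }
  this.setState({ src: window.URL.createObjectURL(recorder.getBlob()) });
});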

Joining the two audio files in node js

I am developing a web page where a user selects two audio files, and the app joins them into a single output audio file. I am using Node.js on the back end and AngularJS on the client side. How can I achieve this? I went through many libraries, but nothing suited it.
I'm looking at a similar use case at the moment. The libraries aren't great, as most need a large program to be installed in your server environment. Examples are:
sox-audio: should be fine as long as you don't need iterative (variable number of files) concatenation, but it requires SoX to be installed.
audio-concat: a wrapper for ffmpeg, but it also requires ffmpeg to be installed (if ffmpeg is already available, you could also call it directly, as sketched below).
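A minimal sketch of calling ffmpeg directly from Node.js using its concat protocol; it assumes ffmpeg is on the PATH and that the inputs are MP3 files, and the file names are only placeholders:
const { execFile } = require('child_process');

// Joins MP3 files without re-encoding using ffmpeg's "concat:" protocol.
function concatMp3(inputFiles, outputFile, callback) {
  const args = ['-i', 'concat:' + inputFiles.join('|'), '-acodec', 'copy', outputFile];
  execFile('ffmpeg', args, (err) => callback(err));
}

concatMp3(['first.mp3', 'second.mp3'], 'joined.mp3', (err) => {
  if (err) return console.error(err);
  console.log('Done!');
});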
Or, if you don't need the output audio to be seekable, you could simply do it with streams. Concept:
var fs = require('fs');
var writeStream = fs.createWriteStream('outputAudio.mp3'); // or whatever you want to call it

// inputFiles should be an array of paths to the audio files you want to stitch
function recursiveStreamWriter(inputFiles) {
  if (inputFiles.length === 0) {
    console.log('Done!');
    writeStream.end(); // close the output file once every input has been appended
    return;
  }
  let nextFile = inputFiles.shift();
  var readStream = fs.createReadStream(nextFile);
  // {end: false} keeps the write stream open for the next input file
  readStream.pipe(writeStream, { end: false });
  readStream.on('end', () => {
    console.log('Finished streaming an audio file');
    recursiveStreamWriter(inputFiles);
  });
}
This works and the transitions are fine, but audio players struggle to seek within the result. A decent full example of the recursive stream method can be found here.
As an example, I am combining an iPhone ringtone with a Nokia ringtone by piping the read streams into one write stream, one after the other.
const fs = require('fs');

// Create two read streams
let readerstream1 = fs.createReadStream('iphone-6.mp3');
let readerstream2 = fs.createReadStream('Nokia.mp3');

// Create a write stream
let writestream = fs.createWriteStream('NewRingtone.mp3');

// Pipe one after the other: keep the write stream open after the first file,
// then start the second once the first has finished.
readerstream1.pipe(writestream, { end: false });
readerstream1.on('end', () => {
  readerstream2.pipe(writestream);
});

How to play an attachment video from the timeline item

I want to insert a timeline item with a video attachment so that, if the user selects a specific menu item, Glass plays the video. I'm doing it all from a .NET app like this; please correct me if I'm doing it wrong.
TimelineItem item = new TimelineItem();
item.MenuItems.Insert(0, new MenuItem() { Action = "what is the action to use?", ... });
request = Service.Timeline.Insert(item, attachment, contentType);
request.Upload();
I would like to know: do I need a menu item, and if so, what is the action I should use?
Currently I'm sending the video attachment, but there is no way to play the video.
Any help is greatly appreciated.
You don't need to specify any menuItems, but your timeline item should not contain HTML content.
Make sure that your video is in a supported format: once it's inserted, and Glass has synced and fully downloaded the attached video, it should start playing as soon as you land on the item in your timeline.
This works using the QuickStart project for Java (mirror-java-starter-demo):
https://github.com/googleglass/mirror-quickstart-java
Replace the lines near line 119 in MainServlet.java with this:
// Override the request parameters with a local video file and its content type.
URL url = new URL(req.getParameter("imageUrl"));
String contentType = req.getParameter("contentType");
url = new URL("http://localhost:8888/static/videos/video.mp4");
contentType = "video/mp4";
// Optional: read the bytes once just to check the attachment's size.
byte[] b = ByteStreams.toByteArray(url.openStream());
int i = b.length;
// Open a fresh stream for the actual upload.
InputStream temp = url.openStream();
MirrorClient.insertTimelineItem(credential, timelineItem, contentType, temp);
Then run the project and click the "A Picture" button to upload a video named video.mp4 from a new folder called videos inside static. I used a 10-second clip I recorded with Glass (6.30 MB).
Note that when running with App Engine 1.7.6 on a Windows machine I got an error when uploading, but changing to 1.8.0 made the issue go away.
Depending on your network connection it might take a little bit for the video to show in your timeline, but mine plays.
