How to get WebRTC video-to-video streaming to decode video?

I have a working WebRTC connection between two people using PeerJS that captures a stream from a user video element and sends it to the other person. The video capture is as follows (working with React in TypeScript):
useEffect(() => {
  const srcVideo = document.getElementById('normalVideo');
  const sinkVideo = document.getElementById('sinkVideo');
  if (srcVideo === null) return;
  const setSinkSrc = () => {
    let stream;
    const fps = 0; // Setting to zero means frames are captured only when requestFrame() is called.
    // @ts-ignore
    stream = srcVideo.captureStream(fps);
    // @ts-ignore
    sinkVideo.srcObject = stream;
    setStreamOut(stream);
  };
  srcVideo.addEventListener('loadedmetadata', setSinkSrc);
  return () => {
    srcVideo.removeEventListener('loadedmetadata', setSinkSrc);
  };
}, [vidSrc]);
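As an aside (an assumption about intent, not a confirmed diagnosis): per the spec, the frame-rate argument and requestFrame() described in that comment belong to canvas capture, while HTMLMediaElement.captureStream() takes no frame-rate argument, so passing 0 here may not behave as the comment suggests. For comparison, a canvas-capture sketch:

const canvas = document.createElement('canvas');
// 0 fps: frames are produced only on demand via requestFrame()
const canvasStream = canvas.captureStream(0);
const [canvasTrack] = canvasStream.getVideoTracks();
// CanvasCaptureMediaStreamTrack exposes requestFrame() to push one frame
(canvasTrack as CanvasCaptureMediaStreamTrack).requestFrame();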
It is then sent out with PeerJS:
mySocket.on('user-connected', (theirPeerID: string, theirSocketID: string) => {
  if (streamOut) {
    const outConnection: Peer.MediaConnection = myPeer.call(theirPeerID, streamOut);
    // This is weird.
    // navigator.mediaDevices.getUserMedia({audio: true, video: true}).then(streamOut => myPeer.call(theirPeerID, streamOut));
  }
});
And the receiving user answers the call and connects the stream:
useEffect(() => {
  myPeer.on('call', (hostMediaConnection: Peer.MediaConnection) => {
    const sinkVideo = document.getElementById('sinkVideo');
    hostMediaConnection.answer();
    hostMediaConnection.on('stream', (hostStream: MediaStream) => {
      console.log(hostStream.getAudioTracks().length);
      console.log(hostStream.getVideoTracks().length);
      if (sinkVideo === null || sinkVideo === undefined) {
        console.log("Sink video either undefined or null");
        return;
      }
      // Do not use URL.createObjectURL();
      // @ts-ignore
      sinkVideo.srcObject = hostStream;
      // @ts-ignore
      sinkVideo.addEventListener('loadedmetadata', () => {
        // @ts-ignore
        sinkVideo.play();
      });
    });
    hostMediaConnection.on("error", (err) => {
      console.log(err.type, "%cMediaConnectionError", "color:green;", err);
    });
  });
}, []);
On a local stream, the video works just fine: the smaller sink element's data comes from the captured media stream via sinkVideo.srcObject = stream;.
However, when this runs, there is just a black screen on the consumer side. Audio is streamed, and can be heard, but no video is ever shown. Going into chrome://webrtc-internals, I correctly see two RTCPeerConnections: outbound data from the source, and inbound data for the sink. Audio is being transmitted, video is being transmitted, video frames are being decoded, and yet nothing: the screen is black for the consumer, where it should not be.
So, my question is: where am I going wrong? Video is apparently being sent out, PeerJS makes a successful connection from the source to the sink, and the sink is decoding video. Audio has absolutely no problem at any point getting sent out or being received, decoded, and heard by the sink.
I have a comment above a line of commented-out code:
// This is weird.
// navigator.mediaDevices.getUserMedia({audio: true, video: true}).then(streamOut => myPeer.call(theirPeerID, streamOut));
Because when I send out my audio and video as the media, the stream is processed just fine, and the sink gets the video. This is one of the reasons I know a successful WebRTC connection is being made: when I stream the user's webcam, data is sent.
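One hedged way to narrow this down (a diagnostic sketch, not a confirmed fix): inspect the captured video track before calling out, since a track that is live but muted or disabled is a common cause of "audio works, video is black".

// Diagnostic sketch: log the state of the outgoing video track in streamOut.
const [videoTrack] = streamOut.getVideoTracks();
console.log({
  readyState: videoTrack.readyState, // should be 'live'
  enabled: videoTrack.enabled,       // should be true
  muted: videoTrack.muted,           // true => no frames are flowing yet
});
videoTrack.onunmute = () => console.log('video track began producing frames');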

Related

React websockets disappear on refresh

So I have an open websocket connection, and I notice that once I refresh the page the websocket messages disappear. I've read online that this is supposed to happen. What is a good way to persist these messages so that they do not disappear? Right now I keep the websocket messages in React state. I've seen some suggest localStorage or cookies, but I don't think that is scalable, as there can be thousands of messages within minutes, which could overload browser storage. Below, I am using the react-use-websocket package to get the last message and store it inside a state array. That is the wrong approach; I need a longer-term storage solution.
const { lastJsonMessage, lastMessage, sendMessage, readyState, getWebSocket } = useWebSocket(resultUrl, {
  // Will attempt to reconnect on all close events, such as server shutting down
  shouldReconnect: () => true,
  reconnectAttempts: 10,
  reconnectInterval: 3000
});

useEffect(() => {
  const val = lastJsonMessage ? JSON.parse(lastJsonMessage as unknown as string) : {};
  if (val !== null && Object.keys(val).length > 0) {
    setMessageHistory((prev) => prev.concat(val));
  }
}, [lastJsonMessage, setMessageHistory]);

// Get Video assets after finishing;
useEffect(() => {
  messageHistory.forEach((msg) => {
    const { type } = msg;
    if (type === 'video.live_stream.recording' || type === 'video.live_stream.active') {
      const localPlaybackId = msg.data?.playback_ids[0];
      setPlaybackId(localPlaybackId);
    }
    setVideoType(type);
  });
}, [messageHistory, videoType]);

const connectionStatus = {
  [ReadyState.CONNECTING]: 'Connecting',
  [ReadyState.OPEN]: 'Open',
  [ReadyState.CLOSING]: 'Closing',
  [ReadyState.CLOSED]: 'Closed',
  [ReadyState.UNINSTANTIATED]: 'Uninstantiated'
}[readyState];

if (liveStreamId) {
  sendMessage(liveStreamId);
}

messageHistory.filter((msg) => {
  msg.data.id === liveStreamId;
});
In this case, when the app loads you should usually do a GET request to the server in order to load the most recent messages. Also, since you are talking about "thousands of messages", you should implement lazy loading, i.e. a paginator or infinite scroll.
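A minimal sketch of that approach (the GET /messages endpoint, its query parameters, and the pagination shape are assumptions for illustration):

// Hypothetical history loader: hydrate state from the server on mount,
// then page older messages in on demand (cursor-based pagination).
const loadHistory = async (cursor?: string) => {
  const qs = cursor ? `?limit=50&cursor=${cursor}` : '?limit=50';
  const res = await fetch(`/messages${qs}`);
  const { messages, nextCursor } = await res.json();
  // Prepend older messages; live websocket messages keep appending.
  setMessageHistory((prev) => [...messages, ...prev]);
  return nextCursor; // pass back in when the user scrolls further up
};

useEffect(() => {
  loadHistory(); // load the latest page once on mount
}, []);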

how to turn off buffering in react-player?

When I try to play the next video it does not start, and I guess the problem is buffering.
P.S. My urls are video.m3u8 files.
It works fine, but when I change the url nothing happens. I would like to know how I can stop the current video and load a new one when the url changes.
Here's my rewind function:
const showVideo = async () => {
  sessionStorage.setItem("sPlayerLinkId", params.id);
  const body = new FormData();
  const mac = window.TvipStb.getMainMacAddress();
  body.append("link_id", params.id);
  body.append("mac", mac);
  let response = await fetch(getVideo, {
    method: "POST",
    body: body,
  });
  let data = await response.json();
  if (data.error) {
    openDialog("crush");
    return 0;
  }
  if (_isMounted.current) setVideoLink(data.response.url);
};

var goToNext = function () {
  playerRef.current.seekTo(0, "seconds");
  setVideoLink(null);
  if (playerInfo.next_id) {
    params.id = playerInfo.next_id;
    showVideo();
  } else navigate(-1);
};
<ReactPlayer
  url={videoLink}
  playing={isPlaying}
  ref={playerRef}
  key={params.id}
  onProgress={() => {
    current();
  }}
  config={{
    file: {
      forceHLS: true,
    },
  }}
/>
I would suggest you build your own player from scratch using just React and a style library.
I had similar issues using react-player, and I had to resort to building my own custom player, in which I could ensure that buffering is handled the way I expect.
I handled buffering using the progress event as follows:
const onProgress = () => {
  if (!element.buffered) return;
  const bufferedEnd = element.buffered.end(element.buffered.length - 1);
  const duration = element.duration;
  if (bufferRef && duration > 0) {
    bufferRef.current!.style.width = (bufferedEnd / duration) * 100 + "%";
  }
};
element.buffered represents a collection of buffered time ranges.
element.buffered.end(element.buffered.length - 1) gets the time at the end of the last buffered range. With this value, I was able to compute the current buffer progress and update the buffer bar accordingly.
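For context, here is a self-contained version of that idea (a sketch assuming a plain <video> element and a div acting as the buffer bar, both held in refs):

const videoRef = useRef<HTMLVideoElement>(null);
const bufferRef = useRef<HTMLDivElement>(null);

useEffect(() => {
  const video = videoRef.current;
  if (!video) return;
  const onProgress = () => {
    if (!video.buffered.length) return;
    // End of the last buffered range, relative to the total duration.
    const bufferedEnd = video.buffered.end(video.buffered.length - 1);
    if (bufferRef.current && video.duration > 0) {
      bufferRef.current.style.width = `${(bufferedEnd / video.duration) * 100}%`;
    }
  };
  video.addEventListener('progress', onProgress);
  return () => video.removeEventListener('progress', onProgress);
}, []);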
I ended up writing an article that would help others learn to build an easily customizable player from scratch using just React and any style library (in this case Chakra UI was used).

Problem with Media toggle in MediaStream APIs

I'm trying to develop a video-conferencing application like Google Meet, and I'm facing the following problem:
Let's say I'm sharing my screen with the others, and now I want to turn on my camera. When I turn it on, I have to replace my video.srcObject from the DisplayMedia stream to the UserMedia stream, and that works just fine. But when I turn my camera off, my screen should switch back to the screen that was initially shared. For this I tried saving my DisplayMedia object before switching, but it doesn't work that way and my screen goes blank. For now I just create a new object whenever I switch. That is fine when switching back to the camera, but when I have to switch back to the screen, as in the example above, the user is asked again which screen to share, which is annoying.
Here is the code for my Camera:
const videoRef = useRef();

const userCameraCapture = async cameraIsOn => {
  if (cameraIsOn) {
    if (screenStream)
      removeVideoTracks();
    let stream = await navigator.mediaDevices.getUserMedia({ video: true });
    setCameraStream(stream);
    videoRef.current.srcObject = stream;
  } else {
    removeVideoTracks();
    setCameraStream(null);
    if (screenStream) {
      videoRef.current.srcObject = await navigator.mediaDevices.getDisplayMedia({ video: true });
    } else {
      videoRef.current.srcObject = null;
    }
  }
}
These functions are toggled by a child component, and they work properly apart from the problem above. The videoRef is a reference to a video tag.
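One direction worth trying (a minimal sketch, assuming the blank screen happens because the screen track gets stopped during the switch; a stopped MediaStreamTrack can never produce frames again, so the cached stream must keep its tracks alive; screenStreamRef and getScreenStream are hypothetical names):

const screenStreamRef = useRef(null);

const getScreenStream = async () => {
  // Reuse the cached screen stream while its video track is still live.
  // When toggling to the camera, only reassign srcObject (or toggle
  // track.enabled); do not call track.stop() on the screen tracks.
  const cached = screenStreamRef.current;
  if (cached && cached.getVideoTracks()[0]?.readyState === 'live') {
    return cached;
  }
  // Re-prompt the user only when there is no live cached stream.
  const stream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  screenStreamRef.current = stream;
  return stream;
};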

Multiple video call (n users) using Peerjs in React Native

I have an application in which I am trying to get video chatting to work in React Native.
I used packages like react-native-webrtc and react-native-peerjs, and created a PeerJS server using Node.js.
One-to-one video calling works fine with react-native-peerjs, but now I want more than 2 users to be connected, up to n users.
Is it possible to convert a one-to-one video call into a multi-user video call? Kindly let me know how multi-user video calling can be achieved using PeerJS and WebRTC.
Here is my code for a one-to-one video call:
Initialize webrtc and PeerJS:
const initialize = async () => {
  const isFrontCamera = true;
  const devices = await mediaDevices.enumerateDevices();
  const facing = isFrontCamera ? 'front' : 'environment';
  const videoSourceId = devices.find(
    (device: any) => device.kind === 'videoinput' && device.facing === facing,
  );
  const facingMode = isFrontCamera ? 'user' : 'environment';
  const constraints: MediaStreamConstraints = {
    audio: true,
    video: {
      mandatory: {
        minWidth: 1280,
        minHeight: 720,
        minFrameRate: 30,
      },
      facingMode,
      optional: videoSourceId ? [{ sourceId: videoSourceId }] : [],
    },
  };
  const newStream = await mediaDevices.getUserMedia(constraints);
  setLocalStream(newStream as MediaStream);
  console.log("************ Started ************");
  // const io = socketio(SERVER_URL);
  // io.connect();
  console.log(SERVER_URL);
  const io = socketio.connect(SERVER_URL, {
    reconnection: true,
    autoConnect: true,
    reconnectionDelay: 500,
    jsonp: false,
    reconnectionAttempts: Infinity,
    // transports: ['websocket']
  });
  io.on('connect', () => {
    console.log("----------- Socket Connected -----------");
    setSocket(io);
    io.emit('register', username);
  });
  io.on('users-change', (users: User[]) => {
    console.log("----------- New User - " + JSON.stringify(users) + " -----------");
    setUsers(users);
  });
  io.on('accepted-call', (user: User) => {
    setRemoteUser(user);
  });
  io.on('rejected-call', (user: User) => {
    setRemoteUser(null);
    setActiveCall(null);
    Alert.alert('Your call request rejected by ' + user?.username);
    navigate('Users');
  });
  io.on('not-available', (username: string) => {
    setRemoteUser(null);
    setActiveCall(null);
    Alert.alert(username + ' is not available right now');
    navigate('Users');
  });
  const peerServer = new Peer(undefined, {
    host: PEER_SERVER_HOST,
    path: PEER_SERVER_PATH,
    secure: false,
    port: PEER_SERVER_PORT,
    config: {
      iceServers: [
        {
          urls: [
            'stun:stun1.l.google.com:19302',
            'stun:stun2.l.google.com:19302',
          ],
        },
      ],
    },
  });
  peerServer.on('error', (err: Error) =>
    console.log('Peer server error', err),
  );
  peerServer.on('open', (peerId: string) => {
    setPeerServer(peerServer);
    setPeerId(peerId);
    io.emit('set-peer-id', peerId);
  });
  io.on('call', (user: User) => {
    peerServer.on('call', (call: any) => {
      // Alert.alert("PeerServer Call");
      setRemoteUser(user);
      Alert.alert(
        'New Call',
        'You have a new call from ' + user?.username,
        [
          {
            text: 'Reject',
            onPress: () => {
              io.emit('reject-call', user?.username);
              setRemoteUser(null);
              setActiveCall(null);
            },
            style: 'cancel',
          },
          {
            text: 'Accept',
            onPress: () => {
              io.emit('accept-call', user?.username);
              call.answer(newStream);
              setActiveCall(call);
              navigate('Call');
            },
          },
        ],
        { cancelable: false },
      );
      call.on('stream', (stream: MediaStream) => {
        setRemoteStream(stream);
      });
      call.on('close', () => {
        closeCall();
      });
      call.on('error', () => { });
    });
  });
};
When a user calls another user:
const call = (user: User) => {
  if (!peerServer || !socket) {
    Alert.alert('Peer server or socket connection not found');
    return;
  }
  if (!user.peerId) {
    Alert.alert('User not connected to peer server');
    return;
  }
  socket.emit('call', user.username);
  setRemoteUser(user);
  try {
    const call = peerServer.call(user.peerId, localStream);
    call.on(
      'stream',
      (stream: MediaStream) => {
        setActiveCall(call);
        setRemoteStream(stream);
      },
      (err: Error) => {
        console.error('Failed to get call stream', err);
      },
    );
  } catch (error) {
    console.log('Calling error', error);
  }
};
Now, how should I call multiple users from the code below, and how should multiple streams be handled?
const call = peerServer.call(user.peerId, localStream);
Is it possible to convert a one-to-one video call into a multi-user video call?
It's not possible to "convert" a one-to-one video call to "multiple" in a peer-to-peer architecture. In a p2p architecture with n participants, each participant will have a separate, one-to-one connection with each of the n-1 other participants.
I may possibly be misunderstanding your question, but if you're asking whether it's possible to establish n-1 connections for each participant, then the answer is yes. Here's how I would implement it:
Anytime a new participant joins a session, extract their peer information. This is the peerId provided by the peer.js library.
Next, let the rest of the participants know about the presence of this new user. For this, you'll share this new participant's name, peerID and any other metadata with the rest of the participants in the room. This can be done by the signalling logic that you have implemented using socket.io.
Now going forward, you have 2 options:
The new participant could initiate the one-to-one peer connection with others in the room, OR,
The rest of the participants could initiate a one-on-one connection with the new participant.
Personally I prefer the first. So continuing the process:
Using the same signalling logic via socket.io, the rest of the participants will let the new user know about their presence by providing their own peer information and other metadata.
Once the new participant gets everyone's peer information, they initiate a new peer connection with peer.call(peerId, stream), listen for the remote media via call.on('stream', callback), and start broadcasting their video.
On the recipient side, when a call is received along with the stream, you'll create a new video element in react-native and bind the received media stream to it. This means each participant will have n-1 video elements streaming the media of the n-1 other participants. The recipient also starts to broadcast their own video to the initiator of the call.
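As a rough illustration of that fan-out (peers, addRemoteStream, and removeRemoteStream are hypothetical; peers would come from your socket.io signalling layer):

// The new participant calls every existing participant once.
peers.forEach(({ peerId, username }) => {
  const call = myPeer.call(peerId, localStream); // one-to-one connection
  call.on('stream', (remoteStream: MediaStream) => {
    // One stream per participant: key it by username so the UI can
    // render n-1 video elements.
    addRemoteStream(username, remoteStream);
  });
  call.on('close', () => removeRemoteStream(username));
});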
Here's a tutorial showing how this can be done using vanilla JavaScript, along with the github repository with source code.
Now, to answer the next question:
Kindly let me know how multi-user video calling can be achieved using PeerJS and WebRTC.
This depends on the number of participants, where they lie geographically, browser/device limits, device computational power, and network bandwidth. So there are multiple factors involved which makes it tricky to give any specific number.
Browsers can place their own upper limits on the maximum number of connections possible, and the values may differ for Android and iOS. On Chrome, the theoretical maximum is 500 connections. If you're developing for Android, you may want to check here; I couldn't manage to find much info on this.
Most practical applications involving WebRTC don't rely on a mesh architecture. Common implementations use an SFU (Selective Forwarding Unit), which takes multiple media streams and forwards them. A slightly more sophisticated technique is the MCU (Multipoint Control Unit) architecture, which combines the media streams from multiple participants into a single one and sends that single stream to the rest of the participants.
I discuss this in some detail here:
https://egen.solutions/articles/how-to-build-your-own-clubhouse-part-2/#architectures-scaling-and-costs
Here's a nice article that explains the difference between SFU and MCU.

How can I send an image of a live video through a web socket and store it in a React state variable?

I'm using React and socket.io to build a chat room where people can live stream video.
I have a video player with a live stream that I want to pass through socket.io. When the stream is passed from the client to the server and back to the client, I want to store it in a state variable as an item in an array so I can display all live streams to a user.
Right now I am just trying to draw an image of the stream on a <canvas> and emit that.
I define each stream by the current user streaming, using their user.username.
Stream.js
function Stream(props) {
  const refVideo = useRef();
  const refCanvas = useRef();
  const [streams, setStreams] = useState([]);

  function startStream() {
    navigator.mediaDevices.getUserMedia({
      video: true,
      audio: true
    }).then((stream) => {
      // set source of video to stream
      refVideo.current.srcObject = stream;
      // define canvas context
      const context = refCanvas.current.getContext('2d');
      // emit canvas as data url every 100 milliseconds
      const interval = setInterval(() => {
        // draw image
        context.drawImage(refVideo.current, 0, 0, context.width, context.height);
        // define stream by username
        const streamObj = {
          image: refCanvas.current.toDataURL('image/jpeg'),
          username: props.user.username,
        };
        // send stream to server
        socket.emit('stream', streamObj);
      }, 100);
    });
  }

  useEffect(() => {
    // when stream is received from server
    socket.on('stream', function(data) {
      // find stream by username
      const foundStream = streams.find((stream) => {
        return stream.username === data.username;
      });
      // ONLY if stream was not found, add it to state
      if (!foundStream) {
        setStreams((streams) => [...streams, data]);
      }
    });
  }, []);

  return (
    <>
      <video ref={refVideo} />
      <canvas ref={refCanvas} />
    </>
  );
}
server.js
socket.on('stream', function(data) {
  // send stream back to room
  io.to(room).emit('stream', data);
});
My stream displays in the <video> and the object is emitted through the socket to the server and back to the client but for some reason, my stream is being added to my streams state array every time (every 100ms interval).
I check if the stream is already in the array using foundStream, so I'm not sure why that is happening. Am I missing something?
That is normal: io.to(room).emit sends to everyone in the room, including the sender. You should use broadcast.emit to send to everyone except the sender itself.
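A minimal sketch of that server-side change (socket.broadcast.to(room) emits to every socket in the room except the sender):

socket.on('stream', function (data) {
  // send the stream back to the room, excluding the sender
  socket.broadcast.to(room).emit('stream', data);
});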
