I'm trying to develop a video-conferencing application like Google Meet and I'm facing the following problem:
Let's say I'm sharing my screen with the others and now I want to turn on my camera. When I turn it on, I have to replace my video.srcObject from the DisplayMedia stream to the UserMedia stream, and that works just fine. But when I turn my camera off, my video should switch back to the screen that was initially shared. For this I tried saving my DisplayMedia stream before switching, but it doesn't work that way and my screen goes blank. For now I just create a new stream whenever I switch. That is fine when switching back to the camera, but when I have to switch back to the screen, as in the example above, the browser prompts the user again for which screen to share, which is annoying.
Here is the code for my Camera:
const videoRef = useRef();

const userCameraCapture = async cameraIsOn => {
  if (cameraIsOn) {
    // Tear down the screen share before switching the element to the camera.
    if (screenStream) removeVideoTracks();
    let stream = await navigator.mediaDevices.getUserMedia({ video: true });
    setCameraStream(stream);
    videoRef.current.srcObject = stream;
  } else {
    removeVideoTracks();
    setCameraStream(null);
    if (screenStream) {
      // Requesting a fresh display stream here is what triggers the extra screen-picker prompt.
      videoRef.current.srcObject = await navigator.mediaDevices.getDisplayMedia({ video: true });
    } else {
      videoRef.current.srcObject = null;
    }
  }
}
The screen-sharing toggle follows the same pattern, using getDisplayMedia and saving the result with setScreenStream.
These functions are toggled by a child component and work properly apart from the problem above. The videoRef is a reference to a video tag.
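For reference, a minimal sketch of the pattern I'm trying to achieve, assuming the cached screen stream's tracks are never stopped while the camera is on (stopping them ends the capture for good); screenStreamRef, toggleCamera, and startScreenShare are hypothetical names, not my actual code:

// Sketch: cache the screen stream once and reuse it, so getDisplayMedia
// is only ever called one time. Reassigning srcObject away from the stream
// does NOT stop the capture; only track.stop() or the user ending the share does.
const screenStreamRef = useRef(null); // hypothetical ref holding the cached stream

const startScreenShare = async () => {
  screenStreamRef.current = await navigator.mediaDevices.getDisplayMedia({ video: true });
  videoRef.current.srcObject = screenStreamRef.current;
};

const toggleCamera = async cameraIsOn => {
  if (cameraIsOn) {
    // The camera replaces the screen in the element only; the screen capture keeps running.
    videoRef.current.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  } else {
    // Reuse the cached screen stream instead of requesting a new one — no re-prompt.
    videoRef.current.srcObject = screenStreamRef.current;
  }
};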
When I try to play the next video, it does not start, and I guess the problem is buffering.
P.S. my URLs are video.m3u8 files.
The player works fine otherwise, but when I change the URL nothing happens. I would like to know how I can stop the current video and load a new one when the URL changes.
Here's my rewind function:
const showVideo = async () => {
  sessionStorage.setItem("sPlayerLinkId", params.id);
  const body = new FormData();
  const mac = window.TvipStb.getMainMacAddress();
  body.append("link_id", params.id);
  body.append("mac", mac);
  let response = await fetch(getVideo, {
    method: "POST",
    body: body,
  });
  let data = await response.json();
  if (data.error) {
    openDialog("crush");
    return 0;
  }
  if (_isMounted.current) setVideoLink(data.response.url);
};

var goToNext = function () {
  playerRef.current.seekTo(0, "seconds");
  setVideoLink(null);
  if (playerInfo.next_id) {
    params.id = playerInfo.next_id;
    showVideo();
  } else navigate(-1);
};
<ReactPlayer
  url={videoLink}
  playing={isPlaying}
  ref={playerRef}
  key={params.id}
  onProgress={() => {
    current();
  }}
  config={{
    file: {
      forceHLS: true,
    },
  }}
/>
I would suggest you build your own player from scratch using just React and a style library.
I had similar issues with react-player and had to resort to building my own custom player, in which I could ensure that buffering is handled the way I expect.
I handled buffering using the progress event as follows:
const onProgress = () => {
  // element.buffered (a TimeRanges object) is always defined;
  // bail out only when nothing has been buffered yet.
  if (!element.buffered || element.buffered.length === 0) return;
  // End of the last buffered range, in seconds.
  const bufferedEnd = element.buffered.end(element.buffered.length - 1);
  const duration = element.duration;
  if (bufferRef.current && duration > 0) {
    bufferRef.current.style.width = (bufferedEnd / duration) * 100 + "%";
  }
};
element.buffered represents a collection of buffered time ranges.
element.buffered.end(element.buffered.length - 1) gets the time at the end of the last buffered range. With this value, I was able to compute the current buffer progress and update the buffer bar accordingly.
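For context, here is how the handler might be wired up; element and bufferRef are the names from the snippet above (element being the underlying video element, bufferRef a ref to the progress-bar div), not a library API:

// Sketch: the browser fires 'progress' as it fetches more data,
// which is the natural moment to update the buffer bar.
element.addEventListener('progress', onProgress);

// And clean up when the player unmounts:
// element.removeEventListener('progress', onProgress);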
I ended up writing an article that should help others learn to build an easily customizable player from scratch using just React and any style library (in this case Chakra UI was used).
I also posted this topic in the Zoom Community and still got no replies.
[Situation]
I've been following the documentation to implement the screen-sharing feature with React.
However, it only shows the shared screen to the user who started sharing; the other users don't see it.
When sharing is stopped, I get this error (there's no error when sharing starts):
TypeError: Cannot read properties of undefined (reading ‘contextGL’)
at c2f9e9ce-c03d-429c-a087-fb847ca77016:1:111290
at c2f9e9ce-c03d-429c-a087-fb847ca77016:1:111392
Actual Code
// For rendering a received screen-share stream
client.on('active-share-change', async payload => {
  const target = client.getAllUser().find(item => item.userId === payload.userId)
  // const video = document.querySelector(`.user_media.video[data-video_id="${target?.displayName}"]`)
  // Also tried with the video element above (a reference to the sharing user's video tag), but the result was the same.
  const testVideo = document.querySelector('.share-canvas#test_video')
  if (payload.state === 'Active') {
    console.log('STARTED!')
    console.log(payload.userId)
    try {
      await stream.startShareView(testVideo, payload.userId)
    } catch (error) {
      console.log(error)
    }
  } else if (payload.state === 'Inactive') {
    try {
      await stream.stopShareView()
    } catch (error) {
      console.log(error)
    }
  }
})
// ~~~~~~~~ different file ~~~~~~~~~
// For starting screen share
const startShareScreen = () => {
  const testVideo = document.querySelector('.share-canvas#test_video')
  stream.startShareScreen(testVideo)
}
Please Help!
Check how you are using the canvas: you can remove the canvas element and create it again, or clear the canvas before loading the share view using clearRect (https://www.w3schools.com/tags/canvas_clearrect.asp).
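A minimal sketch of the clearing approach, reusing the selector from the question; note this assumes the canvas holds a plain 2D context — if the SDK has already bound it to WebGL (the 'contextGL' in the error suggests it might have), removing and recreating the element is the safer route:

// Sketch: wipe the share canvas before starting a new share view.
const canvas = document.querySelector('.share-canvas#test_video');
const ctx = canvas.getContext('2d');
ctx.clearRect(0, 0, canvas.width, canvas.height);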
I have a working WebRTC connection between two people using PeerJS that captures a stream from a user video element and sends it to the other person. The video capture is as follows (working with React in TypeScript):
useEffect(() => {
  const srcVideo = document.getElementById('normalVideo');
  const sinkVideo = document.getElementById('sinkVideo');
  if (srcVideo === null) return;
  const setSinkSrc = () => {
    let stream;
    const fps = 0; // Setting to zero means frames are captured only when requestFrame() is called.
    // @ts-ignore
    stream = srcVideo.captureStream(fps);
    // @ts-ignore
    sinkVideo.srcObject = stream;
    setStreamOut(stream);
  };
  srcVideo.addEventListener('loadedmetadata', setSinkSrc);
  return () => {
    srcVideo.removeEventListener('loadedmetadata', setSinkSrc);
  };
}, [vidSrc]);
and sends it out with PeerJS:
mySocket.on('user-connected', (theirPeerID: string, theirSocketID: string) => {
  if (streamOut) {
    const outConnection: Peer.MediaConnection = myPeer.call(theirPeerID, streamOut);
    // This is weird.
    // navigator.mediaDevices.getUserMedia({audio: true, video: true}).then(streamOut => myPeer.call(theirPeerID, streamOut));
  }
});
And the receiving user answers the call and connects the stream:
useEffect(() => {
  myPeer.on('call', (hostMediaConnection: Peer.MediaConnection) => {
    const sinkVideo = document.getElementById('sinkVideo');
    hostMediaConnection.answer();
    hostMediaConnection.on('stream', (hostStream: MediaStream) => {
      console.log(hostStream.getAudioTracks().length);
      console.log(hostStream.getVideoTracks().length);
      if (sinkVideo === null || sinkVideo === undefined) {
        console.log("Sink video either undefined or null");
        return;
      }
      // Do not use URL.createObjectURL();
      // @ts-ignore
      sinkVideo.srcObject = hostStream;
      // @ts-ignore
      sinkVideo.addEventListener('loadedmetadata', () => {
        // @ts-ignore
        sinkVideo.play();
      });
    });
    hostMediaConnection.on("error", (err) => {
      console.log(err.type, "%cMediaConnectionError", "color:green;", err);
    });
  });
}, []);
On a local stream, the video works just fine, with the smaller element's data coming from the media stream: sinkVideo.srcObject = stream;
However, when this runs, there is just a black screen on the consumer side. Audio is streamed, and can be heard, but no video is ever shown. Going into chrome://webrtc-internals, I correctly see two RTCMediaConnections: outbound data from the source, and inbound data for the sink. Audio is being transmitted, video is being transmitted, video frames are being decoded, and yet nothing. The screen is black for the consumer, where it should not be.
So, my question is: where am I going wrong? Video is apparently being sent out, PeerJS makes a successful connection from the source to the sink, and the sink is decoding video. Audio has absolutely no problem at any point getting sent out or being received, decoded, and heard by the sink.
I have a comment above a line of commented-out code.
// This is weird.
// navigator.mediaDevices.getUserMedia({audio: true, video: true}).then(streamOut => myPeer.call(theirPeerID, streamOut));
Because when I send out my audio and video as the media, the stream is processed just fine, and the sink gets the video. This is one of the reasons I know a successful WebRTC connection is being made: when I stream the user's webcam, data is sent.
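One hypothesis worth checking, going purely by the comment in the capture snippet itself: with fps set to 0, frames are only produced when requestFrame() is explicitly called, and nothing in the code ever calls it, which would leave the video track silent while audio flows normally. A sketch of capturing at a fixed rate instead (30 is an arbitrary value):

// Sketch: pass a real frame rate so frames flow without manual requestFrame() calls.
// @ts-ignore — captureStream is missing from the standard DOM typings
const stream = srcVideo.captureStream(30);
sinkVideo.srcObject = stream;
setStreamOut(stream);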
I'm trying to first open multiple browsers, each with one page, by re-using a page object.
I then perform some other actions.
How can I then loop through these open browser windows after leaving the current page object?
I was thinking of keeping an array of page objects, or pulling some ID for each object and getting back to it somehow. Thank you.
const puppeteer = require('puppeteer');

async function run() {
  const url = 'http://www.example.com';
  const browser = await puppeteer.launch({ headless: false });
  const defaultContext = await browser.defaultBrowserContext();
  const browserCount = 10;
  for (let i = 0; i < browserCount; i++) {
    const newContext = await browser.createIncognitoBrowserContext();
    const page = await newContext.newPage();
    await page.goto(url);
  }
  // Perform some other tasks
  // Now loop back through the 10 open browser pages.
  // Go to the first open browser window, pull data from the page
  // Go to the next open browser window, pull data from page
}

run();
As I said in the comment section, you don't store these opened pages anywhere, so you can't really go back to any of them.
The loop should look more like this:
let pages = [];
for (let i = 0; i < browserCount; i++) {
  const newContext = await browser.createIncognitoBrowserContext();
  pages.push(await newContext.newPage());
  await pages[i].goto(url);
}
// now pages.length === 10, so you can work with any of these pages
// e.g. clicking on some element on the second page:
await pages[1].click('#loginForm');
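And a sketch of looping back through the stored pages to pull data, which is what the question asks for; page.title() is just a stand-in for whatever extraction you actually need:

// Sketch: revisit every stored page and extract something from it.
for (const page of pages) {
  await page.bringToFront();        // focus this window (optional)
  const title = await page.title(); // stand-in for the real data you want to pull
  console.log(title);
}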
I'm using React and socket.io to build a chat room where people can live stream video.
I have a video player with a live stream that I want to pass through socket.io. When the stream is passed from the client to the server and back to the client, I want to store it in a state variable as an item in an array, so I can display all live streams to a user.
Right now I am just trying to draw an image of the stream on a <canvas> and emit that.
I define each stream by the current user streaming, using their user.username.
Stream.js
function Stream(props) {
  const refVideo = useRef();
  const refCanvas = useRef();
  const [streams, setStreams] = useState([]);

  function startStream() {
    navigator.mediaDevices.getUserMedia({
      video: true,
      audio: true
    }).then((stream) => {
      // set source of video to stream
      refVideo.current.srcObject = stream;
      // define canvas context
      const context = refCanvas.current.getContext('2d');
      // emit canvas as data url every 100 milliseconds
      const interval = setInterval(() => {
        // draw image (the canvas element, not the context, carries width/height)
        context.drawImage(refVideo.current, 0, 0, refCanvas.current.width, refCanvas.current.height);
        // define stream by username
        const streamObj = {
          image: refCanvas.current.toDataURL('image/jpeg'),
          username: props.user.username,
        };
        // send stream to server
        socket.emit('stream', streamObj);
      }, 100);
    });
  }

  useEffect(() => {
    // when stream is received from server
    socket.on('stream', function(data) {
      // find stream by username
      const foundStream = streams.find((stream) => {
        return stream.username === data.username;
      });
      // ONLY if stream was not found, add it to state
      if (!foundStream) {
        setStreams((streams) => [...streams, data]);
      }
    });
  }, []);

  return (
    <>
      <video ref={refVideo} />
      <canvas ref={refCanvas} />
    </>
  );
}
server.js
socket.on('stream', function(data) {
  // send stream back to room
  io.to(room).emit('stream', data);
});
My stream displays in the <video> element, and the object is emitted through the socket to the server and back to the client, but for some reason my stream is being added to my streams state array every time (every 100 ms interval).
I check if the stream is already in the array using foundStream, so I'm not sure why that is happening. Am I missing something?
That is normal: emit sends to everyone, including the sender. You should use broadcast.emit to send to everyone except the sender.
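A minimal sketch of that change on the server, assuming the same room variable as in the question's server.js:

socket.on('stream', function(data) {
  // relay the frame to everyone in the room EXCEPT the sender
  socket.broadcast.to(room).emit('stream', data);
});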