I am using OpenTok to stream a video file as a MediaStreamTrack.
Sample code:
<video id="myPreview" controls ></video>
const mediaStream = $('#myPreview')[0].captureStream()
OT.initPublisher(el, {
style: {
nameDisplayMode: "on",
buttonDisplayMode: "off",
},
// resolution: "320x240",
// frameRate: 20,
audioSource: mediaStream.getAudioTracks()[0],
videoSource: mediaStream.getVideoTracks()[0],
});
It works fine, but when there are around 20 viewers the video becomes pixelated.
More context:
I upload a video and use the blob to play it in the preview.
I'm open to better solutions:
a. Upload the video to a server and stream an edited, moderate-quality video (not sure how to do this) in myPreview.
b. Upload the video to a 3rd-party service and load it via their player.
Thanks in advance.
Make sure you have enough bandwidth for both the publisher and the subscribers. If one subscriber does not have enough bandwidth to receive VGA, for instance, it will receive a lower-quality stream, but all the others should be fine. If the publisher does not have good upload bandwidth, it will be difficult for subscribers to receive good quality.
I hope this helps.
You may want to set minVideoBitrate on your publisher, which forces a minimum bitrate. This is an integer value in bits per second. If a participant cannot handle that minimum bitrate, they will drop to audio-only.
OT.initPublisher(el, {
style: {
nameDisplayMode: "on",
buttonDisplayMode: "off",
},
minVideoBitrate: 2500000,
audioSource: mediaStream.getAudioTracks()[0],
videoSource: mediaStream.getVideoTracks()[0],
});
I'm using discord.js and ytdl-core to play audio from YouTube over discord voice chat. I'm trying to make a 'loop' command that will start looping the song so that when the song ends, it automatically plays again.
I've tried running commands like
// Create audio resource
var resource = createAudioResource(stream);
// Subscribe audio player to voice connection
connection.subscribe(player);
// Play audio
player.play(resource);
player.on('idle', () => {
player.play(resource);
});
But I keep getting
Error: Cannot play a resource that has already ended.
I've tried recreating the audio resource with createAudioResource and playing that, but I get
(node:93492) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [PassThrough]. Use emitter.setMaxListeners() to increase limit
Does anybody have any insights on how to loop an audio player bot?
I think I solved this (there may still be issues with memory leaks, but I haven't been receiving any errors).
I ended up having to re-download the stream, recreate the audio resource from the new stream, and then play it. So my code looks something like this:
player.on('idle', () => {
stream = ytdl(args[0], {filter: 'audioonly'});
resource = createAudioResource(stream);
player.play(resource);
});
EDIT: I noticed a potential issue with multiple loop commands adding more and more event listeners, so I added this check before attaching the listener:
if (player.listenerCount('idle') > 0) return; // don't attach a second 'idle' listener
In my Alexa skill I am trying to loop a 5-second audio clip for an hour, but I cannot find a way to do it.
Does anyone know how to perform this action?
const LoopAudioHandler = {
canHandle(handlerInput) {
const request = handlerInput.requestEnvelope.request;
return request.type === 'IntentRequest'
&& request.intent.name === 'LoopAudio';
},
handle(handlerInput) {
const audio = "<audio src='https://audio-alexa-ang.s3.amazonaws.com/perro-ladrando-v2.mp3' />"
return handlerInput.responseBuilder
.speak(audio)
.reprompt(HELP_REPROMPT)
.getResponse();
},
};
The result is that the audio plays only once, and I need it to loop for an hour.
Unfortunately, audio tags have some limitations (docs). One of them is the maximum total audio length in one response, which is 240 seconds. Another limitation is the number of audio tags in one response, which is five.
In your case, you could merge your clips into longer files of about 48 seconds each, so that five audio tags play the full 240 seconds, or just make a single 240-second audio file and use that alone. It's up to you.
There is another way, though. To play up to an hour of audio you would have to go another route and use the audio player interface, which acts a little differently from a regular skill, but it would allow you to loop audio indefinitely. An example skill using the audio player interface can be found here.
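For reference, a skill using the audio player interface returns an AudioPlayer.Play directive instead of SSML, and looping is achieved by enqueueing the same stream again when the AudioPlayer.PlaybackNearlyFinished request arrives. A sketch of the directive, reusing the MP3 URL from the question (the token value is a placeholder):

```json
{
  "version": "1.0",
  "response": {
    "shouldEndSession": true,
    "directives": [
      {
        "type": "AudioPlayer.Play",
        "playBehavior": "REPLACE_ALL",
        "audioItem": {
          "stream": {
            "url": "https://audio-alexa-ang.s3.amazonaws.com/perro-ladrando-v2.mp3",
            "token": "loop-token-0",
            "offsetInMilliseconds": 0
          }
        }
      }
    ]
  }
}
```

The 240-second SSML limit does not apply here, since each Play directive streams a full audio file.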
I am a newbie to audio/video recording. The script works well for my webcam and audio recorder. However, I would like to know how to implement it like the available extension, so that I can record a tab and all the audio involved. Thanks in advance. I'm currently using v5.4.0.
Here is an open-source Chrome extension that supports tab, screen, and app-window recording:
https://github.com/muaz-khan/Chrome-Extensions/tree/master/screen-recording
You can use the tabCapture API to capture a MediaStream object; then you can record the resulting stream using the MediaRecorder API or RecordRTC.
RecordRTC can record the following kinds of streams:
Stream captured from a webcam or microphone
Stream captured using tabCapture API
Stream captured using desktopCapture API
Stream captured from <canvas> or <video> elements using captureStream API
Streams generated by WebAudio API
e.g.
var capturedStream = videoElement.captureStream();
var recorder = RecordRTC(capturedStream, {
    type: 'video'
});
Or:
var recorder = RecordRTC(tabCaptureStream, {
type: 'video'
});
Simply make sure that you're getting a MediaStream object from one of the APIs above; then you can use RecordRTC to record that stream.
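Note that tabCapture is only available inside an extension that declares the permission; a minimal manifest sketch (assuming Manifest V2, as used by the linked extension; names are placeholders):

```json
{
  "name": "Tab Recorder (example)",
  "version": "1.0",
  "manifest_version": 2,
  "permissions": ["tabCapture"],
  "browser_action": { "default_title": "Record this tab" },
  "background": { "scripts": ["background.js"] }
}
```

chrome.tabCapture.capture must also be invoked from an extension page in response to a user gesture, such as clicking the browser action.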
Regarding "replacing the video track with a secondary camera track or screen track": you can use the addTrack, removeTrack, and replaceTrack methods. However, I'm not sure whether the MediaRecorder API can record a replaced track:
// using Firefox
theStreamYouAreRecording.replaceTrack( screenTrack );
// using Chrome or Firefox
theStreamYouAreRecording.addTrack ( screenTrack );
So you must either record camera or screen. Do not replace tracks.
I am a Java developer and I have a couple of questions related to the Google Speech API v1beta1.
Question 1 (syncrecognize case):
I tried uploading (through GCS) a small audio file (less than one minute long) to the Google Speech API and it works, but the confidence of the output is only 0.32497215, i.e. the result is not exactly the same as my audio input.
How can I increase the confidence of the output?
Question 2 (asyncrecognize case):
I tried a big audio file (more than one minute long). In this case I used the API call:
https://speech.googleapis.com/v1beta1/speech:asyncrecognize?key=XXXXXXXXXXXXXXXXXXXX
with the payload:
{"config":{"encoding":"LINEAR16","sample_rate":16000},"audio":{"uri":"gs://<bucketName>/<objectName>"}}
Here I got JSON output like:
{"name": "57...........................95"}
After getting this output I made a new API call (the operations interface) with this name value:
https://speech.googleapis.com/v1beta1/operations/57.................................95?key=XXXXXXXXXXXXXXXXX
I got the output
{
"name": "57....................................95",
"done": true,
"response": {
"@type": "type.googleapis.com/google.cloud.speech.v1beta1.AsyncRecognizeResponse"
}
}
How do I proceed with this value? I need to get the recognized speech text.
Please help me fix these issues. Thanks in advance.
Ideas for Question 1:
You should give more details in the RecognitionConfig object, for example specify the languageCode and add hints via the SpeechContext object.
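A sketch of a fuller request body along those lines, assuming English audio (field names follow the v1beta1 REST API; the hint phrases and GCS names are placeholders):

```json
{
  "config": {
    "encoding": "LINEAR16",
    "sample_rate": 16000,
    "language_code": "en-US",
    "speech_context": {
      "phrases": ["example hint phrase", "another domain term"]
    }
  },
  "audio": {
    "uri": "gs://your-bucket/your-object"
  }
}
```

Hints in speech_context bias recognition toward domain vocabulary, which typically raises confidence for audio containing those terms.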
Answer to Question 2:
Check the sample rate of the audio file; you must be sure it is equal to the rate you gave in the request. You can check it, e.g., with soxi audio_file.flac (sox is needed for this).
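For reference, once the operation completes successfully, the operations response should contain a results array alongside the @type field, roughly of this shape (transcript and confidence values here are placeholders):

```json
{
  "name": "57....................................95",
  "done": true,
  "response": {
    "@type": "type.googleapis.com/google.cloud.speech.v1beta1.AsyncRecognizeResponse",
    "results": [
      {
        "alternatives": [
          { "transcript": "the recognized text", "confidence": 0.9 }
        ]
      }
    ]
  }
}
```

A response that is done but has no results usually means the API could not recognize anything, which is consistent with a sample-rate mismatch.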
I want to insert a timeline item with a video attachment, and if the user selects a specific menu item, Glass should play the video. I'm doing it all from a .NET app like this; please correct me if I'm doing it wrong.
TimelineItem item = new TimelineItem();
item.MenuItems.Insert(0, new MenuItem() { Action = "what is the action to use?", ... });
request = Service.Timeline.Insert(item, attachment, contentType);
request.Upload();
I would like to know: do I need a menu item, and if yes, what action should I use?
Currently I'm sending the video attachment, but there is no way to play the video.
Any help is greatly appreciated.
You don't need to specify any menuItems, but your timeline item should not contain HTML content.
Make sure that your video is in a supported format: once it's inserted, and Glass has synced and fully downloaded the attached video, it should start playing as soon as you land on the item in your timeline.
This works using the QuickStart project for Java (mirror-java-starter-demo):
https://github.com/googleglass/mirror-quickstart-java
Replace the lines near line 119 in MainServlet.java with this:
URL url = new URL(req.getParameter("imageUrl"));
String contentType = req.getParameter("contentType");
// Override the submitted image with a local test video
url = new URL("http://localhost:8888/static/videos/video.mp4");
contentType = "video/mp4";
InputStream temp = url.openStream();
MirrorClient.insertTimelineItem(credential, timelineItem, contentType, temp);
Then run the project and click the "A Picture" button to upload a video named video.mp4 from a new folder called videos inside static. I used a 10-second clip I recorded with Glass (6.30 MB).
Note that when running with App Engine 1.7.6 on a Windows machine I got an error when uploading, but changing to 1.8.0 made the issue go away.
Depending on your network connection it might take a little bit for the video to show in your timeline, but mine plays.