In my Alexa skill I am trying to play a 5-second audio clip on a loop for an hour, but I cannot find a way to do it.
Does anyone know how to perform this action?
const LoopAudioHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest'
      && request.intent.name === 'LoopAudio';
  },
  handle(handlerInput) {
    const audio = "<audio src='https://audio-alexa-ang.s3.amazonaws.com/perro-ladrando-v2.mp3' />";
    return handlerInput.responseBuilder
      .speak(audio)
      .reprompt(HELP_REPROMPT)
      .getResponse();
  },
};
The result is that the audio plays only once, and I need it to loop for an hour.
Unfortunately, SSML audio tags have some limitations (docs). One of them is the maximum total duration of audio in a single response, which is 240 seconds. Another is the number of audio tags allowed in one response, which is 5.
In your case, you could merge your clips into longer files of roughly 48 seconds each, so that five audio tags together play 240 seconds of audio, or simply make a single 240-second file and use that on its own; it's up to you.
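For illustration, a sketch of what the handler's speak output might look like with five merged clips (the S3 file names below are placeholders, not real uploads):

handle(handlerInput) {
  // Hypothetical: five ~48-second merged clips referenced in a single response.
  // The S3 file names are placeholders, not real uploads.
  const clips = [1, 2, 3, 4, 5].map(
    n => `<audio src='https://audio-alexa-ang.s3.amazonaws.com/loop-48s-part${n}.mp3' />`
  );
  return handlerInput.responseBuilder
    .speak(clips.join('')) // 5 tags x 48 s = 240 s, the per-response maximum
    .getResponse();
},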
There is another way, though. To play up to an hour of audio you would have to go a different route and use the AudioPlayer interface, which behaves a little differently from a regular skill response, but it allows you to loop audio indefinitely. An example skill using the AudioPlayer interface can be found here.
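As a rough sketch of that route with the ASK SDK v2 (not taken from the linked example; the tokens are placeholders), you start playback with a play directive and then re-enqueue the same stream each time it is about to finish:

// Hypothetical sketch of looping via the AudioPlayer interface.
// The tokens are placeholders.
const STREAM_URL = 'https://audio-alexa-ang.s3.amazonaws.com/perro-ladrando-v2.mp3';

const PlayAudioHandler = {
  canHandle(handlerInput) {
    const request = handlerInput.requestEnvelope.request;
    return request.type === 'IntentRequest' && request.intent.name === 'LoopAudio';
  },
  handle(handlerInput) {
    // Start playback; the audio keeps playing after the skill session ends.
    return handlerInput.responseBuilder
      .addAudioPlayerPlayDirective('REPLACE_ALL', STREAM_URL, 'loop-token', 0)
      .getResponse();
  },
};

const PlaybackNearlyFinishedHandler = {
  canHandle(handlerInput) {
    return handlerInput.requestEnvelope.request.type === 'AudioPlayer.PlaybackNearlyFinished';
  },
  handle(handlerInput) {
    const currentToken = handlerInput.requestEnvelope.request.token;
    // Enqueue the same stream again so it keeps looping.
    return handlerInput.responseBuilder
      .addAudioPlayerPlayDirective('ENQUEUE', STREAM_URL, String(Date.now()), 0, currentToken)
      .getResponse();
  },
};

The loop here is effectively endless; if you only want an hour, you would have to track elapsed time (or enqueue a fixed number of times) and stop issuing the enqueue directive after that.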
I'm using discord.js and ytdl-core to play audio from YouTube over discord voice chat. I'm trying to make a 'loop' command that will start looping the song so that when the song ends, it automatically plays again.
I've tried running commands like
// Create audio resource
var resource = createAudioResource(stream);
// Subscribe audio player to voice connection
connection.subscribe(player);
// Play audio
player.play(resource);
player.on('idle', () => {
  player.play(resource);
});
But I keep getting
Error: Cannot play a resource that has already ended.
I've tried re-creating the audio resource with createAudioResource and playing that, but I get
(node:93492) MaxListenersExceededWarning: Possible EventEmitter memory leak detected. 11 close listeners added to [PassThrough]. Use emitter.setMaxListeners() to increase limit
Does anybody have any insights on how to loop an audio player bot?
I think I solved this (there still may be issues with memory leaks, but I haven't been receiving any errors).
I ended up having to re-download the stream, recreate the audio resource from the new stream, and then play it. So my code looks something like this:
player.on('idle', () => {
  stream = ytdl(args[0], { filter: 'audioonly' });
  resource = createAudioResource(stream);
  player.play(resource);
});
EDIT: I noticed a potential issue with multiple loop commands adding more and more event listeners, so I added this bit of code before the event listener call.
if (play.player['_events']['idle']) return;
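A more idiomatic way to express the same guard (a sketch, assuming the same player, stream and resource variables as above) is to ask the emitter itself instead of reaching into its private _events object:

// Sketch: skip registering the loop handler if one is already attached.
// AudioPlayer extends Node's EventEmitter, so listenerCount() is available.
if (player.listenerCount('idle') > 0) return;

player.on('idle', () => {
  stream = ytdl(args[0], { filter: 'audioonly' });
  resource = createAudioResource(stream);
  player.play(resource);
});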
I'm hosting a video website and ran into a problem I can't really explain. I host the videos on Cloudflare, which provides an HLS stream URL.
On non-Apple devices I use the hls.js library to stream these videos; this is done using the (React) code below:
useEffect(() => {
  let video = document.getElementById(video_id);
  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    video.src = hlsUrl;
    return (() => {
      clearWatchTimeout();
      video.src = null;
    });
  } else if (Hls.isSupported()) {
    let hls = new Hls({});
    hls.attachMedia(video);
    hls.on(Hls.Events.MEDIA_ATTACHED, function () {
      hls.loadSource(hlsUrl);
      hls.on(Hls.Events.MANIFEST_PARSED, function (event, data) {
        console.log("manifest loaded, found " + data.levels.length + " quality level");
      });
    });
    return (() => {
      clearWatchTimeout();
      hls.detachMedia();
      hls.destroy();
    });
  } else {
    console.log("This device is not capable of playing hls streams");
  }
});
The video player works, but somehow the quality drops with the hls.js implementation. This can most easily be noticed on Apple devices by playing the video in Safari (quality remains ok) compared to playing it in Chrome (quality deteriorates). An example video, which should have poor quality, can be seen by following the link below:
https://www.etudor.nl/chapter/1008/1216/1196/Getal-en-Ruimte-vwo-B-deel-4-(11e-editie)-Hoofdstuk-13
I have turned on the hls.js library's debug mode, but I haven't found anything weird in the logs.
EDIT:
Just as an extra tidbit of information: if I manually raise the quality to a higher level (say level 4), the video uses that quality and runs perfectly, so bandwidth really does not seem to be the issue.
EDIT 2:
Another bit of information: if I log the bandwidth estimate over time, it seems to decrease every time the quality level is decreased. If I manually force the quality up to level 4, the bandwidth estimate immediately increases.
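For reference, the manual override and the logging use the standard hls.js API, roughly like this (assuming the hls instance from the snippet above; the 5-second interval is arbitrary):

// Force level 4; setting currentLevel to a fixed index disables automatic
// ABR switching (setting it back to -1 re-enables auto selection).
hls.currentLevel = 4;

// Log hls.js's internal bandwidth estimate over time.
setInterval(() => {
  console.log('bandwidth estimate (bps):', hls.bandwidthEstimate);
}, 5000);

hls.js also exposes an abrEwmaDefaultEstimate config option that seeds this estimator, which may be worth experimenting with.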
Video.js supports adding multiple audio tracks to a video file. When I add audio tracks to my player, I can switch between them. However, the corresponding audio doesn't play. How can I get the right audio to play when I change tracks in the player? Thank you.
My code looks like:
...
componentDidMount() {
  this.player = videojs(this.videoNode, this.props.videoJsOptions);
  this.props.audioTracks.forEach(audio => {
    // Create a track object.
    var track = new videojs.AudioTrack(audio);
    // Add the track to the player's audio track list.
    this.player.audioTracks().addTrack(track);
  });
}
...
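For debugging, the audio track list fires a change event whenever a track's enabled flag flips, so you can at least confirm the switch is being registered (a sketch; what to do inside the handler, e.g. actually swapping the audio source, is left open):

// Sketch: log which audio track becomes enabled when the user switches
// tracks in the player UI.
var audioTrackList = this.player.audioTracks();
audioTrackList.addEventListener('change', () => {
  for (var i = 0; i < audioTrackList.length; i++) {
    if (audioTrackList[i].enabled) {
      console.log('enabled audio track:', audioTrackList[i].label);
    }
  }
});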
I am using OpenTok to stream a video file's MediaStreamTracks.
Sample code:
<video id="myPreview" controls ></video>
const mediaStream = $('#myPreview')[0].captureStream()
OT.initPublisher(el, {
  style: {
    nameDisplayMode: "on",
    buttonDisplayMode: "off",
  },
  // resolution: "320x240",
  // frameRate: 20,
  audioSource: mediaStream.getAudioTracks()[0],
  videoSource: mediaStream.getVideoTracks()[0],
});
It works fine, but the problem is that when there are around 20 viewers, the video quality becomes pixelated.
More context:
I upload a video and use the blob to play the video in the preview.
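For reference, the preview is wired up roughly like this (a sketch; the file input id is a placeholder):

// Sketch: play the uploaded file locally via a blob URL, then capture its
// stream for the OpenTok publisher. The #videoUpload id is a placeholder.
const file = document.querySelector('#videoUpload').files[0];
const preview = document.getElementById('myPreview');
preview.src = URL.createObjectURL(file); // blob: URL for local playback
preview.play();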
I'm open to better solutions:
a. Upload the video to a server and stream a re-encoded, moderate-quality version (not sure exactly what is needed here) in myPreview.
b. Upload the video to a 3rd-party service and load the video via their player.
Thanks in advance.
Make sure you have enough bandwidth for the publisher and the subscribers. If one subscriber does not have enough bandwidth to receive VGA, for instance, it will receive a lower-quality stream, but all the others should be fine. If the publisher does not have good upload bandwidth, then it will be difficult for the subscribers to receive good quality.
I hope this helps.
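If you want to verify this on the receiving side, the Subscriber object returned by session.subscribe() has a getStats() method you can poll (a sketch; the 5-second interval is arbitrary):

// Sketch: poll a subscriber's stats to watch bytes received and packet loss.
// `subscriber` is assumed to be the object returned by session.subscribe().
setInterval(() => {
  subscriber.getStats((error, stats) => {
    if (error) {
      return console.error(error);
    }
    console.log('video bytes received:', stats.video.bytesReceived,
      'video packets lost:', stats.video.packetsLost);
  });
}, 5000);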
You may want to set minVideoBitrate on your publisher, which will force a minimum bitrate. This is an integer value in bits per second. If a participant cannot handle that minimum bitrate, they will drop to audio-only.
OT.initPublisher(el, {
  style: {
    nameDisplayMode: "on",
    buttonDisplayMode: "off",
  },
  minVideoBitrate: 2500000,
  audioSource: mediaStream.getAudioTracks()[0],
  videoSource: mediaStream.getVideoTracks()[0],
});
I am developing a web page where a user will select two audio files and the app will join them into a single output audio file. I am using Node.js on the back end and AngularJS on the client side. How can I achieve this? I have gone through many libraries, but nothing suits my needs.
I'm looking at a similar use case at the moment. The libraries aren't great, as most need a large external program to be installed in your server environment. Examples are:
sox-audio: should be OK as long as you don't need iterative (variable number of files) concatenation, but it requires SoX to be installed.
audio-concat: a wrapper for ffmpeg, but it also needs ffmpeg installed.
Or if you don't need the output audio to be seekable, you could simply do it with streams. Concept:
var fs = require('fs');

var writeStream = fs.createWriteStream('outputAudio.mp3'); // Or whatever you want to call it

// inputFiles should be an array of paths to the audio files you want to stitch
function recursiveStreamWriter(inputFiles) {
  if (inputFiles.length === 0) {
    console.log('Done!');
    writeStream.end(); // Close the output file once every input has been piped
    return;
  }
  let nextFile = inputFiles.shift();
  var readStream = fs.createReadStream(nextFile);
  // Keep the write stream open so the next file can be appended
  readStream.pipe(writeStream, { end: false });
  readStream.on('end', () => {
    console.log('Finished streaming an audio file');
    recursiveStreamWriter(inputFiles);
  });
}
This works and the transitions are fine, but audio players struggle to seek the resulting file, presumably because naive byte-level concatenation leaves each source file's headers in place, so the duration metadata no longer matches the combined audio. A decent full example of the recursive stream method can be found here.
As an example, I am combining the iPhone ringtone with the Nokia ringtone and piping the reader streams into a single write stream.
const fs = require('fs');

// Create two reader streams
let readerstream1 = fs.createReadStream('iphone-6.mp3');
let readerstream2 = fs.createReadStream('Nokia.mp3');
// Create a write stream
let writestream = fs.createWriteStream('NewRingtone.mp3');

// Pipe one after the other: keep the write stream open after the first
// file finishes, then let the second one end it.
readerstream1.pipe(writestream, { end: false });
readerstream1.on('end', () => {
  readerstream2.pipe(writestream);
});