I have a list of audio files that I need to be able to play, and I am using the Audio API from the expo-av library in React Native. I am wondering what the best practices are for handling playback of a list of audio files. The requirement is that playback should be done from the list; in other words, we don't want to navigate to a different screen component to handle playback.
I would like to separate the concerns of the media list component from the actual player, so that "MediaList" is responsible for listing the audio files and the "Media" component is responsible for handling playback. In that case it makes sense to initialize an expo-av Audio object in each Media component. This separates the concerns of Media and MediaList; however, it also looks like a performance issue, since there would be one Audio instance per list item.
So my question is: does having an Audio sound object for each Media instance make sense from a performance/resources perspective? Or should I have only one Audio sound object and reuse it every time I want to play any file? The question is pretty broad without code, but I hope someone can provide some direction on the best approach.
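For illustration, the "one shared sound object" approach could be sketched roughly like this. This is TypeScript, but `SoundLike` and the injected loader are stand-ins I'm inventing here for expo-av's `Audio.Sound` (with its `createAsync`/`playAsync`/`unloadAsync` methods), so treat it as a sketch of the idea, not working expo-av code:

```typescript
// Sketch of a single shared sound object. SoundLike stands in for
// expo-av's Audio.Sound; the loader would wrap something like
// Audio.Sound.createAsync({ uri }) in the real component.
interface SoundLike {
  playAsync(): Promise<void>;
  unloadAsync(): Promise<void>;
}

class SharedPlayer {
  private current: SoundLike | null = null;
  private currentUri: string | null = null;

  constructor(private load: (uri: string) => Promise<SoundLike>) {}

  // Unload whatever was loaded before, then load and play the new file,
  // so at most one sound is held in memory at a time.
  async play(uri: string): Promise<void> {
    if (this.current !== null) {
      await this.current.unloadAsync();
      this.current = null;
    }
    const sound = await this.load(uri);
    this.current = sound;
    this.currentUri = uri;
    await sound.playAsync();
  }

  get playingUri(): string | null {
    return this.currentUri;
  }
}
```

Each Media row would then call something like `sharedPlayer.play(item.uri)` instead of owning its own Audio instance, which keeps the MediaList/Media separation of concerns while only ever holding one loaded sound.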
I am trying to render a waveform shape in React Native while recording audio. I looked up many packages, but they all need an audio URL, so they don't support realtime recording. I tried building one myself using a package that gives me the decibel value while recording, pushing each value into a state array, but it causes too much lag since I call setState every 0.5 seconds.
Any suggestion?
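One direction for the lag itself, independent of any package: don't call setState for every sample. Push the decibel values into a plain array (e.g. kept in a ref) and flush them into state on a fixed interval, so you get one render per flush instead of one per sample. A rough sketch in TypeScript; `WaveformBuffer` and all the names here are made up for illustration, not from any library:

```typescript
// Sketch: batch metering samples and flush them to state on an
// interval, instead of one setState per sample.
class WaveformBuffer {
  private pending: number[] = [];
  private timer: ReturnType<typeof setInterval> | null = null;

  constructor(
    // e.g. (batch) => setBars(prev => [...prev, ...batch])
    private flush: (samples: number[]) => void,
    private intervalMs = 250,
  ) {}

  // Called on every metering callback; cheap, triggers no render.
  push(decibels: number): void {
    this.pending.push(decibels);
  }

  // Hand all accumulated samples to the UI in one go.
  flushNow(): void {
    if (this.pending.length === 0) return;
    const batch = this.pending;
    this.pending = [];
    this.flush(batch);
  }

  start(): void {
    this.timer = setInterval(() => this.flushNow(), this.intervalMs);
  }

  stop(): void {
    if (this.timer !== null) clearInterval(this.timer);
    this.timer = null;
  }
}
```

You can also cap how many samples the flush callback keeps, so the state array (and the number of rendered bars) stays bounded as the recording gets long.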
The audio-react-recorder package provides a recording interface as well as a somewhat customisable waveform. I think it's a good place to start; I've used it a few times and it works quite well.
Here's a demo
Let me know if it works out for you.
I have a large WPF application which also uses MEF.
I want to trigger an audio alert on a certain condition and repeat it the number of times specified by the user. The audio file can be .wav or .mp3.
I am making use of SoundPlayer to play the audio.
I am not sure which timer to use for the repeat intervals.
I don't want to block the UI thread while audio is playing, and I also want it to be thread-safe.
Thanks in advance.
Why not consider using event aggregation, which is an implementation of the publisher/subscriber pattern? Whenever your application encounters a condition where it wants to play audio, it publishes its intention via the event aggregator.
A listener subscribes to these events and plays the correct audio based on the type of the event. If there are multiple requests, the listener may play them in parallel or in sequence, and you can implement the desired threading model (and the repeat delays, e.g. with a standard thread sleep or a timer) within the listener.
This way, you get to keep all your audio configuration and logic tucked behind a single module. All the other modules just ‘tell’ this module what they want to play (and optionally for how long).
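To make the shape of the pattern concrete, here's a minimal event-aggregator sketch. I'm writing it in TypeScript just to keep it short; in a WPF/MEF application you'd use something like Prism's EventAggregator or MEF-composed parts instead, and `PlayAudioRequest` is a name I made up:

```typescript
// Minimal publisher/subscriber sketch of the pattern described above.
// Publishers never touch the audio code; they only publish a request,
// and a single subscriber owns all playback logic.
interface PlayAudioRequest {
  file: string;        // .wav or .mp3 path
  repeatCount: number; // how many times the user wants it repeated
}

type Handler = (e: PlayAudioRequest) => void;

class EventAggregator {
  private handlers: Handler[] = [];

  subscribe(h: Handler): void {
    this.handlers.push(h);
  }

  publish(e: PlayAudioRequest): void {
    for (const h of this.handlers) h(e);
  }
}
```

The one listener that subscribes here would own the SoundPlayer, the repeat timer, and the threading, so publishers never block the UI thread waiting for playback.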
On the other hand, unless your application is large enough, the above approach may be overkill.
Here is a good write-up on event aggregation:
http://codebetter.com/glennblock/2009/02/23/event-aggregation-with-mef-with-and-without-eventaggregator/
In short: I have some audio files in my local SQLite database and want to play them with the native media player on Android.
According to the documentation, I can play audio either by placing the file on the SD card, by streaming it from a server, or via a URI.
There is no way to play an audio file by handing a byte array to the media player.
So my solution would be to build a ContentProvider which lets the media player access the audio file in the database via a URI. I came up with that idea through this tutorial:
http://www.vogella.com/articles/AndroidSQLite/article.html
Is this possible? Are there better ways to implement this?
If I understand your question correctly, you are proposing creating an artificial URL that you are then passing to the android system to be read, and you supply the data backing the URL by reading it from SQLite. That will probably work, and it solves some issues with content decoding, but it requires the creation of awkward URLs and sockets within the system. All that might not be the most reliable or efficient way to go.
The alternative is to decode the data yourself into PCM and supply it to Android via an AudioTrack. This would be superior because you aren't trying to create a hacked (and artificial) URL just to pipe a stream of data through, but it may require you to parse the data yourself and convert it to PCM. Depending on what formats you have in your database, this may be difficult and not worth it, because Android libraries for these kinds of conversions are not as accessible. So, which way is best depends on your goals, but I think those are your two options.
I'm writing a pair of applications for distributing audio (among other features). I have a WPF program that allows an artist to record and edit audio. Clicking a button then uploads this to a silverlight-powered website. A consumer visiting this website can then listen to the audio. Simple. It works. But I'd like it to be better: I need an audio format that works seamlessly on both the recording and playback sides.
I'm currently using the mp3 format, and I'm not happy with it. For recording/editing, I use the Alvas Audio C# library. It works OK, but MP3 recording requires the artist to go into his registry and change msacm.l3acm to l3codecp.acm. That's a lot to ask of an end user. Furthermore, MP3 recording seems rather fragile when I install on a new machine. (Sometimes it just randomly doesn't work until you've fiddled around for a while. I still don't know why.) I've been told that unless I want to pay royalties to the mp3 patent holders, I will always need to rely on this type of registry change.
So what other audio format could I use instead? I need something compressed. Alvas audio can also record to GSM, for example, but that won't play back in silverlight. Silverlight will play WMA, but I don't know how to record in that format - Alvas Audio won't. I'd be open to using another recording library instead, but I haven't managed to find one.
Am I missing something obvious, or is there really no user-friendly way to record audio in WPF and play it back in Silverlight? It seems like there should be...
Any suggestions greatly appreciated.
Thanks.
IMO, WMA would be your best bet. I'm not sure how your application is set up or how low-level you want to go, but the Windows Media Format SDK is a great way to encode WMA, and the runtimes come with Windows. There are .NET PIAs and samples for it here: http://windowsmedianet.sourceforge.net/
Given that Ogg Vorbis is being adopted for the new HTML audio tag in (cough) some browsers, it's probably worth checking it out. You won't get bitten by any licensing concerns if you follow this route. If ease of deployment is top of your list, then go with WMA.
[tries hard not to start ranting about fragmented state of codec options in browsers and the commercial interests that scupper any concensus]
Suppose there is a Silverlight streaming video player on some web site. How can I intercept the video stream and, for example, save it to a file, i.e. find the real source of the stream?
I know some sites embed the source in a tag, or at least that was the case with Flash. But sometimes players are smarter than that and call some logic via a web service. It is still possible to figure everything out by analyzing the .dll with Reflector, but that is hardcore! Every player may have different logic, so I figured it would be easier to just capture the current stream somehow.
Any thoughts?
Ooook! Got me an answer that can be used as a nice workaround. Using Fiddler, I was able to capture the traffic and figure out what's going on. Now I'm happily watching the same video as before, only using the uber feature of WMP that lets me play videos faster.