Is it possible to capture both mic and line-in at the same time using ALSA?

I'm not terribly familiar with ALSA, but I'm supporting an application that uses it.
Is it possible to record audio from both the mic and line-in simultaneously? Not necessarily mixing the audio, though that is a possibility that has been requested. Can both be set to record, with ALSA used to read each of them individually?
The documentation on ALSA is not terribly helpful, and this is basically my first sojourn into sound handling on Linux using ALSA.
Any and all help would be greatly appreciated; I'm hoping someone out there has done something like this in the past and either has a sample to share or a link to point me in the right direction.

Maybe this can be done. I'm not sure and haven't tested it, but based on http://www.jrigg.co.uk/linuxaudio/ice1712multi.html, the following should give you one virtual device with 4 channels:
pcm.multi_capture {
    type multi
    slaves.a.pcm hw:0
    slaves.a.channels 2
    slaves.b.pcm hw:1
    slaves.b.channels 2
    bindings.0.slave a
    bindings.0.channel 0
    bindings.1.slave a
    bindings.1.channel 1
    bindings.2.slave b
    bindings.2.channel 0
    bindings.3.slave b
    bindings.3.channel 1
}
I don't know if you can mix them with the route plugin, or what the correct syntax would be:
pcm.route_capture {
    type route
    slave.pcm "multi_capture"
    ttable.0.0 0.5
    ttable.1.1 0.5
    ttable.0.2 0.5
    ttable.1.3 0.5
}
or
pcm.route_capture {
    type route
    slave.pcm "multi_capture"
    ttable.0.0 0.5
    ttable.1.1 0.5
    ttable.2.0 0.5
    ttable.3.1 0.5
}
If someone tests this, please tell us the results. Thank you!
I wish you luck!
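Reading the multi_capture device defined above from C would then look roughly like this (an untested sketch; the S16_LE format, 44.1 kHz rate and buffer size are assumptions, and most error handling is trimmed):

/* Open the "multi_capture" PCM defined above and read 4 interleaved channels:
 * 0/1 come from slave a (hw:0), 2/3 from slave b (hw:1). */
#include <alsa/asoundlib.h>

int main(void) {
    snd_pcm_t *pcm;
    short buf[4 * 1024];    /* 1024 frames x 4 channels, S16_LE */

    if (snd_pcm_open(&pcm, "multi_capture", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;
    if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                           SND_PCM_ACCESS_RW_INTERLEAVED,
                           4, 44100, 1, 500000) < 0)
        return 1;

    snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 1024);
    if (n < 0)
        n = snd_pcm_recover(pcm, (int)n, 0);
    /* buf now holds n interleaved frames; de-interleave as needed */

    snd_pcm_close(pcm);
    return 0;
}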

arecord -l will give you a list of available capture devices. In my case:
**** List of CAPTURE Hardware Devices ****
card 0: M2496 [M Audio Audiophile 24/96], device 0: ICE1712 multi [ICE1712 multi]
Subdevices: 1/1
Subdevice #0: subdevice #0
So, with my card, you would be out of luck - there is only one device (i.e. only one distinct source). This device will give you all data routed to it by hardware, as configured by an external mixer application.
With some cards it might, however, be possible to route MIC to channel 1 (left) and LINE to channel 2 (right), and then record 2 channels, separating them as needed in your application. Of course, if supported by hardware, you could also use two channels each and record four channels.
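As a rough sketch of that stereo approach (untested; the "default" device name, S16_LE format and 44.1 kHz rate are assumptions), the split in the application would just de-interleave the captured frames:

#include <alsa/asoundlib.h>

#define FRAMES 1024

int main(void) {
    snd_pcm_t *pcm;
    short interleaved[2 * FRAMES], mic[FRAMES], line[FRAMES];

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
        return 1;
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE, SND_PCM_ACCESS_RW_INTERLEAVED,
                       2, 44100, 1, 500000);

    snd_pcm_sframes_t n = snd_pcm_readi(pcm, interleaved, FRAMES);
    for (snd_pcm_sframes_t i = 0; i < n; ++i) {
        mic[i]  = interleaved[2 * i];       /* left channel  = MIC (per the routing above) */
        line[i] = interleaved[2 * i + 1];   /* right channel = LINE */
    }

    snd_pcm_close(pcm);
    return 0;
}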

How to send a link to a PlantUML configuration

I have an activity diagram created in PlantUML:
@startuml
|#LightBlue|ILC Running Continously|
start
note
ILC grabs data from
VOLTTRON message bus
for HVAC system
and electric meter
end note
repeat :Calculate averaged power;
repeat
repeat while (Is Averaged Power above "Demand Limit"?) is (no)
->yes;
repeat :Calculate needed demand reduction;
:Calculate AHP Weights;
:Curtail selected loads;
note
Typical to VOLTTRON edge
device on interacting
with building systems via
protocol driver framework
end note
:Wait in Minutes "Control Time";
backward: Curtail more;
note
Devices already curtailed
saved in ILC agent memory
end note
repeat while (Is "Demand Limit" goal met?) is (no)
->yes;
backward: Manage Demand;
@enduml
Is it possible to send someone a link to the entire configuration, i.e. not just the output picture link shown in the snip below, but the entire source so that someone else can modify it?
I don't know whether this is documented anywhere, but from experience, trial and error, and observing the URLs of the PlantUML web server, one can see that the URLs are:
uml: http://www.plantuml.com/plantuml/uml/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
png: http://www.plantuml.com/plantuml/png/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
svg: http://www.plantuml.com/plantuml/svg/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
ASCII art: http://www.plantuml.com/plantuml/txt/SyfFKj2rKt3CoKnELR1Io4ZDoSa70000
In other words, the uml / png / svg / txt part changes according to the desired output.
Also, when one tries a non-existing part, it reverts back to uml.
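Just to illustrate the scheme (nothing here beyond what the example URLs above show; the encoded id is taken straight from them), the four variants can be generated from one another by swapping that path segment:

#include <stdio.h>

int main(void) {
    /* Encoded diagram id taken from the example URLs above */
    const char *id = "SyfFKj2rKt3CoKnELR1Io4ZDoSa70000";
    const char *kinds[] = { "uml", "png", "svg", "txt" };

    /* Same id, different output: only the path segment changes */
    for (int i = 0; i < 4; ++i)
        printf("http://www.plantuml.com/plantuml/%s/%s\n", kinds[i], id);
    return 0;
}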

Dronekit Example Follow Me Python Script not working

I'm trying to run an example script from DroneKit. The code looks like this:
import gps
import socket
import time
from droneapi.lib import VehicleMode, Location
def followme():
    """
    followme - A DroneAPI example

    This is a somewhat more 'meaty' example on how to use the DroneAPI. It uses the
    python gps package to read positions from the GPS attached to your laptop and
    every two seconds it sends a new goto command to the vehicle.

    To use this example:
    * Run mavproxy.py with the correct options to connect to your vehicle
    * module load api
    * api start <path-to-follow_me.py>

    When you want to stop follow-me, either change vehicle modes from your RC
    transmitter or type "api stop".
    """
    try:
        # First get an instance of the API endpoint (the connect via web case will be similar)
        api = local_connect()
        # Now get our vehicle (we assume the user is trying to control the first vehicle attached to the GCS)
        v = api.get_vehicles()[0]

        # Don't let the user try to fly while the board is still booting
        if v.mode.name == "INITIALISING":
            print "Vehicle still booting, try again later"
            return
        cmds = v.commands

        is_guided = False  # Have we sent at least one destination point?

        # Use the python gps package to access the laptop GPS
        gpsd = gps.gps(mode=gps.WATCH_ENABLE)
        while not api.exit:
            # This is necessary to read the GPS state from the laptop
            gpsd.next()

            if is_guided and v.mode.name != "GUIDED":
                print "User has changed flight modes - aborting follow-me"
                break

            # Once we have a valid location (see gpsd documentation) we can start moving our vehicle around
            if (gpsd.valid & gps.LATLON_SET) != 0:
                altitude = 30  # in meters
                dest = Location(gpsd.fix.latitude, gpsd.fix.longitude, altitude, is_relative=True)
                print "Going to: %s" % dest

                # A better implementation would only send new waypoints if the position had changed significantly
                cmds.goto(dest)
                is_guided = True
                v.flush()

            # Send a new target every two seconds
            # For a complete implementation of follow me you'd want to adjust this delay
            time.sleep(2)
    except socket.error:
        print "Error: gpsd service does not seem to be running, plug in USB GPS or run run-fake-gps.sh"

followme()
I tried to run it on my Raspberry Pi with Raspbian OS, but I got an error message like this:
Error : gpsd service does not seem to be running, plug in USB GPS or run run-fake-gps.sh
I get the feeling that my Raspberry Pi needs some kind of GPS device attached before I can run this script, but I don't really know.
Please kindly tell me what's wrong with it.
The full set of instructions I followed is here:
http://python.dronekit.io/1.5.0/examples/follow_me.html
As the example says:
[This example] will use a USB GPS attached to your laptop to have the vehicle follow you as you walk around a field.
Without a GPS device, the code doesn't know where you are so it would not be possible to implement any sort of "following" behavior. Before running the example, you would need to:
Acquire some sort of GPS device (I use one of these, but there are lots of alternatives).
Configure gpsd on your laptop to interface with the GPS device.
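The python gps package talks to the gpsd daemon over TCP (port 2947 by default), which is where the socket.error in the script comes from. A quick, rough way to check whether gpsd is actually reachable before running the example (a sketch in C, independent of the script itself):

#include <arpa/inet.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    /* gpsd listens on TCP port 2947 by default */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(2947);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) == 0)
        printf("gpsd is running\n");
    else
        printf("gpsd is not reachable - start it with your GPS device attached\n");

    close(fd);
    return 0;
}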

Silverlight Plug-in Crashes in Video Conference

We have developed a video conferencing application using Silverlight.
It works properly for 15 to 19 minutes, then the video stops and the Silverlight plug-in crashes.
For video encoding we are using the JPEG encoder; a single image from the CaptureSource gets encoded and sent on each tick of a timer.
I also tried to use Silversuite, but a popup message appears saying that Silversuite has expired.
Is there a proper solution for the encoding, the timer, or the plug-in?
Thanks.
We extended the time before the crash from 15 minutes to 1 to 1.5 hours by flushing the memory stream and decreasing the receiving buffer size.

Synchronization of audio and video

I need to display streaming video using MediaElement in a Windows Phone application.
From a web service I'm getting a stream that contains frames in H264 format AND raw AAC bytes (strange, but ffmpeg can parse it only with the -f ac3 parameter).
So, if I try to play only one of the streams (audio OR video), it plays fine. But I have issues when I try both.
For example, if I report video samples without timestamps and report audio with timestamps, my video plays 3x-5x faster than it should.
MediaStreamSample msSamp = new MediaStreamSample(
    _videoDesc,
    vStream,
    0,
    vStream.Length,
    0,
    _emptySampleDict);

ReportGetSampleCompleted(msSamp);
From my web service I'm getting DTS and PTS values for video and audio frames in the following format:
120665029179960
but when I set them on the samples, my audio stream plays too slowly and with delays.
The timebase is 90 kHz.
So, could someone tell me how I can resolve this? Maybe I should calculate different timestamps for the samples? If so, please show me how.
Thanks.
Okay, I solved it.
So, this is what I needed to do to sync A/V:
Calculate the right timestamp for each video and audio frame using the framerate.
For example, for video I have a 90 kHz timebase, for audio 48 kHz, and 25 frames per second, so my frame increments will be:
_videoFrameTime = (int)TimeSpan.FromSeconds((double)0.9 / 25).Ticks;
_audioFrameTime = (int)TimeSpan.FromSeconds((double)0.48 / 25).Ticks;
And now we should add these values for each sample:
private void GetAudioSample()
{
    ...
    /* Getting sample from buffer */
    MediaStreamSample msSamp = new MediaStreamSample(
        _audioDesc,
        audioStream,
        0,
        audioStream.Length,
        _currentAudioTimeStamp,
        _emptySampleDict);

    _currentAudioTimeStamp += _audioFrameTime;

    ReportGetSampleCompleted(msSamp);
}
For getting a video frame, the method will be the same, but with a _videoFrameTime increment instead.
Hope this will be helpful to someone.
Roman.

OpenCV Capture from external camera

I'm currently writing a real-time application using OpenCV, in the following situation:
I'm trying to capture an image from an HDV camera plugged into FireWire 800.
I have tried to loop over the index used by cvCaptureFromCAM,
but no camera can be found (except the webcam).
Here is my code sample; it loops over the index (skipping 0 because it's the webcam's index):
CvCapture* camera;
int index;
for (index = 1; index < 100; ++index) {
    camera = cvCaptureFromCAM(index);
    if (camera)
        break;
}
if (!camera)
    abort();
Every time, it stops at the abort.
I'm compiling on OS X 10.7 and I have tested:
OpenCV 1.2 private framework
OpenCV 2.0 private framework (found here : OpenCV2.0.dmg)
OpenCV compiled by myself (ver. 2)
I know that this problem is well known and there is a lot of discussion about it,
but I'm not able to find any solution.
Has anyone been in the same situation?
Regards.
To explicitly select FireWire, perhaps you can try adding 300 to your index. At least in OpenCV 2.4, each type of camera is given a specific domain. For example, Video4Linux devices are given domain 200, so 200 is the first V4L camera, 201 is the second, etc. For FireWire, the domain is 300. If you specify an index less than 100, OpenCV just iterates through each of its domains in order, which may not be the order you expect. For example, it might find your webcam first and never find the FireWire camera. If this is not the issue, please accept my apologies.
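A minimal sketch of that suggestion (untested; it assumes the OpenCV 2.x C API that the question already uses, where CV_CAP_FIREWIRE equals the 300 offset mentioned above):

#include <opencv2/highgui/highgui_c.h>
#include <stdio.h>

int main(void) {
    /* 300 (CV_CAP_FIREWIRE) selects the FireWire/DC1394 domain;
     * 300 + n would be the n-th FireWire camera. */
    CvCapture *camera = cvCaptureFromCAM(CV_CAP_FIREWIRE + 0);
    if (!camera) {
        fprintf(stderr, "no FireWire camera found in domain 300\n");
        return 1;
    }

    IplImage *frame = cvQueryFrame(camera);   /* grab one frame as a sanity check */
    if (frame)
        printf("captured a %dx%d frame\n", frame->width, frame->height);

    cvReleaseCapture(&camera);
    return 0;
}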
index should start at 0 instead of 1.
If that doesn't work, maybe your camera is not supported by OpenCV. I suggest you check if it is in the compatibility list.
