I'm currently writing a real-time application using OpenCV, and I'm in the following situation:
I'm trying to capture an image from an HDV camera plugged into FireWire 800.
I have tried looping over the index passed to cvCaptureFromCAM,
but no camera is found (except the webcam).
Here is my code sample; it loops over the index (skipping 0 because that's the webcam's index):
CvCapture* camera;
int index;
for (index = 1; index < 100; ++index) {
    camera = cvCaptureFromCAM(index);
    if (camera)
        break;
}
if (!camera)
    abort();
Every time, it ends at the abort().
I'm compiling on OS X 10.7 and I have tested:
OpenCV 1.2 private framework
OpenCV 2.0 private framework (found here: OpenCV2.0.dmg)
OpenCV compiled by myself (version 2)
I know this is a known problem and there is a lot of discussion about it,
but I haven't been able to find any solution.
Has anyone been in the same situation?
Regards.
To explicitly select FireWire, perhaps you can try adding 300 to your index? At least in OpenCV 2.4, each type of camera is given a specific domain. For example, Video4Linux cameras are given domain 200, so 200 is the first V4L camera, 201 is the second, and so on. For FireWire, the domain is 300. If you specify an index less than 100, OpenCV just iterates through each of its domains in order, which may not be the order you expect: it might find your webcam first and never reach the FireWire camera. If this is not the issue, please accept my apologies.
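For instance, a minimal sketch using the OpenCV 2.4 C API (CV_CAP_IEEE1394 is the FireWire domain constant, equal to 300):

#include <opencv/highgui.h>

int main() {
    // First FireWire camera: domain 300 + index 0
    CvCapture* camera = cvCaptureFromCAM(CV_CAP_IEEE1394);
    if (!camera)
        return 1;
    IplImage* frame = cvQueryFrame(camera); // grab one frame to verify capture works
    cvReleaseCapture(&camera);
    return frame ? 0 : 1;
}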
The index should start at 0 instead of 1.
If that doesn't work, maybe your camera is not supported by OpenCV. I suggest you check if it is in the compatibility list.
The page https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-ios gives the following examples:
if (self.goCoder != nil) {
    // Associate the U/I view with the SDK camera preview
    self.goCoder.cameraView = self.view;
    // Start the camera preview
    [self.goCoder.cameraPreview startPreview];
}
// Start streaming
[self.goCoder startStreaming:self];
// Stop the broadcast that is currently running
[self.goCoder endStreaming:self];
The equivalent Java code for Android, reported at https://www.wowza.com/docs/how-to-build-a-basic-app-with-gocoder-sdk-for-android#start-the-camera-preview, is:
// Associate the WOWZCameraView defined in the U/I layout with the corresponding class member
goCoderCameraView = (WOWZCameraView) findViewById(R.id.camera_preview);
// Start the camera preview display
if (mPermissionsGranted && goCoderCameraView != null) {
    if (goCoderCameraView.isPreviewPaused())
        goCoderCameraView.onResume();
    else
        goCoderCameraView.startPreview();
}
// Start streaming
goCoderBroadcaster.startBroadcast(goCoderBroadcastConfig, this);
// Stop the broadcast that is currently running
goCoderBroadcaster.endBroadcast(this);
The code is self-explanatory: the first block starts a camera preview, the second starts streaming, and the third stops it. I want the preview and the streaming inside a Codename One PeerComponent, but I don't remember / understand how to modify these native code examples so they return a PeerComponent through the native interface.
(I tried reading the developer guide again, but I'm a bit confused on this point.)
Thank you
This is the key line in the iOS instructions:
self.goCoder.cameraView = self.view;
This is where you define the view that you need to return to the peer so we can place it. You need to change it from self.view to a view object you create; I think you can just allocate a UIView and assign/return that.
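As an illustrative sketch of that change (hypothetical snippet; the surrounding method is assumed to be the native-interface implementation that hands the view back to Codename One):

// Hypothetical: create a dedicated view instead of using self.view,
// hand it to the SDK, and return it so Codename One can place it as a peer.
UIView *cameraContainer = [[UIView alloc] initWithFrame:CGRectZero];
self.goCoder.cameraView = cameraContainer;
return cameraContainer;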
For the Android code, instead of inflating the WOWZCameraView from the XML layout they use there, you can construct the WOWZCameraView directly and return it, as far as I can tell.
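A rough sketch of what the Android native-interface implementation might look like (hypothetical class; in Codename One, a method declared to return PeerComponent returns android.view.View on the Android side, and the WOWZCameraView(Context) constructor is an assumption):

import android.view.View;
import com.codename1.impl.android.AndroidNativeUtil;
import com.wowza.gocoder.sdk.api.devices.WOWZCameraView;

public class GoCoderNativeImpl {
    private WOWZCameraView cameraView;

    // Declared as returning PeerComponent in the native interface
    public View createCameraView() {
        cameraView = new WOWZCameraView(AndroidNativeUtil.getActivity()); // assumed constructor
        return cameraView;
    }

    public boolean isSupported() {
        return true;
    }
}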
The problem shows on all Win8 systems: all brands, all types of desktop, laptop, all-in-one, and tablet (tested on nearly every system at Best Buy, and there are a ton of them, so I can't be the first person to see this).
What is happening is shown in the image below (note the captions under each surface): on Win8 the rendering is brighter than on Win7 for native code and WinForms, both based on a windowed ID3D11Device/Context; and to make things worse, the rendering is darker via WPF and WPF's shared surface/texture features, though it uses a similar device/context. The actual rendering loop and shaders are identical. Win7/Vista render at the same, ideal brightness via a native render target or a WPF shared surface.
The DirectX 11 code was developed on Win7. It's very basic DX stuff and the shader is as simple a shader as possible; very similar to the most basic DirectX SDK examples.
Why is DX11 Win8 brightness not consistent with Win7? Gradients seem different too.
Why would Win8 WPF shared surface/texture create even more difference?
What is the best strategy to solve such rendering brightness differences?
I did end up answering this myself, but I welcome improvements or related answers about brightness/lighting differences between Win7 and Win8, since searching the net for this topic turns up few results.
After much work between me and MS (MS wanted a repro without using DXUT, even though I told them PNTriangles11 caused the issue, so it was a wild goose chase), I found it was related to EnumOutputs failing on Win8 (MS has yet to provide an official reason; I will update). The DXUT code paths that call EnumOutputs fail, leading to the problematic section of code in DXUTApplyDefaultDeviceSettings(DXUTDeviceSettings *modifySettings)
where...
modifySettings->d3d11.sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM_SRGB;
should be changed to...
modifySettings->d3d11.sd.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
which resolves the issue, and color is consistent between Win7 and Win8 (MS pointed this out). Though I'd still like to know why EnumOutputs is failing on Win8, and MS is likely the only one who can answer this. EnumOutputs failed on every Win8 system at Best Buy, all types of systems.
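To confirm that behavior on a given machine, here is a small standalone probe (plain DXGI, independent of DXUT; this is not part of the original repro, just a quick check):

#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main()
{
    IDXGIFactory* pFactory = nullptr;
    if( FAILED( CreateDXGIFactory( __uuidof(IDXGIFactory), (void**)&pFactory ) ) )
        return 1;
    IDXGIAdapter* pAdapter = nullptr;
    for( UINT a = 0; pFactory->EnumAdapters( a, &pAdapter ) != DXGI_ERROR_NOT_FOUND; ++a )
    {
        IDXGIOutput* pOutput = nullptr;
        HRESULT hr = pAdapter->EnumOutputs( 0, &pOutput );
        printf( "Adapter %u: EnumOutputs(0) %s (0x%08lx)\n",
                a, SUCCEEDED(hr) ? "succeeded" : "FAILED", hr );
        if( pOutput ) pOutput->Release();
        pAdapter->Release();
    }
    pFactory->Release();
    return 0;
}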
As for another DXUT modification needed for Win8 compatibility: within DXUTChangeDevice, adding a test for a nonzero hAdapterMonitor is likely wanted...
bool bNeedToResize = false;
if( hAdapterMonitor && DXUTGetIsWindowedFromDS( pNewDeviceSettings ) && !bKeepCurrentWindowSize )
{
    UINT nClientWidth;
    UINT nClientHeight;
    if( ::IsIconic( DXUTGetHWNDDeviceWindowed() ) )
For completeness, as it relates to the topic title: gamma-correction information can be found under "directx gamma correction", and Windows 8's new brightness-control features for integrated displays under "win8 brightness control".
I'm currently testing my C++/DX10 program (based on DXUT June 2010) on Windows 8,
and I'm having the same problems.
Here are additional changes/advice that I advise applying to DXUT:
1) Do NOT use the arguments /width, /height, /windowed, /fullscreen in the strExtraCommandLineParams of DXUTInit.
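For example, with the DXUT June 2010 signature, simply leave the extra-parameter string empty:

// Let DXUT parse the real command line and show message boxes on error,
// but pass no extra parameters such as /width or /fullscreen.
DXUTInit( true, true, NULL );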
2) In DXUTGetMonitorInfo, s_pFnGetMonitorInfo( hMonitor, lpMonitorInfo ) returns FALSE on my system,
so I have replaced this line with something like:
BOOL success = s_pFnGetMonitorInfo( hMonitor, lpMonitorInfo );
// try to return the least-wrong result
if( !success )
{
    RECT rcWork;
    if( lpMonitorInfo &&
        ( lpMonitorInfo->cbSize >= sizeof( MONITORINFO ) ) &&
        SystemParametersInfoA( SPI_GETWORKAREA, 0, &rcWork, 0 ) )
    {
        lpMonitorInfo->rcMonitor.left = 0;
        lpMonitorInfo->rcMonitor.top = 0;
        lpMonitorInfo->rcMonitor.right = GetSystemMetrics( SM_CXSCREEN );
        lpMonitorInfo->rcMonitor.bottom = GetSystemMetrics( SM_CYSCREEN );
        lpMonitorInfo->rcWork = rcWork;
        lpMonitorInfo->dwFlags = MONITORINFOF_PRIMARY;
        return TRUE;
    }
    return FALSE;
}
else
{
    return TRUE;
}
3) Concerning the brightness (gamma correction), I have added the following to my IsD3D10DeviceAcceptable callback, so every gamma-corrected (sRGB) back-buffer format is banned:
if( BackBufferFormat == DXGI_FORMAT_R8G8B8A8_UNORM_SRGB )
{
    return false;
}
And now everything seems to work.
(BTW, I'm not sure I understand your "hAdapterMonitor &&" modification, because the code doesn't use that handle directly; but maybe we don't have the same DXUT version.)
I'm pretty new to OpenCV and I'm trying to get my bearings by looking at, and running, sample code.
One of the sample programs that I was looking at is a program for displaying webcam video. Here are the important lines (the program doesn't execute further than this):
// Make frame.
CvCapture* capture = cvCaptureFromCAM(0);
if (!capture) {
    printf("Webcam not initialized....");
}
// Display video in frame.
Unfortunately, the if statement always executes. For some reason, capture is not initialized.
Even stranger, when I run the program, it even gives me a GUI to select the webcam that I want to use.
However, even after I select the webcam, capture is not initialized!
What does this mean? How do I fix this?
Thanks for any suggestions.
It is possible that OpenCV cannot access the webcam until after you select it. In that case, try looping until the webcam is available:
CvCapture *capture = NULL;
do {
    // you could also try passing in CV_CAP_ANY or -1 instead of 0
    capture = cvCaptureFromCAM(0);
} while (!capture);
If this still doesn't work, call cvErrorStr(cvGetErrStatus()) to get a string explaining the error.
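A minimal sketch of that diagnostic, using the OpenCV 1.x C API (the error-status functions live in the core module):

#include <opencv/cv.h>
#include <opencv/highgui.h>
#include <stdio.h>

int main()
{
    CvCapture* capture = cvCaptureFromCAM(0);
    if (!capture) {
        // cvGetErrStatus returns the last error code; cvErrorStr renders it as text
        printf("Capture failed: %s\n", cvErrorStr(cvGetErrStatus()));
        return 1;
    }
    cvReleaseCapture(&capture);
    return 0;
}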
This is going to be one of those awkward questions looking for an answer that probably doesn't exist, but here goes.
I've been developing some simple games using Corona, and whilst the functionality seems to work pretty well across most of the physical devices I have tested, the one main issue is the layout. I know you can't really build for every single device perfectly, but I'm wondering if there is a common method to make an app look good across as many screens as possible. I have access to these devices:
iPad 1 & 2: 4:3 (1.33)
iPhone (960x640): 3:2 (1.5)
iPhone (480x320): 3:2 (1.5)
Galaxy Nexus: 16:9 (1.77)
From what I have seen, people aim to use 320x480 as a scaled resolution and then let Corona upscale to the correct device resolution (with any #2x images as required) but this leads to letterboxing or cropping depending on the config.lua scale setting. Whilst it does scale correctly, having a letterbox isn't great.
So would I be best not to specify a width and height in the config file, but instead to run some sort of screen check at startup that looks for the 1.33 / 1.5 / 1.77 aspect ratios? Surely, given the whole point of the Corona SDK, there is some sort of 'typical' setup that developers use at the start of any new project?
Thank you
It seems that I have found a pretty good solution based on this forum post on the Ansca website: http://developer.anscamobile.com/forum/2012/03/12/understanding-letterbox-scalling
In summary, the config.lua should look like this:
application = {
    content = {
        width = 320,
        height = 480,
        scale = "letterbox",
        xAlign = "center",
        yAlign = "center",
        imageSuffix = {
            ["#2x"] = 2,
        },
    }
}
Create background images at 360x570 for older devices: 320x480 screens will crop the image slightly, and it will scale nicely for older Android devices.
Create background images at 720x1140 (exactly 2x) for iPad and iPhone Retina; again, these will scale on Android and be slightly cropped on iOS.
As an example, where you would normally create a 320x480 image and display it with:
local bg = display.newImageRect("bg.png",320x480)
bg.x = display.contentWidth/2
bg.y = display.contentHeight/2
... instead create a 360x570 background and use the following code:
local bg = display.newImageRect("bg.png",360x570)
bg.x = display.contentWidth/2
bg.y = display.contentHeight/2
This is just a summary, so check the link for more detailed instructions.
Well, you CAN use a number slightly off from 2 for the scaling if you want correctly sized images for the different devices. For example:
application =
{
    content =
    {
        width = 640,
        height = 960,
        scale = "zoomEven",
        imageSuffix =
        {
            ["-iphone3"] = 0.5,
            ["-ipad2"] = 1.066,
            ["-ipad3"] = 2.133,
        },
    }
}
In which "background.png" would be a 640x960 image for the iphone4, while "background-iphone3.png" would be 320x480 (you don´t need this, but it will reduce memory requirement for iphone3 applications). "background-ipad3.png" would need to be 1536x2048 (and half that for -ipad2).
Of course it doesn´t solve the aspect ratio for screen positioning, but it solves it for all other gfx related problems. Remember to use display.newImageRect, not display.newImage or you won´t see any difference.
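As a supplementary sketch for the screen-positioning part (not from the original answer; display.screenOriginX/Y are standard Corona properties reporting how far the visible screen extends beyond the content area, and the offsets here are arbitrary example values):

-- Anchor a HUD element near the real top-left corner of the screen,
-- compensating for the content that zoomEven crops away.
local hud = display.newText("Score: 0", 0, 0, native.systemFont, 24)
hud.x = display.screenOriginX + 80  -- screenOriginX is negative when content is cropped
hud.y = display.screenOriginY + 30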
I'm currently developing a GPS-powered solution for tracking field units, and I plan to use Samsung's Bada OS-powered phones (affordable but powerful). The only problem I'm now facing is that I don't know the easiest way to get GPS info from the phone (a GT-S5333) to a server (probably via a GET). What can I do?
I've searched the available options, and they might not be very friendly, as this app doesn't even require an interface on the phone, just the sending of GPS info to a server. Samsung provides a C++ API for Bada (but I think this might be overkill for this sort of task).
The other option might be using Webwag's widget API, but I've tried it already, and it doesn't even seem possible to do anything beyond RSS widgets.
Can someone help?
Eventually, I did learn about the possibility of using J2ME's Location API (JSR 179), and indeed it is supported on Bada OS phones (and on virtually any modern phone). So I went for it, and this is how I obtain my device's location:
import javax.microedition.lcdui.Form;
import javax.microedition.location.*;

private String reportLocation(Form mainForm) {
    try {
        Criteria cr = new Criteria();
        cr.setHorizontalAccuracy(500); // accept a fix accurate to within 500 metres
        final LocationProvider lp = LocationProvider.getInstance(cr);
        // register this class (which implements LocationListener) for periodic updates
        new Thread() {
            public void run() {
                lp.setLocationListener(PatrolGPSDevice.this, 30, -1, -1);
            }
        }.start();
        Location l = lp.getLocation(60); // block for up to 60 seconds for a first fix
        Coordinates c = l.getQualifiedCoordinates();
        if (c != null) {
            double lat = c.getLatitude();
            double lon = c.getLongitude();
            // do what I want with the location data -- in this case, send it to a server
            return lat + "," + lon;
        }
    } catch (LocationException le) {
        mainForm.append("Location error: " + le.getMessage());
    } catch (InterruptedException ie) {
        mainForm.append("Interrupted while waiting for a fix");
    }
    return null;
}
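For the "send it to a server" part via a GET, a minimal J2ME sketch using the Generic Connection Framework (the URL here is a placeholder):

import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;
import java.io.IOException;

private void sendLocation(double lat, double lon) throws IOException {
    // Hypothetical endpoint; coordinates are passed as query parameters
    String url = "http://example.com/track?lat=" + lat + "&lon=" + lon;
    HttpConnection conn = (HttpConnection) Connector.open(url);
    try {
        conn.setRequestMethod(HttpConnection.GET);
        int rc = conn.getResponseCode(); // forces the request to be sent
        if (rc != HttpConnection.HTTP_OK) {
            throw new IOException("Unexpected response code: " + rc);
        }
    } finally {
        conn.close();
    }
}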