In my C/C++ program, I'm using OpenCV to capture images from my webcam. The camera (Logitech QuickCam IM) can capture at resolutions 320x240, 640x480 and 1280x960. But, for some strange reason, OpenCV gives me images of resolution 320x240 only. Calls to change the resolution using cvSetCaptureProperty() with other resolution values just don't work. How do I capture images with the other resolutions possible with my webcam?
I'm using OpenCV 1.1pre1 under Windows (the videoInput library is used by default by this version of OpenCV under Windows).
With these instructions I can set the camera resolution. Note that I call the old cvCreateCameraCapture instead of cvCaptureFromCAM.
capture = cvCreateCameraCapture(cameraIndex);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, 640 );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, 480 );
videoFrame = cvQueryFrame(capture);
I've tested it with Logitech, Trust and Philips webcams.
There doesn't seem to be a solution. The resolution can be increased to 640x480 using this hack shared by lifebelt77. Here are the details reproduced:
Add to highgui.h:
#define CV_CAP_PROP_DIALOG_DISPLAY 8
#define CV_CAP_PROP_DIALOG_FORMAT 9
#define CV_CAP_PROP_DIALOG_SOURCE 10
#define CV_CAP_PROP_DIALOG_COMPRESSION 11
#define CV_CAP_PROP_FRAME_WIDTH_HEIGHT 12
Add the function icvSetPropertyCAM_VFW to cvcap.cpp:
static int icvSetPropertyCAM_VFW( CvCaptureCAM_VFW* capture, int property_id, double value )
{
    int result = -1;
    CAPSTATUS capstat;
    CAPTUREPARMS capparam;
    BITMAPINFO btmp;

    switch( property_id )
    {
    case CV_CAP_PROP_DIALOG_DISPLAY:
        result = capDlgVideoDisplay(capture->capWnd);
        //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEODISPLAY,0,0);
        break;

    case CV_CAP_PROP_DIALOG_FORMAT:
        result = capDlgVideoFormat(capture->capWnd);
        //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOFORMAT,0,0);
        break;

    case CV_CAP_PROP_DIALOG_SOURCE:
        result = capDlgVideoSource(capture->capWnd);
        //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOSOURCE,0,0);
        break;

    case CV_CAP_PROP_DIALOG_COMPRESSION:
        result = capDlgVideoCompression(capture->capWnd);
        break;

    case CV_CAP_PROP_FRAME_WIDTH_HEIGHT:
        // value packs both dimensions into one double: value = width * 1000 + height
        capGetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO));
        btmp.bmiHeader.biWidth = floor(value/1000);
        btmp.bmiHeader.biHeight = value - floor(value/1000)*1000;
        btmp.bmiHeader.biSizeImage = btmp.bmiHeader.biHeight *
            btmp.bmiHeader.biWidth * btmp.bmiHeader.biPlanes *
            btmp.bmiHeader.biBitCount / 8;
        capSetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO));
        break;

    default:
        break;
    }

    return result;
}
and edit captureCAM_VFW_vtable as follows:
static CvCaptureVTable captureCAM_VFW_vtable =
{
    6,
    (CvCaptureCloseFunc)icvCloseCAM_VFW,
    (CvCaptureGrabFrameFunc)icvGrabFrameCAM_VFW,
    (CvCaptureRetrieveFrameFunc)icvRetrieveFrameCAM_VFW,
    (CvCaptureGetPropertyFunc)icvGetPropertyCAM_VFW,
    (CvCaptureSetPropertyFunc)icvSetPropertyCAM_VFW, // was NULL
    (CvCaptureGetDescriptionFunc)0
};
Now rebuild highgui.dll.
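With the rebuilt DLL, the new property can be driven from user code. A minimal sketch, assuming the patch above (note how the handler decodes the packed double as width * 1000 + height):

// Hack usage: ask the patched highgui.dll for 640x480 by packing
// both dimensions into one value (640 * 1000 + 480 = 640480).
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH_HEIGHT, 640 * 1000 + 480);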
I've done image processing in Linux before and skipped OpenCV's built-in camera functionality because (as you've discovered) it's incomplete.
Depending on your OS, you may have more luck going straight to the hardware through normal channels rather than through OpenCV. If you are using Linux, video4linux or video4linux2 should give you relatively trivial access to USB webcams, and you can use libavc1394 for FireWire. Depending on the device and the quality of the example code you follow, you should be able to get the device running with the parameters you want in an hour or two.
Edited to add: You are on your own if it's Windows. I imagine it's not much more difficult, but I've never done it.
I strongly suggest using the videoInput lib: it supports any DirectShow device (even multiple devices at the same time) and is more configurable. You'll spend five minutes making it play with OpenCV.
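For example, a minimal sketch of pairing videoInput with OpenCV (written from memory of the videoInput API; the device index and window name are assumptions):

#include "videoInput.h"
#include <opencv/highgui.h>

videoInput VI;
int device = 0;
VI.setupDevice(device, 640, 480);   // request the resolution up front

IplImage* frame = cvCreateImage(cvSize(VI.getWidth(device), VI.getHeight(device)),
                                IPL_DEPTH_8U, 3);
cvNamedWindow("cam", 1);
while (cvWaitKey(10) != 27) {
    if (VI.isFrameNew(device))
        VI.getPixels(device, (unsigned char*)frame->imageData,
                     false, true);  // false = keep channel order, true = flip vertically
    cvShowImage("cam", frame);
}
VI.stopDevice(device);
cvReleaseImage(&frame);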
Check this ticket out:
https://code.ros.org/trac/opencv/ticket/376
"The solution is to use the newer libv4l-based wrapper.
install libv4l-dev (this is how it's called in Ubuntu)
rerun cmake, you will see "V4L/V4L2: Using libv4l"
rerun make. now the resolution can be changed. tested with built-in isight on MBP."
This fixed it for me on Ubuntu and may well work for you too.
Here is code I finally got working in Python, once Aaron Haun pointed out that I needed to define the arguments of the set function before using them.
#Camera_Get_Set.py
#By Forrest L. Erickson of VRX Company Inc. 8-31-12.
#Opens the camera and reads and reports the settings.
#Then tries to set for higher resolution.
#Works with Logitech C525 for resolutions 960 by 720 and 1600 by 896
import cv2.cv as cv
import numpy
CV_CAP_PROP_POS_MSEC = 0
CV_CAP_PROP_POS_FRAMES = 1
CV_CAP_PROP_POS_AVI_RATIO = 2
CV_CAP_PROP_FRAME_WIDTH = 3
CV_CAP_PROP_FRAME_HEIGHT = 4
CV_CAP_PROP_FPS = 5
CV_CAP_PROP_POS_FOURCC = 6
CV_CAP_PROP_POS_FRAME_COUNT = 7
CV_CAP_PROP_BRIGHTNESS = 8
CV_CAP_PROP_CONTRAST = 9
CV_CAP_PROP_SATURATION = 10
CV_CAP_PROP_HUE = 11
CV_CAPTURE_PROPERTIES = (   # a tuple, not a set: order must match the names below
    CV_CAP_PROP_POS_MSEC,
    CV_CAP_PROP_POS_FRAMES,
    CV_CAP_PROP_POS_AVI_RATIO,
    CV_CAP_PROP_FRAME_WIDTH,
    CV_CAP_PROP_FRAME_HEIGHT,
    CV_CAP_PROP_FPS,
    CV_CAP_PROP_POS_FOURCC,
    CV_CAP_PROP_POS_FRAME_COUNT,
    CV_CAP_PROP_BRIGHTNESS,
    CV_CAP_PROP_CONTRAST,
    CV_CAP_PROP_SATURATION,
    CV_CAP_PROP_HUE)
CV_CAPTURE_PROPERTIES_NAMES = [
"CV_CAP_PROP_POS_MSEC",
"CV_CAP_PROP_POS_FRAMES",
"CV_CAP_PROP_POS_AVI_RATIO",
"CV_CAP_PROP_FRAME_WIDTH",
"CV_CAP_PROP_FRAME_HEIGHT",
"CV_CAP_PROP_FPS",
"CV_CAP_PROP_POS_FOURCC",
"CV_CAP_PROP_POS_FRAME_COUNT",
"CV_CAP_PROP_BRIGHTNESS",
"CV_CAP_PROP_CONTRAST",
"CV_CAP_PROP_SATURATION",
"CV_CAP_PROP_HUE"]
capture = cv.CaptureFromCAM(0)
print ("\nCamera properties before query of frame.")
for i in range(len(CV_CAPTURE_PROPERTIES_NAMES)):
    foo = cv.GetCaptureProperty(capture, CV_CAPTURE_PROPERTIES[i])
    print str(CV_CAPTURE_PROPERTIES_NAMES[i]) + ": " + str(foo)
print ("\nOpen a window for display of image")
cv.NamedWindow("Camera", 1)
while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("Camera", img)
    if cv.WaitKey(10) == 27:
        break
cv.DestroyWindow("Camera")
#cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1024)
#cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 768)
cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 1600)
cv.SetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 896)
print ("\nCamera properties after query and display of frame.")
for i in range(len(CV_CAPTURE_PROPERTIES_NAMES)):
    foo = cv.GetCaptureProperty(capture, CV_CAPTURE_PROPERTIES[i])
    print str(CV_CAPTURE_PROPERTIES_NAMES[i]) + ": " + str(foo)
print ("/nOpen a window for display of image")
cv.NamedWindow("Camera", 1)
while True:
    img = cv.QueryFrame(capture)
    cv.ShowImage("Camera", img)
    if cv.WaitKey(10) == 27:
        break
cv.DestroyWindow("Camera")
I am using Debian and Ubuntu. I had the same problem: I couldn't change the resolution of the video input using CV_CAP_PROP_FRAME_WIDTH and CV_CAP_PROP_FRAME_HEIGHT.
It turned out that the reason was a missing library.
I installed libv4l-dev through Synaptic, rebuilt OpenCV, and the problem is SOLVED!
I am posting this to ensure that no one else wastes time on this setProperty function. I spent two days on this, only to find that nothing seemed to work. So I dug into the code (I had installed the library the first time around). This is what actually happens: cvSetCaptureProperty calls setProperty inside the CvCapture class, and lo and behold, setProperty does nothing. It just returns false.
Instead I'll use another library to feed OpenCV captured video/images. I am using OpenCV 2.2.
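For anyone taking the same route, handing frames from an external capture library to OpenCV is cheap: you can wrap the raw buffer in an IplImage header without copying. A sketch (buffer, width and height stand in for whatever your capture library provides):

// Wrap an external BGR buffer in an IplImage header; no pixel copy happens.
IplImage* header = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, 3);
cvSetData(header, buffer, width * 3);  // widthStep = bytes per row for 8-bit BGR
// ... process 'header' like any other IplImage ...
cvReleaseImageHeader(&header);         // frees only the header, not 'buffer'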
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, WIDTH );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, HEIGHT);
cvQueryFrame(capture);
That will not work with OpenCV 2.2, but if you use OpenCV 2.1 it will work fine!
If you are on the Windows platform, try DirectShow (IAMStreamConfig).
http://msdn.microsoft.com/en-us/library/dd319784%28v=vs.85%29.aspx
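A rough sketch of what that looks like (assumes a capture graph is already built, pBuilder is the ICaptureGraphBuilder2 and pCaptureFilter is the device filter; error handling omitted):

// Negotiate a new capture resolution through IAMStreamConfig.
IAMStreamConfig* pConfig = NULL;
pBuilder->FindInterface(&PIN_CATEGORY_CAPTURE, &MEDIATYPE_Video,
                        pCaptureFilter, IID_IAMStreamConfig, (void**)&pConfig);
AM_MEDIA_TYPE* pmt = NULL;
if (SUCCEEDED(pConfig->GetFormat(&pmt)) && pmt->formattype == FORMAT_VideoInfo) {
    VIDEOINFOHEADER* vih = (VIDEOINFOHEADER*)pmt->pbFormat;
    vih->bmiHeader.biWidth  = 1280;   // desired size; must be a mode the device supports
    vih->bmiHeader.biHeight = 960;
    pConfig->SetFormat(pmt);          // call before the graph starts running
    DeleteMediaType(pmt);             // helper from the DirectShow base classes
}
pConfig->Release();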
Under Windows, try the videoInput library:
http://robocraft.ru/blog/computervision/420.html
I find that in Windows (from Win98 to WinXP SP3), OpenCV will often use Microsoft's VFW library for camera access. The problem with this is that it is often very slow (say a max of 15 FPS frame capture) and buggy (hence why cvSetCaptureProperty often doesn't work). Luckily, you can usually change the resolution in other software (particularly "AMCAP", a demo program that is easily available) and it will affect the resolution that OpenCV uses. For example, you can run AMCAP to set the resolution to 640x480, and then OpenCV will use that by default from that point onwards!
But if you can, use a different Windows camera access library such as the "videoInput" library http://muonics.net/school/spring05/videoInput/ which accesses the camera using the very efficient DirectShow (part of DirectX). Or if you have a professional-quality camera, it will often come with a custom API that lets you access the camera, and you could use that for fast access with the ability to change resolution and many other things.
Just one piece of information that could be valuable for people having difficulties changing the default capture resolution (640 x 480)! I ran into such a problem myself with OpenCV 2.4.x and a Logitech camera ... and found a workaround!
The behaviour I observed is that the default format is set up as the initial parameters when camera capture is started (cvCreateCameraCapture), and all requests to change the height or width:
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, ...
or
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, ...
are not possible afterwards! Indeed, by checking the return codes of the ioctl calls I discovered that the V4L2 driver returns EBUSY for those requests!
Therefore, one workaround is to change the default values directly in highgui/cap_v4l.cpp:
#define DEFAULT_V4L_WIDTH 1280   // Originally 640
#define DEFAULT_V4L_HEIGHT 720   // Originally 480
After that, I just recompiled OpenCV ... and managed to get 1280 x 720 without any problem! Of course, a better fix would be to stop the acquisition, change the parameters, and restart the stream afterwards, but I'm not familiar enough with OpenCV to do that!
Hope it will help.
Michel BEGEY
Try this:
capture = cvCreateCameraCapture(-1);
//set resolution
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, frameWidth);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, frameHeight);
cvQueryFrame(capture);
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_WIDTH, any_supported_size );
cvSetCaptureProperty( capture, CV_CAP_PROP_FRAME_HEIGHT, any_supported_size);
cvQueryFrame(capture);
should be just enough!
Related
I'm using the x264 library to compress a video frame by frame. It works, and I can play it back locally without any issue.
However, I need to send that video over the LAN and that LAN is rather busy already so I need to limit the size of each frame to a maximum of about 250Kb.
I use the following code to set up the parameters, but changing the bit rate values does not seem to have any effect on what the library does with the input frames:
x264_param_t param = {};
if(x264_param_default_preset(&param, "faster", nullptr) < 0)
{
    return -1;
}
param.i_csp = X264_CSP_I420;
param.i_width = 3840;
param.i_height = 2160;
param.i_keyint_max = static_cast<int>(f_frame_header.f_fps);
param.i_threads = X264_THREADS_AUTO;
param.b_vfr_input = 0;
param.b_repeat_headers = 1;
param.b_annexb = 1;
// the following three parameters are the ones I tried to change with no results
param.rc.i_bitrate = 100000;
param.rc.i_vbv_max_bitrate = 100000;
param.rc.i_vbv_buffer_size = 125000;
if(x264_param_apply_profile(&param, "high") < 0)
{
    return -1;
}
...enter loop reading frames and compressing them...
Changing the i_bitrate, i_vbv_max_bitrate and i_vbv_buffer_size parameters seems to have absolutely no effect on the size of the resulting frames. I still get some frames over 500Kb, and often several rather large frames close together, as the following sizes show:
20264
358875
218429
20728
25215
310230
36127
9077
29785
341541
222778
23542
21356
276772
25339
32459
421036
11179
6172
286070
193849
What I would need is for the largest frame to be around 250,000 bytes at its maximum. Now I understand that once in a while it may go over a bit, but not by 2×. That's just too much for my current available bandwidth.
What am I doing wrong in the parameters setup above?
I've seen this command line:
ffmpeg -i input -c:v libx264 -b:v 2M -maxrate 2M -bufsize 1M output.mp4
which would suggest that what I'm doing above should work (I tried all sorts of values, including the ones on that command line). Yet the frame size does not really change between my runs.
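For what it's worth, one guess at the culprit (an observation about x264 defaults, not something confirmed in this thread): x264's rate-control fields are expressed in kilobits, rc.i_bitrate is only honoured when rc.i_rc_method is X264_RC_ABR (after the presets the default is CRF), and a VBV of 100000 kbit/s with a 125000 kbit buffer is far too loose to cap frames anywhere near 250 KB. A configuration along these lines should actually bind:

// Assumption: switch to ABR with a tight VBV. All units are kilobits.
param.rc.i_rc_method       = X264_RC_ABR; // i_bitrate is ignored in the default CRF mode
param.rc.i_bitrate         = 20000;       // ~20 Mbit/s average
param.rc.i_vbv_max_bitrate = 20000;       // cap on the instantaneous rate
param.rc.i_vbv_buffer_size = 2000;        // 2000 kbit = 250 KB, a rough per-frame ceiling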
I tried applying a blur to each frame to see whether it would help. Yes! It did. The result is a movie which is 2.44 times smaller than the original.
To load each JPEG image from the original, I use Magick++ (ImageMagick's C++ API), so I just apply the following blur to each image:
image.blur(0.0, 5.0);
and that took about 10 hours total (without the blur the same processing took about 40 minutes) but it was worth it since in the end the compressed movie went from 1,293,272,023 bytes to only 529,556,265 bytes (2.44218 times smaller). The blur added about 3.3 seconds of processing per frame and there are a little over 11,000 frames in the original.
Note: I used 5.0 for the blur because I have 4K images and although I can see a sharp difference when I look at one frame, when playing back the resulting movie, I don't notice the final blur. If you have smaller images, you probably want to use a smaller number. It looks like many people use a blur of just 0.05 and already have good results in compression ratios.
In C, use the BlurImage() function:
Image *BlurImage(const Image *image,const double radius,
const double sigma,ExceptionInfo *exception)
Here are some references about using a blur to further compress JPEG images, as it helps eliminate sharp edges, which do not compress well in the JPEG format (sharp edges are not as natural):
Recommendation for compressing JPG files with ImageMagick
How do I reduce the file size of an image? (search on "blur" to find the section)
Could I blur an image to dramatically reduce the file size?
I'm trying to create a DPI aware app which responds to user requested DPI change events by resizing the window.
The program in question is created in C and uses SDL2, however to retrieve system DPI information I use xlib directly, as the SDL DPI support in X11 is lacking.
I found two ways to get the correct DPI information on program startup, both involving getting Xft.dpi information from Xresource: one is to use XGetDefault(display, "Xft", "dpi"), while the other is to use XResourceManagerString, XrmGetStringDatabase and XrmGetResource. Both of them return the correct DPI value when the program is created.
The problem is, if the user changes the system scale while the program is running, both XGetDefault and XrmGetResource still return the old DPI value, even though when I run "xrdb -query | grep Xft.dpi" the value has indeed changed.
Does anyone know a way to get the updated Xft.dpi value?
I found out a way to do exactly what I wanted, even though it's rather hackish.
The solution (using XLib) is to create a new, temporary connection to the X server using XOpenDisplay and XCloseDisplay, and poll the resource information from that new connection.
The reason this is needed is because X fetches the resource information only once per new connection, and never updates it. Therefore, by opening a new connection, X will get the updated xresource data, which can then be used for the old main connection.
Be mindful that constantly opening and closing new X connections may not be great for performance, so only do it when you absolutely need to. In my case, since the window has borders, I only check for DPI changes when the title height has changed, as a DPI change will change the size of your title border due to font size differences.
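In code, the throwaway-connection trick looks roughly like this (a sketch using the standard Xlib resource-manager calls; the 96.0 fallback is an assumption):

#include <X11/Xlib.h>
#include <X11/Xresource.h>
#include <stdlib.h>

/* Re-read Xft.dpi through a temporary connection, since the resource
 * string is fetched once per connection and never refreshed.
 * XrmInitialize() should have been called once at startup. */
double query_xft_dpi(void)
{
    double dpi = 96.0;                    /* fallback if the resource is unset */
    Display* tmp = XOpenDisplay(NULL);
    if (!tmp)
        return dpi;
    char* rms = XResourceManagerString(tmp);
    if (rms) {
        XrmDatabase db = XrmGetStringDatabase(rms);
        char* type = NULL;
        XrmValue value;
        if (XrmGetResource(db, "Xft.dpi", "Xft.Dpi", &type, &value) && value.addr)
            dpi = atof(value.addr);
        XrmDestroyDatabase(db);
    }
    XCloseDisplay(tmp);
    return dpi;
}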
First off, it must be noted that the value of the Xft.dpi resource isn't necessarily accurate -- it depends on whether the system and/or user login scripts have set it correctly or not.
Also it is important to remember that the Xft.dpi resource is intended to be used by the Xft library, not by arbitrary programs looking for the screen resolution.
The Xft.dpi resource can be set as follows. This example effectively only deals with a display with a single screen, and note that it uses xdpyinfo. This also shows how it might not be exact, but could be rounded. Finally this example shows calculation of both the horizontal and vertical resolution, but Xft really only wants the horizontal resolution:
SCREENDPI=$(xdpyinfo | sed -n 's/^[ ]*resolution:[ ]*\([^ ][^ ]*\) .*$/\1/p;//q')
SCREENDPI_X=$(expr "$SCREENDPI" : '\([0-9]*\)x')
SCREENDPI_Y=$(expr "$SCREENDPI" : '[0-9]*x\([0-9]*\)')
# Default to the actual resolution, then round to 100 DPI when close enough.
FontXDPI=$SCREENDPI_X
FontYDPI=$SCREENDPI_Y
# N.B.: If true screen resolution is within 10% of 100DPI it makes the most
# sense to claim 100DPI to avoid font-scaling artifacts for bitmap fonts.
if expr \( $SCREENDPI_X / 100 = 1 \) \& \( $SCREENDPI_X % 100 \<= 10 \) >/dev/null; then
    FontXDPI=100
fi
if expr \( $SCREENDPI_Y / 100 = 1 \) \& \( $SCREENDPI_Y % 100 \<= 10 \) >/dev/null; then
    FontYDPI=100
fi
echo "Xft.dpi: ${FontYDPI}" | xrdb -merge
I really wish I knew why Xft didn't at least try to find out the screen's resolution itself instead of relying all of the time on its "dpi" resource being set, but I've found that the current implementation only uses the resource setting, so something like the above is actually always necessary to set the resource properly (and further one must also make sure the X Server itself has been properly configured with the correct physical screen dimensions).
From a C program you want to do just what xdpyinfo itself does and skip all the nonsense about Xft's resources. Here's the xdpyinfo code paraphrased:
#include <X11/Xlib.h>

Display *dpy;
int scr;

dpy = XOpenDisplay(displayname);   /* e.g. NULL for $DISPLAY */
for (scr = 0; scr < ScreenCount(dpy); scr++) {
    int xres, yres;
    /*
     * there are 2.54 centimeters to an inch; so there are 25.4 millimeters.
     *
     * dpi = N pixels / (M millimeters / (25.4 millimeters / 1 inch))
     *     = N pixels / (M inch / 25.4)
     *     = N * 25.4 pixels / M inch
     */
    xres = ((((double) DisplayWidth(dpy, scr)) * 25.4) /
            ((double) DisplayWidthMM(dpy, scr))) + 0.5;
    yres = ((((double) DisplayHeight(dpy, scr)) * 25.4) /
            ((double) DisplayHeightMM(dpy, scr))) + 0.5;
}
XCloseDisplay(dpy);
Note also that if you are for some odd reason scaling your whole display (e.g. with xrandr), then you should want the fonts to scale equally with everything else. It's a horribly bad hack to use whole-screen scaling to scale just the fonts, especially when for most things it's simpler to just tell the application to use properly scaled fonts that will display at a constant on-screen point size (which is exactly what Xft uses the "dpi" resource for). I'm guessing Ubuntu does something odd to change the screen resolution, e.g. using xrandr to scale up the apparent size of icons and other on-screen widgets without applications having to know about screen size and resolution, and then it has to lie to Xft by rewriting the Xft.dpi resource.
Note that if you avoid whole-screen scaling then applications that don't use Xft can still get proper font scaling by correctly requesting a properly scaled font, i.e. even with bitmap fonts you can get them scaled to the proper physical on-screen size by using the screen's actual resolution in the font-spec. E.g. continuing from the above shell fragment:
# For pre-Xft applications we can specify physical font text sizes IFF we also tell
# it the screen's actual resolution when requesting a font. Note the use of the
# rounded values here.
#
DecentDeciPt="80"
DecentPt="8"
export DecentDeciPt DecentPt
#
# Best is to arrange one's font-path to get the desired one first, but....
# If you know the name of a font family that you like and you can be sure
# it is installed and in the font-path somewhere....
#
DefaultFontSpec='-*-liberation mono-medium-r-*-*-*-${DecentDeciPt}-${FontXDPI}-${FontYDPI}-m-*-iso10646-1'
export DefaultFontSpec
#
# For Xft we have set the Xft.dpi resource so this allows the physical font size to
# be specified (e.g. with Xterm's "-fs" option) and for a decent scalable font
# to be chosen:
#
DefaultFTFontSpec="-*-*-medium-r-*-*-*-*-0-0-m-*-iso10646-1"
DefaultFTFontSpecL1="-*-*-medium-r-*-*-*-*-0-0-m-*-iso8859-1"
export DefaultFTFontSpec DefaultFTFontSpecL1
# Set a default font that should work for everything
#
eval echo "*font: ${DefaultFontSpec}" | xrdb -merge
Finally here's an example of starting an xterm (that's been compiled to use Xft) with the above settings (i.e. the Xft.dpi resource and the shell variables above) to show text at physical size of 10.0 Points on the screen:
xterm -fs 10 -fa $DefaultFTFontSpec
You could try to use xdpyinfo(1); on my system it outputs, among a lot of other things:
dimensions: 1280x1024 pixels (332x250 millimeters)
resolution: 98x104 dots per inch
depths (7): 24, 1, 4, 8, 15, 16, 32
I don't know whether it can help you because I don't know how you change the DPI of your screen, but chances are it works. Good luck!
--- UPDATE after comment ---
In a comment below, the OP says that "there is a setting to change the DPI"... still, I don't know which. Anyway, I tried Ctrl+Alt+Plus and Ctrl+Alt+Minus to change the resolution of the X server on the fly. After changing the resolution, and seeing everything bigger than before, I ran xdpyinfo again. IT DIDN'T WORK: still the same output. But maybe the method you use (which?) works instead...
I'm just getting my gamepad setup for SDL2, I've been creating a mapping for the SDL_GameControllerAddMapping function. Everything has worked except for the movement and look axes on my pad.
I set it up like this:
game_state->joystick = SDL_JoystickOpen( 0 );
Then listen to the events like this:
SDL_Joystick* js = game_state->joystick;
for ( int i = 0; i < SDL_JoystickNumAxes( js ); ++i )
{
    int axis = SDL_JoystickGetAxis( js, i );
    printf( "a%d: %d\n", i, axis );
}
Close like this:
SDL_JoystickClose( game_state->joystick );
At the moment game_state is just a struct with nothing but an SDL_Joystick* in it.
I get output similar to this:
a0: 0
a1: 0
a2: -32768
a3: 0
a4: 0
a5: -32768
If I pull the triggers all the way down I get:
a0: 0
a1: 0
a2: 32767
a3: 0
a4: 0
a5: 32767
This tells me that the triggers should be mapped to a2 and a5, but the movement and look axes don't change these values and they don't show up for any of the other joystick input types. I've already used SDL_JoystickGetHat, SDL_JoystickGetButton and SDL_JoystickGetBall to map the rest of the pad.
None of the other buttons change these values either. I'm not sure, but I'm probably missing something. I've looked through the SDL2 wiki, but I didn't find anything that helped. I've also googled around for mappings for my pad, but the only thing I could find was this. Unfortunately they are missing mine. Other Google results suggest using Steam's Big Picture Mode to configure my pad, as that will generate an SDL2 mapping, but my brain wouldn't engage and I didn't understand what I was supposed to do.
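For reference, a mapping entry is a single comma-separated string of GUID, name and bindings. This is a sketch only: the GUID below is a placeholder for whatever SDL_JoystickGetGUIDString reports for your pad, and the axis numbers mirror the trigger observations above:

/* Find the pad's GUID string, then register a mapping built around it. */
char guid[64];
SDL_JoystickGetGUIDString(SDL_JoystickGetGUID(js), guid, sizeof(guid));
printf("GUID: %s\n", guid);

/* Placeholder GUID; a2/a5 are the triggers as observed above. */
SDL_GameControllerAddMapping(
    "03000000xxxxxxxxxxxxxxxxxxxxxxxx,Rock Candy Wired Controller,"
    "a:b0,b:b1,x:b2,y:b3,leftx:a0,lefty:a1,rightx:a3,righty:a4,"
    "lefttrigger:a2,righttrigger:a5" );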
I'm using these drivers. Before installing them my pad wasn't recognised at all. After installing I get this problem.
Update:
I've had a play with jstest-gtk and it reports the same thing. It says six axes are present, but 0, 1, 3, 4 don't seem to be triggered by anything on the pad.
Edit:
Oh, I forgot to mention that I'm using Ubuntu 15.10.
Update 2:
I now have an official xbox one pad and it's working fine. The problem with the other pad still exists though. It's a Rock Candy Wired Controller for Xbox One.
I've also used both pads on Steam. The problem with the Rock Candy is present there too. For now I'm going to assume it's a problem with the drivers. The source is available, but I have no experience with the Linux kernel, so for the sake of my system I'm going to leave it alone.
I'm having a problem where Octave will render figures just fine in the figure box, but then refuses to properly export to PNG when I use the print() command. This is also true when I try other formats like EPS or JPG.
My current version of Octave is 3.8.1-1ubuntu1, which is up to date at the time of this post. My Ubuntu version is 14.04. I do not receive any error messages when the code runs.
The script commands used to plot are pretty basic. For example:
linewidth = 4;
xStr = 'Particle Diameter (\mum)';
yStr = 'Scattering Cross-Section (\mum^2)';
FontName = 'Times New Roman';
LabelFontSize = 22;
AxisFontSize = 18;
F1 = figure(1);
clf('reset');
plot(diameter*1e6,sigma_0*1e12,'k','linewidth',linewidth);
hold on
plot(diameter*1e6,sigma_1*1e12,'r','linewidth',linewidth);
X = xlabel(xStr);
set(X,'FontName',FontName,'fontsize',LabelFontSize);
Y = ylabel(yStr);
set(Y,'FontName',FontName,'fontsize',LabelFontSize);
axis([xMin xMax sigMin sigMax]);
set(gca,'fontsize',AxisFontSize,'linewidth',2);
legend('2.0 \mum','3.8 \mum',4);
print(F1,'Mie.png','-dpng');
The strange thing is that I have other images from months ago that rendered the LaTeX bits just fine, and they used nearly identical code. It almost seems as if some recent software upgrade has broken my plotting.
I appreciate any help you can give me. This issue is driving me nuts.
This is a known problem when using the OpenGL toolkit (graphics_toolkit FLTK), which is the default in Octave 3.8.x. Previous versions used gnuplot for printing.
So you have two choices:
Switch back to gnuplot with "graphics_toolkit gnuplot" before doing any plotting. You may also add this to your .octaverc so it's set every time you start Octave.
Use LaTeX output: http://wiki.octave.org/Printing_with_FLTK
I have searched online and wasn't able to find an answer to this, so I figured I could ask the experts here. Is there any way to get the current window resolution in OpenCV? I've tried cvGetWindowProperty, passing in the name of the window, but I can't find a flag to use.
Any help would be greatly appreciated.
You can get the width and height of the contents of the window by using shape[1] and shape[0] respectively.
When you use OpenCV, the image from the camera is stored as a NumPy array, with the shape being [rows, cols, bgr_channels], like [480, 640, 3].
Code, e.g.:
import cv2
cv2.namedWindow("myWindow")
cap = cv2.VideoCapture(0) #open camera
ret,frame = cap.read() #start streaming
windowWidth=frame.shape[1]
windowHeight=frame.shape[0]
print(windowWidth)
print(windowHeight)
cv2.waitKey(0) #wait for a key
cap.release() # Destroys the capture object
cv2.destroyAllWindows() # Destroys all the windows
console output:
640
480
You could also call getWindowImageRect() which gets a whole rectangle: x,y,w,h
e.g.
import cv2
cv2.namedWindow("myWindow")
cap = cv2.VideoCapture(0) #open camera
ret,frame = cap.read() #start streaming
windowWidth=cv2.getWindowImageRect("myWindow")[2]
windowHeight=cv2.getWindowImageRect("myWindow")[3]
print(windowWidth)
print(windowHeight)
cv2.waitKey(0) #wait for a key
cap.release() # Destroys the capture object
cv2.destroyAllWindows() # Destroys all the windows
which, very curiously, printed 800 500 (the actual widescreen format from the camera)
Hmm... it's not really a great answer (pretty hacky!), but you could always call cvGetWindowHandle. With that native window handle, I'm sure you could figure out some native calls to get the contained image size. Ugly, hackish, and not very portable, but that's the best I can suggest given my limited OpenCV exposure.
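For example, on Windows the handle can be fed straight to the Win32 API (a sketch, assuming the Win32 HighGUI backend, where the returned handle can be treated as an HWND; other backends return other handle types):

#include <windows.h>

// cvGetWindowHandle returns a backend-specific handle; on the Win32
// backend it is usable as an HWND (worth verifying on your build).
HWND hwnd = (HWND)cvGetWindowHandle("myWindow");
RECT rc;
GetClientRect(hwnd, &rc);
int width  = rc.right - rc.left;
int height = rc.bottom - rc.top;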