PowerShell script returning the wrong screen resolution - winforms

I just wrote a simple PowerShell script to get the screen resolution of my monitor, but it seems to be returning the wrong values.
Add-Type -AssemblyName System.Windows.Forms

# Returns the width and height of the primary screen's resolution
function Get-ScreenSize {
    $screen = [System.Windows.Forms.Screen]::PrimaryScreen
    $width = $screen.Bounds.Width
    $height = $screen.Bounds.Height
    return $width, $height
}
Get-ScreenSize
I am running this script on a 4k monitor with the resolution set at 3840 x 2160, but it is giving me the following output:
1536
864
Is there anything that would cause System.Windows.Forms.Screen to get the wrong "Bounds" values?

Well, I didn't exactly find out why I was getting such strange results, but I did find another approach that seems simpler and appears to be accurate.
$vc = Get-WmiObject -class "Win32_VideoController"
$vc.CurrentHorizontalResolution
$vc.CurrentVerticalResolution
This will print the current screen resolution and appears to be giving me accurate results, which is what I was actually looking for. If anyone figures out what could cause the other approach to produce inaccurate results, I would still really like to know why it is happening.
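As an aside, the same information should also be available through the newer CIM cmdlets (Get-WmiObject is not available in PowerShell 7+); a minimal sketch:
Get-CimInstance -ClassName Win32_VideoController |
    Select-Object CurrentHorizontalResolution, CurrentVerticalResolution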

It's because that command gives you the scaled resolution. If your display is set to 3840 x 2160 but you're not running at 100% scaling, you'll get a different value: 1536 x 864 is exactly 3840 x 2160 at 250% scaling.
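If you want the WinForms approach to report physical pixels anyway, one workaround (a sketch, not verified on every Windows version) is to mark the process as DPI-aware via the Win32 SetProcessDPIAware call before loading System.Windows.Forms:
# Declare this PowerShell process DPI-aware so WinForms reports physical pixels.
# Note: this must run before anything in the session has queried the DPI, and it
# cannot be undone for the lifetime of the process.
Add-Type -TypeDefinition @'
using System.Runtime.InteropServices;
public static class DpiHelper {
    [DllImport("user32.dll")]
    public static extern bool SetProcessDPIAware();
}
'@
[void] [DpiHelper]::SetProcessDPIAware()
Add-Type -AssemblyName System.Windows.Forms
[System.Windows.Forms.Screen]::PrimaryScreen.Bounds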

That's odd.
Why on earth has Microsoft only provided the Get-DisplayResolution cmdlet with Server Core?
That edition ships without a Start button... and given the comment above about the returned display size (minus the taskbar), I wouldn't be surprised to hear that cmdlet uses the same .NET library.
A quick search in my HKLM\SYSTEM\CurrentControlSet\Control lists a few keys for monitors and values per screen, but nothing useful.
Edit: see Q7967699.
PS D:\Scripts> Add-Type -AssemblyName System.Windows.Forms
PS D:\Scripts> [System.Windows.Forms.Screen]::AllScreens
BitsPerPixel : 32
Bounds : {X=0,Y=0,Width=3840,Height=2160}
DeviceName : \\.\DISPLAY1
Primary : True
WorkingArea : {X=0,Y=0,Width=3840,Height=2120}

Related

How can I get updated system DPI information from X11 in a C program?

I'm trying to create a DPI-aware app which responds to user-requested DPI change events by resizing the window.
The program in question is written in C and uses SDL2; however, to retrieve system DPI information I use Xlib directly, as SDL's DPI support on X11 is lacking.
I found two ways to get the correct DPI information on program startup, both involving getting Xft.dpi information from Xresource: one is to use XGetDefault(display, "Xft", "dpi"), while the other is to use XResourceManagerString, XrmGetStringDatabase and XrmGetResource. Both of them return the correct DPI value when the program is created.
The problem is, if the user changes the system scale while the program is running, both XGetDefault and XrmGetResource still return the old DPI value, even though when I run "xrdb -query | grep Xft.dpi" the value has indeed changed.
Does anyone know a way to get the updated Xft.dpi value?
I found out a way to do exactly what I wanted, even though it's rather hackish.
The solution (using Xlib) is to create a new, temporary connection to the X server using XOpenDisplay and XCloseDisplay, and poll the resource information from that new connection.
The reason this is needed is that Xlib fetches the resource information only once per connection and never updates it. By opening a new connection, you get the updated Xresource data, which can then be used alongside the old main connection.
Be mindful that constantly opening and closing new X connections may not be great for performance, so only do it when you absolutely need to. In my case, since the window has borders, I only check for DPI changes when the title height has changed, as a DPI change will change the size of your title border due to font size differences.
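For reference, a minimal sketch of that approach in C (assuming plain Xlib/Xresource, and that Xft.dpi is the resource you're after; the helper name is mine):
#include <stdlib.h>
#include <X11/Xlib.h>
#include <X11/Xresource.h>

/* Re-read Xft.dpi through a throwaway connection so we see the current value,
 * not the snapshot taken when the main connection was opened. */
static double query_xft_dpi(void)
{
    double dpi = 96.0;                        /* fallback if the resource is unset */
    Display *tmp = XOpenDisplay(NULL);        /* temporary connection */
    if (tmp == NULL)
        return dpi;

    XrmInitialize();                          /* initialise the resource manager */
    char *rms = XResourceManagerString(tmp);  /* freshly fetched RESOURCE_MANAGER property */
    if (rms != NULL) {
        XrmDatabase db = XrmGetStringDatabase(rms);
        if (db != NULL) {
            char *type = NULL;
            XrmValue value;
            if (XrmGetResource(db, "Xft.dpi", "Xft.Dpi", &type, &value) && value.addr != NULL)
                dpi = atof(value.addr);
            XrmDestroyDatabase(db);
        }
    }
    XCloseDisplay(tmp);                       /* discard the temporary connection */
    return dpi;
}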
First off, it must be noted that the value of the Xft.dpi resource isn't necessarily accurate: it depends on whether the system and/or user login scripts have set it correctly.
Also it is important to remember that the Xft.dpi resource is intended to be used by the Xft library, not by arbitrary programs looking for the screen resolution.
The Xft.dpi resource can be set as follows. This example effectively only deals with a display that has a single screen, and note that it uses xdpyinfo. It also shows how the value might not be exact but rounded, and it calculates both the horizontal and vertical resolution, even though Xft really only wants the horizontal one:
SCREENDPI=$(xdpyinfo | sed -n 's/^[ ]*resolution:[ ]*\([^ ][^ ]*\) .*$/\1/p;//q')
SCREENDPI_X=$(expr "$SCREENDPI" : '\([0-9]*\)x')
SCREENDPI_Y=$(expr "$SCREENDPI" : '[0-9]*x\([0-9]*\)')
# Default to the measured values; rounded to 100 DPI below when close enough.
FontXDPI=$SCREENDPI_X
FontYDPI=$SCREENDPI_Y
# N.B.: If the true screen resolution is within 10% of 100 DPI it makes the most
# sense to claim 100 DPI to avoid font-scaling artifacts for bitmap fonts.
if expr \( $SCREENDPI_X / 100 = 1 \) \& \( $SCREENDPI_X % 100 \<= 10 \) >/dev/null; then
    FontXDPI=100
fi
if expr \( $SCREENDPI_Y / 100 = 1 \) \& \( $SCREENDPI_Y % 100 \<= 10 \) >/dev/null; then
    FontYDPI=100
fi
echo "Xft.dpi: ${FontYDPI}" | xrdb -merge
I really wish I knew why Xft doesn't at least try to find out the screen's resolution itself instead of relying entirely on its "dpi" resource being set, but the current implementation only uses the resource setting, so something like the above is always necessary to set the resource properly (and one must also make sure the X server itself has been configured with the correct physical screen dimensions).
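As an aside, one way to check the physical screen dimensions the X server currently reports, and to override the reported size if it is wrong, is xrandr (the 597x336 below is only an example, roughly a 27-inch 16:9 panel):
# Show the reported physical size of each connected output (the "...mm x ...mm" part).
xrandr --query | grep ' connected'
# If the reported size of the screen is wrong it can be overridden, e.g.:
xrandr --fbmm 597x336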
From a C program you want to do just what xdpyinfo itself does and skip all the nonsense about Xft's resources. Here's the xdpyinfo code paraphrased:
#include <stdio.h>
#include <X11/Xlib.h>

int main(void)
{
    char *displayname = NULL;   /* NULL means use $DISPLAY */
    Display *dpy;
    int scr;

    dpy = XOpenDisplay(displayname);
    for (scr = 0; scr < ScreenCount(dpy); scr++) {
        int xres, yres;

        /*
         * there are 2.54 centimeters to an inch; so there are 25.4 millimeters.
         *
         *     dpi = N pixels / (M millimeters / (25.4 millimeters / 1 inch))
         *         = N pixels / (M inch / 25.4)
         *         = N * 25.4 pixels / M inch
         */
        xres = ((((double) DisplayWidth(dpy, scr)) * 25.4) /
                ((double) DisplayWidthMM(dpy, scr))) + 0.5;
        yres = ((((double) DisplayHeight(dpy, scr)) * 25.4) /
                ((double) DisplayHeightMM(dpy, scr))) + 0.5;

        printf("screen %d: %dx%d dots per inch\n", scr, xres, yres);
    }
    XCloseDisplay(dpy);

    return 0;
}
Note also that if you are, for some reason, scaling your whole display (e.g. with xrandr), then you presumably want the fonts to scale along with everything else. Using whole-screen scaling just to scale the fonts is a horrible hack, especially when for most things it's simpler to tell the application to use properly scaled fonts that display at a constant on-screen point size (which is exactly what Xft uses the "dpi" resource for). My guess is that Ubuntu does something odd to change the apparent screen resolution, e.g. using xrandr to scale up the apparent size of icons and other on-screen widgets without applications having to know about screen size and resolution, and then has to lie to Xft by rewriting the Xft.dpi resource.
Note that if you avoid whole-screen scaling, applications that don't use Xft can still get proper font scaling by requesting a properly scaled font; even bitmap fonts can be scaled to the correct physical on-screen size by putting the screen's actual resolution in the font spec. E.g., continuing from the above shell fragment:
# For pre-Xft applications we can specify physical font text sizes IFF we also tell
# it the screen's actual resolution when requesting a font. Note the use of the
# rounded values here.
#
DecentDeciPt="80"
DecentPt="8"
export DecentDeciPt DecentPt
#
# Best is to arrange one's font-path to get the desired one first, but....
# If you know the name of a font family that you like and you can be sure
# it is installed and in the font-path somewhere....
#
DefaultFontSpec='-*-liberation mono-medium-r-*-*-*-${DecentDeciPt}-${FontXDPI}-${FontYDPI}-m-*-iso10646-1'
export DefaultFontSpec
#
# For Xft we have set the Xft.dpi resource so this allows the physical font size to
# be specified (e.g. with Xterm's "-fs" option) and for a decent scalable font
# to be chosen:
#
DefaultFTFontSpec="-*-*-medium-r-*-*-*-*-0-0-m-*-iso10646-1"
DefaultFTFontSpecL1="-*-*-medium-r-*-*-*-*-0-0-m-*-iso8859-1"
export DefaultFTFontSpec DefaultFTFontSpecL1
# Set a default font that should work for everything
#
eval echo "*font: ${DefaultFontSpec}" | xrdb -merge
Finally, here's an example of starting an xterm (one that's been compiled to use Xft) with the above settings (i.e. the Xft.dpi resource and the shell variables above) to show text at a physical size of 10.0 points on the screen:
xterm -fs 10 -fa $DefaultFTFontSpec
You could try to use xdpyinfo(1); on my system it outputs, among a lot of other things:
dimensions: 1280x1024 pixels (332x250 millimeters)
resolution: 98x104 dots per inch
depths (7): 24, 1, 4, 8, 15, 16, 32
I don't know whether it can help you, because I don't know how you change the DPI of your screen, but chances are it works. Good luck!
--- UPDATE after comment ---
In a comment below, the OP says that "there is a setting to change the DPI"... I still don't know which. Anyway, I tried Ctrl+Alt+Plus and Ctrl+Alt+Minus to change the resolution of the X server on the fly. After changing the resolution, and seeing everything bigger than before, I ran xdpyinfo again. It didn't work: still the same output. But maybe the method you use (which one?) works instead...

How to write the content of a Flink var to screen in Zeppelin?

I am trying to run the following simple commands in Apache Zeppelin.
%flink
var rabbit = env.fromElements(
  "ARTHUR: What, behind the rabbit?",
  "TIM: It is the rabbit!",
  "ARTHUR: You silly sod! You got us all worked up!",
  "TIM: Well, that's no ordinary rabbit. That's the most foul, cruel, and bad-tempered rodent you ever set eyes on.",
  "ROBIN: You tit! I soiled my armor I was so scared!",
  "TIM: Look, that rabbit's got a vicious streak a mile wide, it's a killer!")
var counts = rabbit.flatMap { _.toLowerCase.split("\\W+") }.map { (_, 1) }.groupBy(0).sum(1)
counts.print()
I am trying to print out the results in the notebook, but unfortunately I only get the following output.
rabbit: org.apache.flink.api.scala.DataSet[String] = org.apache.flink.api.scala.DataSet@37fdb65c
counts: org.apache.flink.api.scala.AggregateDataSet[(String, Int)] = org.apache.flink.api.scala.AggregateDataSet@1efc7158
res103: org.apache.flink.api.java.operators.DataSink[(String, Int)] = DataSink '<unnamed>' (Print to System.out)
How can I spill the content of counts to the notebook in Zeppelin?
The way to print the result of such computation in Zeppelin is:
%flink
counts.collect().foreach(println(_))
//or one might prefer
//counts.collect foreach println
Output:
(a,3)
(all,1)
(and,1)
(armor,1)
...
The reason for the observed behaviour lies in the interplay between Apache Zeppelin and Apache Flink. Zeppelin captures only what is written to Scala's Console. Flink, however, prints its output directly to System.out, and that is exactly what happens when you call counts.print(). The reason bzz's solution works is that it prints the result using Console.
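In the meantime, one possible workaround that follows from this explanation (an untested sketch; behaviour may vary between Zeppelin versions) is to point System.out at the Console stream that Zeppelin captures:
%flink
// Route System.out to Scala's Console stream (which Zeppelin captures),
// so that the output of counts.print() becomes visible in the notebook.
System.setOut(Console.out)
counts.print()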
I opened a JIRA issue [1] and opened a pull request [2] to correct this behaviour so that you can also use counts.print().
[1] https://issues.apache.org/jira/browse/ZEPPELIN-287
[2] https://github.com/apache/incubator-zeppelin/pull/288

Octave won't export LaTeX symbols

I'm having a problem where Octave renders figures just fine in the figure window, but the LaTeX-style symbols are not exported properly when I use the print() command to produce a PNG. The same thing happens with other formats like EPS and JPG.
My current version of Octave is 3.8.1-1ubuntu1, which is up to date at the time of this post. My Ubuntu version is also 14.04. I do not receive any error messages when the code runs.
The script commands used to plot are pretty basic. For example:
linewidth = 4;
xStr = 'Particle Diameter (\mum)';
yStr = 'Scattering Cross-Section (\mum^2)';
FontName = 'Times New Roman';
LabelFontSize = 22;
AxisFontSize = 18;
F1 = figure(1);
clf('reset');
plot(diameter*1e6,sigma_0*1e12,'k','linewidth',linewidth);
hold on
plot(diameter*1e6,sigma_1*1e12,'r','linewidth',linewidth);
X = xlabel(xStr);
set(X,'FontName',FontName,'fontsize',LabelFontSize);
Y = ylabel(yStr);
set(Y,'FontName',FontName,'fontsize',LabelFontSize);
axis([xMin xMax sigMin sigMax]);
set(gca,'fontsize',AxisFontSize,'linewidth',2);
legend('2.0 \mum','3.8 \mum',4);
print(F1,'Mie.png','-dpng');
The strange thing is that I have other images from months ago that rendered the LaTeX bits just fine, using nearly identical code. It almost seems like some recent software upgrade broke my plotting.
I appreciate any help you can give me. This issue is driving me nuts.
This is a known problem when using the OpenGL-based toolkit (graphics_toolkit FLTK), which is the default in Octave 3.8.x. Previous versions used gnuplot for printing.
So you have two choices:
1. Switch back to gnuplot with "graphics_toolkit gnuplot" before doing any plotting; you can also add this to your .octaverc so it is set every time you start Octave. A sketch follows below.
2. Use LaTeX output: http://wiki.octave.org/Printing_with_FLTK
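For example, a minimal sketch of option 1 (the data here is made up just to have something to plot):
% Select gnuplot before plotting so print() renders \mu and other symbols.
graphics_toolkit gnuplot

x = linspace(1, 10, 100);              % dummy data, a stand-in for diameter/sigma
figure(1);
plot(x, x.^2, 'k', 'linewidth', 2);
xlabel('Particle Diameter (\mum)');
ylabel('Scattering Cross-Section (\mum^2)');
print('Mie.png', '-dpng');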

Using PyMEL to set the "Alpha to Use" attribute in an object of class psdFileTex

I am using Maya to do some procedural work, and I have a lot of textures that I need to load into Maya, and they all have transparencies (alpha channels). I would very much like to be able to automate this process. Using PyMEL, I can create my textures and hook them up to a shader, but the alpha doesn't set properly by default. There is an attribute in the psdFileTex node called "Alpha to Use", and it must be set to "Transparency" in order for my alpha channel to work. My question is this - how do I use PyMEL scripting to set the "Alpha to Use" attribute properly?
Here is the code I am using to set up my textures:
import pymel.core as pm
pm.shadingNode('lambert', asShader=True, name='myShader1')
pm.sets(renderable=True, noSurfaceShader=True, empty=True, name='myShader1SG')
pm.connectAttr('myShader1.outColor', 'myShader1SG.surfaceShader', f=True)
pm.shadingNode('psdFileTex', asTexture=True, name='myShader1PSD')
pm.connectAttr('myShader1PSD.outColor', 'myShader1.color')
pm.connectAttr('myShader1PSD.outTransparency', 'myShader1.transparency')
pm.setAttr('myShader1PSD.fileTextureName', '<pathway>/myShader1_texture.psd', type='string')
If anyone can help me, I would really appreciate it.
Thanks
With any node, you can use listAttr() to get the available editable attributes. Run listAttr('myShader1PSD') and note that its output contains two attributes called 'alpha' and 'alphaList'. alpha returns the currently selected alpha channel; alphaList returns all the alpha channels present in your PSD.
Example
pm.PyNode('myShader1PSD').alphaList.get()
# Result: [u'Alpha 1', u'Alpha 2'] #
If you know you'll only ever be using just the one alpha, or the first alpha channel, you can simply do this.
psdShader = pm.PyNode('myShader1PSD')
alphaList = psdShader.alphaList.get()
if alphaList:
    psdShader.alpha.set(alphaList[0])
else:
    # no alpha channels found in the PSD
    pass
Remember that Python lists are indexed from 0, so the first alpha channel is at position 0.
Additionally, and unrelated: while you're still using maya.cmds-style commands wrapped by PyMEL, there are some more PyMEL-native idioms you can use to make your code read a bit nicer.
pm.setAttr('myShader1PSD.fileTextureName', '<pathway>/myShader1_texture.psd', type='string')
We can convert this to pymel like so:
pm.PyNode('myShader1PSD').fileTextureName.set('<pathway>/myShader1_texture.psd')
And:
pm.connectAttr('myShader1PSD.outColor', 'myShader1.color')
Can be converted to:
pm.PyNode('myShader1PSD').outColor.connect(pm.PyNode('myShader1').color)
or, using PyMEL's connection operator:
pm.PyNode('myShader1PSD').outColor >> pm.PyNode('myShader1').color
While these may only be small changes, the code reads that little bit nicer, and it's native PyMEL.
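Putting it together, a rough end-to-end sketch combining the above (node names follow the question; untested, so treat it as a starting point rather than a drop-in script):
import pymel.core as pm

# Build the shading network using PyNode handles (names as in the question).
shader = pm.shadingNode('lambert', asShader=True, name='myShader1')
sg = pm.sets(renderable=True, noSurfaceShader=True, empty=True, name='myShader1SG')
psd = pm.shadingNode('psdFileTex', asTexture=True, name='myShader1PSD')

shader.outColor >> sg.surfaceShader          # '>>' is PyMEL's connection operator
psd.outColor >> shader.color
psd.outTransparency >> shader.transparency
psd.fileTextureName.set('<pathway>/myShader1_texture.psd')

# Set "Alpha to Use" to the first alpha channel found in the PSD, if any.
alphas = psd.alphaList.get()
if alphas:
    psd.alpha.set(alphas[0])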
Anyway, I hope I have helped you!

Get the size of a window in OpenCV

I have searched online and wasn't able to find an answer to this, so I figured I could ask the experts here. Is there any way to get the current window resolution in OpenCV? I've tried cvGetWindowProperty, passing in the name of the window, but I can't find a suitable flag to use.
Any help would be greatly appreciated.
You can get the width and height of the contents of the window by using shape[1] and shape[0] respectively.
When you use OpenCV, the image from the camera is stored as a NumPy array with the shape [rows, cols, bgr_channels], e.g. [480, 640, 3].
code e.g.
import cv2
cv2.namedWindow("myWindow")
cap = cv2.VideoCapture(0) #open camera
ret,frame = cap.read() #start streaming
windowWidth=frame.shape[1]
windowHeight=frame.shape[0]
print(windowWidth)
print(windowHeight)
cv2.waitKey(0) #wait for a key
cap.release() # Destroys the capture object
cv2.destroyAllWindows() # Destroys all the windows
console output:
640
480
You could also call getWindowImageRect() which gets a whole rectangle: x,y,w,h
e.g.
import cv2
cv2.namedWindow("myWindow")
cap = cv2.VideoCapture(0) #open camera
ret,frame = cap.read() #start streaming
windowWidth=cv2.getWindowImageRect("myWindow")[2]
windowHeight=cv2.getWindowImageRect("myWindow")[3]
print(windowWidth)
print(windowHeight)
cv2.waitKey(0) #wait for a key
cap.release() # Destroys the capture object
cv2.destroyAllWindows() # Destroys all the windows
which, very curiously, printed 800 and 500 (not the 640 x 480 frame size reported above).
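A possible explanation for that 800 x 500 result (an assumption, not verified here): the window had not yet been shown an image when it was queried, so it still had its default size. Showing the frame first should make the reported rectangle track the frame:
import cv2

cv2.namedWindow("myWindow")
cap = cv2.VideoCapture(0)            # open camera
ret, frame = cap.read()              # grab one frame

cv2.imshow("myWindow", frame)        # draw the frame so the window adopts its size
cv2.waitKey(1)                       # let the GUI process the resize

x, y, w, h = cv2.getWindowImageRect("myWindow")
print(w, h)                          # now expected to match frame.shape[1], frame.shape[0]

cap.release()                        # release the capture object
cv2.destroyAllWindows()              # close all windows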
Hmm... it's not really a great answer (pretty hacky!), but you could always call cvGetWindowHandle. With that native window handle, I'm sure you could figure out some native calls to get the contained image size. Ugly, hackish, and not very portable, but that's the best I can suggest given my limited OpenCV exposure.
