WinForms: Making a set of controls scale vertically

I have a Windows Form that displays several DataGridViews in the following layout:
(No access to image hosting at work, so please pardon the ASCII art...)
┌─────────────────────────────────────────┐
│┌───────────┐┌──────────────────────────┐│
││           ││                          ││
│└───────────┘│                          ││
│┌───────────┐│                          ││
││           ││                          ││
│└───────────┘│                          ││
│┌───────────┐│                          ││
││           ││                          ││
│└───────────┘└──────────────────────────┘│
└─────────────────────────────────────────┘
Unfortunately, when the user resizes the form to be taller, the form ends up looking like this:
┌─────────────────────────────────────────┐
│┌───────────┐┌──────────────────────────┐│
││           ││                          ││
│└───────────┘│                          ││
│             │                          ││
│             │                          ││
│┌───────────┐│                          ││
││           ││                          ││
│└───────────┘│                          ││
│             │                          ││
│             │                          ││
│┌───────────┐│                          ││
││           ││                          ││
│└───────────┘└──────────────────────────┘│
└─────────────────────────────────────────┘
Instead of this:
┌─────────────────────────────────────────┐
│┌───────────┐┌──────────────────────────┐│
││           ││                          ││
││           ││                          ││
│└───────────┘│                          ││
│┌───────────┐│                          ││
││           ││                          ││
││           ││                          ││
│└───────────┘│                          ││
│┌───────────┐│                          ││
││           ││                          ││
││           ││                          ││
│└───────────┘└──────────────────────────┘│
└─────────────────────────────────────────┘
To reproduce this, anchor the top-left DataGridView to Top and Left, the center-left DataGridView to Left only, the bottom-left DataGridView to Bottom and Left, and the big DataGridView to all four sides.
What can I do to get the behavior I want?

Put a TableLayoutPanel in the left-hand column with 3 rows and 1 column, dock each of the smaller controls into its row with the docking style "Fill", then anchor the TableLayoutPanel Left, Top, and Bottom.

Try using some SplitContainer controls in combination with Panel or GroupBox containers. Then your user will also have the ability to resize the areas as needed.

Mimic `let' statement in a macro

In the following directory structure:
.
├── a
│   ├── 1
│   ├── 2
│   ├── 3
│   └── 4
├── b
│   ├── 5
│   ├── 6
│   ├── 7
│   └── 8
├── c
├── d
└── test.hy
The following code prints the wrong path:
(eval-and-compile (import os) (import pathlib [Path]))
(defmacro with-cwd [dir #* body]
  `(let [ cwd (.cwd Path) ]
     (try (.chdir os ~dir)
          ~#body
          (finally (.chdir os cwd)))))
(defmacro let-cwd [dir vars #* body]
  `(let ~vars (with-cwd ~dir ~#body)))
(setv a (/ (. (Path __file__) parent) "a"))
(let-cwd a [ b (/ a.parent "b") ]
  (print f"Hello from {(.cwd Path)}!\n")
  (print (.resolve b)))
It is supposed to print the following:
Hello from /home/shadowrylander/with-cwd-test/a!
/home/shadowrylander/with-cwd-test/b
While it is instead printing:
Hello from /home/shadowrylander/with-cwd-test/a!
/home/shadowrylander/with-cwd-test/a/b
Why is b not properly assigned when doing (quasiquote (let ~vars (with-cwd ~dir ~#body)))?
This looks like a bug (https://github.com/hylang/hy/issues/2318), although it has to do with __file__ and is unrelated to let or macros. Remember, when you run into trouble, simplify your problematic code as much as you can, so you can figure out what's going on.

Latency after moving a micro-service (ZeroMQ, C, and Python processes) from 64-bit hardware to 32-bit hardware, but nominal CPU usage

I have two processes written in C that set up PUSH/PULL ZeroMQ sockets, and two threads in a Python process that mirror the PUSH/PULL sockets. There are roughly 80 - 300 lightweight (<30 bytes) messages per second being sent from the C process to the Python process, and 10-30 similar messages from the Python process to the C process.
I was running these services on 64-bit ARMv8 (Ubuntu based) and AMD64 (Ubuntu 18.04) with no noticeable latency. I tried running the exact same services on a 32-bit Linux-based system and was shocked to see messages coming through over 30 seconds behind, even after killing the C services. When checking the CPU usage, it was a pretty flat 30-40% and didn't appear to be the bottleneck.
My ZeroMQ socket settings didn't change between systems: I set LINGER to 0, I tried RCVTIMEO between 0 and 100 ms, and I tried varying BACKLOG between 0 and 50, with no difference either way. I tried using multiple I/O threads and setting socket thread affinity, also to no avail. For the PUSH sockets I'm connecting on tcp://localhost:##### and binding the PULL sockets to tcp://*:#####. I also tried ipc:///tmp/...; messages were being sent and received, but the latency still existed on the 32-bit system.
I investigated the other Python steps in between receiving the messages, and they don't appear to take more than a millisecond at most. When I time socket.recv(0), it's as high as 0.02 seconds even when RCVTIMEO is set to 0 for that socket.
Any suggestions as to why I would see this behaviour on the new 32-bit platform and not on the other platforms? Am I possibly looking in all the wrong places?
Here's a bit of code to help explain:
The connection and the _recv() class-method are roughly depicted below:
def _connect(self):
    self.context = zmq.Context(4)
    self.sink = self.context.socket(zmq.PULL)
    self.sink.setsockopt(zmq.LINGER, 0)
    self.sink.setsockopt(zmq.RCVTIMEO, 100)
    self.sink.setsockopt(zmq.BACKLOG, 0)
    self.sink.bind("tcp://*:55755")

def _recv(self):
    while True:
        msg = None
        try:
            msg = self.sink.recv(0)  # Use blocking or zmq.NOBLOCK, still appears to be slow
        except zmq.ZMQError:
            ...  # meaningful exception handling here
        # This last step, when timed, usually takes less than a millisecond to process
        if msg:
            msg_dict = utils.bytestream_to_dict(msg)  # unpacking step (negligible)
            if msg_dict:
                self.parser.parse(msg_dict)  # parser is a dict of callbacks, also negligible
On the C process side
zmq_init (4);
void *context = zmq_ctx_new ();
/* Connect the Sender */
void *vent = zmq_socket (context, ZMQ_PUSH);
int timeo = 0;
int timeo_ret = zmq_setsockopt(vent, ZMQ_SNDTIMEO, &timeo, sizeof(timeo));
if (timeo_ret != 0)
    error("Failed to set ZMQ send timeout because %s", zmq_strerror(errno));
int linger = 100;
int linger_ret = zmq_setsockopt(vent, ZMQ_LINGER, &linger, sizeof(linger));
if (linger_ret != 0)
    error("Failed to set ZMQ linger because %s", zmq_strerror(errno));
if (zmq_connect (vent, vent_port) == 0)
    info("Successfully initialized ZeroMQ ventilator on %s", vent_port);
else {
    error("Failed to initialize %s ZeroMQ ventilator with error %s", sink_port,
          zmq_strerror(errno));
    ret = 1;
}
...
/* When a message needs to be sent, it instantly hits this, where msg is a char* */
ret = zmq_send(vent, msg, msg_len, ZMQ_NOBLOCK);
On Docker running on the target 32-bit system:
lstopo - -v --no-io
Machine (P#0 local=1019216KB total=1019216KB HardwareName="Freescale i.MX6 Quad/DualLite (Device Tree)" HardwareRevision=0000 HardwareSerial=0000000000000000 Backend=Linux LinuxCgroup=/docker/d2b0a3b3a5eedb7e10fc89fdee6e8493716a359597ac61350801cc302d79b8c0 OSName=Linux OSRelease=3.10.54-dey+g441c8d4 OSVersion="#1 SMP PREEMPT RT Tue Jan 28 12:11:37 CST 2020" HostName=db1docker Architecture=armv7l hwlocVersion=1.11.12 ProcessName=lstopo)
  Package L#0 (P#0 CPUModel="ARMv7 Processor rev 10 (v7l)" CPUImplementer=0x41 CPUArchitecture=7 CPUVariant=0x2 CPUPart=0xc09 CPURevision=10)
    Core L#0 (P#0)
      PU L#0 (P#0)
    Core L#1 (P#1)
      PU L#1 (P#1)
    Core L#2 (P#2)
      PU L#2 (P#2)
    Core L#3 (P#3)
      PU L#3 (P#3)
depth 0: 1 Machine (type #1)
 depth 1: 1 Package (type #3)
  depth 2: 4 Core (type #5)
   depth 3: 4 PU (type #6)
EDIT:
We were able to make the latency disappear on our target machine by disabling nearly all other worker threads.
Q : roughly 80 - 300 lightweight (<30 bytes) messages per second being sent from the C process to the Python process, and 10-30 similar messages from the Python process to the C process.
a ) there is zero information about sending any messages from Python to C ( nothing about this is contained in the posted source code; only the C side PUSH-es to Python )
b ) 300 messages/s of < 30 B payloads are nothing in terms of ZeroMQ capabilities
c ) Python is, and will almost surely remain, pure-[SERIAL] in this sense, no matter how many Thread instances are spawned: every execution has to wait until it acquires the GIL-lock, blocking all other work, so the threads still run one step after another, only with the additional cost of the GIL-lock handshaking added
d ) given all processes run on the same hardware platform ( see the tcp://localhost... specified ), there is no reason to spawn as many as ( 4 + 4 ) I/O-threads, which Python can in any case "harness" for reading with just one single thread at a time ( slo-mo ), given that no more than 4 CPU-cores were reported above by the lstopo excerpt ( a sketch of trimming the I/O-thread count follows below ):
Machine (995MB)
  +Package L#0
    Core L#0 +PU L#0 (P#0)
    Core L#1 +PU L#1 (P#1)
    Core L#2 +PU L#2 (P#2)
    Core L#3 +PU L#3 (P#3)
e ) ISO-OSI-L2/L3 parameters make sense to tweak, but only after the larger sources of latency have been shaved off.
f ) last but not least, run the Python pystone test ( on both the original platform and the target hardware platform ) to see the actual relative performance of the i.MX6-CPU-powered Python, so as to compare apples to apples.
Running pystone on the target machine results in: This machine benchmarks at 10188.5 pystones/second, while my host machine benchmarks at 274264 pystones/second.
So the problem with the deployment onto the i.MX6 target is not just its 32-bit O/S per se, but also the roughly 27x slower CPU; over-subscribing I/O-threads ( 4 + 4 threads on only 4 CPU-cores ) does not improve the flow of messages either.
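As a sketch of point d (illustrative only, not the poster's code): the C side can create its context with a single I/O thread instead of the four requested by zmq_init(4), and the setting can be inspected with zmq_ctx_get:
/* Sketch: one I/O thread is the libzmq default and is plenty for
 * ~300 msg/s of <30 B payloads; over-subscribing threads on a
 * 4-core i.MX6 buys nothing. */
#include <assert.h>
#include <stdio.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();                    /* replaces the deprecated zmq_init(4) */
    assert(ctx != NULL);
    int rc = zmq_ctx_set(ctx, ZMQ_IO_THREADS, 1); /* explicit, although 1 is the default */
    assert(rc == 0);
    printf("I/O threads: %d\n", zmq_ctx_get(ctx, ZMQ_IO_THREADS));
    void *vent = zmq_socket(ctx, ZMQ_PUSH);
    assert(vent != NULL);
    /* ... connect / send as in the posted code ... */
    zmq_close(vent);
    zmq_ctx_term(ctx);
    return 0;
}
The same trim applies on the Python side: construct the context as zmq.Context(1) ( or simply zmq.Context() ) instead of zmq.Context(4).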
A better view, served by lstopo-no-graphics -.ascii
┌───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐
│ Machine (31876MB) │
│ │
│ ┌────────────────────────────────────────────────────────────┐ ┌───────────────────────────┐ │
│ │ Package P#0 │ ├┤╶─┬─────┼┤╶───────┤ PCI 10ae:1F44 │ │
│ │ │ │ │ │ │
│ │ ┌────────────────────────────────────────────────────────┐ │ │ │ ┌────────────┐ ┌───────┐ │ │
│ │ │ L3 (8192KB) │ │ │ │ │ renderD128 │ │ card0 │ │ │
│ │ └────────────────────────────────────────────────────────┘ │ │ │ └────────────┘ └───────┘ │ │
│ │ │ │ │ │ │
│ │ ┌──────────────────────────┐ ┌──────────────────────────┐ │ │ │ ┌────────────┐ │ │
│ │ │ L2 (2048KB) │ │ L2 (2048KB) │ │ │ │ │ controlD64 │ │ │
│ │ └──────────────────────────┘ └──────────────────────────┘ │ │ │ └────────────┘ │ │
│ │ │ │ └───────────────────────────┘ │
│ │ ┌──────────────────────────┐ ┌──────────────────────────┐ │ │ │
│ │ │ L1i (64KB) │ │ L1i (64KB) │ │ │ ┌───────────────┐ │
│ │ └──────────────────────────┘ └──────────────────────────┘ │ ├─────┼┤╶───────┤ PCI 10bc:8268 │ │
│ │ │ │ │ │ │
│ │ ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ │ │ │ ┌────────┐ │ │
│ │ │ L1d (16KB) ││ L1d (16KB) │ │ L1d (16KB) ││ L1d (16KB) │ │ │ │ │ enp2s0 │ │ │
│ │ └────────────┘└────────────┘ └────────────┘└────────────┘ │ │ │ └────────┘ │ │
│ │ │ │ └───────────────┘ │
│ │ ┌────────────┐┌────────────┐ ┌────────────┐┌────────────┐ │ │ │
│ │ │ Core P#0 ││ Core P#1 │ │ Core P#2 ││ Core P#3 │ │ │ ┌──────────────────┐ │
│ │ │ ││ │ │ ││ │ │ ├─────┤ PCI 1002:4790 │ │
│ │ │ ┌────────┐ ││ ┌────────┐ │ │ ┌────────┐ ││ ┌────────┐ │ │ │ │ │ │
│ │ │ │ PU P#0 │ ││ │ PU P#1 │ │ │ │ PU P#2 │ ││ │ PU P#3 │ │ │ │ │ ┌─────┐ ┌─────┐ │ │
│ │ │ └────────┘ ││ └────────┘ │ │ └────────┘ ││ └────────┘ │ │ │ │ │ sr0 │ │ sda │ │ │
│ │ └────────────┘└────────────┘ └────────────┘└────────────┘ │ │ │ └─────┘ └─────┘ │ │
│ └────────────────────────────────────────────────────────────┘ │ └──────────────────┘ │
│ │ │
│ │ ┌───────────────┐ │
│ └─────┤ PCI 1002:479c │ │
│ └───────────────┘ │
└───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘

how to get a correctly sized window in ncurses

I am trying out ncurses programming in C on Linux (Mint) and am having a strange problem. I keep getting windows with the wrong number of columns for the first and final lines. For example, with this code found on Stack Overflow:
#include <ncurses.h>

int main(){
    initscr();
    WINDOW * win = newwin(10, 50, 10, 10);
    box(win, 0, 0);
    wrefresh(win);
    wgetch(win);
    endwin();
    return 0;
}
I get this output:
┌─┐
│                                                │
│                                                │
│                                                │
│                                                │
│                                                │
│                                                │
│                                                │
│                                                │
└─┘
As if the first and final lines are only three columns wide. If I add text to the window, using waddch, I can only add three characters to the top line as well.
Any help would be appreciated; I can't find examples of other people running into this issue on the web, but it's not the easiest thing to come up with a good search string for.
Looks like you're using one of those xterm look-alikes, and running into their omission of repeat-character, noted a little over a year ago in the ncurses FAQ.
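If you want to confirm that diagnosis, one quick check (a sketch, assuming ncurses' low-level terminfo interface is available and you link with -lncurses) is whether the terminfo entry for your $TERM advertises the repeat_char ("rep") capability that such emulators fail to implement:
/* Sketch: report whether the active terminfo entry advertises repeat_char
 * ("rep"); if it does, ncurses may emit the sequence, and an emulator that
 * ignores it shows truncated box lines like the output above. */
#include <stdio.h>
#include <curses.h>
#include <term.h>

int main(void)
{
    int err = 0;
    if (setupterm(NULL, 1, &err) != OK) {   /* NULL means: use $TERM */
        fprintf(stderr, "setupterm failed (err=%d)\n", err);
        return 1;
    }
    char *rep = tigetstr("rep");
    if (rep == (char *)-1)
        puts("\"rep\" is not a valid string capability name");
    else if (rep == NULL)
        puts("this terminal description does NOT advertise repeat_char");
    else
        puts("this terminal description advertises repeat_char");
    return 0;
}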

Draw table in C — like table on manual page of Linux

I want to make nice tables, like the ones you see on some Linux manual pages, in the C programming language. Is there any library or set of functions to create a table like them?
For example, the following table, produced by running man syslog:
┌──────────────────────┬───────────────┬────────────────────┐
│Interface │ Attribute │ Value │
├──────────────────────┼───────────────┼────────────────────┤
│openlog(), closelog() │ Thread safety │ MT-Safe │
├──────────────────────┼───────────────┼────────────────────┤
│syslog(), vsyslog() │ Thread safety │ MT-Safe env locale │
└──────────────────────┴───────────────┴────────────────────┘
This was probably done with "tbl". See man tbl. Also see the L. L. Cherry
and M. E. Lesk document "Tbl — A Program to Format Tables" which can be found via Google.
An example
This file:
$ cat f.tbl
.TS
allbox;
c s s
c c c
n n n.
AT&T Common Stock
Year Price Dividend
1984 15-20 $1.20
5 19-25 1.20
6 21-28 1.20
7 20-36 1.20
8 24-30 1.20
9 29-37 .30*
.TE
* (first quarter only)
Produced this (with tbl f.tbl > f.troff; nroff f.troff):
┌────────────────────────┐
│ AT&T Common Stock │
├─────┬───────┬──────────┤
│Year │ Price │ Dividend │
├─────┼───────┼──────────┤
│1984 │ 15‐20 │ $1.20 │
├─────┼───────┼──────────┤
│ 5 │ 19‐25 │ 1.20 │
├─────┼───────┼──────────┤
│ 6 │ 21‐28 │ 1.20 │
├─────┼───────┼──────────┤
│ 7 │ 20‐36 │ 1.20 │
├─────┼───────┼──────────┤
│ 8 │ 24‐30 │ 1.20 │
├─────┼───────┼──────────┤
│ 9 │ 29‐37 │ .30* │
└─────┴───────┴──────────┘
* (first quarter only)
You can take a look at the ncurses library here: http://tldp.org/HOWTO/NCURSES-Programming-HOWTO/
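If all you need is the box-drawing output itself, a table like the one above can also be printed straight from C with printf and UTF-8 box-drawing characters, no library at all. A minimal sketch (column width and row contents are just placeholders; a UTF-8 capable terminal is assumed):
/* Sketch: print a two-column table with UTF-8 box-drawing characters. */
#include <stdio.h>

#define WIDTH 23   /* column width in (ASCII) characters */

static void hline(const char *left, const char *mid, const char *right)
{
    printf("%s", left);
    for (int c = 0; c < 2; c++) {
        for (int i = 0; i < WIDTH; i++)
            printf("─");
        printf("%s", c == 0 ? mid : right);
    }
    putchar('\n');
}

int main(void)
{
    const char *rows[][2] = {
        { "Interface",             "Attribute"     },
        { "openlog(), closelog()", "Thread safety" },
        { "syslog(), vsyslog()",   "Thread safety" },
    };
    const int nrows = (int)(sizeof rows / sizeof rows[0]);

    hline("┌", "┬", "┐");
    for (int r = 0; r < nrows; r++) {
        printf("│%-*s│%-*s│\n", WIDTH, rows[r][0], WIDTH, rows[r][1]);
        hline(r == nrows - 1 ? "└" : "├",
              r == nrows - 1 ? "┴" : "┼",
              r == nrows - 1 ? "┘" : "┤");
    }
    return 0;
}
tbl/nroff or ncurses remain the better choice once you need automatic column sizing, alignment, or wrapping.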

How can I determine what stdout "points" to in C?

I want to be able to tell when my program's stdout is redirected to a file/device, and when it is left to print normally on the screen. How can this be done in C?
Update 1: From the comments, it seems to be system-dependent. If so, then how can this be done on POSIX-compliant systems?
Perhaps isatty(stdout)?
Edit: As Roland and tripleee suggest, a better answer would be isatty(STDOUT_FILENO).
Look up isatty and more generally fileno.
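Putting those two suggestions together, a minimal POSIX sketch (the message goes to stderr so it stays visible even when stdout is redirected):
/* Sketch: report whether stdout is a terminal or has been redirected. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* isatty(STDOUT_FILENO) is equivalent to isatty(fileno(stdout)) here */
    if (isatty(STDOUT_FILENO))
        fprintf(stderr, "stdout is a terminal\n");
    else
        fprintf(stderr, "stdout is redirected to a file, pipe, or device\n");
    return 0;
}
Run it once normally and once with stdout redirected to a file to see both branches.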
I am afraid that you can't, at least not with standard C in a platform-independent manner. The idea behind standard input/output is that C does its I/O to and from a standard place. That standard place could be a terminal, a file, or anything else; that is not the concern of C. So you can't detect what the standard I/O is currently connected to.
EDIT: If a platform-specific solution is okay for you, then please refer to the other answers (and also edit the question accordingly).
If a Linux-specific solution is OK, you can examine the symlinks under the /proc directory for your process. E.g.,
$ exec 3>/dev/null
$ ls -l /proc/$$/fd
total 0
lrwx------ 1 root root 64 Sep 12 03:28 0 -> /dev/pts/1
lrwx------ 1 root root 64 Sep 12 03:29 1 -> /dev/pts/1
lrwx------ 1 root root 64 Sep 12 03:29 2 -> /dev/pts/1
lrwx------ 1 root root 64 Sep 12 03:29 255 -> /dev/pts/1
l-wx------ 1 root root 64 Sep 12 03:29 3 -> /dev/null
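The same lookup can be done from inside the program itself; a Linux-specific sketch (relies on /proc and readlink, so it is not portable):
/* Sketch: resolve what fd 1 currently points at via the /proc symlink. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char target[4096];
    ssize_t n = readlink("/proc/self/fd/1", target, sizeof target - 1);
    if (n < 0) {
        perror("readlink");
        return 1;
    }
    target[n] = '\0';   /* readlink() does not NUL-terminate */
    fprintf(stderr, "stdout -> %s\n", target);
    return 0;
}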
You might want to check this out:
http://www.cplusplus.com/reference/clibrary/cstdio/freopen/
I'm quoting from the link:
freopen
Reopen stream with different file or mode
freopen first tries to close any file already associated with the stream given as third parameter and disassociates it.
Then, whether that stream was successfully closed or not, freopen opens the file whose name is passed in the first parameter, filename, and associates it with the specified stream just as fopen would do using the mode value specified as the second parameter.
This function is specially useful for redirecting predefined streams like stdin, stdout and stderr to specific files.
Though I'm not sure if this'll help you find out what it is pointing to in the first place.
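For completeness, a minimal sketch of the quoted freopen() behaviour (the file name is only an example); as noted, it redirects stdout rather than telling you where it currently points:
#include <stdio.h>

int main(void)
{
    /* After this call, anything written to stdout lands in output.txt. */
    if (freopen("output.txt", "w", stdout) == NULL) {
        perror("freopen");
        return 1;
    }
    puts("this line goes to output.txt, not to the terminal");
    fclose(stdout);
    return 0;
}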
