I've got a FlowLayoutPanel with properties:
Dock = Fill (in a usercontrol)
FlowDirection = TopDown
WrapContents = false
I do it this way so that each item added to the panel gets added to the bottom.
The items that I add to this panel are user controls which themselves have FlowLayoutPanels on them, but with the default behaviour (LeftToRight, WrapContents = true). The problem I'm having is that the interior user control's FlowLayoutPanel isn't resizing to fill the outer control, and when I set AutoSize to true on these controls the panel won't wrap its contents - which is apparently a known problem.
If it helps visualize what I'm trying to do, it looks like this:
______________________________
| __________________________ | Outer box = exterior flowlayout
| |Text____________________| | (TopDown, NoWrap)
| | # # # # # # # # # # # #| |
| | # # # # | | Interior boxes = usercontrols with text and a
| |________________________| | flowlayoutpanel on them
| __________________________ | (LeftToRight, Wrap)
| |Text____________________| |
| | # # # # # # # # # # # #| | # = pictures
| | # # | |
| |________________________| |
|____________________________|
I don't think you can dock controls in a FlowLayoutPanel, unless you subclass LayoutEngine and make your own version of the panel using your custom engine. However, there's an awesome solution to this problem: use a TableLayoutPanel! Since you only want one column, it's very easy to use a TableLayoutPanel for this purpose.
The only caveat is that the TLP needs to have 0 rows initially, and you then add the user controls programmatically. And the trick is to dock the user control to Top. This works:
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();

        TableLayoutPanel tlp1 = new TableLayoutPanel();
        this.Controls.Add(tlp1);
        tlp1.Dock = DockStyle.Fill;

        for (int i = 0; i < 5; i++)
        {
            UserControl1 uc = new UserControl1();
            uc.Dock = DockStyle.Top;
            tlp1.Controls.Add(uc);
        }
    }
}
UserControl1 in this case was a user control with a FLP on it which had a bunch of buttons in it so I could confirm that the docking and flowing would work.
In many cases an object in PowerShell 5.1 can have a property that is an array, and the main issue is that such a property is not always the last column (for example, if there are two such properties, at least one of them can't be last).
For example
(Get-Process)[0] | Format-Table Name,Threads,Handles
Name Threads Handles
---- ------- -------
Code {6212, 10164, 10804, 5260...} 238
I tried to make a normal table from that output:
(Get-Process)[0] |
Select-Object Name, @{ l = 'Threads'; e = { $_.Threads.ID -join "`n" } }, Handles |
Format-Table -Wrap
Name Threads Handles
---- ------- -------
Code 6212 238
10164
10804
5260
716
9120
6336
But as you can see, it adds extra padding spaces in the array column. As I understand it, these spaces correspond to the length of the joined array:
((Get-Process)[0].Threads.ID -join "").Length
Can I eliminate these excessive spaces in the output? If I have dozens of elements in the array, I can't see the last columns.
By the way "Out-String -Width" doesn't help either:
(Get-Process)[0] |
Select-Object Name, @{ l='Threads'; e={ $_.Threads.ID | Out-String -Width 12 } }, Handles |
Format-Table -Wrap
-AutoSize makes no difference.
And the Width parameter makes the situation worse:
(Get-Process)[0] |
Select-Object Name, @{ l = 'Threads'; e = { $_.threads.ID -join "`n" } }, Handles |
Format-Table -Wrap @{ e='Name'; width = 4 }, @{ e='Threads'; width = 12 }, Handles
Name Threads Handles
---- ------- -------
Code 6212 238
10164
10804
5260
716
9120
6336
I'm trying to remove empty lines or values in an array; for example, if the output of $DBLevel is empty, the whole line should be removed. Is there an easy way to do this?
Thanks
$vmname= "srvsql009"
$DBType = "sql"
$DBLevel = ""
$style = "blue"
$myCol = @()
$x = "" | Select 'Name','DB-Type','DB-Level','Style'
$x.Name = $vmname
$x.'DB-Type' = $DBType
$x.'DB-Level' = $DBLevel
$x.'Style' = $style
$myCol += $x
$myCol
IsPublic IsSerial Name BaseType
True True Object[] System.Array
Imagine you had results like this in your $myCol variable.
PS> $myCol
Name DB-Type DB-Level Style
---- ------- -------- -----
srvsql009 sql blue
SQLSRV200 cosmos ALPHA RED
You can filter items in an array using the Where-Object command, like this:
PS> $myCol | Where 'DB-Level' -ne ""
Name DB-Type DB-Level Style
---- ------- -------- -----
SQLSRV200 cosmos ALPHA RED
PowerShell comparison operators are different from those in most other programming languages: you can't use == to test equality or <= for less-than-or-equal; instead you use operators such as -eq, -ne, and -le.
To learn more about the comparison operators, see the about_Comparison_Operators help topic (Get-Help about_Comparison_Operators).
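A quick sketch of the most common ones (standard PowerShell, nothing specific to this question):
# PowerShell comparison operators vs. the C-style ones
5 -eq 5            # equality        (==)
5 -ne 4            # not equal       (!=)
3 -le 5            # less or equal   (<=)
7 -gt 2            # greater than    (>)
'SQL' -eq 'sql'    # string comparisons are case-insensitive by default; use -ceq for case-sensitive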
Continuing from my comment,
Removing elements from a simple array (Object[]) is both time and memory consuming because an array is of fixed length and when you remove an element, the whole array needs to be rebuilt in memory.
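For example, "removing" an element from a plain array really means building a new array and discarding the old one (a quick illustration, not tied to the data in this question):
$arr = 1, 2, 3, 4, 5
$arr = $arr -ne 3    # -ne filters the array; a brand-new 4-element array is allocated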
If you have a collection you need to be able to remove items from, it's better to use a [System.Collections.Generic.List[object]].
Example collection (following your syntax)
$vmname = "srvsql009"
$DBType = "sql"
$DBLevel = 120
$style = "blue"
$list = [System.Collections.Generic.List[object]]::new()
for ($i = 0; $i -lt 5; $i++) {
    $x = "" | Select 'Name','DB-Type','DB-Level','Style'
    $x.Name = 'srvsql{0:D3}' -f ([int]$vmname.Substring(6) + $i) # for demo, increment the number
    $x.'DB-Type' = $DBType
    $x.'DB-Level' = if ($i -eq 3) {$null} else {$DBLevel + $i} # for demo add an object with empty 'DB-Level'
    $x.'Style' = $style
    # add this object to the list
    $list.Add($x)
}
$list now contains
Name DB-Type DB-Level Style
---- ------- -------- -----
srvsql009 sql 120 blue
srvsql010 sql 121 blue
srvsql011 sql 122 blue
srvsql012 sql 123 blue
srvsql013 sql 124 blue
Now, if you want to remove one of the items in this list, you can do that using its index:
$list.RemoveAt(2) # removes item 'srvsql011'
Result:
Name DB-Type DB-Level Style
---- ------- -------- -----
srvsql009 sql 120 blue
srvsql010 sql 121 blue
srvsql012 sql blue
srvsql013 sql 124 blue
Or remove by determining the object to delete:
$x = $list | Where-Object { !$_.'DB-Level' }
[void]$list.Remove($x)  # Remove returns a boolean; discard it
Result:
Name DB-Type DB-Level Style
---- ------- -------- -----
srvsql009 sql 120 blue
srvsql010 sql 121 blue
srvsql011 sql 122 blue
srvsql013 sql 124 blue
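Note that Remove($x) assumes the Where-Object filter matched exactly one item; if several rows could have an empty 'DB-Level', a small loop (just a sketch of the same idea) removes them all:
# snapshot the matching items first, then remove each one from the list
foreach ($item in @($list | Where-Object { -not $_.'DB-Level' })) {
    [void]$list.Remove($item)
}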
My cmdlet get-objects returns an array of MyObject with public properties:
public class MyObject {
    public string testString = "test";
}
I want users without programming skills to be able to modify public properties (like testString in this example) from all objects of the array.
Then feed the modified array to my second cmdlet which saves the object to the database.
That means the syntax of the "editing code" must be as simple as possible.
It should look somewhat like this:
> get-objects | foreach{$_.testString = "newValue"} | set-objects
I know that this is not possible, because $_ just returns a copy of the element from the array.
So you'd need to access the elements by index in a loop and then modify the property. This gets complicated very quickly for people who are not familiar with programming.
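For illustration, that index-based version would look something like this (just a sketch using my get-objects/set-objects cmdlets):
# loop over the array by index and modify each element's property
$objects = @(Get-Objects)
for ($i = 0; $i -lt $objects.Count; $i++) {
    $objects[$i].testString = "newValue"
}
$objects | Set-Objects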
Is there any "user-friendly" built-in way of doing this? It shouldn't be more "complex" than a simple foreach {property = value}
I know that this is not possible, because $_ just returns a copy of the element from the array (https://social.technet.microsoft.com/forums/scriptcenter/en-US/a0a92149-d257-4751-8c2c-4c1622e78aa2/powershell-modifying-array-elements)
I think you're misinterpreting the answer in that thread.
$_ is indeed a local copy of the value returned by whatever enumerator you're currently iterating over - but you can still return your modified copy of that value (as pointed out in the comments):
Get-Objects | ForEach-Object {
    # modify the current item
    $_.propertyname = "value"
    # drop the modified object back into the pipeline
    $_
} | Set-Objects
In (allegedly impossible) situations where you need to modify a stored array of objects, you can use the same technique to overwrite the array with the new values:
PS C:\> $myArray = 1,2,3,4,5
PS C:\> $myArray = $myArray |ForEach-Object {
>>> $_ *= 10
>>> $_
>>>}
>>>
PS C:\> $myArray
10
20
30
40
50
That means the syntax of the "editing code" must be as simple as possible.
Thankfully, PowerShell is very powerful in terms of introspection. You could implement a wrapper function that adds the $_; statement to the end of the loop body, in case the user forgets:
function Add-PsItem
{
    [CmdletBinding()]
    param(
        [Parameter(Mandatory,ValueFromPipeline,ValueFromRemainingArguments)]
        [psobject[]]$InputObject,

        [Parameter(Mandatory)]
        [scriptblock]$Process
    )

    begin {
        $InputArray = @()

        # fetch the last statement in the scriptblock
        $EndBlock = $Process.Ast.EndBlock
        $LastStatement = $EndBlock.Statements[-1].Extent.Text.Trim()

        # check if the last statement is `$_`
        if($LastStatement -ne '$_'){
            # if not, add it
            $Process = [scriptblock]::Create('{0};$_' -f $Process.ToString())
        }
    }

    process {
        # collect all the input
        $InputArray += $InputObject
    }

    end {
        # pipe input to foreach-object with the new scriptblock
        $InputArray | ForEach-Object -Process $Process
    }
}
Now the users can do:
Get-Objects | Add-PsItem {$_.testString = "newValue"} | Set-Objects
The ValueFromRemainingArguments attribute also lets users supply input as unbounded parameter values:
PS C:\> Add-PsItem { $_ *= 10 } 1 2 3
10
20
30
This might be helpful if the user is not used to working with the pipeline.
Here's a more general approach, arguably easier to understand, and less fragile:
# $dataSource would be get-object in the OP
# $dataUpdater is the script the user supplies to modify properties
# $dataSink would be set-object in the OP
function Update-Data {
    param(
        [scriptblock] $dataSource,
        [scriptblock] $dataUpdater,
        [scriptblock] $dataSink
    )
    & $dataSource |
        % {
            $updaterOutput = & $dataUpdater
            # This "if" allows $dataUpdater to create an entirely new object, or
            # modify the properties of an existing object
            if ($updaterOutput -eq $null) {
                $_
            } else {
                $updaterOutput
            }
        } |
        % $dataSink
}
Here are a couple of examples of use. The first example isn't applicable to the OP, but it's being used to create a data set that is applicable (a set of objects with properties).
# Use update-data to create a set of data with properties
#
$theDataSource = @()    # will be filled in by first update-data
update-data {
    # data source
    0..4
} {
    # data updater: creates a new object with properties
    New-Object psobject |
        # add-member uses a hash table created on the fly to add properties
        # to a psobject
        add-member -passthru -NotePropertyMembers @{
            room = @('living','dining','kitchen','bed')[$_];
            size = @(320, 200, 250, 424)[$_] }
} {
    # data sink
    $global:theDataSource += $_
}
$theDataSource | ft -AutoSize
# Now use update-data to modify properties in the data set
# this $dataUpdater updates the 'size' property
#
$theDataSink = @()
update-data { $theDataSource } { $_.size *= 2 } { $global:theDataSink += $_ }
$theDataSink | ft -AutoSize
And then the output:
room size
---- ----
living 320
dining 200
kitchen 250
bed 424
room size
---- ----
living 640
dining 400
kitchen 500
bed 848
As described above update-data relies on a "streaming" data source and sink. There is no notion of whether the first or fifteenth element is being modified. Or if the data source uses a key (rather than an index) to access each element, the data sink wouldn't have access to the key. To handle this case a "context" (for example an index or a key) could be passed through the pipeline along with the data item. The $dataUpdater wouldn't (necessarily) need to see the context. Here's a revised version with this concept added:
# $dataSource and $dataSink scripts need to be changed to output/input an
# object that contains both the object to modify, as well as the context.
# To keep it simple, $dataSource will output an array with two elements:
# the value and the context. And $dataSink will accept an array (via $_)
# containing the value and the context.
function Update-Data {
    param(
        [scriptblock] $dataSource,
        [scriptblock] $dataUpdater,
        [scriptblock] $dataSink
    )
    & $dataSource |
        % {
            $saved_ = $_
            # Set $_ to the data object
            $_ = $_[0]
            $updaterOutput = & $dataUpdater
            if ($updaterOutput -eq $null) { $updaterOutput = $_ }
            # emit the (possibly new) value together with its original context
            , ($updaterOutput, $saved_[1])
        } |
        % $dataSink
}
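A quick (hypothetical) example of driving this context-aware version, reusing $theDataSource from above and using the element's index as the context:
# data source emits (value, index) pairs; the sink sees both
update-data {
    $i = 0
    foreach ($item in $theDataSource) { , ($item, $i); $i++ }
} {
    # the updater still sees only the value in $_
    $_.size *= 2
} {
    # here $_ is (value, context)
    '{0}: {1} is now {2}' -f $_[1], $_[0].room, $_[0].size
}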
I am new to PowerShell and am writing my first somewhat complicated script. I would like to import a .csv file and create multiple text arrays with it. I think that I have found a way that will work, but it will be time-consuming to generate all of the lines that I need. I assume I can do it more simply using ForEach-Object, but I can't seem to get the syntax right.
See my current code...
$vmimport = Import-Csv "gss_prod.csv"
$gssall = $vmimport | ForEach-Object {$_.vmName}
$gssweb = $vmimport | Where-Object {$_.tier -eq 'web'} | ForEach-Object {$_.vmName}
$gssapp = $vmimport | Where-Object {$_.tier -eq 'app'} | ForEach-Object {$_.vmName}
$gsssql = $vmimport | Where-Object {$_.tier -eq 'sql'} | ForEach-Object {$_.vmName}
The goal is to make 1 group with all entries containing only the vmName value, and then 3 separate groups containing only the vmName value but using the tier value to sort them.
Can anyone help me with an easier way to do this?
Thanks!
For the last three you can group the objects by the Tier property and get the result as a hashtable. Then you can reference the tier name to get its VMs.
#group objects by tier
$gs = $vmimport | Group-Object tier -AsHashTable
# get web VMs
$gs['web']
# get app VMs
$gs['app']
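For the first group ($gssall, all vmNames), no loop is needed either; with PowerShell 3+ member enumeration this should do (assuming the same CSV columns):
$gssall = $vmimport.vmName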
You may want to use a dictionary for storing the data:
$vmimport = Import-Csv "gss_prod.csv"
$gssall = $vmimport | % { $_.vmName }
$categories = "web", "app", "sql", ...
$gss = @{}
foreach ($cat in $categories) {
    $gss[$cat] = $vmimport | ? { $_.tier -eq $cat } | % { $_.vmName }
}
I like Shay Levy's approach, but the hashtable values there are still the full imported objects rather than just the vmNames. Here is another, more efficient approach where the values are jagged arrays of vmNames, and the categories are created automatically (unlike in Ansgar Wiechers' solution):
# define hashtable
$gs = @{}
# fill it
$vmimport | foreach { $gs[$_.tier] += , $_.vmName }
# get web VMs
$gs['web'] # the result is an array of 'web' vmNames.
In the DataGrid, there is a CheckBoxColumn and a TextColumn, that displays file paths:
| | |
| x |C:\docs\etc\somefile.txt |
| |C:\programs\misc\files\2.0\oth| <- cut off, too long
| x | |
I would prefer if long strings would scroll to the end, so the user can see the filename:
| | |
| x |..misc\files\2.0\otherfile.zip|
| | |
Is there a way to do this? Thanks
Another solution could be to use a TextBlock in the column template. Set TextTrimming to an ellipsis value (e.g. CharacterEllipsis) and put the long text in the ToolTip property. http://msdn.microsoft.com/en-us/library/system.windows.controls.textblock.texttrimming.aspx
If you really want the ellipsis on the left like in your example, you may need to do some code-behind measuring, see Length of string that will fit in a specific width