DataGrid text auto scroll - WPF

In the DataGrid, there is a CheckBoxColumn and a TextColumn that displays file paths:
| | |
| x |C:\docs\etc\somefile.txt |
| |C:\programs\misc\files\2.0\oth| <- cut off, too long
| x | |
I would prefer that long strings scroll to the end, so the user can see the filename:
| | |
| x |..misc\files\2.0\otherfile.zip|
| | |
Is there a way to do this? Thanks

Another solution could be to use a TextBlock in the column template. Set TextTrimming to ellipsis and put the long text in the ToolTip property. http://msdn.microsoft.com/en-us/library/system.windows.controls.textblock.texttrimming.aspx
If you really want the ellipsis on the left like in your example, you may need to do some code-behind measuring; see "Length of string that will fit in a specific width".
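A minimal sketch of such a template column, assuming the bound property is called FilePath (adjust the binding to your own model):
<DataGridTemplateColumn Header="File">
    <DataGridTemplateColumn.CellTemplate>
        <DataTemplate>
            <!-- CharacterEllipsis trims the end of the text; the full path stays available in the tooltip -->
            <TextBlock Text="{Binding FilePath}"
                       ToolTip="{Binding FilePath}"
                       TextTrimming="CharacterEllipsis" />
        </DataTemplate>
    </DataGridTemplateColumn.CellTemplate>
</DataGridTemplateColumn>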

Related

bash: search for string, copy string next to it and list all for further post-processing

I have the following challenge:
my source_file.txt contains:
track001="alpha"
some text ... but also again the string track001 without " symbol... some more text
track002="beta"
some text ... but also again the string track002 without " symbol ... some more text
track027="gamma"
some text ... but also again the string track003 without " symbol ... some more text
track...="..."
... about 30 entries.
Now, I want to:
search for the string next to trackxxx=" (i.e. find the alpha, beta, and gamma strings)
afterwards provide that list to the user for further pre-processing in the terminal:
| Reference | Title | Status |
|---------- |--------| ------------------|
| 001 | alpha | [ not selected ] |
| 002 | beta | [ not selected ] |
| ... | ... | [ not selected ] |
| 027 | gamma | [ not selected ] |
type Reference number (xxx): < user prompt>
change Status (selected = 1 / not selected = 0): < user prompt >
I thought about:
copying the file and deleting all lines which do not start with trackxxx=", but I guess there is a nice sed command which does the magic.
I need to put everything into a matrix to ease the pre-processing.
For the pre-processing I would like to keep it simple (terminal interaction), no zenity etc. Maybe someone has an idea to make the selector operation more user friendly.
Appreciate your support, thank you!
As a partial answer, because of the request for explanation of my comments:
sed -n 's/^track\(.*\)="\([^"]*\).*/ \1 \2 /p' source_file.txt will give you a list of
001 alpha
002 beta
...
027 gamma
which can be fed into a loop in bash to do the actual processing (see the sketch after the breakdown below).
sed -n will not produce output, unless a line is explicitly printed
s/pattern/replacement/ replaces the pattern by the replacement
^track matches track if it is at the beginning of a line (^)
\(.*\) creates a capture group; the \( opens the capture group and the \) closes it. The capture group contains all characters up to the next element in the pattern
-=" This is the next element in the pattern: literal ="
\([^"]*\) second capture group. All character that are not " are added to this group.
.* the rest of the line. Will most probably begin with a ", but if you forget the closing ", that's ok too.
The replacement string \1 \2 is a combination of the two capture groups, \1 for the first and \2 for the second.
p Explicitly print this line if the pattern is matched. Because of the -n, normal output is suppressed, and you will get only the explicitly printed lines.
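A rough sketch of such a loop, reading the reference/title pairs from the sed command above into arrays and then prompting the user as in your example (untested, needs bash 4+ for the associative arrays):
#!/usr/bin/env bash
# Build a small "matrix" from the sed output: reference -> title, reference -> status.
declare -A title status

while read -r ref name; do
    title[$ref]=$name
    status[$ref]="not selected"
done < <(sed -n 's/^track\(.*\)="\([^"]*\).*/ \1 \2 /p' source_file.txt)

# Show the current selection table.
printf '| %-9s | %-7s | %-18s |\n' "Reference" "Title" "Status"
for ref in $(printf '%s\n' "${!title[@]}" | sort); do
    printf '| %-9s | %-7s | [ %-14s ] |\n' "$ref" "${title[$ref]}" "${status[$ref]}"
done

# Simple terminal interaction, as in the question.
read -r -p "type Reference number (xxx): " ref
read -r -p "change Status (selected = 1 / not selected = 0): " sel
[ "$sel" = "1" ] && status[$ref]="selected" || status[$ref]="not selected"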

JQ using not with IN does not work or have any effect?

This code works as expected:
jq --argjson BL ${BL} '.rows[] | select(.cells[] | .value | IN($BL[]))'
It returns a list of elements that contain a value in $BL
I want to return all those that are not in $BL, so I use | not
It returns the exact same result as without the | not, it seems to make no difference.
jq --argjson BL ${BL} '.rows[] | select(.cells[] | .value | IN($BL[]) | not)'
Using the following returned nothing at all:
jq --argjson BL ${BL} '.rows[] | select(.cells[] | .value | IN($BL[]|not))'
Is there a simple thing I'm missing with using IN with not?
For reference, $BL is an array of email addresses; I'm trying to make an API call and return all elements that don't have an email listed in $BL.
Your select receives a series of boolean values, one for each item in the .cells array. Using not inverts all of them, which means that if you had a mixed set of boolean values, it would still be mixed, and in either case select would take those that evaluate to true.
The solution is to use any or all to aggregate these boolean values. Without any sample data, I assume you are looking for
.rows[] | select(any(.cells[]; .value | IN($BL[])) | not)
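A quick illustration with made-up data (the field names follow your snippet, the values are invented):
BL='["a@x.com","b@x.com"]'
echo '{"rows":[{"cells":[{"value":"a@x.com"}]},{"cells":[{"value":"c@x.com"}]}]}' \
  | jq --argjson BL "$BL" '.rows[] | select(any(.cells[]; .value | IN($BL[])) | not)'
This prints only the second row, {"cells":[{"value":"c@x.com"}]}, i.e. the one with no value in $BL.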

Is there a documented list of Snowflake query types?

I am working with the view SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY. It would be extremely helpful to have an exhaustive list of query types that might appear in the column QUERY_TYPE, with the type of commands that generate them. For example, does a PUT command generate a PUT query type? Or is it something like "LOAD"?
If anyone knows where such a list can be found, please post a link. Snowflake's documentation of the view does not provide any list.
Thanks to all who have answered so far. Since the consensus is that no such list exists, here is a merge of the entries provided so far with the values found in my own database. Please keep posting additional answers if your DB contains entries not found below; this way, sooner or later, we will have a fairly complete list:
QUERY_TYPE
CREATE_USER
REVOKE
DROP_CONSTRAINT
RENAME_SCHEMA
UPDATE
CREATE_VIEW
CREATE_TASK
RENAME_TABLE
INSERT
ALTER_TABLE_ADD_COLUMN
RENAME_COLUMN
MERGE
BEGIN_TRANSACTION
ALTER_VIEW_MODIFY_SECURITY
GRANT
ALTER_SESSION
DELETE
DROP_ROLE
DESCRIBE
UNKNOWN
TRUNCATE_TABLE
DROP
SHOW
ALTER_WAREHOUSE_SUSPEND
GET_FILES
UNLOAD
CREATE_NETWORK_POLICY
ALTER_TABLE_DROP_COLUMN
CREATE
REMOVE_FILES
ALTER
ALTER_USER
PUT_FILES
COPY
ALTER_ACCOUNT
DROP_TASK
CREATE_CONSTRAINT
DESCRIBE_QUERY
SELECT
RENAME_USER
COMMIT
RENAME_VIEW
USE
CREATE_TABLE
ALTER_NETWORK_POLICY
CREATE_ROLE
ALTER_TABLE_MODIFY_COLUMN
SET
ALTER_USER_ABORT_ALL_JOBS
ROLLBACK
LIST_FILES
UNSET
CREATE_TABLE_AS_SELECT
DROP_USER
ALTER_WAREHOUSE_RESUME
ALTER_PIPE
ALTER_ROLE
ALTER_TABLE
ALTER_TABLE_DROP_CLUSTERING_KEY
ALTER_USER_RESET_PASSWORD
CREATE_EXTERNAL_TABLE
CREATE_MASKING_POLICY
CREATE_SEQUENCE
CREATE_STREAM
DROP_STREAM
RENAME_DATABASE
RENAME_FILE_FORMAT
RENAME_ROLE
RENAME_WAREHOUSE
RESTORE
By the looks of it there is no complete list of query types that show up in this view. The best I can do is give you a list from my own database, which still doesn't contain things like alter role etc. To answer your other question, a PUT command actually shows up as PUT_FILES by the looks of it:
select distinct query_type from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY;
+-------------------------+
|QUERY_TYPE |
+-------------------------+
|ALTER |
|ALTER_SESSION |
|ALTER_TABLE_ADD_COLUMN |
|ALTER_TABLE_DROP_COLUMN |
|ALTER_TABLE_MODIFY_COLUMN|
|ALTER_USER |
|ALTER_WAREHOUSE_RESUME |
|ALTER_WAREHOUSE_SUSPEND |
|BEGIN_TRANSACTION |
|COMMIT |
|COPY |
|CREATE |
|CREATE_CONSTRAINT |
|CREATE_EXTERNAL_TABLE |
|CREATE_MASKING_POLICY |
|CREATE_ROLE |
|CREATE_SEQUENCE |
|CREATE_STREAM |
|CREATE_TABLE |
|CREATE_TABLE_AS_SELECT |
|CREATE_USER |
|CREATE_VIEW |
|DELETE |
|DESCRIBE |
|DESCRIBE_QUERY |
|DROP |
|DROP_CONSTRAINT |
|DROP_STREAM |
|DROP_USER |
|GET_FILES |
|GRANT |
|INSERT |
|LIST_FILES |
|MERGE |
|PUT_FILES |
|REMOVE_FILES |
|RENAME_COLUMN |
|RENAME_DATABASE |
|RENAME_TABLE |
|RESTORE |
|REVOKE |
|ROLLBACK |
|SELECT |
|SET |
|SHOW |
|TRUNCATE_TABLE |
|UNKNOWN |
|UNLOAD |
|UPDATE |
|USE |
+-------------------------+
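If you want to verify what a particular command shows up as (the PUT case from the question, for instance), a query along these lines against the same view should tell you; QUERY_TEXT is another documented column of the view:
select query_type, count(*) as cnt
from SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
where query_text ilike 'put %'
group by query_type;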
Added ours ... 16 extras ... pass it on :-)
QUERY_TYPE
ALTER
ALTER_ACCOUNT
ALTER_PIPE
ALTER_ROLE
ALTER_SESSION
ALTER_TABLE
ALTER_TABLE_ADD_COLUMN
ALTER_TABLE_DROP_CLUSTERING_KEY
ALTER_TABLE_DROP_COLUMN
ALTER_TABLE_MODIFY_COLUMN
ALTER_USER
ALTER_USER_ABORT_ALL_JOBS
ALTER_USER_RESET_PASSWORD
ALTER_WAREHOUSE_RESUME
ALTER_WAREHOUSE_SUSPEND
BEGIN_TRANSACTION
COMMIT
COPY
CREATE
CREATE_CONSTRAINT
CREATE_EXTERNAL_TABLE
CREATE_MASKING_POLICY
CREATE_NETWORK_POLICY
CREATE_ROLE
CREATE_SEQUENCE
CREATE_STREAM
CREATE_TABLE
CREATE_TABLE_AS_SELECT
CREATE_TASK
CREATE_USER
CREATE_VIEW
DELETE
DESCRIBE
DESCRIBE_QUERY
DROP
DROP_CONSTRAINT
DROP_ROLE
DROP_STREAM
DROP_TASK
DROP_USER
GET_FILES
GRANT
INSERT
LIST_FILES
MERGE
PUT_FILES
REMOVE_FILES
RENAME_COLUMN
RENAME_DATABASE
RENAME_FILE_FORMAT
RENAME_ROLE
RENAME_SCHEMA
RENAME_TABLE
RENAME_USER
RENAME_VIEW
RENAME_WAREHOUSE
RESTORE
REVOKE
ROLLBACK
SELECT
SET
SHOW
TRUNCATE_TABLE
UNKNOWN
UNLOAD
UNSET
UPDATE
USE
Here are some additional ones:
ALTER_AUTO_RECLUSTER
ALTER_SET_TAG
ALTER_TABLE_MODIFY_CONSTRAINT
ALTER_UNSET_TAG
CALL
DROP_SESSION_POLICY
RECLUSTER

splunk query taking long time to return the value, can we eliminate append

I initially used inputlookup to get the output and the query returned results in a fraction of a second, but now I want to use the source as input and run the Splunk query, and it takes a lot of time to return output.
Please suggest a solution to optimise the query time.
I am thinking of removing the multiple appends.
index=csvlookups source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_usage.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_dpt_capacity.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_forecasts.csv"
| eval Date=strftime(strptime(Date,"%m/%d/%Y"),"%Y-%m-%d")
| sort Date, CLLI
| rename CLLI as Office
| search Office="CLGRAB21DS1"
| stats sum(Usage) as Usage by Office, Date
| append
[ search index=csvlookups source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_usage.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_dpt_capacity.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_forecasts.csv"
| eval Date=strftime(strptime(Date,"%m/%d/%Y"),"%Y-%m-%d")
| reverse
| search Office="CLGRAB21DS1" AND Type="SIP PBX"
| fields Date NB_RTU
| fields - _raw _time ]
| sort Date
| fillnull value="CLGRAB21DS1" Office
| filldown Usage
| filldown NB_RTU
| fillnull value=0 Usage
| eval _time = strptime(Date, "%Y-%m-%d")
| eval latest_time = if("now" == "now", now(), relative_time(now(), "now"))
| where ((_time >= relative_time(now(), "-3y#h")) AND (_time <= latest_time))
| fields - latest_time Date
| append
[ gentimes start=-1
| eval Date=strftime(mvrange(now(),now()+60*60*24*365*3,"1mon"),"%F")
| mvexpand Date
| fields Date
| append
[ search index=csvlookups source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_usage.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_dpt_capacity.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_forecasts.csv"
| rename "Expected Date of Addition" as edate
| eval edate=strftime(strptime(edate,"%m/%d/%Y"),"%Y-%m-%d")
| rename edate as "Expected Date of Addition"
| table Contact Customer "Expected Date of Addition" "Number of Channels" Switch
| reverse
| search Customer = "Regular Usage" AND Switch = "CLGRAB21DS1"
| rename "Number of Channels" as val
| return $val ]
| reverse
| filldown search
| rename search as Usage
| where Date != ""
| reverse
| append
[ search index=csvlookups source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_usage.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_dpt_capacity.csv" OR source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_forecasts.csv"
| rename "Expected Date of Addition" as edate
| eval edate=strftime(strptime(edate,"%m/%d/%Y"),"%Y-%m-%d")
| rename edate as "Expected Date of Addition"
| table Contact Customer "Expected Date of Addition" "Number of Channels" Switch
| reverse
| search Customer != "Regular Usage" AND Switch = "CLGRAB21DS1"
| rename "Expected Date of Addition" as Date
| eval _time=strptime(Date, "%Y-%m-%d")
| rename "Number of Channels" as Forecast
| stats sum(Forecast) as Forecast by Date]
| sort Date
| rename Switch as Office
| eval Forecast1 = if(isnull(Forecast),Usage,Forecast)
| fields - Usage Forecast
| streamstats sum(Forecast1) as Forecast
| fields - Forecast1
| eval Date=strptime(Date, "%Y-%m-%d")
| eval Date=if(Date < now(), now(), Date) ]
| filldown Usage
| filldown Office
| eval Forecast = Forecast + Usage
| eval Usage = if(Forecast >= 0,NULL,Usage)
| eval _time=if(isnull(_time), Date, _time)
| timechart limit=0 span=1w max(Usage) as Usage, max(NB_RTU) as NB_RTU, max(Forecast) as Forecast by Office
| rename "NB_RTU: CLGRAB21DS1" as "RTU's Purchased", "Usage: CLGRAB21DS1" as "Usage", "Forecast: CLGRAB21DS1" as "Forecast"
| filldown "RTU's Purchased" |sort -Forecast
Definitely an expensive query you don't want to run often or over large time ranges. In your first append, why are you using reverse? Are you trying to get the latest time and earliest time, which is why you used the append? You could use earliest and latest for this and eliminate the first subsearch. You could also consider eventstats instead of stats on that first search, since you'll still retain the raw data.
You're also summing by _time, so you should think about binning your _time spans (e.g. | bin Date span=1h). Also, why are you using filldown? I'm guessing you want to grab values from different rows and need the rows to match? If so, use streamstats for this.
If inputlookup was working well you should stick with that as you won't get much faster.
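As an illustration of the eventstats suggestion, swapping the first stats for eventstats keeps every raw event and just adds the sum as a new field, so you may not need the first append at all (a rough sketch against the usage part of your search; field names taken from your query):
index=csvlookups source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_usage.csv"
| eval Date=strftime(strptime(Date,"%m/%d/%Y"),"%Y-%m-%d")
| rename CLLI as Office
| search Office="CLGRAB21DS1"
| eventstats sum(Usage) as TotalUsage by Office, Date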
It's hard to give specific advice about your query without knowing more about the data and your end goals. In general:
Filter early. Make your base query (before the first '|') as specific as possible. Run your where and search clauses as soon as you can (see the sketch after this list).
Use fields instead of table. It's more efficient.
Sort only when necessary. Usually, it's not necessary.
Fewer appends is better.
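A sketch of the first two points applied to the usage part of your search: one source in the base search, the CLLI filter moved into it, and fields used to drop everything you don't need (values copied from your query):
index=csvlookups source="F:\\SplunkMonitor\\csvlookups\\Core_Network\\lookup_table_sip_pbx_usage.csv" CLLI="CLGRAB21DS1"
| fields Date CLLI Usage
| eval Date=strftime(strptime(Date,"%m/%d/%Y"),"%Y-%m-%d")
| stats sum(Usage) as Usage by CLLI, Date
| rename CLLI as Office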

Nested flowlayout panel not wrapping

I've got a FlowLayoutPanel with properties:
Dock = Fill (in a usercontrol)
FlowDirection = TopDown
WrapContents = false
I do it this way so that each item added to the panel gets added to the bottom.
The items that I add to this panel are user controls which themselves have FlowLayoutPanels on them; however, they have the standard behaviour (LeftToRight, WrapContents = true). The problem I'm having is that the interior user control's FlowLayoutPanel isn't resizing to fill the outer control, but when I set autosizing to true on these controls, the panel won't wrap its contents - which is a known problem apparently.
If it helps visualize what I'm trying to do, it looks like this:
______________________________
| __________________________ | Outer box = exterior flowlayout
| |Text____________________| | (TopDown, NoWrap)
| | # # # # # # # # # # # #| |
| | # # # # | | Interior boxes = usercontrols with text and a
| |________________________| | flowlayoutpanel on them
| __________________________ | (LeftToRight, Wrap)
| |Text____________________| |
| | # # # # # # # # # # # #| | # = pictures
| | # # | |
| |________________________| |
|____________________________|
I don't think you can dock controls in a FlowLayoutPanel, unless you subclass LayoutEngine and make your own version of the panel using your custom engine. However, there's an awesome solution to this problem: use a TableLayoutPanel! Since you only want one column, it's very easy to use a TableLayoutPanel for this purpose.
The only caveat is that the TLP needs to have 0 rows initially, and you then add the user controls programmatically. The trick is to dock each user control to Top. This works:
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();

        // A single-column TableLayoutPanel stands in for the outer FlowLayoutPanel.
        TableLayoutPanel tlp1 = new TableLayoutPanel();
        this.Controls.Add(tlp1);
        tlp1.Dock = DockStyle.Fill;

        for (int i = 0; i < 5; i++)
        {
            // Each user control is docked to the top so it stacks vertically
            // and stretches to the full width of the panel.
            UserControl1 uc = new UserControl1();
            uc.Dock = DockStyle.Top;
            tlp1.Controls.Add(uc);
        }
    }
}
UserControl1 in this case was a user control with a FlowLayoutPanel on it containing a bunch of buttons, so I could confirm that the docking and flowing would work.
