Is it possible to override/set the gatling.core.runDescription value dynamically so that the report contains a meaningful description?
I am running Gatling with command-line switches that change the number of users, the ramp time, the scenarios that do or do not get used, and so on. I'd like to construct a string at runtime and use it to set the value of runDescription, but I cannot figure out a way to do so.
I have a set of eVars defined in DTM (Dynamic Tag Manager). I will be setting those values in a custom event in my code, and there will be multiple instances where I set these values. I can trigger multiple s.tl() calls and set the values that way, but I want to reduce the number of s.tl() calls. Is there any way to make a single s.tl() call and set multiple values for the same eVars?
Your options for sending multiple values to the same variable on the same hit are:
Use a list variable
Since you wanted to use an eVar, the closest to what you want is probably a list variable. It is mostly like an eVar, but not as flexible. Also, you only get 3 of them per report suite, so check whether the other options will work for you first, unless this is a super important KPI and the other options just won't work for you (from a reporting point of view).
Example:
s.list1='foo1,foo2,foo3';
Use a merchandising eVar (product syntax)
This method uses a regular eVar but you configure it as a product syntax merchandising eVar (configuration done within the Adobe Analytics Admin interface).
Example:
s.products=";;;;;eVar1=foo1,;;;;;eVar1=foo2,;;;;;eVar1=foo3";
Note: you may optionally want to specify a category and/or product, depending on what you are ultimately trying to do (especially if your site has ecommerce tracking, since that helps separate these entries from actual products).
Use a list prop
You can configure any (or all!) of the 75 available props in the interface to be a list prop. The main downsides to a list prop are the 100-character limit (which may be too short, given that you have multiple values) and that it is a traffic variable (hit scope only). But depending on what you are actually trying to record and report on, a list prop may be all you need.
Example:
s.prop1='foo1,foo2,foo3';
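Whichever variable type you end up with, the values can all go out on a single custom link call. A rough sketch, assuming standard AppMeasurement (the link name 'multi value example' is just an illustration):
// Set every value on the hit, then fire one s.tl() custom link call.
s.list1 = 'foo1,foo2,foo3';             // or s.products / a list prop, per the options above
s.linkTrackVars = 'list1';              // list the variables you want sent with the link hit
s.tl(true, 'o', 'multi value example'); // a single custom link ("other") call sends them all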
Long version of the question
I have a complex filtering operation that I'm trying to implement for a ui-grid application. Essentially, I have a big grid with lots of columns, each having the typical filter fields at the top of the columns. That works great.
Then I have an extra analysis step that the user can turn on (which involves looking for sets of rows that meet a certain criterion, and then marking rows visible or not based on the results) that MUST be applied logically after all the other filters (i.e. it does not share the 'commutative property' that all the column-top filters do). This extra analysis/filter step is intended to take the row set produced by the column-top filters and then apply one final, mother-of-all-complex-filtration steps.
I am able to get that filtration logic to produce initially correct results: when the user first clicks into the special mode, I perform the analysis and save the necessary info in a hidden column of the grid, and then a RowsProcessor sets the row.visible attribute accordingly. (Perhaps I didn't need the RowsProcessor and could have just set the visibility in the analysis subroutine.) But whatever, the point is that the rows are marked visible or not.
The problem occurs when the user subsequently adds, removes, or changes one of the column-top filters. The extra analysis step by necessity needs to be based on the rows that are visible according to the column-top filters. The first time into the special filtering routine, a call to gridApi.core.getVisibleRows() returns exactly that rowset; after that, though, the visible rowset has already been reduced by the prior execution of the special filtering. I need to get back to the rowset produced by just the column-top filters (i.e. a complete recalculation of the row.visible attributes), without any special final filtration. Is there a way to do that, i.e. to effectively undo the filtration effects of the RowsProcessor?
Short version of the question
Is there some way to force recalculation of the visible row set based only on the column-top filters, and to do so in a way that hands control back so additional filtration steps can be executed afterwards?
I've looked at various things in the APIs but cannot tell which, if any, might help me. For example:
In the ui.grid (Grid) portion of the API, I see many different flavors of refresh methods that may help, but the distinctions between them aren't clear to me. I hope the one I need is not refreshRows(), which is documented as "not functional at present".
Also, the GridRow 'class' seems to have various methods that speak of visibility "overrides", which sounds like it might be what I need (my final visibility result being an override of the visibility calculated by the column-top filters). But I tried using those methods instead of directly setting row.visible, and I did not see any difference.
Can anyone suggest a direction for me to try?
And even better, is there any written description that provides a high-level overview of ui-grid functionality? I love the package, but this is my first time using it, and I'm having a hard time with what are probably basic concepts; possibly I'm thinking about this problem all wrong.
Once again, thanks for any assistance.
Whenever the rowsProcessors run they start by setting all rows to visible, then each rowsProcessor runs in turn with the results from the previous rowsProcessor being passed to the next one. RowsProcessors have a priority, so you can set your processor to run at the appropriate place in the sequence.
It sounds like your problem is that you're using getVisibleRows to calculate what to do, rather than looking at the rows that are passed in to your rows processor, and evaluating based on which rows are visible in that input.
My guess is that you would do better to set your rowsProcessor to a high (late) priority, and then do all your calculations within that processor, rather than attempting to cache them on the data set itself. If you need to extract the visible rows from the set of renderableRows passed to your processor, you could do it with:
var visibleRows = renderableRows.filter( function(row) { return row.visible; });
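To make that concrete, here is a rough sketch using ui-grid's gridApi.core.registerRowsProcessor (the priority value 900 is an arbitrary high number, and runSpecialAnalysis is a hypothetical placeholder for your own analysis logic):
// Register the special filter as a late-running rowsProcessor so it sees the
// rows only after the built-in column-top filtering has already run.
$scope.gridOptions.onRegisterApi = function (gridApi) {
  gridApi.core.registerRowsProcessor(function (renderableRows) {
    // Work from the rows the earlier processors left visible on this pass.
    var visibleRows = renderableRows.filter(function (row) { return row.visible; });
    // Hypothetical: returns the subset of visibleRows that should be hidden.
    runSpecialAnalysis(visibleRows).forEach(function (row) { row.visible = false; });
    return renderableRows; // a rowsProcessor must return the full row set
  }, 900);
};
When the special mode is toggled, something like gridApi.core.refresh() should re-run the whole processor chain, so the column-top filter results are recalculated before your processor sees them.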
I am using RRDTool to graph data and a predicted trend (LSL) in one graph.
Therefore I am adjusting the corresponding template.
At the moment I set my end time like this:
--end start+7d
When looking at the resulting graphs via the website I can select different time ranges on the right side:
Custom time Range, Overview, 4 Hours, 25 Hours, One Week, One Month and One Year
What I want:
If I select a time range of 4 Hours, 7 days of forecasting makes no sense. I want to calculate the end time depending on the time range selected. For example, I want the time period displayed in the future to be exactly the same size as the selected time range.
Basically I want to define my ending time like this:
--end start+(end-start)
This is not possible, because the end time cannot be defined in terms of itself.
Is there a way to extract the selected time range before defining the end by hand? I could calculate start+(end-start) in my PHP template and insert it when defining the end time.
Any help is appreciated.
EDIT: I forgot to mention that I am using RRDTool via PNP4Nagios. By "website" I meant the standard PNP4Nagios web interface, which ships by default when PNP4Nagios is installed from packages.
With PNP4Nagios, your custom template can define all of the graph definition, with the exception of the time window, which is added to the parameter lists in $opt[] and $def[]. So you cannot easily override the time window 'end', as it is already defined as 'now' by PNP4Nagios (and the 'start' is already defined relative to the end, based on the time range you selected in the web interface). RRDTool is fairly tolerant here: if it sees a start/end being redefined, the last such definition usually takes precedence... but that doesn't solve your issue.
I think what you're trying to do is have the 1-day graph (which normally starts at 'end-1day' and ends at 'now') go from 'now-1day' to 'now+1day', so that your prediction line can fill the second half. That would need to be done by editing the PNP4Nagios code, which is a bit out of scope for this answer.
PNP4Nagios allows definition of the standard timeranges in the config.php; you can also define new timeranges when you call the graph. This means you can achieve the required time window like this:
pnp4nagios/graph?host=<hostname>&srv=<servicedesc>&start=-1day&end=+1day
... although this is just a one-off and does not override the defaults.
The current view config in PNP4Nagios does not allow the default views to specify an end offset, only a start offset.
I am building a web app where the user is able to search for a location; the possible spots are drawn from a database of around 10,000 entries. I would like to use the jQuery UI autocomplete plugin for this, and am wondering if it is realistic to load 10,000 sites into an array for it to search. If not, what can I do to make it work and speed it up?
Thanks!
You probably don't want to send 10,000 locations to each browser. Check out: http://jqueryui.com/demos/autocomplete/#remote
jQuery will send the partial string to the server once it passes 2 characters (in that example); you then send back 10 or so matches. As the user types more characters, the matches are refined further until the user sees the one they want.
I've done this with substring matches as well, although the fast and typical way to do it is by matching the start of the string.
On the server side, you probably want to cache the matches somehow.
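A rough sketch of the remote setup (the /locations URL and the JSON it returns are assumptions about your backend, not something jQuery UI provides):
// jQuery UI appends ?term=<typed text> to the source URL and expects a JSON
// array of strings (or of { label, value } objects) containing the matches.
$('#location').autocomplete({
  source: '/locations', // server returns the ~10 best matches for the typed prefix
  minLength: 2          // don't hit the server until 2 characters have been typed
});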
I know you can set client permissions for a whole dataset like so:
<dataset name="foo" databroker="bar" client-permissions="view"/>
Is there a way to set client-permissions on just one field (similar to how other metadata like "valid" can be set for one field)?
Note: this is in Aviarc 3.5.0, so data bindings are not available.
Update: The use case I have in mind is a search parameters dataset. If I arrive at the search screen from a certain location then one parameter should be locked, because the search results should be filtered by that parameter.
Creating a new databroker for what amounts to a scratch search parameters dataset, just so I can set the read-only property on a single field, is really looking like overkill.
Update: Just to clarify, the dataset doesn't currently have any databroker bound to it, it is just used like a hash to store search parameters.
There isn't currently a way to set client-permissions on a single column/field.
It should be possible to set a datarule on a column which prevents the column from being written by anything other than dataset refreshes.
When I have individual pieces of data which should be read-only but are included in client-writable datasets, I keep copies of the data in non-client-writable datasets and overwrite the client-writable values when they come back.
As mentioned, data rules have the facility to set read-only on individual fields. They can be set on a given field for all rows, or on a field of a single row.
Adam has mentioned that creating a separate databroker for this case would be overkill, which is correct. The DataBinding layer is intended to provide this kind of specialization for certain use cases within your application.
So, you would create a DataBinding, pointing at your search DataBroker, that adds the rule you require to either an existing operation, or a new one that you define. The Dataset is then bound to the DataBinding instead of the DataBroker and from then on is used in the normal way.
The intention is that rules bound by DataBrokers apply to all data of the type supplied through that broker, so they would be rules focusing on data integrity, formatting, etc.
The DataBindings on the other hand are a layer within the application allowing you to bind rules relating to user interaction with the data, as in your example. It is expected that there might be multiple databindings for a given broker, each for a different application path or user task to interact with that data in a different way.
It should be possible to work around this by isolating the parameter I want to be read-only into its own dataset, and setting client-permissions to 'view' just for that parameter/dataset.
This does add the overhead of having to add a special case for that parameter, but I shouldn't need to extend it to any more special cases.