SystemVerilog signal concatenation

I have a VHDL record in the design, e.g.:
TYPE signal_record IS RECORD
signal_0 : std_ulogic;
signal_1 : std_ulogic;
...
signal_31 : std_ulogic;
END RECORD;
In my SV testbench, I would like to apply an assertion to each of the signal_%d fields in SystemVerilog.
generate
for (genvar i = 0; i < 32; i++)
begin : gen_assert
assert property(pp_one_property(clk, {`PATH_TO_SIGNAL.signal_,i}));
end
endgenerate
However, this won't work, as SystemVerilog expects a signal as the second argument of the assertion property.
Is there a trick to apply assertions dynamically to such signals?

You have to write all of them by hand, unfortunately. The concatenation operator doesn't create new identifiers. If your record were an array instead of 32 individual fields, you could do it, because you would be able to index the entries based on the genvar.
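For illustration, a minimal sketch of that array-based alternative, assuming the VHDL side were reworked to expose an array (the signals name below is an assumption):
generate
for (genvar i = 0; i < 32; i++)
begin : gen_assert
  // Indexing an existing array object needs no new identifier,
  // so the genvar works here.
  assert property (pp_one_property(clk, `PATH_TO_SIGNAL.signals[i]));
end
endgenerate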

SystemVerilog allows you to construct identifiers from text macro arguments without introducing white space by using two consecutive grave accents (i.e. ``). This is probably the closest you can get to what you want.
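A minimal sketch of that macro trick (the macro and label names are assumptions). Note that the index must be a literal token at each invocation, so the expansion still cannot be driven by a genvar:
// `` pastes tokens inside a macro body, forming signal_0, signal_1, ...
`define ASSERT_SIGNAL(idx) \
  a_signal_``idx : assert property (pp_one_property(clk, `PATH_TO_SIGNAL.signal_``idx));
`ASSERT_SIGNAL(0)
`ASSERT_SIGNAL(1)
// ... one invocation per field, still written out by hand.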

Preserve query parameters as JSON

I'm trying to store API request query parameters in JSON format, in a way that preserves the inferred original types of the parameters' values. I do this without knowing what these APIs look like beforehand.
The code below deals with each query argument (delimited by &) one by one.
for (int i = 0; i < url_arg_cnt; i++) {
const http_arg_t *arg = http_get_arg(http_info, i);
if (cJSON_GetObjectItem(query, arg->name.p) == NULL) {
// Currently just treating as a string.
cJSON_AddItemToObject(query, arg->name.p, cJSON_CreateString(arg->value.p));
SLOG_INFO("name:value is %s:%s\n", arg->name.p, arg->value.p);
} else {
// Duplicate key: not handled yet.
}
}
With the above code, for input
?start=0&count=2&format=policyid|second&id%5Bkey1%5D=1&id[key2]=2&object=%7Bone:1,two:2%7D&nested[][foo]=1&nested[][bar]=2
I get these prints:
name:value is start:0
name:value is count:2
name:value is format:policyid|second
name:value is id[key1]:1
name:value is id[key2]:2
name:value is object:{one:1, two:2}
name:value is nested[][foo]:1
name:value is nested[][bar]:2
According to this document and other places I've researched (https://swagger.io/docs/specification/serialization/), there is no consensus on how the query parameters are passed, and therefore no guarantee of what I could encounter here. So my goal is to support as many variations as possible.
These possibilities seem to be the most common:
Arrays:
?x=1,2,3
?x=1&x=2&x=3
?x=1%202%203
?x=1|2|3
?x[]=1&x[]=2
String:
?x=1
Object, could be nested:
?x[key1]=1&x[key2]=2
?x=%7Bkey1:1,key2:2%7D
?x[][foo]=1&x[][bar]=2
?fields[articles]=title,body&fields[people]=name
?x[0][foo]=bar&x[1][bar]=baz
Any ideas how best to go about this? Basically, for these query parameters I want to aggregate ('exploded') arguments that belong together and save properly built JSON objects into query. The line in question:
cJSON_AddItemToObject(query, arg->name.p, cJSON_CreateString(arg->value.p));
Converting the URI query to JSON
This post provides a more generic (canonical) approach to the problem of extracting the variables from the URI string.
The query is defined across several descriptive standards (RFCs and specifications), so to have a canonical approach we need to use the specifications to create a normalized form of the query before we can build the object.
TL;DR
To ensure that we can implement the specifications with the ability to cater for future extensions, the algorithm that converts the query to JSON should be separated into steps, each one gradually building the normalized form of the query before it is converted to a JSON object. To do so, we need the following steps:
Extract the query from the URI
Split to key=value
Normalize the key (build the object hierarchy)
Normalize the value (populate the object attributes and build the attribute arrays)
Build JSON object based on the normalized key=value
Such separation of the steps will allow much easier adoption of future changes in the specifications. The parsing of the values can be done with RegEx or with a parser (BNF, PEG, etc.).
Conversion steps
The first thing to be done is to extract the query string from the URI. This is described in RFC3986 and will be explained in its own section, Extracting the query string. The extraction of the query segment, as we will see later, can easily be done with RegEx.
After the query string is extracted from the URI, one needs to interpret the information conveyed by the query. As we will see below, the query has a very loose definition in RFC3986, and the case where the query conveys variables is further elaborated in RFC6570. During the extraction, the algorithm should extract the values (that are in the form key=value) and store them in a map structure (one approach would be to use a struct, as described in the following SO post). The section Extracting variables from the query string provides an overview of the process.
After the variables are separated and placed in the form key=value, the next stage is to normalize the key. Proper interpretation of the key will allow us to build the hierarchical structure of the JSON object from the key=value structure. RFC6570 does not provide much information on how the key can be normalized; however, the OpenAPI specification gives good guidance on how to handle different types of keys. The normalization is further elaborated in the section Normalizing the key.
Next, we need to normalize the value, continuing to build on RFC6570, which defines the types of the variables in several levels. This will be further elaborated in the section Normalizing the value.
The final stage is to build the JSON object with cJSON_AddItemToObject(query, name, cJSON_CreateString(value));. More details will be discussed in the section Building the JSON Object.
During implementation, some of the steps can be merged to a single step to optimize the implementation.
Extracting the query string
RFC3986, the main descriptive standard governing the URI, defines the URI as:
URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]
The query part is defined in section 3.4 of the RFC as the following segment of the URI:
... The query component is indicated by the first question
mark ("?") character and terminated by a number sign ("#") character
or by the end of the URI. ...
The formal syntax of the query segment is defined as:
query = *( pchar / "/" / "?" )
pchar = unreserved / pct-encoded / sub-delims / ":" / "@"
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
pct-encoded = "%" HEXDIG HEXDIG
sub-delims = "!" / "$" / "&" / "'" / "(" / ")"
/ "*" / "+" / "," / ";" / "="
This means that the query can contain more instances of ? and / before the # is met. Actually, as long as the characters after the first occurrence of the ? are in the set of characters that do not have special meaning, everything that is found until the first # is encountered is the query.
At the same time, this also implies that the sub-delimiter &, as well as the ?, has no special meaning according to this RFC when encountered inside the query string, as long as it is in the proper form and position in the URI. Each implementation can therefore define its own structure. The language of the RFC in section 3.4 confirms this implication by leaving space for other interpretations, using "often" instead of "always":
... However, as query components
are often used to carry identifying information in the form of
"key=value" pairs ...
In addition, the RFC also provides (in its Appendix B) the following RegEx that can be used to extract the query part from the URI:
^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?
where capture group 7 is the query from the URI.
The easiest approach for extracting the query, provided that we are not interested in the remaining parts of the URI, is to use the RegEx to split the URI and extract the query string, which will contain neither the leading ? nor the terminating #.
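A minimal sketch of this step in C, assuming a POSIX regex.h is available (the function name is an assumption):
/* Extract the query component (capture group 7 of the RFC3986 regex). */
#include <regex.h>
#include <string.h>

int extract_query(const char *uri, char *query, size_t query_len) {
    static const char *pattern =
        "^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\\?([^#]*))?(#(.*))?";
    regex_t re;
    regmatch_t m[10];

    if (regcomp(&re, pattern, REG_EXTENDED) != 0)
        return -1;
    int rc = regexec(&re, uri, 10, m, 0);
    regfree(&re);
    if (rc != 0 || m[7].rm_so == -1)
        return -1;                        /* URI has no query component */
    size_t len = (size_t)(m[7].rm_eo - m[7].rm_so);
    if (len >= query_len)
        len = query_len - 1;              /* truncate to fit the buffer */
    memcpy(query, uri + m[7].rm_so, len);
    query[len] = '\0';
    return 0;
}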
RFC3986 is further extended by RFC3987 in order to cover international characters; however, the RegEx defined by RFC3986 remains valid.
Extracting variables from the query string
To decompose the query string into key=value pairs, we need to reverse-engineer RFC6570, which establishes a descriptive standard for expanding variables and constructing a valid query. As the RFC states:
... A URI Template provides both a structural description of a URI space
and, when variable values are provided, machine-readable instructions
on how to construct a URI corresponding to those values. ...
From the RFC, we can extract the following syntax for a variable in the query:
query = variable *( "&" variable )
variable = varname "=" varvalue
varvalue = *( valchar / "[" / "]" / "{" / "}" / "?" )
varname = varchar *( ["."] varchar )
varchar = ALPHA / DIGIT / "_" / pct-encoded
pct-encoded = "%" HEXDIG HEXDIG
valchar = unreserved / pct-encoded / vsub-delims / ":" / "@"
unreserved = ALPHA / DIGIT / "-" / "." / "_" / "~"
vsub-delims = "!" / "$" / "'" / "(" / ")"
/ "*" / "+" / ","
The extraction can be performed with a parser that implements the above grammar, or by iterating over the query with the following RegEx and extracting the (key, value) pairs:
(^|&)(([^&=]*)=([^&]*))
If we use RegEx, note that in the previous section we omitted the ? at the start of the query and the # at the end, so we don't need to handle these characters when separating the variables.
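A minimal sketch of this step in C, splitting in place rather than using RegEx (storing the pairs in a map is left as a printf placeholder):
/* Split an already-extracted query (no leading "?") into key=value pairs. */
#include <stdio.h>
#include <string.h>

void split_pairs(char *query) {
    char *save = NULL;
    for (char *pair = strtok_r(query, "&", &save); pair != NULL;
         pair = strtok_r(NULL, "&", &save)) {
        char *eq = strchr(pair, '=');
        if (eq == NULL)
            continue;                /* no "=": skip, or treat as a flag */
        *eq = '\0';
        const char *key = pair, *value = eq + 1;
        printf("key=%s value=%s\n", key, value);  /* store in a map here */
    }
}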
Normalizing the key
The descriptive standard RFC6570 provides generic rules for the format of the key, but the RFC does not help much when it comes to the rules for interpreting the key when an object is constructed. Some of the specifications, such as the OpenAPI specification, the JSON API specification, etc., can help with the interpretation, but they provide only a subset of the rules rather than the full set. To make things worse, some of the SDKs (e.g. the PHP SDK) have their own rules for building the keys.
In such a situation, the best approach is to create hierarchical rules for key normalization that convert the key to a unified format, similar to the JSON path dot notation. The hierarchy of rules will allow us to control the ambiguous situations (in case of collisions between specifications) by controlling the order of the rules. The JSON path notation will allow us to build the object in the final step without requiring the key=value pairs to arrive in any particular order.
Following is the grammar of the normalized format:
key = sub-key *("." sub-key )
sub-key = name [ ("[" index "]") ]
name = *( varchar )
index = "0" / ( NONZERO-DIGIT *DIGIT )
This grammar will allow for keys such as foo, foo.baz, foo[0].baz, foo.baz[0], foo.bar.baz etc.
The following rules and their transformations are a good starting point:
Flat key (key -> key)
Attribute key (key.atr -> key.atr)
Array key (key[] -> key[0])
Object Array key (key[attribute] -> key.attribute), (key[][attribute] -> key[0].attribute), (key[attribute][] -> key.attribute[0])
More rules can be added to address special cases. During the transformation, the algorithm should pass from the most specific rules (the bottom ones) to the most generic and try to find a full match. If a full match is found, the key is overwritten with its normal form and the remaining rules are skipped.
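A minimal sketch of such a normalization pass in C, implementing only the rules above (the function name is an assumption; disambiguating repeated [] indices across pairs is left out):
/* Rewrite "x[key1]" -> "x.key1", "nested[]" -> "nested[0]", keep "x[2]". */
#include <stdio.h>
#include <string.h>

static void normalize_key(const char *raw, char *out, size_t out_len) {
    size_t o = 0;
    for (const char *p = raw; *p != '\0' && o + 8 < out_len; ) {
        if (*p != '[') { out[o++] = *p++; continue; }
        const char *close = strchr(p, ']');
        if (close == NULL) { out[o++] = *p++; continue; }  /* malformed */
        size_t inner = (size_t)(close - p) - 1;
        if (inner == 0)                                /* "[]" -> "[0]" */
            o += (size_t)snprintf(out + o, out_len - o, "[0]");
        else if (strspn(p + 1, "0123456789") >= inner) /* "[7]" is kept */
            o += (size_t)snprintf(out + o, out_len - o, "%.*s",
                                  (int)(inner + 2), p);
        else                                           /* "[atr]" -> ".atr" */
            o += (size_t)snprintf(out + o, out_len - o, ".%.*s",
                                  (int)inner, p + 1);
        p = close + 1;
    }
    out[o] = '\0';
}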
Normalizing the value
Similar to the normalization of the key, the value should also be normalized in cases where it represents a list. We will need to convert the value from an arbitrary list format to the form format (comma-separated list), which is defined by the following grammar:
value = single-value *( "," single-value )
single-value = *( unreserved / pct-encoded )
This grammar will allow the value to take the forms a, a,b, a,b,c, etc.
Extracting the list of values from the value string can be done by splitting the string on the valid delimiters (",", ";", "|", etc.) and producing the list in the normalized form.
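A minimal sketch of this step in C, assuming the value has already been percent-decoded (the delimiter set is an assumption):
/* Rewrite "1|2|3", "1;2;3" or "1 2 3" into the normal form "1,2,3". */
static void normalize_value(char *value) {
    for (char *p = value; *p != '\0'; ++p) {
        if (*p == '|' || *p == ';' || *p == ' ')
            *p = ',';
    }
}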
Building the JSON Object
Once the keys and the values are normalized, converting the flat list (the map structure) to a JSON object can be done in a single pass through all of the keys in the list. The normalized format of the key helps us here: since the key conveys the whole information about its hierarchy in the object, we are able to build the object even if we have not yet encountered some of the intermediate attributes.
Similarly, we can recognize from the value itself whether the attribute should be a flat string or an array, so here as well no additional information is required to create the proper representation.
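A minimal sketch of the single pass in C with cJSON (the function name is an assumption; handling of normalized "[0]" array segments is omitted for brevity):
/* Insert a normalized dot-notation key, e.g. "id.key1", into the object,
 * creating the intermediate objects on demand. */
#include <string.h>
#include "cJSON.h"

static void add_normalized(cJSON *root, const char *key, const char *value) {
    char buf[256];
    strncpy(buf, key, sizeof buf - 1);
    buf[sizeof buf - 1] = '\0';

    cJSON *node = root;
    char *save = NULL;
    char *part = strtok_r(buf, ".", &save);
    while (part != NULL) {
        char *next = strtok_r(NULL, ".", &save);
        if (next == NULL) {                /* leaf: attach the value */
            cJSON_AddItemToObject(node, part, cJSON_CreateString(value));
        } else {                           /* intermediate object */
            cJSON *child = cJSON_GetObjectItem(node, part);
            if (child == NULL) {
                child = cJSON_CreateObject();
                cJSON_AddItemToObject(node, part, child);
            }
            node = child;
        }
        part = next;
    }
}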
Alternative approach
As an alternative approach, we could construct a full grammar that creates an AST (abstract syntax tree) and use the tree to produce the JSON object; however, due to the multiple variations of the formats and the need to allow for future extensions, this approach would be less flexible.
Useful links
The grammar in the text follows the ABNF grammar rules
JSON Path
GNU Bison is an example of a BNF parser generator
The C PEG parser library is an example of a PEG parser
I recently ran into the same issue and will share some wisdom gained from the episode.
I'm assuming you are implementing this on a MITM device (web firewall, etc.).
As noted in the question, there is no consensus on how the query parameters are passed: no single standard or set of rules governs this. In fact, any server may implement its own syntax, as long as the syntax is supported by the server code. The best one can do is to 1) decide which query parameter forms to support (do the best you can, maybe as many as possible) and 2) support only those forms, treating the rest (the ones not supported) as String values, as your current code does.
It's not worth fretting too much about the accuracy of the type preservation/inference in question, or formalizing/generalizing it into a heavyweight solution, because 1) the syntax you may encounter is arbitrary (not necessarily conforming to any standard; web servers can really do whatever they want, so the query parameters often don't conform to, say, the swagger standard referenced) and 2) looking at the query parameters only gives you so much information, so the benefit of implementing anything more than vague approximations (per rules defined by yourself, as stated before) is hard to see. Consider how ambiguous even the simplest cases can be. In the exploded case x=something&x=something, you pretty much have to pretend that arrays have at least two elements; with only one element, x=something, you treat it as a string, for how else would you know whether it's an array or a string? In the x=1 case, is 1 a string or a number, per the original/intended type? What about x=foo&y=1 | 2 | 3, or when you see "1, 2, 3", with spaces? Are the spaces supposed to be ignored, are they array delimiters themselves, or are they actually part of the array elements? Finally, how do you even know the intended string is not "1 | 2 | 3" itself, meaning it's not an array at all?
So the best one can do in parsing these strings and trying to support/infer all these variations (different rules) is to define one's own rules (whatever one is okay with) and support only those.

SSIS Derived Column - Text in Numeric Field is not converting

I'm importing thousands of csv files into an SQL DB. They each have two columns: Date and Value. In some of the files, the value column contains simply a period (ex: "."). I've tried to create a derived column that will handle any cell that contains a period with the following code:
FINDSTRING((DT_WSTR,1)[VALUE],".",1) != 0 ? NULL(DT_R8) : [VALUE]
But, when the package runs it gets the following error when it reaches the cell with the period in it:
The data conversion for column "VALUE" returned status value 2 and status text
"The value could not be converted because of a potential loss of data".
I'm guessing there might be an escape character that I'm missing in my FINDSTRING function but I can't seem to find what that may be. Does anyone have any thoughts on how I can get around this issue?
Trying to debug things like this is why I always advocate adding many Derived Columns to the Data Flow. It's impossible to debug that entire expression. Instead, first find the position of the period and add that as a new column. Then you can feed that into the ternary operation and bit by bit you can add data viewers to ensure you are seeing what you expect to see.
Personally, I'd take a different approach. It seems that you'd like to turn any value that is . into a null of type DT_R8.
Add a derived column, TrimmedValue, and use this expression to remove any leading/trailing whitespace:
RTRIM(LTRIM(Value))
Add a second Derived Column component; this time we'll add the column MenopausalValue, as it will remove the period. Use this expression:
(TrimmedValue == ".") ? NULL(DT_WSTR, 50) : TrimmedValue
Now, you can add your final Derived Column wherein we convert the string representation of Value to the floating point representation.
ISNULL(MenopausalValue) ? NULL(DT_R8) : (DT_R8) MenopausalValue
If the above shows an error, then you need to apply the following version, as I can never remember the evaluation sequence for ternary operations that change type.
(DT_R8) (ISNULL(MenopausalValue) ? NULL(DT_R8) : (DT_R8) MenopausalValue)
Examples of breaking these operations into many steps for debugging purposes
https://stackoverflow.com/a/15176398/181965
https://stackoverflow.com/a/31123797/181965
https://stackoverflow.com/a/33023858/181965
You can do it like this:
TRIM(Value) == "." ? NULL(DT_R8) : (DT_R8)Value

Check for integer in string array

I am trying to check a string array for the existence of a converted integer number. This sits inside a procedure where:
nc_ecosite is an integer variable
current_consite is a string array
ecosite is an integer
current_ecosite_nc is double
IF to_char(nc_ecosite, '999') IN
(select current_consite from current_site_record
where current_ecosite_nc::integer = nc_ecosite) THEN
ecosite := nc_ecosite;
The result always comes from the ELSIF that follows the first IF. This occurs even when nc_ecosite is in the array (verified by separate checks). Why is ecosite not being populated with nc_ecosite when the values match?
I am working with Postgres 9.3 inside pgAdmin.
I found the following to provide the desired result:
IF nc_ecosite in
(select (unnest(string_to_array(current_consite, ',')))::integer
from current_site_record
where current_ecosite_nc::integer = nc_ecosite) THEN
ecosite := nc_ecosite::integer;
The immediate reason for the problem is that to_char() inserts a leading blank for your given pattern (legacy reasons - to make space for a potential negative sign). Use the FM Template Pattern Modifier to avoid that:
to_char(nc_ecosite, 'FM999')
Of course, it would be best to operate with matching data types to begin with - if at all possible.
Barring that, I suggest this faster and cleaner statement:
SELECT INTO ecosite nc_ecosite -- variable or column??
WHERE EXISTS (
SELECT 1 FROM current_site_record c
WHERE current_ecosite_nc::integer = nc_ecosite
AND to_char(nc_ecosite, 'FM999') = ANY(current_consite)
);
IF NOT FOUND THEN ... -- to replace your ELSIF
Make sure you don't run into naming conflicts between parameters, variables and column names! A widespread convention is to prepend variable names with _ (and never use the same for column names). But you better table-qualify column names in all queries anyway. You did not make clear which is a column and which is a variable ...
I might be able to optimize the statement further if I had the complete function and table definition.
Related:
Remove blank-padding from to_char() output
Variables for identifiers inside IF EXISTS in a plpgsql function
Naming conflict between function parameter and result of JOIN with USING clause

select first two characters of values in a concatenated string

I am trying to create a formula field that checks a string that is a series of concatenated values separated by commas. I want to check the first two characters of each comma-separated value in the string. For example, the string pattern could be: abcd,efgh,ijkl,mnop,qrst,uvwx
In my formula I'd like to check whether the first two characters of each value are 'ab' or 'ef'.
If so, I would return true, else false.
Thanks.
To do this properly, you need to use a regular expression. Unfortunately, the REGEX function is not available in formula fields. It is, however, available in formulas in Validation Rules and in Workflow Rules. You can, therefore, specify the formula below in either a Validation Rule or a Workflow Rule:
OR(
AND(
NOT(
BEGINS( KXENDev__Languages__c, "ab" )
),
NOT(
BEGINS( KXENDev__Languages__c, "ef" )
)
),
REGEX( KXENDev__Languages__c , ".*,(?!ab|ef).*")
)
If it's a Validation Rule, you're done -- this formula will create an error if any of the entries do not start with "ab" or "ef". If it's a Workflow Rule, then you can add a Field Update to it to update some field with False when this formula is true (if this formula is true then there is at least one item that doesn't start with ab or ef, so that would make your field False).
Some may ask "What's with the BEGINS statements? Couldn't you have done this all with one REGEX?" Yes, I probably could, but that makes for an increasingly complex REGEX statement, and these are quite difficult to debug in Salesforce.com, so I prefer to keep my REGEXes in Salesforce.com as simple as possible.
I suggest you search for ',ab' and ',ef' using the CONTAINS method. But first of all you need to reimplement the method that composes this string so that it puts ',' before the first substring. The returned string should then look like ',abcd,efgh,ijkl,mnop,qrst,uvwx'.
If you are not able to reimplement the method that composes this string, use the LEFT([your string goes here], 2) method to check the first two characters.

CONCATENATE syntax error "unable to interpret text-cb1"

I've been trying to build a dynamic column list for a SELECT; it's just for learning.
I've made a selection screen with some select-options and checkbox parameters. Whenever a checkbox is checked, I want to concatenate a string onto my lineselect variable.
lineselect = ' CARRID CONNID'.
SELECTION-SCREEN BEGIN OF BLOCK block1 WITH FRAME TITLE text-001.
[...]
SELECTION-SCREEN END OF BLOCK block1.
IF cbcofr EQ 'X'. "where cbcofr is checkbox
CONCATENATE text-cb1 INTO lineselect SEPARATED BY space. "where text-cb1 is 'COUNTRYFR'
ENDIF.
When I check for errors, the compiler just says "Unable to interpret "text-cb1". Possible cause: incorrect spelling or comma error."
It's not about text-cb1; I've tried with the string 'COUNTRYFR' and it says the same thing. I don't see where my error is.
The syntax for CONCATENATE is as follows:
CONCATENATE c1 c2 [... cn] INTO targetc [SEPARATED BY sep].
or
CONCATENATE LINES OF itab INTO targetc [SEPARATED BY sep].
As you have already noted, you need at least two source variables to concatenate.
Full documentation can be found here
As of Netweaver release 7.02 you can also do this:
targetc = c1 && [c2 ... && cn].
In this case, you lose the "separator" functionality, though.
