Why has uvm_top disappeared?

I compiled my first IEEE 1800.2-2017 UVM code this week and was surprised to discover that uvm_top no longer exists. A quick search of the IEEE 1800.2-2017 standard finds no occurrence of "uvm_top", and a quick look at the source code confirms that it really has disappeared. Here are two workarounds:
Instead of, e.g.:
comp = uvm_top.find("*.m_agent.m_seqr"); // uvm 1.2
you could do:
comp = uvm_root::get().find("*.m_agent.m_seqr"); // IEEE 1800.2-2017
or, if you prefer:
uvm_root uvm_top = uvm_root::get();
comp = uvm_top.find("*.m_agent.m_seqr");
I have two questions:
i) Why did the creators of IEEE 1800.2-2017 get rid of uvm_top?
ii) What do they intend us to do instead? (one of those things above or something else?)

Speculations:
Dogmatism that global variables are evil and should be avoided (even if in this case the variable is const).
It's easier for them to generate the class reference, because the tool can handle classes and functions, but not variables inside packages.

Is there a Python alternative to IDL's array_indices() function?

I am working on a port of some IDL code to Python (3.7). I have a translation working which uses whatever direct Python alternatives are available, supplemented where possible with idlwrap. In an effort to eliminate legacy IDL functions from the code, I am looking for an alternative to ARRAY_INDICES(). Right now, I have simply translated the entire function directly and import it on its own. I've spent a good deal of time trying to understand exactly what it does, and even after translating it verbatim it is still unclear to me, which makes coming up with a simple Python solution challenging.
The good news is I only need it to work with one specific set of arrays whose shapes won't change. An example of the code that will be run follows:
import numpy as np
import idlwrap

# arr is assumed to already exist from earlier in the program
temp = np.sum(arr, axis=0)
goodval = idlwrap.where(temp > -10)
ngood = goodval.size
arr2 = np.zeros_like(arr)
# note: IDL's "for i = 0, ngood - 1" is inclusive, i.e. range(ngood) in Python;
# range(0, ngood - 1) as written stops before the last good value
for i in range(0, ngood - 1):
    indices = array_indices(arr2, goodval[i])
    # use indices for computation
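For what it's worth, NumPy's np.unravel_index does the same job as IDL's ARRAY_INDICES(): it converts a flat (one-dimensional) index into per-dimension subscripts for a given shape, so it may be a drop-in replacement here. A minimal sketch (the shape passed in should be that of the array the flat indices were computed against; note that IDL is column-major while NumPy defaults to row-major, so order='F' may be needed to reproduce the original subscript order):
import numpy as np

# Convert a flat index into per-dimension subscripts, which is
# essentially what IDL's ARRAY_INDICES() returns.
flat_index = 7
shape = (3, 4)
print(np.unravel_index(flat_index, shape))             # row-major: (1, 3)
print(np.unravel_index(flat_index, shape, order='F'))  # column-major, IDL-style: (1, 2)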

How to check whether a napi_value of type napi_number is an integer or a decimal using a Node.js N-API function

How can we check whether a given napi_value of type napi_number is an integer or a decimal (a number with a fractional part) using a native Node.js N-API function?
It looks like there is no isInt() or isDouble() equivalent in N-API (and we don't want to make V8 calls either). Consider a scenario where we call a native addon function f1() from JavaScript, passing a JavaScript object as the argument, as shown in the snippet:
let obj = { n1: 123, n2: 123.45 };
myaddon.f1( obj );
The native function f1() wants to extract the values associated with the keys n1 and n2 by calling the best-fitting value-extraction N-API function. For example, to extract the value of n1 it may be best to use one of the napi_get_value_int* functions, while for n2 napi_get_value_double is the better choice:
napi_get_value_double
napi_get_value_int32
napi_get_value_uint32
napi_get_value_int64
Unfortunately I could not find any N-API function to determine the underlying representation of a napi_number value. Have you come across a similar situation, and if so, how did you solve this problem?
https://nodejs.org/api/n-api.html
A request was opened with the node-addon-api team regarding this feature, and they provided an answer that sounds logical. I am sharing it with this community since it may help others with similar queries. Here is the answer from the node-addon-api team:
While handling numbers in JavaScript, it is important to know that all numbers in JavaScript are double-precision 64-bit IEEE 754 values (although some engines such as V8 may have additional internal representations for small integers, there is no such definition in the ECMA spec and no way to distinguish these kinds of numbers from JavaScript). napi_get_value_{double,int32,uint32,int64} just convert the value to the requested type, and there might be a precision loss in the conversion. If an exact integer is required in such cases, use BigInt instead.
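Following that reasoning, a practical workaround (just a sketch of the idea, not an official N-API feature) is to extract the value as a double and check whether it has a fractional part before choosing an integer getter:
#include <math.h>
#include <stdbool.h>
#include <node_api.h>

// Sketch: report whether a napi_number holds an integral value.
// Returns false if the napi call itself fails (e.g. value is not a number).
static bool number_is_integer(napi_env env, napi_value value, bool *is_integer) {
  double d;
  if (napi_get_value_double(env, value, &d) != napi_ok) {
    return false;
  }
  // A finite double with no fractional part can be treated as an integer.
  // Range checks (e.g. against INT32_MAX) would still be needed before
  // calling napi_get_value_int32 / napi_get_value_int64.
  *is_integer = isfinite(d) && trunc(d) == d;
  return true;
}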

C - Double type variables : same formulas, different values

EDIT
SOLVED
Solution was to use the long double versions of sin & cos: sinl & cosl.
It is my first post here, so bear with me :).
I come here today to ask for your input on a small problem I am having with a C application at work. Basically, I am computing an extended Kalman filter, and one of my formulas (whose result I store in a variable) contains multiple computations of sin and cos, at least 16 in total on the same line. I want to decrease the time the computation takes, so the idea is to compute each cos and sin separately, store them in variables, and then substitute the variables back into the formula.
So I did this:
const ComputationType sin_Roll = compute_sin((ComputationType)(Roll));
const ComputationType sin_Pitch = compute_sin((ComputationType)(Pitch));
const ComputationType cos_Pitch = compute_cos((ComputationType)(Pitch));
const ComputationType cos_Roll = compute_cos((ComputationType)(Roll));
where ComputationType is a macro alias for the type double. I know it looks ugly, with a lot of possibly unnecessary casts, but this code is generated by a Python script and was specifically designed that way...
Also, compute_cos and compute_sin are defined as such:
#define compute_sin(a) sinf(a)
#define compute_cos(a) cosf(a)
My problem is the value I get from the "optimized" formula is different from the value of the original one.
I will post the code for both, and I apologise in advance because it is very ugly and hard to follow, but the places where cos and sin have been replaced can be seen. Cleaning it up and optimizing it is my task, and I am doing it step by step to make sure I don't introduce a bug.
So, the new value is:
ComputationType newValue = (ComputationType)(((((((ComputationType)-1.0))*(sin_Pitch))+((DT)*((((Dg_y)+((((ComputationType)-1.0))*(Gy)))*(cos_Pitch)*(cos_Roll))+(((Gz)+((((ComputationType)-1.0))*(Dg_z)))*(cos_Pitch)*(sin_Roll)))))*(cos_Pitch)*(cos_Roll))+((((DT)*((((Dg_y)+((((ComputationType)-1.0))*(Gy)))*(cos_Roll)*(sin_Pitch))+(((Gz)+((((ComputationType)-1.0))*(Dg_z)))*(sin_Pitch)*(sin_Roll))))+(cos_Pitch))*(cos_Roll)*(sin_Pitch))+((((ComputationType)-1.0))*(DT)*((((Gz)+((((ComputationType)-1.0))*(Dg_z)))*(cos_Roll))+((((ComputationType)-1.0))*((Dg_y)+((((ComputationType)-1.0))*(Gy)))*(sin_Roll)))*(sin_Roll)));
And the original is:
ComputationType originalValue = (ComputationType)(((((((ComputationType)-1.0))*(compute_sin((ComputationType)(Pitch))))+((DT)*((((Dg_y)+((((ComputationType)-1.0))*(Gy)))*(compute_cos((ComputationType)(Pitch)))*(compute_cos((ComputationType)(Roll))))+(((Gz)+((((ComputationType)-1.0))*(Dg_z)))*(compute_cos((ComputationType)(Pitch)))*(compute_sin((ComputationType)(Roll)))))))*(compute_cos((ComputationType)(Pitch)))*(compute_cos((ComputationType)(Roll))))+((((DT)*((((Dg_y)+((((ComputationType)-1.0))*(Gy)))*(compute_cos((ComputationType)(Roll)))*(compute_sin((ComputationType)(Pitch))))+(((Gz)+((((ComputationType)-1.0))*(Dg_z)))*(compute_sin((ComputationType)(Pitch)))*(compute_sin((ComputationType)(Roll))))))+(compute_cos((ComputationType)(Pitch))))*(compute_cos((ComputationType)(Roll)))*(compute_sin((ComputationType)(Pitch))))+((((ComputationType)-1.0))*(DT)*((((Gz)+((((ComputationType)-1.0))*(Dg_z)))*(compute_cos((ComputationType)(Roll))))+((((ComputationType)-1.0))*((Dg_y)+((((ComputationType)-1.0))*(Gy)))*(compute_sin((ComputationType)(Roll)))))*(compute_sin((ComputationType)(Roll)))));
What I want is to get the same value as in the original formula. To compare them I use memcmp.
Any help is welcome. I kindly thank you in advance :).
EDIT
I will post also the values that I get.
New value : -1.2214615708217025e-005
Original value : -1.2214615708215651e-005
They are similar up to a point, but the application is safety critical and it is necessary to validate the results.
You cannot meet your expectation, for a couple of reasons.
By altering the code, you change the machine instructions being used in subtle ways that will affect the final value.
For instance, if the original code used fused multiply-adds and this no longer happens, the result will change.
You don't mention the target architecture. Some architectures retain more than 64 bits in the floating-point pipeline; these extra bits get rounded away when the value is forced into a 64-bit memory location. Again, altering when this happens will have minor effects on the final output.
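To make the fused multiply-add point concrete, here is a small standalone sketch (the input values are invented purely to expose the single extra rounding step; they are not taken from the question's code, and a compiler may introduce the same difference on its own via automatic contraction, e.g. -ffp-contract):
#include <math.h>
#include <stdio.h>

int main(void)
{
    double a = 1.0 + 0x1p-52;      /* 1 plus one ulp */
    double b = 3.0;
    double c = -(3.0 + 0x1p-51);

    double p  = a * b;             /* product rounded to double first  */
    double r1 = p + c;             /* then added: two rounding steps   */
    double r2 = fma(a, b, c);      /* fused: a*b + c with one rounding */

    printf("separate: %.17g\n", r1);
    printf("fused:    %.17g\n", r2);   /* the two printed values differ */
    return 0;
}
Compile with something like cc demo.c -lm and compare the two printed values; the expressions are mathematically identical but round differently.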

Is it possible to override bracket operators in Ruby, like endpoint notations in math?

Trying to implement something like this:
arr = (1..10)
arr[2,5] = [2,3,4,5]
arr(2,5] = [3,4,5]
arr[2,5) = [2,3,4]
arr(2,5) = [3,4]
Well, we need to override four bracket operators: [], [), (], ()
Any ideas?
It's called "including or excluding endpoints" in mathematics: https://en.wikipedia.org/wiki/Interval_(mathematics)#Including_or_excluding_endpoints
In short, this is not possible with the current Ruby parser.
The slightly longer answer: You'd have to start by modifying parse.y to support the syntax you propose and recompile Ruby. This is of course not a terribly practical approach, since you'd have to do it again for every new Ruby version. The saner approach would be to start a discussion on ruby-core to see if there is sufficient interest for this to be made part of the language (probably not, tbh).
The syntax you want is not valid for the Ruby parser, but it could be emulated in Ruby with the help of self-modifying code.
The source files need to be pre-processed. A simple regular expression can substitute your interval expressions with ordinary method syntax, i.e.
arr[2,5] -> interval_closed(arr,2,5)
arr(2,5] -> interval_left_open(arr,2,5)
arr[2,5) -> interval_right_open(arr,2,5)
arr(2,5) -> interval_open(arr,2,5)
The string holding the modified source can be evaluated and becomes part of the application just like a source file on the hard disk. (See instance_eval)
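For the record, here is a rough sketch of that idea (the interval_* helpers follow the mapping above and are purely illustrative, and Kernel#eval is used instead of instance_eval for brevity):
# Hypothetical helpers the rewritten source would call; they simply
# filter the enumerable by the requested endpoint rules.
def interval_closed(r, a, b)     ; r.select { |x| x >= a && x <= b } ; end
def interval_left_open(r, a, b)  ; r.select { |x| x >  a && x <= b } ; end
def interval_right_open(r, a, b) ; r.select { |x| x >= a && x <  b } ; end
def interval_open(r, a, b)       ; r.select { |x| x >  a && x <  b } ; end

# Naive pre-processing pass. Note that the last pattern would also match
# ordinary two-argument method calls, so a real implementation needs a
# stricter regular expression or a proper parser.
def preprocess(src)
  src.gsub(/(\w+)\[(\w+),\s*(\w+)\]/, 'interval_closed(\1, \2, \3)')
     .gsub(/(\w+)\((\w+),\s*(\w+)\]/, 'interval_left_open(\1, \2, \3)')
     .gsub(/(\w+)\[(\w+),\s*(\w+)\)/, 'interval_right_open(\1, \2, \3)')
     .gsub(/(\w+)\((\w+),\s*(\w+)\)/, 'interval_open(\1, \2, \3)')
end

arr = (1..10)
p eval(preprocess("arr[2,5)"))   # => [2, 3, 4]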
The usage of self-modifying code should be well justified.
Is the added value worth the effort and the complications?
Does the code have to be readable for other programmers?
Is the preprocessing practical? E.g. will this syntax occur in one or a few isolated files, or be spread everywhere?

Specific functions vs. many arguments vs. context-dependent

An Example
Suppose we have a text to write that could be converted to uppercase or lowercase, and could be printed at the left, center, or right.
Specific case implementation (too many functions)
writeInUpperCaseAndCentered(char *str){//..}
writeInLowerCaseAndCentered(char *str){//..}
writeInUpperCaseAndLeft(char *str){//..}
and so on...
vs
Many-argument function (bad readability, and hard to code without a nice autocompleting IDE)
write( char *str , int toUpper, int centered ){//..}
vs
Context dependent (hard to reuse, hard to code, use of ugly globals, and sometimes even impossible to "detect" a context)
writeComplex (char *str)
{
// analyze str and perhaps some global variables and
// (under who knows what rules) put it center/left/right and upper/lowercase
}
And perhaps there are other options... (suggestions are welcome)
The question is:
Is there any good practice, or advice from experience/academia, for this (recurring) trilemma?
EDIT:
What I usually do is combine the "specific case" implementation with an internal (i.e. not in the header) general many-argument function that implements only the cases actually used and hides the ugly code, but I don't know if there is a better way. This kind of thing makes me realize why OOP was invented.
I'd avoid your first option because, as you say, the number of functions you end up having to implement (though possibly only as macros) can grow out of control. The count doubles when you decide to add italic support, and doubles again for underline.
I'd probably avoid the second option as well. Again, consider what happens when you find it necessary to add support for italics or underlines. Now you need to add another parameter to the function, find all of the places where you called it, and update those calls. In short, annoying, though once again you could probably simplify the process with appropriate use of macros.
That leaves the third option. You can actually get some of the benefits of the other alternatives by using bit flags. For example:
#define WRITE_FORMAT_LEFT 1
#define WRITE_FORMAT_RIGHT 2
#define WRITE_FORMAT_CENTER 4
#define WRITE_FORMAT_BOLD 8
#define WRITE_FORMAT_ITALIC 16
....
void write(char *string, unsigned int format)
{
if (format & WRITE_FORMAT_LEFT)
{
// write left
}
...
}
EDIT: To answer Greg S.
I think the biggest improvement is that if I decide, at this point, to add support for underlined text, it takes two steps:
Add #define WRITE_FORMAT_UNDERLINE 32 to the header
Add the support for underlines in write().
At this point I can call write(..., ... | WRITE_FORMAT_UNDERLINE) wherever I like. More to the point, I don't need to modify pre-existing calls to write, which I would have to do if I added a parameter to its signature.
Another potential benefit is that it allows you to do something like the following:
#define WRITE_ALERT_FORMAT (WRITE_FORMAT_CENTER | \
WRITE_FORMAT_BOLD | \
WRITE_FORMAT_ITALIC)
I prefer the argument way, because there's going to be some code that all the different scenarios need to use, and making a separate function out of each scenario would produce code duplication, which is bad.
Instead of using an argument for each different case (toUpper, centered, etc.), use a struct. If you need to add more cases, then you only need to alter the struct:
typedef struct {
int toUpper;
int centered;
// etc...
} cases;
write( char *str , cases c ){//..}
I'd go for a combination of methods 1 and 2.
Code a method (A) that has all the arguments you need/can think of right now, and a "bare" version (B) with no extra arguments that calls the first method with default values. If your language supports it, add default arguments. I'd also recommend using meaningful names for your arguments and, where possible, enumerations rather than magic numbers or a series of true/false flags. This will make it far easier to read your code and see what values are actually being passed, without having to look up the method definition.
This gives you a limited set of methods to maintain and 90% of your usages will be the basic method.
If you need to extend the functionality later add a new method with the new arguments and modify (A) to call this. You might want to modify (B) to call this as well, but it's not necessary.
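A minimal sketch of that layering in C (the function names and enumerations are invented for illustration):
#include <stdio.h>

typedef enum { CASE_KEEP, CASE_UPPER, CASE_LOWER } text_case;
typedef enum { ALIGN_LEFT, ALIGN_CENTER, ALIGN_RIGHT } alignment;

/* (A) the full method: every option is an explicit, named parameter */
void write_formatted(const char *str, text_case tc, alignment align)
{
    /* real formatting would go here */
    printf("[case=%d align=%d] %s\n", (int)tc, (int)align, str);
}

/* (B) the bare version: forwards to (A) with default values */
void write_plain(const char *str)
{
    write_formatted(str, CASE_KEEP, ALIGN_LEFT);
}
Most call sites then use write_plain(), and only the few that need formatting call write_formatted() directly.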
I've run into exactly this situation a number of times -- my preference is none of the above, but instead to use a single formatter object. I can supply it with the number of arguments necessary to specify a particular format.
One major advantage of this is that I can create objects that specify logical formats instead of physical formats. This allows, for example, something like:
Format title = {upper_case, centered, bold};
Format body = {lower_case, left, normal};
write(title, "This is the title");
write(body, "This is some plain text");
Decoupling the logical format from the physical format gives you roughly the same kind of capabilities as a style sheet. If you want to change all your titles from italic to bold-face, change your body style from left justified to fully justified, etc., it becomes relatively easy to do that. With your current code, you're likely to end up searching through all your code and examining "by hand" to figure out whether a particular lower-case, left-justified item is body-text that you want to re-format, or a foot-note that you want to leave alone...
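A self-contained C sketch of that "logical format" idea (the Format struct and the enumerators are invented for illustration, and the function is named write_text to avoid clashing with the POSIX write()):
#include <stdio.h>

typedef enum { lower_case, upper_case } text_case;
typedef enum { left, centered, right } alignment;
typedef enum { normal, bold, italic } weight;

typedef struct { text_case tc; alignment align; weight w; } Format;

void write_text(Format f, const char *str)
{
    /* real output code would apply f here */
    printf("[%d/%d/%d] %s\n", (int)f.tc, (int)f.align, (int)f.w, str);
}

int main(void)
{
    /* logical formats: restyle every title or body text by editing one place */
    Format title = { upper_case, centered, bold };
    Format body  = { lower_case, left,     normal };

    write_text(title, "This is the title");
    write_text(body,  "This is some plain text");
    return 0;
}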
As you already mentioned, one striking point is readability: writeInUpperCaseAndCentered("Foobar!") is much easier to understand than write("Foobar!", true, true), although you could eliminate that problem by using enumerations. On the other hand, having arguments avoids awkward constructions like:
if(foo)
writeInUpperCaseAndCentered("Foobar!");
else if(bar)
writeInLowerCaseAndCentered("Foobar!");
else
...
In my humble opinion, this is a very strong argument (no pun intended) for the argument way.
I suggest more cohesive functions as opposed to superfunctions that can do all kinds of things unless a superfunction is really called for (printf would have been quite awkward if it only printed one type at a time). Signature redundancy should generally not be considered redundant code. Technically speaking it is more code, but you should focus more on eliminating logical redundancies in your code. The result is code that's much easier to maintain with very concise, well-defined behavior. Think of this as the ideal when it seems redundant to write/use multiple functions.
