Multiple data types in bison/flex - C

I'm writing a bison/flex parser for a language with multiple data types, all compatible with ANSI C. It won't be C itself, but it will retain C's data types.
Thing is... I am not sure how to do this correctly.
For example, in an expression, say 'n1' + 'n2', if 'n1' is a double and 'n2' is a 32-bit integer, I will need to do a type conversion, right? How do I do it correctly?
i.e. I will logically need to work out which type is bigger (here it's double), convert the int32 to double, and then perform the add operation, which results in a double with the value n1 + n2.
I also want to provide support for type casting.
What's the best way to do it correctly? Is there a way to do it nicely, or will I have to write a billion conversion functions like uint32todouble, int32todouble, int32tolongdouble, int64tolongdouble, etc.?
Thanks!
EDIT: I have been asked to clarify my question, so I will.
I agree this is not directly related to bison/flex, but I would like people experienced in this context to give me some hints.
Say I have an operation like this in my own 'programming' language (I would say it's more of a scripting language, but anyway), i.e. the one I would parse:
int64 b = 237847823435ll
int64 a = int64(82 + 3746.3746434 * 265.345 + b)
Here, the int64() pseudo-function is a type cast. First, we can see that 82 is an int constant, followed by the doubles 3746.3746434 and 265.345, and b is an int64. So when I evaluate the expression assigned to 'a', I will have to:
Change the type of 82 to double
Change the type of b to double
Do the calculations
Since the result is a double and we want to cast it to an int64, convert the double to an int64 and store the result in variable 'a'
As you can see, that's quite a lot of type changes... and I wonder how to do them in the most elegant way and with the least work possible. I'm talking about the internal implementation.
I could for example write things like :
int64_t double_to_int64(double k) {
    return (int64_t) k; // specific double-to-int64 conversion
}
For each of the types, so I'd have a function specific to each conversion, but it would take quite a lot of time to achieve and besides, it's an ugly way of doing things. Since some of the variables and number tokens in my parser/lexer are stored in buffers (for different reasons), I don't really see how I could convert from one type to another without such functions. Not to mention that with all the unsigned/signed types, it will double the number of required functions.
Thanks

This has nothing to do with flex or bison. It is a language design question.
I suggest you have a look at the type promotion features of other languages. For example, C and Java promote byte, char, and short to int whenever used in an expression. So that cuts a lot of cackle straight away.
These operations are single instructions on the hardware. You don't need to write any functions at all; just generate the appropriate code. If you're designing an interpretive system, design the p-code accordingly.
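For an interpretive system, one way to avoid a helper per type pair is a single tagged value type plus one promotion routine. A minimal sketch of that idea (the type and function names are illustrative, not from the question):

#include <stdint.h>

/* One tagged value type for the whole interpreter; the tag records
   which member of the union is live.  The tags are declared in
   promotion order, narrowest first. */
typedef enum { T_INT32, T_INT64, T_DOUBLE } type_tag;

typedef struct {
    type_tag tag;
    union {
        int32_t i32;
        int64_t i64;
        double  d;
    } u;
} value;

/* Promote v to the (wider or equal) type t.  One function covers every
   pair, because the C compiler emits the right conversion instruction
   in each switch arm -- no int32todouble-style helpers needed. */
static value promote(value v, type_tag t)
{
    value r;
    r.tag = t;
    switch (t) {
    case T_INT64:
        r.u.i64 = (v.tag == T_INT32) ? v.u.i32 : v.u.i64;
        break;
    case T_DOUBLE:
        r.u.d = (v.tag == T_INT32) ? (double)v.u.i32
              : (v.tag == T_INT64) ? (double)v.u.i64
              : v.u.d;
        break;
    default: /* T_INT32: nothing narrower to promote from */
        r = v;
        break;
    }
    return r;
}

static value eval_add(value a, value b)
{
    /* The common type is simply the larger tag, thanks to the
       declaration order of type_tag. */
    type_tag t = (a.tag > b.tag) ? a.tag : b.tag;
    value x = promote(a, t);
    value y = promote(b, t);
    value r;
    r.tag = t;
    switch (t) {
    case T_INT32:  r.u.i32 = x.u.i32 + y.u.i32; break;
    case T_INT64:  r.u.i64 = x.u.i64 + y.u.i64; break;
    case T_DOUBLE: r.u.d   = x.u.d   + y.u.d;   break;
    }
    return r;
}

A cast like int64() then becomes just another call to promote (or a narrowing twin of it), and adding a new numeric type means one new enum entry and one new case per switch, not a new function per pair of types.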

Related

Representing C Structs in SMT-LIB

I am trying to use the Z3 solver (which works over SMT-LIB) to reason over C programs involving structs. I would like some way to represent, in SMT-LIB, a struct as a variable that contains other variables, but I can't find a way to do that. Does anyone know of a way to represent C structs in SMT-LIB?
You can use the algebraic data types feature of SMT-LIB 2.6 to model structs. See Section 4.2.3 of http://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2017-07-18.pdf
This feature allows not only regular struct declarations but also recursive ones; i.e., you can also model structs that have fields of the same type.
I should add that algebraic data types in SMT are more general than what you need: they can be used to model values constructed with different algebraic constructors. (For the straightforward record case, you'll simply use one constructor.)
Algebraic data types are a rather new feature in SMT-LIB, but both Z3 and CVC4 support them. Solver quality might vary depending on the features you use, but if you simply use datatypes to construct and deconstruct values, it should work pretty much out of the box.
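For instance, a struct like struct point { int x; int y; } might be modeled as follows (a sketch in SMT-LIB 2.6 syntax; the names Point, mk-point, x, and y are made up for illustration):

; struct point { int x; int y; } as a single-constructor datatype;
; the selectors x and y play the role of struct member access
(declare-datatypes ((Point 0))
  (((mk-point (x Int) (y Int)))))

(declare-const p Point)
(assert (= (x p) 3))
(assert (> (y p) (x p)))
(check-sat)
(get-model)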

Performance of C native types vs Postgresql C API types

Essentially, my question boils down to:
Should I bother converting PostgreSQL datatypes to C types, or should I write specific cases to handle each PostgreSQL datatype? Here are some details.
I'm working on some C user-defined functions in PostgreSQL which are designed to deal with a large number of values, passed in as an array of some PostgreSQL-specific numeric type. I don't know the size of the array beforehand, but it can range from hundreds to tens of millions of entries for any of 4 related arrays (value a, pk a, value b, pk b).
My question is: if I am not modifying any of these values from PostgreSQL, is there any conceivable performance benefit to converting these PostgreSQL input datatypes (INT2, INT4, INT8, FLOAT4, FLOAT8) into C datatypes (int, float, or double), controlling the function logic with native C datatypes, and then converting back to PostgreSQL datatypes to return?
For reference, I am implementing the algorithm described here for finding the closest match of any element in a vector Y to each element in a vector X:
https://stats.stackexchange.com/a/161458
Pretty new to both C and the PostgreSQL C API, any help is appreciated.
Thanks!
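For what it's worth, the C API's Datum accessors such as DatumGetFloat8 are cheap reinterpretations rather than conversions, so working on native C types inside your loops costs essentially nothing. A minimal sketch of unpacking a float8[] argument and summing it in plain doubles (the function name is made up and error handling is omitted):

#include "postgres.h"
#include "fmgr.h"
#include "utils/array.h"
#include "catalog/pg_type.h"

PG_MODULE_MAGIC;

PG_FUNCTION_INFO_V1(sum_float8_array);

Datum
sum_float8_array(PG_FUNCTION_ARGS)
{
    ArrayType *arr = PG_GETARG_ARRAYTYPE_P(0);
    Datum     *elems;
    bool      *nulls;
    int        nelems;
    int        i;
    double     total = 0.0;

    /* Unpack the PostgreSQL array into Datums once, up front. */
    deconstruct_array(arr, FLOAT8OID, sizeof(float8), FLOAT8PASSBYVAL, 'd',
                      &elems, &nulls, &nelems);

    /* DatumGetFloat8 just reinterprets the Datum; the arithmetic
       itself runs on a plain C double. */
    for (i = 0; i < nelems; i++)
        if (!nulls[i])
            total += DatumGetFloat8(elems[i]);

    PG_RETURN_FLOAT8(total);
}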

multiple checks in an 'if statement' in C not working

I have my code something like this.
int L = 25;
float x;
/* value of x is assigned by a long calculation */
if (x <= L)
    x = x - L;
But it is not changing the value when x=L.
I have also tried
if (x > L || x == L)
Even in this case, value of x does not change for x=L.
Please help
Either x is slightly greater than 25 and you have been fooled into thinking it is exactly 25 by software that does not display the entire value correctly, or the code being executed and the values being used differ from what you have shown in this question.
EDIT: Contrary to my initial view, and that of some others, the issue isn't to do with comparing different types. As per the comments, the most recent C standard that seems to be out there and freely available (http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf) makes it clear that comparison forces type conversion, generally towards the higher-precision type.
As an aside, in my personal view it is still wiser to make these conversions explicit, because then it is clear as you scan the code what is going on. The issue here is probably the one highlighted by another answerer.
It is quite possible the issue is with your types. It is best to be explicit:
int L = 25;
float x;
/* value of x is assigned by a long calculation */
if (x <= ((float)L)) {
    x = x - ((float)L);
}
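A quick way to check which case you are in is to print x with more digits than the default six. A small self-contained illustration of the underlying problem (the loop stands in for the "long calculation"; the exact result will vary):

#include <stdio.h>

int main(void)
{
    int   L = 25;
    float x = 0.0f;
    int   i;

    /* Stand-in for a long calculation: 250 additions of 0.1f
       "should" give exactly 25, but rounding error accumulates. */
    for (i = 0; i < 250; i++)
        x += 0.1f;

    printf("x = %.9g\n", x);        /* shows the extra digits */
    printf("x == L: %d\n", x == L); /* almost certainly 0 */
    return 0;
}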

assign byte to string delphi xe2

I have this code:
procedure _UUEncode;
var
  Sg: string;
  Triple: string[3];
begin
  ...
  Byte(Sg[1]) := Byte(Sg[1]) + Length(Triple); // <- on this line I get the error
  ...
end;
I got "left sign cannot be assigne to" error, someone can help me?
I am trying to convert a Delphi 7 component to an XE2 component.
Thanks for the suggestions, I really appreciate it. Maybe someone has a checklist of what I must pay attention to while converting a Delphi 7 VCL component to XE2?
I would write it like this, in all versions of Delphi:
inc(Sg[1], Length(Triple));
It's always worth avoiding casts if possible. In this case you are wanting to increment an ordinal value, and inc is what does that.
The reason your typecast failed is that casts on the target of an assignment are special. These typecasts are known as variable typecasts and the documentation says:
You can cast any variable to any type, provided their sizes are the same and you do not mix integers with reals.
In your case the failure is because the sizes do not match. That's because Char is two bytes wide in Unicode Delphi. So, the most literal conversion of your original code is:
Word(Sg[1]) := Word(Sg[1]) + Length(Triple);
However, it's just better and clearer to use inc.
It's also conceivable that your uuencode function should be working with AnsiString since uuencode maps binary data to a subset of ASCII. If you did switch to AnsiString then your original code would work unchanged. That said, I still think inc is clearer!

When to use enums?

I'm currently reading about enums in C. I understand how they work, but can't figure out situations where they could be useful.
Can you give me some simple examples where the usage of enums is appropriate?
They're often used to group related values together:
enum errorcode {
    EC_OK = 0,
    EC_NOMEMORY,
    EC_DISKSPACE,
    EC_CONNECTIONBROKE,
    EC_KEYBOARD,
    EC_PBCK
};
Enums are just a way to declare constant values with more maintainability. The benefits include:
Compilers can assign the values automatically when the specific values don't matter.
They discourage programmers from doing bad things like int monday = SUNDAY + 1.
They make it easy to declare all of your related constants in a single spot.
Use them when you have a finite list of distinct, related values, as in the suits of a deck of cards. Avoid them when you have an effectively unbounded list or a list that could often change, as in the set of all car manufacturers.
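A sketch of the card-suit case:

/* A small, fixed, related set of values -- a natural enum. */
enum suit { CLUBS, DIAMONDS, HEARTS, SPADES };

/* By contrast, the set of car manufacturers is open-ended and
   changes over time, so strings or a lookup table fit better. */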
Sometimes you want to express something that is finite and discrete. An example from the GNU C Programming tutorial is compass directions.
enum compass_direction
{
    north,
    east,
    south,
    west
};
Another example, where the ability of enums to correspond to integers comes in handy, could be status codes.
Usually you give the OK code the value 0, so it can be used directly in if constructs.
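For example, a sketch with made-up status names:

#include <stdio.h>

enum status { STATUS_OK = 0, STATUS_IO_ERROR, STATUS_TIMEOUT };

static enum status do_work(void)
{
    return STATUS_IO_ERROR;  /* pretend something failed */
}

int main(void)
{
    enum status s = do_work();
    if (s)  /* OK is 0, so any nonzero status reads as failure */
        printf("failed with code %d\n", s);
    return s;
}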
The concept behind an enum is also sometimes called an "enumerated type". That is to say, it's a type all of whose possible values are named and listed as part of the definition of the type.
C actually diverges from that a bit, since if a and b are values in an enum, then a|b is also a valid value of that type, regardless of whether it's listed and named or not. But you don't have to use that fact; you can use an enum just as an enumerated type.
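The classic use of that fact is bit flags; a small sketch:

/* Combined values such as MODE_READ | MODE_WRITE are not listed in
   the enum, yet they are still valid values of the type in C. */
enum mode { MODE_READ = 1, MODE_WRITE = 2, MODE_EXEC = 4 };

enum mode m = MODE_READ | MODE_WRITE;  /* value 3, unnamed but legal */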
They're appropriate whenever you want a variable whose possible values each represent one of a fixed list of things. They're also sometimes appropriate when you want to define a bunch of related compile-time constants, especially since in C (unlike C++), a const int is not a compile-time constant.
I think that their most useful properties are:
1) You can concisely define a set of distinct constants where you don't really care what the values are as long as they're different. typedef enum { red, green, blue } color; is better than:
#define red 0
#define green 1
#define blue 2
2) A programmer, on seeing a parameter / return value / variable of type color, knows what values are legal and what they mean (or anyway knows how to look that up). Better than making it an int but documenting "this parameter must be one of the color values defined in some header or other".
3) Your debugger, on seeing a variable of type color, may be able to do a reverse lookup, and give red as the value. Better than you looking that up yourself in the source.
4) The compiler, on seeing a switch on an expression of type color, might warn you if there's no default and any of the enumerated values is missing from the list of cases. That helps avoid errors when you're doing something different for each value of the enum.
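For instance, with GCC or Clang and -Wall, a sketch like this typically draws a warning about the missing case:

typedef enum { red, green, blue } color;

const char *color_name(color c)
{
    switch (c) {  /* no default, so -Wswitch checks coverage */
    case red:   return "red";
    case green: return "green";
    /* 'blue' missing: the compiler warns that it is not handled */
    }
    return "unknown";
}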
Enums are primarily used because they are easier for programmers to read than integer values. For example, you can create an enum that represents the days of the week.
enum DAY { Monday, Tuesday, ... };
Monday is equivalent to the integer 0, Tuesday to Monday + 1, and so on.
So instead of integers that represent each weekday, you can use the new type DAY to represent the same concept. It is easier for the programmer to read, but it means the same to the compiler.
