Construct a DFA of the language L = { L1 \ L2 } - dfa

How can I construct a DFA for the language L = { L1 \ L2 }?
The DFAs of L1 and L2 are given, but how can I "subtract" one DFA from another? Is this somehow possible with the relative complement http://en.wikipedia.org/wiki/Complement_(set_theory) and De Morgan's laws?
My solution:

To my understanding, the desired DFA can be obtained by using a modified product automaton, as used for the intersection of L1 and L2, but the terminal states have to be defined differently. Instead of making a product state (q_1,q_2) a terminal state if and only if q_1 and q_2 are terminal states in A(L1) and A(L2) respectively, define it to be a terminal state if and only if q_1 is a terminal state and q_2 is not a terminal state.
I'm not quite sure whether, besides this elementary argument, the result can also be justified via the set identity L1 \ L2 = L1 ∩ (L2)^c together with De Morgan's laws.
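For concreteness, here is a minimal sketch of that modified product construction in C. The tiny transition tables, alphabet, and state counts are invented for illustration; only the acceptance rule (accept in A(L1) and reject in A(L2)) comes from the argument above.

#include <stdio.h>
#include <stdbool.h>

/* Sketch only: tiny DFAs over {0,1} with made-up transition tables.
 * delta1/accept1 describe A(L1), delta2/accept2 describe A(L2).      */
#define N1 2            /* states of A(L1) */
#define N2 2            /* states of A(L2) */
#define SIGMA 2         /* alphabet size   */

static const int  delta1[N1][SIGMA] = { {0, 1}, {1, 1} };
static const bool accept1[N1]       = { false, true };
static const int  delta2[N2][SIGMA] = { {1, 0}, {1, 1} };
static const bool accept2[N2]       = { true, false };

/* Run the product automaton on a word; a product state (q1,q2) is
 * accepting iff q1 accepts in A(L1) AND q2 does NOT accept in A(L2),
 * which recognises exactly L1 \ L2.                                  */
static bool accepts_difference(const int *word, int len) {
    int q1 = 0, q2 = 0;                 /* start states */
    for (int i = 0; i < len; i++) {
        q1 = delta1[q1][word[i]];
        q2 = delta2[q2][word[i]];
    }
    return accept1[q1] && !accept2[q2];
}

int main(void) {
    int w[] = { 1, 0, 1 };
    printf("accepted by L1\\L2: %s\n",
           accepts_difference(w, 3) ? "yes" : "no");
    return 0;
}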

Related

Implement a state machine in Verilog using a 2D array as transition table

I'm trying to implement a very simple Mealy state machine in Verilog. I have already done it with case and if statements, but I want to be able to do the same using a 2D array as a transition table, for clarity.
Here is the code:
module ex10_1_2(
  // output
  output reg o,
  // debug output to see the current state
  output [1:0] s,
  // input, clk signal
  input i, clk);

  // states
  localparam A = 2'b00, B = 2'b01, C = 2'b10, D = 2'b11;
  // transition table
  reg [3:0] ttable [1:0][2:0];
  // current state
  reg [1:0] cs;

  initial begin
    // initial values: state A, output 0.
    cs <= A;
    o <= 0;
    // curr. state|input|next st.|output
    ttable[A][0] = {B, 1'b0};
    ttable[A][1] = {A, 1'b1};
    ttable[B][0] = {C, 1'b1};
    ttable[B][1] = {A, 1'b0};
    ttable[C][0] = {D, 1'b0};
    ttable[C][1] = {C, 1'b0};
    ttable[D][0] = {A, 1'b1};
    ttable[D][1] = {C, 1'b0};
  end

  always @(posedge clk)
  begin
    cs <= ttable[cs][i][2:1];
    o <= ttable[cs][i][0];
  end

  assign s = cs;
endmodule
As you can see, the transition table is 4 rows * 2 columns. Each cell contains 3 bits: the 2 MSBs indicate the next state and the LSB is the next output.
I have tried this implementation both with blocking and non-blocking assignments, but it still doesn't work.
I'm using EPWave and EDA Playground with Icarus Verilog as the simulator. What I'm getting is that o (the output) is undetermined most of the time, and r (a debug wire used to see the internal current state) goes 0, 1, 2, X and remains X for the rest of the simulation.
What am I missing?
P.S.: Here is a link that should allow you to simulate the code: edaplayground. Notice there are two modules: the first one is the case/if version, which already works. The module I'm trying to debug is ex10_1_2.
Update: I have replaced the matrix declaration with the following:
reg [2:0] ttable [3:0][1:0];
It seems I had an out-of-bounds error because of the previous [1:0]. I was confused because I thought 2 bits would be enough, since that is what the state codes need. However, this dimension is an array index, so I need 4 positions, one for each state.
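Purely as an illustration of that indexing (this is my own C analogue, not part of the Verilog fix), the same table can be written as a 4 x 2 array in which each entry packs the next state and the output into 3 bits:

#include <stdio.h>

/* Illustration only: the table is indexed by [current state][input], so it
 * needs one row per state (4) and one column per input value (2); each
 * entry packs {next state, output} into 3 bits, exactly as the corrected
 * declaration `reg [2:0] ttable [3:0][1:0];` does.                        */
enum { A, B, C, D };                     /* state codes 0..3 */
#define ENTRY(next, out) (((next) << 1) | (out))

static const unsigned char ttable[4][2] = {
    /* input 0        input 1 */
    { ENTRY(B, 0),    ENTRY(A, 1) },     /* state A */
    { ENTRY(C, 1),    ENTRY(A, 0) },     /* state B */
    { ENTRY(D, 0),    ENTRY(C, 0) },     /* state C */
    { ENTRY(A, 1),    ENTRY(C, 0) },     /* state D */
};

int main(void) {
    int cs = A;
    int inputs[] = { 0, 0, 0, 1 };       /* arbitrary stimulus */
    for (int k = 0; k < 4; k++) {
        unsigned char e = ttable[cs][inputs[k]];
        printf("state %d, input %d -> state %d, output %d\n",
               cs, inputs[k], e >> 1, e & 1);
        cs = e >> 1;
    }
    return 0;
}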

How do nested scopes affect stack depth?

I tried compiling the following C code to assembly with MSVC, both with optimizations (CL TestFile.c /Fa /Ot) and without (CL TestFile.c /Fa), and the result is that both produce the same stack depth.
Why does the compiler use 8 bytes for each of the 3 variables x, y, and z when it knows it will use a maximum of 16 bytes? Instead of y$1 = 4 and z$2 = 8, could it not use y$1 = 4 and z$2 = 4 so y and z use the same memory on the stack without any problems?
int main() {
    int x = 123;
    if (x == 123) {
        int y = 321;
    }
    else {
        int z = 234;
    }
}
; Parts of the assembly code
x$ = 0
y$1 = 4
z$2 = 8
main PROC
$LN5:
sub rsp, 24
; And so on...
Nested scopes do not affect stack depth. Per the C standard, nested scopes affect visibility of identifiers and do not impose any requirements on how a C implementation uses the stack, if it has one. A C compiler is permitted by the C standard to generate any code that produces the same observable behavior.
For the program shown in the question, the only observable behavior is to exit with a success status, so a good compiler should, when optimizing, generate a minimal program. For example, GCC 10.2 for x86-64 generates just an xor and a ret:
main:
xor eax, eax
ret
So does Clang 11.0.1. If MSVC does not, that is a deficiency in it. (However, it may be that the switches /Os and /Ot do not request optimization, or do not request much optimization; they may just express a preference for size or speed when used in conjunction with other optimization switches.)
Further, a good compiler should perform lifetime analysis of the use of objects, constructing a graph in which nodes are places in code labeled with creations or uses of values, and directed edges are potential program control flows (or some equivalent representation of the source code). Then assembly (or intermediate code) should be generated to implement the semantics required by the graph. If two pieces of source code have equivalent graphs, the compiler should generate equivalent assembly (or intermediate code) for them (up to some reasonable ability to process complicated graphs), regardless of whether definitions appear in nested scopes or not.
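As an aside, here is a small, purely illustrative C experiment (not from the question) for checking whether a particular compiler reuses the slot for variables with disjoint lifetimes; taking and printing the addresses inside each scope is legal, and the result is implementation-specific:

#include <stdio.h>

/* Illustration only: y and z have disjoint lifetimes, so a compiler that
 * performs lifetime analysis is free to place them at the same stack
 * offset. Whether it actually does is up to the implementation; this
 * program just lets you observe one compiler's choice.                 */
int main(void) {
    {
        int y = 321;
        printf("&y = %p\n", (void *)&y);
    }
    {
        int z = 234;
        printf("&z = %p\n", (void *)&z);
    }
    return 0;
}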

How to optimize valve simulation logic? [closed]

This is a simple logic-programming and optimisation étude that I've created for myself, and I've somewhat stumbled on it.
I have a numerical simulation of a simple scheme. Consider some reservoir (or a capacitor) Cm, which is constantly being pumped up with pressure. Let's call its current state Vm:
At its output it has a valve, or a gate, G, that can be open or closed, according to the following logic:
Gate opens when the pressure (or voltage) Vm exceeds some threshold, call it Vopen: Vm > Vopen
Gate remains open while the outrush current Ia is greater than some Ihold: Ia > Ihold
Gate conducts power only out of the reservoir (like a diode)
I am doing numerical ODE solving of this, i.e. determining Vm and Ia at each (equal, small) timestep dt. I have three variants of this:
Variable types:
float Vm=0.0, Ia=0.0, Vopen, Ihold, Ra, dt;
int G=0;
Loop body v1 (serial):
Vm = Vm + (-Ia*G)*dt;
G |= (Vm > Vopen);
Ia = Ia + (Vm*Ra*G)*dt;
G &= (Ia > Ihold);
Loop body v2 (serial, with temp var, ternary conditionals):
int Gv; // temporary var
Vm = Vm + (-Ia*G)*dt;
Gv = (Vm > Vopen) ? 1 : G;
Ia = Ia + (Vm*Ra*Gv)*dt;
G = (Ia > Ihold) ? Gv : 0;
Loop body v3 (parallel, with cache):
// cache new state first
float Vm1 = Vm + (-Ia*G)*dt;
float Ia1 = Ia + (Vm*Ra*G)*dt;
// update memory
G = ( Vm1 > Vopen ) || (( Ia1 > Ihold ) && G);
Vm = Vm1;
Ia = Ia1; // not necessary to cache, done for readability
The expression for G was obtained by building up a truth table, plus imagination.
Q:
Which is correct? Are they all?
How does the third variant (parallel logic) differ from the first two (serial logic)?
Are there more effective ways of doing this logic?
P.S. I am trying to optimize it for SSE, and then (separately) for OpenCL (in case that gives optimisation hints).
P.P.S. For those who are curious, here is my working simulator involving this gate (HTML/JS).
By the overall description they are the same and should all fulfil your needs.
Your serial code will produce half-steps. That means if you break it down to the discrete description, V(t) can be described as being 1/2 dt ahead of I(t). The first variant will keep G changing at every half-step, and the second one will synchronise it to I. But since you are not evaluating G in between, it doesn't really matter. There is also no real problem with V and I being half a step apart, but you should keep it in mind; for plotting/evaluation you might want to use the vector {V(t), (I(t-1)+I(t))/2, G(t)}.
The parallel code will keep them all in the same time step.
For your purely linear problem, direct integration is a good solution. A higher-order ODE solver won't buy you anything. A state-space representation of a purely linear system only writes the same direct integration in a different way.
For SIMD optimisation there isn't much to do. You need to evaluate step by step since you are updating I from V and V from I. That means you can't run the steps in parallel, which rules out many interesting optimisations.
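If it helps to see the third (parallel) variant end to end, here is a minimal C sketch; the constant charging term, the step count, and the parameter values are my own assumptions (they are not in the question's loop), and the averaged current from the note above is used for printing:

#include <stdio.h>

int main(void) {
    /* made-up parameters, purely for illustration */
    float Vm = 0.0f, Ia = 0.0f;
    float Vopen = 1.0f, Ihold = 0.1f, Ra = 0.5f, dt = 0.001f;
    float pump = 5.0f;              /* assumed constant charging rate */
    int G = 0;

    for (int step = 0; step < 10000; step++) {
        /* compute the new state from the old one (variant 3) */
        float Vm1 = Vm + (pump - Ia * G) * dt;
        float Ia1 = Ia + (Vm * Ra * G) * dt;
        G = (Vm1 > Vopen) || ((Ia1 > Ihold) && G);

        /* plot/evaluate using the half-step-averaged current */
        float Ia_avg = 0.5f * (Ia + Ia1);
        if (step % 1000 == 0)
            printf("t=%.3f Vm=%.3f Ia=%.3f G=%d\n",
                   step * dt, Vm1, Ia_avg, G);

        Vm = Vm1;
        Ia = Ia1;
    }
    return 0;
}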

How would this C struct be translated into Cobol

I'm pretty new to Cobol and I'm having difficulty figuring out how to use the structs. What would the C structs below look like when they are converted into Cobol?
These are the structs I have:
struct dataT
{
    int m;
};

struct stack
{
    int top;
    struct dataT items[STACKSIZE];
} st;
How would this statement be represented in Cobol?
st.items[st.top].m
This is very much a stab in the dark since I've never written a line of COBOL before today [1]. However, after a little googling [2] and playing around in ideone, I think I've at least captured the flavor of what the code would look like, if not the actual solution:
IDENTIFICATION DIVISION.
PROGRAM-ID. IDEONE.
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 WS-STACK.
   05 WS-TOP PIC 9 VALUE 0.
   05 WS-ITEMS OCCURS 10 TIMES INDEXED BY I.
      10 WS-M PIC 9 VALUE 0.
PROCEDURE DIVISION.
    ADD 1 TO WS-TOP.
    MOVE 9 TO WS-M(WS-TOP).
    ADD 1 TO WS-TOP.
    MOVE 8 TO WS-M(WS-TOP).
    DISPLAY "WS-STACK :" WS-STACK.
    DISPLAY "WS-TOP :" WS-TOP.
    DISPLAY "WS-ITEMS[WS-STACK.WS-TOP].M :" WS-M(WS-TOP).
    SUBTRACT 1 FROM WS-TOP.
    DISPLAY "WS-TOP :" WS-TOP.
    DISPLAY "WS-ITEMS[WS-STACK.WS-TOP].M :" WS-M(WS-TOP).
    STOP RUN.
Yes, the size is hardcoded to 10 (I don't know how to do symbolic constants in COBOL), and WS-TOP and WS-M can only store values from 0 to 9.
Needless to say, data types in COBOL and C are very different. I haven't actually created a new stack type; I've declared a single data item with a couple of sub-items, one of which is a table that can store 10 instances of something called WS-M. This is effectively the same as writing
#include <stdio.h>

int main( void )
{
    int top = 10;
    int m[10];

    m[--top] = 9;
    m[--top] = 8;

    printf("top = %d\n", top );
    printf("m[%d] = %d", top, m[top] );

    top++;

    printf("top = %d\n", top );
    printf("m[%d] = %d", top, m[top] );

    return 0;
}
in C, with the main difference being that I wrote the C code such that the stack grows "downwards" (which is more natural). As far as I could tell in the ten minutes I spent going through that COBOL tutorial, COBOL does not really have an equivalent to a struct type; even though data items can be grouped in a hierarchical manner, you're not creating a new struct or record type as such. If I wanted multiple stacks, I'd have to declare multiple, separate backing stores and index variables.
I think.
I'll have to do a little more reading.
[1] At this point in the day I would rather work on just about anything other than the problem in front of me right now, and I've always been curious about how the other half lived. Also, I'm working on an online banking platform and I know half our backends are written in COBOL, so it wouldn't hurt to take some time to learn it.
[2] I cannot vouch for the quality of this tutorial; it's the first one I found that seemed reasonably complete and easy to read.
You would do that like so:
*> Not a symbolic constant, but if it is never referenced as the
*> target of a move/compute, the compiler should recognize that
*> and tune for it.
 01 Stack-Size Pic S9(8) comp-4 Value <<some-number>>.

*> The "comp-4" data type is a twos-complement integer; S9(8) makes
*> it a 32-bit word. Depending upon your compiler, you will sometimes
*> see "binary" or "comp-5".
 01 My-Stack.
    02 Stack-Top Pic S9(8) comp-4 Value 0.
    02 Stack-Items occurs 0 to <<some-maximum-size>>
                   depending on Stack-Size.
*>-------*
*> This is your data structure. If you made it a copybook, you would have
*> a similar effect to having the struct def from the C code. You can
*> use the copy/replacing features if you need to make multiple data
*> items with different names.
*>
       03 Stack-M Pic S9(8) comp-5.
*>-------*
To access the current value on top of the stack:
Move Stack-M (Stack-Top) to where-ever
Some helpful paragraphs:
Pop-Stack.
    Move 0 to Stack-M (Stack-Top)
    Subtract 1 from Stack-Top
    Exit.

Push-Stack.
    Add 1 to Stack-Top
    Move <<whatever value>> to Stack-M (Stack-Top)
    Exit.
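For comparison, here is a rough C sketch of the same push/pop paragraphs; the maximum size, the names, and the 1-based indexing convention are my own choices, not from the answer:

#include <stdio.h>

/* Rough C analogue of the COBOL answer above; element 0 is left unused
 * so the top index mirrors COBOL's convention of subscripting
 * Stack-Items from 1.                                                  */
#define MAX_STACK 100

static int stack_m[MAX_STACK + 1];
static int stack_top = 0;

static void push_stack(int value) {
    stack_top += 1;                  /* Add 1 to Stack-Top             */
    stack_m[stack_top] = value;      /* Move value to Stack-M(Top)     */
}

static void pop_stack(void) {
    stack_m[stack_top] = 0;          /* Move 0 to Stack-M(Stack-Top)   */
    stack_top -= 1;                  /* Subtract 1 from Stack-Top      */
}

int main(void) {
    push_stack(9);
    push_stack(8);
    printf("top of stack: %d\n", stack_m[stack_top]);  /* prints 8 */
    pop_stack();
    printf("top of stack: %d\n", stack_m[stack_top]);  /* prints 9 */
    return 0;
}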

Automatically printing the structures and variables in C

I am working with 4-5 .c files (around 2,000 to 5,000 lines each) which include several headers. Currently I do not have any debug prints that would help me debug the program during its execution.
My question is:
Is there a way (or some existing tool) to parse the .c files and add a new set of print statements for all the variables in the current scope in the .c file? Much the same way as VC++ allows us to see locals and globals, etc. I need them printed at each step. Also, pointers should be dereferenced.
For example, let's say at one point in the .c file there are 10 global variables and 3 locals. I need to generate smart printfs to print these 13 variables at that point. Later in the program, if there are 20 variables, I should be able to print the 20 variables, and so on.
The included header files contain all the relevant declarations for each of these variables (which can be structures/pointers/arrays or some combinations thereof).
I was trying to achieve this via a Perl script. What I did is generate the preprocessed file (.i file) and try to parse it with Perl and then generate individual print functions specific to each variable, but after half a day's effort I realized that it's just too time-consuming.
Is there a tool that already does this? If not, anything close to it would be good enough (something I can apply some Perl processing to, etc.).
My goal is that at each step during the program's execution, I should be able to see the variables valid in that scope without having to invoke the debugger.
I am allowed to process the .c files and rewrite them, etc.
I hope my question is clear; thanks for your replies.
Assuming that your C program can be interpreted by Frama-C's value analysis, which is far from a given, you could use that to obtain a log of the values of all living variables at each point of the program or at points of interest.
Consider the following program:
int x = 1;
main(){
    int l;
    x=2;
    Frama_C_dump_each();
    l=3;
    Frama_C_dump_each();
    {
        int blocklocal = l + 1;
        Frama_C_dump_each();
        x = blocklocal + 1;
        Frama_C_dump_each();
    }
    Frama_C_dump_each();
    return 0;
}
Running frama-c -val -slevel 1000000000 -no-results t.c on this program produces the log:
[value] Values of globals at initialization
x ∈ {1}
[value] DUMPING STATE of file t.c line 7
x ∈ {2}
=END OF DUMP==
[value] DUMPING STATE of file t.c line 9
x ∈ {2}
l ∈ {3}
=END OF DUMP==
[value] DUMPING STATE of file t.c line 12
x ∈ {2}
l ∈ {3}
blocklocal ∈ {4}
=END OF DUMP==
[value] DUMPING STATE of file t.c line 14
x ∈ {5}
l ∈ {3}
blocklocal ∈ {4}
=END OF DUMP==
[value] DUMPING STATE of file t.c line 16
x ∈ {5}
l ∈ {3}
=END OF DUMP==
The Frama_C_dump_each() statements were inserted by me manually, but you could also nudge the interpreter so that it dumps a state automatically at each statement.
For this approach to work, you need the entire source code of your program, including standard library functions (strlen(), memcpy(), …) and you must hard-code the values of the input at the beginning of the main() function. Otherwise, it will behave as the static analyzer that it really is instead of behaving as a C interpreter.
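For instance, a driver along these lines (my own sketch, not from the answer; compute() and its arguments are hypothetical) pins the inputs down so the value analysis tracks single concrete values:

/* Hypothetical driver for Frama-C's value analysis: the inputs are
 * hard-coded so every variable holds a single concrete value and the
 * analysis behaves like an interpreter.                              */
void Frama_C_dump_each(void);   /* the dump primitive used above */

static int compute(int a, int b)
{
    int sum = a + b;
    Frama_C_dump_each();        /* logs x, a, b, sum at this point */
    return sum;
}

int x = 1;

int main(void)
{
    x = compute(3, 4);          /* hard-coded inputs, no unknowns */
    Frama_C_dump_each();
    return 0;
}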
You could also use the GUI to observe the values of variables in your program, but if the program is not linear, statements that are visited several times (either because of function calls or because of loops) will show all the values that can be taken during execution.

Resources