I've been looking at some Dafny tutorials and couldn't find how to read from (or write to) simple text files. Surely this has to be possible, right?
I have cooked up a very basic file IO library for Dafny based on code from the Ironfleet project.
The library consists of two files: a Dafny file fileio.dfy declaring signatures for various file operations, and a C# file fileionative.cs that implements them.
As an example, I wrote a simple Dafny program, fileiotest.dfy, that writes the string hello world! to the file foo.txt in the current directory.
To compile, place all three files in the same directory, then run:
dafny fileiotest.dfy fileionative.cs
which should print something like
Dafny 2.1.1.10209
Dafny program verifier finished with 4 verified, 0 errors
Compiled program written to fileiotest.cs
Compiled assembly into fileiotest.exe
Then you can run the program (I use mono since I'm on unix):
mono fileiotest.exe
which should print done on success.
Finally, you can check the contents of the file foo.txt! It should say hello world!
A few last notes.
First, the specifications for the operations in fileio.dfy are pretty weak. I haven't defined any kind of logical model of what's on disk, so you won't be able to prove things like "if I read the file I just wrote, I get back the same data". (Indeed, such things are not true except under additional assumptions about other processes on the machine, etc.) If you are interested in trying to prove such things, let me know and I can help further.
Second, one thing the signatures do give you is enforced error handling. All operations return a bool saying whether or not they failed, and the specifications basically tell you nothing unless you know all operations have succeeded. If this is a reasonable programming discipline for you, it's nice to have it enforced by Dafny. (If you don't want this, it's easy to take out.)
Related
I am trying to call a do file which has loops from a program in another do file, and I am getting an error.
Now, if I use do instead of include, it runs fine, but I don't get to use the local macros it creates. I used include so I can use the macros further along in the program. I don't want to use globals.
First do file (test.do):
forval i = 1/5 {
local val`i' = `i'
}
Second do file (call-test.do):
capture program drop test
program test
include "test.do"
di `val1'
end
test
I got error r(9611);
I am using version 16.1.
Response from Stata support
The -include- command is designed to let you share definitions. It will not work correctly within a program, as documented in -help include-.

The short answer is that -include- is usually OK to use in programs, but not with looping commands, and if you use -include- in a program, it probably isn't working the way you think it is.

Here's the long version of exactly what is going on:

When you use -include- in a program, your program literally includes the -include- command in it. The program does NOT have the contents of the include file substituted in place. That's the start of the problem for looping commands.

In any case, when a program executes the -include- command, Stata gets confused about whether to define a loop program on behalf of a looping command globally or within the program, and things go downhill from there. Given how the code is structured, it is unlikely we could fix -include- to behave differently, so our documentation really should simply recommend against using -include- in programs. In addition, at the point at which the failure occurs, Stata simply knows that it cannot call a program that it thinks should already be in memory, hence the 9611 return code. It has no idea at that point that this was because it was called with -include-, unfortunately.

We could in the future introduce a true C-like "#include" for use in programs, which would simply substitute in-line the lines from whatever was included into your program.
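For comparison, this is what C-style textual inclusion does: the preprocessor pastes the included file's lines into the source before the compiler ever sees it, which is exactly the substitution behaviour the support response says -include- does not perform inside a program. The file names below are made up for illustration.

/* defs.h - hypothetical shared definitions */
#define GREETING "hello from defs.h"

/* main.c - the preprocessor replaces the #include "defs.h" line with the
   contents of defs.h, so GREETING behaves as if it were typed right here. */
#include <stdio.h>
#include "defs.h"

int main(void)
{
    printf("%s\n", GREETING);
    return 0;
}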
I created a simple C program a while ago. It's a simple command-line generator that takes some number, prints the results and stops. I always ran it in the editor's command-line environment, which automatically paused after the program ran, so I omitted adding a getchar() at the end.
I now regret this, because I managed to lose the source. All I have now is the compiled .o and .exe file, and the latter, of course, exits immediately after it prints the output, so it's unusable. It wasn't that long, about 100 lines, but I'd like to avoid rewriting it. (Also, I might even learn something new this way.)
Now I have very basic knowledge of C, and about zero of PC-level x86 assembly (though I learnt the basics of 8086 assembly for microcontrollers, that won't be that helpful now, I guess), so I'm kinda stuck here. Can I either add a getchar()-like pausing function to the compiled code, or is there any way I can make that .exe stop before exiting while still keeping it standalone?
The program will run on a Windows 10 system.
I would write some sort of batch script in which you call your program and then just run pause, which waits for you to hit a key before it continues.
wrapper.bat:
yourprogram.exe
pause
Of course, you could disassemble your executable into raw x86 assembly code, look up the code for a simple getchar() on Windows, add that, and reassemble. However, depending on how complex the program was, it would probably be less time-consuming to rewrite it, or to just create a wrapper batch script.
It's possible to hijack a .o file, and you can even do it with .exe, .dll, and so on, but it's not simple and requires a lot of know-how. What I would suggest is to use some sort of decompiler to try to restore the original source code, make the change, and compile it again. You can find suggestions for decompilers in this old answer.
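For what it's worth, once you have source again (whether recovered with a decompiler or rewritten), the pause itself is tiny. A minimal sketch, with a placeholder for the program's real work:

#include <stdio.h>

int main(void)
{
    /* ... the generator's real work would go here ... */
    printf("result: %d\n", 42);        /* placeholder output */

    printf("Press Enter to exit...\n");
    getchar();                         /* block until Enter so the console window stays open */
    return 0;
}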
Can I include a first.c file into another second.c? (I am doing some socket programming: I want to store the messages received by the server in a linked list, so the first file holds the linked list and the second file, the socket programming code, needs to access the data of the first.) What kind of data in the first file can be accessed in the second file? Is this good practice?
Please also explain user-defined .h files and give me an example of both.
C is a low-level, permissive language. If the programmer wants to do weird things, the compiler will do nothing to stop them.
Your question is of that flavour: you can include first.c in second.c, and neither the compiler nor the linker will protest. In simple cases (only two source files) it will work just the same. You could also rename first.c to first.h and include it. All of that is simply convention... and good practice.
But never, ever do that (except in very special cases, as suggested by Jonathan Leffler). You break the separate compilation rules to pieces. When you include a file, it is (from the compiler's point of view) the same as pasting it in with your text editor. You know you can always have a single monolithic source file, and you should know (or you will soon find out if you try) that it is hard to test and error-prone, because you have only two scopes, global and local to a function, and it easily leads to poorly structured programming.
The great ancients found it better to have smaller source files that are easier to write, test, read, and understand, with include files containing the smallest part necessary to allow the separate sources to communicate: normally only declarations and constants, seldom global variables.
The conclusion is nothing more than what you already got in the comments: yes, you can, but you surely will not want to do that.
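To also answer the request for an example of user-defined .h files, here is a minimal sketch of the conventional split (all names are made up): the header carries only the declarations the two translation units need to share, each .c file is compiled separately, and the linker joins the object files.

/* list.h - declarations only, shared by both .c files */
#ifndef LIST_H
#define LIST_H
struct node { const char *msg; struct node *next; };
void list_push(struct node **head, const char *msg);
#endif

/* list.c - the linked-list implementation, compiled on its own */
#include <stdlib.h>
#include "list.h"

void list_push(struct node **head, const char *msg)
{
    struct node *n = malloc(sizeof *n);   /* error handling omitted for brevity */
    n->msg = msg;
    n->next = *head;
    *head = n;
}

/* server.c - the socket code sees only the declarations from list.h */
#include "list.h"

int main(void)
{
    struct node *messages = NULL;
    list_push(&messages, "hello from a client");   /* store a received message */
    return 0;
}

Build with separate compilation, for example: cc -c list.c && cc -c server.c && cc list.o server.o -o server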
Hullo,
When you disassemble a Win32 .exe compiled by a C compiler, it shows that some compilers link 'hidden' routines into it, I think even if the C program is an empty one of five bytes or so. I understand that those five bytes get wrapped in the PE .exe format, but why add extra routines? They seem unnecessary to me and even annoy me somewhat. What are they? Can they be omitted? As I understand it, a C program (not speaking about C++ right now, which I know has some initialization routines) should not need such complementary hidden functions.
Many thanks for an answer, maybe even a link to some more extended information, because this topic interests me a lot.
//edit
OK, here is some disassembly I did way back then. (Digital Mars and the old Borland command-line compiler, which I have also tested, both produce much more code, and I'm especially interested in bcc32, but they do not include readable names/symbols in such a disassembly, so I will not post them here.) These are somewhat readable, but I am not experienced in understanding what they are ;-)
https://dl.dropbox.com/u/42887985/prog_devcpp.htm
https://dl.dropbox.com/u/42887985/prog_lcc.htm
https://dl.dropbox.com/u/42887985/prog_mingw.htm
https://dl.dropbox.com/u/42887985/prog_pelles.htm
Could someone add some explanatory comments about what is in here? (I am afraid there may be some C++ stuff in here; I am interested in pure C additions, not C++, but I'm too tired now to make sure it was compiled in C mode. The extension of the compiled empty-main program was .c, so I assumed the output would be C, not C++.) Thanks for a longer explanation of what this is.
Since your Win32 exe file is a dynamically linked object file, it will contain the data needed by the dynamic linker to do its job, such as the names of libraries to link to and symbols that need resolving.
Even a program with an empty main() will link with the C runtime and kernel32.dll libraries (and probably others; it's been a while since I last did Win32 dev).
You should also be aware that main() is only the entry point of your program: quite a bit has already gone on before this point, such as retrieving and tokenizing the command line, setting up the locale, creating stderr, stdin, and stdout, and setting up the other mechanisms required by the C runtime library, such as atexit(). Similarly, when your main() returns, the runtime does some clean-up, and at the very least needs to call the kernel to tell it that you're done.
As to whether it's necessary? Yes, unless you fancy writing your own program prologue and epilogue each time. There are probably ways of writing minimal, statically linked applications if you're sufficiently masochistic.
As for storage overhead, why are you getting so worked up? It's not enough to worry about.
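To illustrate the "sufficiently masochistic" route, here is a rough sketch of a CRT-free Win32 program. It assumes MSVC-style linker options (/NODEFAULTLIB and /ENTRY; the exact flags vary by toolchain and may need extras such as /GS-). Nothing runs before or after your code, but you also lose everything the runtime normally provides: parsed argc/argv, stdio, atexit() handling, and so on.

/* tiny.c - hypothetical CRT-free program; build with something like:
   cl tiny.c /link /NODEFAULTLIB /SUBSYSTEM:CONSOLE /ENTRY:TinyEntry kernel32.lib */
#include <windows.h>

void __stdcall TinyEntry(void)
{
    /* No CRT startup code ran, so there is no main(), no parsed command
       line and no stdio; we have to exit through the kernel ourselves. */
    ExitProcess(0);
}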
There are several initialization functions that load whenever you run a program on Windows. These functions, among other things, call the main() function that you write - which is why you need either a main() or WinMain() function for your program to run. I'm not aware of other included functions though. Do you have some disassembly to show?
There isn't much detail to go on, but I think most of what you're seeing is probably the routines of the specific C runtime library that your compiler works with.
For instance, there will be startup code that runs from the entry point recorded in the portable executable format and eventually calls the main(int argc, char **argv) that you wrote in your C program.
I unfortunately was doing a little code archeology today (while refactoring out some old dangerous code) and found a little fossil like this:
# line 7 "foo.y"
I was completely flabbergasted to find such an archaic treasure in there. I read up on it on a website for C programming. However it didn't explain WHY anyone would want to use it. I was left to myself therefore to surmise that the programmer put it in purely for the sheer joy of lying to the compiler.
Note:
(Mind you, the fossil was actually on line 3 of the .cpp file. Oh, and the file was indeed pointing to a .y file that was almost identical to this file.)
Does anyone have any idea why such a directive would be needed? Or what it could be used for?
It's generally used by automated code generation tools (like yacc or bison) to set the line number to the value of the line in the actual source file rather than the C source file.
That way, when you get an error that says:
a += xyz;
^ No such identifier 'xyz' on line 15 of foo.y
you can look at line 15 of the actual source file to see the problem.
Otherwise, it says something ridiculous like No such identifier 'xyz' on line 1723 of foo.c and you have to manually correlate that line in your auto-generated C file with the equivalent in your real file. Trust me, unless you want to get deeply involved in the internals of lexical and semantic analysis (or you want a brain haemorrhage), you don't want to go through the code generated by yacc (bison may generate nicer code, I don't know, but nor do I really care since I write the higher-level code).
It has two forms as per the C99 standard:
#line 12345
#line 12345 "foo.y"
The first sets just the reported line number, the second changes the reported filename as well, so you can get an error in line 27 of foo.y instead of foo.c.
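A tiny hand-written demonstration of the effect (the file name and line number are invented): compile this and the diagnostic for the bad statement is attributed to foo.y, not to the .c file the compiler actually read.

/* demo.c - the #line directive redirects subsequent diagnostics */
int main(void)
{
#line 15 "foo.y"
    return xyz;   /* reported as something like: foo.y:15: error: 'xyz' undeclared */
}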
As to "the programmer put it in purely for the sheer joy of lying to the compiler", no. We may be bent and twisted but we're not usually malevolent :-) That line was put there by yacc or bison itself to do you a favour.
The only place I've seen this functionality be useful is for generated code. If you're using a tool that generates the C file from source defined in another form, in a separate file (i.e., the .y file), using #line can help the user know where the "real" problem is, and where they should go to correct it (the .y file where they put the original code).
The purpose of the #line directive is mainly for use by tools - code generators can use it so that debuggers (for example) can keep context of where things are in the user's code or so error messages can refer the user to the location in his source file.
I've never seen that directive put in manually by a programmer, and I'm not sure how useful that would be.
It has a deeper purpose. The original C preprocessor was a separate program from the compiler. After it had merged several .h files into the .c file, people still wanted to know that the error message is coming from line 42 of stdio.h or line 17 of main.c. Without some means of communication, the compiler would otherwise have no way to know which source file originally held the offending line of code.
It also influences the tables needed by any source-level debugger to translate between generated code and source file and line number.
Of course, in this case, you are looking at a file that was written by a tool (probably named yacc or bison) that is used to create parsers from a description of their grammar. This file is not really a source file. It was created from the real source text.
If your archaeology is leading you to an issue with the parser, then you will want to identify which parser generator is actually being used, and do a little background reading on parsers in general so you understand why it is doing things this way at all. The documentation for yacc, bison, or whatever the tool is will likely also be helpful.
I've used #line and #error to create a temporary *.c file that you compile, letting your IDE give you a browsable list of errors found by some third-party tool.
For example, I piped the output file from PC-LINT into a Perl script which converted the human-readable errors to #line and #error lines. Then I compiled this output, and my IDE let me step through each error using F4. A lot faster than manually opening up each file and jumping to a particular line.
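A sketch of what such a generated file might look like (the file names and messages are invented): each #line/#error pair makes the compiler emit an error attributed to the real source location, which the IDE's error list then lets you jump to.

/* lint_errors.c - hypothetical file generated from a lint report */
#line 42 "widget.c"
#error suspicious cast from int to pointer
#line 117 "gadget.c"
#error variable count may be used before initialization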