How to represent functions in a flowchart? (C)

I defined some functions in my ANSI C program (a simple program). I don't know how to represent a function in a flowchart. Can anybody help me?

In my opinion, a flowchart is more of a functional description of your algorithm than the place where you would "define" a function in the sense of your program. Yes, a functional aspect represented in your flowchart may map directly to a single function in your C program, but it may just as well be accomplished by multiple functions or multiple threads. The flowchart isn't where you would describe these details.
In short, the flowchart is not where you should be "defining" functions for your C program. It should be a high-level representation of functional aspects of your program, not the implementation of it.

On a flowchart, a function can be anything: a state, an action that occurs while transitioning between states, etc. It all depends on how your flowchart is organized. I would recommend building your flowchart normally, then going back and adding a function name to the description of anything that is implemented by a function.

There is no dedicated symbol; you can represent your function with the basic input/output/process symbols.

Here is a tutorial on writing C functions using DRAKON charts (DRAKON charts are very similar to flowcharts):
http://drakon-editor.sourceforge.net/cpp/c.html

I had a similar question, Flowcharting a Get-ter, which was answered with:
Flowcharts represent flow of control, not flow of information. Flowcharting formally captures the steps and the linkages between them that describe the transfer of control, which are often based on decisions: in particular, conditional branches and loops. Flow of control is about what is done or happens next, and (sadly) not about the data required to perform that step.
According to Wikipedia, there are some extensions for the flow of data; however, they are basically limited to documents and files.
Generally speaking, state is poorly represented in flowcharting; there is virtually no notion of data, variables, scopes, lifetimes, or types. So data (and metadata about that data, such as allowed or expected types) is mostly documented informally, in human-language descriptions within the individual steps of the flowchart.
Input and output in flowcharting are meant to indicate communication with another independent, top-level process (even if it is just a later-running copy of oneself). As such, this communication is about reading/writing to disk or to a network.
A getter does not qualify as input or output, which is to say communication with another independent process, so I think that is out. I don't think getters even existed when flowcharting was first applied to software design (circa 1950).
You might look into UML. – Erik Eidt Dec 12 '16 at 16:51

Related

Counting the number of functions and data structures in a C codebase

Is there a way to take a C file (or a directory/project) and count the number of functions + data structures? This is similar to counting the LOC but instead is focused on counting the number of "conceptual units" the program handles as a way to measure its complexity.
It sounds like you need to survey your source code. Doxygen is an excellent tool for summarizing just about every aspect of a C project (and of many other languages). It is open source, easily downloaded, and its list of features is extensive.
In a Linux environment, you'll want to look at a tool like objdump, which will show you a lot of information about the compiled output.
There are pages that explain some of its more complicated output, such as this.
But perhaps one of the simplest invocations is objdump -T.
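As a rough, illustrative example (not from the original answer): piping the output of objdump -T on a shared library through grep and wc -l, filtering for symbols that live in the .text section, gives an approximate count of exported functions. The exact column layout varies between platforms and binutils versions, so treat any such one-liner as a starting point rather than a precise metric.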

C: Using serialized data as type

So I've run into an interesting design pattern and I wanted to know if you guys had an opinion on it.
Basically, the design passes everything around as a pre-serialized type. There are no "types" for the returns, for example; everything is passed as a simple uint8_t*. There is a defined header that "tells" you what is in the buffer, how big it is, what the version of the buffer is, etc. I call it "pre-serialized" because it forces flattening of all structures.
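To make the shape of the pattern concrete, here is a minimal sketch of the kind of header the question describes; the field names are invented, not taken from the question.

    #include <stdint.h>

    /* Hypothetical sketch of the header described above; field names are
       made up, but the idea is the same: a tag, a version and a length in
       front of an otherwise opaque byte buffer. */
    struct blob_header {
        uint16_t type;     /* enumerated "what is in the buffer" */
        uint16_t version;  /* format version of the payload */
        uint32_t length;   /* payload size in bytes, following the header */
    };
    /* A message is then just: [struct blob_header][length bytes of payload],
       handed around as a plain uint8_t*. */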
The pros:
You can easily write it (or even a set of them) to whatever you want: files, IO, whatever.
Can store arbitrary data.
The cons (IMHO):
No type safety, which is going to be a nightmare.
The programmer has to do the parsing. Even if there is an enumerated type, the user has to know what that type means. Even if there are functions to parse the type, the programmer has to know that those are the functions to call.
Version hell: changing the code will cause a ripple effect of errors. Because every call site parses the buffer differently, you have no idea where the code works and where it is broken.
It is viral: because it is flat, you can't "insert" the header onto the end of outside data. You could wrap the call if you copy your "data", but that could cause an unnecessary copy, which would be SLOW. So either your code is slower than it needs to be, or you conform to this data structure.
It isn't human-readable OR debuggable.
Have you seen this design pattern before? Is there a name for this design pattern? Things I missed?
Is there a name for this design pattern?
Well, Legacy Code? :) I have seen such a design in 30-year-old COBOL systems...
The pros you have stated are also easily achievable by using an XML format (or JSON):
You can easily write it (or even a set of them) to whatever you want: files, IO, whatever - most of all, web services!
Can store arbitrary data.
Furthermore, all your cons are eliminated.
The only pro I can see in your solution is compactness: when every byte counts and any overhead is too expensive, this is nice.
Added: COBOL has a feature for easily defining the structure of such serialized data; see the PICTURE clause. Reading the data is then very easy: you read it as variables. (Much like having binary data, defining a struct in C, and typecasting the binary buffer to the struct.)
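A rough C analogue of that last parenthetical, as an illustrative sketch with an invented record layout:

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical fixed record layout; real code must also worry about
       struct padding, alignment and byte order, so copying the bytes into
       the struct is safer than casting the raw pointer. */
    struct employee_rec {
        char     name[20];
        uint32_t id;
        uint16_t dept;
    };

    void read_record(const uint8_t *buf, struct employee_rec *out)
    {
        memcpy(out, buf, sizeof *out);  /* the "typecast" done portably */
    }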
As Honza said, this would be normal in legacy COBOL/PL/1 (was there a COBOL/PL/1 conversion, or an interface to COBOL programs?).
In COBOL this design pattern would make sense; I am not sure about C, though (one of the binary serialization packages, or JSON etc., might be more sensible).
In COBOL, you would have a copybook which all programs would use, and you could edit the data using that copybook (with something like File-AID or the Micro Focus Data Editor).
Why use this "design pattern" in Cobol:
Regression testing of modules: you can write a driver module like
    Read Test-data-file
    while more-data
        Call Module
        write Result to output-file
        Read Test-data-file
    end
You can then do a compare between the output from the pre-change program and the output from the changed program.
Testing: sometimes you can use a "production file" in testing.
A file provides a trace or snapshot of what is going on, which can be very useful.
Easy to reorganize batch streams:
Split a program up (and pass the data via a file). There are a variety of reasons for doing this, including:
the program has gotten too big and is hard to maintain
sorting the data
performance (use a file rather than hitting the DB multiple times)
new uses for the extracted data
While your cons are valid for C, they will be less of an issue in Cobol.
The key to using this "design pattern" is being able to edit/view/compare the format. If you cannot edit/view/compare a file, I do not see the point.

Why do libraries written in C use so many structs?

I've looked at some open-source libraries in a few places, and I've realized that those libraries are basically a big stack of structs, with few methods.
Why do libraries written in C use so many structs? What is the basis for this? To me it looked like an attempt to simulate object orientation, because a quick search told me that each struct is "instantiated" by the program using the library in order to do something; for example, in some desktop environments for Linux that I've seen, each window is a struct in the GUI library used.
Anyway, that is the question.
Structs are a great way to organize data. And data is fundamental, as Fred Brooks knew decades ago:
Show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious.
Object-oriented programming doesn't have to be merely simulated in C; it can be realized. For example, did you know that inside your structs you can store function pointers which operate on those same structs? Then you are a little bit closer to C++'s classes.
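A minimal sketch of that idea (the names are invented for illustration):

    #include <stdio.h>

    /* The struct carries both data and a function pointer that operates
       on it: a rough, hand-rolled analogue of a C++ method. */
    struct shape {
        double width, height;
        double (*area)(const struct shape *self);
    };

    static double rect_area(const struct shape *self)
    {
        return self->width * self->height;
    }

    int main(void)
    {
        struct shape r = { 3.0, 4.0, rect_area };
        printf("area = %f\n", r.area(&r));  /* akin to r.area() in C++ */
        return 0;
    }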
Also consider extensibility: even a function taking many arguments may be improved by taking a single struct, because then its signature does not need to change when a new argument is added.
Finally, C does not have multiple return values from a single function call. But it can return a struct, which is about the same thing. C is a lot about building your own tools from the raw language, and being able to stash a bunch of related data and/or functions together in one place is a good building block.
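A small sketch of both points, with invented names: a single options struct instead of a growing parameter list, and a struct used to return two values from one call.

    /* Illustrative only; not from the original answer. */
    struct div_options {
        int round_up;   /* new options can be added without touching callers */
    };

    struct div_result {
        int quotient;
        int remainder;
    };

    struct div_result divide(int a, int b, struct div_options opt)
    {
        struct div_result r;
        r.quotient  = a / b;
        r.remainder = a % b;
        if (opt.round_up && r.remainder != 0)
            r.quotient += 1;
        return r;       /* effectively two return values in one call */
    }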
With or without object orientation, structures are a useful way to group aggregate data into a single symbol. You can copy the structure wherever you like without having to write out all the members each time, and this makes the structure easier to change if you have to.
It also makes it easier to reference certain members using pointer arithmetic, if you're careful (see sockaddr).
Same argument as with arrays.
Simply put, there's no reason not to use structures.
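To make the sockaddr remark above concrete, here is the familiar pattern from the sockets API (a sketch; error handling omitted): a specific address structure is filled in and then passed where the generic struct sockaddr is expected.

    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>

    /* Fill in the IPv4-specific structure, then pass it to a call that
       takes the generic struct sockaddr pointer. */
    int bind_loopback(int sockfd, unsigned short port)
    {
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_port        = htons(port);
        addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
        return bind(sockfd, (struct sockaddr *)&addr, sizeof addr);
    }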
Structures are also useful when passing data around via a pointer, because a single pointer is enough to reach a whole bunch of related data within the structure.
One, it keeps the APIs clean. Instead of passing N separate arguments to a function, you pass a single argument containing N members.
Two, it allows the library to hide implementation details from the programmer. For example, the C FILE type abstracts away some details of stream I/O, details which vary from implementation to implementation. We don't need to know those details, so they're not exposed to us; we just use the FILE type to pass that information around.
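The FILE-style hiding can be sketched with an opaque handle of one's own; the counter names here are invented for illustration:

    /* counter.h: the public header exposes only an incomplete type,
       in the same spirit as FILE; callers never see the layout. */
    typedef struct counter counter;
    counter *counter_new(void);
    void     counter_inc(counter *c);
    int      counter_value(const counter *c);
    void     counter_free(counter *c);

    /* counter.c: only this file knows the layout, so it can change
       without breaking users of the library. */
    #include <stdlib.h>
    struct counter { int value; };
    counter *counter_new(void)               { return calloc(1, sizeof(counter)); }
    void     counter_inc(counter *c)         { c->value++; }
    int      counter_value(const counter *c) { return c->value; }
    void     counter_free(counter *c)        { free(c); }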

Alternative to Hash Map for Small Data set in C

I am currently working on a command-line interface for a particle simulator. Its parser reads input in the following format:
[command] [argument]* (-[flag] [flag argument])
Currently, the command is sent through a conditional block, compared against various known commands, and the corresponding data packet is sent to the matching function. This, however, seems clunky, inefficient and inelegant.
I am thinking about using a hashmap instead, with a string representation of a command as the key and a function pointer as the value. The function referenced would then be sent a data packet containing arguments, flags, etc.
Is a hash map overkill in this situation? Does the extra infrastructure required to implement one outweigh the potential benefits? I am aiming for speed, elegance, function, and, since this is an open-source project, extensibility.
Thanks for the help.
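For scale, here is roughly what the table the question suggests could look like in C, with invented command names. For a handful of commands, a plain array with a linear scan already gives the same structure, and a real hash only matters once the command set grows.

    #include <stdio.h>
    #include <string.h>

    /* Command names mapped to handler function pointers; the handlers
       receive the already-split arguments. Purely illustrative. */
    typedef void (*cmd_fn)(int argc, char **argv);

    static void cmd_run(int argc, char **argv)  { (void)argc; (void)argv; puts("run");  }
    static void cmd_stop(int argc, char **argv) { (void)argc; (void)argv; puts("stop"); }

    static const struct { const char *name; cmd_fn fn; } commands[] = {
        { "run",  cmd_run  },
        { "stop", cmd_stop },
    };

    static int dispatch(const char *name, int argc, char **argv)
    {
        for (size_t i = 0; i < sizeof commands / sizeof commands[0]; i++) {
            if (strcmp(commands[i].name, name) == 0) {
                commands[i].fn(argc, argv);
                return 0;
            }
        }
        return -1;  /* unknown command */
    }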
You might want to consider the Ternary Search Tree. It has good performance and efficient use of storage, and you don't need a hash function or a collision strategy.
The linked Bentley/Sedgewick article is a very thorough yet readable explanation of the accompanying C source.
I've been using a TST for name lookup in the past three versions of my PostScript interpreter. The only changes that have been needed were due to changes in memory management. Here's a version I modified (lightly) to use explicit pointers. I use yet another version in my PostScript interpreter, in any of the xpost2*.zip versions, in the file core.c, which uses byte offsets instead of pointers (they have to be added to the user-memory byte pointer to yield a real pointer).
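Since neither linked version is reproduced here, a minimal sketch of a TST mapping command strings to handler function pointers might look like this. It is illustrative, not the Bentley/Sedgewick code, and it assumes non-empty keys; allocation failure is not handled.

    #include <stdlib.h>

    typedef void (*handler_fn)(void *args);

    struct tst_node {
        char splitchar;
        struct tst_node *lo, *eq, *hi;
        handler_fn handler;          /* set on the node that ends a key */
    };

    static struct tst_node *tst_insert(struct tst_node *n, const char *key, handler_fn h)
    {
        if (n == NULL) {
            n = calloc(1, sizeof *n);  /* allocation failure not handled here */
            n->splitchar = *key;
        }
        if (*key < n->splitchar)
            n->lo = tst_insert(n->lo, key, h);
        else if (*key > n->splitchar)
            n->hi = tst_insert(n->hi, key, h);
        else if (key[1] != '\0')
            n->eq = tst_insert(n->eq, key + 1, h);
        else
            n->handler = h;          /* end of key: store the payload */
        return n;
    }

    static handler_fn tst_search(const struct tst_node *n, const char *key)
    {
        while (n != NULL) {
            if (*key < n->splitchar)
                n = n->lo;
            else if (*key > n->splitchar)
                n = n->hi;
            else if (key[1] == '\0')
                return n->handler;   /* matched the whole key */
            else {
                n = n->eq;
                key++;
            }
        }
        return NULL;
    }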
Speed gained will probably be minimal, but you could hash the command to convert it to a number and then use a switch statement. Faster than a hash map.
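One degenerate but runnable illustration of that idea, where the "hash" is simply the first character so the compiler can build a jump table, and strcmp confirms the full name (command names invented):

    #include <string.h>

    int dispatch_by_first_char(const char *cmd)
    {
        switch (cmd[0]) {            /* trivial "hash": the first letter */
        case 'r':
            if (strcmp(cmd, "run") == 0)  { /* handle run  */ return 0; }
            break;
        case 's':
            if (strcmp(cmd, "stop") == 0) { /* handle stop */ return 0; }
            break;
        }
        return -1;                   /* unknown command */
    }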

Advanced Memory Editing/Function Calling

I've gotten extremely interested in coding trainers (programs that modify values in a different process) for video games. I've done the simple "god mode" and "unlimited money" things, but I want to do a lot more than that. (Simple editing using WriteProcessMemory.)
Memory addresses of functions in the video game I'm working on are available on the internet, and one of those functions is something like "CreateCar"; I want to call that function from an external program.
My question: how can I call a function in an external process from C/C++, given the function's address, using a process handle or some other method?
PS: If anyone could link me to tools (I've got debuggers, no need for more..) that help with this sort of thing, that'd be nice.
You can't, at least not safely. If the function has exactly one parameter, you can create a new thread in that process at the function address. If it has more, you might want to inject a DLL to do it.
But neither of these solutions are safe because creating a new thread to run the function can and will corrupt data structures if other threads are currently using them. The only safe way to call a function in another process is to somehow insert the call in exactly the right place in that process so that it's logically correct for that program. Never mind the technical hurdles (inserting hooks at arbitrary locations); you need to know exactly how the program works, which basically means you have a lot of reverse engineering ahead of you (or you need to get the source code).
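For the unsafe one-parameter case described above, the usual mechanism on Windows is CreateRemoteThread pointed at the known address. A heavily simplified sketch follows: the PID, function address and argument are placeholders, error handling is minimal, and all the caveats above still apply (the target must match the LPTHREAD_START_ROUTINE calling convention).

    #include <windows.h>

    /* Start a thread in the target process at a known function address. */
    int call_remote(DWORD pid, LPVOID func_addr, LPVOID arg)
    {
        HANDLE proc = OpenProcess(PROCESS_CREATE_THREAD | PROCESS_QUERY_INFORMATION |
                                  PROCESS_VM_OPERATION | PROCESS_VM_READ | PROCESS_VM_WRITE,
                                  FALSE, pid);
        if (proc == NULL)
            return -1;

        HANDLE thread = CreateRemoteThread(proc, NULL, 0,
                                           (LPTHREAD_START_ROUTINE)func_addr,
                                           arg, 0, NULL);
        if (thread == NULL) {
            CloseHandle(proc);
            return -1;
        }
        WaitForSingleObject(thread, INFINITE);
        CloseHandle(thread);
        CloseHandle(proc);
        return 0;
    }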
