Tool to convert (translate) C to Go? [closed] - c

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 5 years ago.
What tool to use to convert C source code into Go source code?
For example, if the C code contains:
struct Node {
struct Node *left, *right;
void *data;
};
char charAt(char *s, int i) {
return s[i];
}
the corresponding Go code generated by the tool should be:
type Node struct {
left, right *Node
data interface{}
}
func charAt(s string, i int) byte {
return s[i]
}
The tool does not need to be perfect. It is OK if some parts of the generated Go code need to be corrected by hand.

rsc created github.com/rsc/c2go to convert the C-based Go compiler into Go.
As an external example, akavel seems to be trying to use it to create a Go-based Lua: github.com/akavel/goluago/
github.com/xyproto/c2go is another project, but it hasn't been touched in a while.

I guess no such tool (for C to Go source code conversion) exists today. You might consider making your own converter. The question becomes: is it worth it, and how would you do it?
It probably isn't worth the effort, because Go and C can be made somewhat interoperable. For example, if you use GCC 4.6 (or the to-be-released 4.7, i.e. the latest snapshot), you can probably link C and Go code together, with some care.
Of course, as usual, the devil is in the details.
If you want a converter, do you want the resulting Go code to be readable and editable? Then the task is more difficult, since you want to keep the structure of the code, and you also want to keep the comments. In that case, you probably need your own C parser (and that is a difficult task).
If you don't care about the readability of the generated Go code, you could for example extend an existing compiler to do the work. For example, GCC is extensible through plugins or through MELT extensions, and you could customize GCC (with MELT, or your own C plugin for GCC) to transform the Gimple representation (the main internal representation for instructions inside GCC) into unreadable Go code. This is somewhat simpler (but still requires more than a week of work).
Of course, Go interfaces, channels and even memory management (garbage-collected memory) have no standard C counterparts.

Check out this project
https://github.com/elliotchance/c2go
The detailed description is in this article
Update: August 6, 2021
Also check this one
https://github.com/gotranspile/cxgo

I'm almost sure there is no such tool, but IMHO in every language it's good to write in that language's own "coding style".
Remember how much we all loved C preprocessor tricks and really artistic work with pointers? Remember how much care it took to deal with malloc/free or with threads?
Go is different. You have no preprocessor, but you have closures, objects with methods, interfaces, garbage collector, slices, goroutines and many other nice features.
So why convert the code instead of rewriting it in a much better and cleaner way?
Of course, I hope you don't have a million lines of C code that you have to port to Go :)

Take a look at SWIG (http://www.swig.org/Doc2.0/Go.html); it will translate C/C++ headers to Go and wrap them as a starting point. Then you can port parts over bit by bit.
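For illustration, a minimal SWIG interface file might look like this; example and example.h are placeholder names of my own, not anything from the SWIG docs:

```
/* example.i -- hypothetical module wrapping a C header for Go */
%module example
%{
#include "example.h"
%}
%include "example.h"
```

Running something along the lines of swig -go example.i then generates the wrapper sources, which you can refine by hand afterwards.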

As far as I know, such a tool does not exist (yet), so you're bound to convert your C code to Go by hand.
I don't know how complex the C code is that you want to convert, but you might want to keep in mind that Go has a "special" way of doing things, like the use of interfaces and channels.


Taking an Image from a Webcam in Ubuntu Using C [closed]

Closed 4 years ago.
I am trying to use my webcam (Creative Live! Cam Chat) to take an image in C/C++ and save it to a certain folder (running Ubuntu). Ideally I'm looking for something as simple as possible, even if it's not the most elegant solution.
So far I've found v4l2grab which I find incredibly confusing to understand, and also doesn't seem to work with the Creative webcam (returns a black picture that is 5Kb in size) although it does seem to work with the webcam installed as a part of my laptop.
Are there any simple C libraries or code that I could use to do this?
I don't know of a good library for the purpose (please add a comment and tell me if there is one :-)). Note: for some uses, e.g. OpenCV is just fine, and if it is enough for you, definitely do use it. But if you want more, read on.
In that case you should just write your own code against V4L2; it's not particularly hard. Here's one related question: How to use/learn Video4Linux2 (On Screen Display) Output APIs?
Some points to make learning easier:
After calling an ioctl, always check the return status and print a possible error message. You will be getting lots of these while you work, so just be systematic about it. I suggest a function like check_error shown below, called immediately after every ioctl call.
IMO a must: use an IDE/editor which can follow a symbol to the actual header file (for example in Qt Creator, which is a fine pure-C application IDE despite the name, hit F2 on a symbol and it will go even into system headers to show you where it is defined). Use this liberally on V4L2-related symbols and defines, and read the comments in the header file; that's often the best documentation.
Use the query ioctls and write functions to dump the values they return in a nice format. For example, have a function void dump_cap(const struct v4l2_capability *cap) {...}, and add a similar function for every struct you use in your code as you go.
Don't be lazy about setting values inside structs you pass to ioctl. Always initialize structs to 0 with memset(&ioctl_struct_var, 0, sizeof(ioctl_struct_var)); after declaring them, and also when you reuse them (except when doing a 'get-modify-set' operation on some settings, which is quite common with V4L2).
If possible, have two (or more) different webcams (different resolutions, different brands), and test with all of them. This is easiest if you take the video device as a command line parameter, so you can just call your program with a different argument for each cam you have.
Take small steps. Often ioctls may not return what you expect, so there is no point writing code which uses the returned data before you have actually seen what the query returns for your cameras.
The check_error function mentioned above:
void check_error(int return_value_of_ioctl, const char *msg) {
if (return_value_of_ioctl != -1) return; /* all ok */
int eno = errno; /* just to avoid accidental clobbering of errno */
fprintf(stderr, "error (%d) with %s: %s\n", eno, msg, strerror(eno));
exit(1); /* optional, depending on how you want to work with your code */
}
Call that immediately after every ioctl, for example:
struct v4l2_capability cap;
memset(&cap, 0, sizeof(cap));
int r=ioctl(fd, VIDIOC_QUERYCAP, &cap);
check_error(r, "VIDIOC_QUERYCAP");
dump_querycap(&cap);
You can use OpenCV. Use cvCreateCameraCapture (you can call it with argument 0 to get the default cam) to create a capture object and then call cvQueryFrame on that object. Each call to cvQueryFrame returns a frame.
Have you had a look at OpenCV? It's quite handy for all sorts of image capture and processing. The process of taking a picture is well documented, but I suggest you look at something like this, if you do indeed decide to use it.
Take a look at the uvccapture source code. It is very simple, yet standard C, and uses only the V4L2 interface. OpenCV would also work, but it is more complicated to set up and compile.

A newbie question regarding making an executable program [closed]

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form. For help clarifying this question so that it can be reopened, visit the help center.
Closed 12 years ago.
I’m a student of another discipline. I would like to make an executable program on my own. Suppose the program I would like to make is a small dictionary with a few thousand words. In the hope of creating such a program on my Windows XP machine, I got a compiler called Borland C++ V.5.02. I downloaded some books on the C language, like The C Programming Language (2nd Edition) by Brian W. Kernighan and Dennis M. Ritchie; Sams Teach Yourself C in 24 Hours; and Programming with C (2nd edition) by Byron S. Gottfried. I started reading those books, but soon I found there were no instructions on how to make such a program, or I was unable to find them among those huge contents. I’m expecting some instruction from you all telling me how I should proceed. Please leave some comments to help me create this type of program.
C is not the friendliest language to learn on your own if you have no notions of computer architecture.
Maybe something like Python is more suitable?
I'm sure you can download lots of Python books :-)
Welcome to programming. :)
It might be easier to think of your problem in small pieces:
How will you store your dictionary?
Plain text, any order -- simple to work with, slower
Plain text, sorted -- requires sorting the word list (easy with a sort utility, but you or your users have to remember to sort the list); faster
A binary version (say, 32-byte 'records' for words): much harder to edit, but fast lookups
A binary encoding of a tree structure, child nodes are allowed character transitions: requires tools to create, very fast lookups
Clever hashing (http://en.wikipedia.org/wiki/Bloom_filters) can go very, very quickly.
Once you've picked the storage (I suggest plain text, any order, as a good starting point) you'll need to figure out the algorithm:
For a given word, compare it against every single word in the file
So you'll need to read every line in the file, one at a time (fgets in a loop)
Compare the word with the line (strcmp)
return 1 if found
return 0 if you reach the end of the file
Now, iterate this, once for each word in the input:
read in a line (fgets)
tokenize the string into words (strtok)
strip off punctuation (or ignore? or ...)
pass the word to the routine you wrote earlier
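The two loops above can be sketched in plain C. This is a hedged illustration of the naive approach; the function names and the separator set are my own, not from any particular source:

```c
#include <stdio.h>
#include <string.h>

/* Return 1 if `word` appears as a line in the open word-list file, else 0.
   Linear scan: rewind and read every line with fgets, compare with strcmp. */
int word_in_list(FILE *wordlist, const char *word) {
    char line[128];
    rewind(wordlist);
    while (fgets(line, sizeof line, wordlist) != NULL) {
        line[strcspn(line, "\r\n")] = '\0';  /* strip the newline fgets keeps */
        if (strcmp(line, word) == 0)
            return 1;
    }
    return 0;
}

/* Split one line of input into words with strtok and look each one up. */
void check_line(FILE *wordlist, char *line) {
    const char *seps = " \t.,;:!?\"\n";
    for (char *w = strtok(line, seps); w != NULL; w = strtok(NULL, seps))
        printf("%s: %s\n", w, word_in_list(wordlist, w) ? "found" : "NOT FOUND");
}
```

Note the full rescan of the word list per lookup; that is exactly what makes the naive version slow for large dictionaries.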
This is an awful dictionary program: for a dictionary of 100,000 words (my /usr/share/dict/words is 98,000, and I think the word list on OpenBSD systems is in the 150,000 range) and a document of 5,000 words, you'll run your inner loop roughly 250,000,000 times. That'll be slow on even fast machines, which is why I mentioned all those much-more-complex data structures earlier: if you want it fast, you can't do it naively.
If you sort the word list, it'll be roughly 83,000 comparisons in your inner-loop.
(And now a small diversion: the look program supports a -b option to ask for a binary search; without the -b, it runs a linear search:
'help' 'universe' 'apple'
linear .040s .054s .058s
binary .001s .001s .001s
In other words, if you're going to be doing 5,000 of these, sorted word list will give you much faster run times.)
If you build a finite-state machine (the tree structure), it'll be as many comparisons as your input word has characters, times 5000. That'll be a huge savings.
If you build the bloom filters, it'll be computing one or two hashes (which is some simple arithmetic on your characters, very quick) and then one or two lookups. VERY fast.
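The Bloom filter idea can be sketched in a few lines of C. This is a hedged illustration only; the bit-array size and the two hash functions (djb2 and sdbm) are arbitrary choices of mine, not tuned values:

```c
#include <stdint.h>

#define BLOOM_BITS 65536  /* 8 KiB bit array; size is an arbitrary assumption */
static unsigned char bloom[BLOOM_BITS / 8];

/* Two cheap string hashes (djb2 and sdbm), reduced to bit positions. */
static uint32_t h1(const char *s) {
    uint32_t h = 5381;
    while (*s) h = h * 33 + (unsigned char)*s++;
    return h % BLOOM_BITS;
}
static uint32_t h2(const char *s) {
    uint32_t h = 0;
    while (*s) h = (unsigned char)*s++ + (h << 6) + (h << 16) - h;
    return h % BLOOM_BITS;
}

/* Set the two bits for this word. */
void bloom_add(const char *s) {
    uint32_t a = h1(s), b = h2(s);
    bloom[a / 8] |= (unsigned char)(1u << (a % 8));
    bloom[b / 8] |= (unsigned char)(1u << (b % 8));
}

/* 0 = definitely absent; 1 = probably present (false positives possible). */
int bloom_maybe(const char *s) {
    uint32_t a = h1(s), b = h2(s);
    return ((bloom[a / 8] >> (a % 8)) & 1) && ((bloom[b / 8] >> (b % 8)) & 1);
}
```

A "maybe" answer would then be confirmed against the real word list, so false positives only cost an extra lookup.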
I hope this is helpful, at least the simpler versions shouldn't be hard to implement.
Programming is not the easiest thing to do, despite what some people think. It takes a lot of time to learn and master, so if you are really considering creating your own application, a lot of patience and time is required.
If you want to use this application of yours to learn programming, then I'd suggest finding some tutorials for a particular language and digging in.
If it's something you need for your main discipline, maybe you could hire somebody to create such an app for you, browse sourceforge.net for a similar solution, or find a commercial alternative.
And yes, it's hard in the beginning :)
If you want to learn C++, you can use Microsoft Visual C++ Express, which is free. Creating an executable program is a pretty straightforward task. Look at the Visual C++ Guided Tour.

Tool to Scan Code Comments, and convert to Standard Format [closed]

Closed 5 years ago.
I'm working on a C project that has seen many different authors and many different documentation styles.
I'm a big fan of doxygen and other documentation generations tools, and I would like to migrate this project to use one of these systems.
Is anybody aware of a tool that can scan source code comments for keywords like "Description", "Author", "File Name" and other sorts of context to intelligently convert comments to a standard format? If not I suppose I could write a crazy script, or convert manually.
Thanks
The only one I can think of comes from the O'Reilly book on lex & yacc: there is code that outputs a file's comments on the command line. A section in chapter 2 shows how to parse code for comments, including the // and /*..*/ forms. There's a link on the book's page for examples; download the file progs.zip. The file you're looking for is ch2-09.l, which needs to be built and can easily be modified to output the comments. That output can then be used in a script to filter out 'Name', 'Description', etc.
I can post the instructions here on how to do this if you are interested.
Edit: I think I have found what you are looking for, a prebuilt comment documentation extractor here.
I think as tommieb75 suggests, a proper parser is the way to handle this.
I'd suggest looking at ANTLR, since it supports re-writing the token buffer in-place, which I think would minimise what you have to do to preserve whitespace etc - see chapter 9.7 of The Definitive ANTLR reference.
If you have relatively limited set of styles to parse, it would be fairly simple to write a Visual Studio macro (for use in the IDE) or a standalone application (for just processing the source code 'offline') that will search a file for comments and then reformat them into a new style using certain titles or tags to split them apart.
A shortcut that may help you is to use my AtomineerUtils Pro Documentation add-in. It can find and convert all the comments in a source file in one pass. Out of the box it parses XML Documentation, Doxygen, JavaDoc and Qt formats (or anything sufficiently close to them) and can then output the comment in any of those formats. It can also be configured to convert incompatible legacy comments. There are several options to aid conversion, but the most powerful calls a Visual Studio Macro with the comment text before it parses it, allowing you to apply a bit of string processing to convert legacy comments into a format that AtomineerUtils can subsequently read (an example macro for one of the most commonly used legacy styles is supplied on the website, so it's usually pretty simple to modify this to cope with your legacy format, as long as it's suitable for a computer to parse).
The converted text need not be particularly tidy: once AtomineerUtils can extract the documentation entries, it will clean up the comments for you. It optionally applies word wrapping, consistent element ordering, spacing etc. automatically, and ensures that the comment accurately describes the code element it documents (its entries match the params, typeparams, exceptions thrown etc.), then outputs a replacement comment in its configured format. This saves you doing a lot of work in the conversion macro to get things tidy. Once you have finished converting, you can continue to use the add-in to save time documenting your code, and to ensure that all new comments continue in the same style.

Looking for a good hash table implementation in C [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
I am primarily interested in string keys. Can someone point me towards a library?
I had the same need, did some research, and ended up using libcfu.
It's simple and readable, so if I need to modify it, I can do so without spending too much time understanding it. It's also BSD licensed. No need to change my structs (to embed, say, a next pointer).
I had to reject the other options for the following reasons (my personal reasons, YMMV):
sglib --> it's a macro maze, and I wasn't comfortable debugging/making changes on such a code base using just macros
cbfalconer --> lots of licensing red flags, the site was down, and too many unfavorable discussions on the web about support/author; didn't want to take the risk
google sparse-hash --> as stated already, it's for C++, not C
glib (gnome hash) --> looked very promising, but I couldn't find any easy way to install the developer kit; I just needed the C routines/files, not the full-blown development environment
Judy --> seems too complex for simple use; also wasn't ready to debug it myself if I ran into any issues
npsml (mentioned here) --> can't find the source
strmap --> found it very simple and useful, but it's too simplistic in that both key and value must be strings; the value being a string seems too restrictive (it should accept void *)
uthash --> seems good (it has been mentioned on Wikipedia's hash table page), but it requires the struct to be modified, and I didn't want to do that; performance is not really a concern for my use, it's more about development velocity
In summary: for very simple use, strmap is good; use uthash if you are concerned about additional memory use. If speed of development or ease of use is the primary objective, libcfu wins (note that libcfu internally does memory allocation to maintain the nodes/hashtables). It's surprising that there aren't many simple C hash implementations available.
GLib is a great library to use as a foundation in your C projects. They have some decent data structure offerings including Hash Tables: http://developer.gnome.org/glib/2.28/glib-Hash-Tables.html (link updated 4/6/2011)
For strings, the Judy Array might be good.
A Judy array is a complex but very fast associative array data structure for storing and looking up values using integer or string keys. Unlike normal arrays, Judy arrays may be sparse; that is, they may have large ranges of unassigned indices.
Here is a Judy library in C.
A C library that provides a state-of-the-art core technology that implements a sparse dynamic array. Judy arrays are declared simply with a null pointer. A Judy array consumes memory only when it is populated, yet can grow to take advantage of all available memory if desired.
Other references,
This Wikipedia hash implementation reference has some C open source links.
Also, cmph -- A Minimal Perfect Hashing Library in C, supports several algorithms.
There are some good answers here:
Container Class / Library for C
http://sglib.sourceforge.net.
http://cbfalconer.home.att.net/download/
Dave Hanson's C Interfaces and Implementations includes a fine hash table and several other well-engineered data structures. There is also a nice string-processing interface. The book is great if you can afford it, but even if not, I have found this software very well designed, small enough to learn in its entirety, and easy to reuse in several different projects.
A long time has passed since I asked this question... I can now add my own public domain library to the list:
http://sourceforge.net/projects/npsml/
C Interfaces and Implementations discusses hash table implementations in C. The source code is available online. (My copy of the book is at work so I can't be more specific.)
Apache's APR library has its own hash-implementation. It is already ported to anything Apache runs on and the Apache license is rather liberal too.
khash.h from samtools/bwa/seqtk/klib
curl https://raw.github.com/attractivechaos/klib/master/khash.h
via http://www.biostars.org/p/10353/
Never used it but Google Sparsehash may work
Download tcl and use their time-proven tcl hash function. It's easy. The TCL API is well documented.
Gperf - Perfect Hash Function Generator
http://www.ibm.com/developerworks/linux/library/l-gperf.html
https://github.com/dozylynx/C-hashtable
[updated URL as original now 404s: http://www.cl.cam.ac.uk/~cwc22/hashtable/ ]
Defined functions
* create_hashtable
* hashtable_insert
* hashtable_search
* hashtable_remove
* hashtable_count
* hashtable_destroy
Example of use
struct hashtable *h;
struct some_key *insert_key, *retrieve_key;
struct some_value *v, *found;
static unsigned int hash_from_key_fn( void *k );
static int keys_equal_fn ( void *key1, void *key2 );
h = create_hashtable(16, hash_from_key_fn, keys_equal_fn);
insert_key = (struct some_key *) malloc(sizeof(struct some_key));
retrieve_key = (struct some_key *) malloc(sizeof(struct some_key));
v = (struct some_value *) malloc(sizeof(struct some_value));
(You should initialise insert_key, retrieve_key and v here)
if (! hashtable_insert(h,insert_key,v) )
{ exit(-1); }
if (NULL == (found = hashtable_search(h,retrieve_key) ))
{ printf("not found!"); }
if (NULL == (found = hashtable_remove(h,retrieve_key) ))
{ printf("Not found\n"); }
hashtable_destroy(h,1); /* second arg indicates "free(value)" */
The STL has map, and some implementations also have hash_map, which map keys to values, if you are able to use C++.
http://www.cplusplus.com/reference/stl/map/

C XML library for Embedded Systems [closed]

Closed 8 years ago.
I'm working on a project for an embedded system that's using XML for getting data into and out of the system. I don't want the XML handling to devolve into a bunch of bits that build XML strings using snprintf()/strcat() and friends or parse XML by counting "<" and ">" characters.
I've found several XML libraries, a couple of which might even be small enough, but the closest they come to C is C++, which is not in the cards for this system. I'm hoping I can find an XML library that meets the following constraints:
C source code
no dynamic memory allocation
cheap. Free is better, but copyleft won't do the trick.
It doesn't have to be a full parser - I just want to be able to pull text out of nested elements and have a reasonably simple way to generate XML that doesn't rely on format strings. Attributes aren't being used (yet), so the library doesn't need to support them even. The XML documents will be pretty small, so something that's DOM-like would be fine, as long as it'll work with client-provided buffers (parsing the raw XML in-place would be nice).
PugXML and TinyXML look to be pretty close, but I'm hoping that someone out there knows about an XML lib tailored just for C-based embedded systems that my googling is missing.
I don't know about dynamic memory allocation, but a standard C XML parser is expat, which is the underlying library for a number of parsers out there.
I am not sure but perhaps Mini-XML: Lightweight XML Library will help you:
Mini-XML only requires an ANSI C compatible compiler.
It is freeware.
Its binary size is around 100k.
It parses the whole XML file and then stores all the info in a linked list.
You could use an ASN.1 XER encoder; there's a free one at http://lionet.info/asn1c/
You could use the one from Gnome.
I have written sxmlc to be as simple as possible and I know people use it in routers, to perform in-place parsing of web queries.
Unfortunately (and I'm 5 years late...) it does use memory allocation, though kept to a minimum: one buffer to read each "XML line" (what lies between < and >, sorry ;)), and many small buffers to keep track of the tag name, attribute names and values, and text (though I always wanted to use char[16] or so for those).
And it makes use of strdup/strcpy and such.
As I want anybody to be able to use it freely, the licence is BSD.
Xerces-C library would be optimal to use, in this scenario.
If it is going to be a pretty small XML document, why not generate it programmatically, using sprintf and similar functions, and use string-extraction functions to parse it? But as mentioned earlier, if it's a little bigger, I would suggest using the Xerces-C library, as it is open source.
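To illustrate how far the "just use string functions" suggestion can go without dynamic allocation, here is a hedged sketch of a helper that copies the text of the first <tag>...</tag> pair into a caller-provided buffer. xml_text is a hypothetical name of mine, not part of any library mentioned above, and it deliberately ignores attributes, nesting and validation:

```c
#include <stdio.h>
#include <string.h>

/* Find the text between <tag> and </tag> in `xml` and copy at most n-1
   bytes into `out` (always NUL-terminated). Returns 0 on success, -1 if
   the element is not found. No heap allocation; everything is on the stack. */
int xml_text(const char *xml, const char *tag, char *out, size_t n) {
    char open[64], close[64];
    snprintf(open, sizeof open, "<%s>", tag);
    snprintf(close, sizeof close, "</%s>", tag);
    const char *start = strstr(xml, open);
    if (!start) return -1;
    start += strlen(open);
    const char *end = strstr(start, close);
    if (!end) return -1;
    size_t len = (size_t)(end - start);
    if (len >= n) len = n - 1;          /* truncate to fit the caller's buffer */
    memcpy(out, start, len);
    out[len] = '\0';
    return 0;
}
```

This falls over as soon as attributes appear (<tag attr="x">), which is why the real libraries above exist; but for fixed, machine-generated documents on a small system it may be all that's needed.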
