Taking an Image from a Webcam in Ubuntu Using C [closed] - c

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 4 years ago.
I am trying to use my webcam (Creative Live! Cam Chat) to take an image in C/C++ and save it to a certain folder (running Ubuntu). Ideally I'm looking for something as simple as possible, even if it's not the most elegant solution.
So far I've found v4l2grab, which I find incredibly confusing, and which also doesn't seem to work with the Creative webcam (it returns a black picture about 5 KB in size), although it does work with my laptop's built-in webcam.
Are there any simple C libraries or code that I could use to do this?

I don't know of a good library for the purpose (please add a comment and tell me if there is one :-)). Note: for some uses, e.g. OpenCV is just fine, and if it is enough for you, definitely do use it. But if you want more, read on.
So you should just write your own code against V4L2; it's not particularly hard. Here's one related question: How to use/learn Video4Linux2 (On Screen Display) Output APIs?
Some points to make learning easier:
After calling an ioctl, always check the return status and print any error message. You will be getting lots of these while you work, so be systematic about it. I suggest a function like check_error shown below, called immediately after every ioctl call.
IMO a must: use an IDE/editor which can follow a symbol to the actual header file (for example in Qt Creator, which is a fine pure-C application IDE despite the name, hit F2 on a symbol and it will go even into system headers to show you where it is defined). Use this liberally on V4L2-related symbols and defines, and read the comments in the header files; that's often the best documentation.
Use the query ioctls and write functions that dump the values they return in a nice format. For example, have a function void dump_cap(const struct v4l2_capability *cap) {...}, and add a similar function for every struct you use in your code as you go.
Don't be lazy about setting values inside the structs you pass to an ioctl. Always initialize structs to 0 with memset(&ioctl_struct_var, 0, sizeof(ioctl_struct_var)); after declaring them, and also when you reuse them (except when doing a 'get-modify-set' operation on some setting, which is quite common with V4L2).
If possible, have two (or more) different webcams (different resolutions, different brands), and test with all of them. This is easiest if you take the video device as a command-line parameter, so you can call your program with a different argument for each cam you have.
Small steps. Ioctls often don't return what you expect, so there is no point writing code that uses the returned data before you have actually seen what the query returns for your cameras.
The check_error function mentioned above:
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

void check_error(int return_value_of_ioctl, const char *msg) {
    if (return_value_of_ioctl != -1) return; /* all ok */
    int eno = errno; /* save it to avoid accidental clobbering of errno */
    fprintf(stderr, "error (%d) with %s: %s\n", eno, msg, strerror(eno));
    exit(1); /* optional, depending on how you want to work with your code */
}
Call that immediately after every ioctl, for example:
struct v4l2_capability cap;
memset(&cap, 0, sizeof(cap));
int r = ioctl(fd, VIDIOC_QUERYCAP, &cap);
check_error(r, "VIDIOC_QUERYCAP");
dump_cap(&cap);

You can use OpenCV. Use cvCreateCameraCapture (you can call it with argument 0 to get the default cam) to create a capture object and then call cvQueryFrame on that object. Each call to cvQueryFrame returns a frame.

Have you had a look at OpenCV? It's quite handy for all sorts of image capture and processing. The process of taking a picture is well documented, but I suggest you look at something like this if you do decide to use it.

Take a look at the uvccapture source code. It is very simple, yet standard C, and uses only the V4L2 interface. OpenCV would also work, but it is more complicated to set up and compile.

Related

Interact with kernel module hashtable from user space [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I'm trying to create a Linux kernel module that provides key-value store functionality to programs in user space. I'm not sure how to go about the interaction between the two.
My thought is to use hashtables and make something basic for now, like this:
struct hashtable {
    char name[100];
    int data;
    struct hlist_node my_hash_list;
};

static int hash_table_init(void)
{
    //TODO
    return 0;
}
module_init(hash_table_init);
More specifically: with a basic module like this, how would I perform operations such as adding to the hashtable from user space? I understand file operations are one way to communicate with the module, but I'm not sure how that would work in this case, if applicable.
I think the most common paradigm used is VFS-based operations on either sysfs or devfs. Using struct file_operations you can define a vtable for handling userspace operations on your virtual file. This is explained in much greater detail here.
In your specific case, a miscdevice would be the best approach, with IOCTLs defined to add, get or remove entries from your hashtable. A great example of a basic miscdevice driver can be found here. Your "add" IOCTL could take a userspace address and use copy_from_user to fetch the buffer it points to (like the name of the key or the hash), and similarly a "get" IOCTL could use copy_to_user to copy the contents of a specific key back to userspace (similar to BSD's copyin/copyout).
Netlink sockets are another way of user<->kernel communication, you can find an example here. They're slightly more complicated to use, I would not suggest them if you're just starting out with kernel development.
If you want to mess with arch/ code, you could also add your own system call that calls into your driver. That requires that a certain part of your driver is always present in the kernel, at least to check whether the driver is loaded and to forward the call. If you go that way, you cannot compile the whole driver as a module; you should generally not attempt the split approach I just described, and should make sure the driver is only compilable as part of the kernel.
Now onto the scary part: you really are playing with fire here. In the kernel you have to be extremely vigilant about boundary checks, locking, preemption awareness (i.e. don't yield under a spinlock), resource management and address checks, since you will crash the system, or even worse introduce a security vulnerability, should something go wrong.
I would not suggest trying this, unless this is just for learning how to do kernel development. Even a basic driver like the ones in the example can easily introduce critical security bugs or instability into the kernel.
If this is not just for learning, then may I suggest memcached or Redis, both of which run in userspace, have been battle-tested, and are in use by many companies as out-of-process shared hashtables, with or without network transparency (e.g. Redis can work over a UNIX domain socket).

C printf to custom hardware [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 6 years ago.
I have existing code with various debug messages. Our embedded system does not have an external connection for a terminal. I want to modify printf so that its output goes to a memory location, rather than stdout, which is mapped to a FIFO. I guess I need to write my own outbyte/outnum. I just can't seem to find the GNU code where I can grab things.
I am running a MicroBlaze processor inside a Xilinx FPGA. All print statements are for debugging. We are running GNU tools (Xilinx SDK). For development, I was able to stick a UART in the design and wire the UART to a few test points.
As we get ready to deploy, I will no longer have this visibility. We do have a USB-to-serial connection, but it looks like a FIFO, not a serial port, to the embedded system. We have a protocol for sending messages over this link. I was thinking of adding a debug message to our protocol. I would like to redirect my print statements to a buffer, and then process this buffer.
I was trying to take the existing printf (xil_printf) and create my own cc_printf by rewriting the outbyte code, but I'm just not able to dig down far enough into the code to see how to do it. (Frankly, I am a VHDL/hardware guy.)
Overall code size is tens of thousands of lines of C code. My section is maybe 3-4 thousand lines of code. The basic operation is for software/hardware updates of the system to come in via the USB port and moved to FLASH memory. My code parses incoming packets coming over the USB-to-serial link. Basically there is a bit set that tells me that there is a packet ready in a receive buffer. I process the packet and write to FLASH. As part of the protocol, there are ACK/NACKs/ABorts. I currently use the printf to printout statuses to my lab bench version of the system.
As stated, I would like to embed these printouts into our protocol. I am not married to printf, and would use some other print-function. I would like to print out messages and data. If something else would be a better starting point, I am fine with that. It seems that the biggest issue for me is grabbing the output of the print and directing it where to go.
Don't use printf directly, for debugging purposes.
Use a macro, perhaps something like
#define DEBUGPRINTF(Fmt, ...) do { \
    printf("%s:%d: " Fmt "\n", __FILE__, __LINE__, __VA_ARGS__); \
  } while (0)
Then, once you have converted all your debug printfs (and only these) to DEBUGPRINTF, you just need to change the definition of that macro (e.g. on the embedded system, perhaps using snprintf ...).
So in the rest of your code, replace a printf("debugging x=%d\n", x); with DEBUGPRINTF("debugging x=%d", x); but do that only for debugging prints. BTW, 4 KLOC (for your part) is really tiny, and even 200 KLOC (for the whole thing) is small enough to make that replacement doable "by hand" (e.g. with Emacs, find & replace interactively).
(I am guessing that you are first developing a small piece of code -a few thousand lines- on your laptop, and later porting it to the embedded system)
Once you have converted all your debug printf to DEBUGPRINTF (and only the debugging prints!) you can redefine that macro, perhaps inspired by
#define DEBUGPRINTF(Fmt, ...) do { \
    snprintf(debugbuffer, sizeof(debugbuffer), Fmt, __VA_ARGS__); \
  } while (0)
(I guess that you'll need to add something more in that macro, probably before the closing brace, to actually send the debug output, that is the contents of debugbuffer, somewhere; but how to do that is implementation- and system-specific.)
But more likely you'll disable the debug printf with
#define DEBUGPRINTF(Fmt,...) do{}while(0)
BTW, the embedded target system might not even have any snprintf ...
If you want to study some readable C standard library implementation (on Linux) consider looking inside musl-libc source code. You'll need to understand what system calls are.
Actually, you should write and debug your code on your laptop (e.g. running Linux) and only put on the embedded system something which you believe has no bugs. In practice, avoid deploying code with debug printfs (or think of something much better in advance). See also this.
It seems that the biggest issue for me is grabbing the output of the print and directing it where to go.
Probably use snprintf, then "send" the debugbuffer to the appropriate place using the appropriate primitives. With snprintf you know the size of the message (e.g. from the return value of snprintf and/or by using %n in the format string). Or have your own logging or debugging variadic function (using <stdarg.h> and va_start etc.).
Do not confuse debugging messages with logging messages. If you want logging, design it carefully. But debugging messages should probably be removed at deployment time.
The POSIX function fmemopen allows you to open a memory buffer like a file and returns a FILE * that can be used with the standard I/O functions, like fprintf. As for printing to a device that's not a serial port: I don't know what OS (if any) you are running on, but if there is one you could write a stream device driver for that custom device and point your stdout and stderr (and maybe stdin) at it. If you're running without an OS then somewhere someone has created a FILE structure that is interfacing with the serial port somehow, and printf is using that.

Error management for a C computer game [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
What kinds of errors should I expect in a computer game written in C, and how should I handle them? By "computer game" I mean a program where there is no danger of any kind to human lives or property.
I would like to add as little error-handling code as necessary, to keep everything as clear and simple as possible. For example, I do not want to do this, because this is much simpler and sufficient for a game.
Up to now I have thought about this:
Error: out-of-memory when calling malloc.
Handling: Print error message and call exit(EXIT_FAILURE); (like this)
Error: A programming error, i.e. something which would work if implemented correctly.
Handling: Use assert to detect (which aborts the program if failed).
Error: Reading a corrupted critical file (e.g. game resource).
Handling: Print error message and call exit(EXIT_FAILURE);
Error: Reading a corrupted non-critical file (e.g. load a saved game).
Handling: Show message to user and ask to load another file.
Do you think this is a reasonable approach? What other error should I expect and what is a reasonable minimal approach to handle them?
You can expect at least those errors to happen that are mentioned in the documentation of the libraries you use. For a C program that typically includes at least libc.
Check the ERRORS section of the man-pages for the functions you'd be using.
I'd also think this over:
I do not want to do this, because this is much simpler and sufficient for a game.
Imagine you'd fought your way through a dozen game levels and then suddenly the screen is gone with an odd OOM*1 error message. And ... you didn't save! DXXM!
*1 Out-Of-Memory
As I've already stated in the comment, I think this is a very broad question. However, it's Xmas and I'll try to be helpful (lest I upset Santa).
The general best practices have been given in the answers posted by @alk and @user2485710. I will try to give a generic boilerplate for error handling as I see it in C.
You can't guard against everything without writing perfect code. Perfect code is unreachable (kind of like infinity in calculus), though you can try to get close.
If you try to put too much error-handling code in, you will hurt performance. So let me define what I will call a simple function and a user function.
A user function is a function that can return an error value. e.g. fopen
A simple function is a function that can not return an error value. e.g.
long add(int a, int b)
{
    long rv = a; // @alk - this way it shouldn't overflow. :P
    return rv + b;
}
Here are a couple rules to follow:
All calls to user functions must handle the returned errors.
All calls to simple functions are assumed safe so no error handling is needed.
If a simple function's parameter is restricted (i.e. an int parameter that must be between 0 and 9), use an assert to ensure its validity (unless the value is the result of user input, in which case you should either handle it or propagate it, making this a user function).
If a user function's parameter is restricted and it doesn't cause an error do the same as above. Otherwise, propagate it without additional asserts.
Just like your malloc example, you can wrap your user functions with code that gracefully exits your game, thereby turning them into simple functions.
This won't remove all errors but should help reduce them whilst keeping performance in mind. Testing should reduce the remaining errors to a minimum.
Forgive me for not being more specific, however, the question seems to ask for a generic method of error handling in C.
In conclusion, I would add that testing, whether unit testing or otherwise, is where you make sure your code works. Error handling isn't something you can plan for in its entirety, because some possible errors only become evident once you start to code (like a game not letting you move because you managed to get stuck inside a wall, which should be impossible but was allowed by some strange explosive mechanics). Testing, however, can and should be planned for, because it will reveal where you should spend more time handling errors.
My suggestion is about:
turning on the compiler's flags for raising errors and warnings; make your compiler as pedantic as possible: -Wall, -Werror and -Wextra, for example, are a good start for both clang and gcc
be sure that you know what undefined behaviour means and what scenarios can possibly trigger UB; the compiler doesn't always help, even with all the warnings turned on.
make your program modular, especially when it comes to memory management and the use of malloc
be sure that your compiler and your standard library of choice both support the C standard that you pick

Questions about register_chrdev_region() in linux device driver

I'm learning about the registration of a kernel module using register_chrdev_region(dev_t from, unsigned count, const char * name);.
I notice that with or without this function, my kernel module worked as expected. The code I used for testing:
first = MKDEV(MAJOR_NUM, MINOR_NUM);
register_chrdev_region(first, count, DEVICE_NAME); //<---with and without
mycdev = cdev_alloc();
mycdev->ops = &fops;
mycdev->owner = THIS_MODULE;
if (cdev_add(mycdev, first, count) == 0) {
    printk(KERN_ALERT "driver loaded\n");
}
I commented out the line register_chrdev_region(first, count, DEVICE_NAME);, and the printk message still appeared. I tried to communicate with the driver from user space both with and without this call, and both attempts were successful.
So my question is: is register_chrdev_region() only there to make my driver a good kernel citizen, i.e. to tell others "I'm using this major number, please don't use it"?
I tried to have a look at the kernel source char_dev.c to understand the function, but I find it too difficult to follow. Is anyone familiar with this?
Thanks!
That works because it's not actually necessary to allocate your device numbers up front. In fact, many kernel developers consider it preferable to use the dynamic (on-the-fly, as-needed) allocation function alloc_chrdev_region.
Whether you do it statically up front or dynamically as needed, it is something you should do to avoid conflict with other device drivers which may have played by the rules and been allocated the numbers you're trying to use. Even if your driver works perfectly well without it, that won't necessarily be true on every machine or at any time in the future.
The rules are there for a reason and, especially with low-level stuff, you are well advised to follow them.
See here for more details on the set-up process.
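For reference, a dynamic version of the snippet from the question might be sketched like this. This is a kernel-side fragment (it only builds against kernel headers, not as a normal program), and the "mydrv" name, the single-minor count and the empty fops are placeholders:

```c
#include <linux/cdev.h>
#include <linux/fs.h>
#include <linux/module.h>

static dev_t first;
static struct cdev mycdev;
static const struct file_operations fops; /* fill in your handlers */

static int __init mydrv_init(void)
{
    /* Ask the kernel for an unused major, rather than hard-coding one. */
    int err = alloc_chrdev_region(&first, 0, 1, "mydrv");
    if (err)
        return err;

    cdev_init(&mycdev, &fops);
    mycdev.owner = THIS_MODULE;

    err = cdev_add(&mycdev, first, 1);
    if (err) {
        unregister_chrdev_region(first, 1);
        return err;
    }
    pr_info("mydrv loaded, major %d\n", MAJOR(first));
    return 0;
}

static void __exit mydrv_exit(void)
{
    cdev_del(&mycdev);
    unregister_chrdev_region(first, 1);
}

module_init(mydrv_init);
module_exit(mydrv_exit);
MODULE_LICENSE("GPL");
```

Note that both allocation calls are error-checked and undone on failure, which is exactly the discipline the static register_chrdev_region call in the question was skipping.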
If the major number for your devices clashes with any other device already in use, the allocation will not be done for your driver.
If you have already checked which major number is free and used it, it will generally not throw an error and you will face no problem as you load the driver.
But if you run on various systems where that major number is already captured and used by some other driver, then loading your driver can fail.
It's always better to use dynamic allocation!

Tool to convert (translate) C to Go? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 5 years ago.
What tool to use to convert C source code into Go source code?
For example, if the C code contains:
struct Node {
    struct Node *left, *right;
    void *data;
};

char charAt(char *s, int i) {
    return s[i];
}
the corresponding Go code generated by the tool should be:
type Node struct {
    left, right *Node
    data        interface{}
}

func charAt(s string, i int) byte {
    return s[i]
}
The tool does not need to be perfect. It is OK if some parts of the generated Go code need to be corrected by hand.
rsc created github.com/rsc/c2go to convert the C-based Go compiler into Go.
As an external example, akavel seems to be trying to use it to create a Go-based Lua: github.com/akavel/goluago/
github.com/xyproto/c2go is another project, but it hasn't been touched in a while.
I guess no such (C to Go source code conversion) tool exists today. You might consider making your own converter. The question becomes: is it worth it, and how would you do it?
It probably isn't worth the effort, because Go and C can be made somewhat interoperable. For example, if you use GCC 4.6 (or the upcoming 4.7, i.e. the latest snapshot) you can probably link C and Go code together, with some care.
Of course, as usual, the devil is in the details.
If you want a converter, do you want the resulting Go code to be readable and editable? Then the task is more difficult, since you want to keep the structure of the code and you also want to keep the comments. In that case, you probably need your own C parser (and that is a difficult task).
If you don't care about the readability of the generated Go code, you could for example extend an existing compiler to do the work. For example, GCC is extensible through plugins or through MELT extensions, and you could customize GCC (with MELT, or your own C plugin for GCC) to transform the GIMPLE representation (the main internal representation for instructions inside GCC) into unreadable Go code. This is somewhat simpler (but still requires more than a week of work).
Of course, Go interfaces, channels and even memory management (garbage-collected memory) have no standard C counterpart.
Check out this project
https://github.com/elliotchance/c2go
The detailed description is in this article
Update: August 6, 2021
Also check this one
https://github.com/gotranspile/cxgo
I'm almost sure there is no such tool, but IMHO in every language it's good to write in that language's own "coding style".
Remember how much we all loved C preprocessor tricks and really artistic work with pointers? Remember how much care it took to deal with malloc/free or with threads?
Go is different. You have no preprocessor, but you have closures, objects with methods, interfaces, garbage collector, slices, goroutines and many other nice features.
So why convert the code instead of rewriting it in a much better and cleaner way?
Of course, I hope you don't have 1000K lines of C code to port to Go :)
Take a look at SWIG (http://www.swig.org/Doc2.0/Go.html); it will translate the C/C++ headers to Go and wrap them as a starting point. Then you can port parts over bit by bit.
As far as I know, such a tool does not exist (yet), so you're bound to convert your C code to Go by hand.
I don't know how complex the C code you want to convert is, but you might want to keep in mind that Go has a "special" way of doing things, like the use of interfaces and channels.
