I want to receive some C code from the user and compile it just-in-time using the tcc compiler. The compiler then gives me a pointer to a function in the compiled code. I want to call this function safely, so that if it causes a crash it just returns an integer representing an error. Is this possible?
"I want to call this function safely, so that if it causes a crash it just returns an integer representing an error. Is this possible?"
That alone is potentially possible. Most things that cause a crash will raise a signal, which means that you can call setjmp() before calling the unsafe code, then have signal handler(s) that use longjmp() to restore a known state if the unsafe code crashes.
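As a rough sketch of that approach, assuming a POSIX system (sigsetjmp()/siglongjmp() are the POSIX variants that also restore the signal mask; call_guarded and crashes are hypothetical names used only for illustration):

#include <stdio.h>
#include <setjmp.h>
#include <signal.h>

/* Saved state to return to if the untrusted code crashes. */
static sigjmp_buf recover;

static void on_crash(int sig)
{
    siglongjmp(recover, sig);   /* unwind back to call_guarded() */
}

/* Calls fn(); returns its result, or -1 if fn() crashed. */
static int call_guarded(int (*fn)(void))
{
    signal(SIGSEGV, on_crash);
    signal(SIGFPE, on_crash);

    if (sigsetjmp(recover, 1) != 0)
        return -1;              /* arrived here via siglongjmp: a crash */

    return fn();                /* the untrusted, JIT-compiled code */
}

static int crashes(void)
{
    return *(volatile int *)0;  /* deliberate null-pointer dereference */
}

int main(void)
{
    printf("guarded call returned %d\n", call_guarded(crashes));
    return 0;
}

Note that jumping out of a SIGSEGV handler leaves the process in a state that ISO C does not guarantee to be consistent, so treat this as an illustration of the mechanism, not as a robust sandbox.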
"Can I call a function from an unknown source safely?"
That is a lot more than just guarding against crashes - you might also have to guard against deliberately malicious code that does not crash.
However, this depends on what you consider "safe" and how your software will be used. Typically (for personal computers, not servers) there is nothing the end user could do that they couldn't also do by compiling their code with their own compiler and running it themselves (and this includes loading your software into a forked process with malware injected into the virtual address space, then tampering with everything your code does); so "safe" (or "less safe than the user can already do anyway") becomes hard to define in a meaningful way.
The only valid concern that I can think of is when the user has fewer permissions/privileges than your software (where the user could abuse your software to gain permissions/privileges they didn't already have). In that case, you shouldn't even consider letting the user run arbitrary code; it's simply too hard to make safe.
I want to write a function in C that checks millions of parameters; if all of them are true, the function returns true, otherwise it returns false.
However, estimating the time of this operation is important: we need to know how many milliseconds it takes. (An approximate time would be enough.) We need this time to know the throughput of the function.
Note: these parameters are read locally from a file, and we use ordinary computers.
Rather than estimating the time, measure it. Modern CPU architectures perform optimizations so complex that a simple change in data ordering could increase the running time of your function by a factor of six or more.
In your case it is very important to run a realistic benchmark: all parameters that you check need to be placed in memory at the same positions as in the actual program, and your code should be checking them in the same order. This way you would see the effect of caching. Since your function is all-or-nothing, branch prediction would have almost no effect on your code, because prediction would fail at most once before the loop exits.
Since you are reading your parameters from a file, it is important to use the same API in the test program as you plan to use in the actual program. Depending on the platform, I/O APIs may exhibit significant differences in performance, so your test code should test what you plan to use in production.
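As a minimal sketch of "measure, don't estimate", assuming a POSIX clock_gettime() and a hypothetical check_all() predicate over synthetic in-memory data (a realistic benchmark would read the parameters through the same file API and memory layout as production, as described above):

#include <stdio.h>
#include <stdbool.h>
#include <time.h>

/* Hypothetical stand-in for the real all-or-nothing check. */
static bool check_all(const int *params, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (!params[i])
            return false;       /* first failure exits the loop */
    return true;
}

int main(void)
{
    enum { N = 10 * 1000 * 1000 };
    static int params[N];
    for (size_t i = 0; i < N; i++)
        params[i] = 1;          /* synthetic data; use the real file in practice */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    bool ok = check_all(params, N);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("result=%d, elapsed=%.3f ms\n", (int)ok, ms);
    return 0;
}

Run it several times and take the median; the first run will also show you the cold-cache cost.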
This code can be compiled in C:
#include <stdio.h>

int main()
{
    printf("Hello World");
    return 0;
    printf("shouldnt allow this line");
    return 1; // also this line
}
The lines printf("shouldnt allow this line"); and return 1; are unreachable. Is there any way to check this during compilation with warning messages? Also, why does the compiler allow this?
Unreachable code is not an error because:
It's often useful, especially as the result of macro expansion, or in functions that are only ever called in a way that makes some paths unreachable because some of their arguments are constant or limited to a particular range. For instance, with an inline version of isdigit that's only ever called with non-negative arguments, the code path for an EOF argument would be unreachable (see the sketch after this list).
In general, determining whether code is unreachable is equivalent to the halting problem. Sure, there are certain cases like yours that are trivial to determine, but there is no way to specify something like "trivial cases of unreachable code are errors, but nontrivial ones aren't".
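To make the first point concrete, here is a sketch (my_isdigit is a hypothetical name) where the EOF path is unreachable at every call site and yet clearly belongs in the function:

#include <stdio.h>

/* Inline isdigit-style check; the callers below never pass EOF. */
static inline int my_isdigit(int c)
{
    if (c == EOF)               /* unreachable for non-negative arguments */
        return 0;
    return c >= '0' && c <= '9';
}

int main(void)
{
    /* Only non-negative values are passed, so after inlining the EOF
       branch is dead code; a rule that made unreachable code an error
       would reject this perfectly reasonable program. */
    printf("%d %d\n", my_isdigit('7'), my_isdigit('x'));
    return 0;
}

For the trivial cases, some compilers can warn: clang has -Wunreachable-code, for instance (GCC accepts the flag but, as far as I know, no longer implements the analysis), and such warnings can usually be promoted to errors with -Werror.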
Broadly speaking, C does not aim to help a developer catch mistakes; rather, C trusts the developer to do a perfect job, just as it trusts the compiler to do a perfect job.
Many newer languages take a more active stance, aiming to protect developers from themselves, and plenty of C compilers will emit compile warnings (which can typically be "promoted" to errors via command-line flags); but the C community has never wanted the language to stop trusting developers. It's just a philosophical difference. (If you've ever run into a case where a language prevents you from doing something that seems wrong but that you actually have a good reason for, you'll probably understand where they're coming from, even if you don't agree.)
While going through a book on machine instructions and programs, I came across a point that says an assembler scans the entire source program twice: it builds a symbol table during the first pass, and associates the entire program with it during the second. The assembler can provide an address for a function in the same way.
Now, since the assembler passes through the program twice, why is it necessary to declare a function before it can be used? Wouldn't the assembler record an address for the function during the first pass and then correlate it with the program during the second pass?
I am considering C programming in this case.
The simple answer is that C requires functions to be declared before they can be used because the language was designed to be processed by a compiler in a single pass. It has nothing to do with assemblers and the addresses of functions. The compiler needs to know the type of a symbol, whether it's a function, a variable, or something else, before it can use it.
Consider this simple example:
int foo() { return bar(); }   /* is bar a function, or something else? */
int (*bar)();                 /* in fact it's a pointer to a function */
In order to generate the correct code, the compiler needs to know that bar isn't a function but a pointer to a function. The code only works if you put extern int (*bar)(); before the definition of foo, so the compiler knows what type bar is.
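A minimal sketch of that fix, with the declaration first (forty_two is a hypothetical function added so the example actually runs):

#include <stdio.h>

extern int (*bar)(void);        /* now the compiler knows bar's type */

int foo(void) { return bar(); } /* compiled as an indirect call */

static int forty_two(void) { return 42; }

int (*bar)(void) = forty_two;   /* the definition can still come later */

int main(void)
{
    printf("%d\n", foo());      /* prints 42 */
    return 0;
}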
While the language could in theory have been designed to require a two-pass compiler, this would have required significant changes in its design. Requiring two passes would also have increased the complexity of the compiler, decreasing the number of platforms that could host a C compiler. This was a very important consideration back when C was first being developed, when 64K (65,536) bytes of RAM was a lot of memory. Even today it would have a noticeable impact on the compile times of large programs.
Note that the C language does sort of allow what you want anyway, by supporting implicit function declarations. (In my example above, this is what happens in foo when bar isn't declared previously.) However, this feature is obsolete (removed from the standard in C99), limited, and considered dangerous.
In a recent interview, I was asked the following question:
You have a bug in your program, after attempting to debug it by inserting statements like printf, console.log, System.out.println, echo, etc, the bug disappears. How can this happen?
I responded with answers like the following:
You have something with side effects in the print statement, e.g. System.out.println(foo.somethingWithSideEffects())
Adding printf changes the memory layout of the program, therefore it could cover adjacent memory and prevent crashes
Undefined Behavior in native code (like uninitialized values, buffer overruns, sequence points, etc)
The interviewer said those aren't the only ways that this could happen, and I couldn't think of any other ways simply adding a printf, etc could "fix" a bug in a program.
What other things could cause this to happen?
The biggest thing that comes to mind is that putting debugging code in can change the timing of the code, which can matter if there is a race condition in the code being debugged. It can be very frustrating to try to debug race conditions that disappear when inspected like this.
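Here is a minimal sketch of that effect (build with -pthread): two threads increment a shared counter without synchronization, so updates are lost; enabling the printf slows each iteration so much that the threads rarely overlap, and the bug may seem to disappear.

#include <stdio.h>
#include <pthread.h>

static long counter = 0;        /* shared, unsynchronized: a data race */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        counter++;              /* non-atomic read-modify-write */
        /* printf("tick\n"); */ /* enabling this often masks the race */
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}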
It could also happen because of a memory overflow, or a system interrupt occurring while the program runs. If you cannot attach a debugger, you may fall back to writing event logs, but I think that should be the last resort.
I have been told by more senior, experienced and better-educated programmers than myself that the use of function pointers in C should be avoided. I have seen the fact that some code contains function pointers used as a rationale not to re-use that code, even when the only alternative is complete re-implementation. Upon further discussion I haven't been able to determine why this would be. I am happy to use function pointers where appropriate, and like the interesting and powerful things they allow you to do, but am I throwing caution to the wind by using them?
I see the pros and cons of function pointers as follows:
Pros:
Great opportunity for code modularity
OO-like features in non-OO C (i.e. code and data in the same object)
How else could you reasonably implement a callback?
Cons:
Negative impact on code readability - not always obvious what function is actually called when a function pointer is invoked
Minor performance hit compared to a direct function call
I think Con #1 can usually be mitigated reasonably well by well-chosen symbol names and good comments, and Con #2 will in general not be a big deal. Am I missing something? Are there other reasons to avoid function pointers like the plague?
This question looks a little discussion-ey, but I'm looking for good reasons why I shouldn't use function pointers, not opinions
Function pointers are not evil. The main times you "shouldn't" use them are when either:
The use is gratuitous, i.e. not actually needed for what you're doing, or
In situations where you're writing hardened code and the function pointer might be stored at a location you're concerned may be a likely candidate for buffer overflow attacks.
As for when function pointers are needed, Adam's answer provided some good examples. The common theme in all those examples is that the caller needs to be able to provide part of the code that runs from the called function. Without function pointers, the only way you could do this would be to copy-and-paste the implementation of the function and change part of it to call a different function, for every individual usage case. For qsort and bsearch, which can be implemented portably, this would just be a nuisance and hideously ugly. For thread creation, on the other hand, without function pointers you would have to copy and paste part of the system implementation for the particular OS you're running on, and adapt it to call the function you want called. This is obviously unacceptable; your program would then be completely non-portable.
As such, function pointers are absolutely necessary for some tasks, and for other tasks, they are a major convenience which allows general code to be reused. I see no reason why they should not be used in such cases.
No, they're not evil. They're absolutely necessary in order to implement various features such as callback functions in C.
Without function pointers, you could not implement:
qsort(3) (see the sketch after this list)
bsearch(3)
Window procedures
Threads
Signal handlers
And many more.
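As an illustration of the first item, here is the standard qsort(3) callback pattern; the caller hands the library a pointer to the comparison code it should run:

#include <stdio.h>
#include <stdlib.h>

/* Comparator invoked by qsort through a function pointer. */
static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a;
    int y = *(const int *)b;
    return (x > y) - (x < y);   /* avoids the overflow risk of x - y */
}

int main(void)
{
    int v[] = { 3, 1, 4, 1, 5, 9, 2, 6 };
    size_t n = sizeof v / sizeof v[0];

    qsort(v, n, sizeof v[0], cmp_int);  /* function pointer as callback */

    for (size_t i = 0; i < n; i++)
        printf("%d ", v[i]);
    printf("\n");
    return 0;
}

Without function pointers, every different sort order would need its own copy of the sorting code.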