How to clear a flag in DEVICE_OBJECT - C

Are these two pieces of code equivalent (for clearing a flag)?
ClearFlag(NewDeviceObject->Flags, DO_DEVICE_INITIALIZING);
NewDeviceObject->Flags &= ~DO_DEVICE_INITIALIZING;

Yes, they are equivalent.
Unlike an API, ClearFlag is a macro; you can open the ntifs.h you compile against and see its definition. Microsoft doesn't change macros like this, because doing so would mean the same binary could no longer be used on older versions of Windows (which is something they avoid). That is also why only small utilities like this are inlined as macros.
In any case, using the macro is better practice and much cleaner, so use it the way you use other Windows macros that won't change for the same reason (not using it is simply not a bug).
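For reference, the definition in ntifs.h is essentially this one-liner (quoted from memory, so check the header you actually compile against):
#define ClearFlag(_F, _SF) ((_F) &= ~(_SF))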

Related

What is this IN part of the parameter to a function?

I am trying to do some work on Windows drivers, but I am having trouble understanding one part of the example source code. I have never seen this before in my C experience and I couldn't find anything on it. Anyway, I was wondering what the "IN" part of the parameter declarations is. Below is an example of a function header. It can also be a few other things like "OUT", "INOUT", "INOPT", and maybe more (I couldn't find anything else).
VOID
PLxReadRequestComplete(
IN WDFDMATRANSACTION DmaTransaction,
IN NTSTATUS Status
)
Those are simply markers (from the early days of the Windows DDK) that describe the intended use of the parameter.
In normal builds the macros are defined as nothing; however, they could conceivably be defined to implementation-specific keywords that let the compiler (using SAL or other static code analysis tools) perform deeper analysis of how the argument/parameter is used. I don't think they're used for SAL, because they simply aren't 'rich' enough to describe all the attributes that SAL likes to take into account. So I think they're mainly intended to communicate intent to programmers.
That's not standard C. Most likely, IN has been defined to something else using a #define -- i.e., a macro. Search your *.h files for #define IN, #define OUT, etc., and see what they expand to.
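In the DDK/WDK headers they are indeed simply defined away to nothing; what you will typically find looks like this (the exact header varies by kit version):
#define IN
#define OUT
#define OPTIONAL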

How to check if a function is available on the compiler?

Is there a way to detect, at compile time, whether a function is built into the compiler?
E.g., something like this:
#ifndef ITOA_FUNCTION
#define itoa myitoaimplementation
#endif
Thanks in advance.
No, there's not anything direct. About the best you can do is guess from things like the platform, compiler version, etc.
In most cases, I'd prefer one of two other routes though. One is to just give your own implementation a name different from what the compilers use, and always use it whether the compiler provides something similar or not.
The second is to put your implementations of functions like this into a separate file, and deal with the presence/absence in the makefile, just deciding whether to include that file in the project or not.
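A minimal sketch of the first route (my_itoa is an illustrative name, not a standard one):

#include <stdio.h>

/* Always use our own name, so there is no collision with a possible
   built-in itoa() and nothing to detect at compile time. */
static char *my_itoa(int value, char *buf, size_t size)
{
    snprintf(buf, size, "%d", value);  /* snprintf is standard since C99 */
    return buf;
}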

inline a function inside another inline function in C

I currently have inline functions calling another inline function (a simple getAbs() function, four lines long). However, by looking at the assembler output I discovered that while the "big" inline functions are inlined properly, the compiler uses a bl jump to call the getAbs() function.
Is it not possible to inline a function inside another inline function? By the way, this is embedded code and we are not using the standard libraries.
Edit: The compiler is WindRiver, and I have already checked that inlining would be beneficial (4 instructions instead of roughly 40).
Depending on what compiler you are using you may be able to encourage the compiler to be less reluctant to inline, e.g. with gcc you can use __attribute__ ((always_inline)), with Intel ICC you can use icc -inline-level=1 -inline-forceinline, and with Apple's gcc you can use gcc -obey-inline.
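For instance, the gcc attribute mentioned above is applied like this (getAbs is reconstructed from the question's description, so treat the body as illustrative):

/* Force inlining with the gcc extension; other compilers have their
   own equivalents, as listed above. */
static inline __attribute__((always_inline)) int getAbs(int x)
{
    return x < 0 ? -x : x;
}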
The inline keyword is a suggestion to the compiler, nothing more. It's free to take that suggestion on board, totally ignore it, or even lie and tell you it's inlining while it really isn't.
The only way to force code to be inline is to, well, write it inline. But even then the compiler may decide it knows better and shift it out into a separate function. It has a lot of leeway in generating executable code for your particular source, provided it doesn't change the semantics.
Modern compilers are more than capable of generating better code than most developers would hand-craft in assembly. I think the inline keyword should go the same path as the register keyword.
If you've seen the output of gcc at its insane optimisation level, you'll understand why. It has produced code that I wouldn't have dreamed possible, and that took me a long time to understand.
As an aside, check this out for the optimisations that gcc actually has, including a great many containing the text "inline" or "inlining".
@gramm: There are quite a few scenarios in which inline isn't necessarily to your benefit. Most compilers use some very advanced heuristics to determine when to inline. When discussing inlining, the simplest rule is: trust your compiler to produce the fastest code.
I recently had a very similar problem, and reading this post gave me a wacky idea: why not have a simple pre-compilation code parser (a simple regex should do the job) that parses out the function call and pastes the source code inline? Use tags such as /*inline*/ and /*end_of_inline*/ so that you can still use normal IDE features (if you use, or might use, an IDE).
Include this in your build process. That way you keep the readability advantage while removing the compiler's assumption that you are only as good a developer as most and do not understand when to inline.
Nonetheless, before trying this you should probably go through the compiler's command-line options.
I would suggest that if your getAbs() function (sounds like absolute value but you really should be showing us code with the question...) is 4 lines long, then you have much bigger optimizations to worry about than whether the code gets inlined or not.

Are these C #ifdefs for portability outdated?

I'm working with an old C code that still has a few dusty corners. I'm finding a lot of #ifdef statements around that refer to operating systems, architectures, etc. and alter the code for portability. I don't know how many of these statements are still relevant today.
I know that #ifdef isn't the best idea in certain circumstances, and I'll be sure to fix that, but what I'm interested in here is what's being tested.
I've listed them below. If you could tell me if any of them are definitely useful in this day and age, or if the machines or OSs with which they're associated have long since expired, that would be great. Also, if you know of any central reference for these, I'd love to hear about it.
Thanks in advance,
Ross
BORLANDC
BSD
CGLE
DRYRUN
HUGE
IBMPC
MAIN
M_XENIX
OPTIMIZED
P2C_H_PROTO
sgi
TBFINDADDREXTENDED
UNIX
vms
__GCC__
__GNUC__
__HUGE__
__ID__
__MSDOS__
__TURBOC__
You are coming at this from the wrong direction. Instead of asking what code can safely be deleted, you should ask: what code has to stay?
Find out which platforms have to be supported and delete everything that is not defined on any of them. You'll get the cleanest code possible that is still guaranteed to work.
In what context is this code being used?
If it's a library other people outside your organization are using, you shouldn't touch this stuff unless you're releasing a new version and explicitly removing support for some OSs. In the latter case, you should remove all the relevant IFDEF code as part of making a new release, and should be explicit about what you are removing.
If it's a library people inside your organization are using, you should ask those people what you can remove, not us.
If it's code being used very narrowly (i.e. you control its use directly), you can, if you wish, safely remove any sort of compiler portability, since you are only using one compiler.
You're asking the wrong people: It's your users (or potential users) who decide what's still useful, not us. Start by finding out what platforms you need to support, and then you can find out what's not needed.
If, for example, you don't need to support 16-bit systems, you can dispense with __HUGE__, __MSDOS__, and __TURBOC__.
Any #ifdef based on arbitrary preprocessor definitions provided by the implementation is outdated, especially those in the namespace reserved for the application rather than the implementation, as most of these are! The correct modern way to achieve this kind of portability is to detect the presence of different interfaces/features/behaviors with a configure script and #define HAVE_FOO etc. based on that, to directly test standard preprocessor defines (like UINT_MAX to determine integer size), and/or to provide prebuilt header files for each platform you want to support with the appropriate HAVE_FOO definitions.
The old-style "portability" #ifdefs coupled knowledge of every single platform closely all over your source, making for a nightmare when platforms changed and adopted new features, behaviors, or defaults. (Just imagine the mess of old code that assumes Windows is 16-bit or that Linux has SysV-style signal()!) The modern style isolates knowledge of the platform and lets the conditional compilation in your source files depend only on the presence/absence/behavior of the feature it wants to use.
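A short sketch of that style, assuming a configure script that defines HAVE_SIGACTION (all names here are illustrative):

#include <signal.h>

static void on_interrupt(int sig) { (void)sig; }

void install_handler(void)
{
#ifdef HAVE_SIGACTION              /* set by configure, not hard-coded per OS */
    struct sigaction sa;
    sa.sa_handler = on_interrupt;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);
#else
    signal(SIGINT, on_interrupt);  /* fallback for platforms without it */
#endif
}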
Code that is annotated like that can in fact be quite difficult to maintain. You could consider looking into something like autotools to configure your sources for a particular architecture.

Large C macros. What's the benefit?

I've been working with a large codebase written primarily by programmers who no longer work at the company. One of the programmers apparently had a special place in his heart for very long macros. The only benefit I can see to using macros is being able to write function-like code that doesn't need all of its parameters passed in (which a best-practices guide I've read recommends against). Other than that I see no benefit over an inline function.
Some of the macros are so complicated I have a hard time imagining someone even writing them. I tried creating one in that spirit and it was a nightmare. Debugging is extremely difficult, as a macro collapses N+ lines of code into one line in the debugger (e.g. there was a segfault somewhere in this large block of code. Good luck!). I had to actually pull the macro out and run it un-macro-ized to debug it. The only way I can see the person having written these is by generating them automatically from code written and debugged as a function first (or by being smarter than me and writing them perfectly the first time, which is always possible, I guess).
Am I missing something? Am I crazy? Are there debugging tricks I'm not aware of? Please fill me in. I would really like to hear from the macro-lovers in the audience. :)
To me the best use of macros is to compress code and reduce errors. The downside is obviously in debugging, so they have to be used with care.
I tend to think that if the resulting code isn't an order of magnitude smaller and less prone to errors (meaning the macros take care of some bookkeeping details) then it wasn't worth it.
In C++, many uses like this can be replaced with templates, but not all. A simple example of useful macros is the event-handler macros in MFC: without them, creating event tables would be much harder to get right, and the code you'd have to write (and read) would be much more complex.
If the macros are extremely long, they probably make the code short but efficient. In effect, he might have used macros to explicitly inline code or remove decision points from the run-time code path.
It might be important to understand that, in the past, such optimizations weren't done by many compilers, and some things that we take for granted today, like fast function calls, weren't valid then.
To me, macros are evil. With their many side effects, and the fact that in C++ you can get the same performance gains with inline, they are not worth the risk.
For example, see this short macro:
#define max(a, b) ((a)>(b)?(a):(b))
then try this call:
max(i++, j++)
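After expansion, that call becomes:
((i++) > (j++) ? (i++) : (j++))
so one of the two variables is incremented twice, which is almost certainly not what the caller intended.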
Another example: say you have
#define PLANETS 8
#define SOCCER_MIDDLE_RIGHT 8
If an error message refers to '8', it won't tell you which of the two meaningful names it came from.
I only know of two reasons for doing what you describe.
First is to force functions to be inlined. This is pretty much pointless, since the inline keyword usually does the same thing, and function inlining is often a premature micro-optimization anyway.
Second is to simulate nested functions in C or C++. This is related to your "writing functions that don't need to be passed in all their parameters" but can actually be quite a bit more powerful than that. Walter Bright gives examples of where nested functions can be useful.
There are other reasons to use of macros, such as using preprocessor-specific functionality (like including __FILE__ and __LINE__ in autogenerated error messages) or reducing boilerplate code in ways that functions and templates can't (the Boost.Preprocessor library excels here; see Boost.ScopeExit or this sample enum code for examples), but these reasons don't seem to apply for doing what you describe.
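A small sketch of the __FILE__/__LINE__ use mentioned above (LOG_ERROR is an illustrative name, not from the original post):

#include <stdio.h>

/* Expands at the call site, so the file and line reported are those
   of the caller - something a plain function cannot do. */
#define LOG_ERROR(msg) \
    fprintf(stderr, "%s:%d: error: %s\n", __FILE__, __LINE__, (msg))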
Very long macros will have performance drawbacks, like increased compiled binary size, and there are certainly other reasons for not using them.
For the most problematic macros, I would consider running the code through the preprocessor and replacing the macro output with function calls (inline if possible) or straight lines of code. If the macros exist for compatibility with other architectures/OSes, you might be stuck, though.
Part of the benefit is code replication without the eventual maintenance cost - that is, instead of copying code elsewhere you create a macro from it and only have to edit it once...
Of course, you could also just make a method to be called but that is sort of more work... I'm against much macro use myself, just trying to present a potential rationale.
There are a number of good reasons to write macros in C.
Some of the most important are creating configuration tables using x-macros, making function-like macros that can accept multiple parameter types as inputs, and converting tables from human-readable/configurable/understandable values into the values the computer actually uses. An example x-macro table is sketched below.
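A minimal sketch of the x-macro idea (names illustrative): one table drives both an enum and its name strings, so the two can never drift apart.

#define COLOR_TABLE \
    X(RED)   \
    X(GREEN) \
    X(BLUE)

/* First expansion: generate the enum constants */
#define X(name) COLOR_##name,
enum color { COLOR_TABLE COLOR_COUNT };
#undef X

/* Second expansion: generate the matching name strings */
#define X(name) #name,
static const char *color_names[] = { COLOR_TABLE };
#undef X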
I can't really see a reason for people to write very long macros, except for the historical use of forcing function inlining.
When debugging complex macros (when writing x-macros, etc.), I tend to preprocess the source file and substitute the preprocessed file for the original.
This lets you see the C code that is generated and gives you real lines to work with in the debugger.
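With gcc, for example, the usual switches are:
gcc -E file.c -o file.i      # stop after preprocessing; all macros expanded
gcc -save-temps -c file.c    # also keeps file.i next to the object file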
I don't use macros at all. Inline functions serve every useful purpose a macro can. Macros allow you to do very weird and counterintuitive things, like splitting up identifiers (how does someone search for the identifier then?).
I have also worked on a product where a legacy programmer (who thankfully is long gone) had a special love affair with macros. His 'custom' scripting language is the height of sloppiness. This was compounded by the fact that he wrote his C++ classes in C, meaning all class functions and variables were public. Anyway, he wrote almost everything in macros and variadic functions (another hideous monstrosity foisted on the world). So instead of writing a proper template class he would use a macro! He also resorted to macros to create factory classes, instead of normal code... His code is pretty much unmaintainable.
From what I have seen, macros can be used when they are small, are used declaratively, and don't contain moving parts like loops and other program-flow expressions. It's OK if the macro is one or at most two lines long and declares an instance of something - something that won't break during runtime. Also, macros should not contain class definitions or function definitions. If the macro contains code that needs to be stepped into with a debugger, the macro should be removed and replaced with something else.
They can also be useful for wrapping custom tracing/debugging functionality, for instance when you want custom tracing in debug builds but not release builds.
Anyway, when you are working in legacy code like that, just be sure to remove the macro mess a bit at a time. If you keep it up, eventually you will remove them all and make life a bit easier for yourself. I have done this in the past with especially messy macros. What I do is turn on the compiler switch that makes the preprocessor generate an output file. Then I raid that file, copy the code, re-indent it, and replace the macro with the generated code. Thank goodness for that compiler feature.
Some of the legacy code I've worked with used macros very extensively in the place of methods. The reasoning was that the computer/OS/runtime had an extremely small stack, so that stack overflows were a common problem. Using macros instead of methods meant that there were fewer methods on the stack.
Luckily, most of that code was obsolete, so it is (mostly) gone now.
C89 did not have inline functions. If using a compiler with extensions disabled (which is a desirable thing to do for several reasons), then the macro might be the only option.
Although C99 came out in 1999, there was resistance to it for a long time; commercial compiler vendors didn't feel it was worth their time to implement C99. Some (e.g. MS) still haven't. So for many companies it was not a viable practical decision to use C99 conforming mode, even up to today in the case of some compilers.
I have used C89 compilers that did have an extension for inline functions, but the extension was buggy (e.g. multiple definition errors when there should not be), things like that may dissuade a programmer from using inline functions.
Another thing is that the macro version effectively forces the code to be inlined. The C99 inline keyword is only a hint, and the compiler may still decide to generate a single instance of the function code, linked like a non-inline function. (One compiler that I still use will do this if the function is non-trivial and returns void.)
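A sketch of the trade-off this answer describes (SQUARE and square are illustrative names):

/* C89: a macro is the only standard way to guarantee no call overhead,
   at the cost of evaluating x twice */
#define SQUARE(x) ((x) * (x))

/* C99 and later: safer, but inline is only a hint the compiler may ignore */
static inline int square(int x) { return x * x; }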
