What is this IN part of the parameter to a function?

I am trying to do some work on Windows drivers, but I am having trouble understanding one part of the example source code. I have never seen this before in my C experience and I couldn't find anything on it. Anyway, I was wondering: what is the "IN" part of the parameter declarations? Below is an example of a function header. It is also possible for it to be a few other things like "OUT", "INOUT", "INOPT", and maybe more (I couldn't find anything else).
VOID
PLxReadRequestComplete(
IN WDFDMATRANSACTION DmaTransaction,
IN NTSTATUS Status
)

Those are simply markers (from the early days of the Windows DDK) that describe the intended use of the parameter.
In normal builds the macros are defined as nothing; however, they could conceivably be defined to implementation-specific keywords that allow the compiler (using SAL or other static code analysis tools) to perform deeper analysis of the correct use of the argument/parameter. I don't think they're used for SAL, because they simply aren't 'rich' enough to describe all the attributes that SAL likes to take into account. So I think they're mainly intended to communicate intent to programmers.
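A sketch of how such markers are typically defined (the exact DDK headers differ in detail, but the net effect is the same):
#ifndef IN
#define IN      /* annotation only: parameter is read by the callee */
#endif
#ifndef OUT
#define OUT     /* annotation only: parameter is filled in by the callee */
#endif

/* After preprocessing, IN/OUT vanish entirely, so this declaration: */
void copy_status(IN const int *source, OUT int *destination);
/* ...is exactly equivalent to: */
void copy_status_plain(const int *source, int *destination);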

That's not standard C. Most likely, IN has been defined to have some other value using a #define -- i.e., a macro. Search your *.h files for #define IN, #define OUT, etc., and see what they expand to.


Cross-Platform C single header file and multiple implementations

I am working on an open source C driver for a cheap sensor that is used mostly for Arduino projects. The project is set up in such a way that it is possible to support multiple platforms outside the Arduino ecosystem, like the Raspberry Pi.
The project is set up with a platform.h file, with the intention of having different implementations of this header file. Like the example below:
platform.h
platform_arduino.c
platform_rpi.c
platform_windows.c
There is this (Cross-Platform C++ code and single header - multiple implementations) Stack Overflow post that goes fairly in depth in how to handle this for C++ but I feel like none of those examples really apply to this C implementation.
I have come up with some solutions like just adding the requirements for each platform at the top of the file.
#if SOME_REQUIREMENT
#include "platform.h"
int8_t t_open(void)
{
// Implementation here
}
#endif //SOME_REQUIREMENT
But this seems like a clunky solution.
It impacts readability of the code.1
It will probably make debugging conflicting requirements a nightmare.
1 Many editors (like VS Code) try to gray out code which does not match the active requirements. While I want this most of the time, it is really annoying when working on cross-platform drivers. I could just disable it for the entire project, but in other parts of the project it is useful. I understand that it could probably be solved with a VS Code setting. However, I am asking for alternative methods of selecting the right file/code for the platform, because I am interested in seeing what other strategies there are.
Part of the "problem" is that support for Arduino is the primary focus, which means it can't easily be solved with makefile magic. My question is, what are alternative ways of implementing a solution to this problem, that are still readable?
If it cannot be done without makefile magic, then that is an answer too.
For reference, here is a simplified example of the header file and implementation
platform.h
#ifndef PLATFORM_H
#define PLATFORM_H
#include <stdint.h> /* int8_t */
int8_t t_open(void);
#endif // PLATFORM_H
platform_arduino.c
#include "platform.h"
int8_t t_open(void)
{
// Implementation here
}
this (Cross-Platform C++ code and single header - multiple implementations) Stack Overflow post that goes fairly in depth in how to handle this for C++ but I feel like none of those examples really apply to this C implementation.
I don't see why you say that. The first suggestions in the two highest-scoring answers are variations on the idea of using conditional macros, which not only is valid in C, but is a traditional approach. You yourself present an alternative along these lines.
Part of the "problem" is that support for Arduino is the primary focus, which means it can't easily be solved with makefile magic.
I take you to mean that the approach to platform adaptation has to be encoded somehow into the C source, as opposed to being handled via the build system. Frankly, this is an unusual constraint, except inasmuch as it can be addressed by use of the various system-identification macros provided by C compilers of interest.
Even if you don't want to rely specifically on makefiles, you should consider assigning some responsibility to the build system, which you can do even without knowing specifically what build system that is. For example, you can designate macro names, such as for_windows, that request builds for non-default platforms. You then leave it to the person building an instance of the driver to figure out how to configure their tools to provide the appropriate macro definition for their needs (which generally is not hard), based on your build documentation.
My question is, what are alternative ways of implementing a solution to this problem, that are still readable?
If the solution needs to be embodied entirely in the C source, then you have three main alternatives:
write code that just works correctly on all platforms, or
perform runtime detection and adaptation, or
use conditional compilation based on macros automatically defined by supported compilers.
If you're prepared to rely on macro definitions supplied by the user at build time, then the last becomes simply
use conditional compilation
Do not dismiss the first out of hand, but it can be a difficult path, and it might not be fully possible for your particular problem (and probably isn't if you're writing a driver or other code for a freestanding implementation).
Runtime adaptation could be viewed as a specific case of code that just works, but what I have in mind for this is a higher level of organization that performs runtime analysis of the host environment and chooses function variants and internal parameters suited to that, as opposed to those choices being made at compile time. This is a real thing that is occasionally done, but it may or may not be viable for your particular case.
On the other hand, conditional compilation is the traditional basis for platform adaptation in C, and the general form does not have the caveat of the other two that it might or might not work in your particular situation. The level of readability and maintainability you achieve this way is a function of the details of how you implement it.
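For instance, here is a sketch of the third option using macros the toolchains themselves predefine (ARDUINO is defined by the Arduino build system, _WIN32 by Windows compilers, __linux__ by Linux compilers; whether these cover your targets is an assumption on my part):
/* platform_arduino.c -- can be handed to every compiler; it only
   contributes code when the Arduino toolchain is in use. */
#if defined(ARDUINO)

#include "platform.h"

int8_t t_open(void)
{
    /* Arduino-specific implementation here */
    return 0;
}

#endif /* ARDUINO */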
I have come up with some solutions like just adding the requirements for each platform at the top of the file. [...] But this seems like a clunky solution.
If you must include a source file in your build but you don't want anything in it to actually contribute to the target, then that's exactly what you must do. You complain that "It will probably make debugging conflicting requirements a nightmare", but to the extent that that's a genuine issue, I think it's not so much a question of syntax as of the whole "different code for different platforms" plan.
You also complain that the conditional compilation option might be a practical difficulty for you with your choice of development tools. It certainly seems to me that there ought to be good workarounds for that available from your tools and development workflow. But if you must have a workaround grounded only in the C language, then there is one (albeit a bad one): introduce a level of preprocessing indirection. That is, put the conditional compilation directives in a different source file, like so:
platform.c
#if defined(for_windows)
#include "platform_windows.c"
#elif defined(for_rpi)
#include "platform_rpi.c"
#else
#include "platform_arduino.c"
#endif
You then designate platform.c as a file to be built, but not (directly) any of the specific-platform files.
This solves your tool-presentation issue because when you are working on one of the platform-specific .c files, the editor is unlikely to be able to tell whether it would actually be included in a build or not.
Do note well that it is widely considered bad practice to #include files containing function implementations, or those not ending with an extension conventionally designating a header. I don't say otherwise about the above, but I would say that if the whole platform.c contains nothing else, then that's about the least bad variation that I can think of within the category.

Annotate functions with macros

I'm looking into Flutter's embedder API, because I might be calling it via Rust in an upcoming project, and I find this:
FLUTTER_EXPORT
FlutterResult FlutterEngineShutdown(FlutterEngine engine);
The FLUTTER_EXPORT part is (from what I can see) a macro defined as:
#ifndef FLUTTER_EXPORT
#define FLUTTER_EXPORT
#endif // FLUTTER_EXPORT
I'm by no means a C guru, but I have done my fair share of C programming; still, I have never had the opportunity to use something like this. What is it? I've tried to Google it, but I don't really know what to call it other than "annotation", which doesn't really feel like a perfect fit.
Flutter embedder.h
Also, as pointed out, these macros expand to nothing, so writing one any number of times affects nothing.
Another nice use (helpful when testing code), from here:
You can pass -DFLUTTER_EXPORT=SOMETHING while using gcc. This is useful for running your code for testing purposes without touching the codebase. It can also provide a different expansion based on a parameter passed at compile time, which can be used in different ways.
A significant part of my answer concerns the symbol visibility achieved through this empty macro: gcc provides a way to realize the same thing, as described here, by defining FLUTTER_EXPORT to __attribute__((visibility("default"))) (as IharobAlAsimi/nemequ mentioned), typically together with -fvisibility=hidden. The name also gives us an idea that it might be necessary to add the attribute __declspec(dllimport) (which means the symbol will be imported from a DLL). The gcc documentation on visibility linked there has a helpful example.
It can also be useful for attaching some kind of debug operation, like this (by which I mean that these empty macros can be used this way too, though the name suggests that this was not the intended use):
#ifdef FLUTTER_EXPORT
#define ...
#else
#define ...
#endif
Here the first #define would specify some printing or logging macro, and when FLUTTER_EXPORT is not defined, the second would expand to a blank statement such as do{}while(0).
This is going to be a bit rambling, but I wanted to talk about some stuff that came up in the comments in addition to your initial question.
On Windows, under certain circumstances the compiler needs to be explicitly told that certain symbols are to be publicly exposed by a DLL. In Microsoft's compiler (hereafter referred to as MSVC), this is done by adding a __declspec(dllexport) annotation to the function, so you would end up with something like
__declspec(dllexport)
FlutterResult FlutterEngineShutdown(FlutterEngine engine);
Alas, the declspec syntax is non-standard. While GCC does actually support it (IIRC it's ignored on non-Windows platforms), other conformant compilers may not, so it should only be emitted for compilers which support it. The path that the Flutter devs have taken is one easy way to do this; if FLUTTER_EXPORT isn't defined elsewhere, they simply define it to nothing, so that on compilers where __declspec(dllexport) is unnecessary the prototype you've posted would become
FlutterResult FlutterEngineShutdown(FlutterEngine engine);
But on MSVC you'll get the declspec.
Once you have the default value (nothing) set, you can start thinking about how to define special cases. There are a few ways to do this, but the most popular solutions would be to include a platform-specific header which defines macros to the correct value for that platform, or to use the build system to pass a definition to the compiler (e.g., -DFLUTTER_EXPORT="__declspec(dllexport)"). I prefer to keep logic in the code instead of the build system when possible, to make it easier to reuse the code with different build systems, so I'll assume that's the method for the rest of the answer, but you should be able to see the parallels if you choose the build-system route.
As you can imagine, the fact that there is a default definition (which, in this case, is empty) makes maintenance easier; instead of having to define every macro in each platform-specific header, you only have to define it in the headers where a non-default value is required. Furthermore, if you add a new macro you needn't add it to every header immediately.
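A header-only sketch of such a special case (the file name and the _MSC_VER test are my assumptions, not Flutter's actual layout, and this ignores the import case discussed below):
/* flutter_platform_msvc.h (hypothetical) -- included before the
   public header when building the DLL with MSVC. */
#ifndef FLUTTER_EXPORT
# if defined(_MSC_VER)
#  define FLUTTER_EXPORT __declspec(dllexport)
# endif
#endif
/* The public header's own #ifndef fallback then supplies the empty
   default everywhere else. */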
I think that's pretty much the end of the answer to your initial question.
Now, if we're not building Flutter, but instead using the header in a library which links to the Flutter DLL, __declspec(dllexport) isn't right. We're not exporting the FlutterEngineShutdown function in our code, we're importing it from a DLL. So, if we want to use the same header (which we do, otherwise we introduce the possibility of headers getting out of sync), we actually want to map FLUTTER_EXPORT to __declspec(dllimport). AFAIK this isn't usually necessary even on Windows, but there are situations where it is.
The solution here is to define a macro when we're building Flutter, but never define it in the public headers. Again, I like to use a separate header, something like
#define FLUTTER_COMPILING
#include "public-header.h"
I'd also throw some include guards in, and a check to make sure the public API wasn't accidentally included first, but I'm too lazy to type it here.
Then you can define FLUTTER_EXPORT with something like
#if defined(FLUTTER_COMPILING)
#define FLUTTER_EXPORT __declspec(dllexport)
#else
#define FLUTTER_EXPORT __declspec(dllimport)
#endif
You may also want to add a third case, where neither is defined, for situations where you're building the Flutter SDK into an executable instead of building Flutter as a shared library then linking to it from your executable. I'm not sure if Flutter supports that or not, but for now let's just focus on the Flutter SDK as a shared library.
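A sketch of such a three-way definition (FLUTTER_STATIC is a hypothetical name, not something Flutter actually defines; the fragment is also MSVC-centric, and non-Windows compilers would want the visibility handling discussed next):
#if defined(FLUTTER_STATIC)
  /* Static library: no decoration needed. */
# define FLUTTER_EXPORT
#elif defined(FLUTTER_COMPILING)
  /* Building the Flutter DLL itself: export the symbol. */
# define FLUTTER_EXPORT __declspec(dllexport)
#else
  /* Consuming the Flutter DLL: import the symbol. */
# define FLUTTER_EXPORT __declspec(dllimport)
#endif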
Next, let's look at a related issue: visibility. Most compilers which aren't MSVC masquerade as GCC; they define __GNUC__, __GNUC_MINOR__, and __GNUC_PATCHLEVEL__ (and other macros) to the appropriate values and, more importantly, if they're pretending to be GCC ≥ 4.2 they support the visibility attribute.
Visibility isn't quite the same as dllexport/dllimport. Instead, it's more like telling the compiler whether the symbol is internal ("hidden") or publicly visible ("default"). This is a bit like the static keyword, but while static restricts a symbol's visibility to the current compilation unit (i.e., source file), hidden symbols can be used in other compilation units but they're not exposed by the linker.
Hiding symbols which needn't be public can be a huge performance win, especially for C++ (which tends to expose a lot more symbols than people think). Obviously having a smaller symbol table makes the linking a lot faster, but perhaps more significant is that the compiler can perform a lot of optimizations which it can't or doesn't for public symbols. Sometimes this is as simple as inlining a function which otherwise wouldn't be, but another huge performance gain can come from the compiler being able to assume data from a caller is aligned, which in turn allows vectorization without unnecessary shuffling. Another might allow the compiler to assume that a pointer isn't aliased, or maybe a function is never called and can be pruned. Basically, there are lots of optimizations the compiler can only do if it knows it can see all the invocations of a function, so if you care about runtime efficiency you should never expose more than is necessary.
This is also a good chance to note that FLUTTER_EXPORT isn't a very good name. Something like FLUTTER_API or FLUTTER_PUBLIC would be better, so let's use that from now on.
By default, symbols are publicly visible, but you can change this by passing -fvisibility=hidden to the compiler. Whether or not you do this will determine whether you need to annotate public functions with __attribute__((visibility("default"))) or private functions with __attribute__((visibility("hidden"))), but I would suggest passing the argument so that if you forget to annotate something you'll end up with an error when you try to use it from a different module instead of silently exposing it publicly.
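Combining the two mechanisms might look roughly like this (a sketch only; a full version would also fold in the import/static cases from above):
#if defined(_WIN32)
# define FLUTTER_API __declspec(dllexport)
#elif defined(__GNUC__) && \
      (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 2))
  /* Meaningful when the library is built with -fvisibility=hidden. */
# define FLUTTER_API __attribute__((visibility("default")))
#else
# define FLUTTER_API
#endif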
Two other things also came up in the comments: debugging macros and function annotations. Because of where FLUTTER_EXPORT is used, we know it's not a debugging macro, but we can talk about them for a second anyways. The idea is that you can insert extra code into your compiled software depending on whether or not a macro is defined. The idea is something like this:
#if defined(DISABLE_LOGGING)
# define my_log_func(msg)
#else
# define my_log_func(msg) my_log_func_ex(msg)
#endif
This is actually a pretty bad idea; think about code like this, where the trailing semicolon after the macro call has been accidentally omitted:
if (foo)
    my_log_func("Foo is true!")
bar();
With the above definitions, if you compile with DISABLE_LOGGING defined, the call expands to nothing and you'll end up with
if (foo)
    bar();
Which probably isn't what you wanted (unless you're entering an obfuscated C competition, or trying to insert a backdoor). With logging enabled, the missing semicolon would at least be a compile error; with it disabled, the bug slips through silently.
Instead, what you usually want is (as coderredoc mentioned) basically a no-op statement:
#if defined(DISABLE_LOGGING)
# define my_log_func(msg) do{}while(0)
#else
# define my_log_func(msg) my_log_func_ex(msg)
#endif
You may end up with a compiler error in some weird situations, but compile-time errors are vastly preferable to difficult-to-find bugs like those the first version can produce.
Annotations for static analysis are another thing that was mentioned in the comments, and something I'm a huge fan of. For example, let's say we have a function in our public API which takes a printf-style format string. We can add an annotation:
__attribute__((format(printf, 2, 3)))
void print_warning(Context* ctx, const char* fmt, ...);
Now if you try to pass an int to a %f, or forget an argument, the compiler can emit a diagnostic at compile time just like with printf itself. I'm not going to go into each and every one of these, but taking advantage of them is a great way to get the compiler to catch bugs before they make it into production code.
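For instance (print_warning and Context as above; the exact warning text varies by compiler):
typedef struct Context Context;

__attribute__((format(printf, 2, 3)))
void print_warning(Context* ctx, const char* fmt, ...);

void demo(Context* ctx, const char* filename)
{
    /* gcc/clang warn here: '%f' has no matching double argument. */
    print_warning(ctx, "loading %s took %f seconds\n", filename);
}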
Now for some self-promotion. Pretty much all this stuff is highly platform-dependent; whether a feature is available and how to use it properly can depend on the compiler, compiler version, the OS, and more. If you like to keep your code portable, you end up with a lot of preprocessor cruft to get it all right. To deal with this, I put together a project called Hedley a while back. It's a single header you can drop into your source tree which should make it a lot easier to take advantage of this type of functionality without making people's eyes bleed when they look at your headers.

ZeroMemory Function giving me errors in windows.h?

basically I am programming on a Mac, but I'm using source code from a group at school that had "windows.h" included.
I did some research and apparently there is no replica of that file for OSX.
I saw an answer on a thread here that said it was possible to make a "dummy" windows.h file and just insert whatever #includes or function prototypes I needed. To do this I just went online and got the functions I needed from some Microsoft directories.
I proceeded to do that and everything was working fine until the ZeroMemory function gave me errors.
So, inside of my dummy "windows.h" file:
void ZeroMemory([in] PVOID Destination,[in] SIZE_T Length);
I get these errors:
Expected parameter declarator
Use of undeclared identifier 'in'
Expected ')'
Now, I have googled the function and its errors and I keep finding a bunch of code that just has this line of code in it, which doesn't really help much.
What I need to know is where do I go from here? Am I doing the right thing by creating this "dummy" windows.h file? Or is there another way to get around using windows.h?
The link I found the answer to use a dummy windows.h file is here.
I appreciate all the input, so if you have anything on your mind, please throw it down! Thanks so much everyone!
After changing some of the code according to the comments:
void ZeroMemory(PVOID Destination, SIZE_T Length);
I get these errors:
Unknown type name PVOID
Unknown type name SIZE_T
I was thinking there may be some definitions I am missing but these are TYPE names, so they must be coming out of something like a Struct? Correct me if I'm wrong please? :D
If your header has [in] annotations then you grabbed the wrong file, most likely the IDL file instead of the actual header. In the header it should be _In_ instead, which will be an empty macro. In any case, you'll still have problems because you'll be missing the definitions of things like SIZE_T as you discovered. Unless you want to go and replace every dependency you hit, I'd recommend just replacing the calls themselves with your own versions. For ZeroMemory(p,s), you should be able to replace it trivially with memset(p,0,s). This of course assumes you're only using trivial functionality in the Windows header. If you're using actual platform-specific stuff like windowing, input, etc. then you'll probably just need to get a machine or VM running Windows.
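A minimal sketch of that replacement approach (assuming this code base only needs these two names; the real Windows headers define far more):
/* mac_compat.h (hypothetical): just enough to stand in for windows.h */
#include <string.h>   /* memset */
#include <stddef.h>   /* size_t */

typedef void  *PVOID;
typedef size_t SIZE_T;

/* ZeroMemory is documented to behave like memset(dest, 0, length). */
#define ZeroMemory(Destination, Length) memset((Destination), 0, (Length))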
It is a very bad habit that developers on the Windows platform tend to fall into: including "windows.h" in simple applications that otherwise conform to standard C or C++.
The most correct option would be to encourage the other students (or the teacher?) to use only standard C or C++ header files when writing their applications. This will ensure that they do not use any Windows-API-specific functions.
You can, of course, create a windows.h and inline any trivial Windows functions (as MooseBoys answers, ZeroMemory can be trivially implemented with memset) to be able to compile simple programs without altering them, but sooner or later some program is going to use a Windows API with no easy or convenient equivalent in standard C/C++ or Core Foundation (on OSX, the equivalent framework for accessing windowing things).

Large C macros. What's the benefit?

I've been working with a large codebase written primarily by programmers who no longer work at the company. One of the programmers apparently had a special place in his heart for very long macros. The only benefit I can see to using macros is being able to write functions that don't need to be passed in all their parameters (which is recommended against in a best practices guide I've read). Other than that I see no benefit over an inline function.
Some of the macros are so complicated I have a hard time imagining someone even writing them. I tried creating one in that spirit and it was a nightmare. Debugging is extremely difficult, as a macro collapses N+ lines of code into one in the debugger (e.g., there was a segfault somewhere in this large block of code. Good luck!). I had to actually pull the macro out and run it un-macro-tized to debug it. The only way I could see the person having written these is by automatically generating them out of code written in a function after he had debugged it (or by being smarter than me and writing it perfectly the first time, which is always possible I guess).
Am I missing something? Am I crazy? Are there debugging tricks I'm not aware of? Please fill me in. I would really like to hear from the macro-lovers in the audience. :)
To me the best use of macros is to compress code and reduce errors. The downside is obviously in debugging, so they have to be used with care.
I tend to think that if the resulting code isn't an order of magnitude smaller and less prone to errors (meaning the macros take care of some bookkeeping details) then it wasn't worth it.
In C++, many uses like this can be replaced with templates, but not all. A simple example of useful macros is the event-handler macros of MFC -- without them, creating event tables would be much harder to get right, and the code you'd have to write (and read) would be much more complex.
If the macros are extremely long, they probably make the code short but efficient. In effect, he might have used macros to explicitly inline code or remove decision points from the run-time code path.
It might be important to understand that, in the past, such optimizations weren't done by many compilers, and some things that we take for granted today, like fast function calls, weren't valid then.
To me, macros are evil. With their many side effects, and the fact that in C++ you can get the same performance gains with inline, they are not worth the risk.
For example, see this short macro:
#define max(a, b) ((a)>(b)?(a):(b))
then try this call:
max(i++, j++)
It expands to ((i++)>(j++)?(i++):(j++)), so the larger argument ends up being incremented twice.
Another example: say you have
#define PLANETS 8
#define SOCCER_MIDDLE_RIGHT 8
if an error is thrown, it will refer to '8', but not to either of its meaningful representations.
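A sketch of the alternatives that avoid both problems (max_int is a hypothetical name; an inline function evaluates each argument exactly once, and an enum constant keeps its name visible to the compiler and many debuggers):
#include <stdio.h>

static inline int max_int(int a, int b) { return a > b ? a : b; }

enum { PLANETS = 8 };   /* unlike #define, the name survives for diagnostics */

int main(void)
{
    int i = 3, j = 4;
    printf("%d %d %d\n", max_int(i++, j++), i, j);   /* prints: 4 4 5 */
    return 0;
}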
I only know of two reasons for doing what you describe.
First is to force functions to be inlined. This is pretty much pointless, since the inline keyword usually does the same thing, and function inlining is often a premature micro-optimization anyway.
Second is to simulate nested functions in C or C++. This is related to your "writing functions that don't need to be passed in all their parameters" but can actually be quite a bit more powerful than that. Walter Bright gives examples of where nested functions can be useful.
There are other reasons to use of macros, such as using preprocessor-specific functionality (like including __FILE__ and __LINE__ in autogenerated error messages) or reducing boilerplate code in ways that functions and templates can't (the Boost.Preprocessor library excels here; see Boost.ScopeExit or this sample enum code for examples), but these reasons don't seem to apply for doing what you describe.
Very long macros can have performance drawbacks, like increased compiled binary size, and there are certainly other reasons for not using them.
For the most problematic macros, I would consider running the code through the preprocessor, and replacing the macro output with function calls (inline if possible) or straight LOC. If the macros exist for compatibility with other architectures/OSes, you might be stuck though.
Part of the benefit is code replication without the eventual maintenance cost - that is, instead of copying code elsewhere you create a macro from it and only have to edit it once...
Of course, you could also just make a method to be called but that is sort of more work... I'm against much macro use myself, just trying to present a potential rationale.
There are a number of good reasons to write macros in C.
Some of the most important are for creating configuration tables using x-macros, for making function like macros that can accept multiple parameter types as inputs and converting tables from human readable/configurable/understandable values into computer used values.
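A minimal x-macro sketch of such a configuration table (all names hypothetical): the table is written once and expanded twice, once for the enum and once for the human-readable strings.
#include <stdio.h>

#define COLOR_TABLE \
    X(RED,   0xFF0000) \
    X(GREEN, 0x00FF00) \
    X(BLUE,  0x0000FF)

#define X(name, value) COLOR_##name,
enum color { COLOR_TABLE COLOR_COUNT };
#undef X

#define X(name, value) #name,
static const char *color_names[] = { COLOR_TABLE };
#undef X

int main(void)
{
    for (int i = 0; i < COLOR_COUNT; i++)
        printf("%d = %s\n", i, color_names[i]);
    return 0;
}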
I can't really see a reason for people to write very long macros, except for the historic use of macros to force function inlining.
I would say that when debugging complex macros, (when writing X macros etc) I tend to preprocess the source file and substitute the preprocessed file for the original.
This allows you to see the C code generated, and gives you real lines to work with in the debugger.
I don't use macros at all. Inline functions serve every useful purpose a macro can. Macros allow you to do very weird and counterintuitive things, like building up identifiers from pieces (how does someone search for the identifier then?).
I have also worked on a product where a legacy programmer (who thankfully is long gone) also had a special love affair with macros. His 'custom' scripting language is the height of sloppiness. This was compounded by the fact that he wrote his C++ classes in C, meaning all class functions and variables were public. Anyway, he wrote almost everything in macros and variadic functions (another hideous monstrosity foisted on the world). So instead of writing a proper template class he would use a macro instead! He also resorted to macros to create factory classes, instead of normal code... His code is pretty much unmaintainable.
From what I have seen, macros can be used when they are small, are used declaratively, and don't contain moving parts like loops and other program-flow expressions. It's OK if the macro is one or at most two lines long and it declares an instance of something -- something that won't break during runtime. Also, macros should not contain class definitions or function definitions. If the macro contains code that needs to be stepped into using a debugger, then the macro should be removed and replaced with something else.
They can also be useful for wrapping custom tracing/debugging functionality, for instance when you want custom tracing in debug builds but not release builds.
Anyway, when you are working in legacy code like that, just be sure to remove a bit of the macro mess at a time. If you keep it up, eventually you will remove them all and make life a bit easier for yourself. I have done this in the past with especially messy macros. What I do is turn on the compiler switch to have the preprocessor generate an output file. Then I raid that file, copy the code, re-indent it, and replace the macro with the generated code. Thank goodness for that compiler feature.
Some of the legacy code I've worked with used macros very extensively in the place of methods. The reasoning was that the computer/OS/runtime had an extremely small stack, so that stack overflows were a common problem. Using macros instead of methods meant that there were fewer methods on the stack.
Luckily, most of that code was obsolete, so it is (mostly) gone now.
C89 did not have inline functions. If using a compiler with extensions disabled (which is a desirable thing to do for several reasons), then the macro might be the only option.
Although C99 came out in 1999, there was resistance to it for a long time; commercial compiler vendors didn't feel it was worth their time to implement C99. Some (e.g. MS) still haven't. So for many companies it was not a viable practical decision to use C99 conforming mode, even up to today in the case of some compilers.
I have used C89 compilers that did have an extension for inline functions, but the extension was buggy (e.g. multiple definition errors when there should not be), things like that may dissuade a programmer from using inline functions.
Another thing is that the macro version effectively forces the function to actually be inlined. The C99 inline keyword is only a compiler hint, and the compiler may still decide to generate a single instance of the function code which is linked like a non-inline function. (One compiler that I still use will do this if the function is non-trivial and returns void.)
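A sketch of that difference (SQUARE and square are hypothetical names):
/* C89-friendly: expanded textually at every use site, so it is always
   "inlined" -- at the cost of evaluating x twice. */
#define SQUARE(x) ((x) * (x))

/* C99: only a hint; the compiler may still emit one out-of-line copy. */
static inline int square(int x) { return x * x; }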

Typical C with C Preprocessor refactoring

I'm working on a refactoring tool for C with preprocessor support...
I don't know the kind of refactoring involved in large C projects and I would like to know what people actually do when refactoring C code (and preprocessor directives)
I'd also like to know whether some features that would be really interesting are missing from every tool, so that the refactoring has to be done completely manually... I've seen, for instance, that Xref could not refactor macros that are used as iterators (I don't know exactly what that means though)...
thanks
Anybody interested in this (specific to C), might want to take a look at the coccinelle tool:
Coccinelle is a program matching and transformation engine which provides the language SmPL (Semantic Patch Language) for specifying desired matches and transformations in C code. Coccinelle was initially targeted towards performing collateral evolutions in Linux. Such evolutions comprise the changes that are needed in client code in response to evolutions in library APIs, and may include modifications such as renaming a function, adding a function argument whose value is somehow context-dependent, and reorganizing a data structure. Beyond collateral evolutions, Coccinelle is successfully used (by us and others) for finding and fixing bugs in systems code.
Huge topic!
The stuff I need to clean up is contorted nests of #ifdefs. A refactoring tool would understand when conditional stuff appears in argument lists (function declarations or definitions), and improve that.
If it was really good, it would recognize that
#if defined(SysA) || defined(SysB) || ... || defined(SysJ)
was really equivalent to (assuming exactly one of SysA through SysL is defined in any given build):
#if !defined(SysK) && !defined(SysL)
If you managed that, I'd be amazed.
It would allow me to specify 'this macro is now defined - which code is visible' (meaning, visible to the compiler); it would also allow me to choose to see the code that is invisible.
It would handle a system spread across over 100 top-level directories, with varying levels of sub-directories under those. It would handle tens of thousands of files, with lengths of 20K lines in places.
It would identify where macro definitions come from makefiles instead of header files (aargh!).
Well, since it is part of the preprocessor... #include refactoring is a huge huge topic and I'm not aware of any tools that do it really well.
Trivial problems a tool could tackle:
Enforce consistent case and backslash usage in #includes
Enforce a consistent header guarding convention, automatically add redundant external guards, etc.
Harder problems a tool could tackle:
Finding and removing spurious includes.
Suggest the use of predeclarations wherever practical.
For macros... perhaps some sort of scoping would be interesting, where if you #define a macro inside a block, the tool would automatically #undef it at the end of a block. Other quick things I can think of:
A quick analysis of macro safety could be helpful, as a lot of people still don't know about the do { } while (0) idiom and other techniques.
Alternately, find and flag spots where expressions with side-effects are passed as macro arguments. This could possibly be really helpful for things like... asserts with unintentional side-effects.
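A sketch of the hazard behind that last point (next_id is a hypothetical helper):
#include <assert.h>
#include <stdio.h>

static int counter = 0;

static int next_id(void) { return ++counter; }   /* has a side effect */

int main(void)
{
    /* With NDEBUG defined, assert() expands to nothing, so next_id()
       is never called and its side effect silently disappears. */
    assert(next_id() > 0);
    printf("counter = %d\n", counter);   /* 1 normally, 0 with -DNDEBUG */
    return 0;
}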
Macros can often get quite complex, so I wouldn't try supporting much more than simple renaming.
I will tell you honestly that there are no good tools for refactoring C++ like there are for Java. Most of it will be painful search and replace, but this depends on the actual task. Look at Netbeans and Eclipse C++ plugins.
I've seen for instance that Xref could not refactor macros that are used as iterators (don't know exactly what that means though)
To be honest, you might be in over your head - consider if you are the right person for this task.
If you can handle reliable renaming of various types, variables and macros over a big project with an arbitrarily complex directory hierarchy, I want to use your product.
Just discovered this old question, but I wanted to mention that I've rescued the free version of Xrefactory for C, now named c-xrefactory, which manages to do some refactorings in macros such as rename macro, rename macro parameter. It is an Emacs plugin.
