I have a working module on Linux, and one of the clients wants it on Windows.
There is a very good discussion on a similar topic here (https://ask.slashdot.org/story/04/08/12/1932246/cygwin-in-a-production-environment). It leans towards avoiding Cygwin for production, but that discussion is about 13 years old; whatever issues existed back then, I would hope Cygwin has improved and matured enough since to be fit for production use.
The code compiles just fine and seems to work OK under Cygwin, so it is very tempting to take that forward rather than redoing it in native Windows code.
But if there are any known, unsolvable issues that make people avoid Cygwin in production, I would like to know about them.
The code makes heavy use of pthreads and non-blocking (no-wait) sockets.
I've used Cygwin a fair bit, and have found it mostly unproblematic. I am aware of some of the reported problems, but haven't experienced them myself. Some things on Cygwin are much slower than the same code on Linux -- I notice this most with directory scans, but that probably isn't the only thing. People complain about fork() being slow, but that isn't really a surprise, as 'forking' isn't a native concept in Windows. If you're just using fork() to launch subprocesses, then conceivably the whole fork/exec thing could selectively be replaced with calls to native Windows APIs.
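To give an idea of what that replacement might look like, here is a minimal, hypothetical sketch of launching and waiting for a child with the native CreateProcess() API instead of fork()/exec()/waitpid(). The command string, function name, and error handling are just placeholders, not code from any real project:

#include <windows.h>
#include <stdio.h>

/* Hypothetical sketch: roughly the native-Windows equivalent of
   fork(); exec("child.exe"); waitpid(...).  "child.exe" is a placeholder. */
int spawn_and_wait(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char cmdline[] = "child.exe";           /* CreateProcess wants a writable buffer */

    ZeroMemory(&si, sizeof si);
    si.cb = sizeof si;
    ZeroMemory(&pi, sizeof pi);

    if (!CreateProcess(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        fprintf(stderr, "CreateProcess failed: %lu\n", (unsigned long) GetLastError());
        return -1;
    }
    WaitForSingleObject(pi.hProcess, INFINITE);   /* roughly the waitpid() step */
    CloseHandle(pi.hProcess);
    CloseHandle(pi.hThread);
    return 0;
}

Whether that is worth doing depends on how much of the code relies on real fork() semantics (shared descriptors, copied state, and so on), which CreateProcess() does not give you.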
A potential limitation of Cygwin is that it requires Cygwin at run-time or, at least, a chunk of Cygwin infrastructure. MinGW might remove this restriction, but at the cost of leaving you to make a larger number of compatibility-related changes in your code (file locations, for example). The last time I looked, MinGW didn't have tooling as extensive as Cygwin, either, but it's probably good enough for many purposes.
I guess another possibility to consider these days is the Windows Subsystem for Linux (WSL) on Windows 10. I've found that code that builds for Cygwin usually builds and runs without changes on WSL, but I haven't really figured out what the relative advantages and disadvantages of Cygwin and WSL are.
I've not noticed problems with pthreads in Cygwin, MinGW, or WSL; although I guess any problems are likely to depend on the exact way you use threads. I can't comment on the no-wait socket issue, because that isn't something I've tried.
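For what it's worth, putting a socket into no-wait mode under Cygwin should just be the ordinary POSIX call; a minimal sketch (which I haven't tested under Cygwin myself) would be something like:

#include <fcntl.h>

/* Set an existing socket (or any file descriptor) to non-blocking mode.
   Returns 0 on success, -1 on failure.  Plain POSIX, so in principle
   identical on Linux and Cygwin. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}

After that, recv() and send() should fail with EAGAIN/EWOULDBLOCK instead of blocking, just as on Linux.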
Incidentally, both Cygwin and MinGW will allow you to call native Windows APIs, and other functions in DLLs, if you need to. So the possibility exists to create a sort of "hybrid" application that uses POSIX-type functions and also Win32 APIs. This might be useful if it turns out that some things are much faster with Win32 functionality. I'm not sure this is possible with WSL.
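As a purely illustrative sketch of such a hybrid under Cygwin (the specific calls are arbitrary, and mixing windows.h with POSIX headers can occasionally cause clashes, so treat this as a sketch rather than a recipe):

#include <stdio.h>
#include <unistd.h>      /* POSIX side */
#include <windows.h>     /* Win32 side: available under Cygwin and MinGW */

int main(void)
{
    printf("POSIX pid: %ld\n", (long) getpid());                    /* POSIX call */
    printf("Win32 uptime: %lu ms\n", (unsigned long) GetTickCount()); /* Win32 call */
    Sleep(50);           /* Win32 sleep, in milliseconds */
    sleep(1);            /* POSIX sleep, in seconds */
    return 0;
}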
I should add that my comment about time-consuming updates refers as much to the need to perform a full Windows update as to the apt-get updates required before running a WSL script that will run past midnight. Running a separate memory-recovery script every 4 hours or so is a partial solution. I have "only" 6 GB of RAM.
I downloaded a library from http://sourceforge.net/projects/xsock/.
The INSTALL file lists the steps to build this library.
I changed directory to xsock/libxsock and typed ./configure in the terminal.
Nothing happened... How do I solve this?
`cd' to the directory containing the package's source code and type
`./configure' to configure the package for your system. If you're
using `csh' on an old version of System V, you might need to type
`sh ./configure' instead to prevent `csh' from trying to execute
`configure' itself.
Running `configure' takes a while. While running, it prints some
messages telling which features it is checking for.
Type `make' to compile the package.
...
4...
The library is broken, and cannot be built as distributed. A number of autoconf/automake files are missing from the archive.
Given that the library appears to have been primarily developed on Windows systems, it seems likely to me that the UNIX parts of the build process for this library have not been maintained, or may never have worked at all. My recommendation is that you find another library — this one seems to be largely unmaintained, and the code quality seems rather low.
I'm trying to generate Spotify playlists (not text-based ones) and found this on GitHub: https://github.com/liesen/spotify-api-server
I have no experience in C programming, so I don't really know where to start. Are there any relevant tutorials/articles on setting up a C server similar to the one I'm trying to set up, at a pretty basic level?
I have a sneaking suspicion that building and using this C program isn't actually what you want (http://developer.spotify.com/en/spotify-apps-api/overview/ might be easier for you to get started with), but I'm going to help you anyway.
Most C projects have a README file that tells you how to build them. In this case, it says:
Make sure you have the required libraries
libspotify > 9
subversion (libsvn-dev) and its dependency, libapr
libevent >= 2.0
jansson >= 2.0
Update account.c with your credentials. A Spotify premium account is necessary.
Copy appkey.c into the directory and run make.
There are a few extra things that the README doesn't say but that an experienced developer will be able to guess at:
libsvn-dev and libapr are the names of Ubuntu packages (I think), so it is probably expecting your development machine to be running Ubuntu. You should probably install build-essential as well (on a new machine, I would usually run apt-get install ${*-dev-packages} and then apt-get build-dep ${*-dev-packages}; build-dep might download some packages that you don't need, but bandwidth is cheap, and debugging missing packages is a pain in the ass).
When it says libspotify > 9, it normally means "greater than 9 but less than 10" (if the first number in a C library's version changes, it normally means "BEWARE: we broke things"). If you get build errors about the wrong number of arguments to functions, this is probably why.
It says "run make" so there will be a file called Makefile somewhere. You need to cd into the directory that contains Makefile before typing make
make will probably produce an executable file somewhere. I usually find these by running ls and looking for items highlighted in green. If I can't find anything that way, I will read Makefile and note that "all" depends on "server" so I would look for an executable called "server".
You are jumping in at the deep end here (building someone else's experimental package as your first C program). If you get errors that you don't understand, it's not because you're stupid: it's because C is a brutal and archaic language, and it wasn't designed as a teaching language like Python was, or as a beginner-friendly language like JavaScript. Once you get used to it, you start to see the steam-train-like beauty of the language; the pain subsides to a dull ache, but it never truly goes away.
I'm really sorry if this sounds kind of dumb. I just finished reading K&R and worked on some of the exercises. This summer, for my project, I'm thinking of re-implementing a Linux utility to expand my understanding of C further, so I downloaded the source for GNU tar and sed, as they both seem interesting. However, I'm having trouble understanding where it all starts, where the main implementation is, where all the weird macros came from, etc.
I have a lot of time, so that's not really an issue. Am I supposed to familiarize myself with the GNU toolchain (i.e. make, binutils, ...) first in order to understand the programs? Or maybe I should start with something a bit smaller (if there is such a thing)?
I have little bit of experience with Java, C++ and python if that matters.
Thanks!
The GNU programs are big and complicated. The size of GNU Hello World shows that even the simplest GNU project needs a lot of code and configuration around it.
The autotools are hard to understand for a beginner, but you don't need to understand them to read the code. Even if you modify the code, most of the time you can simply run make to compile your changes.
To read code, you need a good editor (vim, Emacs) or IDE (Eclipse) and some tools to navigate through the source. The tar project contains a src directory; that is a good place to start. A program always starts with the main function, so do
grep main *.c
or use your IDE to search for this function. It is in tar.c. Now, skip all the initialization stuff, until
/* Main command execution. */
There, you see a switch for subcommands: if you pass -x it does this, if you pass -c it does that, etc. This is the branching structure for those commands. If you want to know what these macros are, run
grep EXTRACT_SUBCOMMAND *.h
There you can see that they are listed in common.h.
Below EXTRACT_SUBCOMMAND you see something funny:
read_and (extract_archive);
The definition of read_and() (again obtained with grep):
read_and (void (*do_something) (void))
The single parameter is a function pointer, used like a callback, so read_and will presumably read something and then call the function extract_archive. Again, grep for it and you will see this:
if (prepare_to_extract (current_stat_info.file_name, typeflag, &fun))
  {
    if (fun && (*fun) (current_stat_info.file_name, typeflag)
        && backup_option)
      undo_last_backup ();
  }
else
  skip_member ();
Note that the real work happens when calling fun. fun is again a function pointer, which is set in prepare_to_extract. fun may point to extract_file, which does the actual writing.
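If that function-pointer idiom is new to you, here is a stripped-down, made-up illustration of the same read_and() shape (this is not the real tar code, just the pattern):

#include <stdio.h>

static void extract_archive(void) { puts("extract one member"); }
static void list_archive(void)    { puts("list one member");    }

/* Simplified stand-in for tar's read_and(): loop over the "members" and
   hand each one to whatever callback the subcommand selected. */
static void read_and(void (*do_something) (void))
{
    int i;
    for (i = 0; i < 3; i++)       /* pretend the archive has three members */
        do_something();
}

int main(void)
{
    read_and(extract_archive);    /* roughly what the -x branch does */
    read_and(list_archive);       /* roughly what the -t branch does */
    return 0;
}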
I hope this has walked you through a good part of it and shown you how I navigate through source code. Feel free to contact me if you have related questions.
The problem with programs like tar and sed is twofold (this is just my opinion, of course!). First of all, they're both really old. That means they've had multiple people maintain them over the years, with different coding styles and different personalities. For GNU utilities, it's usually pretty good, because they usually enforce a reasonably consistent coding style, but it's still an issue. The other problem is that they're unbelievably portable. Usually "portability" is seen as a good thing, but when taken to extremes, it means your codebase ends up full of little hacks and tricks to work around obscure bugs and corner cases in particular pieces of hardware and systems. And for programs as widely ported as tar and sed, that means there's a lot of corner cases and obscure hardware/compilers/OSes to take into account.
If you want to learn C, then I would say the best place to start is not trying to study code that others have written. Rather, try to write code yourself. If you really want to start with an existing codebase, choose one that's being actively maintained where you can see the changes that other people are making as they make them, follow along in the discussions on the mailing lists and so on.
With well-established programs like tar and sed, you see the result of the discussions that would've happened, but you can't see how software design decisions and changes are being made in real-time. That can only happen with actively-maintained software.
That's just my opinion of course, and you can take it with a grain of salt if you like :)
Why not download the source of the coreutils (http://ftp.gnu.org/gnu/coreutils/) and take a look at tools like yes? Less than 100 lines of C code and a fully functional, useful and really basic piece of GNU software.
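The whole of yes boils down to something like the sketch below (the real GNU yes.c also handles multiple arguments, --help/--version and write errors, so this is only an approximation, not the actual source):

#include <stdio.h>

int main(int argc, char *argv[])
{
    const char *msg = (argc > 1) ? argv[1] : "y";   /* yes prints "y" by default */
    for (;;)
        puts(msg);                                  /* repeat forever */
}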
GNU Hello is probably the smallest, simplest GNU program and is easy to understand.
I know it is sometimes a mess to navigate through C code, especially if you're not familiar with it. I suggest you use a tool that will help you browse through the functions, symbols, macros, etc. Then look for the main() function.
You need to familiarize yourself with the tools, of course, but you don't need to become an expert.
Learn how to use grep if you don't know it already and use it to search for the main function and everything else that interests you. You might also want to use code browsing tools like ctags or cscope which can also integrate with vim and emacs or use an IDE if you like that better.
I suggest using ctags or cscope for browsing. You can use them with vim/emacs. They are widely used in the open-source world.
They should be in the repositories of every major Linux distribution.
Making sense of some code which uses a lot of macros, utility functions, etc, can be hard. To better browse the code of a random C or C++ software, I suggest this approach, which is what I generally use:
Install Qt development tools and Qt Creator
Download the sources you want to inspect, and set them up for compilation (usually just ./configure for GNU stuff).
Run qmake -project in the root of the source directory, to generate a Qt .pro file for Qt Creator.
Open the .pro file in Qt Creator (do not use shadow build, when it asks).
Just to be safe, in Qt Creator Projects view, remove the default build steps. The .pro file is just for navigation inside Qt Creator.
Optional: set up custom build and run steps, if you want to build and run/debug under Qt Creator. Not needed for navigation only.
Use Qt Creator to browse the code. Note especially the locator (kb shortcut Ctrl+K) to find stuff by name, and "follow symbol under cursor" (kb shortcut F2), and "find usages" (kb shortcut Ctrl-Shift-U).
I had to take a look at "sed" just to see what the problem was; it shouldn't be that big. I looked, and I see what the issue is, and I feel like Charlton Heston catching first sight of a broken statue on the beach. All of what I'm about to describe for "sed" might also apply to "tar", but I haven't looked at it (yet).
A lot of GNU code got seriously grunged up - to the point of unmaintainable morbid legacy - for reasons I don't know. I don't know exactly when it happened, maybe late 1990's or early 2000's, but it was like someone flipped a switch and suddenly, all the nice modular mostly self-contained code widgets got massively grunged with all sorts of extraneous entanglements having little or no connection to what the application itself was trying to do.
In your case, "sed": an entire library got (needlessly) dragged in with the application. This was the case at least as early as version 4.2 (the last version predating your query), probably before that - I'd have to check.
Another thing that got grunged up was the build system (again) to the point of unmaintainability.
So, you're really talking about legacy rescue here.
My advice ... which is generic for any codebase that's been around a long time ... is to dig as deep as you can and go back to its earliest forms first; and to branch out and look at other "sed"'s - like those in the UNIX archive.
https://www.tuhs.org/Archive/
or in the BSD archive:
https://github.com/freebsd
https://github.com/weiss/original-bsd
(the second one goes deeper into early BSD in its earlier commits.)
Many of the "sed"'s on the GNU page - but not all of them - may be found under "Downloads" as a link "mirrors" on the GNU sed page:
https://www.gnu.org/software/sed/
Version 1.18 is still intact. Version 1.17 is implicitly intact, since there is a 1.17-to-1.18 diff present there. Neither version has all the extra stuff piled on top of it. It's more representative of what GNU software looked like before it became knotted up with all the entanglements.
It's actually pretty small - only 8863 lines for the *.c and *.h files, in all. Start with that.
For me the process of analysis of any codebase is destructive of the original and always entails a massive amount of refactoring and re-engineering; and simplification coming from just writing it better and more natively, while yet keeping or increasing its functionality. Almost always, it is written by people who only have a few years' experience (by which I mean: less than 20 years, for instance) and have thus not acquired full-fledged native fluency in the language, nor the breadth of background to be able to program well.
For this, if you do the same, it's strongly advised that you have some kind of test suite already in place, or add one. There's one in the version 4.2 software, for instance, though it may be stress-testing new capabilities added between 1.18 and 4.2. Just be aware of that. (So, it might require reducing the test suite to fit 1.18.) Every change you make has to be validated by whatever tests you have in your suite.
You need to have native fluency in the language ... or else the willingness and ability to acquire it by carrying out the exercise and others like it. If you don't have enough years behind you, you're going to hit a soft wall. The deeper you go, the harder it might be to move forward. That's an indication that you're not experienced enough yet, and that you don't have enough breadth. So, this exercise then becomes part of your learning experience, and you'll just have to plod through.
Because of how early the first versions date from, you will have to do some rewriting anyhow, just to bring the code up to standard. Later versions can be used as a guide for this process. At a bare minimum, it should be brought up to C99, as this is virtually mandated as part of POSIX. In other words, you should be at least as far up to date as the present century!
Just the challenge of getting it to be functional will be exercise enough. You'll learn a lot of what's in it, just by doing that. The process of getting it to be operational is establishing a "baseline". Once you do that, you have your own version, and you can start with the "analysis".
Once a baseline is established, then you can proceed full throttle forward with refactoring and re-engineering. The test suite helps to provide cover against stumbles and inserted errors. You should keep all the versions that you have (re)made in a local repository so that you can jump back to earlier ones, in case you need to track down the sudden emergence of test failures or other bugs. Some bugs, you may find, were rooted all the way back in the beginning (thus: the discovery of hidden bugs).
After you have the baseline (re)written to your satisfaction, then you can proceed to layer in the subsequent versions. On GNU's archive, 1.18 jumps straight to 2.05. You'll have to make a "diff" between the two to see where all the changes were, and then graft them into your version of 1.18 to get your version of 2.05. This will help you better understand both the issues that the changes made addressed, and what changes were made.
At some point you're going to hit GNU's Grunge Wall. Version 2.05 jumped straight to 3.01 in GNU's historical archive. Some entanglements started slipping in with version 3.01. So, it's a soft wall we have here. But there's also an early test suite with 3.01, which you should use with 1.18, instead of 4.2's test suite.
When you hit the Grunge Wall, you'll see directly what the entanglements were, and you'll have to decide whether to go along for the ride or cast them aside. I can't tell you which direction is the rabbit hole, except that SED has been perfectly fine for a long time, most or all of it is what is listed in and mandated by the POSIX standard (even the current one), and what's there before version 3 serves that end.
I ran diffs. Between 2.05 and 3.01, the diff file is 5000 lines. OK. That's (mostly) fine and is natural for code that's in development, but some of it may be coming from the soft Grunge Wall. Running a diff on 3.01 versus 4.2 yields a diff file that is over 60000 lines. You need only ask yourself: how can a program that's under 10000 lines - and that abides by an international standard (POSIX) - produce 60000 lines of differences? The answer is: that's what we call bloat. So, between 3.01 and 4.2, you're witnessing a problem that is very common to code bases: the rise of bloat.
So, that pretty much tells you which direction ("go along for the ride" versus "cast it aside") is the rabbit hole. I'd probably just stick with 3.01, and do a cursory review of the differences between 3.01 and 4.2 and of the change logs to get an overview of what the changes were, and just leave it at that, except maybe to find a different way to write in what they thought was necessary to change, if the reason for it was valid.
I've done legacy rescue before - before the term "legacy" was even in most people's vocabulary - and am quick to recognize the hallmark signs of it. This is the kind of process one might go through.
We've seen it happen with some large codebases already. In effect, the superseding of X11 by Wayland was a massive exercise in legacy rescue. It's also possible that the ongoing superseding of GNU's gcc by clang may be considered an instance of that.
I'm looking for a tool that, given a bit of C, will tell you what symbols (types, preprocessor definitions, functions, etc.) are used from a given header file. I'm doing a port of a large driver from Solaris to Windows, and figuring out where things are coming from is getting difficult, so this would be a huge help. Any ideas?
Edit: Not an absolute requirement, but tools that work on Windows would be a plus.
Edit #2: To clarify what I'm trying to do, I have a codebase I'm trying to port, which brings in a large number of headers. What I'd like is a tool that, given foo.c, will tell me which symbols it uses from bar.h.
I like KScope, which copes with very large projects.
(KScope screenshot: http://img110.imageshack.us/img110/4605/99101zd3.png)
I use the following on both Linux and Windows:
gvim + ctags + cscope.
The same environment will work on Solaris as well, but this of course forces you to use vim as your editor; I'm pretty sure that emacs can work with both ctags and cscope too.
You might want to give vim a try: it's a bit hard at first, but soon you won't be able to work any other way. The most efficient editor (IMHO).
Comment reply:
Look into the cscope man page:
...
Find functions called by this function:
Find functions calling this function:
...
I think it's exactly what you are looking for... Please clarify if not.
Comment reply 2:
OK, now I understand you. The tools I suggested can help you understand the code flow and find where a certain symbol is defined, but that's not what you are looking for.
This isn't what you asked for, but since we are talking: I have some experience with porting and drivers (feel free to ignore).
It seems like the compiler is good enough for your task. You just start with the original file and let the compiler find what is missing; with a lot of empty stubs, you will get your code to compile.
At least in the beginning, I suggest you create a lot of stubs and modify the original code as little as possible; later on, once you get it working, you can optimize.
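As a purely hypothetical example of that stub-first approach (every name below is made up, not taken from any real Solaris or Windows header):

/* solaris_compat_stubs.h - made-up stand-in for a Solaris-only header.
   Each stub only has to satisfy the compiler; the real Windows code
   gets filled in later, one stub at a time. */
#ifndef SOLARIS_COMPAT_STUBS_H
#define SOLARIS_COMPAT_STUBS_H

typedef struct dev_ctx dev_ctx_t;                 /* opaque stand-in for the original type */

static inline int ddi_attach_stub(dev_ctx_t *ctx) /* no-op stub, hypothetical name */
{
    (void) ctx;
    return 0;
}

#endif /* SOLARIS_COMPAT_STUBS_H */

The point is just to reach a state where everything compiles; after that, the remaining work is a finite list of stubs to replace with real Windows code.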
It might be more complex depending on the type of driver you are porting (I'm assuming a kernel driver); the Windows and Solaris subsystems are not that alike. We do have a driver working on both Solaris and Windows, but it was designed to be multi-platform from the beginning.
emacs and etags.
And I leverage make to run the tag indexing for me - that way I can index a large project with one command. I've been thinking about building a master index and separate per-module indices, but haven't gotten around to implementing this yet...
#Ilya: Would pistols at dawn be acceptable?
Try doxygen; it can produce graphs and/or HTML, and it is highly customizable.