Performing BLAST/Smith-Waterman searches directly from my application - C

I'm working on a small application and thinking about integrating BLAST or other local alignment searches into it. My searching has only turned up programs that need to be installed and called as external programs.
Is there a way short of me implementing it from scratch? Any pre-made library perhaps?

Does it have to be in C, or would C++ also be OK? If so, you might want to look at the SeqAn library here.

This is a topic which also has to do with reproducibility of results: it is always better to use the raw BLAST binary provided by NCBI or UCSC, because it will make your results easier for other scientists to reproduce and will save you a lot of time spent on writing tests (more time than you can imagine).
For day-to-day work I have often used exonerate, a tool written in C which can do both global and local alignment, has a simple Unix-like interface, and doesn't require you to format your input as BLAST does.
Moreover, keep in mind that people usually use a combination of makefiles and scripts to define a pipeline, instead of calling everything from a script: most programming languages are not well suited to defining pipelines, while automated build tools like Make are not useful for scripting tasks. Have a look at these examples: http://skam.sourceforge.net/skam-intro.html http://swc.scipy.org/lec/build.html

I just stumbled across the thing I would have wanted: The NCBI C++ Toolkit. Thanks for all the suggestions though.

The BLAST algorithm was implemented ~20 years ago; it has since grown into a very large piece of software and I cannot imagine it can be easily implemented from scratch. You can try to learn about it by looking at the sources of the 'blastall' program in the NCBI toolkit.
A simpler pairwise algorithm (Smith-Waterman, Needleman-Wunsch) should be easier to implement:

Computational Molecular Biology: An Introduction has code for Smith-Waterman and other dynamic programming alignment algorithms.
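To get a sense of the scale involved, the core of a pairwise aligner is a small dynamic-programming recurrence. Here is a minimal, illustrative Smith-Waterman scoring sketch in C; the scoring values and the size limit are arbitrary assumptions for the example, not taken from any particular tool or book:

/* Minimal Smith-Waterman local alignment score (no traceback).
   Scoring scheme is an illustrative assumption: match +2, mismatch -1, gap -2. */
#include <stdio.h>
#include <string.h>

#define MATCH     2
#define MISMATCH -1
#define GAP      -2
#define MAXLEN  127   /* keep the demo table small */

static int max4(int a, int b, int c, int d)
{
    int m = a;
    if (b > m) m = b;
    if (c > m) m = c;
    if (d > m) m = d;
    return m;
}

static int smith_waterman(const char *s, const char *t)
{
    size_t n = strlen(s), m = strlen(t);
    static int H[MAXLEN + 1][MAXLEN + 1];
    int best = 0;

    if (n > MAXLEN || m > MAXLEN)
        return -1;

    memset(H, 0, sizeof H);
    for (size_t i = 1; i <= n; i++) {
        for (size_t j = 1; j <= m; j++) {
            int sub = (s[i - 1] == t[j - 1]) ? MATCH : MISMATCH;
            H[i][j] = max4(0,
                           H[i - 1][j - 1] + sub,   /* match / mismatch */
                           H[i - 1][j] + GAP,       /* gap in t */
                           H[i][j - 1] + GAP);      /* gap in s */
            if (H[i][j] > best)
                best = H[i][j];
        }
    }
    return best;
}

int main(void)
{
    printf("score = %d\n", smith_waterman("ACACACTA", "AGCACACA"));
    return 0;
}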

I use NetBLAST through the blastcl3 client binary. I believe that the blastcl3 binary is a pretty thin client for the NetBLAST web service.
If so, it shouldn't be too hard to sniff the packets and implement your own client. Depending on your use case, this might be faster/easier than implementing your own alignment algorithm. It does, however, introduce a dependency on NCBI's web services.
http://www.ncbi.nlm.nih.gov/staff/tao/URLAPI/netblast.html

I posted a similar question (running BLAST (bl2seq) without creating sequence files)
Basically, the answer I came up with was running this command:
bl2seq -i <(echo sequence1) -j <(echo sequence2) -p blastn
That uses process substitution to feed the output of each echo command to the bl2seq (BLAST 2 sequences) program.
But I couldn't get it to work by calling system from Python.

Related

Mixing OCaml and C: is it worth the pain?

I am faced with the task of building a new component to be integrated into a large existing C codebase. The component is essentially a kind of compiler, and will be complicated enough that I would like to write it in OCaml (for reasons along the lines of those given here). I know that OCaml-C interaction is possible (as per the manual and this tutorial), but it looks somewhat painful.
What I'd like to know is whether others here have attempted large-scale integration of OCaml and C code, what were some of the unexpected gotchas they found, and whether at the end of the day they concluded that they would have been better off just writing the new code in C.
Note, I'm not trying to start a debate about the merits of functional versus imperative programming: let's just say we assume that OCaml happens to be the right tool for the job I have in mind, and the potential difficulty in integration is the only issue. I also don't have the option of rewriting the rest of the codebase.
To give a little more detail about the task: the component I need to implement is a certain kind of query optimizer that incorporates some research ideas my group at UC Davis is working on, and will be integrated into PostgreSQL so that we can run experiments. (A query optimizer is, essentially, a compiler.) The component would be invoked from C code, would function mostly independently but would make a certain number of calls to other PostgreSQL components to retrieve things like system catalog information, and would construct a complex C data structure (representing a physical query plan) as output.
Apologies for the somewhat open-ended question, but I'm hoping the community might be able to save me a little trouble :)
Thanks,
TJ
Great question. You should be using the better tool for the job.
If in fact your intention is to use the better tool for the job (and you are sure lex and yacc are going to be a pain), then I have something to share with you: it's not painful at all to call OCaml from C, and vice versa. Most of the time I've been writing OCaml calling C, but I have written a few functions the other way; they've mostly been debug functions that don't return a result. Really, the calls back and forth come down to packing and unpacking the OCaml value type on the C side. The tutorial you mention covers all of that, and covers it very well.
I disagree with Ron Savage's remark that you have to be an expert in the language. I recall starting out where I work and, within a few months, without knowing what a "functor" was, being able to call C and writing a thousand lines of C for numerical recipes and abstract data types. There were some hiccups (not with unpacking types, but with garbage collection of an abstract data type), but it wasn't bad at all. Most of the inner loops in the project are written in C, taking advantage of SSE, external libraries (LAPACK), tighter optimized loops, and some inlined hand-optimized assembly.
I think you might need to be experienced with designing a large project and demarcating functional and imperative sections. I would really assess how much OCaml you are going to be writing, and what kind of values you want to pass to C. I say this because I'd be wary of recommending that someone pass a recursive data structure from OCaml to C; it would mean a lot of unpacking of tuples and their contents, and thus a lot of opportunity for confusion and bugs.
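For what it's worth, the packing and unpacking the tutorial describes is quite mechanical for simple values. A minimal sketch of a C stub callable from OCaml (the function name is made up for illustration):

/* Declared on the OCaml side as:  external add_one : int -> int = "c_add_one"
   CAMLparam/CAMLreturn register values with the OCaml garbage collector,
   and Int_val/Val_int unbox and re-box the OCaml int. */
#include <caml/mlvalues.h>
#include <caml/memory.h>

CAMLprim value c_add_one(value v)
{
    CAMLparam1(v);
    CAMLreturn(Val_int(Int_val(v) + 1));
}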
I once wrote a reasonably complex OCaml-C hybrid program. I was frustrated by what I found to be inadequate documentation, and I ended up spending too much time dealing with garbage collection issues. However, the resulting program worked and was fast.
I think there is a place for OCaml-C integration, but make sure it is worth the hassle. It might be simpler to have the programs communicate over a socket (assuming such IO operations won't eliminate the performance you want). It might also be more sane to just write the whole thing in C.
Interoperability is the Achilles' heel of standalone implementations of statically typed languages, particularly those without JIT compilation like OCaml. My own experience, having used OCaml for over 5 years, is that the only reliable bindings are across simple APIs that do little more than pass large arrays, e.g. LAPACK. Even slightly more complicated bindings like those to FFTW took years to stabilize, and others, like OpenGL and GLU, remain an unsolved problem. In particular, I found major bugs in binding code written by two of the authors of the OCaml compiler. If they cannot get it right, there is little hope for the rest of us...
However, all is not lost. The solution is simply to use looser bindings. Rather than handling interoperability at the C level with a low-level type-unsafe interface, use a high-level interface like XML-RPC with string passing or even over sockets. This is much easier to get right and, as you say, will let you leverage the enormous practical benefits offered by OCaml for this application.
My rule of thumb is to stick with the language / model / style used in the existing code-base, so that future maintenance developers inherit a consistent and understandable set of application code.
The only way I could justify something like what you are suggesting would be if:
You are an Expert at OCaml AND a Novice at C (so you'll be 20x as productive)
You have successfully integrated it with a C library before (apparently not)
If you are at all more familiar with C than OCaml, you've just lost any "theoretical" gain from OCaml being easier to use when writing a compiler - plus it seems as though you will have more peers familiar with C around you than with OCaml.
That's my "grumpy old coder" 2 cents (which used to only cost a penny!).

What language should we use to let people extend our terminal/sniffer program?

We have a very versatile terminal/sniffer application which can do all sorts of things with TCP, UDP and serial connections.
We are looking to make it extensible -- i.e., allow people to write their own protocol parsers, highlighters, etc.
We created a C-like language for extending the product, and then discovered that for some coders, this presents a steep learning curve.
We are now pondering the question: Should we stick to C or go with something like Ruby or Lua?
C is beautiful for low-level stuff (like parsing binary data), because it supports pointers. But for exactly that reason, it can be tough to learn.
Ruby (etc) are easy to learn, but don't have pointers, so anything that has to do with parsing binary data gets ugly very fast.
What do you think? For extending a product that parses binary data -- Ruby/Lua or C/C++?
Would be great if you could give some background when you respond -- especially if you've done something similar.
Wireshark, the "world's foremost network protocol analyzer", is also a packet sniffer/analyzer, formerly also called Ethereal. It uses Lua to enable writing custom dissectors and taps, see the manual.
However, note that I have not used it, so I cannot tell how nice/effective/easy to learn the API is.
Like TCL, Lua was designed to be tightly integrated with an application. Personally, I find Lua's syntax and idioms to be much easier to deal with than TCL.
Lua is easy to integrate with an existing system, and easy to extend. It is also fairly easy to create safe sandboxes in which user-supplied code can run without full access to the innards of your product.
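To give a feel for how small the embedding surface is, here is a minimal sketch of a C host that registers one helper function and runs a user script; the byte_at() helper and parser.lua are made-up names for illustration, not part of any real product:

/* Minimal sketch of embedding Lua in a C host application. */
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>
#include <stdio.h>

static const unsigned char packet[] = { 0xde, 0xad, 0xbe, 0xef };

/* Exposed to scripts: byte_at(i) returns the i-th captured byte (0-based). */
static int byte_at(lua_State *L)
{
    int i = (int)luaL_checkinteger(L, 1);
    if (i < 0 || i >= (int)sizeof packet)
        return luaL_error(L, "index out of range");
    lua_pushinteger(L, packet[i]);
    return 1;                               /* one return value */
}

int main(void)
{
    lua_State *L = luaL_newstate();
    luaL_openlibs(L);                       /* standard libraries */
    lua_register(L, "byte_at", byte_at);    /* make the helper visible to scripts */

    if (luaL_dofile(L, "parser.lua"))       /* run a user-supplied parser */
        fprintf(stderr, "script error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}

A sandbox would go further: load only a hand-picked set of standard libraries instead of calling luaL_openlibs, and keep the registered helpers narrow.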
If you have an API written, does it make a difference? The person using the C-like API would only have to understand the difference between passing by value and passing by reference.
Your core does one thing very well, so fine, let it be that way. I think you should create an API based on stdin/stdout, in keeping with good Unix design. Then anyone can extend it in any language of their choice.
Tcl was designed with the goal of allowing scripting for C programs, so it would be much easier to implement.
http://en.wikipedia.org/wiki/Tcl#Interfacing_with_other_languages
I second Johan's idea. In the past, when I had to do something like this, I stuck to C language APIs and people were restricted to using C only. But looking at it now, I realize it would have been more efficient if we had done it the way Johan describes.
PS: And by coincidence it was a protocol testing app using a packet sniffer.
Perl, sed, awk, lex, ANTLR, ... These are languages I'm somewhat familiar with that I'd like to write something like this in. It depends on the data flow, though.
It's really hard to say what the correct thing to use is. Something that I don't think anyone else has mentioned is to keep in mind that the scripts will have bugs. It's very easy to design something like this in such a way that bugs in the scripts (especially run-time errors) either just show up as "error in script" or kill the whole system.
You should ensure that the scripts are unit testable and that failures are reproducible.
I don't think it matters what you do as long as you do one thing: drop the in-house language. It sounds like you chose to make C into a scripting language. One issue I see with this is that it will look familiar to C programmers but not be the same. I can't imagine you have mimicked the semantics of C closely enough to make existing C programmers comfortable. And as you have mentioned, others will find it hard to learn.
The company I am working at has developed its own language. It uses XML for structure, so parsing is easy. The language grows "as needed," meaning that if a feature is missing then it will be added. I'm pretty sure it went from an XML database to something that needed control flow. My point is that if you aren't thinking about building it as a language, then you'll unintentionally limit what users can do with it.
Personally, I've been looking at how I can get the company to start taking advantage of Lua, and specifically Lua for several reasons. Lua was developed as a general-purpose extension language. It interfaces easily with C, and with other languages including Python and Ruby. It is small and simple enough for use by non-programmers (not really needed in your case). It is simple enough to replace XML, INI... for configuration settings and powerful enough to replace the need for another programming language.
http://www.lua.org/spe.html

How can I best deploy a web application written in C?

Say I have a fancy new algorithm written in C,
int addone(int a) {
    return a + 1;
}
And I want to deploy as a web application, for example at
http://example.com/addone?a=5
which responds with,
Content-Type: text/plain
6
What is the best way to host something like this? I have an existing setup using Python mod_wsgi on Apache2, and for testing I've just built a binary from C and call it as a subprocess using Python's os.popen2.
I want this to be very fast and not waste overhead (i.e., I don't need the other Python stuff at all). I can dedicate the whole server to it, re-compile anything needed, etc.
I'm thinking about looking into Apache C modules. Is that useful? Or I may build SWIG wrappers to call directly from Python, but again that seems wasteful if I'm not using Python at all. Any tips?
The easiest way should be to write this program as a CGI app (http://en.wikipedia.org/wiki/.cgi). It would run with any webserver that supports the Common Gateway Interface.
The output format needs to follow the CGI rules.
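As a rough illustration of those rules, a minimal CGI version of the addone example might look like the sketch below (it assumes the query string is exactly of the form a=N and does no further error handling):

/* addone as a CGI program: the web server passes the query string in the
   QUERY_STRING environment variable; the response is headers, a blank line,
   then the body. */
#include <stdio.h>
#include <stdlib.h>

static int addone(int a) { return a + 1; }

int main(void)
{
    const char *qs = getenv("QUERY_STRING");
    int a = 0;

    if (qs != NULL)
        sscanf(qs, "a=%d", &a);

    printf("Content-Type: text/plain\r\n\r\n");
    printf("%d\n", addone(a));
    return 0;
}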
If you want to take full advantage of the web server capabilities then you can write an Apache module in C. That needs a bit more preparation but gives you full control.
Maybe this tiny dynamic web server in C, designed to be used from C, can help you... it should be easy to use and self-contained.
It is probably the fastest solution you can adopt, according to the benchmarks shown on its homepage!
This article from yesterday has a good discussion on why not to use C as a web framework. I think an easy solution for you would be to use ctypes, it's certainly faster than starting a subprocess. You should make sure that your method is thread safe and that you check your input argument.
from ctypes import CDLL, c_int
libcompute = CDLL("libcompute.so")  # shared library built from the C source
libcompute.addone.restype = c_int
print(libcompute.addone(5))
I'm not convinced that your existing general approach isn't the best one. I'm not saying that Apache/Python is necessarily the correct one, but there is something compelling about separating the concerns in your architecture so that it is composed of highly focused elements that are specialists in their functions within the overall system.
Having your C-based algorithm server decoupled from the HTTP server may give you access to things like HTTP scalability and caching facilities that might otherwise have to be engineered into (or reinvented within) your algorithm component if things are too tightly coupled.
I don't think performance concerns in and of themselves are always the best or only reasons when designing an architecture. For example, a YAWS deployment with a C-based driver could be a very performant option.
I have just set up a web service using libmicrohttpd and have had amazing results. On a quad core I've been handling 20,400 requests a second with the CPU running at only 58%. This is probably going to be deployed on a server with 8 cores, so I'm expecting much better results. A very simple C service will be even faster!
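For readers who want to try the same route, here is a minimal sketch of a libmicrohttpd service for the addone example. It is only an outline: the exact flag names and the handler's return type (int in older releases, enum MHD_Result in newer ones) vary between library versions.

/* Minimal libmicrohttpd service: GET /addone?a=5 returns "6".
   Newer libmicrohttpd versions declare the handler as returning enum MHD_Result. */
#include <microhttpd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int addone(int a) { return a + 1; }

static int handle_request(void *cls, struct MHD_Connection *conn,
                          const char *url, const char *method,
                          const char *version, const char *upload_data,
                          size_t *upload_data_size, void **con_cls)
{
    const char *arg = MHD_lookup_connection_value(conn, MHD_GET_ARGUMENT_KIND, "a");
    char body[32];
    struct MHD_Response *resp;
    int ret;

    snprintf(body, sizeof body, "%d\n", addone(arg ? atoi(arg) : 0));
    resp = MHD_create_response_from_buffer(strlen(body), body, MHD_RESPMEM_MUST_COPY);
    MHD_add_response_header(resp, "Content-Type", "text/plain");
    ret = MHD_queue_response(conn, MHD_HTTP_OK, resp);
    MHD_destroy_response(resp);
    return ret;
}

int main(void)
{
    struct MHD_Daemon *d = MHD_start_daemon(MHD_USE_SELECT_INTERNALLY, 8080,
                                            NULL, NULL, &handle_request, NULL,
                                            MHD_OPTION_END);
    if (d == NULL)
        return 1;
    getchar();                              /* run until a key is pressed */
    MHD_stop_daemon(d);
    return 0;
}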
I have tried G-WAN; it is very good, but it's closed source and doesn't play well with virtual environments. I will give Gil kudos for being good at supporting it here, though. We just had a few issues and found libmicrohttpd works better for our needs.
If you go this route, you may need to update your OpenSSL if you're using CentOS; from Axivo:
rpm -ivh --nosignature http://rpm.axivo.com/redhat/axivo-release-6-1.noarch.rpm
yum --disablerepo=* --enablerepo=axivo update openssl-devel
You can try Duda I/O, it only requires a Linux host: http://duda.io

practicing C over the summer

I'm a student with a bit of experience in Java and C++ (one semester each)
Currently, I'm going through K&R and working on the exercises in the book. However, I was thinking of what I could work on over the summer since I'm almost done with K&R and I will have a lot of free time soon.
I really like building command line applications, so I was thinking of getting involved with the coreutils project somehow. My question is: is it too early for me to be messing with coreutils? Should I be working on something a bit simpler, perhaps? I'm a bit new to the Linux/open source world, if that matters, but I'm really enjoying it.
I've done some Project Euler problems and I don't really like them that much.
Download the Nethack sources. Play it. If you ever get past that stage, then add some new and interesting monsters, weapons, traps and other objects.
https://openhatch.org/search/
http://sourceforge.net/people/
http://www.fsf.org/campaigns/priority-projects/
http://savannah.gnu.org/people/?type_id=1
You can do various other things with C:
Use various data structures like linked lists, trees, hashes, and heaps (see the sketch after this list)
Try coding various algorithm implementations
Play with various string manipulation routines
Work with basic system and socket programming
The list goes on...
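As a concrete starting point for the first item, a tiny singly linked list exercise might look like the sketch below (purely illustrative; no error handling beyond a malloc check):

/* A tiny singly linked list: push values, print them, free them. */
#include <stdio.h>
#include <stdlib.h>

struct node {
    int value;
    struct node *next;
};

/* Push a value onto the front of the list; returns the new head. */
static struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL) { perror("malloc"); exit(1); }
    n->value = value;
    n->next = head;
    return n;
}

int main(void)
{
    struct node *head = NULL;
    for (int i = 1; i <= 5; i++)
        head = push(head, i * i);

    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->value);
    printf("\n");

    while (head != NULL) {                 /* free the whole list */
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}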
There are (I'd argue) probably only a couple of places where C is still used extensively in preference to C++, so if you want to make a difference in the Open Source world I'd recommend thinking about working in one of the following areas:
Device drivers, and indeed most aspects of OS kernels.
Interfaces to scripting languages (Python, Perl, Lua etc.)
In both cases, C++ has no significant advantage, or some significant disadvantages, over C.
I agree absolutely with Mark's comment above that it is difficult to join a mature project. I have recently been trying to get a Haskell binding put together for SWIG, and it has proven to be pretty tricky - and I say that with over 20 years of C and about 15 of C++ behind me!
The problem is that mature codebases usually are not so clean, and this means that it can be difficult to understand how things hang together.
If you have the cash, working on an ARM device such as a Pandora or one of the other small embedded devices you can pick up is a lot of fun and will teach you quite a bit. In many cases what you are looking for is a device with a 'community' Linux port, and for many of these there are some quite basic components that are not yet working.
Good luck, and have fun!
I agree with Jeremy O'Donoghue's answer (since I am also a mobile device developer). Go install a 32-bit Linux distro (if you don't already have one) and start hacking on the Android source code.
There are many mailing lists dedicated to Android, and you might try discussing some ideas there.
There is also Google Summer of Code, if you can make it.

What type of programs are best written in C [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
Joel and company expound on the virtues of learning C and how the best way to learn a language is to actually write programs that use it. To that end, which types of applications are most suitable to the C programming language?
Edit:
I am looking for what C is good at. This likely does not coincide with the best way of learning C.
Code where you need absolute control over memory management. Code where you need to be utterly in control of speed versus memory trade-offs. Very low-level file manipulation (such as access to the raw devices).
Examples include OS kernel, and embedded apps.
In the late 1980s, I was head of the maintenance team on a C system that was more than a million lines of code. It did database access (Oracle), X Windows graphics, interprocess communications, all sorts of good stuff. It ran on VMS and several varieties of Unix. But if I were to recreate that system today, I sure wouldn't use C, I'd use Java. Others would probably use C#.
Low level functions such as OS kernel and drivers. For those, C is unbeatable.
You can use C to write anything. It is a very general-purpose language. After doing so for a little while you will understand why there are other "higher level" languages available.
"Learn C", by all means, but don't stick with it for too long. Other languages are far more productive.
I think the gist of the people who say you need to learn C is so that you understand the relationship between high-level code and the machine internals, and what exactly happens with bits, bytes, program execution, etc.
Learn that, and then move on.
Those 100 lines of Python code that were accounting for 80% of your execution time.
Small apps that don't have a UI, especially when you're trying to learn.
Edit: After thinking a little more on this, I'd add the following: if you already know a higher-level language and you're trying to learn more about C, a good route may be not to create a whole new C app, but instead to create a C DLL and add functions to it that you can call from the higher-level language. In this way you can replace simple functions that your high-level language already has, to ensure that your C function does what it should (this gives you pre-built testing), code mostly in what you're familiar with, use the new language on a problem you're already familiar with, and learn about interop along the way.
Anything where you think of using assembly.
Number crunching (for example, libraries to be used at a higher level from some other language like Python).
Embedded systems programming.
A lot of people are saying OS kernel and device drivers which are, of course, good applications for C. But C is also useful for writing any performance critical applications that need to use every bit of performance the hardware is capable of.
I'm thinking of applications like database management systems (MySQL, Oracle, SQL Server), web servers (Apache, IIS), or even web browsers (look at the way Chrome was written).
You can do many optimizations in C that are simply impossible in languages that run in virtual machines like Java or .NET. For instance, databases and servers support many simultaneous users and need to scale very well. A database may need to share data structures between multiple users (threads/processes), but do so in a way that uses CPU caches efficiently. In C, you can use an operating system call to determine the size of the cache line and then align a data structure to it, so that the line does not "ping pong" between caches when multiple threads access adjacent but unrelated data (so-called "false sharing"). This is one example; there are many others.
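To make the cache-line point concrete, one common trick is simply to pad per-thread data out to a cache-line boundary. A minimal sketch, assuming a 64-byte line (a real program might query the size at run time, e.g. with sysconf(_SC_LEVEL1_DCACHE_LINESIZE) on Linux):

/* Pad per-thread counters so that updates from different threads do not
   invalidate each other's cache lines (avoiding false sharing). */
#include <stdint.h>

#define CACHE_LINE 64   /* assumed line size for the example */

struct per_thread_counter {
    uint64_t count;
    char pad[CACHE_LINE - sizeof(uint64_t)];  /* keep each counter on its own line */
};

static struct per_thread_counter counters[8];  /* one slot per thread */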
A bootloader. Some assembly is also required, which is actually very nice.
Where you feel the need for 100% control over your program. This is often the case in lower-layer OS stuff like device drivers, or real embedded devices based on MCUs, etc. (all this and more has already been mentioned above).
But please note that C is a mature language that has been around for many years and will be around for many more; it has many really good debugging tools and still a huge number of developers who use it. (It has probably lost ground to more trendy languages, but it is still huge.)
All its strengths and weaknesses are well known, and the language will probably not change any more, so there is not much room for surprises... This also means that it would probably be a good choice if you have an application with a long expected life cycle.
/Johan
Anything where you need a minimum of "magic" and need the computer to do exactly what you tell it to, no more and no less. Anything where you don't trust the "magic" of garbage collection to handle your memory because it might not be as efficient as what you can hand-code. Anything where you don't trust the "magic" of built-in arrays, strings, etc. not to waste too much space. Anything where you want to be able to reason about exactly what ASM instructions the compiler will emit for a given piece of code.
In other words, not too much in the real world. Most things would benefit more from higher level abstraction than from this kind of control. However, OS code, device drivers, and a few things that have to be near optimal in both space and speed might make sense to write in C. Higher level languages can do pretty well competing with C on speed, but not necessarily on space.
Embedded stuff, where memory usage and CPU speed matter.
The interrupt handler part of an OS (and maybe two or three more functions in it).
Even if some of you will now start to bash heavily on me:
I don't think that any decent app should be written in C - it is way too error prone.
(And yes, I do know what I am talking about, having written an awful lot of code in C myself: OSes, compilers, VMs, network protocols, RT-control stuff, etc.)
Using a high-level language makes you so much more productive. Speed is typically gained by keeping the 10-90 rule in mind: 90% of CPU time is spent in 10% of your code (which you should optimize first).
Also, using the right algorithm might give more performance than optimizing bits in C. And doing things right in C is so much more trouble.
PS: I really do mean what I wrote in the second sentence above; you can write a complete high-performance OS in a high-level language like Lisp or Smalltalk, with only a few minor parts in C. Think how the 70's Lisp machines would fly on today's hardware...
Garbage collectors!
Also, simple programs whose primary job is to make operating-system calls. For example, I need to write a short C program called timeout that:
Takes a command line as argument, with a number of seconds for that command to run
Forks two child processes, one to run the command and one to sleep for N seconds
When the first of the child processes exits, kills the other, then exits
The effect will be to run a command with a limit on wall-clock time.
I and others on this forum have tried several different solutions using shells and/or perl. All are convoluted and none quite do the right thing. In C the solution will be easy, because all the OS facilities are right where you can get at them.
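For illustration, a bare-bones sketch of that timeout idea in C might look like the following; error handling is deliberately minimal and details such as forwarding the command's exit status are left out:

/* timeout: run a command with a wall-clock limit.
   Usage: timeout seconds command [args...] */
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    pid_t worker, timer, first;

    if (argc < 3) {
        fprintf(stderr, "usage: %s seconds command [args...]\n", argv[0]);
        return 2;
    }

    worker = fork();
    if (worker == 0) {                    /* child 1: run the command */
        execvp(argv[2], &argv[2]);
        perror("execvp");
        _exit(127);
    }

    timer = fork();
    if (timer == 0) {                     /* child 2: sleep for N seconds */
        sleep((unsigned)atoi(argv[1]));
        _exit(0);
    }

    first = wait(NULL);                   /* whichever child exits first... */
    kill(first == worker ? timer : worker, SIGKILL);  /* ...kill the other */
    wait(NULL);                           /* reap the killed child */
    return 0;
}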
A few kinds that I can think of:
Systems programming that directly uses Unix/Linux or Win32 system calls
Performance-critical code that doesn't have much string manipulation in it (e.g., number crunching)
Embedded programming or other applications that are severely resource-constrained
Basically, C gives you portable, efficient access to the native capabilities of the machine; everything else is your responsibility. In particular, string manipulation in C is tedious, error-prone, and generally nasty; the most effective way to do extensive string operations with C may be to use it to implement a language that handles strings better...
Examples are: embedded apps, kernel code, drivers, raw sockets.
However if you find yourself more productive in C then go ahead and build whatever you wish. Use the language as a tool to get your problem solved.
A C compiler.
Research in maths and physics. There are probably two alternatives, C and C++, but features of the latter such as encapsulation and inheritance are not very useful there. One could prefer to use C++ "as a better C" or just stay with C.
Most people are suggesting systems-programming-related things like OS kernels, device drivers, etc. Those are difficult and time consuming. Maybe the most fun thing to do with C is console programming. Have you heard of the HAM SDK? It is a complete software development kit for the Nintendo GBA, and making games for it is fun. There is also the cc65 compiler, which supports NES programming (although not completely). You can also make good emulators. Trust me, C is pretty helpful. I was originally a Python fan and hated C because it was complex, but after you get used to it, you can do anything with C. Now I use CPython to embed Python in my C programs (if needed) and code mostly in C.
C is also great for portability; there is a C compiler for practically every OS and almost every console and mobile device. I've even seen one that supports some calculators!
Well, if you want to learn C and have some fun at the same time, might I suggest obtaining NXC and a Lego Mindstorms set? NXC is a C compiler for the Lego Mindstorms.
Another advantage of this approach is that you can compare the effort to program the Mindstorms "brick" with C and then try LEJOS and see how it goes with Java.
All great fun.
Implicit in your question is the assumption that a 'high-level' language like Python or Perl (or Java or ...) is fast enough, small enough, ... enough for most applications. This is of course true for most apps and some choice X of language. Given that, your language of choice almost certainly has a foreign function interface (FFI). Since you're looking to learn C, create a module for that FFI in C.
For example, let's assume that your tool of choice is Python. Reimplement a subset of NumPy in C. Since C is a pretty fast language and has, in C99, a clear numeric library interface, you'll get the opportunity to experience the power of C in an appropriate setting.
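As a sketch of what such an FFI module looks like on the C side, here is a minimal CPython (Python 3) extension; the module and function names ("fastmath", "add_one") are made up for illustration:

/* A one-function CPython extension module. */
#include <Python.h>

static PyObject *add_one(PyObject *self, PyObject *args)
{
    long a;
    if (!PyArg_ParseTuple(args, "l", &a))   /* parse one Python int */
        return NULL;
    return PyLong_FromLong(a + 1);
}

static PyMethodDef FastmathMethods[] = {
    {"add_one", add_one, METH_VARARGS, "Add one to an integer."},
    {NULL, NULL, 0, NULL}
};

static struct PyModuleDef fastmathmodule = {
    PyModuleDef_HEAD_INIT, "fastmath", NULL, -1, FastmathMethods
};

PyMODINIT_FUNC PyInit_fastmath(void)
{
    return PyModule_Create(&fastmathmodule);
}

Built as a shared library (for example via a small setuptools setup script), it can be imported from Python and timed against the pure-Python equivalent.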
ISAPI filters for Internet Information Server.
Before actually writing C code, I would suggest first reading good C code.
Choose a subject you want to concentrate on; basically any application can be written in C, but I assume a GUI application will not be your first choice. Then find a few open source projects to look into.
Not every open source project is the best code to look at. I assume that after you select a subject there is room for another question: ask for the best open source project in that field.
Play with it, understand how it works, modify some functionality...
The best way to learn is to learn from somebody good.
Photoshop plugin filters. Lots and lots of interesting graphical manipulation you can do with those and you'll need pure C to do it in.
For instance, that's a gateway to fractal generation, Fourier transforms, fuzzy algorithms, etc. Really anything that can manipulate image/color data and give interesting results.
Don't treat C as a beefed-up assembler. I've done some serious apps in it when it was the natural language (e.g., the target machine was a Solaris box with X11).
Write something with some meat on it. Write a client-server chess program, where the AI is on a server and the UI displays in X11; once you've done that you will really know C.
I wonder why nobody stated the obvious:
Web applications.
Any place where the underlying libraries are entirely in C is a good candidate for staying in C - openGL, Lua extensions, PHP extensions, old-school windows.h, etc.
I prefer C for anything like parsing or code generation - anything that doesn't need a lot of data structure (read: OOP). Its library footprint is very light, because class libraries are non-existent. I realize I'm unusual in this, but just as lots of things are "considered harmful", I try to have/use as little data structure as possible.
Following on from what someone else said: C seems a good language in which to implement the language in which you write the rest of your software.
And (mutatis mutandis) the virtual machine which runs the rest of your software.
I'd say that with the minuscule runtime environment and its self-contained nature, you might start by creating some CLI utilities such as grep or tail (or half the commands in Unix). Anything that uses only STDOUT, STDIN and file manipulation is a good candidate.
This isn't exactly answering your question because I wouldn't actually CHOOSE to use C in such an app, but I hope it's answering the question you meant to ask--"what would be a good type of app to use learn C on?"
C isn't actually that bad a language - it's pretty easy to understand your code at an assembly language level, which is quite useful, and the language constructs are few, leaving a very sparse language.
To answer your actual question, the very best use of C is the one for which it was created: porting itself (and UNIX) to other CPU architectures. This explains the lack of language features very well; it also explains the existence of pointers, which are really just a construct to make the compiler work less - any code created with pointers could be created without them (and just as well optimized), but it becomes much harder to write a compiler that does so.
Digital signal processing; all Pure Data extensions are written in C. This can be done in other languages also, but it has had good success in C.

Resources