Is Linked List still relevant? [closed] - c

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 9 years ago.
I came across this article: Should you ever use Linked List. It argues that, given the technological advances in available memory and RAM structures, using arrays is better than using linked lists.
There's also an old question When to use a linked list over an array/array list?
Do the arguments in the article really hold? Have linked lists become obsolete, or are there scenarios where using a linked list would still be better than an array, assuming the arguments are true?
(Explanations for any point, with examples, would be helpful.)

Nonsense. O(n) will never beat constant time. Any use of lists that needs to perform well for insertions at saved iterators will use linked lists. They're a fundamental structure and won't go away.
I'd spin the argument the other way: linked lists are more acceptable these days. On a 386 you had to be careful with performance, but now we write programs even in Python and put up with their speed. Given the amount of code written in languages that run on a VM (or are interpreted), I think it's fair to say a lot of people aren't at the level of worrying about cache misses in their choice of data structure.
We have fast CPUs now, so we often don't need to worry about the few extra instructions that might be needed in implementing our data structures. We can look at the uses we have, work out what requirements we have, and pick our structures based on their asymptotic performance. This also makes the code more maintainable: you won't have to change your code if you find out in six months' time that, for n=100, a list is quicker after all. Profiling is hard work, so in our CPU-guzzling days we should be very comfortable picking the structure with the algorithmic properties we want rather than guessing at vector.

Related

write my own memory manager [closed]

Closed 10 years ago.
I would like to allocate some huge block of dynamic memory and then write my own memory manager for it, i.e. as and when my code needs memory, I'd allocate from this pool. I want the algorithm to take care of internal and external fragmentation. Which is the most efficient algorithm for this?
For these criteria I'd go with Doug Lea's malloc (http://g.oswego.edu/dl/html/malloc.html), which maintains collections of blocks of store for each of a number of different sizes - it's quick to find the size you need, and reusing blocks of the same size reduces fragmentation. Note (http://entland.homelinux.com/blog/2008/08/19/practical-efficient-memory-management/) that this is NOT tuned for multi-threading.
If I were writing one myself I'd go for buddy memory allocation (http://en.wikipedia.org/wiki/Buddy_memory_allocation) because it's fast and not commonly used in user space (not commonly used because it restricts the possible block sizes, leading to internal fragmentation). In fact, I did, some time ago - http://www.mcdowella.demon.co.uk/buddy.html
This question is ambiguous because the term "most efficient" is not clear. You don't say in what terms it should be most efficient.
As an example: there is a strategy called first fit, which may be faster than best fit but can lead to more external fragmentation of the heap (a really bad thing). Best fit, on the other hand, somewhat reduces external fragmentation but still suffers from it, and finding a suitable chunk of free memory takes more time. There is also a strategy called buddy allocation, where you don't suffer from external fragmentation but do suffer from internal fragmentation. At least finding a free block is usually fast there.
You see, choosing an algorithm really depends on your requirements. Should the allocation be fast, or the fragmentation low? What's the allocation behavior? Are small, uneven chunks allocated and freed frequently, or only big chunks? And there are even more factors playing a role here.
Maybe you wanted an answer like this. If not, I recommend you clarify your requirements.

Vector implementation in C [closed]

Closed 10 years ago.
I tried googling it, but found no solid answer. What dynamic array implementations for C are available and still maintained? What are the pros and cons of each of them, and which is the best one (speed/footprint ratio)? Just asking, so that I don't have to reinvent the wheel.
GArray from GLib does what you want.
If you are looking for something like NSMutableArray from Objective-C, or something like ArrayList from Java, you won't find anything (in standard C, at least).
You can create your own dynamic array implementation in C, though. It will take you a few lines of code and is not that hard to implement.
All you need to keep in mind is the time-vs-memory trade-off. You can write an implementation that allocates a brand-new, bigger array and copies the elements over every time you push an element, or you can grow the existing buffer with realloc. I don't see big advantages in either one, except that realloc is a C library function with a low-level implementation, meaning it is probably faster, and it is also quicker to implement, so I would go with the realloc approach.
You can even build an API on top of it that provides sorting and clear-all methods.
Now it's up to you.
Hope this helps.

Why are C names shortened? [closed]

Closed 11 years ago.
Why is there a function called strcat and not one called stringConcatenation, stringConcat, string_concat, or something like that? Why is there a clrscr function and not clearScreen or clear_screen?
Does it have something to do with source code size in the old days, when every byte was worth gold on undersized floppy disks? Or is it fueled by programmers' inherent laziness? Is it a convention?
This is partly historical.
In very old C compilers, there was no guarantee that more than the first 8 characters of an identifier name would be used to determine uniqueness. This meant that, originally, all identifiers had to be eight or fewer characters, so function names were all made short.
For details, see Identifiers in the C Book.
When C and its associated tools were first being developed, input devices were not nearly as easy to use as modern keyboards. I've never actually used an ASR-33 Teletype, but as I understand it typing stringConcatenation on such a beast was significantly more difficult than typing strcat (and without autocompletion, you would have had to type the entire name with no typos). It took a substantial amount of pressure to activate each key. Output was also painfully slow by modern standards.
This also explains why common Unix command names are so terse (mv and cp rather than move or rename and copy).
And it's probably also why old linkers only supported such short names. Programmers would generally create short names in the first place, so there was little point in using scarce memory to allow for longer ones.
In addition to all this, there's a case to be made that shorter names are just as good as longer ones. Names of library functions, whether strcat or stringConcatenation (or is it stringConcatenate? String_Concatenate? stringCatenation?) are essentially arbitrary. Ease of typing isn't as important as it once was, but it's still a consideration.

Programming challenges related to chemistry [closed]

Closed 10 years ago.
I'm looking for interesting programming puzzles, problems or challenges suitable for a class of chemistry majors learning C as their first programming language. Do you have any recommendations?
Project Euler is pretty good. They have some simple challenges that may be suitable.
These won't really do much to teach them C, though. Text books are much better for that.
Additionally, you could have them write a program to balance chemical reaction equations. That would be good for I/O and simple math.
Given a text file with a whole bunch of pressure/temperature/mole-count values as input, and using the ideal gas law equation, compute the volume for each gas and write the entire set of data (P, V, T and n) into a nicely formatted output file.
Should cover file I/O, basic function usage, and string formatting. Has the potential to cover arrays and structs as well.
David, tasks that come to my mind would be:
calculation of the number/topology of isomers of hydrocarbons (cyclic and acyclic, saturated and unsaturated)
numerical integration of optical spectra (absorption and fluorescence)
kinetic models
deconvolution of experimental data
modelling of thermodynamic cycles and their efficiencies
This seems like a really vague question, but if I were a chemistry student learning C, I would like to write programs that let me define molecules and compounds starting from simple elements.
I don't really know how to explain it, but maybe define a struct for a nitrogen atom and one for an oxygen atom, and have a way to bind them to produce water, or maybe mix two different substances to see what comes out, programmatically.
You can try pex4fun. It allows you to learn algorithms in C# (which is close enough to C). pex4fun provides ready-to-use classes and also engaging coding duels that turn learning into a game.

Interview questions on CUDA Programming? [closed]

Closed 11 years ago.
I have an interview coming up in a week's time for an entry level position that involves programming in CUDA (hopefully with C).
I was wondering if anybody can suggest some interview questions that I can expect during the interview.
I have gone through the official programming guide, but I'm not all that confident right now.
Thanks.
Some questions I think you should prepare for are:
How many different kinds of memory are there in a GPU?
What do coalesced and uncoalesced accesses mean?
Can you implement a matrix transpose kernel?
What is a warp?
How many warps can run simultaneously inside a multiprocessor?
What is the difference between a block and a thread?
Can threads communicate with each other? And blocks?
Can you describe how a cache works?
What is the difference between shared memory and registers?
Which algorithms perform better on the GPU: data-bound or CPU-bound?
Which steps would you take to port an application to CUDA?
What is a barrier?
What is a stream?
Can you describe what the occupancy of a kernel means?
What does structure of arrays vs. array of structures mean?
"You have N vectors of length M (N>>M). Tell me how you would go about designing a kernel to evaluate the distance matrix. Pay special attention to the way the problem is sub-divided and to the way the thread co-operation can be used to improve occupancy.
How would your answer to this question change if M>>N?"
The idea here is not to get you writing code, but to get you thinking out loud. This shows that you really know how to use GPGPU technology and are not merely regurgitating the user guide.
If it's a scientific role then expect questions on floating point and numerical accuracy, in particular you should look at the reduction sample in the NVIDIA SDK since that illustrates a whole load of the points in Fabrizio's post too.