I was solving a practice problem on a site which states that
The purpose of this problem is to verify whether the method you are
using to read input data is sufficiently fast to handle problems
branded with the enormous Input/Output warning. You are expected to be
able to process at least 2.5MB of input data per second at runtime.
Also, how do I optimize input/output routines other than printf and scanf?
It is operating-system specific (the C standard only knows about <stdio.h>). On Linux, consider using low-level syscalls for efficiency, such as open(2), mmap(2), read(2), pread(2), and write(2). You might also want to use readahead(2). Don't forget to do I/O in rather large blocks (e.g. 128 KiB), page-aligned if possible. Read the Advanced Linux Programming book.
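For instance, here is a minimal sketch of block-wise reading with the raw read(2) syscall; the file name is a placeholder, and the 128 KiB block size just follows the suggestion above:

#include <fcntl.h>
#include <unistd.h>

#define BLOCK_SIZE (128 * 1024)   // 128 KiB blocks, as suggested above

int main(void)
{
    static char buf[BLOCK_SIZE];
    int fd = open("input.txt", O_RDONLY);   // "input.txt" is a placeholder
    if (fd < 0)
        return 1;

    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        // process n bytes of buf here
    }
    close(fd);
    return 0;
}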
If restricted to standard C99 functions, use fread(3) on rather big chunks. Consider also increasing the stream's internal buffer with setvbuf(3).
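A minimal sketch using only standard C, assuming the data arrives on stdin (the 128 KiB buffer size is an arbitrary choice):

#include <stdio.h>

#define BUF_SIZE (128 * 1024)

int main(void)
{
    static char iobuf[BUF_SIZE];   // enlarged stdio buffer
    static char data[BUF_SIZE];

    // setvbuf must be called before any other operation on the stream
    setvbuf(stdin, iobuf, _IOFBF, sizeof iobuf);

    size_t n;
    while ((n = fread(data, 1, sizeof data, stdin)) > 0) {
        // process n bytes of data here
    }
    return 0;
}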
And 2.5 MB/sec is not very impressive. The bottleneck is probably the hardware, but you should be able to get perhaps 20 or 50 MB/sec on standard desktop hardware. Using an SSD would help a lot.
I read that a running C program can be referred to as an "instance".
Is this really correct? The word "instance" is usually used in OOP.
C also has "objects", doesn't it? But they're not the same as in OOP.
An "object" in C is just something in memory; a union holding some value could be called an object, couldn't it?
An "object" in C is just something in memory, but that's also true of all computer languages.
An object in real life is a thing that physically exists. Being in memory is the closest something in a program can come to physical existence, so we apply the same term.
An instance in real life is a specific example of a generic concept. The term has similar generality in computers. When you tell the computer to run a program, it generates an instance of that program, among many potential instances of running that program. Again, nothing here is specific to C; this terminology usually comes up in operating systems (which manage the running of programs, and define what a "program" is).
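To make the terminology concrete, here is a small example of my own (the C standard defines an object as, roughly, a region of data storage whose contents can represent values):

#include <stdio.h>

// Any variable of this union type is an "object" in C's sense:
// a region of storage, with no classes or methods involved.
union value {
    int   i;
    float f;
};

int main(void)
{
    union value v;   // v is an object: a named region of memory
    v.i = 42;        // its contents currently represent an int
    printf("%d\n", v.i);
    return 0;
}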
How does GCC ensure that the stack doesn't overflow?
Shouldn't it check that the size is less than the maximum the stack can hold and prompt the user accordingly, especially when the stack size is implicitly defined? Wouldn't this be a great programming paradigm?
It doesn't. If you recurse deep enough, you will overflow, and there's nothing the compiler can do about it.
edit: I should point out that at the time I answered this question, the question simply read:
"How does GCC ensure that The Stack doesn't overflow?"
Linux uses a "guard area". It puts one or more access-protect pages at the end of the stack for each thread.
If the program accesses the guard area, the OS handles the fault. If the thread is already using its max permitted stack then it terminates something (the thread or the whole process, I don't remember which). Otherwise it tries to map memory to the addresses occupied by the guard area for use as stack, and protects a new area beyond the end of the newly-enlarged stack.
Prompting the user isn't really suitable for an OS like Linux, in which many processes are not monitored by a user, and for that matter there may not be any logged-in user at the time the problem arises. So your process just fails. Since it's an all-purpose compiler, gcc doesn't attempt runtime user interaction either.
Other OSes and platforms may or may not have stack guard pages (Windows does). About all gcc really needs to do is to ensure that if the stack is going to be exceeded, it doesn't "miss" the guard page by jumping a long way forward.
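You can inspect the limit that the guard mechanism ultimately enforces with the POSIX getrlimit(2) call; a minimal sketch:

#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        if (rl.rlim_cur == RLIM_INFINITY)
            printf("stack size: unlimited\n");
        else
            printf("stack soft limit: %lu bytes\n", (unsigned long)rl.rlim_cur);
    }
    return 0;
}

Exceeding that limit on Linux typically shows up as a SIGSEGV when the guard area is hit.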
I am currently trying to write a program that will be able to create a stable TCP connection and have complete control over the ISNs. I've been writing it in C, and I am at a point where my very limited knowledge has reached its limits, so I was wondering if there's a better way of doing it.
What I tried was building the headers manually, using raw sockets to send and receive the packets without the kernel interfering, which is a challenge.
So regardless of language, what do you reckon is the most efficient and easiest way of manipulating the ISN?
Well, the ISN is generated in a random way to prevent ISN prediction attacks (http://www.thegeekstuff.com/2012/01/tcp-sequence-number-attacks/).
The Linux network stack uses the function tcp_v4_init_sequence to generate the ISN (http://lxr.free-electrons.com/source/net/ipv4/tcp_ipv4.c#L101); this function calls the secure_tcp_sequence_number function (http://lxr.free-electrons.com/source/net/core/secure_seq.c#L106) to do the job. Take a look at that function and try to clone it so you can use it with your code from userspace.
If you have enough time, you can look at Section 3 of RFC 6528 (http://www.rfc-editor.org/rfc/rfc6528.txt), which describes an algorithm for generating the ISN:
ISN = M + F(localip, localport, remoteip, remoteport, secretkey)
And try to implement it, if you want :)
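As a rough userspace sketch of that formula: the 4-microsecond timer M follows the RFC, but the hash used for F below (FNV-1a) is only a stand-in to keep the example self-contained; the RFC itself suggests a keyed cryptographic function such as MD5 over the connection tuple and secret key.

#include <stdint.h>
#include <string.h>
#include <sys/time.h>

// Stand-in for F(): RFC 6528 suggests a keyed cryptographic hash (e.g. MD5);
// FNV-1a is used here only to keep the sketch self-contained.
static uint32_t fnv1a(const void *data, size_t len)
{
    const unsigned char *p = data;
    uint32_t h = 2166136261u;
    while (len--) { h ^= *p++; h *= 16777619u; }
    return h;
}

uint32_t generate_isn(uint32_t localip, uint16_t localport,
                      uint32_t remoteip, uint16_t remoteport,
                      const unsigned char secretkey[16])
{
    // M: a timer ticking roughly every 4 microseconds (RFC 793 / RFC 6528).
    struct timeval tv;
    gettimeofday(&tv, NULL);
    uint32_t m = (uint32_t)(tv.tv_sec * 250000u + tv.tv_usec / 4);

    // F(localip, localport, remoteip, remoteport, secretkey)
    unsigned char buf[4 + 2 + 4 + 2 + 16];
    memcpy(buf,      &localip,    4);
    memcpy(buf + 4,  &localport,  2);
    memcpy(buf + 6,  &remoteip,   4);
    memcpy(buf + 10, &remoteport, 2);
    memcpy(buf + 12, secretkey,  16);

    return m + fnv1a(buf, sizeof buf);
}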
Does anybody see the difference between these two lines?
1) ret = write( fd_out, local_bugger, bytes_to_move);
2) nwritten = write (fd, buf + total_written, size - total_written);
Obviously, not the naming conventions.
Specifically, one is writing over the network 4x faster than the other.
Looking for logic, flags, etc.
THANKS
What are the values/types of all of those? Right now this question can't be answered... Does option 2) end up writing 4x as much data? What are the flag options on the open() calls for the two descriptors? etc...
Right now I'll guess that it's because Mars is ascendant in Jupiter and the moon is waxing gibbous, causing the Higgs bosons to mess with the quarks in your Ethernet cable.
There could be two things at play here:
1) The size of the chunks you're writing. Small chunks incur more overhead. But that's unlikely to cause a big difference unless you're writing less than 16 bytes or so.
2) The details of the file descriptor you're writing to. How much buffering does it have? Is it going through a filesystem (NFS or CIFS)? Is it even going out over the same network?
In short, as Marc B answered: not enough information.
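For what it's worth, line 2) looks like one step of a partial-write loop. A minimal sketch of that pattern (the name write_all is my own):

#include <errno.h>
#include <unistd.h>

// Keep calling write() until every byte is out: on sockets and pipes,
// write() may accept fewer bytes than requested.
ssize_t write_all(int fd, const char *buf, size_t size)
{
    size_t total_written = 0;
    while (total_written < size) {
        ssize_t nwritten = write(fd, buf + total_written, size - total_written);
        if (nwritten < 0) {
            if (errno == EINTR)
                continue;    // interrupted by a signal: retry
            return -1;       // real error
        }
        total_written += (size_t)nwritten;
    }
    return (ssize_t)total_written;
}

The EINTR retry matters on sockets and slow devices, where a signal can interrupt a blocked write.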
I have to make a multiplayer game and give the users (on different computers) an option to change their screen resolution to match their hardware capabilities, similar to Counter-Strike.
How can I implement this in C? How can I give the users sitting at different computers an option to change their screen resolution?
There is no standard method in the C language or standard library; this is entirely dependent on the graphics library you're writing the program with.
If you want a really simple way to do this, you can use xrandr and system():
#include <stdlib.h>
int main(void)
{
    system("xrandr > resolutions.tmp");  // direct output to 'resolutions.tmp'
    // retrieve possible resolutions from 'resolutions.tmp', then:
    system("xrandr -s resolution_id");   // select a certain resolution; 'resolution_id' is a placeholder
    return 0;
}
Edit: as you've mentioned you're using OpenGL on Ubuntu, you can follow some of the steps in the following article to change the resolution using library calls:
http://www.opengl.org/wiki/Programming_OpenGL_in_Linux:_Changing_the_Screen_Resolution
http://www.opengl.org/wiki/Programming_OpenGL_in_Linux:_Changing_the_Screen_Resolution worked for me. At the bottom there is a command to compile the example; use it, but add -std=c99. Keywords for Google (no offense, I would have appreciated them): opengl screen resolution
You will definitely want to use a library which handles the OS-specific details for you. This library would be responsible for finding out which combinations of screen resolution, colour depth and various buffers are available, and then you can choose one, or give the user the option to choose one.
For example, GLFW does this by way of its glfwGetVideoModes function.
The underlying code to do this is both platform-specific and ugly. You want to spend your time writing your game, not messing with it.
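For example, a minimal sketch with GLFW 3 that lists the available modes (assumes GLFW is installed; link with -lglfw):

#include <stdio.h>
#include <GLFW/glfw3.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    // Ask GLFW which video modes the primary monitor supports.
    int count = 0;
    GLFWmonitor *monitor = glfwGetPrimaryMonitor();
    const GLFWvidmode *modes = glfwGetVideoModes(monitor, &count);

    // Present these to the user and let them pick one.
    for (int i = 0; i < count; i++)
        printf("%d: %dx%d @ %d Hz\n",
               i, modes[i].width, modes[i].height, modes[i].refreshRate);

    glfwTerminate();
    return 0;
}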