Loading characters from an array in MIPS [closed]

When you load a character from an array in MIPS, does the data still exist at that position in the array? If not, how can you loop through the array and get each character within it? Thanks (:

Though your question seems silly, it is actually a very legitimate question!
From an outside perspective, modern memories have non-destructive readout.
This means that reading a memory location doesn't destroy the data held there.
So reading from an array won't destroy the item read.
As a curiosity, it is interesting to note that internally, depending on the memory technology, reading may be a destructive operation (common DRAM and the old magnetic-core memory are examples1), and that destructive memories exist and have existed.
MIPS could run in a system with destructive readout; that would be tricky, however, since MIPS is a von Neumann architecture: instructions are read from the same memory where the data is.
So reading an instruction would also destroy it.
Though one could arrange a mixed system where code runs from a non-destructive memory and data lives in a destructive one, such a configuration is so unusual that you can safely assume it will never happen.
1 Read-only memories like ROM and PROM, and non-volatile memories in general, have non-destructive readout (so do flash ROMs). In general, memories that store "charges" have destructive readout.
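
As for the looping half of the question: since reads are non-destructive, you just load each byte and advance a pointer. A minimal sketch in MARS/SPIM-style MIPS assembly (the label names and the string are placeholders):

        .data
str:    .asciiz "hello"            # example character array
        .text
main:   la    $t0, str             # $t0 = address of the first character
loop:   lb    $t1, 0($t0)          # load one byte; the array is unchanged
        beq   $t1, $zero, done     # stop at the terminating NUL
        # ... use the character in $t1 here ...
        addiu $t0, $t0, 1          # step to the next character
        j     loop
done:   li    $v0, 10              # exit (SPIM/MARS syscall 10)
        syscall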

Related

Finding unused memory in process memory [closed]

I'm looking for a reliable way to find unused memory in a C program's process, since I need to "inject" some data somewhere without it corrupting anything.
Whenever I find an area containing only zeros, that's a good sign. However, there are no guarantees: it can still crash. All the non-zero memory is most likely in use, so it cannot be overwritten reliably (most memory has some kind of data in it).
I understand that you can't really know (without having the application's source code, for instance), but are there any heuristics that make sense, such as choosing certain segments or memory that looks a certain way? Since the data can be 200 KB, it is rather large, and finding an appropriate address range can be difficult/tedious.
Allocating memory via OS functions doesn't work in this context.
Without deep knowledge of a remote process you cannot know that any memory that is actually allocated to that process is 'unused'.
Just finding writable memory (regardless of current contents) is asking to crash the process or worse.
Asking the OS to allocate more memory in the other process is the way to go: that way you know the memory is not used by the process, and the process won't receive that address through an allocation of its own.
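
As an illustration of that approach, here is a hedged Windows sketch using the documented OpenProcess/VirtualAllocEx/WriteProcessMemory calls; the target PID and payload size are placeholders, and error handling is minimal:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;                      /* placeholder target process id */
    static char payload[200 * 1024];       /* the ~200 KB of data to inject */

    HANDLE h = OpenProcess(PROCESS_VM_OPERATION | PROCESS_VM_WRITE,
                           FALSE, pid);
    if (h == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* Fresh memory allocated by the OS inside the target: guaranteed not
       to overlap anything the process allocated for itself. */
    void *remote = VirtualAllocEx(h, NULL, sizeof payload,
                                  MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    if (remote == NULL) {
        CloseHandle(h);
        return 1;
    }

    SIZE_T written = 0;
    WriteProcessMemory(h, remote, payload, sizeof payload, &written);
    printf("wrote %lu bytes at %p\n", (unsigned long)written, remote);

    CloseHandle(h);
    return 0;
}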

Will fseek back to the previous location be faster than seeking to a new location? [closed]

If I have C code like this:
off_t off = ftello(f);
fseeko(f, some_location, SEEK_SET);
// do some work
fseeko(f, off, SEEK_SET);
Is the second fseeko as slow as the first one? I had thought the file blocks are always cached, so the second one could be much faster.
In my profiling results on Linux, the second fseeko costs about the same. Is this expected?
In most implementations, the fseek call is almost free since all it does is set the position in the FILE object. The cost will be incurred when you actually read data. At that point, it is very likely that rereading an already read block will benefit from the buffer cache. But it is also quite possible that the OS is doing speculative read-ahead so that blocks following recently read blocks are also in the buffer cache (as could be the case with your second seek).
For writing, measuring times is even more complicated because the blocks written are not necessarily committed immediately to permanent storage; the write system call returns as soon as the data has been copied into the buffer cache.
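
To see where the cost actually lands, here is a hedged micro-benchmark sketch (Linux; the file name is a placeholder). It times the fseeko itself, which merely updates the position in the FILE object, separately from the fread that triggers the real I/O:

#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

static double now(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
    FILE *f = fopen("big.dat", "rb");   /* placeholder file */
    if (!f) { perror("fopen"); return 1; }

    char buf[4096];
    double t0 = now();
    fseeko(f, 0, SEEK_SET);             /* just updates the FILE position */
    double t1 = now();
    fread(buf, 1, sizeof buf, f);       /* this is where the I/O cost lands */
    double t2 = now();

    printf("fseeko: %.9f s, fread: %.9f s\n", t1 - t0, t2 - t1);
    fclose(f);
    return 0;
}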
Is the second fseeko as slow as the first one?
It can be.
What you say about caching holds, but only in cases where you deal with multiples of the filesystem block size.
I would suggest reading How is fseek() implemented in the filesystem?, since, as the reference says, "The fseeko() and ftello() functions are identical to fseek(3) and ftell(3) (see fseek(3)), respectively, except that the offset argument of fseeko() and the return value of ftello() are of type off_t instead of long."

How would I read the contents of a large file into the heap without memory errors [closed]

The question I am asking is extremely simple. Let's just say I wanted to read a large file (6 GB) without having the heap run out of memory. How would I do that? (What I am mainly asking is whether there is a method to read part of the file, clear the buffer, and read the next part of the file.)
Memory capacity and availability are platform- and operating-system-dependent.
Some operating systems allow for memory-mapping a file, in which case the operating system manages the reading of data into memory for you.
Reading without overflow is accomplished by block reading (e.g., fread in C or istream::read in C++). You tell the input function how much to read in each block, and the function returns the quantity actually read. The block size should be less than or equal to the memory allocated for the data. The next read starts at the next location in the file; perform the reads in a loop to consume all the data.
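
A minimal sketch of that loop in C, assuming a placeholder file name and a 1 MiB buffer:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    FILE *f = fopen("big.dat", "rb");   /* placeholder file name */
    if (!f) { perror("fopen"); return 1; }

    enum { CHUNK = 1 << 20 };           /* 1 MiB per read */
    char *buf = malloc(CHUNK);
    if (!buf) { fclose(f); return 1; }

    size_t n;
    while ((n = fread(buf, 1, CHUNK, f)) > 0) {
        /* process the n bytes in buf here; the next fread reuses the
           same buffer, so memory use stays constant */
    }

    free(buf);
    fclose(f);
    return 0;
}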
Also, verify there is a reason to hold all the data in memory at the same time. Most programs only hold a small portion of the data for a limited time.

Advantages/disadvantages of mapping a whole file vs. blocks when needed [closed]

What are the advantages/disadvantages of mapping a whole file once vs. mapping large blocks when needed in an algorithm?
Intuitively, I would say it makes the most sense just to map the whole file and then let the OS take care of reading/writing to disk when needed, instead of making a lot of system calls, since the OS is not actually reading the mapped file before it is accessed. At least on a 64-bit system, where the address space isn't an issue.
Some context:
This is for an external priority heap developed during a course on I/O algorithms. Our measurements show that it is slightly better to just map the whole underlying file instead of mapping blocks (nodes in a tree) as needed. However, our professor does not trust our measurements and says that he didn't expect that behaviour. Is there anything we are missing?
We are using mmap with PROT_READ | PROT_WRITE and MAP_SHARED.
Thanks,
Lasse
If you have the VM space, just map the whole file. As others have already said, this allows the OS the maximum flexibility to do read-aheads, or even if it doesn't it will bring in the data as required through page faults which are much more efficient than system calls as they're being made when the system is already in the right kernel context.
I'm not sure why your professor doesn't expect that behaviour, but it would be good to understand his rationale.
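
For reference, a minimal sketch of the whole-file approach being discussed, using the same PROT_READ | PROT_WRITE and MAP_SHARED flags as the question (the backing file name is a placeholder):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("heap.dat", O_RDWR);  /* placeholder backing file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { close(fd); return 1; }

    /* Map the entire file once; the OS pages data in on first access. */
    char *base = mmap(NULL, (size_t)st.st_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    base[0] ^= 1;                       /* touch a byte: a page fault, not a syscall */

    munmap(base, (size_t)st.st_size);
    close(fd);
    return 0;
}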

Garbage collection in a C compiled language [closed]

Let's say I have a garbage-collected language that is compiled to C, and through that to assembly. How does garbage collection work when the language is compiled down to C? Does it become fully deterministic? Or is the collector contained in the resulting program as another component that runs periodically and collects garbage? This is probably a very easy, if not silly, question, but I wanted some clarification.
Even though it's compiling to C, such an implementation typically links in a runtime library for the original language. That library contains the garbage collector for the higher-level language's data, and the data structures used to represent the original language's data in C include additional fields needed by the garbage collector.
Another technique they may use is conservative garbage collection.
One way to do something similar in a compiled language is what iOS does with ARC reference counting. It's technically not garbage collection, but it is similar: objects are freed deterministically as soon as their reference count drops to zero. Failing that, you would need to periodically search your program's memory for addresses pointing into the heap to decide whether it is safe to free each allocation.
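
For illustration, a hedged sketch of that reference-counting idea in plain C; the Obj type and the helper names are hypothetical, not part of any particular runtime:

#include <stdlib.h>

typedef struct {
    int refcount;
    /* ... payload fields ... */
} Obj;

static Obj *obj_new(void)
{
    Obj *o = calloc(1, sizeof *o);
    if (o) o->refcount = 1;             /* creator holds one reference */
    return o;
}

static void obj_retain(Obj *o)  { o->refcount++; }

static void obj_release(Obj *o)
{
    if (o && --o->refcount == 0)
        free(o);                        /* deterministic reclamation */
}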
Boehm GC exists; however, if you have an integer that happens to fall in the right range to look like a pointer to a dead object, entire object graphs can leak. http://hboehm.info/gc/ All in all, a poor choice.
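
For completeness, a minimal sketch of using the Boehm collector linked above (build with -lgc; the allocation pattern is illustrative only):

#include <gc.h>
#include <stdio.h>

int main(void)
{
    GC_INIT();                          /* set up the collector */

    for (int i = 0; i < 1000000; i++) {
        /* allocate and drop objects; the conservative collector scans
           the stack and registers for pointers and frees unreachable
           blocks during its periodic collections */
        char *p = GC_MALLOC(1024);
        p[0] = (char)i;
    }

    printf("heap size: %lu\n", (unsigned long)GC_get_heap_size());
    return 0;
}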
