Why am I unable to upload a file using TFTP? - C

Is it necessary to establish the connection each time when uploading the file across multiple iterations, in order to keep the stack size under control?
I got a "calloc failed" error.
I am using FreeRTOS with multithreading.

According to Wikipedia, yes, TFTP does not allow keeping the connection alive for multiple files.
If you are working with a small embedded system, its filesystem might not be designed to handle many files (even small ones), and you may want to reorganize the data into fewer files.
I am not sure what this has to do with stack size or running out of heap space. The question is very vague, but you might want to account for scarce memory resources (with pencil and paper, even) to plan how the program will run, instead of chasing these bugs every time a new feature is added.
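For the "calloc failed" part specifically, here is a minimal sketch of what I mean by accounting for memory: check the allocation result instead of assuming it succeeds, and log how much heap is actually left. This assumes one of the FreeRTOS heap_1/2/4/5 allocators (so xPortGetFreeHeapSize() is available); whether calloc() itself draws from the FreeRTOS heap depends on how your toolchain's malloc is wired up, and BUFFER_LEN is just a placeholder.

    /* Sketch: check the allocation and report remaining FreeRTOS heap so
     * buffer sizes (or configTOTAL_HEAP_SIZE) can be planned on paper.
     * BUFFER_LEN is a placeholder for illustration. */
    #include <stdio.h>
    #include <stdlib.h>
    #include "FreeRTOS.h"   /* xPortGetFreeHeapSize() */

    #define BUFFER_LEN 1024u

    static unsigned char *alloc_transfer_buffer(void)
    {
        unsigned char *buf = calloc(BUFFER_LEN, sizeof *buf);
        if (buf == NULL) {
            printf("calloc failed, free FreeRTOS heap = %u bytes\n",
                   (unsigned)xPortGetFreeHeapSize());
            return NULL;
        }
        return buf;
    }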

Related

Can bad C code cause a Blue Screen of Death?

I am a new coder in C, recently moved over from Python, but I still like to challenge myself with fairly ambitious projects (like a chess program), and I have found that my computer suffers an unusual number of BSODs, both when I am running a program and when I am not (admittedly, attempting to use the entirety of my memory as a hash table may not have been the greatest idea).
So my question is: are these most likely caused by my crappy C code, or is it more likely that my 3-year-old, overworked laptop is the culprit?
If it could be the code, what are the big things I should avoid doing so as to prevent this?
A BSOD usually contains some information about what caused it.
What information it contains, and how exactly it is displayed depends on the version of Windows you are running.
As can be seen from the list here:
https://hetmanrecovery.com/recovery_news/bsod-errors
Most BSOD errors come from device / driver / kernel code, and not from your typical userland program.
That said, it might be possible to trigger a BSOD if your code uses a particularly low-level Windows API, especially if you run it with administrator privileges.
Note that simply filling up memory will cause allocations in your program to fail, and possibly crash your program, but not the whole OS.
Also, Windows places limits on how much memory an individual process can allocate.
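To see that concretely, here is a small sketch (64-bit build assumed; the 64 GiB request is deliberately oversized just for illustration): the failed allocation returns NULL, your program can handle or report it, and the OS keeps running.

    /* Sketch: an oversized allocation fails gracefully instead of taking
     * down the OS.  64 GiB is an arbitrary, deliberately huge value. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t huge = (size_t)64 * 1024 * 1024 * 1024;  /* 64 GiB */
        void *p = malloc(huge);
        if (p == NULL) {
            fprintf(stderr, "allocation of %zu bytes failed; handle it and move on\n", huge);
            return EXIT_FAILURE;
        }
        free(p);
        return EXIT_SUCCESS;
    }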
One final note:
"3 year old laptop" does not provide enough information to tell anything about your hardware, since there are different tiers of laptops available, and some of the high end 3 year old ones will still be better performing then a mid tier one bought yesterday.
As a troubleshooting measure, I would recommend backing up your data, making a clean install of your OS (aka "format the machine"), then making sure all your drivers are up to date.
You may also want to try hardware diagnostic tools such as MemTest86, check SMART on your storage, etc.
It's not supposed to be possible for anything you do in an ordinary "user space" program to crash the whole computer. Something else must also be wrong. Here are some possibilities:
If you are making the computer do CPU- and RAM-intensive work for long periods, you may stress the hardware to the point where a marginally defective component fails. Usually it's either the RAM, the power supply, or the cooling fans at fault.
Make sure your power supply is rated for all of the kit you have, running simultaneously. Make sure you have enough airflow for the amount of heat you're generating; check for dust-clogged heatsinks and fans that aren't actually spinning. If you have more than one RAM stick, take one out at a time and see if that makes the problem disappear.
I'd like to tell you to get error-correcting RAM if you don't have it already, but for infuriating market differentiation reasons you'd have to replace the motherboard and CPU as well. It's still worth doing, in the long run, but it amounts to replacing the whole computer.
You may be tickling a bug in the OS or the drivers. The most probable culprit is the GPU driver, particularly if your program does anything graphical. Regrettably, all you can do about this is make sure you're fully patched up.

CLion uses system memory excessively

I recently started to use CLion, on Windows 7 64-bit, for editing C files.
One thing that bothers me a lot is that it uses too much system memory. It doesn't cause an out-of-memory error as asked in another question. In fact, CLion shows much lower memory consumption in the IDE (~500 MB out of ~2000 MB) than it takes from the system (~1000 MB); the figures come from a snapshot of the system memory usage and CLion's own memory display.
I use CLion not for C++ but for C projects. My project isn't that big (~5 .c files under 300 lines each and ~10 .h files). I don't use it to compile the project, I just use it for editing. At the time of the snapshot, no user program had been launched from it, and CLion wasn't showing any background processes running (indexing etc.). This is the general behaviour.
I'm not sure if what I experience is something expected/normal, or it is caused because of my system setup, project settings or the way I use the IDE.
Are there any known causes of excessive memory usage? Can you suggest practices to decrease memory usage?
The post is 2 years old, but I am also having this issue with CLion 2018.1, and I imagine others do, too. Some tips that worked for me:
Excluding directories from indexing.
Deleting source files I don't need.
Resolving a circular dependency between two classes. (Note: I can't vouch that it was exactly this, because I tried several things at once, and it seems odd that such a powerful IDE would be affected by such an issue, but I can't rule it out.)
If it's really bad, the indexing can be paused. Guaranteed to reduce the memory usage. Of course, the intelligent completion won't work then.
Currently the RAM usage is stable at ~1 GB with RocksDB, RapidJSON, and ~50 classes.
UPDATE: tweaking clion64.exe.vmoptions reduced the consumption radically.
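For reference, the tweak meant here is editing clion64.exe.vmoptions (or the per-user copy created via Help | Edit Custom VM Options) and lowering the JVM heap and code cache limits. The values below are only an example of the kind of settings involved, not recommended defaults:

    -Xms256m
    -Xmx1024m
    -XX:ReservedCodeCacheSize=240m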
Same issue here. I haven't even been using CLion; it just sits there so that I don't have to open it again, with 2 projects and a few files open, nothing major. Still, eating up 3+ GB is not something I can accept, so I am switching back to Sublime, which works fine. As others have mentioned, I use it only for editing/refactoring; compilation happens in the terminal.
(PyCharm has similar issues)
CLion needs to index and keep information about all of the system headers to provide smart completion, auto-import and symbol resolution. Your project is only the smallest part of the code base being analysed.
I have heard that version 2020.3 brings an option to switch off refreshing files.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/360007093580-How-to-disable-refreshing-files-after-build
Unfortunately I cannot try it out in my professional development environment.

embedded linux, application state freeze, relaunch

We have an embedded application, and it now requires its state to be saved and reloaded, just like in PC games, where you save before you have to go out and breathe some fresh air. The product is quite evolutionary in nature, with no proper design, so identifying exactly which data needs to be saved is not an option.
The software is in C, so all data has fixed addresses (.data segment); it is also deterministic, and there are no dynamic memory allocations. So, in theory, I can back up this data segment to a file and, on relaunch of the application, restore it from the file. This approach will probably save a lot more data than is required, but I am OK with that.
How can I do this with a short execution time?
Also, how can I identify the start and end of the .data segment at run time?
You want application checkpointing, so perhaps the Berkeley Lab Checkpoint/Restart (BLCR) library might help you.
You could perhaps use the mmap(2) system call, if you are sure all the data has fixed addresses, etc...
To know about your current memory segments and mappings, read (from your application) the /proc/self/maps file. There is also /proc/self/smaps, etc. Learn more about proc(5), i.e. /proc/.
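For the second question (finding the data segment at run time), here is a rough sketch for GNU/Linux with the GNU toolchain. The edata/end symbols are documented in end(3); __data_start is a GNU extension available on many targets, so treat these as assumptions and cross-check the addresses against /proc/self/maps.

    /* Sketch (GNU/Linux, GNU toolchain): dump the initialized-data segment.
     * edata/end are documented in end(3); __data_start is a GNU extension. */
    #include <stdio.h>
    #include <stdlib.h>

    extern char __data_start;  /* start of .data (GNU ld / glibc) */
    extern char edata;         /* end of initialized data */
    extern char end;           /* end of BSS */

    static int save_data_segment(const char *path)
    {
        size_t len = (size_t)(&edata - &__data_start);
        FILE *f = fopen(path, "wb");
        if (f == NULL)
            return -1;
        size_t written = fwrite(&__data_start, 1, len, f);
        fclose(f);
        return (written == len) ? 0 : -1;
    }

    int main(void)
    {
        printf(".data: %p .. %p, BSS ends at %p\n",
               (void *)&__data_start, (void *)&edata, (void *)&end);
        return save_data_segment("state.bin") == 0 ? EXIT_SUCCESS : EXIT_FAILURE;
    }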

Library or tools for managing shared mmapped files

Disclaimer: This is probably a research question as I cannot find what I am looking for, and it is rather specific.
Problem: I have a custom search application that needs to read between 100K and 10M files that are between 0.01 MB and about 10.0 MB each. Each file contains one array that could be directly loaded as an array via mmap. I am looking for a solution to prefetch files into RAM before they are needed and, if the system memory is full, evict ones that have already been processed.
I know this sounds a lot like a combination of OS memory management and something like memcached. What I am actually looking for is something like memcached that doesn't return strings or values for a key, but rather the address for the start of a chosen array. In addition, (this is a different topic) I would like to be able to have the shared memory managed such that the distance between the CPU core and the RAM is the shortest on NUMA machines.
My question is: "does a tool/library like this already exist?"
Your question is related to this one
I'm not sure you need to find a library. You just need to understand how to efficiently use system calls.
I believe the readahead system call could help you.
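As a sketch of what such a prefetch could look like (posix_fadvise with POSIX_FADV_WILLNEED; readahead(2) is the Linux-specific alternative), with error handling kept minimal:

    /* Sketch: ask the kernel to prefetch a whole file into the page cache so
     * a later mmap()/read() is less likely to block on disk I/O. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static int prefetch_file(const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            perror(path);
            return -1;
        }
        struct stat st;
        if (fstat(fd, &st) == 0) {
            /* Hint that the whole range will be needed soon; on Linux,
             * readahead(fd, 0, st.st_size) is a similar hint. */
            posix_fadvise(fd, 0, st.st_size, POSIX_FADV_WILLNEED);
        }
        close(fd);   /* the prefetched pages stay in the page cache */
        return 0;
    }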
Indeed, you have many, many files (and perhaps too many of them). I hope that your filesystem is good enough, or that they are spread across many directories. Having millions of files may become a concern if things are not tuned appropriately (but I won't dare help on this).
I don't know if it is your application that writes & reads that many files. Perhaps you might consider switching to a fast DBMS like PostgreSQL or MySQL, or perhaps you could use GDBM.
I have once done this for a search-engine kind of application. It used an LRU chain, which was also addressable (via a hash table) by file-id, and memory-address IIRC. On every access, the hot items were repositioned to the head of the LRU chain. When memory got tight (mmap can fail ...) the tail of the LRU-chain was unmapped.
The pitfall of this scheme is that the program can get blocked on pagefaults. And since it was single threaded, it was really blocked. Altering this to a multithreaded architecture would involve protecting the hash and LRU structures by locks and semaphores.
After that, I realised that I was doing double buffering: the OS itself has a perfect LRU disk-buffer mechanism, which is probably smarter than mine. Just open()ing or mmap()ing every single file on every request is only one system call away, and (given recent activity) just as fast as, or even faster than, the buffering layer.
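To illustrate that "one system call away" approach, here is a sketch of mapping a file on demand and letting the page cache do the caching; it returns the byte length so the caller can munmap(), and the float element type is only an assumption for illustration.

    /* Sketch: map one file on demand; the OS page cache does the caching,
     * so re-mapping a recently used file is cheap. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static const float *map_array(const char *path, size_t *nbytes)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return NULL;
        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return NULL;
        }
        void *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        close(fd);             /* the mapping stays valid after close */
        if (p == MAP_FAILED)
            return NULL;
        *nbytes = (size_t)st.st_size;   /* element count = *nbytes / sizeof(float) */
        return (const float *)p;
    }
    /* Caller releases the mapping with munmap((void *)ptr, nbytes) when done. */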
Regarding a DBMS: using a DBMS is a clean design, but you have the overhead of at least 3 system calls just to get the first block of data. And it will certainly (always) block. But it lends itself reasonably well to a multi-threaded design, and relieves you from the pain of locks and buffer management.

Memory footprint on windows

My C application on Windows runs a for loop in which it dumps numerous entries into some data structure and then saves them to an XML file. Now, I want to know the memory footprint it takes to do this. Are there any tools available?
Task Manager is the way I do it. It's simple and easy.
But it only works if you're trying to measure very large memory footprints. Then again, applications with large footprints are probably the only cases where you'd need to measure the usage anyway.
If you want to measure memory usage accurate to the byte, I would just build a simple wrapper around malloc() and free() that increments and decrements some global value (if the app is threaded, a lock might also be needed).
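A minimal sketch of such a wrapper (the xmalloc/xfree names are just illustrative): each block gets a small header holding its size so the counter can be decremented in the free path; in a threaded program the counter updates would need a lock or atomics.

    /* Sketch: byte-accurate allocation counter via a per-block size header. */
    #include <stddef.h>
    #include <stdlib.h>

    /* The union keeps the header large and aligned enough for any type. */
    typedef union { size_t size; max_align_t align; } header_t;

    static size_t g_bytes_in_use = 0;

    void *xmalloc(size_t n)
    {
        header_t *h = malloc(sizeof(header_t) + n);
        if (h == NULL)
            return NULL;
        h->size = n;
        g_bytes_in_use += n;
        return h + 1;          /* hand back the memory just past the header */
    }

    void xfree(void *ptr)
    {
        if (ptr == NULL)
            return;
        header_t *h = (header_t *)ptr - 1;
        g_bytes_in_use -= h->size;
        free(h);
    }

    size_t xbytes_in_use(void)  /* call wherever a footprint snapshot is needed */
    {
        return g_bytes_in_use;
    }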
Task Manager is one way to do it. I prefer Process Explorer because it gives a lot more info than Task Manager.
