How to destroy a filesystem [closed] - filesystems

We have some USB storage devices where the filesystem is corrupted (RAW). We don't really know the exact reason, so I would like to simulate the problem. Are there any tools or methods to destroy the filesystem? Or better yet, are there any guidelines on how NOT to destroy the filesystem?
thanks!

There are scenarios in which shred may not work with all file systems.
One alternative to shred is to dd over the whole device.
Assuming your device is /dev/sdb or something like that, you can overwrite it with the following command:
dd if=/dev/zero of=/dev/sdb
This overwrites the entire contents of the device, including the on-disk data structures (superblocks, etc.) of the previous file system, with zeros, effectively destroying the file system for you.
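If zeroing the whole device takes too long, overwriting just the first few megabytes is usually enough to destroy the partition table and the filesystem's superblock area. Below is a minimal C sketch of that idea, assuming a placeholder device path /dev/sdX (replace it with your actual device and run as root - this destroys data):
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* WARNING: destroys data. "/dev/sdX" is a placeholder path. */
    const char *dev = "/dev/sdX";
    char zeros[4096];
    memset(zeros, 0, sizeof zeros);

    int fd = open(dev, O_WRONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Zero the first 4 MiB: enough to clobber the partition table
       and the superblock area of most filesystems. */
    for (int i = 0; i < 1024; i++) {
        if (write(fd, zeros, sizeof zeros) != (ssize_t)sizeof zeros) {
            perror("write");
            break;
        }
    }
    fsync(fd);    /* make sure the zeros actually reach the device */
    close(fd);
    return 0;
}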

If you have access to a Linux system, you can use shred.
Insert the USB storage device, execute dmesg | tail to get the name of the newly created device (if the computer you use has only one hard disk, it is probably /dev/sdb), then execute the following command as root or with sudo (replace /dev/sdX with the actual device):
shred /dev/sdX
If your USB device has multiple partitions, you can run shred on a single partition instead of the whole device by giving it the appropriate device file (e.g. /dev/sdb1).
shred works by overwriting the specified file (or, in this case, the whole partition or device) multiple times with random data - if your USB storage device is large, this can take a very long time.
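If you only need to break the filesystem rather than securely erase the data, a single pass is enough and much faster; with GNU shred that would be, for example:
shred -v -n 1 /dev/sdX
(-n 1 limits shred to one pass of random data, -v prints progress; both are GNU coreutils options.)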

Related

Life expectancy of usb stick when datalogging [closed]

I know that on average a flash drive has a life expectancy of roughly 100,000 write cycles. This raises a question for me.
I have written a program that writes some values to a CSV file on a USB stick every 6 seconds. Every day a new file is created. The machine is a Sigmatek PLC programmed in Structured Text (similar to Pascal) with a C library for file handling. The code looks something like: fopen (opens today's file), write some values to the stream along with a timestamp, then fclose (close the file).
I heard someone say this could mean my USB stick will not last very long, since I'm opening and closing the file every 6 seconds. He suggested I open the file, write values every 6 seconds as usual, and then close it after 10 or 20 minutes; this way the USB stick would last a lot longer. His reasoning is that the USB stick will only be written to at the moment you actually close the file with fclose. Can someone confirm this?
Or will this perhaps not become a problem at all, even if I'm opening and closing every 6 seconds? The USB stick has 16 GB of memory and will only run out of space after a very long time (one file is 500 kB max, one file created every day), so I'm only writing, not writing and erasing. Is the 100,000 write cycle lifetime based purely on writing, or on writing, erasing and re-writing?
First, regarding an fclose() every 10-20 minutes: this depends on the buffering mode (for C, see setvbuf). In buffered mode, what you were told is correct - any buffered data is written out at the time of the fclose(). However, there is an increased risk of losing data (e.g. a sudden power loss means the unwritten buffer is lost).
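As a rough sketch of that trade-off in C (the file name, buffer size and read_sensor_value() are hypothetical placeholders, and the loop is simplified):
#include <stdio.h>
#include <unistd.h>

/* Hypothetical stand-in for the real data source. */
static double read_sensor_value(void) { return 42.0; }

int main(void)
{
    static char iobuf[64 * 1024];          /* 64 kB stdio buffer */
    FILE *fp = fopen("log.csv", "a");
    if (!fp) { perror("fopen"); return 1; }

    /* Fully buffered: data only goes to the stick when the buffer
       fills, on fflush(), or on fclose(). Must be set before the
       first read/write on the stream. */
    setvbuf(fp, iobuf, _IOFBF, sizeof iobuf);

    for (int i = 0; i < 200; i++) {        /* ~20 minutes at 6 s/sample */
        fprintf(fp, "%d,%f\n", i, read_sensor_value());
        sleep(6);
    }

    fclose(fp);                            /* buffered data written here */
    return 0;
}
Note that the data only reaches the stick when the buffer fills or the file is closed, so a power loss during the 10-20 minute window loses everything still buffered.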
We've also made embedded systems using writable flash (not USB). 100,000 write cycles is hugely variable. It means "P/E" (program-erase) cycles. If you're only appending data, then at the rate you cite, I would not bother too much about it. If you're doing other things like erasing/compressing log files which could result in the same storage location being written multiple times, then you need to think more about it. You'd also need to look at what is being done by the OS - for example, any type of auto-defrag should preferably not be enabled.

C fastest way to continuously write data to file [closed]

I have a string composed of some packet statistics, such as packet length, etc.
I would like to store this in a CSV file, but if I use the standard fprintf to write to a file, it writes incredibly slowly, and I end up losing information.
How do I write information to a file as quickly as possible in order to minimize information loss from packets? Ideally I would like to support millions of packets per second, which means I need to write millions of lines per second.
I am using XDP to get packet information and send it to the userspace via an eBPF map if that matters.
The optimal performance will depend on the hard drive, drive fragmentation, the filesystem, the OS and the processor. But optimal performance will never be achieved by writing small chunks of data that do not align well with the filesystem's disk structure.
A simple solution would be to use a memory mapped file and let the OS asynchronously deal with actually committing the data to the file - that way it is likely to be optimal for the system you are running on without you having to deal with all the possible variables or work out the optimal write block size of your system.
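A minimal sketch of the memory-mapped approach on POSIX systems might look like the following; the file name and the fixed pre-sized mapping are assumptions, and a real logger would grow the file and remap as it fills:
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t size = (size_t)1 << 30;   /* pre-size the file to 1 GiB */
    int fd = open("packets.csv", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)size) < 0) { perror("ftruncate"); return 1; }

    char *map = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); return 1; }

    /* Writing is now just a memcpy; the OS flushes dirty pages to
       disk asynchronously on its own schedule. */
    size_t off = 0;
    const char *line = "1623456789,64,UDP\n";   /* example record */
    size_t len = strlen(line);
    memcpy(map + off, line, len);
    off += len;

    munmap(map, size);   /* remaining write-back happens here (or msync) */
    close(fd);
    return 0;
}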
Even with regular stream I/O you will improve performance drastically by writing to a RAM buffer. Making the buffer size a multiple of the block size of your file system is likely to be optimal. However since file writes may block if there is insufficient buffering in the file system itself for queued writes or write-back, you may not want to make the buffer too large if the data generation and the data write occur in a single thread.
Another solution is to have a separate write thread, connected to the thread generating the data via a pipe or queue. The writer thread can then simply buffer data from the pipe/queue until it has a "block" (again, matching the file system block size is a good idea), then commit the block to the file. The pipe/queue then acts as a buffer, storing data generated while the thread is stalled writing to the file. The buffering afforded by the pipe, the block, the file system and the disk write-cache will likely accommodate any disk latency, so long as the fundamental write performance of the drive is faster than the rate at which data is being generated - nothing but a faster drive will solve that problem.
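Here is a sketch of that writer-thread pattern using a plain POSIX pipe as the queue (block size, file name and record format are illustrative assumptions; compile with -pthread):
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define BLOCK 65536                /* write in filesystem-friendly blocks */

static int pipefd[2];

/* Writer thread: drain the pipe in large chunks and commit to disk. */
static void *writer(void *arg)
{
    (void)arg;
    FILE *fp = fopen("packets.csv", "w");
    if (!fp) return NULL;
    static char block[BLOCK];
    ssize_t n;
    while ((n = read(pipefd[0], block, BLOCK)) > 0)
        fwrite(block, 1, (size_t)n, fp);
    fclose(fp);
    return NULL;
}

int main(void)
{
    if (pipe(pipefd) < 0) return 1;
    pthread_t tid;
    pthread_create(&tid, NULL, writer, NULL);

    /* Producer: format records and push them into the pipe. The pipe's
       internal buffer absorbs bursts while the writer blocks on I/O. */
    char line[64];
    for (int i = 0; i < 1000000; i++) {
        int len = snprintf(line, sizeof line, "%d,%d\n", i, 64);
        write(pipefd[1], line, (size_t)len);
    }

    close(pipefd[1]);              /* EOF lets the writer finish */
    pthread_join(tid, NULL);
    return 0;
}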
Use sprintf to write to a buffer in memory.
Make that buffer as large as possible, and when it gets full, then use a single fwrite to dump the entire buffer to disk. Hopefully by that point it will contain many hundreds or thousands of lines of CSV data that will get written at once while you begin to fill up another in-memory buffer with more sprintf.
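A rough sketch of that approach (buffer size and record format are assumptions):
#include <stdio.h>

#define BUFSZ (4 * 1024 * 1024)    /* 4 MiB in-memory buffer */

static char buf[BUFSZ];
static size_t used = 0;

/* Append one CSV record; dump the whole buffer with one fwrite when full. */
static void log_line(FILE *fp, int pkt_len, const char *proto)
{
    if (used + 128 > BUFSZ) {               /* not enough room left */
        fwrite(buf, 1, used, fp);           /* one big write */
        used = 0;
    }
    used += (size_t)sprintf(buf + used, "%d,%s\n", pkt_len, proto);
}

int main(void)
{
    FILE *fp = fopen("packets.csv", "w");
    if (!fp) return 1;
    for (int i = 0; i < 1000000; i++)
        log_line(fp, 64 + (i % 1400), "UDP");
    fwrite(buf, 1, used, fp);               /* flush the remainder */
    fclose(fp);
    return 0;
}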

Socket select() Time Switching? [closed]

I have created 6 sockets and am listening to all of them simultaneously using select. I want to find out how much time the CPU takes switching from one socket to another. Does anyone know, or can someone guide me on how to measure this?
I think you may have misunderstood what the select call is actually doing, the man page for select says the following:
Three independent sets of file descriptors are watched. Those listed in readfds will be watched to see if characters become available for reading (more precisely, to see if a read will not block; in particular, a file descriptor is also ready on end-of-file), those in writefds will be watched to see if a write will not block, and those in exceptfds will be watched for exceptions. On exit, the sets are modified in place to indicate which file descriptors actually changed status. Each of the three file descriptor sets may be specified as NULL if no file descriptors are to be watched for the corresponding class of events.
So when your call to select returns what it will tell you is which, if any, of the file descriptors are (in your case) ready to be read from. It's then up to you to decide which to read and how to read it.
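For reference, a typical select loop over several sockets looks roughly like this; the socks array and its setup are assumed to exist already, and the sketch only shows the monitoring loop:
#include <stdio.h>
#include <sys/select.h>
#include <unistd.h>

/* Watch nsocks already-open sockets and handle whichever become
   readable. select() blocks until at least one is ready; it does not
   "switch" between them, it just reports readiness. */
void poll_sockets(int *socks, int nsocks)
{
    for (;;) {
        fd_set readfds;
        FD_ZERO(&readfds);
        int maxfd = -1;
        for (int i = 0; i < nsocks; i++) {
            FD_SET(socks[i], &readfds);
            if (socks[i] > maxfd) maxfd = socks[i];
        }

        if (select(maxfd + 1, &readfds, NULL, NULL, NULL) < 0) {
            perror("select");
            return;
        }

        for (int i = 0; i < nsocks; i++) {
            if (FD_ISSET(socks[i], &readfds)) {
                char buf[1024];
                ssize_t n = read(socks[i], buf, sizeof buf);
                if (n > 0)
                    printf("socket %d: %zd bytes\n", socks[i], n);
            }
        }
    }
}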
If you can, I'd recommend tracking down a copy of Unix Network Programming (by Stevens, Fenner and Rudoff). This will give you all the background information and example C code that you will ever want on network programming.
Or look at the tutorial here

Segments in RAM memory [closed]

I am confused about the segments in RAM, so please clarify the following doubts:
1. RAM is divided into user space and kernel space; is this memory division done by the OS or by the hardware (CPU)?
2. What are the contents of kernel space? As far as I have understood, there is only the kernel image; please correct me if I am wrong.
3. Where do the code, data, stack and heap segments exist?
a) Do user space and kernel space have separate code, data, stack and heap segments?
b) Are these segments created by the hardware or by the OS?
4. Can I find the amount of memory occupied by kernel space and user space?
a) Is there any Linux command (or system call) to find this?
5. Why has RAM been divided into user space and kernel space?
a) I feel it is done to keep the kernel safe from application programs. Is that so? Is it the only reason?
I am a beginner, so please suggest some good books, links and the way to approach these concepts.
I took up the challenge and tried with rather short answers:
1. Execution happens in user and kernel space. The BIOS and CPU support the OS in detecting and separating resources/address ranges such as main memory and devices (-> related question) to establish protected mode. In protected mode, memory is separated via virtual address spaces, which are mapped page-wise (usually in blocks of 4096 bytes) to real addresses of physical memory via the MMU (Memory Management Unit).
From user space, one cannot access memory directly (in real mode); one has to go through the MMU, which acts like a transparent proxy with access protection. Access errors are known as segmentation faults, access violations or segmentation violations (SIGSEGV), which high-level programming languages like Java surface as exceptions such as NullPointerException (NPE).
Read about protected mode, real mode and 'rings'.
Note: Special CPUs, such as in embedded systems, don't necessarily have an MMU and could therefore be limited to special OSes like µClinux or FreeRTOS.
2. A kernel also allocates buffers of its own, and the same goes for drivers (e.g. IO buffers for disks, network interfaces and GPUs).
3. Generally, resources exist per space and per process/thread.
a) The kernel puts its own, protected stack on top of the user-space stack (per thread) and also has separate code (also called 'text'), data and heap segments. Each process additionally has its own resources.
b) CPU architectures have certain requirements (depending on the degree of support they offer), but in the end it is the software (the kernel and the user-space libraries used for interfacing) that creates these structures. A small C program after this list illustrates where these segments land in a process's address space.
4. Every reasonable OS provides at least one way to do that.
a) Try sudo cat /proc/slabinfo or simply sudo slabtop.
5. See point 1 above.
a) Primarily, yes: just like user-space processes are isolated from each other - except for special techniques such as CMA (Cross Memory Attach) for fast direct access in newer kernels.
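As mentioned under 3., here is that small illustrative C program (the segment names in the comments are the conventional ones, and exact addresses vary between runs because of ASLR):
#include <stdio.h>
#include <stdlib.h>

int initialized = 1;    /* data segment */
int uninitialized;      /* bss segment  */

void func(void) {}      /* code ('text') segment */

int main(void)
{
    int local;                                /* stack */
    int *dynamic = malloc(sizeof *dynamic);   /* heap  */

    printf("text:  %p\n", (void *)func);
    printf("data:  %p\n", (void *)&initialized);
    printf("bss:   %p\n", (void *)&uninitialized);
    printf("heap:  %p\n", (void *)dynamic);
    printf("stack: %p\n", (void *)&local);

    free(dynamic);
    return 0;
}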
Search the Stack Exchange sites for recommended books.
See also: What can cause segmentation faults in C++?

Shared memory and IPC [closed]

I was reading a tutorial about shared memory and found the following statement: "If a process wishes to notify another process that new data has been inserted to the shared memory, it will have to use signals, message queues, pipes, sockets, or other types of IPC." So what is the main advantage of using shared memory plus another type of IPC for notification only, instead of using an IPC mechanism that doesn't need any other IPC type, like a message queue or a socket?
The distinction here is between IPC mechanisms for signalling and mechanisms for sharing state.
Signalling (signals, message queues, pipes, etc.) is appropriate for information that tends to be short, timely and directed. Events over these mechanisms tend to wake up or interrupt another program. The analogy would be, "what would one program SMS to another?"
Hey, I added a new entry to the hash table!
Hey, I finished that work you asked me to do!
Hey, here's a picture of my cat. Isn't he cute?
Hey, would you like to go out, tonight? There's this new place called the hard drive.
Shared memory, compared with the above, is more effective for sharing relatively large, stable objects that change in small parts or are read repeatedly. Programs might consult shared memory from time to time or after receiving some other signal. Consider, what would a family of programs write on a (large) whiteboard in their home's kitchen?
Our favorite recipes.
Things we know.
Our friends' phone numbers and other contact information.
The latest manuscript of our family's illustrious history, organized by prison time served.
With these examples, you might say that shared memory is closer to a file than to an IPC mechanism in the strictest sense, with the obvious exceptions that shared memory is
Random access, whereas files are sequential.
Volatile, whereas files tend to survive program crashes.
An example of where you want shared memory is a shared hash table (or btree or other compound structure). You could have every process receive update messages and update a private copy of the structure, or you can store the hash table in shared memory and use semaphores for locking.
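A minimal POSIX sketch of that pattern - the names /demo_table and /demo_table_lock, the struct layout and the single-semaphore locking are illustrative assumptions:
#include <fcntl.h>
#include <semaphore.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

/* A toy shared structure standing in for a real hash table
   (bounds checks omitted for brevity). */
struct table {
    int  count;
    char entries[16][64];
};

int main(void)
{
    /* Create (or open) the shared memory object and size it. */
    int fd = shm_open("/demo_table", O_CREAT | O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }
    ftruncate(fd, sizeof(struct table));

    struct table *t = mmap(NULL, sizeof *t, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
    if (t == MAP_FAILED) { perror("mmap"); return 1; }

    /* A named semaphore serves as the cross-process lock. */
    sem_t *lock = sem_open("/demo_table_lock", O_CREAT, 0600, 1);
    if (lock == SEM_FAILED) { perror("sem_open"); return 1; }

    /* Any cooperating process can update the table under the lock. */
    sem_wait(lock);
    snprintf(t->entries[t->count], sizeof t->entries[0], "new entry");
    t->count++;
    sem_post(lock);

    sem_close(lock);
    munmap(t, sizeof *t);
    close(fd);
    return 0;
}
Depending on the libc version you may need to link with -lrt (and -pthread) on Linux; a second process that opens the same names sees the same table and the same lock.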
Shared memory is very fast - that is the main advantage and reason you would use it. You can use part of the memory to keep flags/timestamps regarding the data validity, but you can use other forms of IPC for signaling if you want to avoid polling the shared memory.
Shared memory is used to transfer the data between processes (and also to read/write disk files fast). If you don't need to transfer the data and need to only notify other process, don't use shared memory - use other notification mechanisms (semaphores, events, etc) instead.
Depending on the amount of data to be passed from process to process, shared memory would be more efficient because you would minimize the number of times that data would be copied from userland memory to kernel memory and back to userland memory.
