Data destruction [closed] - filesystems

There are many file shredder programs that one can use to permanently delete a file. What I want to know are some implementation details. For example, considering the Gutmann algorithm, how should it interact with the file and the file system? Should an application iterate over all the disk clusters the file occupies in order to overwrite them? Or is it enough to open the file, change its contents in some way, and then delete it?
Conversely, how does one restore a deleted file? I have not found much information on these topics.
I would be very thankful for your replies.

You could look at the source code of the shred utility, which is part of the GNU coreutils found on Linux. The basic idea is to make multiple passes over the file's disk blocks. There are also some assumptions made about the way the underlying file system commits these writes. See info coreutils 'shred invocation' for more information.
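To make the mechanics concrete, here is a minimal Python sketch of that idea (not shred itself, and using random data rather than Gutmann's fixed pass patterns): it overwrites a single file's existing blocks in place and then unlinks it. Whether those writes really land on the original sectors depends on the file system, which is exactly the assumption the shred documentation warns about.

import os
import secrets

def shred_file(path, passes=3, chunk=1 << 20):
    """Overwrite a file's contents in place several times, then unlink it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(secrets.token_bytes(n))  # random fill for this pass
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # ask the OS to push this pass to the device
    os.remove(path)

On a journaling or copy-on-write file system (or an SSD doing wear levelling), old copies of the data may still survive elsewhere, which is why whole-device overwrites are sometimes preferred.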
Restoring deleted files is best done when you know the internal layout of the file system in question and how the delete operation is implemented on it. For example, many drivers for the FAT file system just mark the directory entry as deleted but leave the file's contents intact (until and unless they are overwritten by new files that you create). So you could just take a dump of the disk and scan through the raw data looking for what you want.
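As a toy illustration of that scanning approach, the following Python sketch looks for a known byte signature in a raw disk image (for example one taken with dd); real undeletion tools do the same thing, but with knowledge of the on-disk structures such as FAT directory entries.

import mmap

def find_signature(image_path, signature):
    """Return the byte offset of the first occurrence of signature in a raw image, or -1."""
    with open(image_path, "rb") as img:
        with mmap.mmap(img.fileno(), 0, access=mmap.ACCESS_READ) as mm:
            return mm.find(signature)

# Hypothetical usage: look for a JPEG header in an image taken with dd.
# print(find_signature("disk.img", b"\xff\xd8\xff"))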

Related

Getting the website fill-in form data to a spreadsheet or database [closed]

I'm facing a rather logistical problem and need suggestions on which way we should go.
Since our hosting provider doesn't allow MySQL databases, we had to build a plain HTML5/CSS3/Bootstrap website (instead of WordPress).
What we need now is a page where we would allow our visitors to fill in certain forms. For the sake of argument, let's say this page would have three different fill-in forms with fields like name, contact info, etc.
My question is how we could automate the process so that the information from those forms goes directly into a database or some sort of spreadsheet, so we could easily look at the data without having to enter it manually.
Is that even possible? What technology should we use, and which way should we go?
Thank you.
Databases are file based, so at the end of the day a MySQL database is nothing more than a set of files somewhere in your file tree. You could therefore use PHP to write a .csv (or .txt, whatever) file and keep your data there directly, avoiding the need for a DBMS.
The filesystem functions built into PHP are very useful for this, especially fopen, fwrite and fclose.
<?php
// Append one submitted record to the file; 'a' appends instead of overwriting.
$fp = fopen('data.txt', 'a');
// Field names are examples; use whatever names your form inputs actually have.
fwrite($fp, ($_POST['name'] ?? '') . ',' . ($_POST['contact'] ?? '') . PHP_EOL);
fclose($fp);
?>
Of course, this solution only works if your host allows PHP.

When is data big enough to use Hadoop? [closed]

My employer runs a Hadoop cluster, but since our data is rarely larger than 1 GB, I have found that Hadoop is rarely needed to meet the needs of our office (this isn't big data). Still, my employer seems to want to be able to say we're using our Hadoop cluster, so we're actively seeking out data that needs analysis using our big fancy tool.
I've seen some reports saying that anything less than 5 TB shouldn't use Hadoop. What's the magic size at which Hadoop becomes a practical solution to data analysis?
There is no magic size. Hadoop is not only about the amount of data; it is also about resources and processing cost. Processing an image that requires a lot of memory and CPU is not the same as parsing a text file, and Hadoop is used for both.
To justify the use of Hadoop you need to answer the following questions:
Is your process able to run on one machine and complete the work on time?
How fast is your data growing?
Reading 5 TB once a day to generate a report is not the same as reading 1 GB ten times per second from a customer-facing API. But if you haven't faced these kinds of problems before, you very probably don't need Hadoop to process your 1 GB :)
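As a rough back-of-envelope illustration of the "can one machine finish on time" question (the throughput figure below is made up purely for the example), a single disk streaming at about 150 MB/s gets through 1 GB in seconds but needs most of a working day for 5 TB:

def scan_hours(data_bytes, throughput_mb_s):
    """Rough time, in hours, to read a dataset sequentially at a given throughput."""
    return data_bytes / (throughput_mb_s * 1024**2) / 3600

# Illustrative numbers only: one disk at ~150 MB/s.
print(f"1 GB: {scan_hours(1 * 1024**3, 150) * 3600:.1f} seconds")
print(f"5 TB: {scan_hours(5 * 1024**4, 150):.1f} hours")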

Database for read and append only [closed]

Basically my application needs to dump data into a database daily, but once data is written there is no need to update it.
Hence, is appending to a CSV or JSON file sufficient for this purpose, or would it be more computationally efficient to write to a standard SQL database?
Edit
Use-Case Update
I am expecting to store one entry per activity count daily. There are about 6-8 activities.
It is exactly like a log in some sense. I would like to perform some analysis of the trend of activities, for example. There are no relations between the different activities, though.
If in some cases there might be a need for updates, would that imply that a proper database would be more suitable than a text file?
It depends on the nature of the data, but there may be a style of database other than SQL which could be suitable, like MongoDB, which essentially stores JSON documents.
SQL is great when you need entities to have relationships to each other, or if you can take advantage of the type of select queries it can provide you with.
Database systems do have some overhead and could have some gotchas you might not expect, like loading up a heap of crap into memory so it's ready to be searched.
But storing data in text files can have drawbacks too; for example, it might become difficult to manage your data in the future.
It basically sounds like your use-case is similar to logging, in which case dumping it into a file is fine.
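If you do go the plain-file route, the daily dump can be as small as this sketch (Python here purely for illustration; the file name and activity names are made up):

import csv
from datetime import date

def append_daily_counts(path, counts):
    """Append one row per activity to a CSV log; existing rows are never rewritten."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for activity, count in counts.items():
            writer.writerow([date.today().isoformat(), activity, count])

# Hypothetical activity names for illustration.
append_daily_counts("activity_log.csv", {"emails_sent": 42, "reports_filed": 3})

If the occasional update does become necessary, that is the point where moving to a real database (SQL or document-oriented) starts to pay off, since rewriting individual rows in a flat file is awkward.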

tracking / monitoring system [closed]

I have been developing an application which will track the work history of employees in the office.
Hence I need to track the following details for my users (Ubuntu users):
Applications opened.
Duration for which application was running.
If the application is something like a text editor, or a media player like VLC, which files were opened and for what duration.
Also I want to track the copy/paste history of files/folders on removable media.
Could anyone suggest the header files and functions in C/shell/Perl that would help me track this?
Please note: for the sake of privacy, I am not expecting keystrokes to be monitored.
It may be that some of these requirements can't be fulfilled; however, suggestions on possible features will be appreciated.
Take a look at the man page for 'proc'. The procfs is a filesystem mounted on /proc. Under that directory is a folder for each process by process ID. Of interest to you will be the fd folder for each process. For example, for a process with a PID of 5, the fd folder is
/proc/5/fd
The fd folder contains a symbolic link for each file handle open by the process. To listen for changes (new processes being launched, new files being opened) on the proc filesystem, I suggest inotify. However, it does have limitations with respect to procfs.
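The answer above is language-agnostic; as a rough illustration (in Python rather than the C/shell/Perl the question mentions), the sketch below walks /proc and resolves each process's open file descriptors. Polling like this misses short-lived opens, which is one reason the inotify caveat matters.

import os

def open_files(pid):
    """Resolve the symbolic links under /proc/<pid>/fd to real paths."""
    fd_dir = f"/proc/{pid}/fd"
    paths = []
    try:
        entries = os.listdir(fd_dir)  # needs to run as the same user or as root
    except OSError:
        return paths
    for fd in entries:
        try:
            paths.append(os.readlink(os.path.join(fd_dir, fd)))
        except OSError:
            pass  # the descriptor may have been closed mid-scan
    return paths

if __name__ == "__main__":
    for pid in filter(str.isdigit, os.listdir("/proc")):
        files = open_files(pid)
        if files:
            print(pid, files)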

Retrieving data if the control file is lost in an Oracle database [closed]

Suppose the database's control file is lost permanently (no backup).
Can we retrieve the data from the data files in an Oracle database?
In general - 'yes'. But the circumstances matter. If you know what should be in the control file then you can recreate it (or rather, them; they should be multiplexed anyway) - see this article for example. That uses the create controlfile command with appropriate options and parameters to recreate the control file matching your existing data files. Really make sure you understand what it's doing and what impact it may have - you don't want to make things worse than they already are.
Or google for "oracle recover control file".
Don't rely on this being possible though - it's no substitute for a real backup and recovery strategy.
