I have a huge 1.9 GB data file without an extension that I need to open and pull some data from. The problem is that the file is extension-less, and I need to know what extension it should have and what software I can open it with to view the data in a table.
Here is the picture:
It's a file with only 2 lines. I already tried CSV in Excel, but it did not work. Any help?
I have never used it, but you could try this:
http://mark0.net/soft-tridnet-e.html
explained here:
http://www.labnol.org/software/unknown-file-extensions/20568/
The third "column" of that line has a 99% chance of being output from PHP's print_r function (with newlines imploded so it can be stored on a single line).
There may not be a "format" or program to open it with if it's just some app's custom debug/output log.
A quick Google search found a few programs to split large files into smaller units. That may make it easier to load into something (which may or may not be Notepad++) for reading.
It shouldn't be too hard to mash out a script to read the lines and reconstitute the session "array" into a more readable format (read: vertical, not inline), but it would have to be a one-off custom job, since no one other than the holder of your file would have a use for it.
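For example, a crude character-level filter along these lines would stream the file and start a new line before each [, turning the inlined print_r blob back into something vertical. This is purely illustrative; the real break points depend on your file:

#include <stdio.h>

/* Stream stdin to stdout, inserting a newline before each '['
   so the inlined print_r() output becomes vertical again. */
int main(void)
{
    int c;
    while ((c = getchar()) != EOF) {
        if (c == '[')
            putchar('\n');
        putchar(c);
    }
    return 0;
}

Being a byte-at-a-time stream filter, it handles a 1.9 GB file without loading it into memory.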
I have many files that are extracted into .txt with a batch file, but they don't have headers. I've read that a possible solution is to append the exported rows to a .txt file that already contains the headers.
With this:
echo. >> titles.txt
type data.txt >> titles.txt
This takes a lot of time and is not efficient, since it appends the big data file to the file containing the headers.
Another possible solution is to hardcode the titles into the SQL query, but this changes the type of the columns (if they are numeric, they will be changed to varchar).
Is there a way to insert the headers into the first row of the data .txt, rather than doing it the other way around?
I might be wrong, but as far as I know (and from earlier experiments doing exactly what you describe): no, it is not possible! The mentioned tools act on the file sequentially. You can open a file for reading, writing, or appending. If you open the titles.txt file for writing, it is overwritten - and thus emptied. If you open it for appending, you can only add to the end of the file - so you can only write the data after the header... The only way it might work - but which is pretty nasty - is to append the title to the end of the file and, during later processing (e.g. in Excel or whatever), re-sort the rows and move the last one to the beginning. But as mentioned: nasty and not really the way to go.
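Since a file can't be prepended to in place, any solution ends up copying the data once; the cleanest form of that is to write a brand-new file with the header first and the data streamed in after it. A minimal C sketch of that idea (the file names and the header row are placeholders, not from the question):

#include <stdio.h>

int main(void)
{
    FILE *in  = fopen("data.txt", "rb");     /* the big exported file */
    FILE *out = fopen("combined.txt", "wb"); /* new file: header + data */
    char buf[65536];
    size_t n;

    if (in == NULL || out == NULL)
        return 1;

    fputs("col1,col2,col3\n", out);          /* hypothetical header row */
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0)
        fwrite(buf, 1, n, out);              /* stream the data through */

    fclose(in);
    fclose(out);
    return 0;
}

This is no cheaper than type in terms of bytes moved, but it produces the rows in the right order without any re-sorting afterwards.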
If the number of files to process is a bigger problem than any individual file's size, switching from bcp to sqlcmd might help.
I come to you completely stumped. I do some side work for a company that uses an old DOS-based program to input and retrieve data. This is a legacy piece of software, and they have since moved to either QuickBooks or Outlook for all of their address and billing related needs. However, some changes have been made, and they still work with this database fairly regularly. Since the computer this software runs on is an XP machine (and none of the other computers in the office can run it), they're looking to phase the software out before the computer inevitably explodes.
TL;DR: I have an old .csv file (roughly two years old) that has a good chunk of information in it, but again, it's two years old. I have another file called ml.dat (I'm assuming "masterlist.dat") that's in the same folder as this legacy software. I open it with Notepad and Excel and am presented with information like this:
S;Û).;PÃS;*p(â'a,µ,
The chunk of text above renders even worse within Notepad or Excel - a lot more of it shows up as unrecognized squares.
Some of the information is actually readable, however. I can, for example, read the occasional town name or person's name, but I'm unable to get all of the information since a lot is missing. Perhaps the data isn't in Unicode or something? I have no idea. Any suggestions? I'm ultimately trying to take this information and put it into either QuickBooks or Outlook.
Please help!
Thanks
Edit: I'm guessing the file might be encrypted, since .dat files are usually clear text? Any thoughts?
.DAT files can be anything; they are usually just application data. Since there is readable text, it is very unlikely that this file is encrypted. Instead, you are seeing ASCII representations of the bytes of other content (http://www.asciitable.com/). Assuming single-byte values, the number 77 might appear in the file somewhere as M.
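If you want to see exactly which bytes are in there, a hex dump puts the printable and unprintable bytes side by side. A minimal C sketch (the filename is taken from the question):

#include <stdio.h>
#include <ctype.h>

int main(void)
{
    FILE *f = fopen("ml.dat", "rb");
    int c, i = 0;

    if (f == NULL)
        return 1;
    while ((c = fgetc(f)) != EOF) {
        printf("%02X(%c) ", c, isprint(c) ? c : '.');
        if (++i % 16 == 0)      /* 16 bytes per row */
            putchar('\n');
    }
    fclose(f);
    return 0;
}

Repeating byte patterns at regular offsets are usually the record boundaries mentioned below.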
Your options:
1. Search for some utility to load and translate the .dat file for that application.
2. Set up an appropriate DOS emulator so you can run this application on another box, or even a virtual machine running FreeDOS or something.
3. Figure out the file format and then write a program to translate the data.
For #3, you can attach a debugger to the application to trace how the file is read and written. Alternatively, you can try to figure out record boundaries (if all the records are the same size, things are a little easier). Then you can use known values to try to find field boundaries. If you can find (or decompile) the source code, that could also give you insight into the file format.
#1 is your best bet, and #2 will buy you some time so that you don't need the original machine anymore. #3 would likely be something to outsource.
If you can find the source or file format, then you just recreate whatever data structure was dumped to the file and read the file into it.
To find which exe opens it, you can do something like:
for %f in (*.exe) do find /c "ml.dat" %f
Assuming the original application was written in C, there would be code something like this to read the first record from the file:
#include <stdio.h>

/* Hypothetical record layout - the real field types and sizes are unknown. */
struct SecretData
{
    int first;
    double money;
    char city[10];
};

int main(void)
{
    FILE* input;
    struct SecretData secretdata;

    input = fopen("ml.dat", "rb");
    fread(&secretdata, sizeof(secretdata), 1, input);  /* read the first record */
    fclose(input);
    return 0;
}
(The file would have been written with fwrite.) Basically you need to figure out the innards of the SecretData structure to be able to read the file.
There likely wasn't a separate utility used to make the file; dumping data to a file and reading it back is relatively easy in most languages.
I'm not sure if it was asked, but I couldn't find anything like this.
My program uses a simple .txt file for logging purposes. It just creates/opens the file and appends lines.
After some time, I started to log quite a lot of activity, so the file became too large and hardly readable. I know that it's not the right way to do this, but I simply need to have a readable file.
So I thought maybe there's a simple file format for log files, and software to view it, or perhaps you have other suggestions on this question?
Thanks for help in advance.
UPDATE:
It's an Access 97 application. I'm logging activities like form loading and SELECT/INSERT/UPDATE statements to MS SQL Server...
The log file isn't really big, I just write the duration of operations, so I need a simple way to do this.
The Log file is created on a user's machine. It's used for monitoring purposes logging some activities' durations.
Is there a way of viewing that kind of simple Log file highlighted with an existing tool?
Simply, I'd like to:
1) Write something like "'CurrentTime' 'ActivityName' 'Duration in milliseconds'" (maybe some additional information like the query string) into a file.
2) Open it with a tool and view it highlighted or somehow more readable.
ANSWER: I've found a nice tool that does all I wanted - LogExpert. Check my answer.
The 3 W's: when, what, and where.
For viewing, try something like multitail ("tail on steroids"): http://www.vanheusden.com/multitail/
Or, for pure MS Windows, try mtail: http://ophilipp.free.fr/op_tail.htm
And to keep your files readable, you might want to start a new file once the size of the current log file exceeds a certain limit. Example:
activity0.log (1 mb)
activity1.log (1 mb)
activity2.log (1 mb)
activity3.log (1 mb)
activity4.log (205 bytes)
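A minimal C sketch of that size-based rollover (the names and the 1 MB limit mirror the example above; everything else is illustrative):

#include <stdio.h>

#define MAX_LOG_SIZE (1024 * 1024)   /* 1 MB per file, as in the example */

/* Append one line to activityN.log, moving on to activity(N+1).log
   once the current file reaches the size limit. */
void log_line(const char *line)
{
    static int n = 0;
    char name[64];
    FILE *f;

    for (;;) {
        sprintf(name, "activity%d.log", n);
        f = fopen(name, "a");
        if (f == NULL)
            return;
        fseek(f, 0, SEEK_END);
        if (ftell(f) < MAX_LOG_SIZE)
            break;               /* current file still has room */
        fclose(f);
        n++;                     /* roll over to the next file */
    }
    fprintf(f, "%s\n", line);
    fclose(f);
}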
A fairly standard way to deal with logging from an application into a plain text file is to:
split the logs into different program functional areas.
rotate the logs on a daily/weekly basis (i.e. split the log on a size or date basis)
so the current log is "mylog.log" or whatever, and yesterday's was "mylog.log.1" or "mylog.ddmmyyyy.log"
This keeps the size of the active log manageable. And then you can just have expiry rules so that old logs get thrown away on a regular basis.
In addition it would be a good idea to have different log levels for your application (info/warning/error/fatal) so that you're not logging more than is necessary.
First, check you're only logging things that are useful.
If it's all useful, make sure it is easily parsable by tools such as grep; that way you can find the info you want. Make sure the type of log entry and the date/time conform to a consistent layout.
Build yourself a few scripts to extract the information for you.
Alternatively, use separate log files for different types of entries.
Basically, you'd do best to split logs according to severity. You'll rarely need to read all the logs for the whole system. For example, Apache lets you configure an error log and an access log separately, and it's pretty obvious what info each one holds.
If you're on a Linux system, grep is your best tool for searching through logs for specific entries.
Look at popular logfiles like /var/log/syslog on Unix to get ideas:
MMM DD HH:MM:SS hostname process[pid]: message
Example out of my syslog:
May 11 12:58:39 raphaelm anacron[1086]: Normal exit (1 job run)
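For reference, producing a line in that layout from C is mostly a strftime call; a small sketch (the hostname, process name, and pid here are just parameters):

#include <stdio.h>
#include <time.h>

/* Print one log line in the syslog-like layout shown above. */
void log_msg(const char *host, const char *proc, int pid, const char *msg)
{
    char stamp[32];
    time_t now = time(NULL);

    /* "%d" zero-pads the day of month; real syslog space-pads it */
    strftime(stamp, sizeof(stamp), "%b %d %H:%M:%S", localtime(&now));
    printf("%s %s %s[%d]: %s\n", stamp, host, proc, pid, msg);
}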
But to give you the perfect answer we'd need more information about what you are logging, how much and how you want to read the logs.
If only the size of the log file is the problem, I recommend using logrotate or something similar. logrotate watches log files and, depending on how you configure it, after a given time or when the log file exceeds a given size, it moves the log file to an archive directory and optionally compresses it. Then the original log file is truncated. For example, you could configure it to archive the log file every 24 hours or whenever the file size exceeds 500 KB.
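For reference, a logrotate rule matching that description could look roughly like this (the path and limits are examples, not from the question):

/var/log/myapp.log {
    # rotate once the file exceeds 500 KB
    size 500k
    # keep 7 archived copies
    rotate 7
    # gzip the archives
    compress
}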
If this is a program, you might investigate the Apache logging libraries (http://logging.apache.org/). They'll give you a decent logging format out of the box. They're also customizable, so you can simplify your parsing job.
If this is a script, see some of the other answers.
LogExpert
I found it here. The filter is better than in mtail. There's an option to highlight lines just by adding a string, and the app is nice and readable. You can customize columns as you like.
Just had a general question about how to approach a certain problem I'm facing. I'm fairly new to C, so bear with me here.

Say I have a folder with 1000+ text files. The files are not named in any kind of numbered order, but they are alphabetical. For my problem, I have files of stock data, each named after the company's respective ticker. I want to write a program that will open each file, read the data, find the historical low, compare it to the current price, calculate the percent change, and then print it. Searching and calculating are not a problem; the problem is getting the program to go through and open each file.

The only way I can see to attack this is to create a text file containing all of the ticker symbols, have the program read that into an array, and then run a loop that opens the first filename in the array, performs the calculations, prints the output, closes the file, then loops back around to the second element (the next ticker symbol) in the array. This would be fairly simple to set up (I think), but I'd really like to avoid typing out over a thousand file names into a text file. Is there a better way to approach this?

Not really asking for code (unless there is some amazing function in C that will do this for me ;) ), just some advice from more experienced C programmers.
Thanks :)
Edit: This is on Linux, sorry I forgot to mention that!
Under Linux/Unix (BSD, OS X, POSIX, etc.) you can use opendir/readdir to go through the directory structure. There's no need to generate static files that need to be updated when the file system already has the information you want. If you only want a subset of stocks at a given time, using glob would be quicker; there is also scandir.
I don't know what the equivalent Win32 (Windows / Platform SDK) functions are called if you are developing using Visual C++ as your C compiler. Searching the MSDN Library should help you.
Assuming you're running on Linux...
ls /path/to/text/files > names.txt
is exactly what you want.
opendir() on Linux:
http://linux.die.net/man/3/opendir
Example:
http://snippets.dzone.com/posts/show/5734
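A minimal sketch of that opendir()/readdir() loop (the directory path is just an example):

#include <stdio.h>
#include <dirent.h>

int main(void)
{
    DIR *dir = opendir("/path/to/text/files");  /* example path */
    struct dirent *entry;

    if (dir == NULL) {
        perror("opendir");
        return 1;
    }
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_name[0] == '.')
            continue;                  /* skip ".", ".." and hidden files */
        printf("%s\n", entry->d_name); /* open and process the file here */
    }
    closedir(dir);
    return 0;
}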
In pseudocode it would look like this; I can't commit to actual code as I'm not 100% sure this is the correct approach...
for each directory entry
    scan the filename
    extract the ticker name from the filename
    open the file
    read the data
    create a record consisting of the filename, data, ...
    close the file
    add the record to a list/array
sort the list/array into alphabetical order based on
    the ticker name in the filename
You could vary it slightly if you wish: scan the filenames in the directory entries and sort them first, building a record of the filenames, then go back to the start of the list/array and open each one individually, reading the data and putting it into its record.
Hope this helps,
best regards,
Tom.
There are no functions in standard C that have any notion of a "directory". You will need to use some kind of platform-specific function to do this. For some examples, take a look at this post from Cprogramming.com.
Personally, I prefer using the opendir()/readdir() approach as shown in the second example. It works natively under Linux and also on Windows if you are using Cygwin.
Approach 1) I would just have a specific directory containing ONLY the files with the ticker data and nothing else. I would then use the C readdir API to list all files in the directory and iterate over each one, performing the data processing you require. Which ticker a file applies to is determined only by its filename.
Pros: Easy to code
Cons: It really depends where the files are stored and where they come from.
Approach 2) Change the file format so that ticker files start with a magic code identifying them as ticker files, followed by a string containing the name. As before, use readdir to iterate through all files in the folder; open each file, check that the magic number is set, read the ticker name from the file, and process the data as before. See the sketch after the pros/cons below.
Pros: More flexible than before. Filename needn't reflect name of ticker
Cons: Harder to code, file format may be fixed.
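A sketch of how approach 2's header check might look in C - the magic code, its length, and the fixed-width name field are all invented for illustration:

#include <stdio.h>
#include <string.h>

#define TICKER_MAGIC "TICK"   /* hypothetical 4-byte magic code */

/* Returns 1 and fills name (9 bytes) if the file starts with the magic. */
int read_ticker_header(const char *path, char name[9])
{
    char magic[4];
    FILE *f = fopen(path, "rb");

    if (f == NULL)
        return 0;
    if (fread(magic, 1, 4, f) != 4 || memcmp(magic, TICKER_MAGIC, 4) != 0) {
        fclose(f);
        return 0;             /* not one of our ticker files */
    }
    memset(name, 0, 9);
    fread(name, 1, 8, f);     /* hypothetical fixed 8-byte name field */
    fclose(f);
    return 1;
}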
"...but I'd really like to avoid typing out over a thousand file names into a text file. Is there a better way to approach this?"
I solved the exact same problem a while back, albeit for personal use :)
What I did was use the OS shell commands to generate a list of those files, redirect the output to a text file, and have my program run through them.
On UNIX, there's the handy glob function:
#include <glob.h>
#include <stdio.h>
#include <string.h>

glob_t results;
size_t i;

memset(&results, 0, sizeof(results));
glob("*.txt", 0, NULL, &results);      /* collect all matching names */
for (i = 0; i < results.gl_pathc; i++)
    printf("%s\n", results.gl_pathv[i]);
globfree(&results);
On Linux or a related system, you could use the fts library. It's designed for traversing file hierarchies (see man fts).
Or use even something as simple as readdir.
If on Windows, you can use the Directory Management APIs - more specifically, the FindFirstFile function, used with wildcards, in conjunction with FindNextFile.
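A minimal sketch of that Win32 loop (using the ANSI variants so it compiles regardless of the UNICODE setting):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    WIN32_FIND_DATAA fd;
    HANDLE h = FindFirstFileA("*.txt", &fd);   /* wildcard pattern */

    if (h == INVALID_HANDLE_VALUE)
        return 1;
    do {
        printf("%s\n", fd.cFileName);          /* process each file here */
    } while (FindNextFileA(h, &fd));
    FindClose(h);
    return 0;
}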
I am building a basic POS app for my cousin's pharmacy store so that he can dump the software he is currently using and save on the license cost. All the medicine names, which he has painfully entered into that software, are stored in a file with a .d01 extension.
What I want is a way to read the contents of the .d01 file programmatically, so that I can import the names of the medicines into my app.
The software my cousin uses is built in FoxPro (because I see a lot of .cdx, .idx, and .dbf files), and the file I want to import has the .d01 extension. When I open the file in Notepad it looks something like this:
http://img192.imageshack.us/img192/5528/foxpro.jpg
So I assume it's some kind of database table or something. Can anyone please help me read this file, as I am not at all familiar with FoxPro?
Thanks a lot in advance to all those who take the time to reply.
Hey guys, thank you very much for replying so promptly. I tried the solution suggested by Otávio and it worked; I will now write a small utility to read the .dbf.
It has a good chance of being just a regular .dbf file. Copy it somewhere safe, change the extension to .dbf, and see if you can open it from FoxPro.
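If you'd like to sanity-check the file from code before writing the import utility, the standard dBASE header layout is well documented; this sketch reads the record count and sizes from the first 32 bytes (assuming the .d01 really is a dBASE-style table; the filename is a placeholder):

#include <stdio.h>

int main(void)
{
    unsigned char h[32];
    FILE *f = fopen("file.d01", "rb");   /* your renamed copy */

    if (f == NULL || fread(h, 1, 32, f) != 32)
        return 1;

    /* dBASE header fields are little-endian */
    unsigned long records = h[4] | (h[5] << 8)
                          | ((unsigned long)h[6] << 16)
                          | ((unsigned long)h[7] << 24);
    unsigned int hdr_size = h[8]  | (h[9]  << 8);
    unsigned int rec_size = h[10] | (h[11] << 8);

    printf("type 0x%02X, %lu records, header %u bytes, record %u bytes\n",
           h[0], records, hdr_size, rec_size);
    fclose(f);
    return 0;
}

Sane-looking values here (record count times record size roughly matching the file size) are a strong hint that it is indeed a .dbf in disguise.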
Although it may have .cdx files alongside it, the actual paste of the file does not appear to show the visually recognizable header format of a VFP table, even one that is part of a database container. The characters around each column name don't look right. It may be from another language that also utilized "compound indexes"; I even saw an article about Sybase's iAnywhere too. In a worst-case scenario, if it is determined to have a fixed length per row and no dynamic column sizes, you might take the file, strip off what appears to be the header, leave just the data, and stream-read it in based on whatever constant row length you determine. Yeah, brute force, but it's an option. Again, it doesn't LOOK like a VFP table.
BTW, what is the name of the software he is using? I'd look into that for any other indication of its source.
It looks sort of like a DBF file - maybe Clipper or something.