How to open and edit a .CNDF satellite channel file

After hours on Google I can't find any good reference info on the .CNDF file format for satellite channel lists stored on satellite boxes.
I do know that there are .NDF (uncompressed) and .CNDF (compressed) versions of the file format.
I basically want to open and decompress the .CNDF file in my C# app, play around with it, then compress and save it out again.
Any pointers to a file-format data sheet or any decent examples would be appreciated, especially regarding the compression method used. Thanks.
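In the absence of a published spec, one hedged first step is to probe the file with the common stream codecs from C# and see whether either yields plausible output. This is only a guess at the compression; nothing here is documented .CNDF behavior, and the file name is a placeholder:

using System;
using System.IO;
using System.IO.Compression;

class CndfProbe
{
    // Try one codec; return null if the data clearly isn't in that format.
    static byte[] TryDecompress(byte[] raw, Func<Stream, Stream> wrap)
    {
        try
        {
            using (var input = new MemoryStream(raw))
            using (var codec = wrap(input))
            using (var output = new MemoryStream())
            {
                codec.CopyTo(output);
                return output.ToArray();
            }
        }
        catch (InvalidDataException)
        {
            return null;
        }
    }

    static void Main()
    {
        byte[] raw = File.ReadAllBytes("channels.cndf"); // placeholder name
        byte[] plain =
            TryDecompress(raw, s => new GZipStream(s, CompressionMode.Decompress)) ??
            TryDecompress(raw, s => new DeflateStream(s, CompressionMode.Decompress));
        Console.WriteLine(plain == null
            ? "Neither gzip nor raw deflate; the format uses something else."
            : "Decompressed " + plain.Length + " bytes.");
    }
}

If neither codec matches, comparing hex dumps of an .NDF/.CNDF pair of the same channel list is probably the next step toward reverse-engineering the scheme.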

Related

How would I store different types of data in one file

I need to store data in a file in this format:
word, audio, jpeg
How would I store all of that in one file? Is it even possible, or would I need to store links to other data files in place of the audio and jpeg? Would I need a custom file format?
1. Your own filetype
As mentioned by Ken White, you would need to create your own custom file format for this sort of thing, which would then mean writing your own parser for it. This could be achieved in almost any language you wanted, but since you are planning on using the Word format, C# may be best for you. However, this technique can be quite complicated and take a relatively large amount of time to thoroughly test your compressor/decompressor, but it may be best depending on your needs.
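To make the idea concrete, here is a minimal sketch of such a container. The layout (an entry count followed by length-prefixed name/data pairs) is entirely made up for this example, not an established format:

using System;
using System.IO;

class SimpleContainer
{
    // Layout: [int32 count] then per entry [length-prefixed name][int32 size][bytes].
    public static void Pack(string outputPath, params string[] files)
    {
        using (var writer = new BinaryWriter(File.Create(outputPath)))
        {
            writer.Write(files.Length);
            foreach (var path in files)
            {
                byte[] data = File.ReadAllBytes(path);
                writer.Write(Path.GetFileName(path)); // BinaryWriter length-prefixes strings
                writer.Write(data.Length);
                writer.Write(data);
            }
        }
    }

    public static void Unpack(string inputPath, string outputDir)
    {
        using (var reader = new BinaryReader(File.OpenRead(inputPath)))
        {
            int count = reader.ReadInt32();
            for (int i = 0; i < count; i++)
            {
                string name = reader.ReadString();
                byte[] data = reader.ReadBytes(reader.ReadInt32());
                File.WriteAllBytes(Path.Combine(outputDir, name), data);
            }
        }
    }
}

For example, SimpleContainer.Pack("bundle.bin", "essay.docx", "track.mp3", "photo.jpg") followed later by SimpleContainer.Unpack("bundle.bin", "extracted") would round-trip the three files.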
2. Command line utilities
Another way to go about this would be to use a batch or bash script to combine all of the files into one file, and then split it back apart at the other end. For example, the steps could involve:
Combine the files using the Windows copy or Linux cat command on the command line
Create a metadata file of your own that says how many files are in this custom file and how many bytes each one takes up (this could be a short XML or JSON file, for example...)
Use the Linux split command or install a Windows command-line file-splitter program to split the file back into whatever components made it up
This way you only have to create a really small file type, and let the OS utilities handle the combining and splitting for you.
Example on Windows:
Copy all of the files in your current directory into one output file called 'file.custom'
copy /b * file.custom
Generate your custom metadata file describing the contents, e.g. get each file's size on disk (FileInfo.Length in C#) and record it next to the file name; a short sketch of generating such a JSON file follows these steps.
Use a Windows or Linux command-line tool to split each file back out to the exact length (and export it back to the exact name) specified in the JSON metadata file.
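As a sketch of the metadata step, something like this could record each file's name and size. The metadata.json name and the exact JSON layout are assumptions for the example:

using System;
using System.IO;
using System.Linq;

class MetadataWriter
{
    static void Main()
    {
        // List every file in the current directory except our own outputs,
        // and record name plus size in bytes as a JSON array.
        var entries = new DirectoryInfo(".").GetFiles()
            .Where(f => f.Name != "file.custom" && f.Name != "metadata.json")
            .Select(f => "  { \"name\": \"" + f.Name + "\", \"bytes\": " + f.Length + " }");
        File.WriteAllText("metadata.json", "[\n" + string.Join(",\n", entries) + "\n]");
    }
}

This yields something like [ { "name": "track.mp3", "bytes": 5242880 }, ... ], which a splitter can then walk to cut file.custom back into its parts.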
3. ZIP files
You could always store all of the files in a compressed ZIP file, and then just use a ZIP compressor/expander as and when you like to retrieve any number of the file formats stored within (a short C# sketch follows the list below).
I found a few examples of:
Combining multiple files into one ZIP file in C#/.NET
Unzipping ZIP files in C#
Zipping & unzipping with only Windows built-in utilities
Zipping & unzipping on the Linux command line
A good zipping/unzipping library in Java
Zipping/unzipping in Python
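For the C# case specifically, the framework's built-in ZipFile class covers both directions (.NET 4.5+, reference System.IO.Compression.FileSystem); the paths below are just examples:

using System.IO.Compression;

class ZipRoundTrip
{
    static void Main()
    {
        // Pack a whole directory into one archive, then unpack it elsewhere.
        ZipFile.CreateFromDirectory(@"C:\docs\input", @"C:\docs\bundle.zip");
        ZipFile.ExtractToDirectory(@"C:\docs\bundle.zip", @"C:\docs\output");
    }
}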

Batch Scripts - Alternative Output to Notepad?

Good Afternoon,
Basically, I have a batch script that does the following:
Pings multiple network machines
Writes the result to a txt file
Searches the created txt file for certain keywords, then outputs that to a separate txt file
Displays the 2nd txt file
It's all very nice and lovely: basically, if the machine pings fine then all it shows is the "Pinging xxxxx [10.xxx.xxx.xxx] with 32 bytes of data:" and "Packets: Sent = 4, Received = 4, Lost = 0 (0% loss)" lines.
If it detects anything other than a good ping, it will report the error message in addition to the 2 lines above.
Which leads me to the question...
Is there an alternative to Notepad for displaying the log file? The information it displays is exactly what I need, but because it's pinging 20+ assets it looks extremely cluttered and ugly. Are there any decent alternatives for displaying the log file?
Apologies if this is a stupidly basic question; it's after lunch, so my brain is only functioning at 0.01% of its usual power...
I have had good experience with writing a .tex file and compiling it with LaTeX (or one of its derivatives) into a .pdf file. I also use a bash script to write an .html document. An .xml file is a nice way to provide the data, and an associated .xsl style sheet will add the appropriate visual representation.
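As a rough illustration of the HTML route, a small program can wrap each log line in a table row so the result renders cleanly in any browser. The file names here are placeholders, and a batch or bash script could just as well emit the HTML directly:

using System.IO;
using System.Linq;
using System.Net;

class LogToHtml
{
    static void Main()
    {
        // HTML-encode each line so IP addresses and brackets display literally.
        var rows = File.ReadLines("ping_summary.txt")
            .Select(line => "<tr><td>" + WebUtility.HtmlEncode(line) + "</td></tr>");
        File.WriteAllText("ping_summary.html",
            "<html><body><table border=\"1\">" + string.Join("\n", rows) + "</table></body></html>");
    }
}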

Detecting the database a .DAT file belongs to

I have a set of .DAT files present alongside a set of .IDX files with the same names.
The goal is to be able to open these files and read their contents, parsing them into a new format. The problem: I have no idea what database the data is being stored in! The files contain no headers or clues, they are binary, and the source from which I received them has no idea as to the storage mechanism.
So the question is: what are some common databases that store their data in .DAT files and their indexes in .IDX files with the same name? Is there an application I can use on Linux or Windows that can detect the database?
EDIT :-
File names:
price.dat
price.idx
Here is a hex dump of the beginning of the .DAT file:
030D04806420500FFE3E0500002078581001C000738054E0C0099804138100402550080442090082403C101F7406010080C0A010201002010C006FC0246C0403FE00B041C051F0091BFE042F812FE054F8177E066F81BFE078F8207E08AF824FE09CF8297E0AEF82DFE0C0F8327E0D2F836FE0E4F83B7E0F6F83FE5FEFF47C06608480FA91F003C0213101F1BFDFE804220100F500D2A00388430801E04028D4390D128B46804024010A067269FCA546003C0844060E11F084B9E1377850
Here is a hex dump of the beginning of the .IDX file:
030D04805820100FFD7E0000397FEB60050410007300246A3060068220009BE0401030088B3903F740E010C80402410281402030094004C708004DC058880FFC052F015EBFE042F812FE054F8177E066F81BFE078F8207E08AF824FE09CF8297E0AEF82DFE0C0F8327E0D2F836FE0E4F83B7E0F6F83FFE108F8447E11AF848FE12CF84D7E13EF851FE150F8567E162F85AFE174F85F7E186F863FE198F8687E1AAF86CFE1BCF8717E1CEF875FE1E0F87A7E1F2F87EF5FEFF005E30901714
Both files start with the same four bytes, 030D0480; I wonder if this is a good lead?
I did a quick search on Google, but it didn't return anything...
END EDIT :-
Any other ideas?
Thanks much in advance!
There is a FairCom ODBC driver called 'ctreeODBC_RO.exe' which should be capable of reading them; FairCom's c-tree databases typically store data in .dat files with matching .idx index files.
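Assuming that driver is installed and a DSN has been set up through it, reading the data from C# could look roughly like this. The DSN name and the table name are guesses (the table name is inferred from price.dat), not verified values:

using System;
using System.Data.Odbc;

class CtreeReader
{
    static void Main()
    {
        using (var conn = new OdbcConnection("DSN=ctreePrice;")) // assumed DSN name
        {
            conn.Open();
            using (var cmd = new OdbcCommand("SELECT * FROM price", conn)) // table guessed from price.dat
            using (var reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader[0]); // dump the first column as a smoke test
            }
        }
    }
}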

Which file types are worth compressing (zipping) for remote storage? For which of them is the ratio of compressed size to original size << 1?

I am storing documents in SQL Server in varbinary(max) fields; I use FILESTREAM optionally when a user has:
(DB_Size + Docs_Size) ~> 0.8 * ExpressEdition_Max_DB_Size
I am currently zipping all the files; this is done because the document read/write code was developed 10 years ago, when storage was more expensive than it is now.
Many files when zipped are almost as big as the original (a zipped PDF is about 95% of its original size). And anyway, unzipping has some overhead, which doubles when I also need to check in/update the file, because then I have to zip it again.
So I was thinking of giving users the option to choose whether each file type will be zipped or not, by providing some meaningful default values. From my experience I would impose the following rules:
1) zip by default: txt, bmp, rtf
2) do not zip by default: jpg, jpeg, Microsoft Office files, Open Office files, png, tif, tiff
Could you suggest other file types chosen among the most common or comment on the ones I listed here?
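One way to pick these defaults empirically rather than by extension alone is to compress a sample of each type in memory and inspect the ratio. A minimal sketch using GZipStream; the 0.9 cut-off is an arbitrary example threshold, not a recommendation:

using System;
using System.IO;
using System.IO.Compression;

class CompressionProbe
{
    static void Main(string[] args)
    {
        byte[] raw = File.ReadAllBytes(args[0]);
        using (var output = new MemoryStream())
        {
            // leaveOpen: true so we can read output.Length after the codec is disposed.
            using (var gzip = new GZipStream(output, CompressionMode.Compress, true))
                gzip.Write(raw, 0, raw.Length);
            double ratio = (double)output.Length / raw.Length;
            Console.WriteLine("{0}: compressed/original = {1:F2} -> {2}",
                args[0], ratio, ratio < 0.9 ? "zip by default" : "store as-is");
        }
    }
}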
.doc and .mdb files actually tend to compress rather well, if I remember correctly. The Office 2007 document formats (.docx, .xlsx, .pptx), though, are zip files already...so compressing them is pretty much useless.
Don't forget HTML and XML files. Zip by default.
I commend you on being able to recognize which file types are and aren't already compressed. You probably already understand this, but I'll rant here:
Do not double up compression methods! Each compression method adds its own header, adding to the file size, and since the data has already had its statistical redundancies eliminated as well as one method could manage, it's probably not going to compress further via another method. Take this set of files for example:
46,494,380 level0.wav
43,209,258 level1.wav.zip
43,333,266 level2.wav.zip.rar
43,339,894 level3.wav.zip.rar.gz
43,533,989 level4.wav.zip.rar.gz.bz2
All of these files contain the same data.
The first compression method worked well to eliminate redundancies, but each successive compression method just added to the file size, not to mention the headache of unpacking all the layers later.
The best method of compression is usually the first one applied.
28,259,406 level1.wav.flac <~ using a compression method meant for the file.

.dbc --> .csv

I have a little utility that converts .dbc files to .csv files. Trouble is, somewhere in the conversion some data is lost/destroyed/whatever. I input a.dbc into the converter and it produces a.csv. I delete a.dbc, then run a.csv back through the converter, and I come back with a "slightly" different .dbc file than the one I started with.
Does anyone know a better way of converting these files, without loss of information?
I opened both files in HexCMP (which compares two hex files and shows you the differences), and the differences are scattered randomly throughout the file.
Sounds like this is nothing more than a buggy utility.
If you convert the same .dbc file to a .csv file twice in a row, do you get the exact same .csv file? If you run the .csv through twice do you get the same .dbc file out both times? That would at least tell you which side of the conversion the bugs are in.
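A quick way to run that determinism test is to hash the outputs of two identical runs and compare; the run1/run2 paths below are placeholders:

using System;
using System.IO;
using System.Security.Cryptography;

class DeterminismCheck
{
    static string HashOf(string path)
    {
        // MD5 is fine here; we only need equality, not security.
        using (var md5 = MD5.Create())
        using (var stream = File.OpenRead(path))
            return BitConverter.ToString(md5.ComputeHash(stream));
    }

    static void Main()
    {
        bool same = HashOf(@"run1\a.csv") == HashOf(@"run2\a.csv");
        Console.WriteLine(same
            ? "Converter is deterministic on this input."
            : "Outputs differ between runs: the bug is on this side of the conversion.");
    }
}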
Do you have access to FoxPro to export the file as a CSV directly from FoxPro without using the utility? That would allow you to compare the CSV file created from FoxPro versus your utility to try and narrow down where the problem is.
