How to read an Excel file created by Microsoft Excel with a CH376 IC?

In my embedded system I am using a CH376 IC (PDF link) for file handling. I am able to detect a flash disk, but not able to read an Excel file created by Microsoft Excel. The Excel file is created on the PC and copied to the flash disk.
I want to create a database in an Excel file on the PC and then upload it to my embedded system, so I need to read the file that Excel creates.
Please help me to read the file.

The .xls and .xlsx file formats are both extremely complex. Parsing them is unlikely to be feasible in an embedded environment. (In particular, .xlsx is a PKZIP archive containing XML data -- you will need a minimum of 32 KB of SRAM just to decompress the file containing the cell data, and even more to parse it.)
Use a different file format. Consider using .csv, for instance -- it's just a text file, with one row of data on each line, so it's pretty straightforward to work with.
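For a sense of how simple that is, here is a minimal sketch of the parsing logic in Python (the file name and column layout are assumptions; on the microcontroller the same byte-level loop is easy to write in C against the CH376 read calls):

    # Minimal sketch: parse a CSV exported from Excel via "Save As > CSV".
    # Assumes one record per line and no quoted fields containing commas.
    records = []
    with open("database.csv", "r", encoding="ascii") as f:
        for line in f:
            line = line.strip()        # drop the trailing \r\n written by Excel
            if not line:
                continue               # skip blank lines
            records.append(line.split(","))

    for row in records:
        print(row)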

Related

Manipulation of Excel files in TwinCAT

Is there an available library for reading/writing of Excel files, particularly XLSX or XLSM for TwinCAT 3? How about TDMS files? Obviously I'd prefer something open source and free, if available.
Thank you
Using TwinCAT you can write CSV, JSON, and XML files.
Then, after writing the files, you can use Python to convert the data to Excel files (see the sketch after the links below).
There is a Python book called "Automate the Boring Stuff with Python, 2nd Edition: Practical Programming for Total Beginners" with examples of how to read, write, and modify Excel and Word files.
But remember that a CSV file can be opened in Excel using Import, or just with Ctrl+C/Ctrl+V. The delimiter in TwinCAT is set in the global variables; read about it in the Beckhoff Information System (by the way, a Google search works better and faster than the search on Beckhoff's own website).
Information about the CSV function blocks is on these pages:
https://infosys.beckhoff.com/content/1033/tcplclib_tc2_utilities/34977931.html?id=7903313200164417832
https://infosys.beckhoff.com/content/1033/tcplclib_tc2_utilities/34979467.html?id=1113952616781398655
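As a minimal sketch of the CSV-to-Excel step (file names and the semicolon delimiter are assumptions; uses the openpyxl package, which the book above also covers):

    import csv
    import openpyxl  # pip install openpyxl

    # Copy each CSV row written by TwinCAT into a worksheet.
    wb = openpyxl.Workbook()
    ws = wb.active
    with open("machine_log.csv", newline="") as f:
        for row in csv.reader(f, delimiter=";"):  # match the TwinCAT delimiter
            ws.append(row)
    wb.save("machine_log.xlsx")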

Why does the number of bytes loaded into a Snowflake stage differ from the file size on my local system?

When I load files into a Snowflake stage, I see a difference between the number of bytes loaded and the size of the files on my local system. Does anyone know the reason for this, and how can it be resolved?
The file size on my local system is 16622146 bytes, but after loading into the stage it shows as 16622160 bytes. I have checked with both .csv and .txt file types. (I know the .txt type is not supported in Snowflake.)
I compressed the file and loaded it into the Snowflake stage with SnowSQL, using the PUT command.
When loading files from the local file system, Snowflake can automatically compress them. Please try that option instead of compressing the file yourself.
Refer to this section of the documentation:
https://docs.snowflake.com/en/user-guide/data-load-prepare.html#data-file-compression
This will help you avoid any compression-related data corruption issues.
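For example, letting the PUT command handle the compression (the stage and file names here are illustrative):

    put file:///tmp/data/mydata.csv @my_stage auto_compress=true;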
Thanks, Balaji

HSQL data file sparse?

I have a situation where my HSQL database has stopped working, citing that it has exceeded its size limit:
org.hsqldb.HsqlException: error in script file line: 41 file input/output error
org.hsqldb.HsqlException: wrong database file version: requires large database support opening file - file hsql/mydb.data
When I checked the size of the data file using "du -sh", it was only around 730 MB, but "ls -alh" reported a shocking 16 GB, which explains why HSQL probably treats it as a 'large database'. So the data file seems to be a sparse file.
But nobody changed the data file into a sparse file. Does HSQL maintain the data file as a sparse file, or has the file system marked it as sparse?
How do I work around this and get my HSQL database back without corrupting the data in it? I was thinking of using the hsqldb.cache_file_scale property, but that would only mean hitting the problem again when the file grows to 64 GB.
In case it matters, I am running it on a Debian 3.2 box with Java 7u25.
You need to perform a CHECKPOINT DEFRAG from time to time to compact the file.
When a lot of data is deleted, the space in the .data file is lost. The above command rewrites the current .data file to a much smaller new file.
If the file has already grown very large, or if you need a huge database, you can connect with the hsqldb.large_data=true property, which enables very large databases.
http://hsqldb.org/doc/2.0/guide/dbproperties-chapt.html
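For example, the statement to compact the file, followed by a connection URL that enables large-data support (the database path is illustrative; property syntax per the guide above):

    CHECKPOINT DEFRAG;
    jdbc:hsqldb:file:hsql/mydb;hsqldb.large_data=true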

File extension .DB - What kind of database is it exactly?

I have a database file with a .DB extension. From my googling it looks like SQLite, but when I try to connect to it using the SQLite and SQLite3 drivers I get the error "File is encrypted or not a database".
So I don't know whether the file is encrypted or simply not an SQLite database. Are there other possibilities for what the .DB extension could be? How do I find out whether the file is encrypted?
I tried opening it in a text editor and it is mostly a mess of characters, with occasional visible words. I have uploaded the file here for a closer look: http://cl.ly/3k0E01373r3v182a3p1o
Thank you for any hints and ideas on what to do and how to work with this file.
Marco Pontello's TrID is a great way to determine the type of any file.
TrID is simple to use. Just run TrID and point it to the file to be analyzed. The file will be read and compared with the definitions in the database. Results are presented in order of highest probability.
Just download the executable and the latest definitions file into the same directory and then run TrID:
trid.exe "path/to/file.xyz"
It will output a list of possible file types for the file, each with a confidence rating, and can identify SQLite database files among many other formats.
There's also a GUI version called TrIDNet.
If you're on a Unix-like platform (Mac OS X, Linux, etc), you could try running file myfile.db to see if that can figure out what type of file it is. The file utility will inspect the beginning of the file, looking for any clues like magic numbers, headers, and so on to determine the type of the file.
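For example (the output shown is typical for an SQLite file; the exact wording varies with the file version and the utility's magic database):

    $ file mydata.db
    mydata.db: SQLite 3.x database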
Look at the first 30 bytes of the file (open it in Notepad, Notepad++ or another simple text viewer). There's usually some kind of tag or extension name in there.
Both SQLite 2 and SQLite 3 have a very clear message: "SQLite format 3" for SQLite 3 (obviously) and "This file contains an SQLite 2.1 database" for SQLite 2.
Note that encrypted SQLite databases don't have a header like that since the entire file is encrypted. See siyw's comment below.
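A quick way to check the header programmatically (a minimal sketch; the file name is an assumption):

    # Read the first 48 bytes and look for the SQLite magic strings.
    with open("mystery.db", "rb") as f:
        header = f.read(48)

    if header.startswith(b"SQLite format 3"):
        print("SQLite 3 database")
    elif b"SQLite 2.1 database" in header:
        print("SQLite 2 database")
    else:
        print("No SQLite header (encrypted, or not SQLite):", header[:16])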
On a Unix-like system (or Cygwin under Windows), the strings utility will search a file for printable strings and print them to stdout. It might help you narrow the field.
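For example:

    strings myfile.db | less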
There are a lot of programs besides database programs that use a "db" extension, including
ArcView Object Database File (ESRI)
MultiEdit
Netscape
Palm
and so on. Google "file extensions" for some sites that catalog file extensions and the programs that use them.
There's no conclusive way to know from the file itself, because an encrypted SQLite database is encrypted in its entirety, including the header.
Further, it makes little practical difference to you, except for the error text you might show a user when prompting for a password.

Clearing a file after reading with SSIS

Is there any built-in way to read a file with SSIS and, after reading it, clear the file of all content?
Use a File System Task in the Control Flow to either delete or move the file. If you want an empty file, then you can recreate the file with another File System Task after you have deleted it.
My team generally relies on moving files to archive folders after we process a file. The archive folder is compressed, whereas the working folder is uncompressed. We set up a process with our data center IT to archive the files in those folders to tape on a regular schedule. This gives us full freedom to retrieve any raw files we have processed, while getting them off the SAN without requiring department resources.
What we do is create a template file (containing just the headers) and then copy it over the file name we use for processing.
