Rigorous definition for CSV file reading/writing - c

I have written my own CSV reader/writer in C to store records in a character column in an ODBC database. Unfortunately I have discovered many edge cases that trip up my implementation, and I have come to the conclusion that my problem is that I have not rigorously defined the rules for CSV. I've read RFC 4180, but it seems incomplete and does not resolve ambiguities.
For example, should "" be considered an empty token or a double quote? Do quotes match outside-in or left to right? What do I do with an input string that has unmatched single quotes? The real mess begins when I have nested tokens, which doubles up the escaped quotation characters.
What I really need is a definitive CSV standard that I can implement in code. Every time I feel I have nailed every corner case, I find another one. I am sure this problem has been mulled over and solved many times by minds superior to mine; has anyone written a rigorous definition of CSV that I can implement in code? I realise C is not the ideal language here, but I don't have a choice about the compiler at this stage; nor can I use a third-party library (unless it compiles with C90). Boost is not an option as my compiler doesn't support C++. I have contemplated ditching CSV for XML, but it seems like overkill for storing a few tokens in a 256-character database record. Has anyone made a definitive CSV spec?

There is no standard (see Wikipedia's article, in particular http://en.wikipedia.org/wiki/Comma-separated_values#Lack_of_a_standard), so in order to use CSV, you need to follow the general principle of being conservative in what you generate and liberal in what you accept. In particular:
Do not use quotation marks for blank fields. Simply write an empty field (two adjacent delimiters, or a delimiter in the first/last position of the line).
Quote any field containing a quotation mark, comma, or newline.
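For illustration, a conservative writer for a single field might look like this in C (a minimal sketch; write_field is an assumed name):

#include <stdio.h>
#include <string.h>

/* Minimal sketch: write one CSV field, quoting only when the field
   contains a quote, comma, or newline, and doubling embedded quotes. */
static void write_field(const char *s, FILE *out)
{
    if (strpbrk(s, "\",\r\n") != NULL) {
        fputc('"', out);
        for (; *s != '\0'; s++) {
            if (*s == '"')
                fputc('"', out);    /* escape by doubling */
            fputc(*s, out);
        }
        fputc('"', out);
    } else {
        fputs(s, out);              /* blank or plain field: no quotes */
    }
}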

Find the most authoritative CSV library you trust and read the source. CSV is not so complicated that you won't be able to understand its rules from a comprehensive reading of a source implementation. I have been happy with Java's opencsv; Perl has Text::CSV, and so forth.

According to RFC 4180, fields should be parsed from left to right to correctly interpret a double quote. Depending on context, "" is an escaped double quote (when inside a quoted field), an empty string (when it is the entire field), or two literal double quotes (when inside an otherwise unquoted, non-empty field value).
For example, consider a file with 4 records (1 column):
"field""value" CRLF
"" CRLF
field""value CRLF
"field value" extra CRLF
"field""value" - should be read as field"value
"" - should be read as an empty string
field""value - should be read as field""value
"field value" extra - could be read as field value extra or you can reject it
Record 4 is really an invalid field so you can either accept it or reject it.
When you start reading a field, you need to check whether the first character is a double quote. If it is, the field value is quoted and you need to read until you find an unescaped closing double quote. In this case you can ignore newline and comma characters, since the field is quoted; it only ends when you encounter the closing double quote.
If the first character is not a double quote, then all double quotes in the field value should be treated as literal double quotes. In this case you reach the end of the field when you encounter a comma or a newline character.
Based on this, I'd recommend always quoting all fields when you write out records, and writing a proper parser for reading data back. This way you can store any data in your CSV files (even multiline text with embedded quotes) and your format will be clear. When reading a CSV file, I'd fail any file that cannot be correctly parsed; if this is a database, you can expect users not to mess with the records manually unless they know what they're doing.
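As an illustration, here is a minimal C sketch of such a field reader. It assumes LF-only line endings for brevity and rejects malformed quoted fields, as suggested:

#include <stdio.h>

/* Minimal sketch of a field reader following the rules above.
   Reads one field into buf (capacity cap) and returns the character
   that terminated it (',' or '\n' or EOF), or -2 for a malformed
   quoted field. */
static int read_field(FILE *in, char *buf, size_t cap)
{
    size_t n = 0;
    int c = fgetc(in);

    if (c == '"') {                     /* quoted field */
        for (;;) {
            c = fgetc(in);
            if (c == EOF)
                return -2;              /* unterminated quote */
            if (c == '"') {
                c = fgetc(in);
                if (c != '"')
                    break;              /* that was the closing quote */
                /* doubled quote: store a single '"' below */
            }
            if (n + 1 < cap)
                buf[n++] = (char)c;
        }
        if (c != ',' && c != '\n' && c != EOF)
            return -2;                  /* e.g. "field value" extra */
    } else {                            /* unquoted field */
        while (c != ',' && c != '\n' && c != EOF) {
            if (n + 1 < cap)
                buf[n++] = (char)c;     /* quotes here stay literal */
            c = fgetc(in);
        }
    }
    buf[n] = '\0';
    return c;
}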

Related

Splitting string in C by blank spaces, besides when said blank space is within a set of quotes

I'm writing a simple Lisp in C without any external dependencies (please do not link the BuildYourOwnLisp), and I'm following this guide as a basis to parse the Lisp. It describes two steps in tokenising an S-exp, those steps being:
Put spaces around every parenthesis
Split on white space
The first step is easy enough; I wrote a trivial function that replaces certain substrings with other substrings, but I'm having problems with the second step. In the article it only uses the string "Lisp" in its examples of S-exps; if I were to use strtok() to blindly split by whitespace, any string in an S-exp that had a space within it would become fragmented and interpreted incorrectly by my Lisp. Obviously, a language limited to single-word strings isn't very useful.
How would I write a function that splits a string by white space, besides when the text is in between two double quotes?
I've tried using regex, but from what I can see of the POSIX regex.h library and PCRE, just extracting the matches would be incredibly laborious in terms of the amount of auxiliary code I'd have to write, which would only serve to bloat my codebase. Besides, one of my goals with this project was to use only ANSI C, or, if need be, C99, solely for the sake of portability; fiddling with the POSIX library and the Win32 API would just fatten my code and make moving my Lisp around a nightmare.
When researching this problem I came across this StackOverflow answer; but the accepted answer only sends the tokenised string to stdout, which isn't useful for me; I'd ideally have the tokens in a char** so that I could then parse them into useful in-memory data structures.
As well as this, the accepted answer on the aforementioned SO question is restricted specifically to the asker's exact problem; ideally, I'd have a general-purpose function that would allow me to tokenise a string, except when a substring is between two of character x. This isn't a huge deal; it's just that I'd like my codebase to be clean and composable.
You have two delimiters: the space and double quotes.
You can use the strcspn function for that (cppreference has a worked example).
Iterate over the string and look for the delimiters (space and quotes); strcspn returns the length of the initial span containing none of the delimiter characters, i.e. the index of the first delimiter found. If a space was found, continue looking for both. If a double quote was found, the delimiter set changes from " \"" (space and quotes) to "\"" (double quotes only). When you hit the quote again, change the set back to " \"" (space and quotes). A sketch in C follows the example below.
Based on your comment:
Let's say you have a string like
This is an Example.
The output would be
This
is
an
Example.
If the string would look like
This "is an" Example.
The output would be
This
is an
Example.
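A minimal C90 sketch of this approach (tokenize is an assumed name; it splits the string in place and handles no escape sequences inside quotes):

#include <stdio.h>
#include <string.h>

/* Split s in place on spaces, except inside double quotes, switching
   delimiter sets with strcspn as described above. Stores up to max
   pointers in tokens and returns the count. */
static size_t tokenize(char *s, char **tokens, size_t max)
{
    size_t n = 0;
    while (*s != '\0' && n < max) {
        while (*s == ' ')               /* skip runs of spaces */
            s++;
        if (*s == '\0')
            break;
        if (*s == '"') {                /* quoted token: scan to '"' */
            tokens[n++] = ++s;
            s += strcspn(s, "\"");
        } else {                        /* plain token: scan to ' ' or '"' */
            tokens[n++] = s;
            s += strcspn(s, " \"");
        }
        if (*s != '\0')
            *s++ = '\0';                /* terminate token in place */
    }
    return n;
}

int main(void)
{
    char buf[] = "This \"is an\" Example.";
    char *tok[16];
    size_t i, n = tokenize(buf, tok, 16);
    for (i = 0; i < n; i++)
        printf("%s\n", tok[i]);
    return 0;
}

Running it prints This, is an, and Example. on separate lines, matching the example above.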

Parsing an iCalendar file in C

I am looking to parse iCalendar files using C. I have an existing structure set up and file reading already working, and I want to parse line by line into components.
For example I would need to parse something like the following:
UID:uid1@example.com
DTSTAMP:19970714T170000Z
ORGANIZER;CN=John Doe;SENT-BY="mailto:smith@example.com":mailto:john.doe@example.com
CATEGORIES:Project Report, XYZ, Weekly Meeting
DTSTART:19970714T170000Z
DTEND:19970715T035959Z
SUMMARY:Bastille Day Party
Here are some of the rules:
The first word on each line is the property name
The property name will be followed by a colon (:) or a semicolon (;)
If it is a colon then the property value runs from directly after the colon to the end of the line
A further layer of complexity is added here, as a comma-separated list of values is allowed, which would then be stored in an array. So the CATEGORIES line, for example, would have 3 elements in its value array
If a semicolon follows the property name, then there are optional parameters that follow
The optional parameter format is ParamName=ParamValue. Again a comma separated list is supported here.
There can be more than one optional parameter as seen on the ORGANIZER line. There would just be another semicolon followed by the next parameter and value.
And to throw in yet another wrench, quotations are allowed in the values. If something is in quotes for the value it would need to be treated as part of the value instead of being part of the syntax. So a semicolon in a quotation would not mean that there is another parameter it would be part of the value.
I was going about this using strchr() and strtok() and have got some basic elements from that; however, it is getting very messy and unorganized and does not seem to be the right way to do this.
How can I implement such a complex parser with the standard C libraries (or the POSIX regex library)? (not looking for whole solution, just starting point)
This answer is supposing that you want to roll your own parser using Standard C. In practice it is usually better to use an existing parser because they have already thought of and handled all the weird things that can come up.
My high level approach would be:
Read a line
Pass pointer to start of this line to a function parse_line:
Use strcspn on the pointer to identify the location of the first : or ; (aborting if no marker found)
Save the text so far as the property name
While the parsing pointer points to ;:
Call a function extract_name_value_pair passing address of your parsing pointer.
That function will extract and save the name and value, and update the pointer to point to the ; or : following the entry. Of course this function must handle quote marks in the value and the fact that there might be ; or : characters inside the quotes
(At this point the parsing pointer is always on :)
Pass the rest of the string to a function parse_csv which will look for comma-separated values (again, being aware of quote marks) and store the results it finds in the right place.
The functions parse_csv and extract_name_value_pair should in fact be developed and tested first. Make a test suite and check that they work properly. Then write your overall parser function which calls those functions as needed.
Also, write all the memory allocation code as separate functions. Think of what data structure you want to store your parsed result in. Then code up that data structure, and test it, entirely independently of the parsing code. Only then, write the parsing code and call functions to insert the resulting data in the data structure.
You really don't want to have memory management code mixed up with parsing code. That makes it exponentially harder to debug.
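To make the outline concrete, here is a rough self-contained sketch in C. The helper bodies are stubs, and the quote handling the answer warns about is deliberately omitted (see the NOTE comment); every name here is made up for illustration:

#include <stdio.h>
#include <string.h>

/* Stub: extract ParamName=ParamValue starting at *p, advancing *p to
   the ';' or ':' that follows. NOTE: ignores ';'/':' inside quotes;
   a real version must honour quote marks. */
static int extract_name_value_pair(char **p)
{
    size_t len = strcspn(*p, ";:");
    if ((*p)[len] == '\0')
        return -1;
    printf("param: %.*s\n", (int)len, *p);
    *p += len;
    return 0;
}

/* Stub: report the value part; a real version would split on commas,
   again honouring quotes. */
static int parse_csv(char *values)
{
    printf("value: %s\n", values);
    return 0;
}

/* Rough skeleton of parse_line following the outline above. */
static int parse_line(char *line)
{
    char *p = line;
    size_t n = strcspn(p, ":;");        /* first ':' or ';' */

    if (p[n] == '\0')
        return -1;                      /* no marker found: abort */
    printf("property: %.*s\n", (int)n, p);

    p += n;
    while (*p == ';') {                 /* optional parameters */
        p++;
        if (extract_name_value_pair(&p) != 0)
            return -1;
    }
    return parse_csv(p + 1);            /* *p is ':' here */
}

int main(void)
{
    char line[] = "ORGANIZER;CN=John Doe:mailto:john.doe@example.com";
    return parse_line(line) == 0 ? 0 : 1;
}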
When making a function that accepts a string (e.g. all three named functions above, plus any other helpers you decide you need) you have a few options as to their interface:
Accept pointer to null-terminated string
Accept pointer to start and one-past-the-end
Accept pointer to start, and integer length
Each way has its pros and cons: it's annoying to write null terminators everywhere and then unwrite them later if need be; but it's also annoying when you want to use strcspn or other string functions but you received a length-counted piece of string.
Also, when the function needs to let the caller know how much text it consumed in parsing, you have two options:
Accept a pointer to character and return the number of characters consumed; the calling function will add the two together to know what happened
Accept pointer to pointer to character, and update the pointer to character. Return value could then be used for an error code.
There's no one right answer, with experience you will get better at deciding which option leads to the cleanest code.

What's the best separator/delimiter character(s) for a plaintext db file? [closed]

What's the best separator/delimiter character(s) for a plaintext db file?
I considered using |, ,, <TAB>, ;, etc. But they all seem liable to break when nearby entries contain special enough characters.
So, experienced database users, what delimiter character(s) do you suggest using?
Well, there are a few separator characters in US-ASCII: hex 1c, 1d, 1e and 1f. Plain text shouldn't contain them.
1c FS ␜ ^\ File Separator
1d GS ␝ ^] Group Separator
1e RS ␞ ^^ Record Separator
1f US ␟ ^_ Unit Separator
No matter which character you choose as your separator, you'll want to escape any instance of that character in your data.
Perhaps tilde (~), or go to a high-ASCII character.
Either way, if there's any chance that it could sneak into your data, you'd want to escape it before writing to your plaintext file.
I think the best way is to join the strings with three hashes: '###'.
For a particular data warehousing situation where we had control over the source file, but escaping and qualifying were onerous, we were able to make the business decision that one extended ASCII character would be stripped from the data (if it ever occurs, which it hasn't).
On creation of the delimited source file, we stripped out any instances of █ (Alt+219) in the data and used that character as the delimiter.
Bonus, that character is really easy to spot.
Actually, it depends on the type of data you are trying to separate. We needed a separator for machine event data, and a couple of options were proposed:
=) or ^_^.
We chose ^_^ because it actually worked based on the number of samples tested and it also looks cute!
Personally I like using « as a delimiter character to split data in CSV files; I don't think I've ever found a naturally occurring instance of « or », so here are my two cents about it.
I usually prefer non-printable characters like "\u0001"; for instance I use this as the column delimiter in most of my Azure Data Analytics U-SQL scripts. That is assuming you can use a multi-character custom delimiter.
You could use the special separator characters (hex 1c -> 1f), yet they are non-printable, and some technologies have issues processing data containing them.
So, plan B, if your data is in UTF-8, you could pick a random UTF-8 character that is extremely unlikely to appear in any source data you receive.
Yet, even then, if you want to be sure you'll not run into issues, you better always scan your entire dataset for this character, and if it appears, simply pick another UTF-8 character.
I tend to hate encapsulation with a passion, and avoid it whenever possible, as explained in my post under the chapter 'encapsulation' here: https://theonemanitdepartment.wordpress.com/2014/12/15/the-absolute-minimum-everyone-working-with-data-absolutely-positively-must-know-about-file-types-encoding-delimiters-and-data-types-no-excuses/
If you can't control the data being put into it, don't use a plain text db. There can be no generally right answer here. Without context or constraints this is a false question.
To wit:
If I said I was only going to accept lowercase letters as data, I could use any other symbol as a separator (even, say, the digit 9) and I'd be fine; any symbol other than a lowercase letter would serve equally well.
Conversely, if I said I could accept any character, then I don't have any characters left for a separator, and I'd be left with a very sorry database that could only store a single value.
If you have to try too hard to get your db into plain text, you probably want a binary db. Have you looked at sqlite? It's pretty darned easy to use, is available in many contexts, and comes with a ton of benefits over a plain text db.
If you have the option of a string as the column separator, use "" as the delimiter. You can make up any string for that matter, which gives you flexibility.
I've used an ePub converter before where the delimiter was the notational quote character; anywhere it had been used in the data it was rewritten to the file as #. Simple but effective, even if it did destroy the sample material being produced.
I propose the interrobang character "‽". More details: https://en.wikipedia.org/wiki/Interrobang

Parsing a stream of data for control strings

I feel like this is a pretty common problem but I wasn't really sure what to search for.
I have a large file (so I don't want to load it all into memory) that I need to parse control strings out of and then stream that data to another computer. I'm currently reading in the file in 1000 byte chunks.
So for example if I have a string that contains ASCII codes escaped with ('$' some number of digits ';') and the data looked like this... "quick $33;brown $126;fox $a $12a". The string going to the other computer would be "quick !brown ~fox $a $12a".
In my current approach I have the following problems:
What happens when the control strings falls on a buffer boundary?
If the string is '$' followed by anything but digits and a ';' I want to ignore it. So I need to read ahead until the full control string is found.
I'm writing this in straight C so I don't have streams to help me.
Would an alternating double-buffer approach work, and if so, how does one manage the current locations, etc.?
If I've followed what you are asking, this is called lexical analysis or tokenization, or a job for regular expressions. For regular languages you can construct a finite state machine which will recognize your input. In practice you can use a tool that understands regular expressions to recognize the input and perform different actions on it.
Depending on different requirements you might go about this differently. For more complicated languages you might want to use a tool like lex to help you generate an input processor, but for this, as I understand it, you can use a much more simple approach, after we fix your buffer problem.
You should use a circular buffer for your input, so that indexing off the end wraps around to the front again. Whenever half of the data that the buffer can hold has been processed you should do another read to refill that. Your buffer size should be at least twice as large as the largest "word" you need to recognize. The indexing into this buffer will use the modulus (remainder) operator % to perform the wrapping (if you choose a buffer size that is a power of 2, such as 4096, then you can use bitwise & instead).
Now you just look at the characters until you read a $, output what you've looked at up until that point, and then, knowing that you are in a different state because you saw a $, you look at more characters until you see another character that ends the current state (the ;) and perform some other action on the data that you had read in. How to handle the case where the $ is seen without a well-formed number followed by a ; wasn't entirely clear in your question; what to do if there are a million digits before you see ;, for instance.
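As a concrete starting point, here is a minimal sketch of that state machine in C. Rather than a circular buffer, it simply carries its state across calls, which handles control strings that straddle the 1000-byte read boundaries the same way; the three-digit limit matches the patterns below, and all names are made up for illustration.

#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>

enum state { PLAIN, DOLLAR };   /* DOLLAR = we have seen '$' */

/* Process one chunk; st, num and numlen live in the caller, so an
   escape split across chunk boundaries is handled transparently. */
static void process_chunk(const char *buf, size_t len, enum state *st,
                          char num[4], size_t *numlen, FILE *out)
{
    size_t i;
    for (i = 0; i < len; i++) {
        char c = buf[i];
        if (*st == PLAIN) {
            if (c == '$') {
                *st = DOLLAR;
                *numlen = 0;
            } else {
                fputc(c, out);
            }
        } else if (isdigit((unsigned char)c) && *numlen < 3) {
            num[(*numlen)++] = c;          /* collect up to 3 digits */
        } else if (c == ';' && *numlen > 0) {
            num[*numlen] = '\0';
            fputc(atoi(num), out);         /* emit the escaped char */
            *st = PLAIN;
        } else {                           /* malformed: emit literally */
            size_t j;
            fputc('$', out);
            for (j = 0; j < *numlen; j++)
                fputc(num[j], out);
            if (c == '$') {                /* may start a new escape */
                *numlen = 0;
            } else {
                fputc(c, out);
                *st = PLAIN;
            }
        }
    }
}

At the very end of the input the caller should flush any pending '$' and collected digits the same literal way.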
The regular expressions would be:
[^$]
Any non-dollar-sign character. This could be augmented with a closure ([^$]* or [^$]+) to recognize a string of non-$ characters at a time, but that could get very long.
$[0-9]{1,3};
This would recognize a dollar sign followed by 1 to 3 digits followed by a semicolon.
[$]
This would recognize just a dollar sign. It is in brackets because $ is special in many regular expression dialects when it appears at the end of a pattern (which it does in this case) and means "match only at the end of a line".
Anyway, in this case it would recognize a dollar sign in the case where it is not recognized by the other, longer, pattern that recognizes dollar signs.
In lex you might have
[^$]{1,1024}  { write_string(yytext); }
$[0-9]{1,3};  { write_char(atoi(yytext + 1)); /* skip the '$' */ }
[$]           { write_char(*yytext); }
and it would generate a .c file that will function as a filter similar to what you are asking for. You will need to read up a little more on how to use lex though.
The "f" family of functions in <stdio.h> can take care of the streaming for you. Specifically, you're looking for fopen(), fgets(), fread(), etc.
Nategoose's answer about using lex (and I'll add yacc, depending on the complexity of your input) is also worth considering. They generate lexers and parsers that work, and after you've used them you'll never write one by hand again.

UTF-8 tuple storage using lowest common technological denominator, append-only

EDIT: Note that due to the way hard drives actually write data, none of the schemes in this list work reliably. Do not use them. Just use a database. SQLite is a good simple one.
What's the most low-tech but reliable way of storing tuples of UTF-8 strings on disk? Storage should be append-only for reliability.
As part of a document storage system I'm experimenting with I have to store UTF-8 tuple data on disk. Obviously, for a full-blown implementation, I want to use something like Amazon S3, Project Voldemort, or CouchDB.
However, at the moment, I'm experimenting and haven't even firmly settled on a programming language yet. I have been using CSV, but CSV tends to become brittle when you try to store outlandish Unicode and unexpected whitespace (e.g. vertical tabs).
I could use XML or JSON for storage, but they don't play nice with append-only files. My best guess so far is a rather idiosyncratic format where each string is preceded by a 4-byte signed integer indicating the number of bytes it contains, and an integer value of -1 indicates that this tuple is complete - the equivalent of a CSV newline. The main source of headaches there is having to decide on the endianness of the integer on disk.
Edit: actually, this won't work. If the program exits while writing a string, the data becomes irrevocably misaligned. Some sort of out-of-band signalling is needed to ensure alignment can be regained after an aborted tuple.
Edit 2: Turns out that guaranteeing atomicity when appending to text files is possible, but the parser is quite non-trivial. Writing said parser now.
Edit 3: You can view the end result at http://github.com/MetalBeetle/Fruitbat/tree/master/src/com/metalbeetle/fruitbat/atrio/ .
I would recommend tab delimiting each field and carriage-return delimiting each record.
Within each string, replace all characters that would affect the field and record interpretation and rendering. This would include control characters (U+0000–U+001F, U+007F–U+009F), non-graphical line and paragraph separators (U+2028, U+2029), directional control characters (U+202A–U+202E), and the byte order mark (U+FEFF).
They should be replaced with escape sequences of constant length. The escape sequences should begin with a rare (for your application) character. The escape character itself should also be escaped.
This would allow you to append new records easily. It has the additional advantage of being able to load the file for visual inspection and modification into any spreadsheet or word processing program, which could be useful for debugging purposes.
This would also be easy to code, since the file will be a valid UTF-8 document, so standard text reading and writing routines may be used. This also allows you to convert easily to UTF-16BE or UTF-16LE if desired, without complications.
Example:
U+0009 CHARACTER TABULATION becomes ~TB
U+000A LINE FEED becomes ~LF
U+000D CARRIAGE RETURN becomes ~CR
U+007E TILDE becomes ~~~
etc.
There are a couple of reasons why tabs would be better than commas as field delimiters. Commas appear more commonly within normal text strings (such as English text), and would have to be replaced more frequently. And spreadsheet programs (such as Microsoft Excel) tend to handle tab-delimited files much more naturally.
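A minimal sketch of the escaping side in C, covering just the example sequences above (a full version would handle all the listed ranges; write_escaped is an assumed name):

#include <stdio.h>

/* Write one field, replacing tab, LF, CR and the escape character
   itself with the fixed-length sequences shown above. */
static void write_escaped(const char *s, FILE *out)
{
    for (; *s != '\0'; s++) {
        switch (*s) {
        case '\t': fputs("~TB", out); break;
        case '\n': fputs("~LF", out); break;
        case '\r': fputs("~CR", out); break;
        case '~':  fputs("~~~", out); break;
        default:   fputc(*s, out);    break;
        }
    }
}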
Mostly thinking out loud here...
Really low tech would be to use (for example) null bytes as separators, and just "quote" all null bytes appearing in the output with an additional null.
Perhaps one could use SCSU along with that.
Or it might be worth looking at the gzip format, and maybe aping it, if not using it outright:
A gzip file consists of a series of "members" (compressed data sets).
[...]
The members simply appear one after another in the file, with no additional information before, between, or after them.
Each of these members can have an optional "filename", comment, or the like, and I believe you can just keep appending members.
Or you could use bencode, used in torrent files. Or BSON.
See also Wikipedia's Comparison of data serialization formats.
Otherwise I think your idea of preceding each string with its length is probably the simplest one.
