I am trying to write a small Base64 encoder/decoder program, and I'm trying to figure out whether there are any rules, guidelines, or expected behavior for when I run into a character that is not valid.
I could fail fast (complain and exit), ignore invalid characters (as I would for newlines, etc.), or take a junk-in, junk-out approach (where the data is partially decoded, and the rest depends on the severity or exact number of errors).
On a similar point: I imagine I should ignore newlines (as in PEM files, where lines are broken at 64 characters), but are there any other control characters I should expect and properly ignore?
If it is of any interest, I'm coding in pure (vanilla) C, which doesn't have a library for this built in. But that detail shouldn't really matter for the answer I'm looking for.
Thanks.
My apologies. The RFCs on MIME (1341, 1521, 2045) contain the following paragraph, which I had not found until now:
The output stream (encoded bytes) must be represented in lines of no more than 76 characters each. All line breaks or other characters not found in Table 1 must be ignored by decoding software. In base64 data, characters other than those in Table 1, line breaks, and other white space probably indicate a transmission error, about which a warning message or even a message rejection might be appropriate under some circumstances.
In any case, it seems appropriate for this question and answer to be available on Stack Overflow.
P.S. If there are other Base64 standards with different guidelines, links and quotes are welcome in other answers.
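For illustration, here is a minimal sketch in plain C of a decode loop that follows that guidance: line breaks and other whitespace are skipped silently, while any other character outside the alphabet triggers a warning and is then skipped. The function names are mine, padding ('=') simply ends the loop, and the caller must supply a large enough output buffer.

#include <stdio.h>
#include <string.h>

/* Map a base64 alphabet character to its 6-bit value, or return -1 if it
   is not in the alphabet (RFC 2045 Table 1). */
static int b64_value(int c)
{
    static const char *alphabet =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
    const char *p = c ? strchr(alphabet, c) : NULL;
    return p ? (int)(p - alphabet) : -1;
}

/* Decode base64 text from `in` into `out`.  Whitespace and line breaks are
   ignored silently; any other character outside the alphabet produces a
   warning and is skipped.  Returns the number of decoded bytes. */
size_t decode_base64(const char *in, unsigned char *out)
{
    unsigned buf = 0;
    int bits = 0;
    size_t n = 0;

    for (; *in && *in != '='; in++) {
        int v = b64_value((unsigned char)*in);
        if (v < 0) {
            if (!strchr(" \t\r\n\f\v", *in))
                fprintf(stderr, "warning: ignoring invalid character 0x%02x\n",
                        (unsigned char)*in);
            continue;
        }
        buf = (buf << 6) | (unsigned)v;
        bits += 6;
        if (bits >= 8) {
            bits -= 8;
            out[n++] = (unsigned char)(buf >> bits);
        }
    }
    return n;
}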
I want to be convinced that using a BOM for file encoding is absolutely needed, for the following reasons.
The information about a file should be self-contained; we have not figured out a clear algorithm for identifying which encoding a file uses.
As for the compatibility issue with the shebang line, that should be fixed inside the scripting language, because encoding is a much higher-level concept than the shebang line.
Regarding the first claim: I have a difficult time determining which encoding is right for a file. As a result, files with mixed or mismatched encodings appear frequently, and I suspect most new developers encounter this situation and simply ignore the weird characters produced by differing encoding strategies.
I recognize that compatibility is an important aspect of software maintenance. However, I think an old rule that confuses the system should be changed as a step toward the future.
Is there any thought or movement toward making the BOM official? Or is there a critical reason why we must not introduce the BOM (e.g. a clear algorithm for identifying a file's encoding already exists)?
My understanding comes from the following link, so an additional link that changes my perspective would be very welcome.
What's the difference between UTF-8 and UTF-8 without BOM?
Thanks,
Your first assumption is wrong. We have protocols to define what a file (or a packet) contains and how to interpret that content, and we should always keep metadata separate from data. You are effectively proposing the BOM as metadata describing the bytes that follow, but that is not enough. "Text data" is not very useful information by itself: we still need to understand and interpret what the text means. The most obvious example is interpreting U+0020 (space) either as a printed character or as control data; HTML treats it as the latter (two spaces are nothing special, nor is a space followed by a newline, except inside <pre>). And beyond that we have mail messages, mailboxes, Markdown files, HTML, and so on. A BOM alone doesn't help. So for your first point we would need to add more and more information, and at that point we have a general container format (metadata plus one or more data payloads), which is no longer plain text, and it is not the BOM that is helping us.
If you need a BOM, you have already lost the battle: what looks like a BOM may not be a BOM at all, but real data in another encoding. Two or three bytes are not enough. The shebang, which is old, used four bytes (#! /; the space is no longer required), but in any case it is an old protocol from a time when files were not exchanged heavily and the path was relevant (nobody executed random files, and if it wasn't a shebang, it was an a.out file).
And you are discussing old stuff. Nowadays everything is UTF-8; there is no need for a BOM. Microsoft is just making things more complex: Unix, Linux and macOS made a short transition without much hurt (and no "flag day"). The web, too, is UTF-8 by default. Your question is about programming languages, but UTF-8 is fine there: they use ASCII for their syntax, and what is inside strings doesn't matter much. It is standard to treat strings and Unicode as opaque objects except in a few cases; otherwise you will miss something from Unicode anyway (e.g. splitting combining characters, or splitting an emoji in a language that works with UTF-16 code units).
UTF-16 is not something you will write programs in. It may be used by an API (fixed length may be, or seem, better), or possibly for data, but usually not for source code.
And a BOM doesn't help unless you modify all scripts and programs (and if we are doing that, let's just declare "everything is UTF-8"). It is not rare to find program sources in multiple encodings, even within the same file: you may have copy-pasted the copyright notice (and the author's name) with a simple editor in one encoding, while the strings are in another encoding and a few comments (and committer names) are in yet another. And git (and other tools) just track lines, so they may insert lines with the wrong encoding: git carries very little information about this, and users often have incorrect configurations. So you may break sources that were fine, just because the encoding problems were confined to comments.
Then a short comment on the second assumption, which is also problematic.
You want to split layers, but this is very problematic: we have scripts that contain binary data at the end, so the operating system should not try to transcode the script (and thereby remove a BOM), because the first part may be plain text while other parts require exactly the right bytes. (Some Unicode test files are also in this category: they are text, possibly with some invalid codes.)
Just use UTF-8 without a BOM and everything becomes much simpler.
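If you do have to tolerate a BOM written by other tools, skipping it at read time is trivial. A minimal sketch in C, assuming a seekable stream (the function name is mine):

#include <stdio.h>

/* Skip a UTF-8 BOM (EF BB BF) at the start of `f` if one is present.
   Returns 1 if a BOM was skipped, 0 otherwise; if the first bytes are
   not a BOM the stream is rewound, so callers see it unchanged. */
int skip_utf8_bom(FILE *f)
{
    unsigned char buf[3];
    long start = ftell(f);
    size_t got = fread(buf, 1, 3, f);

    if (got == 3 && buf[0] == 0xEF && buf[1] == 0xBB && buf[2] == 0xBF)
        return 1;

    fseek(f, start, SEEK_SET);  /* not a BOM: put the bytes back */
    return 0;
}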
If you type ps aux into your terminal and make the window really small, the output of the command will not wrap and the format is still very clear.
When I use printf and output my 5 or 6 strings, sometimes the length of my output exceeds that of the terminal window and the strings wrap to the next line which totally screws up the format. How can I write my program such that the output continues to the edge of the window but no further?
I've tried searching for an answer to this question, but I'm having trouble narrowing it down, so my search results never seem to have anything to do with it.
Thanks!
There are functions that can tell you about the terminal window, and others that let you manipulate it. Look up the "ncurses" or "termcap" libraries.
A simple approach to your problem would be to get the terminal window size (especially the width) and then format your output accordingly.
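For example, on POSIX systems you can ask the kernel directly with the TIOCGWINSZ ioctl instead of linking a curses library; a minimal sketch (the function name and the fallback of 80 columns are my choices):

#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

/* Return the terminal width in columns, falling back to 80 when stdout
   is not a terminal (e.g. the output is redirected to a file). */
int terminal_width(void)
{
    struct winsize ws;
    if (isatty(STDOUT_FILENO) && ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == 0)
        return ws.ws_col;
    return 80;
}

ncurses exposes the same number as COLS after initscr(), and termcap via tgetnum("co").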
There are two possible ways to fix your problem.
Turn off line wrapping in your terminal emulator (if it supports it).
Look into the Curses library. Applications like top or vim use the Curses library for screen formatting.
You can find, or at least guess, the width of the terminal using methods that other answers describe. That's only part of the problem however -- the tricky bit is formatting the output to fit the console. I don't believe there's any alternative to reading the text word by word, and moving the output to the next line when a word would overflow the width. You'll need to implement a method to detect where the white-space is, allowing for the fact that there could be multiple white spaces in a row. You'll need to decide how to handle line-breaking white-space, like CR/LF, if you have any. You'll need to decide whether you can break a word on punctuation (e.g, a hyphen). My approach is to use a simple finite-state machine, where the states are "At start of line", "in a word", "in whitespace", etc., and the characters (or, rather character classes) encountered are the events that change the state.
A particular complication when working in C is that there is little-to-no built-in support for multi-byte characters. That's fine for text which you are certain will only ever be in English, and use only the ASCII punctuation symbols, but with any kind of internationalization you need to be more careful. I've found that it's easiest to convert the text into some wide format, perhaps UTF-32, and then work with arrays of 32-bit integers to represent the characters. If your text is UTF-8, there are various tricks you can use to avoid having to do this conversion, but they are a bit ugly.
I have some code I could share, but I don't claim it is production quality, or even comprehensible. This simple-seeming problem is actually far more complicated than first impressions suggest. It's easy to do badly, but difficult to do well.
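For what it's worth, here is a rough sketch of the word-by-word approach for plain ASCII text. It has none of the multi-byte handling discussed above, a word longer than the width simply overflows, and the function name is mine; it is a sketch, not the production code mentioned earlier.

#include <stdio.h>
#include <string.h>
#include <ctype.h>

/* Print `text` to stdout, breaking lines at whitespace so that no line
   exceeds `width` columns (ASCII only). */
void print_wrapped(const char *text, int width)
{
    int col = 0;

    while (*text) {
        /* skip runs of whitespace; they become one space or a line break */
        while (*text && isspace((unsigned char)*text))
            text++;
        if (!*text)
            break;

        /* measure the next word */
        size_t len = strcspn(text, " \t\r\n");

        if (col > 0 && col + 1 + (int)len > width) {
            putchar('\n');              /* word would overflow: break the line */
            col = 0;
        }
        if (col > 0) {
            putchar(' ');
            col++;
        }
        fwrite(text, 1, len, stdout);
        col += (int)len;
        text += len;
    }
    putchar('\n');
}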
Okay basically what I'm asking is:
Let's say I use PathFindFileNameA on a path containing Unicode characters. I obtain this path via GetModuleFileNameA, but since this API doesn't support Unicode characters (Italian characters, for example), it will output junk characters in that part of the path.
Let's assume x represents a junk character in the file path, such as:
C:\Users\xxxxxxx\Desktop\myfile.sys
I assume that PathFindFileNameA just parses the string with strtok till it encounters the last \\, and outputs the remainder in a preallocated buffer given str_length - pos_of_last \\.
The question is: will PathFindFileNameA parse the string correctly even if it encounters junk characters from a failed Unicode conversion (since the multi-byte counterpart of the API is being called), or will it crash the program?
Don't answer something like "Well just use MultiByteToWideChar", or "Just use a wide-version of the API". I am asking a specific question, and a specific answer would be appreciated.
Thanks!
Why do you think the Windows API just does strtok? I have heard that all the xxA APIs were redirected to the xxW APIs even before Windows 10 was released.
And I think the answer to this question is quite simple: just write a small program, set the code page to whatever you want, run it, and see what comes out.
P.S.: Personally, I think GetModuleFileNameA will work correctly even if there are junk characters, because Windows stores the image name internally as a UNICODE_STRING. And even if you use MBCS, the junk characters do not contain zero bytes, so it will work as usual, since it is essentially just a strncpy.
Sorry for my last answer :)
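The experiment suggested above might look roughly like this (link with shlwapi.lib; the printed labels are mine). Put the executable somewhere under a path with non-ANSI characters and see what comes out:

#include <windows.h>
#include <shlwapi.h>   /* PathFindFileNameA -- link with shlwapi.lib */
#include <stdio.h>

int main(void)
{
    char path[MAX_PATH];

    /* Fetch the module path through the ANSI API; characters that do not
       exist in the current code page come back mangled. */
    if (GetModuleFileNameA(NULL, path, MAX_PATH) == 0) {
        fprintf(stderr, "GetModuleFileNameA failed: %lu\n", GetLastError());
        return 1;
    }

    printf("full path: %s\n", path);
    printf("file name: %s\n", PathFindFileNameA(path));
    return 0;
}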
I feel like this is a pretty common problem but I wasn't really sure what to search for.
I have a large file (so I don't want to load it all into memory) that I need to parse control strings out of and then stream that data to another computer. I'm currently reading in the file in 1000 byte chunks.
So, for example, if I have a string that contains ASCII codes escaped as '$', some number of digits, then ';', and the data looked like this: "quick $33;brown $126;fox $a $12a", the string going to the other computer would be "quick !brown ~fox $a $12a".
In my current approach I have the following problems:
What happens when a control string falls on a buffer boundary?
If the string is '$' followed by anything other than digits and a ';', I want to ignore it. So I need to read ahead until the full control string is found.
I'm writing this in straight C so I don't have streams to help me.
Would an alternating double-buffer approach work, and if so, how does one manage the current locations, etc.?
If I've followed what you are asking about, it is called lexical analysis, tokenization, or regular expressions. For regular languages you can construct a finite state machine that will recognize your input. In practice you can use a tool that understands regular expressions to recognize the input and perform different actions on it.
Depending on your requirements you might go about this differently. For more complicated languages you might want a tool like lex to help generate an input processor, but for this, as I understand it, you can use a much simpler approach, once we fix your buffer problem.
You should use a circular buffer for your input, so that indexing off the end wraps around to the front again. Whenever half of the data that the buffer can hold has been processed you should do another read to refill that. Your buffer size should be at least twice as large as the largest "word" you need to recognize. The indexing into this buffer will use the modulus (remainder) operator % to perform the wrapping (if you choose a buffer size that is a power of 2, such as 4096, then you can use bitwise & instead).
Now you just look at characters until you read a $, output what you've seen up to that point, and then, knowing you are in a different state because you saw a $, look at more characters until you see one that ends the current state (the ;) and perform some other action on the data you have read. How to handle the case where a $ is seen without a well-formed number followed by a ; wasn't entirely clear in your question -- what to do if there are a million digits before you see a ;, for instance.
The regular expressions would be:
[^$]
Any non-dollar-sign character. This could be augmented with a closure ([^$]* or [^$]+) to recognize a run of non-$ characters at a time, but such a run could get very long.
$[0-9]{1,3};
This would recognize a dollar sign followed by 1 to 3 digits followed by a semicolon.
[$]
This would recognize just a dollar sign. It is in brackets because $ is special in many regular expression flavors when it appears at the end of a pattern (which it does here), where it means "match only at the end of a line".
Anyway, in this case it would recognize a dollar sign that is not matched by the other, longer pattern.
In lex you might have
[^$]{1,1024} { write_string(yytext); }
$[0-9]{1,3}; { write_char(atoi(yytext + 1)); /* +1 skips the leading '$' */ }
[$] { write_char(*yytext); }
and it would generate a .c file that will function as a filter similar to what you are asking for. You will need to read up a little more on how to use lex though.
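If you would rather not pull in lex at all, a rough plain-C sketch of the same state machine might look like the following. It reads through stdio so the library's buffering takes care of the chunk-boundary problem, a malformed escape is passed through unchanged, and the function name and the digit limit are arbitrary choices of mine.

#include <stdio.h>
#include <stdlib.h>

/* Copy `in` to `out`, replacing well-formed "$<digits>;" escapes with the
   character they name and passing anything else through unchanged. */
void decode_escapes(FILE *in, FILE *out)
{
    int c;

    while ((c = getc(in)) != EOF) {
        if (c != '$') {
            putc(c, out);                   /* state: ordinary text */
            continue;
        }

        /* state: saw '$' -- collect digits and look for the closing ';' */
        char digits[8];
        size_t n = 0;
        int d = getc(in);

        while (d >= '0' && d <= '9' && n < sizeof digits - 1) {
            digits[n++] = (char)d;
            d = getc(in);
        }
        digits[n] = '\0';

        if (n > 0 && d == ';') {
            putc(atoi(digits), out);        /* valid escape: emit the code */
        } else {
            putc('$', out);                 /* not an escape: emit it literally */
            fputs(digits, out);
            if (d != EOF)
                ungetc(d, in);              /* it may start a new '$' escape */
        }
    }
}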
The "f" family of functions in <stdio.h> can take care of the streaming for you. Specifically, you're looking for fopen(), fgets(), fread(), etc.
Nategoose's answer about using lex (and I'll add yacc, depending on the complexity of your input) is also worth considering. They generate lexers and parsers that work, and after you've used them you'll never write one by hand again.
EDIT: Note that due to the way hard drives actually write data, none of the schemes in this list work reliably. Do not use them. Just use a database. SQLite is a good simple one.
What's the most low-tech but reliable way of storing tuples of UTF-8 strings on disk? Storage should be append-only for reliability.
As part of a document storage system I'm experimenting with I have to store UTF-8 tuple data on disk. Obviously, for a full-blown implementation, I want to use something like Amazon S3, Project Voldemort, or CouchDB.
However, at the moment, I'm experimenting and haven't even firmly settled on a programming language yet. I have been using CSV, but CSV tends to become brittle when you try to store outlandish Unicode and unexpected whitespace (e.g. vertical tabs).
I could use XML or JSON for storage, but they don't play nice with append-only files. My best guess so far is a rather idiosyncratic format where each string is preceded by a 4-byte signed integer indicating the number of bytes it contains, and an integer value of -1 indicates that this tuple is complete - the equivalent of a CSV newline. The main source of headaches there is having to decide on the endianness of the integer on disk.
Edit: actually, this won't work. If the program exits while writing a string, the data becomes irrevocably misaligned. Some sort of out-of-band signalling is needed to ensure alignment can be regained after an aborted tuple.
Edit 2: Turns out that guaranteeing atomicity when appending to text files is possible, but the parser is quite non-trivial. Writing said parser now.
Edit 3: You can view the end result at http://github.com/MetalBeetle/Fruitbat/tree/master/src/com/metalbeetle/fruitbat/atrio/ .
I would recommend tab delimiting each field and carriage-return delimiting each record.
Within each string, replace all characters that would affect field and record interpretation or rendering. These include control characters (U+0000–U+001F, U+007F–U+009F), non-graphical line and paragraph separators (U+2028, U+2029), directional control characters (U+202A–U+202E), and the byte order mark (U+FEFF).
They should be replaced with escape sequences of constant length. The escape sequences should begin with a rare (for your application) character. The escape character itself should also be escaped.
This would allow you to append new records easily. It has the additional advantage of being able to load the file for visual inspection and modification into any spreadsheet or word processing program, which could be useful for debugging purposes.
This would also be easy to code, since the file will be a valid UTF-8 document, so standard text reading and writing routines may be used. This also allows you to convert easily to UTF-16BE or UTF-16LE if desired, without complications.
Example:
U+0009 CHARACTER TABULATION becomes ~TB
U+000A LINE FEED becomes ~LF
U+000D CARRIAGE RETURN becomes ~CR
U+007E TILDE becomes ~~~
etc.
There are a couple of reasons why tabs are better than commas as field delimiters. Commas appear more often within normal text strings (such as English prose), so they would have to be escaped more frequently. And spreadsheet programs (such as Microsoft Excel) tend to handle tab-delimited files much more naturally.
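A minimal sketch in C of the escaping and record layout described above. Only the characters from the example are handled; a real version would cover the full control and format ranges listed earlier, and the function names are mine.

#include <stdio.h>

/* Write the UTF-8 string `s` to `out`, escaping the delimiter characters
   and the escape character itself with constant-length sequences. */
void write_escaped_field(const char *s, FILE *out)
{
    for (; *s; s++) {
        switch ((unsigned char)*s) {
        case '\t': fputs("~TB", out); break;
        case '\n': fputs("~LF", out); break;
        case '\r': fputs("~CR", out); break;
        case '~':  fputs("~~~", out); break;
        default:   fputc(*s, out);    break;
        }
    }
}

/* A record is its fields separated by tabs and terminated by a carriage
   return; flushing keeps each appended record on disk as a unit. */
void write_record(const char **fields, size_t nfields, FILE *out)
{
    size_t i;
    for (i = 0; i < nfields; i++) {
        if (i) fputc('\t', out);
        write_escaped_field(fields[i], out);
    }
    fputc('\r', out);
    fflush(out);
}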
Mostly thinking out loud here...
Really low tech would be to use (for example) null bytes as separators, and just "quote" all null bytes appearing in the output with an additional null.
Perhaps one could use SCSU along with that.
Or it might be worth looking at the gzip format, and maybe imitating it, if not using it outright:
A gzip file consists of a series of "members" (compressed data sets).
[...]
The members simply appear one after another in the file, with no additional information before, between, or after them.
Each of these members can have an optional "filename", comment, or the like, and I believe you can just keep appending members.
Or you could use bencode, used in torrent-files. Or BSON.
See also Wikipedia's Comparison of data serialization formats.
Otherwise, I think your idea of preceding each string with its length is probably the simplest one.
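For completeness, a minimal sketch in C of that length-prefixed idea, writing the length in a fixed big-endian order to sidestep the endianness question. The function names are mine, and, as the question's edits point out, a partially written record still leaves the file misaligned, so some recovery or sync scheme is needed on top of this.

#include <stdio.h>
#include <string.h>
#include <stdint.h>

/* Append one string to `out` as a 4-byte big-endian length followed by the
   raw UTF-8 bytes.  Returns 0 on success, -1 on a write error. */
int append_string(FILE *out, const char *s)
{
    uint32_t len = (uint32_t)strlen(s);
    unsigned char hdr[4] = {
        (unsigned char)(len >> 24), (unsigned char)(len >> 16),
        (unsigned char)(len >> 8),  (unsigned char)len
    };
    if (fwrite(hdr, 1, 4, out) != 4) return -1;
    if (fwrite(s, 1, len, out) != len) return -1;
    return 0;
}

/* A length of 0xFFFFFFFF (-1 as a signed integer) marks the end of a tuple,
   playing the role of the CSV newline. */
int end_tuple(FILE *out)
{
    unsigned char end[4] = { 0xFF, 0xFF, 0xFF, 0xFF };
    if (fwrite(end, 1, 4, out) != 4) return -1;
    return fflush(out) == 0 ? 0 : -1;
}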