Can someone please explain the difference between the LZSS and LZ77 algorithms? I've been looking online for a couple of hours, but I couldn't find the difference. I found the LZ77 algorithm and I understood its implementation.
But how does LZSS differ from LZ77? Say we have the string "abracadabra": how is LZSS going to compress it differently from LZ77? Is there C pseudo-code that I could follow?
Thank you for your time!
Unfortunately, both terms LZ77 and LZSS tend to be used very loosely, so they do not really imply very specific algorithms. When people say that they compressed their data using an LZ77 algorithm, they usually mean that they implemented a dictionary-based compression scheme, where a fixed-size window into the recently decompressed data serves as the dictionary and, during compression, some words/phrases are replaced by references to previously seen words/phrases within the window.
Let us consider the input data in the form of the word
abracadabra
and assume that the window can be as large as the input data. Then we can represent "abracadabra" as
abracad(-7,4)
Here we assume that letters are copied as is, and that the meaning of the two numbers in brackets is "go 7 positions back from where we are now and copy 4 symbols from there", which reproduces "abra".
This is the basic idea of any LZ77 compressor. Now, the devil is in the details. Note that the original word "abracadabra" contains 11 letters, so assuming an ASCII representation, the word is 11 bytes long. Our new representation contains 13 symbols, so if we assume the same ASCII representation, we just expanded the original message instead of compressing it. One can prove that this can sometimes happen to any compressor, no matter how good it actually is.
So, the compression efficiency depends on the format in which you store the information about uncompressed letters and back references. The original paper where the LZ77 algorithm was first described (Ziv, J. & Lempel, A. (1977). A universal algorithm for sequential data compression. IEEE Transactions on Information Theory, 23(3), 337-343) uses a format that can be loosely described as
(0,0,a)(0,0,b)(0,0,r)(0,1,c)(0,1,d)(0,3,a)
So the compressed data is a sequence of triples: the absolute (not relative!) position in the buffer to copy from, the length of the dictionary match (0 means no match was found), and the letter that follows the match. Since most letters did not match anything in the dictionary, you can see that this is not a particularly efficient format for anything but very compressible data.
This inefficiency may well be the reason why the original form of LZ77 has not been used in any practical compressors.
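To make the triple format concrete, here is a small C sketch of my own (purely illustrative; the paper does not prescribe any particular in-memory layout) that decodes the six triples shown above back into "abracadabra":

/* Hedged illustration: decode (position, length, next-letter) triples in the
   LZ77-paper style. Positions are absolute indices into the output produced
   so far; a length of 0 means "no match, just emit the letter". */
#include <stdio.h>

struct triple { size_t pos, len; char next; };

int main(void)
{
    /* the six triples from the example: (0,0,a)(0,0,b)(0,0,r)(0,1,c)(0,1,d)(0,3,a) */
    struct triple t[] = {
        {0, 0, 'a'}, {0, 0, 'b'}, {0, 0, 'r'}, {0, 1, 'c'}, {0, 1, 'd'}, {0, 3, 'a'}
    };
    char out[64];
    size_t n = 0;
    for (size_t i = 0; i < sizeof t / sizeof t[0]; i++) {
        for (size_t k = 0; k < t[i].len; k++)   /* byte-by-byte so overlapping copies also work */
            out[n + k] = out[t[i].pos + k];
        n += t[i].len;
        out[n++] = t[i].next;                   /* the explicit letter after the match */
    }
    out[n] = '\0';
    printf("%s\n", out);                        /* prints: abracadabra */
    return 0;
}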
The SS in "LZSS" refers to a paper that tried to generalize the ideas of dictionary compression with a sliding window (Storer, J. A. & Szymanski, T. G. (1982). Data compression via textual substitution. Journal of the ACM, 29(4), 928-951). The paper itself looks at several variations of dictionary compression schemes with windows, so once again, you will not find an explicit "algorithm" in it. However, the term LZSS is used by most people to describe a dictionary compression scheme with flag bits, e.g. describing "abracadabra" as
|0a|0b|0r|0a|0c|0a|0d|1-7,4|
where I added vertical lines purely for clarity. In this case the numbers 0 and 1 are actually prefix bits, not bytes. Prefix bit 0 says "copy the next byte into the output as is". Prefix bit 1 says "the information for copying a match follows". Nothing else is really specified; the term LZSS mostly says something specific about the use of these prefix signal bits. Hopefully you can see how this can be encoded compactly, in fact much more efficiently than the format described in the LZ77 paper.
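And since you asked for C pseudo-code: below is a minimal greedy LZSS-style sketch of my own. It prints the flag-bit token stream as readable text instead of bit-packing it, uses the whole processed prefix as the window, and picks a minimum match length of 3 purely for illustration; none of these choices are mandated by the term LZSS.

/* Hedged sketch of a greedy LZSS-style encoder. Tokens are printed as text:
   "0<byte>" for a literal, "1-<offset>,<length>" for a back-reference. */
#include <stdio.h>
#include <string.h>

#define MIN_MATCH 3   /* shorter matches are cheaper to emit as literals */

static void lzss_demo(const char *in)
{
    size_t n = strlen(in);
    for (size_t pos = 0; pos < n; ) {
        size_t best_len = 0, best_off = 0;
        /* search the already-emitted prefix of the input (the "window") */
        for (size_t start = 0; start < pos; start++) {
            size_t len = 0;
            while (pos + len < n && in[start + len] == in[pos + len])
                len++;
            if (len > best_len) { best_len = len; best_off = pos - start; }
        }
        if (best_len >= MIN_MATCH) {
            printf("|1-%zu,%zu", best_off, best_len);   /* flag bit 1: copy a match */
            pos += best_len;
        } else {
            printf("|0%c", in[pos]);                    /* flag bit 0: literal byte */
            pos++;
        }
    }
    printf("|\n");
}

int main(void)
{
    lzss_demo("abracadabra");   /* prints |0a|0b|0r|0a|0c|0a|0d|1-7,4| */
    return 0;
}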
As you can see, I am trying to filter various bad words. I have some code to do so. I am using C, and also this is for a GTK application.
char LowerEnteredUsername[EnteredUsernameLen];
for (unsigned int i = 0; i < EnteredUsernameLen; i++) {
    LowerEnteredUsername[i] = tolower(EnteredUsername[i]);
}
LowerEnteredUsername[EnteredUsernameLen+1] = '\0';
if (strstr(LowerEnteredUsername, (char[]){LetterF, LetterU, LetterC, LetterK}) || strstr(LowerEnteredUsername, (char[]){LetterF, LetterC, LetterU, LetterK})) {
    gtk_message_dialog_set_markup((GtkMessageDialog*)Dialog, "This username seems to be innapropriate.");
    UsernameErr = 1;
}
My issue is that it will only filter the last bad word specified in the if statement, in this example "fcuk". If I input "fuck", the code passes it as clean. How can I fix this?
(char[]){LetterF, LetterU, LetterC, LetterK}
(char[]){LetterF, LetterC, LetterU, LetterK}
You’ve forgotten to terminate your strings with a '\0'. This approach doesn’t seem to me to be very effective in keeping ~bad words~ out of source code, so I’d really suggest just writing regular string literals:
if (strstr(LowerEnteredUsername, "fuck") || strstr(LowerEnteredUsername, "fcuk")) {
Much clearer. If this is really, truly a no-go, then some other indirect but less error-prone ways are:
"f" "u" "c" "k"
or
#define LOWER_F "f"
#define LOWER_U "u"
#define LOWER_C "c"
#define LOWER_K "k"
and
LOWER_F LOWER_U LOWER_C LOWER_K
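If the compound literals really must stay, the minimal fix is simply to terminate them, as noted above. A sketch of that variant, reusing the question's identifiers:

/* Minimal fix: add the terminating '\0' so strstr sees proper C strings
   (without it, strstr reads past the end of the array, which is undefined behaviour). */
if (strstr(LowerEnteredUsername, (char[]){LetterF, LetterU, LetterC, LetterK, '\0'}) ||
    strstr(LowerEnteredUsername, (char[]){LetterF, LetterC, LetterU, LetterK, '\0'})) {
    gtk_message_dialog_set_markup((GtkMessageDialog*)Dialog, "This username seems to be inappropriate.");
    UsernameErr = 1;
}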
Doing human-language text processing in C is painful because C's notion of strings (i.e. char*/char[] and wchar_t*/wchar_t[]) is very low-level and not expressive enough to easily represent Unicode text, let alone locate word boundaries in text and match words against a known dictionary (also consider things like inflection, declension, plurals, and the use of diacritics to evade naive string matching).
For example, your program would need to handle George Carlin's famous Seven Dirty Words quote:
https://www.youtube.com/watch?v=vbZhpf3sQxQ
Someone was quite interested in these words. They kept referring to them: they called them bad, dirty, filthy, foul, vile, vulgar, coarse, in poor taste, unseemly, street talk, gutter talk, locker room language, barracks talk, bawdy, naughty, saucy, raunchy, rude, crude, lude, lascivious, indecent, profane, obscene, blue, off-color, risqué, suggestive, cursing, cussing, swearing... and all I could think of was: shit, piss, fuck, cunt, cocksucker, motherfucker, and tits!
This could be slightly modified to evade a naive filter, like so:
Someone was quite interested in these words. They kept referring to them: they called them bad, dirty, filthy, foul, vile, vulgar, coarse, in poor taste, unseemly, street talk, gutter talk, locker room language, barracks talk, bawdy, naughty, saucy, raunchy, rude, crude, lude, lascivious, indecent, profane, obscene, blue, off-color, risqué, suggestive, cursing, cussing, swearing... and all I could think of was: shít, pis$, phuck, cunt, сocksucking, motherfúcker, and títs!
Above, some of the words have simple character replacements, like s to $; others had diacritics added, like u to ú; and some are just homophones. However, some of the other words above look the same but actually contain homoglyphs or "invisible" characters like Unicode's zero-width space, so they would evade naive text-matching systems.
So, in short: avoid doing this in C. If you must, then use a robust and fully-featured Unicode handling library (i.e. do not use the C standard library's string functions like strstr, strtok, strlen, etc.).
Here's how I would do it:
Read in input to a binary blob containing Unicode text (presumably UTF-8).
Use a Unicode library to:
Normalize the encoded Unicode text data (see https://en.wikipedia.org/wiki/Unicode_equivalence )
Identify word boundaries (assuming we're dealing with European-style languages that use sentences comprised of words).
Use a linguistics library and database (English alone is full of special-cases) to normalize each word to some singular canonical form.
Then look up each morpheme in a case-insensitive hash set of known "bad words".
Now, there are a few shortcuts you can take:
You can use regular-expressions to identify word-boundaries.
There exist Unicode-aware regular-expression libraries for C, for example PCRE2: http://www.pcre.org/current/doc/html/pcre2unicode.html (a small sketch of this is at the end of this answer).
You can skip normalizing each word's inflections/declensions if you're happy with having to list those in your "bad word" list.
I would write working code for this example, but I'm short on time tonight (and it would be a LOT of code); hopefully this answer provides you with enough information to figure out the rest yourself.
(Pro-tip: don't match strings in a list by checking each character - it's slow and inefficient. This is what hashtables and hashsets are for!)
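To give a taste of the PCRE2 shortcut mentioned above, here is a rough sketch of just the word-splitting step (nowhere near the full pipeline described above). It assumes PCRE2 is installed, that the source file and input are UTF-8, and that you link with -lpcre2-8; a real filter would then normalize each extracted word and look it up in a hash set rather than printing it.

/* Hedged sketch: split UTF-8 text into words with a Unicode-aware regex.
   PCRE2_UTF|PCRE2_UCP make \w match Unicode letters, so "wörds" and "résumé"
   come out as single tokens instead of being chopped at the accented bytes. */
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    PCRE2_SPTR pattern = (PCRE2_SPTR)"\\w+";
    PCRE2_SPTR subject = (PCRE2_SPTR)"Some naughty wörds and a résumé, split on word boundaries";
    PCRE2_SIZE subject_len = strlen((const char *)subject);

    int errcode;
    PCRE2_SIZE erroffset;
    pcre2_code *re = pcre2_compile(pattern, PCRE2_ZERO_TERMINATED,
                                   PCRE2_UTF | PCRE2_UCP,
                                   &errcode, &erroffset, NULL);
    if (re == NULL) return 1;

    pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
    PCRE2_SIZE start = 0;
    while (pcre2_match(re, subject, subject_len, start, 0, md, NULL) > 0) {
        PCRE2_SIZE *ov = pcre2_get_ovector_pointer(md);
        printf("word: %.*s\n", (int)(ov[1] - ov[0]), (const char *)subject + ov[0]);
        start = ov[1];   /* resume searching right after this match */
    }

    pcre2_match_data_free(md);
    pcre2_code_free(re);
    return 0;
}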
I have a C program that now needs to support UTF-8 characters. What must I know in order to do that? I've always heard how problematic it is to handle UTF-8 in a C/C++ environment. Why exactly is it problematic? How does a UTF-8 character differ from a usual C character, including its size? Can I do it without any operating system help, in pure C, and still keep it portable? What else should I have asked but didn't? What I'm looking to implement is this: the characters are names with accents (like the French word "résumé") that I need to read, put into a symbol table, and then search for and print from a file. It's part of my configuration file parsing (very much .ini-like).
There's an awesome article written by Joel Spolsky, one of the Stack Overflow creators.
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
Apart from that, you might want to query some other Q&A's regarding this subject, like Handling special characters in C (UTF-8 encoding).
As cited in the aforementioned Q&A, Tips on Using Unicode with C/C++ might give you the basics.
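To make the "why is it problematic" part concrete: UTF-8 text can be carried around in ordinary char buffers as long as you treat it as an opaque byte string, so a symbol table keyed on exact byte-for-byte names (as in the .ini use case) mostly just works. A small sketch, assuming the source file is saved as UTF-8 and the terminal uses a UTF-8 locale:

/* Hedged sketch: storing, printing and comparing UTF-8 in plain char strings.
   What breaks is anything that assumes 1 byte == 1 character: strlen counts
   bytes, tolower/toupper work byte-wise, and indexing can land mid-character. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *key = "résumé";                        /* é is two bytes in UTF-8 */
    printf("%s\n", key);                               /* printing passes the bytes through */
    printf("bytes: %zu\n", strlen(key));               /* 8, not 6: strlen counts bytes */
    printf("equal: %d\n", strcmp(key, "résumé") == 0); /* exact byte-wise lookup works */
    return 0;
}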
Two good links that I have used in the past:
The-Basics-of-UTF8
reading-unicode-utf-8-by-hand-in-c
valter
I received an MD5 hash and a regular expression which correspond to the same plaintext.
How do I use the regular expression to crack the MD5 hash and find the text behind it?
b89e49cab317f2681be60fb3d1c0f8f8
[(a|c|d)n-t\|]{8}
The idea would be to use the regex as a template and generate inputs that satisfy it.
You can search for a regex visualizer to see this, but what that one says is any of the characters ()acd| or any character between n and t (inclusive) in any order, repeated eight times. I tested this in hashcat, and the regex is correct despite it looking like it means something else. A shorter way to write that would be [acd|()n-t]{8}.
So you start generating 8-character strings from those values and taking the MD5 of each. You can do this in almost any programming language, but Python is a good choice. Look up the hashlib library; it has an md5 function. You'll call hexdigest on the result and compare it to the provided hash.
>>> import hashlib
>>> hashlib.md5(b'cybering').hexdigest()
'61e4feebe66ad22349e292d1462afd3a'
Additionally, if you want to use cracking software, look up John the Ripper or hashcat. You should be able to provide them a dictionary and have them attempt to break the hash. I was able to solve this with hashcat on my 980 Ti in ~5 seconds. This tutorial helped me set up the custom charset and mask to perform the attack.
Have fun!
One approach would be to generate all possible eight-character combinations (with repetition) of the 13 characters allowed by the regex. Test each combination by computing its MD5 hash and comparing it to the one you were given.
That would be 13^8 = 815,730,721 possible combinations to check. The answer will likely be found before checking all of them.
I was able to whip out a little Node.js program on my laptop that found the solution in about 4 minutes (I split the problem up using workers to take advantage of multiple CPU cores).
Edit: I originally read the regex as n-z instead of n-t, so the search space is actually much smaller than I first calculated.
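For anyone who would rather not reach for hashcat or Node.js, the same enumeration can be sketched in plain C, assuming OpenSSL is available for MD5 (compile with -lcrypto; the MD5() convenience function is deprecated in recent OpenSSL releases but fine for a throwaway experiment):

/* Hedged sketch: enumerate all 13^8 candidates drawn from the regex's
   character set, hash each one, and stop when the digest matches. */
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

static const char charset[] = "acd|()nopqrst";   /* the 13 distinct characters in [(a|c|d)n-t\|] */
static const char *target = "b89e49cab317f2681be60fb3d1c0f8f8";

int main(void)
{
    unsigned char digest[MD5_DIGEST_LENGTH];
    char hex[2 * MD5_DIGEST_LENGTH + 1];
    char candidate[9] = {0};
    size_t idx[8] = {0};
    const size_t base = strlen(charset);

    for (;;) {
        for (int i = 0; i < 8; i++) candidate[i] = charset[idx[i]];
        MD5((const unsigned char *)candidate, 8, digest);
        for (int i = 0; i < MD5_DIGEST_LENGTH; i++)
            sprintf(hex + 2 * i, "%02x", digest[i]);
        if (strcmp(hex, target) == 0) {
            printf("found: %s\n", candidate);
            return 0;
        }
        int p = 7;                                   /* odometer-style increment */
        while (p >= 0 && ++idx[p] == base) { idx[p] = 0; p--; }
        if (p < 0) break;                            /* all combinations exhausted */
    }
    puts("no match found");
    return 0;
}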
You can't crack the MD5 hash value; it was produced with a one-way hashing algorithm.
Does anybody know of any data on the frequency of the types of mistakes people make when they misspell a word? I'm not referring to the words themselves, but to the errors made by the typist. For example, I personally make transposition errors most often, followed by deletion errors (that is, leaving out a letter I should have typed), substitution errors, and lastly insertion errors. However, it would not surprise me to find out that typing a wrong letter (a substitution error, e.g. xat instead of cat) is more frequent than leaving a letter out.
My purpose is to be able to make best guesses at correcting a word when I only have the original user's input. The idea is that if one type of error is more frequent than others, then it's more likely that correcting a word via that type of operation is correct. I don't object to using a database of commonly misspelled words, but I'd prefer an algorithmic solution to depending on a corpus, especially if it might be faster.
You could try something like computing the Levenshtein distance between the mistyped word and the words in a dictionary. I'm not sure that's what you want, though.
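For what it's worth, a plain dynamic-programming Levenshtein distance is short to write; here is a sketch in C. Note that classic Levenshtein counts a transposition like "teh" to "the" as two edits, so if transpositions matter most to you, the Damerau-Levenshtein variant may be a better fit.

/* Hedged sketch: single-row dynamic-programming Levenshtein distance. */
#include <stdio.h>
#include <string.h>

static size_t levenshtein(const char *a, const char *b)
{
    size_t la = strlen(a), lb = strlen(b);
    size_t row[lb + 1], prev;                 /* one DP row; a VLA keeps the sketch short */

    for (size_t j = 0; j <= lb; j++) row[j] = j;
    for (size_t i = 1; i <= la; i++) {
        prev = row[0];
        row[0] = i;
        for (size_t j = 1; j <= lb; j++) {
            size_t cur = row[j];
            size_t best = prev + (a[i - 1] == b[j - 1] ? 0 : 1);  /* match or substitution */
            if (row[j] + 1 < best) best = row[j] + 1;             /* deletion */
            if (row[j - 1] + 1 < best) best = row[j - 1] + 1;     /* insertion */
            row[j] = best;
            prev = cur;
        }
    }
    return row[lb];
}

int main(void)
{
    printf("%zu\n", levenshtein("xat", "cat"));   /* 1: one substitution */
    printf("%zu\n", levenshtein("teh", "the"));   /* 2: a transposition costs two edits here */
    return 0;
}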