I need to use both compression and encryption in a project that consists of two programs.
In the first program, an ASCII text file is first compressed and then encrypted; further operations follow on this encrypted version of the file. The second program follows the reverse process: it first decrypts and then decompresses to recover the original ASCII text file.
I've implemented the encryption module (AES via OpenSSL) and it works fine. But when I looked for compression options on Linux, I found that gzip, zlib, etc. produce their own file formats (filename.gz or some other extension) whose contents are not purely ASCII. (For instance, I see diamond-shaped symbols when I view the output in the terminal.) Because of this, I'm unable to read the compressed file completely in my C program.
So, in short, I need a compressed file that contains only ASCII characters. Is this possible by any means?
Finally resolved the issue. The program is handling everything correctly.
On the sending side:
compression: gzip -9 -c secret.txt > compressed.txt.gz
encryption: openssl enc -aes-256-cbc -a -salt -in compressed.txt.gz -out encrypted.txt
The compression output (.gz) is given as the input for encryption, and the -a flag makes openssl base64-encode the encrypted data, so the resulting output is purely ASCII.
On the receiving side:
decryption: openssl enc -d -aes-256-cbc -a -in decryptme.txt -out decrypted.txt.gz
decompression: gunzip -c decrypted.txt.gz > message.txt
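Since the -a option makes the encrypted output plain base64 text, the resulting file can be read from C like any other ASCII file. A minimal sketch, assuming the file name used in the commands above:
#include <stdio.h>

int main(void) {
    FILE *fp = fopen("encrypted.txt", "r");    /* plain ASCII, so text mode is fine */
    if (fp == NULL) { perror("fopen"); return 1; }
    char line[128];
    while (fgets(line, sizeof line, fp) != NULL)
        fputs(line, stdout);                   /* every character is printable ASCII */
    fclose(fp);
    return 0;
}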
You can add a uuencode / uudecode filter between compression and encryption -- or you might want to loosen the restriction that the compressed data must be in ASCII form. Options:
Read the binary data directly from your C program
(e.g. char buffer[256]; size_t n = fread(buffer, 1, sizeof buffer, stdin); )
or convert the data to hexadecimal format, e.g.
static char encrypted_file[] = { 0x01, 0x6e, ... };
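A minimal sketch of that second route, reading raw binary data and dumping it as C-style hex initializers (the input file name is just a placeholder):
#include <stdio.h>

int main(void) {
    FILE *in = fopen("compressed.txt.gz", "rb");   /* "rb": read the raw bytes untouched */
    if (in == NULL) { perror("fopen"); return 1; }
    unsigned char buffer[256];
    size_t n;
    printf("static unsigned char encrypted_file[] = {\n");
    while ((n = fread(buffer, 1, sizeof buffer, in)) > 0) {
        for (size_t i = 0; i < n; i++)
            printf("0x%02x,", buffer[i]);          /* each byte as a hex initializer */
        printf("\n");
    }
    printf("};\n");
    fclose(in);
    return 0;
}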
I'm trying to upload files containing special characters to our platform via the exec command, but the characters are always mangled and the upload fails.
For example if I try to upload a mémo.txt file I get the following error:
/bin/cp: cannot create regular file `/path/to/dir/m\351mo.txt': No such file or directory
UTF-8 is correctly configured on the system, and if I run the command in a shell it works fine.
Here is the TCL code:
exec /bin/cp $tmp_filename $dest_path
How can I make it work?
The core of the problem is what encoding is being used to communicate with the operating system. For exec and filenames, that encoding is whatever is returned by the encoding system command (Tcl has a pretty good guess at what the correct value for that is when the Tcl library starts up, but very occasionally gets it wrong). On my computer, that command returns utf-8 which says (correctly!) that strings passed to (and received from) the OS are UTF-8.
You should be able to use the file copy command instead of doing exec /bin/cp, which helps here because it has fewer layers of trickiness (it avoids going through an external program, which can impose its own problems). We'll assume that's being done:
set tmp_filename "foobar.txt"; # <<< fill in the right value, of course
set dest_path "/path/to/dir/mémo.txt"
file copy $tmp_filename $dest_path
If that fails, we need to work out why. The most likely problems relate to the encoding though, and can go wrong in multiple ways that interact horribly. Alas, the details matter. In particular, the encoding for a path depends on the actual filesystem (it's formally a parameter when the filesystem is created) and can vary on Unix between parts of a path when you have a mount within another mount.
If the worst comes to the worst, you can put Tcl into ISO 8859-1 mode and then do all the encoding yourself (as ISO 8859-1 is the “just use the bytes I tell you” encoding); encoding convertto is also useful in this case. Be aware that this can generate filenames that cause trouble for other programs, but it's at least able to let you get at it.
encoding system iso8859-1
file copy $tmp_filename [encoding convertto utf-8 $dest_path]
Care might be needed to convert different parts of the path correctly in this case: you're taking full responsibility for what's going on.
If you're on Windows, please just let Tcl handle the details. Tcl uses the Wide (Unicode) Windows API directly so you can pretend that none of these problems exist. (There are other problems instead.)
On macOS, please leave encoding system alone as it is correct. Macs have a very opinionated approach to encodings.
I already tried the file copy command but it says error copying
"/tmp/file7k5kqg" to "/path/to/dir/mémo.txt": no such file or
directory
My reading of your problem is that, for some reason, your Tcl is set to iso8859-1 ([encoding system]), while the executing environment (shell) is set to utf-8. This explains why Donal's suggestion works for you:
encoding system iso8859-1
file copy $tmp_filename [encoding convertto utf-8 $dest_path]
This will safely pass the UTF-8 encoded byte array down to any syscall: é as \xc3\xa9 (i.e. \u00e9). Watch:
% binary encode hex [encoding convertto utf-8 é]
c3a9
% encoding system iso8859-1; exec xxd << [encoding convertto utf-8 é]
00000000: c3a9 ..
This is equivalent to [encoding system] also being set to utf-8 (as to be expected in an otherwise utf-8 environment):
% encoding system
utf-8
% exec xxd << é
00000000: c3a9 ..
What you are experiencing (without any intervention) seems to be a re-coding of the Tcl internal encoding to iso8859-1 on the way out from Tcl (because of [encoding system], as Donal describes), and a follow-up (and faulty) re-coding of this iso8859-1 value into the utf-8 environment.
Watch the difference (\xe9 vs. \xc3\xa9):
% encoding system iso8859-1
% encoding system
iso8859-1
% exec xxd << é
00000000: e9
The problem, it then seems, is that this \xe9 must be interpreted in your otherwise UTF-8 environment, like:
$ locale
LANG="de_AT.UTF-8"
...
$ echo -ne '\xe9'
?
$ touch `echo -ne 'm\xe9mo.txt'`
touch: m?mo.txt: Illegal byte sequence
$ touch mémo.txt
$ ls mémo.txt
mémo.txt
$ cp `echo -ne 'm\xe9mo.txt'` b.txt
cp: m?mo.txt: No such file or directory
But:
$ cp `echo -ne 'm\xc3\xa9mo.txt'` b.txt
$ ls b.txt
b.txt
Your options:
(1) You need to find out why Tcl picks up iso8859-1, to begin with. How did you obtain your installation? Self-compiled? What are the details (version)?
(2) You may proceed as Donal suggests, or alternatively, set encoding system utf-8 explicitly.
encoding system utf-8
file copy $tmp_filename $dest_path
If I use the Google Cloud Storage File Transfer console
https://console.cloud.google.com/storage/transfer?project=XXXX
how do I generate an MD5 string for my image? Say my image is located at https://www.planwallpaper.com/static/images/desktop-year-of-the-tiger-images-wallpaper.jpg, for example.
I can easily get the bytes value, but how would I generate the MD5 for this?
The docs were a bit vague. Any ideas?
An MD5 hash is used to ensure the data transferred into GCS is imported correctly. HTTPS data transfers include a variety of built-in checksums, but for very large imports of many, many files, errors can and do show up, and so GCS wants to be sure that each object that it downloads is exactly what you think it is.
An MD5 is a 128 bit number that is the result of running the MD5 algorithm on an object. This number can be represented in a variety of ways (the popular md5sum command uses hexadecimal strings). GCS asks that you represent this number as a base64 encoding. Here's a command that can generate an MD5 sum in the right format:
openssl md5 -binary NameOfSourceFile | openssl enc -base64
There's a standard GCS object that can be used to validate your MD5 logic. The object https://storage.googleapis.com/md5-test/md5-test has a base64'd MD5 string of BfnRTwvHpofMOn2Pq7EVyQ==.
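If you would rather compute the same value from C with OpenSSL, here is a minimal sketch (the MD5_* functions are OpenSSL's legacy digest API and are flagged deprecated in OpenSSL 3.0, but they still work; build with -lcrypto):
#include <stdio.h>
#include <openssl/md5.h>
#include <openssl/evp.h>

int main(int argc, char **argv) {
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    FILE *fp = fopen(argv[1], "rb");
    if (fp == NULL) { perror("fopen"); return 1; }

    MD5_CTX ctx;                                 /* stream the file through MD5 */
    MD5_Init(&ctx);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0)
        MD5_Update(&ctx, buf, n);
    fclose(fp);

    unsigned char digest[MD5_DIGEST_LENGTH];
    MD5_Final(digest, &ctx);

    unsigned char b64[32];                       /* 16 raw bytes -> 24 base64 chars + NUL */
    EVP_EncodeBlock(b64, digest, MD5_DIGEST_LENGTH);
    printf("%s\n", b64);
    return 0;
}
Run against a local copy of the md5-test object above, it should print the BfnRTwvHpofMOn2Pq7EVyQ== value quoted there.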
I have a problem with encrypting a string.
I have the command to encrypt a file using OpenSSL,
but I want to know how to encrypt a string, not a file.
The command for encrypting a file is:
system("openssl des3 -e -nosalt -in %s -out %s -k %s > /tmp/sys; cat /tmp/sys", src, dest, key);
where src and dest are the two file names.
What options are available with OpenSSL?
In the above, -in and -out are the options for encrypting a text file.
I need a way to encrypt a character array variable.
As pointed out on another question (by someone who dug through the code more than I did):
https://opensource.conformal.com/viewgit/?a=viewblob&p=cyphertite&h=899259f8ba145c11087088ec83153db524031800&hb=6782c6839d847fbed0aed8c55917e78b5684110f&f=cyphertite/ct_crypto.c
has the code you need to use OpenSSL natively to perform encryption/decryption in your app.
Happy hacking!
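For a rough idea of what that code boils down to, here is a minimal sketch that encrypts an in-memory character array with OpenSSL's EVP API, using the same des3 (triple-DES CBC) cipher as the command in the question. The key and IV are zero-filled placeholders; note that the openssl CLI's -k option derives key and IV from a passphrase instead (build with -lcrypto):
#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void) {
    unsigned char key[24] = {0};   /* placeholder key bytes: substitute real key material */
    unsigned char iv[8]   = {0};   /* placeholder IV */

    const char *plaintext = "the string to encrypt";
    unsigned char ciphertext[128];
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_des_ede3_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len,
                      (const unsigned char *)plaintext, (int)strlen(plaintext));
    total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + total, &len);   /* adds the final padded block */
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    for (int i = 0; i < total; i++)
        printf("%02x", ciphertext[i]);                    /* hex dump of the ciphertext */
    printf("\n");
    return 0;
}
Decryption is symmetric: swap the Encrypt calls for EVP_DecryptInit_ex / EVP_DecryptUpdate / EVP_DecryptFinal_ex with the same key and IV.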
I have a text file that I have to sign with an RSA private key, then append the signature, and then do an AES encryption over this "text file + signature".
For demonstration purposes I am testing with such an encrypted file.
I am writing a simple program in C to do the following:
First do an RSA sign(1024 bit) on a text file.
Then append the signature to the text file
Then do an AES encryption over the file.
Then perform the AES decryption
Then remove the 128-byte (1024-bit modulus) signature from the file.
Then do an RSA verification of the original text file and the text file after decryption.
Here are my questions:
Is it a good idea to append a binary signature to a text file?
If not, what is the general way this is done?
I tried a simple program to do the above, but I always get one or two junk characters after AES decryption, and therefore the RSA verification fails.
Please advise.
The ad hoc standard for embedding crypto information in text files was introduced by Privacy Enhanced Mail (PEM) some time ago: basically, the binary information is base64-encoded and appended to the text file along with header lines that identify the "snip" point for the added content.
Here is a sample of what it typically looks like (this chunk would be added to the end of the existing text file)
-----BEGIN PRIVACY-ENHANCED MESSAGE-----
Proc-Type: 4,ENCRYPTED
Content-Domain: RFC822
DEK-Info: DES-CBC,F8143EDE5960C597
Originator-ID-Symmetric: linn@zendia.enet.dec.com,,
Recipient-ID-Symmetric: linn@zendia.enet.dec.com,ptf-kmc,3
Key-Info: DES-ECB,RSA-MD2,9FD3AAD2F2691B9A,
B70665BB9BF7CBCDA60195DB94F727D3
Recipient-ID-Symmetric: pem-dev@tis.com,ptf-kmc,4
Key-Info: DES-ECB,RSA-MD2,161A3F75DC82EF26,
E2EF532C65CBCFF79F83A2658132DB47
LLrHB0eJzyhP+/fSStdW8okeEnv47jxe7SJ/iN72ohNcUk2jHEUSoH1nvNSIWL9M
8tEjmF/zxB+bATMtPjCUWbz8Lr9wloXIkjHUlBLpvXR0UrUzYbkNpk0agV2IzUpk
J6UiRRGcDSvzrsoK+oNvqu6z7Xs5Xfz5rDqUcMlK1Z6720dcBWGGsDLpTpSCnpot
dXd/H5LMDWnonNvPCwQUHt==
-----END PRIVACY-ENHANCED MESSAGE-----
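As an illustration of that approach, here is a minimal sketch that base64-encodes a 128-byte signature with OpenSSL and appends it to the text file between marker lines (the marker text and function name are made up for this example, not a fixed standard; build with -lcrypto):
#include <stdio.h>
#include <openssl/evp.h>

/* Append a 128-byte RSA signature to a text file as a base64 block with
   BEGIN/END markers, so the "snip" point is easy to find on the other side. */
int append_signature(const char *path, const unsigned char sig[128]) {
    unsigned char b64[180];                      /* 128 bytes -> 172 base64 chars + NUL */
    EVP_EncodeBlock(b64, sig, 128);

    FILE *fp = fopen(path, "a");
    if (fp == NULL) return -1;
    fprintf(fp, "-----BEGIN SIGNATURE-----\n%s\n-----END SIGNATURE-----\n", b64);
    fclose(fp);
    return 0;
}

int main(void) {
    unsigned char sig[128] = {0};                /* placeholder signature bytes */
    return append_signature("message.txt", sig) == 0 ? 0 : 1;
}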
I'm trying to implement the FTP commands GET and PUT over a UNIX socket for file transfer, using the usual functions like fread(), fwrite(), send() and recv().
It works fine for text files, but fails for binary files (diff says: "binary files differ")
Any suggestions regarding the following will be appreciated:
Are there any specific commands to read and write binary data?
Can diff be used to compare binary files?
Is it possible to send the binary parts in chunks of memory?
The FTP protocol has two modes of operation: text and binary.
Try it in any FTP client -- I believe the commands for switching between them are ASCII and BIN. From what I recall, the text mode only has an effect on CR/LF pairs, though.
If you're reading from a file and then writing the file's data to the socket, make sure you open the file in binary mode.
Yes, diff can be used to compare binary files, typically with the -q option to suppress the actual printing of differences, which rarely makes sense for binary files. You can also use md5 or cmp if you have them.
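Tying the last two points together, here is a minimal sketch of sending a file over an already-connected socket in fixed-size chunks, with the file opened in binary mode so the bytes arrive unchanged:
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Send a file over an already-connected socket in fixed-size chunks.
   Opening with "rb" keeps the bytes untouched, so binary files survive intact. */
int send_file(int sockfd, const char *path) {
    FILE *fp = fopen(path, "rb");
    if (fp == NULL) return -1;
    char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, fp)) > 0) {
        size_t off = 0;
        while (off < n) {                        /* send() may write less than asked */
            ssize_t sent = send(sockfd, buf + off, n - off, 0);
            if (sent < 0) { fclose(fp); return -1; }
            off += (size_t)sent;
        }
    }
    fclose(fp);
    return 0;
}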