Resteasy: what do bytes have to do with producing and consuming "text/plain"?

I'm not sure why the code below produces and consumes "text/plain", since we're dealing with bytes.
According to Oracle: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html
a byte is 8 bits and "has a minimum value of -128 and a maximum value of 127 (inclusive)." So what does that have to do with "text"?
@Path("/")
public class MyService
{
    @GET
    @Produces("text/plain")
    public byte[] get()
    {
        return "hello world".getBytes();
    }

    @POST
    @Consumes("text/plain")
    public void post(byte[] bytes)
    {
        System.out.println(new String(bytes));
    }
}
I can understand that the POST clearly deals with text, as it builds a String.
Please keep answers understandable. Thanks again.

A String is a list of characters. Each character maps to a value in a charset table. So if you look at an ASCII or UTF-8 table, you can see that the decimal value 65 maps to the capital letter 'A'.
With this information you can convert from a byte to a String and back:
new String(new byte[] {65}) // A
"A".getBytes("UTF-8")[0] // 65
If you have a RESTful resource, your JAX-RS runtime will choose the appropriate MessageBodyWriter for the content type you have specified with @Produces("text/plain"). This MessageBodyWriter knows how to convert a byte[] to text/plain.
But as @Thilo already commented: returning a String seems clearer (unless you read, e.g., a byte[] from a file and don't want to convert it to a String yourself).
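To make the byte/text relationship concrete, here is a small stand-alone sketch (plain Java, no JAX-RS involved) of the conversion a MessageBodyWriter performs, with the charset made explicit:

```java
import java.nio.charset.StandardCharsets;

public class ByteTextDemo {
    public static void main(String[] args) {
        // "text/plain" on the wire is just bytes plus a charset.
        byte[] wireBytes = "hello world".getBytes(StandardCharsets.UTF_8);
        // The reader reverses the mapping with the same charset.
        String decoded = new String(wireBytes, StandardCharsets.UTF_8);
        System.out.println(wireBytes.length + " bytes -> " + decoded);
        // prints: 11 bytes -> hello world
    }
}
```

So "text" is only an interpretation of the bytes under an agreed charset, which is exactly why a byte[] can carry text/plain.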

Related

EVP_KEY get raw private key in C

Good day,
I have been trying to do a simple exercise where I could generate the public and the private key using RSA with Openssl and print them both. My code looks something like this:
size_t private_key_len = KEY_LENGTH;
EVP_PKEY *pkey = EVP_RSA_gen(KEY_LENGTH);
if (pkey == NULL)
{
    fprintf(stderr, "error: rsa gen\n");
    ERR_print_errors_fp(stderr);
    return NULL;
}
unsigned char *private_key = calloc(KEY_LENGTH + 1, sizeof(unsigned char));
EVP_PKEY_get_raw_private_key(pkey, private_key, &private_key_len);
printf("%s\n", private_key);
So normally it should print the private key, given that KEY_LENGTH is 1024; however, it just prints nothing (the zeros initialized by calloc). I have tried malloc too; the result is similar, the only difference being that it prints 0xBE.
So basically the array private_key is never filled, and I have no idea why.
What am I missing to make this work?
Thanks in advance!
Quoting the man page with emphasis changed:
EVP_PKEY_get_raw_private_key() fills the buffer provided by priv with raw private key data. The size of the priv buffer should be in *len on entry to the function, and on exit *len is updated with the number of bytes actually written. If the buffer priv is NULL then *len is populated with the number of bytes required to hold the key. The calling application is responsible for ensuring that the buffer is large enough to receive the private key data. This function only works for algorithms that support raw private keys. Currently this is: EVP_PKEY_HMAC, EVP_PKEY_POLY1305, EVP_PKEY_SIPHASH, EVP_PKEY_X25519, EVP_PKEY_ED25519, EVP_PKEY_X448 or EVP_PKEY_ED448.
Notice that RSA is not one of the supported algorithms.
You can "print" an RSA key in two ways. One is to convert each of its components (n, e, d, p, q, dp, dq, qinv) to printable form, which EVP_PKEY_print_private does for you. The other is to get the encoding of the whole key in PEM, which is 'printable' in the sense of consisting of printable and typable characters, but not in the sense of being easily understood (or copied or created) by people; use PEM_write_PrivateKey or PEM_write_RSAPrivateKey for that.
Also, the value you pass to EVP_RSA_gen is in bits, but the size of displayed components of an RSA key (other than e, which is small) will be in hex or decimal digits or (mostly) base64 characters.

CGo Convert go string to *C.uchar

var originalMsg *C.uchar
C.ecall_pay_w(8, 10, &originalMsg, &signature)
originalMsgStr := fmt.Sprintf("%c", originalMsg)
// TODO: convert originalMsgStr to the same value as originalMsg
I have to convert the Go string (originalMsgStr) to a *C.uchar with the same value as originalMsg.
How can I do it?
You get a C-string back from your call to C.ecall_pay_w and want to convert that C-string to a Go-string. You can do this by manually following the C-string until you reach the terminating 0.
Assuming that:
There is a terminating 0 at the end of the C-string
The C-string is encoded as ASCII, so every byte represents an ASCII character (in the range [0..127]). This means it is both ASCII and UTF-8 at the same time because UTF-8 is backward compatible to ASCII.
Then your solution could be this:
func convertCStringToGoString(c *C.uchar) string {
    var buf []byte
    for *c != 0 {
        buf = append(buf, byte(*c))
        c = (*C.uchar)(unsafe.Pointer(uintptr(unsafe.Pointer(c)) + 1))
    }
    return string(buf)
}
Note that doing "unsafe" things like this in Go is cast-heavy. That was done on purpose by the Go authors. You need to convert to unsafe.Pointer before you can convert to uintptr. The uintptr can be added to (+ 1) while the unsafe.Pointer does not support that. These are the reasons for that much casting.
I do not know Go in much detail, but do not forget that in C, *C.uchar would be something like unsigned char *, which is often used to reference a string (a null-terminated array of characters).
Here you use fmt.Sprintf("%c", originalMsg) with %c, which expects a single character; so apart from the language detail of how you would cast the resulting string to a *C.uchar, you have most probably lost content already.
%c the character represented by the corresponding Unicode code point
From https://golang.org/pkg/fmt/#hdr-Printing

Get size of Blob / String in bytes in Apex?

I want to know the size of a String/Blob in Apex.
What I found is just the size() method, which returns the number of characters in the String/Blob.
What is the size of a single character in Salesforce?
Or is there any way to know the size in bytes directly?
I think the only real answer here is "it depends". Why do you need to know this?
The methods on String like charAt and codePointAt suggest that UTF-16 might be used internally; in that case, each character would be represented by 2 or 4 bytes, but this is hardly "proof".
Apex seems to be translated to Java and running on some form of JVM and Strings in Java are represented internally as UTF-16 so again that could indicate that characters are 2 or 4 bytes in Apex.
Any time Strings are sent over the wire (e.g. as responses from a @RestResource annotated class), UTF-8 seems to be used as the character encoding, which would mean 1 to 4 bytes per character, depending on which character it is. (See section 2.5 of the Unicode standard.)
But you should really ask yourself why you think your code needs to know this because it most likely doesn't matter.
You can estimate string size doing the following:
String testString = 'test string';
Blob testBlob = Blob.valueOf(testString);
// convertToHex returns a hexadecimal representation of the blob;
// four bits of the blob become one hexadecimal character
String hexString = EncodingUtil.convertToHex(testBlob);
// one byte is eight bits, i.e. two hex characters,
// so the number of bytes is
Integer numOfBytes = hexString.length() / 2;
Another option to estimate the size would be to get the heap size before and after assigning value to String variable:
String testString;
System.debug(Limits.getHeapSize());
testString = 'testString';
System.debug(Limits.getHeapSize());
The difference between two printed numbers would be the size a string takes on the heap.
Please note that the values obtained from those methods will be different. We don't know what type of encoding is used for storing string in Salesforce heap or when converting string to blob.
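For comparison, Java (whose String API Apex closely mirrors) exposes the byte count directly via getBytes. This is plain Java, not Apex, offered only to illustrate how the byte count depends on the encoding you choose:

```java
import java.nio.charset.StandardCharsets;

public class StringByteSize {
    public static void main(String[] args) {
        String s = "h\u00e9llo"; // 5 characters, but 'é' needs 2 bytes in UTF-8
        int utf8Bytes = s.getBytes(StandardCharsets.UTF_8).length;
        int utf16Bytes = s.getBytes(StandardCharsets.UTF_16BE).length;
        System.out.println(s.length() + " chars, " + utf8Bytes
                + " UTF-8 bytes, " + utf16Bytes + " UTF-16 bytes");
        // prints: 5 chars, 6 UTF-8 bytes, 10 UTF-16 bytes
    }
}
```

This is the same ambiguity the answers above describe: "size in bytes" is only meaningful once you fix an encoding.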

JavaMail API getSubject(), subject has multiple "=?utf-8?B?~?=", how can I parse?

My mail Subject is
Subject: =?utf-8?B?7IOI66Gc7Jq0IOyEpOusuOyhsOyCrOqwgCDsi5zsnpHrkJjs?=
=?utf-8?B?l4jsirXri4jri6QhIOydtCDquLDtmowg64aT7LmY7KeAIOuniOyEuOya?=
=?utf-8?B?lCE=?=
mimeMessage.getSubject() returns the following:
The black diamonds are replacement characters for bytes that failed to decode. The language is Korean.
And the below is the correct subject:
When I concatenated the raw data as below, MimeUtility.decodeText() returned a good result
(deleting \r\n and the inner "=?utf-8?B?" and "?=").
MimeUtility.decodeText(=?utf-8?B?7IOI66Gc7Jq0IOyEpOusuOyhsOyCrOqwgCDsi5zsnpHrkJjsl4jsirXri4jri6QhIOydtCDquLDtmowg64aT7LmY7KeAIOuniOyEuOyalCE=?=)
The result is:
How can I parse the subject which has multiple lines?
The problem is that the mailer that encoded this text encoded it incorrectly. What mailer was used to create this message?
The 16 bit Korean Unicode characters are converted to a stream of 8 bit bytes in UTF-8 format. The 8 bit bytes are then encoded using base64 encoding.
The MIME spec (RFC 2047) requires that each encoded word contain complete characters:
Each 'encoded-word' MUST represent an integral number of characters. A multi-octet character may not be split across adjacent 'encoded-word's.
In your example above, the bytes representing one of the Korean characters are split across multiple encoded words. Combining them into one encoded word, as you have done, allows the text to be decoded correctly.
This is a bug in the mailer that created the message and should be reported to the owner of that mailer.
Unfortunately, there's no good workaround in JavaMail for such a broken mailer.
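That said, the manual join described in the question can be scripted. A hedged sketch using only java.util.Base64 (no JavaMail dependency), assuming every chunk has the form =?utf-8?B?...?= as in this message; decodeJoined is a made-up helper name:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class EncodedWordJoiner {
    /** Join adjacent =?utf-8?B?...?= chunks and decode them as one payload. */
    static String decodeJoined(String rawHeader) {
        String b64 = rawHeader
                .replaceAll("\\s+", "")       // drop CRLF and folding whitespace
                .replace("?==?utf-8?B?", "")  // seam between adjacent words
                .replace("=?utf-8?B?", "")    // leading marker
                .replace("?=", "");           // trailing marker ('?' never occurs in Base64)
        byte[] bytes = Base64.getDecoder().decode(b64);
        return new String(bytes, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        // "hellothere" split mid-stream across two encoded words, as in the broken mail.
        String raw = "=?utf-8?B?aGVsbG90?=\r\n =?utf-8?B?aGVyZQ==?=";
        System.out.println(decodeJoined(raw)); // prints "hellothere"
    }
}
```

This only works because all chunks share the same charset and encoding; a general RFC 2047 parser must handle mixed charsets, Q-encoding, and case-insensitive markers.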
I created a function which decodes the text iteratively, up to 5 times.
/*
 * Decodes text that may contain several encoded words, calling
 * MimeUtility.decodeText() up to 5 times.
 */
private String decode(String encoded) throws UnsupportedEncodingException {
    String result = MimeUtility.decodeText(encoded);
    int counter = 0;
    while (result.contains("=?") && counter < 5) {
        counter++;
        String end = result.substring(result.indexOf("=?"));
        result = result.substring(0, result.indexOf("=?")) + MimeUtility.decodeText(end);
    }
    return result;
}

how to check if a string is matched with another string

I saved some user information in a .txt file. Every time the application launches, it checks some specific things, such as whether the device id is the same as the saved one. Two kinds of returns are possible from the device id, differing by a few characters.
Say I saved the string "pWch7r1fzu tILmQIMjIylBZxJk=" in the txt file. I want the application to accept the device id whether it returns "pWch7r1fzu+tILmQIMjIylBZxJk=" or "pWch7r1fzu tILmQIMjIylBZxJk=".
Clearly, I have no idea how to achieve this. I have tried IndexOf and InStr; they seem to work fine: if the string matches they return zero, and if not, -1.
But if the device id is "hfhejkfnenknBG+hhhh" they return -1 as well, and I do not want that one to be accepted by the application.
Finally, what I want is to match the exact string; a difference of one or two characters is okay, but not more than that.
In this case, string a is the given string and b is the string to be matched.
string a = "abcdefghi";
string b = "abcdefghi";
if (a.SequenceEqual(b))
{
    // success
}
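Given that the only observed difference in the question is a '+' turned into a space (a common artifact of URL-decoding a Base64 value), an alternative is to normalize that one character before comparing, rather than allowing arbitrary differences. A sketch in Java (the question's code is C#, but the idea carries over directly; matchesDeviceId is a made-up name):

```java
public class DeviceIdMatch {
    // Treat ' ' and '+' as the same character before comparing,
    // since URL-decoding a Base64 id turns '+' into ' '.
    static boolean matchesDeviceId(String saved, String candidate) {
        return saved.replace(' ', '+').equals(candidate.replace(' ', '+'));
    }

    public static void main(String[] args) {
        System.out.println(matchesDeviceId(
                "pWch7r1fzu tILmQIMjIylBZxJk=",
                "pWch7r1fzu+tILmQIMjIylBZxJk=")); // prints "true"
        System.out.println(matchesDeviceId(
                "pWch7r1fzu tILmQIMjIylBZxJk=",
                "hfhejkfnenknBG+hhhh"));          // prints "false"
    }
}
```

This accepts exactly the two variants the question wants while still rejecting unrelated strings, without the fuzziness of a general edit-distance check.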
