I have a test file with delimited data, and one of the fields is an amount that arrives as a string and needs to be converted to a decimal value. Following Camel Bindy I tried the approach below, but I ended up with the same value without the expected decimal precision.
Input: 12345678
@DataField(name = "AMT", trim = true, pos = 15, precision = 2)
private BigDecimal amount;
Route:
.unmarshal().bindy(BindyType.Csv, Test.class)
Output:
12345678.00
It is not being converted to 123456.78.
Please help me with any suggestions.
The @DataField annotation provides another attribute that supports exactly this kind of conversion:
@DataField(name = "AMT", trim = true, pos = 15, precision = 2, impliedDecimalSeparator = true)
private BigDecimal amount;
impliedDecimalSeparator tells Bindy that the incoming string carries an implied decimal point; together with precision it determines where the decimal point goes and converts the value accordingly.
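A minimal sketch of the annotated record (the class name comes from your route; the @CsvRecord separator shown here is only illustrative, assuming a standard Bindy CSV record):

@CsvRecord(separator = ",")
public class Test {

    // with precision = 2 and impliedDecimalSeparator = true, "12345678" is read as 123456.78
    @DataField(name = "AMT", trim = true, pos = 15, precision = 2, impliedDecimalSeparator = true)
    private BigDecimal amount;
}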
What is the Kotlin 1.5 way to convert a 16-bit integer into a byte array of length 2? A secondary problem is that the OutputStream seems to need a String at the end so it can convert it with toByteArray().
# Original Python Code
...
i = int((2**16-1)*ratio) # 16 bit int
i.to_bytes(2, byteorder='big')
output = (i).to_bytes(2, byteorder='big')
# Kotlin Code so far
var i = ((2.0.pow(16) - 1) * ratio).toInt() // Convert to 16 bit Integer
print("16 bit Int: " + i)
output = .....
....
...
val outputStream: OutputStream = socket.getOutputStream()
outputStream.write(output.toByteArray()) // write requires ByteArray for some reason
This is simple math, so it is probably best to calculate it manually and define an extension function:
fun Int.to2ByteArray() : ByteArray = byteArrayOf(toByte(), shr(8).toByte())
Then you can use it:
output = i.to2ByteArray()
outputStream.write(output)
Note that this function writes the integer in little-endian order. If you need big-endian, just reverse the order of the items in the array. You can also add min/max checks if you need them.
Also, if you only need 16-bit values, consider using Short or UShort instead of Int. It doesn't change much regarding memory usage, but it can be a cleaner approach: the extension could simply be named toByteArray() and no min/max checks would be needed. Both variants are sketched below.
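For example, a rough sketch of both variants (the names to2ByteArrayBE and Short.toByteArray are my own, purely illustrative):

// Big-endian variant: most significant byte first (assumes the value fits in 16 bits)
fun Int.to2ByteArrayBE(): ByteArray = byteArrayOf(shr(8).toByte(), toByte())

// Short-based variant: the type already guarantees a 16-bit value, so no range checks are needed
fun Short.toByteArray(): ByteArray {
    val v = toInt()
    return byteArrayOf((v shr 8).toByte(), v.toByte()) // big-endian
}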
I tried to convert a byte array to a string using the UTF-8 charset, but it's not working. Can someone guide me, please?
Here is how I convert the Bitmap to a byte array:
private fun BitmapToByteArray(): ByteArray {
    val stream = ByteArrayOutputStream()
    btm1!!.compress(Bitmap.CompressFormat.PNG, 100, stream)
    val bitmapdata: ByteArray = stream.toByteArray()
    return bitmapdata
}
Here is how I encrypt the data:
private fun encrypting_data(bitmapdata: ByteArray): String {
    val key = secretkey!!.text.toString()
    val btm1 = bitmapdata.toString(Charsets.UTF_8)
    val s = btm1
    // generating key from the given secret key
    val skey: Key = SecretKeySpec(key.toByteArray(), "AES")
    print(skey.toString())
    val c: Cipher = Cipher.getInstance("AES")
    c.init(Cipher.ENCRYPT_MODE, skey)
    // encrypting the text string
    val re = c.doFinal(s.toByteArray())
    // converting the encrypted bytes to Base64
    val re_base64 = Base64.encodeToString(re, Base64.NO_WRAP or Base64.NO_PADDING)
    Log.e("aaAA", re_base64.toString())
    // converting each char of the Base64 string to binary and combining it
    for (i in re_base64) {
        var single_b_string = Integer.toBinaryString(i.toInt())
        // if the binary string is shorter than 8 bits, left-pad it with 0's
        if (single_b_string.length < 8) {
            for (j in 1..(8 - single_b_string.length)) {
                single_b_string = "0" + single_b_string
            }
        }
        // final binary string to hide in the image
        b_string = b_string + single_b_string
    }
    Log.e("barraylength", b_string.toString())
    Log.e("barray", b_string!!.length.toString())
    return b_string.toString()
}
Please guide me. Thank you.
Short answer: no charset will do this correctly.
Charsets are used to map characters to binary and vice versa. It doesn't make sense to decode the bytes of an image into a string using a character encoding. There is a good chance that some byte sequences are simply not valid in the character encoding you choose, so they will not be converted to characters correctly.
Sometimes it's necessary to use text to represent binary data (e.g. when using text-only transports/media to store it).
In these cases, you can use other kinds of encodings, for instance Base64, but I guess you know about it because you're already sort of using base64 here as well.
Note that, in your current code, you are converting a ByteArray (bitmapdata) into a String (btm1/s) only to convert it back into a ByteArray (s.toByteArray()). Why do you even need to do so?
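As a minimal sketch of that idea, you could encrypt the raw bitmap bytes directly and only use Base64 for the textual result (the function name encryptData is illustrative; key handling is kept as in your code and assumes a key of valid AES length):

import android.util.Base64
import java.security.Key
import javax.crypto.Cipher
import javax.crypto.spec.SecretKeySpec

private fun encryptData(bitmapdata: ByteArray, key: String): String {
    val skey: Key = SecretKeySpec(key.toByteArray(), "AES")
    val c = Cipher.getInstance("AES")
    c.init(Cipher.ENCRYPT_MODE, skey)
    // encrypt the image bytes directly, no charset round trip
    val encrypted = c.doFinal(bitmapdata)
    // Base64 is the appropriate tool for representing binary data as text
    return Base64.encodeToString(encrypted, Base64.NO_WRAP)
}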
I have an ESP8266 (Arduino) that receives a string (as per the Arduino String class library) of 20 numbers ranging from 0 to 200, comma delimited.
I would like to parse and place the values into an array of integers (e.g. int IntArray[21];). This is what the String looks like:
dataFromClient = "1,2,1,0,1,1,0,1,0,25,125,0,175,100,0,25,175,0,50,125";
I have tried numerous times for the past 2 weeks and I keep getting into "string" hell! Any help would be greatly appreciated.
You should give more details about what you have tried so far.
Since you are using the Arduino libraries, you can use the toInt() member function of the String class.
unsigned int data_num = 0;
int data[21];
// loop as long as a comma is found in the string
while (dataFromClient.indexOf(",") != -1) {
    // take the substring from the start to the first occurrence of a comma,
    // convert it to int and save it in the array
    data[data_num] = dataFromClient.substring(0, dataFromClient.indexOf(",")).toInt();
    data_num++; // increment our data counter
    // cut the data string after the first occurrence of a comma
    dataFromClient = dataFromClient.substring(dataFromClient.indexOf(",") + 1);
}
// get the last value out of the string, which has no more commas in it
data[data_num] = dataFromClient.toInt();
In this code the string is consumed until only the last value is left in it. If you want to preserve the string, you can instead define a position variable as the substring start point and update it on every loop cycle to the position after the next comma, as sketched below.
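A rough sketch of that non-destructive variant (the variable names start and commaPos are illustrative):

unsigned int data_num = 0;
int data[21];
int start = 0;
int commaPos;
// walk the string with a moving start index instead of cutting it
while ((commaPos = dataFromClient.indexOf(",", start)) != -1 && data_num < 20) {
    data[data_num++] = dataFromClient.substring(start, commaPos).toInt();
    start = commaPos + 1;
}
// last value: there is no trailing comma after it
data[data_num] = dataFromClient.substring(start).toInt();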
I am facing a simple issue in the program below. I am just subtracting numbers and the expected output is "89.50", but it prints 90. Can you explain the reason and help me with the code to get the expected output?
import java.math.BigDecimal;
import java.math.MathContext;
import java.util.ArrayList;
import java.util.List;

public class BigDecimal_Prb {
    public static void main(String[] args) throws java.lang.Exception
    {
        MathContext mc = new MathContext(2);
        List<BigDecimal> list = new ArrayList<BigDecimal>();
        list.add(BigDecimal.valueOf(30));
        list.add(BigDecimal.valueOf(120.00));
        BigDecimal[] nums = (BigDecimal[]) list.toArray(new BigDecimal[0]);
        BigDecimal reaminingAmt = nums[1].subtract(nums[0], mc);
        BigDecimal dedAmt = new BigDecimal(0.5);
        BigDecimal ans = reaminingAmt.subtract(dedAmt, mc);
        System.out.println(ans);
    }
}
Subtraction certainly works, but the other things you do cause the "unexpected" result.
A MathContext contains two main elements: a precision and a rounding mode. I guess you understood the rounding mode, but not the precision.
The precision is the number of significant digits of a BigDecimal. In other words, if you set it to 2, you can expect the number to be rounded to two significant digits, i.e. 89.5 becomes 90. If you want a certain number of digits after the decimal point, use the scale instead:
BigDecimal ans = reaminingAmt.subtract(dedAmt).setScale(2);
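To see the difference, here is a small illustrative comparison using the value from the question (RoundingMode.HALF_UP is just an example choice; uses java.math.BigDecimal, MathContext and RoundingMode):

BigDecimal v = new BigDecimal("89.50");
System.out.println(v.round(new MathContext(2)));          // 90    -> two significant digits
System.out.println(v.setScale(2, RoundingMode.HALF_UP));  // 89.50 -> two digits after the decimal point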
Yes. Finally got it.
No, it's not a scaling issue. The "solution" I found has the side-effect of changing the result of the BigDecimal.toString() method. However, the intent of setScale() is to fine-tune the results of internal calculations performed on the BigDecimal value. If you only want the scale of your BigDecimal to be 2 because that's all you need for calculation results to be acceptable, that's fine, set the scale. But if you're using setScale() so that your output will show "120.00" instead of "120.0", then you're using the wrong approach and doing it for the wrong reasons, IMO.
If you need the scale of 2, change all your instance creations to use the BigDecimal(String) constructor. That will retain the .00 part, and you will then get the scale of 2 you have been looking for.
import java.math.BigDecimal;
import java.math.MathContext;
import java.math.RoundingMode;
import java.util.ArrayList;
import java.util.List;

public class BigDecimal_Prb {
    public static void main(String[] args) throws java.lang.Exception
    {
        MathContext mc = new MathContext(4, RoundingMode.HALF_DOWN);
        List<BigDecimal> list = new ArrayList<BigDecimal>();
        list.add(BigDecimal.valueOf(30));
        list.add(BigDecimal.valueOf(120.00));
        BigDecimal[] nums = (BigDecimal[]) list.toArray(new BigDecimal[0]);
        BigDecimal reaminingAmt = nums[1].subtract(nums[0], mc);
        BigDecimal dedAmt = new BigDecimal(0.5);
        BigDecimal ans = reaminingAmt.subtract(dedAmt, mc).setScale(2, RoundingMode.HALF_DOWN);
        System.out.println(ans.toString());
    }
}
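To illustrate the BigDecimal(String) point: the String constructor preserves the trailing zeros, whereas valueOf(double) goes through Double.toString() and drops one:

BigDecimal a = new BigDecimal("120.00");    // scale 2, prints "120.00"
BigDecimal b = BigDecimal.valueOf(120.00);  // scale 1, prints "120.0"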
I use this code to encode a UTF8String in ASN.1:
const char *charExtensionValue = "test value тест тест with some cyrillic symbols";
CERT_NAME_VALUE myNameValue;
myNameValue.dwValueType = CERT_RDN_UTF8_STRING;
myNameValue.Value.cbData = (DWORD)(strlen(charExtensionValue)+1)*2;
myNameValue.Value.pbData = (LPBYTE)charExtensionValue;
CERT_BLOB encodedBlob;
bool checkASN1Encoding = CryptEncodeObjectEx(
    X509_ASN_ENCODING | PKCS_7_ASN_ENCODING, X509_ANY_STRING,
    &myNameValue, CRYPT_ENCODE_ALLOC_FLAG, NULL,
    &encodedBlob.pbData, &encodedBlob.cbData);
CryptEncodeObjectEx succeeds without any errors, but the result is not what I expected:
OCTET STRING, encapsulates {
UTF8String "ø§³û¦© Ґѐô´
What am I doing wrong?
The docs say CERT_RDN_UTF8_STRING means the value member must be "An array of 16 bit Unicode characters UTF8 encoded on the wire as a sequence of one, two, or three, eight-bit characters.", but charExtensionValue points to an array of 8-bit characters. Also, you are calculating the string length as if it were a UTF-16 string, which it is not. – Stuart
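A hedged sketch of the fix that comment points to: pass a wide-character (16-bit) string and compute cbData from its length in bytes (the wide literal and variable name are illustrative):

#include <windows.h>
#include <wincrypt.h>

const wchar_t *wideExtensionValue = L"test value тест тест with some cyrillic symbols";

CERT_NAME_VALUE myNameValue;
myNameValue.dwValueType = CERT_RDN_UTF8_STRING;
// size in bytes of the 16-bit character array, not a strlen-based calculation
myNameValue.Value.cbData = (DWORD)(wcslen(wideExtensionValue) * sizeof(wchar_t));
myNameValue.Value.pbData = (LPBYTE)wideExtensionValue;

CERT_BLOB encodedBlob;
BOOL ok = CryptEncodeObjectEx(
    X509_ASN_ENCODING | PKCS_7_ASN_ENCODING, X509_ANY_STRING,
    &myNameValue, CRYPT_ENCODE_ALLOC_FLAG, NULL,
    &encodedBlob.pbData, &encodedBlob.cbData);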