sha3 with option hex encoding vs non hex encoding with escaped string - web3js

I was trying to figure out why sha3("\x80") is different from sha3("0x80", { encoding: 'hex' }):
web3.sha3( "\x80" )
"0x0f50dcb7f76b82d3cf8c843adacd5cb4d1ce1b6de2ef1f2557f196d07c26f08e"
web3.sha3( "0x80" , { encoding : 'hex' } )
"0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421"
It works, however, if all my bytes are < 0x80. For example, with \x70:
web3.sha3( "\x70" )
"0x2304e88f144ae9318c71b0fb9e0f44bd9e0c6c58fb1b5315a35fd8b4b2a444ab"
web3.sha3( "0x70" , { encoding : 'hex' } )
"0x2304e88f144ae9318c71b0fb9e0f44bd9e0c6c58fb1b5315a35fd8b4b2a444ab"
The reason I'm asking is that while I can use { encoding: 'hex' } with web3.js, this option is not available in tronWeb's tronWeb.sha3(), which only takes one argument. So I'm looking for a workaround that doesn't rely on the { encoding: 'hex' } option.
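What's most likely happening is that without { encoding: 'hex' } the input is hashed as a UTF-8 string: "\x80" is code point U+0080, which UTF-8 encodes as the two bytes 0xC2 0x80, while { encoding: 'hex' } hashes the single raw byte 0x80. Code points below 0x80 encode to themselves in UTF-8, which is why \x70 gives the same hash both ways. If the library offers no encoding option, one workaround is to hash the raw bytes yourself with a standalone Keccak-256 implementation. A minimal sketch, assuming the js-sha3 npm package (the package and the hexToBytes helper are not part of the original question):

const { keccak256 } = require('js-sha3');

// Helper: convert a 0x-prefixed hex string to raw bytes.
function hexToBytes(hex) {
    const clean = hex.startsWith('0x') ? hex.slice(2) : hex;
    const bytes = new Uint8Array(clean.length / 2);
    for (let i = 0; i < bytes.length; i++) {
        bytes[i] = parseInt(clean.slice(i * 2, i * 2 + 2), 16);
    }
    return bytes;
}

// Hash the raw byte 0x80, equivalent to web3.sha3("0x80", { encoding: 'hex' }):
console.log('0x' + keccak256(hexToBytes('0x80')));
// "0x56e81f171bcc55a6ff8345e692c0f86e5b48e01b996cadc001622fb5e363b421"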

Related

dejagnu scan-assembler regex

I want to scan an assembly file with DejaGnu for:
ld.h %d2,
But I can't figure out how to escape the dot character and how to match the numeric character.
I tried the following: (one backslash, two backslashes and bracketing)
/* { dg-final { scan-assembler-times "ld\.h\t%d\d," 1 { target { tricore-*-* } } } } */
/* { dg-final { scan-assembler-times "ld\\.h\t%d\\d," 1 { target { tricore-*-* } } } } */
/* { dg-final { scan-assembler-times "ld[.]h\t%d[0-9]," 1 { target { tricore-*-* } } } } */
With one backslash, it simply disappears, so the meaning of the regex changes.
With two backslashes, both are kept, so nothing matches during the scan.
With brackets, an error appears.
The output:
Running ../../../../testsuite/mytest/TX-1234.exp ...
PASS: testsuite/mytest/size16.c (test for excess errors)
FAIL: testsuite/mytest/size16.c scan-assembler-times \tld.h\t%dd, 1
FAIL: testsuite/mytest/size16.c scan-assembler-times \tld\\.h\t%d\\d, 1
ERROR: (DejaGnu) proc "." does not exist.
So, how do I escape the dot character, and how do I express a \d decimal digit?
This:
(ld\.h \%d\d\,)
will match ld.h %d2,
but you should still check that it doesn't match anything unwanted in addition to everything it should, and tweak it if that's the case.
Enjoy.
The expression goes through a filter, so the correct way to escape a dot character is \\\. ( \\\. -> \. ). \d does not work in DejaGnu; use \[0-9\] instead ( \[0-9\] -> [0-9] ).
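Putting both rules together, the directive would look like this (a sketch following the escaping rules above, not verified against a real toolchain):
/* { dg-final { scan-assembler-times "ld\\\.h\t%d\[0-9\]," 1 { target { tricore-*-* } } } } */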

How can I use or convert the fgetc() function from C in Python 3?

I want to use the fgetc() function from C in Python 3. I think I have to use:
ord( fp.read( 1 ) ) but I am not sure. More specifically, I want to convert the following code from C to Python 3:
for (i = 0; i < xdimension; i++)
{
    for (j = 0; j < ydimension; j++)
    {
        temp = 1.164 * (fgetc(fp) - 16);
        if (temp > 255) { temp = 255; }
        if (i >= N || j >= M) { printf("i=%d,j=%d\n", i, j); }
        framed[i*M+j] = temp;
    }
}
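Your guess with ord() is essentially right. A minimal sketch of a direct translation, assuming fp was opened in binary mode ('rb') so that fp.read(1) returns a single byte, with xdimension, ydimension, N, M and framed carried over from the C code:

# fp must be opened in binary mode: fp = open(path, 'rb')
# In Python 3, fp.read(1) returns a bytes object of length 1;
# indexing it (or calling ord() on it) gives the integer byte
# value, just like fgetc() in C.
for i in range(xdimension):
    for j in range(ydimension):
        temp = 1.164 * (fp.read(1)[0] - 16)
        if temp > 255:
            temp = 255
        if i >= N or j >= M:
            print("i=%d,j=%d" % (i, j))
        framed[i * M + j] = temp  # use int(temp) if framed holds integers

One difference to keep in mind: at end of file, fgetc() returns EOF (-1), while fp.read(1) returns an empty bytes object, so indexing it raises an IndexError instead of silently producing -1.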

accessing range of values in arduino array

I have a data packet coming in like this in Arduino.
Data: 12345678901234
I can access the 5th value using the code below.
id = sx1272.packet_received.data[4];
My question is: how do I access a range of values?
I tried this, but the colon throws an error.
char[2] id;
if( sx1272.packet_received.length > 4 )
{
id = sx1272.packet_received.data[4:5];
}
Arrays in C++ don't allow this syntax.
What you should do is something like this:
char id[2];
if( sx1272.packet_received.length > 5 )
{
    id[0] = sx1272.packet_received.data[4];
    id[1] = sx1272.packet_received.data[5];
}
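For a longer range, copying element by element gets repetitive; memcpy does the same thing in one call. A minimal sketch, using the same field names as above:

#include <string.h> // for memcpy (available in Arduino sketches)

char id[2];
if( sx1272.packet_received.length > 5 )
{
    // copy 2 bytes starting at index 4 of the packet into id
    memcpy( id, &sx1272.packet_received.data[4], 2 );
}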

Spacing out every 2 characters in a string?

My string is "37829300".
How can I space out every 2 characters in the string so the result is "37 82 93 00"?
I am trying to achieve this in VC++.
Thanks.
I understand I may have to use #include <iostream>, but I am lost on how to do it properly.
Couldn't find a fancy one-liner regular expression, so let's do it the manual way.
private static string AddSpaceAfterTwoDigits(string input)
{
    // requires: using System.Text.RegularExpressions;
    string output = string.Empty;
    MatchCollection arr = Regex.Matches(input, @"\d\d");
    if ( arr.Count > 0 )
    {
        output = arr[0].Groups[0].Value; // Add the first pair with no space
        for ( int i = 1; i < arr.Count; i++ )
        {
            output += " " + arr[i].Groups[0].Value;
        }
    }
    return output;
}
The code is in C#, but it's a fairly straightforward conversion to C++/CLI.
The code assumes an input with an even number of digits.
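For reference, here is the same idea in plain C++ without regular expressions, as a sketch that makes the same even-length assumption:

#include <iostream>
#include <string>

// Insert a space after every two characters of the input.
std::string addSpaceAfterTwoDigits(const std::string& input)
{
    std::string output;
    for (std::string::size_type i = 0; i < input.size(); i += 2)
    {
        if (!output.empty())
            output += ' ';
        output += input.substr(i, 2);
    }
    return output;
}

int main()
{
    std::cout << addSpaceAfterTwoDigits("37829300") << std::endl; // prints: 37 82 93 00
}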

Token return values in ANTLR 3 C

I'm new to ANTLR, and I'm attempting to write a simple parser using the C language target (antlr3c). The grammar is simple enough that I'd like to have each rule return a value, e.g.:
number returns [long value]
:
( INT {$value = $INT.ivalue;}
| HEX {$value = $HEX.hvalue;}
)
;
HEX returns [long hvalue]
: '0' 'x' ('0'..'9'|'a'..'f'|'A'..'F')+ {$hvalue = strtol((char*)$text->chars,NULL,16);}
;
INT returns [long ivalue]
: '0'..'9'+ {$ivalue = strtol((char*)$text->chars,NULL,10);}
;
Each rule collects the return values of its child rules until the topmost rule returns a nice struct full of my data.
As far as I can tell, ANTLR allows lexer rules (tokens, e.g. INT and HEX) to return values just like parser rules (e.g. number). However, the generated C code will not compile:
error C2228: left of '.ivalue' must have class/struct/union
error C2228: left of '.hvalue' must have class/struct/union
I did some poking around, and the errors make sense - the tokens end up as the generic ANTLR3_COMMON_TOKEN_struct, which doesn't allow for a return value. So maybe the C target just doesn't support this feature. But like I said, I'm new to this, and before I go haring off to find another approach I want to confirm that I can't do it this way.
So the question is this: 'Does antlr3c support return values for lexer rules, and if so, what is the proper way to use them?'
Not really any new information, just some details on what @bemace already mentioned.
No, lexer rules cannot have return values. See 4.3 Rules from The Definitive ANTLR reference:
Rule Arguments and Return Values
Just like function calls, ANTLR parser and tree parser rules can have
arguments and return values. ANTLR lexer rules cannot have return
values [...]
There are two options:
Option 1
You can do the conversion to a long in the parser rule number:
number returns [long value]
: INT {$value = Long.parseLong($INT.text);}
| HEX {$value = Long.parseLong($HEX.text.substring(2), 16);}
;
Option 2
Or create your own token that has, say, a toLong(): long method:
import org.antlr.runtime.*;

public class YourToken extends CommonToken {
    public YourToken(CharStream input, int type, int channel, int start, int stop) {
        super(input, type, channel, start, stop);
    }

    // your custom method
    public long toLong() {
        String text = super.getText();
        int radix = text.startsWith("0x") ? 16 : 10;
        if (radix == 16) text = text.substring(2);
        return Long.parseLong(text, radix);
    }
}
then specify in the options { ... } header of your grammar that this token type should be used, and override the emit(): Token method in your lexer class:
grammar Foo;

options {
    TokenLabelType=YourToken;
}

@lexer::members {
    public Token emit() {
        YourToken t = new YourToken(input, state.type, state.channel,
                state.tokenStartCharIndex, getCharIndex()-1);
        t.setLine(state.tokenStartLine);
        t.setText(state.text);
        t.setCharPositionInLine(state.tokenStartCharPositionInLine);
        emit(t);
        return t;
    }
}
parse
: number {System.out.println("parsed: "+$number.value);} EOF
;
number returns [long value]
: INT {$value = $INT.toLong();}
| HEX {$value = $HEX.toLong();}
;
HEX
: '0' 'x' ('0'..'9'|'a'..'f'|'A'..'F')+
;
INT
: '0'..'9'+
;
When you generate a parser and lexer, and run this test class:
import org.antlr.runtime.*;
import java.io.*;

public class Main {
    public static void main(String[] args) throws Exception {
        ANTLRStringStream in = new ANTLRStringStream("0xCafE");
        FooLexer lexer = new FooLexer(in);
        CommonTokenStream tokens = new CommonTokenStream(lexer);
        FooParser parser = new FooParser(tokens);
        parser.parse();
    }
}
it will produce the following output:
parsed: 51966
The first option seems the more practical in your case.
Note that, as you can see, the examples given are in Java. I have no idea whether option 2 is supported by the C target/runtime. I decided to post it anyway so it can serve as a future reference here on SO.
Lexer rules must return Token objects, because that's what the Parser expects to work with. There may be a way to customize the type of token object used, but it's easier just to convert tokens to values in the lowest-level parser rules.
social_title returns [Name.Title title]
    : SIR    { title = Name.Title.SIR; }
    | 'Dame' { title = Name.Title.DAME; }
    | MR     { title = Name.Title.MR; }
    | MS     { title = Name.Title.MS; }
    | 'Miss' { title = Name.Title.MISS; }
    | MRS    { title = Name.Title.MRS; }
    ;
There is a third option: you can pass an object as an argument to the lexer rule. The object contains a member that represents the lexer rule's return value; inside the rule you set the member, and at the call site you read it back and do whatever you want with this 'return value'.
This way of passing parameters corresponds to 'var' parameters in Pascal or 'out' parameters in C# and other programming languages.
