How to decode a Solana transaction? - cryptocurrency

I am trying to decode a serialized Solana raw transaction. The issue I am facing is that I couldn't find any files or code related to this in the Solana web3.js library. Could anyone please advise?
I was going through this file
https://github.com/solana-labs/solana-web3.js/blob/master/src/transaction.ts
Thanks.

This is how instructions are encoded. As for instruction data, the question is whether the serialized data is self-describing or not. If it isn't, you will have to look at the code of the program you are sending the instruction to, since programs are free to specify how information is encoded into the instruction data byte array.

To decode an unsigned transaction, the txBufferFromHex variable should contain
{ 01 + an empty 64-byte signature (64 bytes of 00) + the unsigned transaction }
Then the from method will output the decoded instruction set:
const { Transaction } = require('@solana/web3.js');
const tx = Transaction.from(txBufferFromHex);

Related

nanopb - how to optimize encoding for runtime speed

I am using nanopb to do logging on an embedded system: my logging messages will be .proto messages.
Speed of encoding is the most important factor; I have plenty of RAM and flash.
QUESTION
Specific to nanopb, how do I minimize encoding time?
I know all the C optimizations I could make: inlining functions, placing the pb_encode functions in RAM instead of flash, etc.
Would it make a difference if I use all fixed-size types in my .proto file?
The easy methods I'm aware of are:
Encode to memory buffer and enable PB_BUFFER_ONLY at compile time.
If platform is little-endian and size of byte is 8 bits, you can define PB_LITTLE_ENDIAN_8BIT. It should be detected automatically on most compilers, though.
In your message definition, minimize the amount of submessages. They require separate size calculations which take time.
These could result in speed improvements up to 2x.
It is further possible to speed up encoding by programming direct calls to the encoding functions, instead of going through the message structure and descriptor loops. I expect this to increase encoding speed up to 5x.
Some knowledge of protobuf encoding spec will be needed.
For example, for this message:
message LogMessage {
    uint64 timestamp = 1;
    float batterylevel = 2;
    string logmessage = 3;
}
you could do:
#include <string.h>        // strlen
#include <pb_encode.h>     // nanopb encoder
#include "logmessage.pb.h" // generated from the .proto above (header name assumed)

void writelog(const char *logmsg)
{
    uint8_t buffer[256];
    pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));

    // Get system state
    uint64_t timestamp = get_system_timestamp();
    float batterylevel = get_batterylevel();

    // Encode timestamp
    pb_encode_tag(&stream, PB_WT_VARINT, LogMessage_timestamp_tag);
    pb_encode_varint(&stream, timestamp);

    // Encode battery level
    pb_encode_tag(&stream, PB_WT_32BIT, LogMessage_batterylevel_tag);
    pb_encode_fixed32(&stream, &batterylevel);

    // If we have an explicit message, encode it also.
    if (logmsg)
    {
        pb_encode_tag(&stream, PB_WT_STRING, LogMessage_logmessage_tag);
        pb_encode_string(&stream, (const pb_byte_t *)logmsg, strlen(logmsg));
    }

    // Save the encoded message data to storage
    save_log(buffer, stream.bytes_written);
}
While this results in hard-coding part of the message definition, the backward- and forward compatibility of protobuf messages works as usual. The tag numbers can be accessed by the Message_field_tag macros that are generated by nanopb generator. The field types are defined in the C code, but they are also embedded in the encoded messages so any mismatches will be detected on the decoding side.

C - How to determine the amount of bytes for JSON messages

I am working on a Linux-based project consisting of a "core" application, written in C, and a web server, probably written in Python. The core and web server must be able to communicate with each other over TCP/IP. My focus is on the core application, in C.
Because of the different programming languages used for the core and web server, I am looking for a message protocol which is easy to use in both languages. Currently I think JSON is a good candidate. My question, however, is not so much about the message protocol, but about how I would determine the amount of bytes to read from (and maybe send to) the socket, specifically when using a message protocol like JSON, or XML.
As I understand it, whether you use JSON, XML, or some other message protocol, you cannot include the size of the message in the message itself, because in order to parse the message, you would need the entire message and therefore need to know the size of it in advance. Note that by "message" I mean the data formatted according to the used message protocol.
I've been thinking and reading about the solution to this, and have come to the following two possibilities:
Determine the largest possible size of a message, say 500 bytes, and based on that determine the buffer size, say 512 bytes, and add padding to each message so that 512 bytes are sent;
Prepend each message with its size in "plain text". If the size is stored in an Int (4 bytes), then the receiver first reads 4 bytes from the socket and using those 4 bytes, determines how many bytes to read next for the actual message;
Because all of the offered solutions I've read weren't specifically for the use of some message protocol, like JSON, I think it's possible that maybe I am missing out on something.
So, which of the two possibilities I offered is the best, or, am I not aware of some other solution to this problem?
Kind regards.
This is a classic problem encountered with streams, including those of TCP, often called the "message boundary problem." You can search around for more detailed answers than what I can give here.
To determine boundaries, you have some options:
Fixed length with padding, as you said. Unless you have very small messages, this is not advisable.
Prepend with the size, as you said. If you want to get fancy and support large messages without wasting too many bytes, you can use a variable-length quantity, where one bit in each byte determines whether to read more bytes for the size. #alnitak mentioned a drawback in the comments that I neglected, which is that you can't start sending until you know the size. (A minimal sketch of the plain length-prefix approach follows this answer.)
Bound with some byte you don't use anywhere else (JSON and XML are text-only, so '\0' works with ASCII or UTF-8). Simple, but slower on the receiving end because you have to scan every byte this way.
Edit: JSON, XML, and many other formats can also be parsed on-the-fly to determine boundaries (e.g. each { must be closed with } in JSON), but I don't see any advantage to doing this.
If this isn't just a learning experience, you can instead use an existing protocol to do this all for you. HTTP (inefficient) or gRPC (more efficient), for example.
Edits: I originally said something totally wrong about having to include a checksum to handle packet loss in spite of TCP... TCP won't advance until those packets are properly received, so that's not an issue. IDK what I was thinking.
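As a rough illustration of the length-prefix option, here is a minimal C sketch; the names send_msg/recv_msg and the fixed 4-byte big-endian prefix are just one possible convention for this answer, not part of any standard API:

#include <arpa/inet.h>    /* htonl, ntohl */
#include <stdint.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/types.h>

/* send()/recv() may transfer fewer bytes than asked, so loop until done. */
static int send_all(int fd, const void *buf, size_t len)
{
    const char *p = buf;
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

static int recv_all(int fd, void *buf, size_t len)
{
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return -1;
        p += n;
        len -= (size_t)n;
    }
    return 0;
}

/* Prefix the JSON payload with its length as a 4-byte big-endian integer. */
int send_msg(int fd, const char *json, uint32_t len)
{
    uint32_t be_len = htonl(len);
    if (send_all(fd, &be_len, sizeof be_len) < 0) return -1;
    return send_all(fd, json, len);
}

/* Read the 4-byte length first, then exactly that many payload bytes.
   A real implementation should sanity-check len against an upper bound. */
char *recv_msg(int fd, uint32_t *out_len)
{
    uint32_t be_len;
    if (recv_all(fd, &be_len, sizeof be_len) < 0) return NULL;
    uint32_t len = ntohl(be_len);
    char *buf = malloc((size_t)len + 1);
    if (!buf) return NULL;
    if (recv_all(fd, buf, len) < 0) { free(buf); return NULL; }
    buf[len] = '\0';   /* convenient for handing to a JSON parser */
    *out_len = len;
    return buf;
}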

C convert char* to network byte order before transfer

I'm working on a project where I must send data to a server, and this client will run on different OSes. I know about the problem with endianness on different machines, so I'm converting 'everything' (almost) to network byte order, using htonl, and vice versa on the other end.
Also, I know that for a single byte, I don't need to convert anything. But, what should I do for char*? ex:
send(sock,(char*)text,sizeof(text));
What's the best approach to solve this? Should I create an 'intermediate function' to intercept this 'send' and then really send this char array char by char? If so, do I need to convert every char to network byte order? I think not, since every char is only one byte.
Thinking about it, if I create this 'intermediate function', I don't have to convert anything else to network byte order, since this function will send char by char and thus won't need any endianness conversion.
I'd appreciate any advice on this.
I am presuming from your question that the application layer protocol (more specifically everything above level 4) is under your design control. For single byte-wide (octet-wide in networking parlance) data there is no issue with endian ordering and you need do nothing special to accommodate that. Now if the character data is prepended with a length specifier that is, say 2 octets, then the ordering of the bytes must be treated consistently.
Going with network byte ordering (big-endian) will certainly fill the bill, but so would consistently using little-endian. Consistency of byte ordering on each end of a connection is the crucial issue.
If the protocol is not under your design control, then the protocol specification should offer guidance on the issue of byte ordering for multi-byte integers and you should follow that.
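To make the single-byte point concrete, here is a minimal C sketch of sending a string with a 2-octet length prefix; the name send_text and the 2-octet prefix are just assumptions for illustration. Only the multi-byte length field is converted with htons; the character bytes go out unchanged (a robust version would also loop on short sends):

#include <arpa/inet.h>   /* htons */
#include <stdint.h>
#include <string.h>
#include <sys/socket.h>

int send_text(int sock, const char *text)
{
    uint16_t len = (uint16_t)strlen(text);   /* note: strlen, not sizeof, for a char* */
    uint16_t be_len = htons(len);            /* multi-byte field: convert */

    if (send(sock, &be_len, sizeof be_len, 0) != (ssize_t)sizeof be_len)
        return -1;
    if (send(sock, text, len, 0) != (ssize_t)len)   /* single octets: send as-is */
        return -1;
    return 0;
}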

Read Exif GPS info using Delphi

I need to get geolocation info from my photos. Lat/Lon and GPSVersion.
I've already found some info related to this question, I compared different EXIF headers and found a hexadecimal dump that gives me coordinates - now I need to get it from the file.
The question might seem very simple. How do I open a JPEG-file in Delphi to get necessary hexadecimal dumps?
Already tried to read Chars and Integers, but nothing worked. I would like not to use any external libraries for this task if possible.
This is basically my major question, but I'll be extremely happy if anyone could answer one more.
Is there an easy way to search GPS tags without searching the file for specific dumps? Now I'm looking for a strange combination 12 00 02 00 07 00, which really works. I've read EXIF documentation but I couldn't really understand the thing with GPS Tags.
If you require no external libraries, you can do this with TFileStream and an array of byte. I've done this in a project to obtain the 'picture taken date', the GPS lat-long coordinates are just another field in the EXIF header. I don't have the code here but the method is straight-forward: once you have a TFileStream to the JPEG file:
Read the first 2 bytes, check it is in fact $FF $D8 (just to be sure it's a valid JPEG)
Read the next 2 bytes, check if it's $FF $E1
if it's not, depending on which segment it is, read two more bytes (or a word) and skip that many bytes (by calling the stream's Seek method), there's a list of segments here: https://en.wikipedia.org/wiki/JPEG#Syntax_and_structure; then repeat
If it is, read 4 bytes and see if it's 'Exif' ($45 $78 $69 $66)
What follows is $00 $00 and an 8-byte TIFF header, which holds general information like endianness, followed by the EXIF tags you need to work through to grab the ones you need. I had a quick search and found a list here: http://www.exiv2.org/tags.html
Since it's safe to assume that the EXIF data is in the first kilobytes of the JPEG file, you could read this much in a byte array (or TMemoryStream) and process the data there, which should perform better than separate small reads from a TFileStream.
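The question is about Delphi, but the marker walk described above is language-agnostic; here is a rough C sketch of the same logic over an in-memory buffer (the function find_exif_segment and its interface are assumptions for illustration, not an existing library):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Returns the offset of the TIFF header inside buf, or -1 if no EXIF segment is found. */
long find_exif_segment(const uint8_t *buf, size_t len)
{
    if (len < 4 || buf[0] != 0xFF || buf[1] != 0xD8)   /* SOI marker: is it a valid JPEG? */
        return -1;

    size_t pos = 2;
    while (pos + 4 <= len && buf[pos] == 0xFF) {
        uint8_t marker = buf[pos + 1];
        /* The segment length is big-endian and includes its own two bytes. */
        size_t seglen = ((size_t)buf[pos + 2] << 8) | buf[pos + 3];

        if (marker == 0xE1 && pos + 10 <= len &&
            memcmp(buf + pos + 4, "Exif\0\0", 6) == 0)
            return (long)(pos + 10);   /* APP1/Exif found: TIFF header starts here */

        if (marker == 0xDA)            /* start of scan: no EXIF after the image data begins */
            break;

        pos += 2 + seglen;             /* skip the 2 marker bytes plus the segment */
    }
    return -1;
}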

Porting an application from little-endian to big-endian architecture

I have a TCP server developed on the x86 architecture, written in C under Linux using the Berkeley sockets API. The server runs fine without any problems. But now, for various reasons, I have to run the server on MIPS, which is a big-endian architecture.
The server and the clients communicate through a set of predefined protocol. I will give an example of how a server sends a simple message to the clients:
struct echo_req req;
req.header.version = OFP_VERSION;
req.header.type = OFPT_ECHO_REQUEST;
req.header.length = htons(sizeof req);
req.header.xid = htonl(y);
req.data = htonl(456);

char data[sizeof(req)];
data[0] = req.header.version;
data[1] = req.header.type;
memcpy(data + 2, &req.header.length, 2);
memcpy(data + 4, &req.header.xid, 4);
memcpy(data + 8, &req.data, 4);

if (send(sock_fd, data, sizeof(data), 0) == -1)
{
    printf("Error in sending echo request message\n");
    exit(-1);
}
printf("Echo Request sent!\n");
As you can see, I use htonl and htons for any type longer than a byte to convert it to network byte order. After building the packet, I serialize and pack the data into a char array and finally send it over the network.
Now, before I run my server on a big-endian architecture, I wanted to clear up a few things. My understanding is that since I memcpy the data and pack it, sending it over the network shouldn't cause any problems on the big-endian architecture, because memcpy performs a byte-by-byte copy of the data into the array, and hence there shouldn't be any problem with byte ordering when running on big-endian. Still, I wanted to get the opinion of you people out there, who I presume know a lot more than I do, as I am still a beginner in network programming :). Please guide me as to whether I am on the right track or not. All help much appreciated.
Thanks
Yes, memcpy just copies bytes in order from a source to a destination.
Without seeing the rest of your code, it's impossible to say that you've used hton(l|s) everywhere you should. It's also possible that you've done something like copying a floating point number byte for byte, which doesn't necessarily work, independent of endianness issues.
I don't see any obvious problems in the code you've posted above though.
Have you made sure you use ntohl/ntohs when receiving data as well?
BTW, you could simply use the struct for sending data (assuming it is packed so there is no padding); reassembling it into the character array does nothing but take CPU time and possibly introduce errors.
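To illustrate that last point, here is a rough sketch of sending the struct directly. The field layout is inferred from the question's manual packing, the header struct name ofp_header is assumed, and __attribute__((packed)) is a GCC/Clang extension used to guarantee the in-memory layout matches the wire format:

#include <arpa/inet.h>   /* htons, htonl */
#include <stdint.h>

struct __attribute__((packed)) ofp_header {   /* name assumed; layout inferred from the question */
    uint8_t  version;
    uint8_t  type;
    uint16_t length;   /* stored in network byte order via htons */
    uint32_t xid;      /* stored in network byte order via htonl */
};

struct __attribute__((packed)) echo_req {
    struct ofp_header header;
    uint32_t data;     /* stored in network byte order via htonl */
};

/* ... inside the sending code, reusing sock_fd, y, OFP_VERSION and
   OFPT_ECHO_REQUEST from the question ... */
struct echo_req req;
req.header.version = OFP_VERSION;
req.header.type = OFPT_ECHO_REQUEST;
req.header.length = htons(sizeof req);
req.header.xid = htonl(y);
req.data = htonl(456);

if (send(sock_fd, &req, sizeof req, 0) == -1)
{
    /* handle error */
}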
