Delphi convert hex to PByteArray and back - delphi-10.3-rio

How would I go about converting my hex string into a PByteArray and back, using the BinToHex and HexToBin RTL functions?
My attempt is as follows.
function BuffertoHex(ABuf: PByte; ALen: Cardinal): string; overload;
begin
  SetLength(Result, 3 * ALen - 1);
  BinToHex(ABuf^, PChar(Result), SizeOf(ABuf));
end;

function BuffertoHex(ABytes: TArray<Byte>): string; overload;
begin
  Result := BuffertoHex(PByte(ABytes), Length(ABytes));
end;

function HexToBuffer(LText: String): PByteArray; overload;
var
  ABytes: TArray<Byte>;
begin
  FillChar(ABytes, SizeOf(ABytes), #0);
  HexToBin(PChar(LText), ABytes[0], Length(ABytes));
  Result := PByteArray(ABytes);
end;
My function BuffertoHex outputs odd hex values at the beginning. Example:
808725DBF40100001 00 00 00 00 00 00 0D 66 6F 6F 74 70 72 69 6E 74 ....
Yet the approach from "How do I convert an array of bytes to string with Delphi?" works just fine. Example:
9D D0 01 00 00 01 00 00 00 00 00 00 0D 66 6F 6F 74 ....

You have a few mistakes in your source:
SizeOf is a function that returns the size of a variable, not the size of the data it points to;
You do not allocate any memory for your result variable, so the memory manager will consider the memory of the ABytes variable unused when you leave the function and may reuse it for another purpose.
Here is a simplified, workable version of your function:
function HexToBuffer(const LText: String; out ALength: Cardinal): PByteArray;
begin
  ALength := Length(LText) div 2; // calculate the length of the result (div 2 because the string 'FF' is 1 byte with value 255)
  GetMem(Result, ALength);        // allocate memory for the result
  HexToBin(PChar(LText), Result^, ALength); // do the conversion
end;
Using:
procedure TForm2.Button1Click(Sender: TObject);
var
  xBytes: PByteArray;
  l: Cardinal;
  i: Integer;
begin
  xBytes := HexToBuffer('FF8000', l);
  try
    for i := 0 to l - 1 do
      xBytes[i].ToString; // do your stuff
  finally
    FreeMem(xBytes);
  end;
end;
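Independent of the Delphi specifics, the core arithmetic the answer relies on (two hex characters encode one byte, so the buffer length is the string length div 2) can be sketched in Python; the function names here are mine, for illustration only:

```python
def hex_to_buffer(text):
    # Two hex digits encode one byte, mirroring the
    # GetMem(Result, Length(LText) div 2) in the Delphi answer.
    return bytes.fromhex(text)

def buffer_to_hex(buf):
    # The inverse direction: each byte becomes two uppercase hex digits.
    return buf.hex().upper()

buffer_to_hex(hex_to_buffer('FF8000'))  # 'FF8000'
```

Round-tripping like this is a quick way to sanity-check the length calculations before worrying about pointer handling.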

Related

Interpret DBus Messages

I was trying to interpret the bytes in a DBus Message as specified in https://dbus.freedesktop.org/doc/dbus-specification.html. This is taken from a pcap while using the Frida tool.
The bytes are
0000 6c 01 00 01 08 00 00 00 01 00 00 00 70 00 00 00
0010 01 01 6f 00 15 00 00 00 2f 72 65 2f 66 72 69 64
0020 61 2f 48 6f 73 74 53 65 73 73 69 6f 6e 00 00 00
0030 02 01 73 00 16 00 00 00 72 65 2e 66 72 69 64 61
0040 2e 48 6f 73 74 53 65 73 73 69 6f 6e 31 35 00 00
0050 08 01 67 00 05 61 7b 73 76 7d 00 00 00 00 00 00
0060 03 01 73 00 17 00 00 00 47 65 74 46 72 6f 6e 74
0070 6d 6f 73 74 41 70 70 6c 69 63 61 74 69 6f 6e 00
0080 00 00 00 00 00 00 00 00
There are some fields which I am uncertain what they mean. Appreciate if anyone can provide some guidance on this.
0x6C: Refers to little endian
0x01: Message Type (Method Call)
0x00: Bitwise OR of flags
0x01: Major Protocol Version
0x08000000: Length of Message Body (Little Endian), starting from end of Header. This should be referring to the eight null bytes at the end?
0x01000000: Serial of this Message (Little Endian)
0x70000000: (Little Endian) Not sure what this represents? This value does correspond to the length of the array of struct, excluding trailing null bytes, that starts from 0x0010 and ends at 0x007F.
0x01: Decimal Code for Object Path
0x01: Not sure what this represents?
0x6F: DBus Type 'o' for Object Path
0x15: Length of the Object Path string
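As a cross-check of the fixed fields listed above, the first 12 bytes of the capture can be unpacked mechanically; a Python sketch (the function name is mine):

```python
import struct

def parse_fixed_header(data):
    # The 12 fixed bytes at the start of every DBus message:
    # endianness flag, message type, flags, protocol version,
    # then two uint32s: body length and serial.
    endianness = chr(data[0])                  # 'l' = little endian, 'B' = big
    fmt = '<' if endianness == 'l' else '>'
    msg_type, flags, version = data[1], data[2], data[3]
    body_len, serial = struct.unpack(fmt + 'II', data[4:12])
    return endianness, msg_type, flags, version, body_len, serial

header = bytes.fromhex('6c0100010800000001000000')
parse_fixed_header(header)  # ('l', 1, 0, 1, 8, 1)
```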
You want to look at the part of the specification that tells you what the message format is.
But to answer your questions:
0x08000000: Length of Message Body (Little Endian), starting from end of Header. This should be referring to the eight null bytes at the end?
Correct.
0x70000000: (Little Endian) Not sure what this represents? This value does correspond to the length of the array of struct, excluding trailing null bytes, that starts from 0x0010 and ends at 0x007F.
That's the length of the array in the header. The DBus header is of a variable size - after the first few bytes, it is an array of struct(byte,variant). As per the documentation, that looks like a(yv) if you were to express this as a DBus type signature.
0x01: Decimal Code for Object Path
0x01: Not sure what this represents?
This is where the parsing gets interesting: in our struct, the signature is yv, so the first 0x01 tells us that this struct entry is the header field for Object Path, as you have seen. However, we now need to parse what the variant contains. To marshal a variant, you first marshal a signature, which in this case is 1 byte long: 01 6f 00. Note that signatures can be at most 255 bytes long, so unlike other strings they have only a 1-byte length at the front. As a string, that is o, which tells us that this variant contains an object path. Since object paths are strings, we then decode the next bytes as a string (keeping in mind that the leading 4 bytes are the string length): 15 00 00 00 2f 72 65 2f 66 72 69 64 61 2f 48 6f 73 74 53 65 73 73 69 6f 6e 00
If I've done the conversion correctly, that says /re/frida/HostSession
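The marshalling steps just described can be replayed in Python; a hypothetical helper (name is mine), assuming the field starts at an aligned offset as it does in this capture:

```python
import struct

def parse_path_header_field(data):
    # One (byte, variant) struct from the DBus header array:
    # field code (0x01 = PATH), then the variant's 1-byte signature
    # length, the signature bytes plus a NUL, then the marshalled value.
    field_code = data[0]
    sig_len = data[1]
    signature = data[2:2 + sig_len].decode()      # 'o' = object path
    offset = 2 + sig_len + 1                      # skip the signature's NUL
    (str_len,) = struct.unpack_from('<I', data, offset)
    value = data[offset + 4:offset + 4 + str_len].decode()
    return field_code, signature, value

# Bytes starting at offset 0x0010 of the capture:
field = bytes.fromhex('01016F00150000002F72652F66726964612F486F737453657373696F6E00')
parse_path_header_field(field)  # (1, 'o', '/re/frida/HostSession')
```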
This is taken from a pcap
If it's a standard pcap (or pcapng) D-Bus capture file, using the LINKTYPE_DBUS link-layer type, then Wireshark should be able to read it and, at least to some degree, interpret the messages (i.e., it has code that understands the message format, as defined by the specification to which #rm5248 referred, and to which the LINKTYPE_DBUS entry in the list of link-layer header types refers), so you might not have to interpret all the bytes yourself.

How to Decode XML Blob field in D7

I'm having a problem trying to decode XML data returned by an instance of MS SQL Server 2014 to an app written in D7. (the version of Indy is the one which came with it, 9.00.10).
Update When I originally wrote this question, I was under the impression that the contents of the blob field needed to be Base64-decoded, but it seems that that was wrong. Having followed Remy Lebeau's suggestion, the blob stream contains recognisable text in the field names and field values before decoding but not afterwards.
In the code below, the SQL in the AdoQuery is simply
Select * from Authors where au_lname = 'White' For XML Auto
the Authors table being the one in the demo 'pubs' database. I've added the "Where" clause to restrict the size of the result set so I can show a hex dump of the returned blob.
According to the Sql Server OLH, the default type of the returned data when 'For XML Auto' is specified is 'binary base64-encoded format'. The data type of the single field of the AdoQuery is ftBlob, if I let the IDE create this field.
Executing the code below generates an exception "Uneven size in DecodeToStream". At the call to IdDecoderMIME.DecodeToString(S), the length of the string S is 3514, and 3514 mod 4 is 2, not 0 as it apparently should be, hence the exception. I've confirmed that the number of bytes in the field's value is 3514, so there's no difference between the size of the variant and the length of the string, i.e. nothing has gone awol in between.
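For reference, the invariant the decoder is enforcing is easy to check up front: padded Base64 text always has a length divisible by 4 and draws from a fixed alphabet. A small Python sketch (illustrative only, not Indy code):

```python
import re

def looks_like_base64(s):
    # Padded Base64 has a length divisible by 4 and uses only the
    # Base64 alphabet plus at most two trailing '=' characters.
    return len(s) % 4 == 0 and re.fullmatch(r'[A-Za-z0-9+/]*={0,2}', s) is not None

looks_like_base64('QUJD')  # True
3514 % 4                   # 2, so a 3514-character string cannot be padded Base64
```

This is consistent with the update above: the blob is not Base64 at all, so the length check fails before any decoding starts.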
procedure TForm1.FormCreate(Sender: TObject);
var
  SS: TStringStream;
  Output: String;
  S: String;
  IdDecoderMIME: TIdDecoderMIME;
begin
  SS := TStringStream.Create('');
  IdDecoderMIME := TIdDecoderMIME.Create(nil);
  try
    AdoQuery1.Open;
    TBlobField(AdoQuery1.Fields[0]).SaveToStream(SS);
    S := SS.DataString;
    IdDecoderMIME.FillChar := #0;
    Output := IdDecoderMIME.DecodeToString(S);
    Memo1.Lines.Text := S;
  finally
    SS.Free;
    IdDecoderMIME.Free;
  end;
end;
I'm using this code:
procedure TForm1.FormCreate(Sender: TObject);
var
  SS: TStringStream;
  MS: TMemoryStream;
  Output: String;
begin
  SS := TStringStream.Create('');
  MS := TMemoryStream.Create;
  try
    AdoQuery1.Open;
    TBlobField(AdoQuery1.Fields[0]).SaveToStream(SS);
    SS.WriteString(#13#10);
    Output := SS.DataString;
    SS.Position := 0;
    MS.CopyFrom(SS, SS.Size);
    MS.SaveToFile(ExtractFilePath(Application.ExeName) + 'Blob.txt');
  finally
    SS.Free;
    MS.Free;
  end;
end;
A hex dump of the Blob.Txt file looks like this
00000000 44 05 61 00 75 00 5F 00 69 00 64 00 44 08 61 00 D.a.u._.i.d.D.a.
00000010 75 00 5F 00 6C 00 6E 00 61 00 6D 00 65 00 44 08 u._.l.n.a.m.e.D.
00000020 61 00 75 00 5F 00 66 00 6E 00 61 00 6D 00 65 00 a.u._.f.n.a.m.e.
00000030 44 05 70 00 68 00 6F 00 6E 00 65 00 44 07 61 00 D.p.h.o.n.e.D.a.
00000040 64 00 64 00 72 00 65 00 73 00 73 00 44 04 63 00 d.d.r.e.s.s.D.c.
00000050 69 00 74 00 79 00 44 05 73 00 74 00 61 00 74 00 i.t.y.D.s.t.a.t.
00000060 65 00 44 03 7A 00 69 00 70 00 44 08 63 00 6F 00 e.D.z.i.p.D.c.o.
00000070 6E 00 74 00 72 00 61 00 63 00 74 00 44 07 61 00 n.t.r.a.c.t.D.a.
00000080 75 00 74 00 68 00 6F 00 72 00 73 00 01 0A 02 01 u.t.h.o.r.s.....
00000090 10 E4 04 00 00 0B 00 31 37 32 2D 33 32 2D 31 31 .......172-32-11
000000A0 37 36 02 02 10 E4 04 00 00 05 00 57 68 69 74 65 76.........White
000000B0 02 03 10 E4 04 00 00 07 00 4A 6F 68 6E 73 6F 6E .........Johnson
000000C0 02 04 0D E4 04 00 00 0C 00 34 30 38 20 34 39 36 .........408 496
000000D0 2D 37 32 32 33 02 05 10 E4 04 00 00 0F 00 31 30 -7223.........10
000000E0 39 33 32 20 42 69 67 67 65 20 52 64 2E 02 06 10 932 Bigge Rd....
000000F0 E4 04 00 00 0A 00 4D 65 6E 6C 6F 20 50 61 72 6B ......Menlo Park
00000100 02 07 0D E4 04 00 00 02 00 43 41 02 08 0D E4 04 .........CA.....
As you can see, some of it is legible (field names and contents), some of it not. Does anyone recognise this format and know how to clean it up into the plain text I get from executing the same query in SS Management Studio, i.e. how do I successfully extract the XML from the result set?
Btw, I get the same result (including the contents of the Blob.Txt file) using both the MS OLE DB Provider for Sql Server and the Sql Server Native Client 11 provider, and using Delphi Seattle in place of D7.
Given that the code accesses an external database, this code is the closest I can get to an MCVE.
Update #2 The decoding problem vanishes if I change the Sql query to
select Convert(Text,
(select * from authors where au_lname = 'White' for xml AUTO
))
which gives the result (in SS) of
<authors au_id="172-32-1176" au_lname="White" au_fname="Johnson" phone="408 496-7223" address="10932 Bigge Rd." city="Menlo Park" state="CA" zip="94025" contract="1"/>
but I'm still interested to know how to get this to work without needing the Convert(). I've noticed that if I remove the Where clause from the Sql, what is returned is not well-formed XML - it contains a series of nodes, one per data row, but there is no enclosing root node.
Also btw, I realise that I can avoid this problem by not using "For XML Auto", I'm just interested in how to do it correctly. Also, I don't need any help parsing the XML once I've managed to extract it.
Add the TYPE Directive to specify that you want XML returned.
select *
from Authors
where au_lname = 'White'
for xml auto, type
You can't simply decode the binary blob into XML.
You can use TADOCommand and direct its output stream to an XML document object e.g.:
const
  adExecuteStream = 1024;
var
  xmlDoc, RecordsAffected: OleVariant;
  cmd: TADOCommand;
begin
  xmlDoc := CreateOleObject('MSXML2.DOMDocument.3.0'); // or CoDomDocument30.Create;
  xmlDoc.async := False;
  cmd := TADOCommand.Create(nil);
  try
    // specify your connection string
    cmd.ConnectionString := 'Provider=SQLOLEDB;Data Source=(local);...';
    cmd.CommandType := cmdText;
    cmd.CommandText := 'select top 1 * from items for xml auto';
    cmd.Properties['Output Stream'].Value := xmlDoc;
    cmd.Properties['XML Root'].Value := 'RootNode';
    cmd.CommandObject.Execute(RecordsAffected, EmptyParam, adExecuteStream);
    xmlDoc.save('d:\test.xml');
  finally
    cmd.Free;
  end;
end;
This results in a well-formed XML document with the enclosing root node RootNode.

Delphi FireDAC with MS Access 2010 database. Why does it convert ACE to Jet?

I have converted a .mdb database to a .accdb database by following these steps:
https://support.office.com/en-us/article/Convert-a-database-to-the-accdb-file-format-69abbf06-8401-4cf3-b950-f790fa9f359c
(using MS Access 2010)
After the conversion, the .accdb file starts with the following:
(database.accdb, file header viewed with hex editor), which is what I intended...
00 01 00 00 53 74 61 6E 64 61 72 64 20 41 43 45 20 44 42 00 02 00 00 00 B5 6E 03 62 60 09 C2 55 E9 A9 67 72 40 3F 00 9C 7E 9F 90 FF 85 9A 31 C5
....Standard ACE DB.....µn.b`.ÂUé©gr#?.œ~Ÿ.ÿ…š1Å
After opening the database, dropping a table, re-creating the table and doing some inserts with TFDConnection / TFDPhysMSAccessDriverLink / TFDBatchMove / TFDBatchMoveDataSetReader / TFDBatchMoveDataSetWriter and the following code
FAccessDB := TFDConnection.Create(Self);
FAccessDB.Name := '';
FAccessDB.Params.Clear;
FAccessDB.Params.Add('DriverID=MSAcc_Direct');
FAccessDB.LoginPrompt := False;
// FDPhysMSAccessDriverLink1
FFDPhysMSAccessDriverLink1 := TFDPhysMSAccessDriverLink.Create(Self);
FFDPhysMSAccessDriverLink1.Name := '';
FFDPhysMSAccessDriverLink1.DriverID := 'MSAcc_Direct';
// Table_Out
FFDTable_Out := TFDTable.Create(Self);
FFDTable_Out.Name := '';
FFDTable_Out.Connection := FAccessDB;
// FDBatchMove1
FFDBatchMove1 := TFDBatchMove.Create(Self);
FFDBatchMove1.Name := '';
FFDBatchMove1.OnError := FDBatchMove1Error;
FFDBatchMove1.OnFindDestRecord := FDBatchMove1FindDestRecord;
FFDBatchMove1.OnProgress := FDBatchMove1Progress;
// FDBatchMoveDataSetReader1
FFDBatchMoveDataSetReader1 := TFDBatchMoveDataSetReader.Create(Self);
FFDBatchMoveDataSetReader1.Name := '';
// FDBatchMoveDataSetWriter1
FFDBatchMoveDataSetWriter1 := TFDBatchMoveDataSetWriter.Create(Self);
FFDBatchMoveDataSetWriter1.Name := '';
// FDBatchMove1
FFDBatchMove1.Reader := FFDBatchMoveDataSetReader1;
FFDBatchMove1.Writer := FFDBatchMoveDataSetWriter1;
FFDBatchMove1.Options := [poIdentityInsert];
FAccessDB.Params.Values['Database'] := 'database.accdb';
FAccessDB.Connected := True;
aDropTableSQL := 'DROP TABLE ' + FTablenameDest;
FAccessDB.ExecSQL(aDropTableSQL);
FAccessDB.Commit;
aCreateTableSQL := 'CREATE TABLE ' + FTablenameDest; // plus the rest of the create statement
FFDTable_Out.TableName := FTablenameDest;
FFDTable_Out.Active := True;
FFDBatchMoveDataSetReader1.DataSet := FDataSetSrc; // a TDataSet from another database
FFDBatchMoveDataSetWriter1.DataSet := FFDTable_Out;
FFDBatchMoveDataSetWriter1.Direct := True;
FFDBatchMoveDataSetReader1.DataSet.Active := True;
FFDBatchMoveDataSetWriter1.DataSet.Active := True;
FFDBatchMove1.Mode := dmAlwaysInsert;
FFDBatchMove1.Execute;
FAccessDB.Commit;
FAccessDB.Connected := False;
FFDMSAccessService1 := TFDMSAccessService.Create(Self);
FFDMSAccessService1.Name := '';
FFDMSAccessService1.Database := 'database.accdb';
FFDMSAccessService1.DestDatabase := 'database.accdb_temp.accdb';
FFDMSAccessService1.DBVersion := avAccess2007;
FFDMSAccessService1.Compact; // <-- seems to convert here...
the file header of database.accdb becomes
00 01 00 00 53 74 61 6E 64 61 72 64 20 4A 65 74 20 44 42 00 01 00 00 00 B5 6E 03 62 60 09 C2 55 E9 A9 67 72 40 3F 00 9C 7E 9F 90 FF 85 9A 31 C5
....Standard Jet DB.....µn.b`.ÂUé©gr#?.œ~Ÿ.ÿ…š1Å
again, which is also what it was before the conversion from .mdb to .accdb.
It seems to me that 'Standard Jet DB' means the old format (.mdb)
and 'Standard ACE DB' means the new format (.accdb).
Does FireDAC convert it back? Why?
How can I keep the new Access format (.accdb, ACE DB)?
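The engine-name string quoted in the hex dumps sits near the start of the file header, so the format can be detected programmatically. A Python sketch (the helper name is mine, and the strings are taken from the dumps above):

```python
def access_format(header: bytes):
    # The engine name appears a few bytes into an Access file header:
    # b'Standard ACE DB' for .accdb files, b'Standard Jet DB' for .mdb.
    if b'Standard ACE DB' in header:
        return 'ACE (.accdb)'
    if b'Standard Jet DB' in header:
        return 'Jet (.mdb)'
    return 'unknown'

# First bytes of each header as shown in the dumps:
accdb_header = bytes.fromhex('00010000') + b'Standard ACE DB\x00'
mdb_header = bytes.fromhex('00010000') + b'Standard Jet DB\x00'
access_format(accdb_header)  # 'ACE (.accdb)'
```

Running this before and after the Compact call makes it easy to confirm when the conversion happens.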
Just received an answer from Embarcadero (very fast answer, cool!):
Dmitry Arefiev wrote:
This is a known issue. At the moment TFDMSAccessService does not really support avAccess2007.
--
With best regards,
Dmitry Arefiev / FireDAC Architect

Decoding date storage in legacy database aka "fun" with numbers

I'm writing a utility to rip records out of a legacy DB (that we can't query), and I'm having trouble interpreting how a date field is stored.
All Dates will be in MM/DD/YYYY format. Hex will be bytes(2 digits) separated by spaces.
What we know:
Hours and mins are stored in a different location. Adding an hour or min to
the datetime does not affect the 4 bytes in question
The field that corresponds to the Month, day and year is 4 bytes:
01/01/1800 == 70 8E 00 00
01/15/1800 == 7E 8E 00 00
01/16/1800 == 7F 8E 00 00
01/31/1800 == 8E 8E 00 00
02/01/1800 == 8F 8E 00 00
02/02/1800 == 90 8E 00 00
02/15/1800 == 9D 8E 00 00
02/16/1800 == 9E 8E 00 00
02/28/1800 == AA 8E 00 00
02/29/1800 == AB 8E 00 00 #PLACEHOLDER FOR LEAP YEAR
03/01/1800 == AC 8E 00 00
12/01/1800 == BF 8F 00 00
12/02/1800 == C0 8F 00 00
12/03/1800 == C1 8F 00 00
12/15/1800 == CD 8F 00 00
12/16/1800 == CE 8F 00 00
12/30/1800 == DC 8F 00 00
12/31/1800 == DD 8F 00 00
01/01/1801 == DE 8F 00 00
12/31/1801 == 4A 91 00 00
Anyone have any ideas? And yes, I'm familiar with epoch time.
There are 4 bytes. Each new day increments the byte farthest to the left. Once that byte reaches "FF", the next day carries 1 into the byte to its right. Try this (written in Ruby):
def parse_date(hex)
  actual_known_date = "1/1/2050".to_date
  known_date = "21F30100"
  total_days_since_known_date = 0
  first_byte = hex[0,2]
  second_byte = hex[2,2]
  third_byte = hex[4,2]
  fourth_byte = hex[6,2]
  known_first_byte = known_date[0,2]
  known_second_byte = known_date[2,2]
  known_third_byte = known_date[4,2]
  known_fourth_byte = known_date[6,2]
  byte_4_days = known_fourth_byte.hex - fourth_byte.hex
  byte_3_days = 0
  byte_2_days = 0
  byte_1_days = 0
  if known_third_byte.hex >= third_byte.hex
    byte_3_days = known_third_byte.hex - third_byte.hex
  else
    byte_4_days -= 1
    ktb = known_third_byte.hex + 256
    byte_3_days = ktb - third_byte.hex
  end
  if known_second_byte.hex >= second_byte.hex
    byte_2_days = known_second_byte.hex - second_byte.hex
  else
    byte_3_days -= 1
    ktb = known_second_byte.hex + 256
    byte_2_days = ktb - second_byte.hex
  end
  if known_first_byte.hex >= first_byte.hex
    byte_1_days = known_first_byte.hex - first_byte.hex
  else
    byte_2_days -= 1
    ktb = known_first_byte.hex + 256
    byte_1_days = ktb - first_byte.hex
  end
  total_days_since_known_date = (byte_1_days + (byte_2_days * 256) + (byte_3_days * (256 * 256)) + (byte_4_days * (256 * 256 * 256)))
  number_of_leap_days = 0
  date_we_want = actual_known_date - (total_days_since_known_date).days
  return date_we_want
end
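The byte-carrying logic above boils down to a simpler observation: the four bytes are a little-endian 32-bit day counter, so any known (hex, date) pair works as an anchor. One wrinkle in the sample data: it allots a day to 02/29/1800, which does not exist in the Gregorian calendar, so a decoder built on standard date arithmetic drifts by one day across that point; anchoring on 01/01/1801 sidesteps the issue for later dates. A Python sketch under those assumptions:

```python
import struct
from datetime import date, timedelta

# Anchor taken from the samples above: 01/01/1801 == DE 8F 00 00.
ANCHOR_VALUE = struct.unpack('<I', bytes.fromhex('DE8F0000'))[0]  # 36830
ANCHOR_DATE = date(1801, 1, 1)

def decode_legacy_date(hex_bytes):
    # Interpret the field as a little-endian uint32 day count and
    # offset from the anchor pair.
    value = struct.unpack('<I', bytes.fromhex(hex_bytes))[0]
    return ANCHOR_DATE + timedelta(days=value - ANCHOR_VALUE)

decode_legacy_date('4A910000')  # date(1801, 12, 31), matching the last sample
```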

Reading a NFC Mifare card with NXP Reader Library

I'm trying to read the content of a Mifare Ultralight card using the NFC Reader Library.
I'm totally new with NFC and I'm using this github repository to start.
The code in this repo detects which type of card is presented (Mifare, Mifare Ultralight ...) and reads the UID of the card. I added this code in order to read the content of a Mifare Ultralight card:
uint8_t bBufferReader[96];
memset(bBufferReader, '\0', 0x60);
PH_CHECK_SUCCESS_FCT(status, phalMful_Read(&alMful, 4, bBufferReader));
int i;
for (i = 0; i < 96; i++) {
    printf("%02X", bBufferReader[i]);
}
I have a card that contains the text "Hello world" and when I read it, the piece of code above print the following bytes:
0103A010440312D1010E5402667248650000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
So I'm reading some stuff out of my card, however there are no traces of the "Hello world" text.
I'm probably missing something (might be a big something or, hopefully, a little something). Any help would be great!
Edit
So I made some good progress. A Mifare Ultralight card contains 16 pages of 4 bytes each; pages 0 - 3 are for internal usage (serial number, lock bytes, etc ...) and pages 4 - 15 are for user data. I can now read the content of my cards; however, just a few questions remain:
I'm reading a card that contains an URL, www.google.com, here is what I got:
03 0F D1 01 -> Page 1, 4 bytes of non text data, not sure what it is
0B 55 01 67 -> Page 2, 3 bytes of non text data, then 1 bytes for the "g"
6F 6F 67 6C -> Page 3, 4 bytes for "oogl"
65 2E 63 6F -> Page 4, 4 bytes for "e.co"
6D FE 00 00 -> Page 5, 1 byte for "m", 1 byte for I don't know
00 00 00 00 -> Other pages are just empty
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
00 00 00 00
So I have got 7 bytes of data + my URL, "google.com", + 1 trailing byte FE.
I can't figure out what these 7 prefix bytes and this 1 trailing byte are...
Edit again
Ok got it, it's the NDEF message format.
Yes it is NDEF format!
03 NDEF Message TLV
0F length (15 bytes)
Record 1
D1 - MB, ME, SR, TNF = "NFC Forum well-known type"
01 Type length
0B Payload length
55 Type - "U" (abbreviation for URI)
01 URI identifier code ("http://www.")
67 6F 6F 67 6C 65 2E 63 6F 6D (google.com)
FE Terminator TLV (marks the end of the NDEF data)
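The byte-by-byte breakdown above can be checked mechanically. A Python sketch (helper name mine) that walks the short-record TLV and applies the NDEF URI prefix table, where identifier code 0x01 stands for "http://www.":

```python
# NDEF URI identifier codes (a subset of the URI Record Type Definition table)
URI_PREFIXES = {0x00: '', 0x01: 'http://www.', 0x02: 'https://www.',
                0x03: 'http://', 0x04: 'https://'}

def parse_ndef_uri(data):
    assert data[0] == 0x03          # NDEF Message TLV tag
    length = data[1]                # short-form TLV length
    record = data[2:2 + length]
    type_len = record[1]            # header byte D1: MB|ME|SR, TNF=0x01
    payload_len = record[2]         # SR is set, so one payload-length byte
    rtype = record[3:3 + type_len]  # b'U' for a URI record
    payload = record[3 + type_len:3 + type_len + payload_len]
    assert rtype == b'U'
    # First payload byte selects the abbreviated prefix; the rest is text.
    return URI_PREFIXES.get(payload[0], '') + payload[1:].decode()

# User pages 4..8 as read from the card:
user_pages = bytes.fromhex('030FD1010B5501676F6F676C652E636F6DFE')
parse_ndef_uri(user_pages)  # 'http://www.google.com'
```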
