Generate UUID length of 35 char in JMeter - uuid

I am trying to generate a unique ID in JMeter that is only 35 characters long, but JMeter's ${__UUID()} function creates a 36-character ID. How can I reduce the length?

From the JMeter reference for the UUID function, a UUID is a random combination of 128 bits. This gets converted into a string of characters by assigning n bits to each character. As ASCII text you get either 32 or 36 chars; with 36 chars, there are four dashes (-) in it. If you want 35 chars, you could try removing one dash.
I found this a useful and clear reference.
Some more thoughts on UUIDs:
A universally unique identifier (UUID) is a 128-bit number used to identify information in computer systems. The term globally unique identifier (GUID) is also used, typically in software created by Microsoft.
[...]
In its canonical textual representation, the 16 octets of a UUID are represented as 32 hexadecimal (base-16) digits, displayed in 5 groups separated by hyphens, in the form 8-4-4-4-12 for a total of 36 characters (32 alphanumeric characters and 4 hyphens). - Wikipedia
What I'm suggesting is that you could use a non-canonical representation like 8-8-4-12, i.e. 32 hex digits with only three dashes, for a total of 35 characters.

How many bytes will be required to store number in binary and text files respectively

If I want to store a number, let's say 56789 in a file, how many bytes will be required to store it in binary and text files respectively? I want to know how bytes are allocated to data in binary and text files.
It depends on:
text encoding and number system (decimal, hexadecimal, many more...)
signed/not signed
single integer or multiple (require separators)
data type
target architecture
use of compressed encodings
In ASCII a character takes 1 byte. In UTF-8 a character takes 1 to 4 bytes, but the digits 0-9 always take 1 byte. In UTF-16 or UTF-32 each character takes 2 or more bytes.
Non-ASCII formats may also add an initial byte-order mark (BOM) of 2 to 4 bytes to the file; this depends on the editor and/or settings used when the file was created.
But let's assume you store the data in a simple ASCII file, or the discussion becomes needlessly complex.
Let's also assume you use the decimal number system.
In hexadecimal you use the digits 0-9 and the letters a-f to represent numbers. A decimal (base-10) number like 34234324423 would be 7F88655C7 in hexadecimal (base-16). In the first system we have 11 digits, in the second just 9. The minimum base is 2 (digits 0 and 1) and a common maximum is base-64. Technically, with printable ASCII you could go as high as roughly base-94, but that is very uncommon.
Each digit (0-9) will take one byte. If you have signed integers, an additional minus sign will lead the digits (so negative numbers cost 1 additional byte).
In some circumstances you may want to store several numerals. Then you need a separator to tell the numerals apart. A comma (,), colon (:), semicolon (;), pipe (|) or newline (LF, CR or, on Windows, CRLF, which takes 2 bytes) have all been observed in the wild as legitimate separators of numerals.
What is a numeral? The concept of the quantity 8 that is in your head is the number. Any representation of that concept on stone, paper, magnetic tape, or pixels on a screen is just that: a representation, a symbol standing for what you understand in your brain. Those are numerals. Don't confuse numbers with numerals; this distinction is foundational in mathematics and computer science.
In these cases you want to count an additional character for the separator per numeral, or perhaps per numeral minus one. It depends on whether you terminate each numeral with a marker or separate the numerals from each other:
Example (three digits and three newlines): 6 bytes
1<LF>
2<LF>
3<LF>
Example (three digits and two commas): 5 bytes
1,2,3
Example (four digits and one comma): 5 bytes
2134,
Example (sign and one digit): 2 bytes
-3
If you store the data in a binary format (not to be confused with the binary number system, which is still a text representation), the occupied memory depends on the integer type, or more precisely on the bit length of the integer.
An octet (0..255) will occupy 1 byte. No separators or leading signs required.
A 16-bit integer will occupy 2 bytes. For C and C++ the underlying data model must be taken into account: a long on a typical 32-bit architecture takes 4 bytes, while the very same code, compiled for a 64-bit (LP64) architecture, uses 8 bytes.
There are exceptions to those flat rules. As an example, Google's protobuf uses a zig-zag VarInt implementation that leverages variable length encoding.
Here is a VarInt implementation in C/C++.
EDIT: added Thomas Weller's suggestion
Beyond the actual file content, the system has to store metadata about the file (for bookkeeping such as the first sector, the filename, access permissions and more). This metadata is not counted toward the file's visible size, but it does occupy space on disk.
If you store each numeral in a separate file such as the numeral 10 in the file result-10, these metadata entries will occupy more space than the numerals themselves.
If you store ten, hundred, thousands or millions/billions of numerals in one file, that overhead becomes increasingly irrelevant.
More about metadata here.
EDIT: to be clearer about file overhead
The overhead can be relevant in some circumstances, as discussed above.
But it is not a differentiator between textual and binary formats. As doug65536 says, however you store the data, if the filesystem structure is the same, it does not matter.
A file is a file, independently if it contains binary data or ASCII text.
Still, the above reasoning applies independently from the format you choose.
The number of digits needed to store a number in a given number base is ceil(log(n)/log(base)).
Storing as decimal would be base 10, storing as hexadecimal text would be base 16. Storing as binary would be base 2.
You would usually need to round up to a multiple of eight or power of two when storing as binary, but it is possible to store a value with an unusual number of bits in a packed format.
Given your example number (ignoring negative numbers for a moment):
56789 in base 2 needs 15.793323887 bits (16)
56789 in base 10 needs 4.754264221 decimal digits (5)
56789 in base 16 needs 3.948330972 hex digits (4)
56789 in base 64 needs 2.632220648 characters (3)
Representing sign needs an additional character or bit.
To look at how binary compares to text, assume a byte is 8 bits, each ASCII character would be a byte in text encoding (8 bits). A byte has a range of 0 to 255, a decimal digit has a range from 0 to 9. Each character (8 bits) can encode about 3.32 bits of a number per byte (log(10)/log(2)). A binary encoding can store 8 bits of a number per byte. Encoding numbers as text takes about 2.4x more space. If you pad out your numbers so they line up in fields, then numbers are very poor storage encoding, with a typical width being 10 digits you'll be storing 80 bits, which would be only 33 bits of binary encoded data.
I am not well versed in this subject; however, I believe it is not just a matter of the content but also of the metadata attached. But if you are just talking about the number itself, you could store it in ASCII or in binary form.
In binary, 56789 converts to 1101110111010101; there is a 'simple' way to work this out on paper, or you can use a converter such as http://www.binaryhexconverter.com/decimal-to-binary-converter.
1101110111010101 has 16 binary digits, i.e. 16 bits, which is two bytes.
If, however, you stored each of those 16 digits as a separate integer, it would cost far more: an integer is usually 4 bytes, so 16 * 4 = 64 bytes; with 64-bit integers, 16 * 8 = 128 bytes. That is why storing the value itself (2 bytes) is much cheaper than storing its digits one by one.
The size of the file depends on many factors, but for the sake of simplicity: in a text format with UTF-8 encoding, each digit occupies 1 byte. A binary 32-bit integer, on the other hand, takes 4 bytes.

How to simply generate a random base64 string compatible with all base64 encodings

In C, I was asked to write a function that generates a random Base64 string of 40 characters (30 bytes?).
But I don't know which Base64 flavor will be used, so it needs to be compatible with many versions of Base64.
What can I do? What is the best option?
All the Base64 encodings agree on some things, such as the use of [0-9A-Za-z], which are 62 characters. So you won't get a full 64^40 possible combinations, but you can get 62^40, which is still quite a lot! You could just generate a random number for each digit, mod 62. Or slice it up more carefully to reduce the amount of entropy needed from the system. For example, given a 32-bit random number, take 6 bits at a time (0..63); if those bits are 62 or 63, discard them, otherwise map them to one Base64 digit. This way you only need about eight 32-bit integers to make a 40-character string.
If this system has security considerations, you need to consider the consequences of generating "unusual" Base64 numbers (e.g. an attacker could detect that your Base64 numbers are special in having only 62 symbols with just a small corpus--does that matter?).

SQLite3 stores nonreadable text

I used SQLite3 to implement a small application that reads from and writes to a database. Some records that need to be added are Arabic texts, and when they are stored in the database they come out as non-readable, non-understandable text. I use these APIs for writing & reading:
sqlite3_open
sqlite3_prepare
sqlite3_bind_text
sqlite3_step
What can I do to solve the problem ?
It is most likely that your text is in a non-ASCII encoding, for example Unicode.
This is because the ASCII table only has characters represented by the integers 0 to 127, so there is nothing that can represent Arabic letters. Unicode, for example, uses five different ranges to represent the Arabic language:
Arabic (0600—06FF, 224 characters)
Arabic Supplement (0750—077F, 48 characters)
Arabic Presentation Forms-A (FB50—FDFF, 608 characters)
Arabic Presentation Forms-B (FE70—FEFF, 140 characters)
Rumi Numeral Symbols (10E60—10E7F, 31 characters)
And since there can be more letters/characters than an 8-bit value (the char type, which is 1 byte long) allows, wide characters are used to represent some (or even all) of those letters.
As a result, the length of the string in characters will differ from the length of the string in bytes. My assumption is that when you use the sqlite3_bind_text function, you pass a number of characters as the fourth parameter, whereas it should be a number of bytes. Or you could misinterpret this length when reading the string back from the database. The sqlite3_bind_text documentation says this about the fourth parameter:
In those routines that have a fourth argument, its value is the number
of bytes in the parameter. To be clear: the value is the number of
bytes in the value, not the number of characters. If the fourth
parameter is negative, the length of the string is the number of bytes
up to the first zero terminator.
Make sure you do the right thing there.
See also:
Wide characters
Unicode
Arabic characters in Unicode
Good luck!

Maximum length for MD5 input/output

What is the maximum length of a string that can be MD5-hashed? And if there is no limit, what is the maximum length of the MD5 output value?
MD5 processes an arbitrary-length message into a fixed-length output of 128 bits, typically represented as a sequence of 32 hexadecimal digits.
The length of the message is unlimited.
Append Length
A 64-bit representation of b (the length of the message before the
padding bits were added) is appended to the result of the previous
step. In the unlikely event that b is greater than 2^64, then only
the low-order 64 bits of b are used.
The hash is always 128 bits. If you encode it as a hexadecimal string, each character encodes 4 bits, giving 32 characters.
MD5 is not encryption. You cannot in general "decrypt" an MD5 hash to get the original string.
See more here.
You can have input of any length, although of course the machine may run out of memory if the input string is too long. The output is always 32 characters.
The algorithm has been designed to support arbitrary input lengths, i.e. you can compute hashes of big files like the ISO of a DVD...
If there is a limitation on the input, it comes from the environment where the hash function is used. Say you want to hash a file and the environment has a MAX_FILE limit.
But the output string will always be the same: 32 hex chars (128 bits)!
A 128-bit MD5 hash is represented as a sequence of 32 hexadecimal digits.
You may want to use SHA-1 instead of MD5, as MD5 is considered broken.
You can read more about MD5 vulnerabilities in this Wikipedia article.
There is no limit to the input of md5 that I know of. Some implementations require the entire input to be loaded into memory before passing it into the md5 function (i.e., the implementation acts on a block of memory, not on a stream), but this is not a limitation of the algorithm itself. The output is always 128 bits. Note that md5 is not an encryption algorithm, but a cryptographic hash. This means that you can use it to verify the integrity of a chunk of data, but you cannot reverse the hashing.
Also note that md5 is considered broken, so you shouldn't use it for anything security-related (it's still fine to verify the integrity of downloaded files and such).
The MD5 algorithm appends the message length as the last 64 bits of the final block, so it is fair to say the message can be up to 2^64 bits (about 1.8 * 10^19 bits) long.
Max length for MD5 input: the largest definable and usable stream of bits.
What limits such a stream can depend on the operating system, hardware constraints, programming language and more...
Length of MD5 output: always a fixed 128 bits.
For easier display, digests are usually shown in hex; each hex digit (0-9, A-F) encodes 4 bits, so the output can be displayed as 32 hex digits.
128 bits = 16 bytes = 32 hex digits
The MD5 output is always 32 characters. Therefore, when setting a character limit for the password column in the database, do not use a value below 32. If you do, the hash will be stored incompletely and users will encounter an error when logging into the system.

Maximum MIMEType Length when storing type in DB

What are people using as the length of a MIMEType field in their databases? The longest one we've seen so far is 72 bytes:
application/vnd.openxmlformats-officedocument.wordprocessingml.document
but I'm just waiting for a longer one. We're using 250 now, but has anyone seen a longer MIMEType than that?
Edit: From the accepted answer: 127 characters each for type and subtype gives 254, plus 1 for the '/', for a limit of 255 on the combined value.
According to RFC 4288 "Media Type Specifications and Registration Procedures", type (eg. "application") and subtype (eg "vnd...") both can be max 127 characters. So including the slash, the maximum length is 255.
Edit: Meanwhile, that document has been obsoleted by RFC 6838, which does not alter the maximum size but adds a remark:
Also note that while this syntax allows names of up to 127
characters, implementation limits may make such long names
problematic. For this reason, <type-name> and <subtype-name> SHOULD
be limited to 64 characters.
