Confusion about max connections allowed in AWS RDS instance types

I know we can find the max connections with {DBInstanceClassMemory/12582880} (12582880 is the magic number for my AWS resources). However, for db.m4.large (8 GB RAM), many places online say maxConnections is 648. But when I do the calculation myself, I get:

8 * 1000000000 / 12582880 = 635.7844944877
8 * 1024 * 1024 * 1024 / 12582880 = 682.6684027822

Similarly, for db.t2.small (2 GB RAM):

2 * 1000000000 / 12582880 = 158.9461236219
2 * 1024 * 1024 * 1024 / 12582880 = 170.6671006955

According to the internet: 150.

Please help me find the correct number. I cannot open a MySQL console on the AWS instance due to some restrictions.

This AWS page has a table of instance classes and their default max_connections limits:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Performance.html
excerpt:
db.t2.small 45
db.t2.medium 90
db.t3.small 45
db.t3.medium 90
db.r3.large 1000
Text says:
the default connection limit is derived using a formula based on the DBInstanceClassMemory value.

Related

Is it possible to determine ENCRYPTBYKEY maximum returned value by the clear text type?

I am going to encrypt several fields in an existing table. Basically, the following encryption technique is going to be used:
CREATE MASTER KEY ENCRYPTION
BY PASSWORD = 'sm_long_password#'
GO
CREATE CERTIFICATE CERT_01
WITH SUBJECT = 'CERT_01'
GO
CREATE SYMMETRIC KEY SK_01
WITH ALGORITHM = AES_256 ENCRYPTION
BY CERTIFICATE CERT_01
GO
OPEN SYMMETRIC KEY SK_01 DECRYPTION
BY CERTIFICATE CERT_01
SELECT ENCRYPTBYKEY(KEY_GUID('SK_01'), 'test')
CLOSE SYMMETRIC KEY SK_01
DROP SYMMETRIC KEY SK_01
DROP CERTIFICATE CERT_01
DROP MASTER KEY
ENCRYPTBYKEY returns varbinary with a maximum size of 8,000 bytes. Knowing the table fields that are going to be encrypted (for example: nvarchar(128), varchar(31), bigint), how can I define the length of the new varbinary columns?
You can see the full specification here
So let's calculate:

16 bytes: key GUID
4 bytes: header
16 bytes: IV (for AES, a 16-byte block cipher)

Plus then the size of the encrypted message:

4 bytes: magic number
2 bytes: integrity bytes length
0 bytes: integrity bytes (warning: may be wrongly placed in the table)
2 bytes: (plaintext) message length
m bytes: (plaintext) message
CBC padding bytes
The CBC padding bytes should be calculated the following way:
16 - ((m + 4 + 2 + 2) % 16)
as padding is always applied. This will result in a number of padding bytes in the range 1..16. A sneaky shortcut is to just add 16 bytes to the total, but this may mean that you're specifying up to 15 bytes that are never used.
We can shorten this to 36 + 8 + m + 16 - ((m + 8) % 16), or 60 + m - ((m + 8) % 16). Or, if you use the little trick specified above and you don't care about the wasted bytes: 76 + m, where m is the message input size in bytes.
Notes:
beware that the first byte in the header contains the version number of the scheme; this answer does not and cannot specify how many bytes will be added or removed if a different internal message format or encryption scheme is used;
using integrity bytes is highly recommended in case you want to protect your DB fields against change (keeping the amount of money in an account confidential is less important than making sure the amount cannot be changed).
The example on the page assumes single byte encoding for text characters.
Based upon some tests in SQL Server 2008, the following formula seems to work. Note that @ClearText is VARCHAR and the division is integer division:
52 + (16 * ((LEN(@ClearText) + 8) / 16))
This is roughly compatible with the answer by Maarten Bodewes, except that my tests showed the DATALENGTH(myBinary) to always be of the form 52 + (z * 16), where z is an integer.
LEN(myVarCharString)   DATALENGTH(encryptedString)
--------------------   -------------------------------------
0 through 7            usually 52, but occasionally 68 or 84
8 through 23           usually 68, but occasionally 84
24 through 39          usually 84
40 through 50          100
The "myVarCharString" was a table column defined as VARCHAR(50). The table contained 150,000 records. The mention of "occasionally" is an instance of about 1 out of 10,000 records that would get bumped into a higher bucket; very strange. For LEN() of 24 and higher, there were not enough records to get the weird anomaly.
Here is some Perl code that takes a proposed length for "myVarCharString" as input from the terminal and produces the expected size of the EncryptByKey() result. Perl's int() truncates, which for these non-negative values is equivalent to Math.floor().
while ($len = <>) {
    print 52 + (16 * int(($len + 8) / 16)), "\n";
}
You might want to use this formula to calculate a size, then add 16 to allow for the anomaly.

Load distribution in LoadRunner

I want to control TPH (transactions per hour) / TPS (transactions per second) using HP LoadRunner. In JMeter we can do it using the Constant Throughput Timer; if anyone has alternative ways, please share them.
For example:
Transaction A-Login (100 TPH)
Transaction B-Search Product (1000 TPH)
Transaction C-Add Product in cart (200 TPH)
Transaction D-Payment (200 TPH)
Transaction E-Logout (100 TPH)
If all of these transactions are in different scripts, no problem, since you can set different pacing and run time settings to each script.
I assume your problem is that all of these transactions are in the same script. In this case, the only solution is to create a parameter in your script, let's call this parameter iterator, and set its type as iteration number. This way, this parameter will be with value 1 in the first iteration, value 2 in the second, etc. etc.
Now you can use this parameter before calling each transaction.
Let's say your maximum TPH is 1,000. Then set the script's run time settings pace to 1,000 TPH. But if you want a certain transaction to run less than that, let's say only 100 TPH, then you need to run it every 10th iteration only (1,000 / 100 = 10).
To do that, in your script, you can use iterator % 10:
// Cast the iterator parameter to an int
int i;
i = atoi(lr_eval_string("{iterator}"));

// This will run 100 TPH
if ((i % 10) == 0)
{
    lr_start_transaction("Login");
    // Do login
    ...
    lr_end_transaction("Login", LR_AUTO);
}
And another example, to run 200 TPH, you can use iterator % 5:
// This will run 200 TPH
if ((i % 5) == 0)
{
    lr_start_transaction("Add Product");
    // Do Add Product
    ...
    lr_end_transaction("Add Product", LR_AUTO);
}

Numericals on Token Bucket

Question
For a host machine that uses the token bucket algorithm for congestion control, the token bucket has a capacity of 1 mega byte and the maximum output rate is 20 mega bytes per second. Tokens arrive at a rate to sustain output at a rate of 10 mega bytes per second. The token bucket is currently full and the machine needs to send 12 mega bytes of data. The minimum time required to transmit the data is _____________ seconds.
My Approach
Initially the token bucket is full. The net rate at which it empties is (20 - 10) MBps, so the time taken to empty the 1 MB bucket is 1/10, i.e., 0.1 sec.
But the answer is given as 1.2 sec.
Token bucket has a capacity of 1 mega byte (maximum capacity C )
Here one byte is considered as one token
⇒ C=1 M tokens
output rate is 20 mega bytes per second (M=20MBps)
Tokens arrive at a rate to sustain output at a rate of 10 mega bytes per second
⇒20-R=10
⇒ Input Rate R=10MBps
Unlike the leaky bucket, idle hosts can capture and save up c ≤ C tokens in order to send larger bursts later.
When we begin the transfer, the tokens already present in the token bucket are transmitted to the network at once; i.e., if the token bucket initially holds c tokens, then c tokens will instantly be present in the network.
Time to empty the token bucket
c: the initial number of tokens in the bucket
R: every second we gain R tokens
M: every second M tokens are transmitted (consumed)
INPUT FLOW: the number of tokens available to enter the network during a time interval t is c + Rt
OUTPUT FLOW: the number of tokens that can be transmitted to the network during a time interval t is Mt
INPUT FLOW = OUTPUT FLOW
⇒ c + Rt = Mt
⇒ t = c/(M - R) = 1/(20 - 10) = 0.1 sec
Given that Token bucket is full (c=C)
Now, we have two cases:
1. Are the 1M initial tokens transferred instantly, with t = 0?
2. Or does transferring the 1M initial tokens take 1/(20 - 10) = 0.1 sec?
Case 1: the 1M initial tokens are transferred instantly (t = 0)
Consider the equation
INPUTFLOW = c + Rt
This means that the c tokens (initially contained in the token bucket) are transmitted without any delay. Unlike the leaky bucket, a token bucket can keep accumulating tokens while the sender is idle; once the sender is ready, packets take the tokens and are transmitted to the network (that is the c term), and then we add the R tokens produced during time t to finally get the INPUTFLOW.
⇒ 1 MB is transmitted instantly. Now we are left with 11 MB to transmit.
To transfer the remaining 11 MB:
at t = 0 we begin transmitting the 11 MB of data
at t = 0.1 sec: 1 MB (1 MB transferred)
at t = 0.2 sec: 1 MB (2 MB transferred)
...
at t = 1.1 sec: 1 MB (11 MB transferred)
Therefore, to transfer 12 MB it takes 1.1 sec + 0 sec = 1.1 sec
Case 2: transferring the 1M initial tokens takes 0.1 sec
(If it takes 0.1 sec for 1 MB, one could argue it will take 1.2 sec for 12 MB.)
During that 0.1 sec, 0.1 * 10 MBps = 1M tokens are refilled.
t = 0 s: begin to transfer the 12 MB of data
t = 0.1 s: 1 MB (1 MB transferred)
t = 0.2 s: 1 MB (2 MB transferred)
...
t = 1.2 s: 1 MB (12 MB transferred)
Therefore, to transfer 12 MB it takes 1.2 sec
The question does not clearly specify which interpretation to use, and it is common practice to follow the best case.
Therefore the answer would be 1.1 sec.
More Information : Visit Gate Overflow - Gate 2016 Question on Token Bucket

What is an efficient algorithm to write 4B integers to a text file

Let's say I want to write 1, 2, 3, 4, ... up to 4.096B in a text file. What would be a time-efficient way to do it? Just doing it sequentially is taking a long time, so I'm wondering if there's a distributed way.
Thanks to all your comments on my question. It helped me solve this problem in a reasonable amount of time. Here's what I did -
Create a file using Excel containing the million integers from 0 to 1000000
Upload this file into Hadoop
Write a Hive query with 4096 lines like below:
a0 = SELECT IPDecimal + (1000000 * 1) + 1 AS IPDecimal FROM #file;
a1 = SELECT IPDecimal + (1000000 * 2) + 1 AS IPDecimal FROM #file;
.
.
.
a4095 = SELECT IPDecimal + (1000000 * 4096) + 1 AS IPDecimal FROM #file;
Output the result of each SELECT statement above to a separate file, then consolidate the integers from all of those files into one single file.

C - Calling chmod function causes unexpected results

I'm writing a program that needs to be able to set file permissions, but for whatever reason, chmod is not behaving in the way I would expect it to. For a couple tests, I attempted to create two different files (fileOne.txt, and fileTwo.txt). fileOne.txt should have permissions set to 600, while fileTwo.txt should have permissions set to 777.
Running my program results in the following:
fileOne.txt having permissions set to ---x-wx--T
fileTwo.txt having permissions set to -r----x--t
?? WHAT?
Below is my code. The results of my printf are as anticipated (600, 777), so why does chmod not like this?
printf("chmod = %d\n", (int)getHeader.p_owner * 100 + (int)getHeader.p_group * 10 + (int)getHeader.p_world);
chmod(getHeader.file_name, (int)getHeader.p_owner * 100 + (int)getHeader.p_group * 10 + (int)getHeader.p_world);
UNIX file system permissions are octal, not decimal, so multiplying the digits by 100 and 10 gives unexpected results.
Permissions are written in octal, so 600 must be 0600 in C (which is 384 in decimal).
Hence code should be:
printf("chmod = %d\n", (int)getHeader.p_owner * 100 + (int)getHeader.p_group * 10 + (int)getHeader.p_world);
chmod(getHeader.file_name, (int)getHeader.p_owner * 0100 + (int)getHeader.p_group * 010 + (int)getHeader.p_world);
