Numericals on Token Bucket - congestion-control

Question
For a host machine that uses the token bucket algorithm for congestion control, the token bucket has a capacity of 1 megabyte and the maximum output rate is 20 megabytes per second. Tokens arrive at a rate to sustain output at a rate of 10 megabytes per second. The token bucket is currently full and the machine needs to send 12 megabytes of data. The minimum time required to transmit the data is _____________ seconds.
My Approach
Initially the token bucket is full. The rate at which it empties is (20-10) = 10 MBps, so the time taken to empty the 1 MB bucket is 1/10, i.e. 0.1 sec.
But the answer is given as 1.2 sec.

The token bucket has a capacity of 1 megabyte (maximum capacity C).
Here one byte is considered one token
⇒ C = 1M tokens
The maximum output rate is 20 megabytes per second (M = 20 MBps).
Tokens arrive at a rate that sustains output at 10 megabytes per second; the sustained output rate equals the token arrival rate
⇒ input rate R = 10 MBps
Unlike the leaky bucket, idle hosts can save up c ≤ C tokens in order to send larger bursts later.
When we begin the transfer, the tokens already present in the token bucket are sent to the network at once,
i.e. if the token bucket initially holds c tokens, then c tokens' worth of data will instantly be present in the network.
Time to empty the token bucket
c: the initial number of tokens in the bucket
R: every second we get R new tokens (input rate)
M: every second M tokens are consumed (maximum output rate)
INPUT FLOW: the number of tokens available to enter the network during a time interval t is c + Rt
OUTPUT FLOW: the number of tokens the network can carry during a time interval t is Mt
INPUT FLOW = OUTPUT FLOW
⇒ c + Rt = Mt
⇒ t = c/(M-R) = 1/(20-10) = 0.1 sec
Given that the token bucket is full (c = C), we now have two cases:
To transfer the 1M initial tokens, is it instant, with t = 0?
Or does it take 1/(20-10) = 0.1 sec?
Case 1: the 1M initial tokens are transferred instantly, with t = 0
Consider the equation
INPUT FLOW = c + Rt
This means that the c tokens (initially contained in the token bucket) are transmitted without any delay.
Unlike the leaky bucket, a token bucket keeps accumulating tokens while the sender is idle. Once the sender is ready to send packets, the packets take the tokens and are transmitted to the network ⇒ the c term. We then add the R tokens produced in time t to finally get the INPUT FLOW.
⇒ 1 MB is transmitted instantly. Now we are left with 11 MB to transmit.
To transfer the remaining 11 MB:
at t = 0 we begin transmitting the 11 MB of data
at t = 0.1 sec: 1 MB (1 MB transferred)
at t = 0.2 sec: 1 MB (2 MB transferred)
..
..
at t = 1.1 sec: 1 MB (11 MB transferred)
Therefore, to transfer 12 MB it takes 0 sec + 1.1 sec = 1.1 sec.
Case 2: transferring the 1M initial tokens takes c/(M-R) = 0.1 sec
(If it takes 0.1 sec for 1 MB, one could argue that it will take 1.2 sec for 12 MB.)
During that 0.1 sec, 0.1 * 10 MBps = 1M new tokens fill the bucket back up.
t = 0 s: begin to transfer the 12 MB of data
t = 0.1 s: 1 MB (1 MB transferred)
t = 0.2 s: 1 MB (2 MB transferred)
t = 0.3 s: 1 MB (3 MB transferred)
..
..
t = 1.2 s: 1 MB (12 MB transferred)
Therefore, to transfer 12 MB it takes 1.2 sec.
The question does not clearly specify which of these cases applies, and in that situation it is common practice to follow the best case.
Therefore the answer would be 1.1 sec.
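As a quick sanity check, here is a small Python sketch of both cases (just a sketch; the variable names are mine, not from the question):

# Figures from the question: bucket capacity C (MB), maximum output
# rate M (MBps), token arrival rate R (MBps), and data to send (MB).
C, M, R, data = 1.0, 20.0, 10.0, 12.0

# Case 1: the initial C tokens are spent instantly, and the remaining
# data drains at the token arrival rate R.
t_case1 = (data - C) / R
print(t_case1)  # 1.1 seconds

# Case 2: every 1 MB costs C/(M - R) = 0.1 sec (empty plus refill),
# and there are data/C such steps, giving data/(M - R) overall.
t_case2 = data / (M - R)
print(t_case2)  # 1.2 seconds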
More information: visit Gate Overflow - GATE 2016 question on token bucket

Related

Confusion in max connections allowed in AWS RDS types

I know we can find the max connections by {DBInstanceClassMemory/12582880} (12582880 is the magic number for my AWS resources). However, for db.m4.large (8 GB RAM) I checked online in many places that maxConnections is 648. However, when I did the calculations, I found
8 * 1000000000/12582880 = 635.7844944877
8 * 1024 * 1024 * 1024 / 12582880 = 682.6684027822
Similarly for db.t2.small
2 * 1000000000/12582880 = 158.9461236219
2 * 1024 * 1024 * 1024 / 12582880 = 170.6671006955
according to the internet: 150
Please help with finding the correct number. I cannot open MySQL console on the AWS instance due to some restrictions.
This AWS page has a table of instance classes and their default max_connections limits:
https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Managing.Performance.html
excerpt:
db.t2.small 45
db.t2.medium 90
db.t3.small 45
db.t3.medium 90
db.r3.large 1000
Text says:
the default connection limit is derived using a formula based on the DBInstanceClassMemory value.
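For reference, here is a small Python sketch of the arithmetic from the question (the function name is mine, and the 12582880 divisor is the asker's "magic number"; since it is unclear whether an advertised "8 GB" means 8e9 bytes or 8 GiB, both readings are shown):

def estimated_max_connections(ram_gb, divisor=12582880):
    # DBInstanceClassMemory approximated two ways: decimal GB and binary GiB.
    as_gb = ram_gb * 1_000_000_000
    as_gib = ram_gb * 1024 ** 3
    return as_gb // divisor, as_gib // divisor

print(estimated_max_connections(8))  # db.m4.large: (635, 682)
print(estimated_max_connections(2))  # db.t2.small: (158, 170)

Neither reading matches the published defaults exactly; a likely explanation is that DBInstanceClassMemory is somewhat less than the advertised instance RAM, since some memory is reserved for the OS and RDS processes.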

3-byte array for time format {Hour}{Minute}{Second}

I am reading the system time (register TM11), and I want to get the minute from it.
system time is in this data format = 3 bytes:{Hour}{Minute}{Second}
I am not sure how to extract the minute using an array; my C code is below.
In the code, I use the read_register function to read the system time, and the pointer (byte*)&systime[1] to try to extract the minute. I am not sure this is the correct way to do so.
Let's say the time now is 07:48:29 AM; then TM11 will hold 07, 48, 29.
I want to extract "48", the minute, from TM11.
The time interval is 15 minutes.
Time passed = 48 % 15 = 3 minutes.
Putting this calculation into C:
byte systime[3];           // 3-byte array to hold TM11: {Hour}{Minute}{Second}
byte time_interval = 15;   // the time interval is 15 minutes

read_register(TM11, systime);
// "read_register" reads the 3-byte TM11 register into the array, so for
// 07:48:29 AM: systime[0] = 7 (hour), systime[1] = 48 (minute),
// systime[2] = 29 (second), assuming the register bytes arrive in
// {Hour}{Minute}{Second} order.

byte elaps_time = systime[1] % time_interval;
// elapsed-time calculation: 48 % 15 = 3

How do I change the power level between two nodes?

How do I decrease the losses by increasing power level?
Here is the code I am using:
https://github.com/maibewakoofhu/Unet
I am changing the power level using:
phy[1].powerLevel = -20.dB;
At a noise level of 68 dB and power level -20 dB, all DatagramReqs are sent successfully.
At a noise level of 70 dB and power level -20 dB, the DatagramReq fails.
Now, even increasing the power level to as high as 125 dB, the DatagramReq still fails.
I created a simpler version of your simulation to test the SNR and packet-loss relationship:
import org.arl.fjage.RealTimePlatform
import org.arl.unet.sim.channels.BasicAcousticChannel

platform = RealTimePlatform

channel = [
  model: BasicAcousticChannel,
  carrierFrequency: 25.kHz,
  bandwidth: 4096.Hz,
  spreading: 2,
  temperature: 25.C,
  salinity: 35.ppt,
  noiseLevel: 73.dB,
  waterDepth: 1120.m
]

simulate {
  node 'C', address: 31, location: [180.m, 0, -1000.m], web: 8101
  node 'A', address: 21, location: [0.m, 0.m, 0.m], web: 8102
}
The web: entries allow us to interact with each of the nodes to explore what is happening. I connect to each node (http://localhost:8101/shell.html and http://localhost:8102/shell.html) and subscribe phy to see all physical-layer events.
Now, from node A, I try broadcasting frames to see (at various power levels) if node C receives them:
> plvl -20
OK
> phy << new TxFrameReq()
AGREE
On node C, you'll see receptions, if successful:
phy >> RxFrameStartNtf:INFORM[type:CONTROL rxTime:3380134843]
phy >> RxFrameNtf:INFORM[type:CONTROL from:21 rxTime:3380134843]
or bad frames if not:
phy >> RxFrameStartNtf:INFORM[type:CONTROL rxTime:3389688843]
phy >> BadFrameNtf:INFORM[type:CONTROL rxTime:3389688843]
Observations:
- At plvl -20 dB, almost all frames fail.
- At plvl -10 dB, almost all frames are successful.
- At plvl -16 dB, I get a frame loss of about 19%.
The transition between all frames failing and all frames succeeding is expected to be quite sharp, as is typical in reality for stationary noise, since FEC performance tends to be quite non-linear. So you can expect big differences in frame-loss rate around the transition region (in this example, around -16 dB).
Do also note that plvl 125 dB isn't valid (the range of plvl is given by phy.minPowerLevel to phy.maxPowerLevel, -96 dB to 0 dB by default), so setting it would not have worked:
> plvl 125
phy[1]: WARNING: Parameter powerLevel set to 0.0
phy[2]: WARNING: Parameter powerLevel set to 0.0
phy[3]: WARNING: Parameter powerLevel set to 0.0
phy: WARNING: Parameter signalPowerLevel set to 0.0

Is it possible to determine ENCRYPTBYKEY maximum returned value by the clear text type?

I am going to encrypt several fields in an existing table. Basically, the following encryption technique is going to be used:
CREATE MASTER KEY ENCRYPTION
BY PASSWORD = 'sm_long_password#'
GO
CREATE CERTIFICATE CERT_01
WITH SUBJECT = 'CERT_01'
GO
CREATE SYMMETRIC KEY SK_01
WITH ALGORITHM = AES_256 ENCRYPTION
BY CERTIFICATE CERT_01
GO
OPEN SYMMETRIC KEY SK_01 DECRYPTION
BY CERTIFICATE CERT_01
SELECT ENCRYPTBYKEY(KEY_GUID('SK_01'), 'test')
CLOSE SYMMETRIC KEY SK_01
DROP SYMMETRIC KEY SK_01
DROP CERTIFICATE CERT_01
DROP MASTER KEY
ENCRYPTBYKEY returns varbinary with a maximum size of 8,000 bytes. Knowing the table fields that are going to be encrypted (for example: nvarchar(128), varchar(31), bigint), how can I define the length of the new varbinary columns?
You can see the full specification here
So let's calculate:
16 bytes key GUID
 4 bytes header
16 bytes IV (for AES, a 16-byte block cipher)
Plus then the size of the encrypted message:
 4 bytes magic number
 2 bytes integrity-bytes length
 0 bytes integrity bytes (warning: may be wrongly placed in the table)
 2 bytes (plaintext) message length
 m bytes (plaintext) message
CBC padding bytes
The CBC padding bytes should be calculated the following way:
16 - ((m + 4 + 2 + 2) % 16)
as padding is always applied. This will result in a number of padding bytes in the range 1..16. A sneaky shortcut is to just add 16 bytes to the total, but this may mean that you're specifying up to 15 bytes that are never used.
We can shorten this to 36 + 8 + m + 16 - ((m + 8) % 16), or 60 + m - ((m + 8) % 16). Or, if you use the little trick specified above and you don't care about the wasted bytes: 76 + m, where m is the message input length.
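To make the bookkeeping concrete, here is a small Python sketch of the calculation above (plain arithmetic; the function name is mine):

def encryptbykey_len(m):
    # Expected ENCRYPTBYKEY output length for an m-byte plaintext,
    # per the component breakdown above (no integrity bytes).
    preamble = 16 + 4 + 16         # key GUID + header + IV
    message = 4 + 2 + 0 + 2 + m    # magic + lengths + integrity + plaintext
    padding = 16 - ((m + 8) % 16)  # CBC padding, always applied (1..16)
    return preamble + message + padding

print(encryptbykey_len(4))    # 'test' -> 52
print(encryptbykey_len(128))  # a 128-byte field -> 180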
Notes:
beware that the first byte in the header contains the version number of the scheme; this answer does not and cannot specify how many bytes will be added or removed if a different internal message format or encryption scheme is used;
using integrity bytes is highly recommended in case you want to protect your DB fields against change (keeping the amount of money in an account confidential is less important than making sure the amount cannot be changed).
The example on the page assumes single byte encoding for text characters.
Based upon some tests in SQL Server 2008, the following formula seems to work. Note that @ClearText is VARCHAR():
52 + (16 * ((LEN(@ClearText) + 8) / 16))
This is roughly compatible with the answer by Maarten Bodewes, except that my tests showed the DATALENGTH(myBinary) to always be of the form 52 + (z * 16), where z is an integer.
LEN(myVarCharString)   DATALENGTH(encryptedString)
--------------------   ---------------------------
0 through 7            usually 52, but occasionally 68 or 84
8 through 23           usually 68, but occasionally 84
24 through 39          usually 84
40 through 50          100
The "myVarCharString" was a table column defined as VARCHAR(50). The table contained 150,000 records. The mention of "occasionally" is an instance of about 1 out of 10,000 records that would get bumped into a higher bucket; very strange. For LEN() of 24 and higher, there were not enough records to get the weird anomaly.
Here is some Perl code that takes a proposed length for "myVarCharString" as input from the terminal and produces an expected size for the EncryptByKey() result. The function "int()" is equivalent to "Math.floor()".
while ($len = <>) {
    print 52 + (16 * int(($len + 8) / 16)), "\n";
}
You might want to use this formula to calculate a size, then add 16 to allow for the anomaly.
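The same rule of thumb in Python (a sketch of the suggestion above, with a hypothetical function name, not an official formula):

def varbinary_size(clear_len):
    # Empirical formula from the tests above, plus 16 bytes of headroom
    # to allow for the occasional anomaly.
    return 52 + 16 * ((clear_len + 8) // 16) + 16

print(varbinary_size(50))  # a VARCHAR(50) column -> 116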

Need to optimize Teradata query

I am trying to optimize the following Teradata query; can anyone please help with this? It is taking a lot of time to retrieve records.
select top 100 I.item_sku_nbr,L.loc_nbr,MIS.MVNDR_PRTY_ID from
QA_US_MASTER_VIEWS.item I,
qa4_US_MASTER_VIEWS.location L,
qa4_US_MASTER_VIEWS.item_str IST,
qa4_US_MASTER_VIEWS.mvndr_item_str MIS
where MIS.str_LOC_ID = L.loc_id and
mis.str_loc_id = IST.str_loc_id and
IST.str_loc_id = L.loc_id and
MIS.ITEM_STAT_CD = IST.ITEM_STAT_CD and
IST.ITEM_ID = I.ITEM_ID and
MIS.ITEM_ID = IST.ITEM_ID and
I.ITEM_STAT_CD = 100 and
IST.curr_rmeth_cd = 2 and
MIS.curr_dsvc_typ_cd = 3 and
MIS.OK_TO_ORD_FLG = 'Y' and
MIS.EFF_END_DT = DATE '9999-12-31' and
IST.EFF_END_DT = DATE '9999-12-31' and
MIS.ACTV_FLG ='Y' and
IST.ACTV_FLG ='Y' and I.ACTV_FLG='Y'
Explain plan for QA_US_MASTER.LOCATION:
1) First, we lock QA_US_MASTER.LOCATION in view
qa4_US_MASTER_VIEWS.Location for access.
2) Next, we do an all-AMPs RETRIEVE step from QA_US_MASTER.LOCATION
in view qa4_US_MASTER_VIEWS.Location by way of an all-rows scan
with no residual conditions into Spool 1 (group_amps), which is
built locally on the AMPs. The size of Spool 1 is estimated with
high confidence to be 10,903 rows (1,613,644 bytes). The
estimated time for this step is 0.01 seconds.
3) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1. The total estimated time is 0.01 seconds.
Explain plan for qa4_US_MASTER_VIEWS.item_str :
1) First, we lock QA_US_MASTER.item_str in view
qa4_US_MASTER_VIEWS.item_str for access.
2) Next, we do an all-AMPs RETRIEVE step from QA_US_MASTER.item_str
in view qa4_US_MASTER_VIEWS.item_str by way of an all-rows scan
with no residual conditions into Spool 1 (group_amps), which is
built locally on the AMPs. The input table will not be cached in
memory, but it is eligible for synchronized scanning. The result
spool file will not be cached in memory. The size of Spool 1 is
estimated with low confidence to be 1,229,047,917 rows (
325,697,698,005 bytes). The estimated time for this step is 4
minutes and 51 seconds.
3) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1. The total estimated time is 4 minutes and 51 seconds.
Explain plan for QA_US_MASTER.ITEM:
1) First, we lock QA_US_MASTER.ITEM in view qa4_US_MASTER_VIEWS.item
for access.
2) Next, we do an all-AMPs RETRIEVE step from QA_US_MASTER.ITEM in
view qa4_US_MASTER_VIEWS.item by way of an all-rows scan with no
residual conditions into Spool 1 (group_amps), which is built
locally on the AMPs. The size of Spool 1 is estimated with high
confidence to be 1,413,284 rows (357,560,852 bytes). The
estimated time for this step is 0.40 seconds.
3) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1. The total estimated time is 0.40 seconds.
Explain plan for QA_US_MASTER.MVNDR_ITEM_STR:
1) First, we lock QA_US_MASTER.MVNDR_ITEM_STR in view
qa4_US_MASTER_VIEWS.mvndr_item_str for access.
2) Next, we do an all-AMPs RETRIEVE step from
QA_US_MASTER.MVNDR_ITEM_STR in view
qa4_US_MASTER_VIEWS.mvndr_item_str by way of an all-rows scan with
no residual conditions into Spool 1 (group_amps), which is built
locally on the AMPs. The input table will not be cached in memory,
but it is eligible for synchronized scanning. The result spool
file will not be cached in memory. The size of Spool 1 is
estimated with high confidence to be 1,316,279,746 rows (
327,753,656,754 bytes). The estimated time for this step is 6
minutes and 4 seconds.
3) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1. The total estimated time is 6 minutes and 4 seconds.
Explain plan for Whole query:
1) First, we lock QA_US_MASTER.ITEM in view QA_US_MASTER_VIEWS.item
for access, we lock QA_US_MASTER.LOCATION in view
qa4_US_MASTER_VIEWS.location for access, we lock
QA_US_MASTER.MVNDR_ITEM_STR in view
qa4_US_MASTER_VIEWS.mvndr_item_str for access, and we lock
QA_US_MASTER.item_str in view qa4_US_MASTER_VIEWS.item_str for
access.
2) Next, we execute the following steps in parallel.
1) We do an all-AMPs RETRIEVE step from QA_US_MASTER.LOCATION in
view qa4_US_MASTER_VIEWS.location by way of an all-rows scan
with no residual conditions into Spool 3 (all_amps)
(compressed columns allowed), which is duplicated on all AMPs.
The size of Spool 3 is estimated with high confidence to be
1,013,979 rows (20,279,580 bytes). The estimated time for
this step is 0.03 seconds.
2) We do an all-AMPs RETRIEVE step from QA_US_MASTER.ITEM in
view QA_US_MASTER_VIEWS.item by way of an all-rows scan with
a condition of ("(QA_US_MASTER.ITEM in view
QA_US_MASTER_VIEWS.item.ITEM_STAT_CD = 100) AND
(QA_US_MASTER.ITEM in view QA_US_MASTER_VIEWS.item.ACTV_FLG =
'Y')") into Spool 4 (all_amps) (compressed columns allowed)
fanned out into 14 hash join partitions, which is duplicated
on all AMPs. The size of Spool 4 is estimated with low
confidence to be 30,819,363 rows (678,025,986 bytes). The
estimated time for this step is 0.81 seconds.
3) We do an all-AMPs JOIN step from Spool 3 (Last Use) by way of an
all-rows scan, which is joined to QA_US_MASTER.item_str in view
qa4_US_MASTER_VIEWS.item_str by way of an all-rows scan with a
condition of
("(QA_US_MASTER.item_str in view
qa4_US_MASTER_VIEWS.item_str.CURR_RMETH_CD = 2) AND
((QA_US_MASTER.item_str in view
qa4_US_MASTER_VIEWS.item_str.EFF_END_DT = DATE '9999-12-31') AND
(QA_US_MASTER.item_str in view
qa4_US_MASTER_VIEWS.item_str.ACTV_FLG = 'Y'))"). Spool 3 and
QA_US_MASTER.item_str are joined using a dynamic hash join, with a
join condition of ("QA_US_MASTER.item_str.STR_LOC_ID = LOC_ID").
The input table QA_US_MASTER.item_str will not be cached in memory.
The result goes into Spool 5 (all_amps) (compressed columns
allowed), which is built locally on the AMPs into 14 hash join
partitions. The size of Spool 5 is estimated with no confidence
to be 69,133,946 rows (2,419,688,110 bytes). The estimated time
for this step is 1 minute and 8 seconds.
4) We do an all-AMPs JOIN step from Spool 4 (Last Use) by way of an
all-rows scan, which is joined to Spool 5 (Last Use) by way of an
all-rows scan. Spool 4 and Spool 5 are joined using a hash join
of 14 partitions, with a join condition of ("(ITEM_ID = ITEM_ID)
AND (ACTV_FLG = ACTV_FLG)"). The result goes into Spool 6
(all_amps) (compressed columns allowed), which is redistributed by
the hash code of (QA_US_MASTER.item_str.STR_LOC_ID,
QA_US_MASTER.item_str.ITEM_STAT_CD, QA_US_MASTER.item_str.ITEM_ID,
QA_US_MASTER.ITEM.ITEM_ID, QA_US_MASTER.LOCATION.LOC_ID) to all
AMPs into 33 hash join partitions. The size of Spool 6 is
estimated with no confidence to be 36,434,893 rows (1,603,135,292
bytes). The estimated time for this step is 9.11 seconds.
5) We do an all-AMPs RETRIEVE step from QA_US_MASTER.MVNDR_ITEM_STR
in view qa4_US_MASTER_VIEWS.mvndr_item_str by way of an all-rows
scan with a condition of ("(QA_US_MASTER.MVNDR_ITEM_STR in view
qa4_US_MASTER_VIEWS.mvndr_item_str.CURR_DSVC_TYP_CD = 3) AND
((QA_US_MASTER.MVNDR_ITEM_STR in view
qa4_US_MASTER_VIEWS.mvndr_item_str.EFF_END_DT = DATE '9999-12-31')
AND ((QA_US_MASTER.MVNDR_ITEM_STR in view
qa4_US_MASTER_VIEWS.mvndr_item_str.ACTV_FLG = 'Y') AND
(QA_US_MASTER.MVNDR_ITEM_STR in view
qa4_US_MASTER_VIEWS.mvndr_item_str.OK_TO_ORD_FLG = 'Y')))") into
Spool 7 (all_amps) (compressed columns allowed) fanned out into 33
hash join partitions, which is redistributed by the hash code of (
QA_US_MASTER.MVNDR_ITEM_STR.ITEM_ID,
QA_US_MASTER.MVNDR_ITEM_STR.STR_LOC_ID,
QA_US_MASTER.MVNDR_ITEM_STR.ITEM_STAT_CD,
QA_US_MASTER.MVNDR_ITEM_STR.ITEM_ID,
QA_US_MASTER.MVNDR_ITEM_STR.STR_LOC_ID) to all AMPs. The input
table will not be cached in memory, but it is eligible for
synchronized scanning. The size of Spool 7 is estimated with no
confidence to be 173,967,551 rows (5,914,896,734 bytes). The
estimated time for this step is 2 minutes and 23 seconds.
6) We do an all-AMPs JOIN step from Spool 6 (Last Use) by way of an
all-rows scan, which is joined to Spool 7 (Last Use) by way of an
all-rows scan. Spool 6 and Spool 7 are joined using a hash join
of 33 partitions, with a join condition of ("(STR_LOC_ID =
STR_LOC_ID) AND ((ITEM_STAT_CD = ITEM_STAT_CD) AND ((ITEM_ID =
ITEM_ID) AND ((ACTV_FLG = OK_TO_ORD_FLG) AND ((ACTV_FLG = ACTV_FLG)
AND ((EFF_END_DT = EFF_END_DT) AND ((ACTV_FLG = ACTV_FLG) AND
((OK_TO_ORD_FLG = ACTV_FLG) AND ((ITEM_ID = ITEM_ID) AND
(STR_LOC_ID = LOC_ID )))))))))"). The result goes into Spool 2
(all_amps) (compressed columns allowed), which is built locally on
the AMPs. The size of Spool 2 is estimated with no confidence to
be 12,939,628 rows (336,430,328 bytes). The estimated time for
this step is 4.00 seconds.
7) We do an all-AMPs STAT FUNCTION step from Spool 2 by way of an
all-rows scan into Spool 10, which is redistributed by hash code
to all AMPs. The result rows are put into Spool 1 (group_amps),
which is built locally on the AMPs. This step is used to retrieve
the TOP 100 rows. Load distribution optimization is used.
If this step retrieves less than 100 rows, then execute step 8.
The size is estimated with no confidence to be 100 rows (3,200
bytes).
8) We do an all-AMPs STAT FUNCTION step from Spool 2 (Last Use) by
way of an all-rows scan into Spool 10 (Last Use), which is
redistributed by hash code to all AMPs. The result rows are put
into Spool 1 (group_amps), which is built locally on the AMPs.
This step is used to retrieve the TOP 100 rows. The size is
estimated with no confidence to be 100 rows (3,200 bytes).
9) Finally, we send out an END TRANSACTION step to all AMPs involved
in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of
statement 1.
There's no ORDER BY in your query, so you just want 100 random rows?
In Teradata the TOP is done after the full result set has been created. You should move the TOP into a Derived Table like:
select I.item_sku_nbr,L.loc_nbr,MIS.MVNDR_PRTY_ID from
QA_US_MASTER_VIEWS.item I,
qa4_US_MASTER_VIEWS.location L,
(SELECT TOP 100 * FROM qa4_US_MASTER_VIEWS.item_str) IST,
qa4_US_MASTER_VIEWS.mvndr_item_str MIS
where MIS.str_LOC_ID = L.loc_id and
mis.str_loc_id = IST.str_loc_id and
IST.str_loc_id = L.loc_id and
MIS.ITEM_STAT_CD = IST.ITEM_STAT_CD and
IST.ITEM_ID = I.ITEM_ID and
MIS.ITEM_ID = IST.ITEM_ID and
I.ITEM_STAT_CD = 100 and
IST.curr_rmeth_cd = 2 and
MIS.curr_dsvc_typ_cd = 3 and
MIS.OK_TO_ORD_FLG = 'Y' and
MIS.EFF_END_DT = DATE '9999-12-31' and
IST.EFF_END_DT = DATE '9999-12-31' and
MIS.ACTV_FLG ='Y' and
IST.ACTV_FLG ='Y' and I.ACTV_FLG='Y'
