How do I change the power level between two nodes?

How do I reduce losses by increasing the power level?
Here is the code I am using:
https://github.com/maibewakoofhu/Unet
I am changing the power level using:
phy[1].powerLevel = -20.dB;
At a noise level of 68 dB and a power level of -20 dB, all DatagramReqs are delivered successfully.
At a noise level of 70 dB and a power level of -20 dB, the DatagramReqs fail.
Even after increasing the power level as high as 125 dB, the DatagramReqs still fail.

I created a simpler version of your simulation to test the SNR and packet-loss relationship:
import org.arl.fjage.RealTimePlatform
import org.arl.unet.sim.channels.BasicAcousticChannel

platform = RealTimePlatform

channel = [
  model: BasicAcousticChannel,
  carrierFrequency: 25.kHz,
  bandwidth: 4096.Hz,
  spreading: 2,
  temperature: 25.C,
  salinity: 35.ppt,
  noiseLevel: 73.dB,
  waterDepth: 1120.m
]

simulate {
  node 'C', address: 31, location: [180.m, 0, -1000.m], web: 8101
  node 'A', address: 21, location: [0.m, 0.m, 0.m], web: 8102
}
The web: entries allow us to interact with each node to explore what is happening. I connect to both nodes (http://localhost:8101/shell.html and http://localhost:8102/shell.html) and run subscribe phy to see all physical-layer events.
Now, from node A, I try broadcasting frames to see (at various power levels) if node C receives them:
> plvl -20
OK
> phy << new TxFrameReq()
AGREE
On node C, you'll see receptions if they are successful:
phy >> RxFrameStartNtf:INFORM[type:CONTROL rxTime:3380134843]
phy >> RxFrameNtf:INFORM[type:CONTROL from:21 rxTime:3380134843]
or bad frames if not:
phy >> RxFrameStartNtf:INFORM[type:CONTROL rxTime:3389688843]
phy >> BadFrameNtf:INFORM[type:CONTROL rxTime:3389688843]
Observations:
- At plvl -20 dB, almost all frames fail.
- At plvl -10 dB, almost all frames are successful.
- At plvl -16 dB, I get a frame loss of about 19%.
The transition from all frames failing to all frames succeeding is expected to be quite sharp, as is typical in practice for stationary noise, because FEC performance tends to be highly non-linear. So you should expect large differences in frame-loss rate around the transition region (in this example, around -16 dB).
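If you want to measure the loss rate at a given power level yourself, a minimal sketch is to send a batch of frames from node A's web shell and count the resulting RxFrameNtf vs BadFrameNtf notifications on node C (plvl and TxFrameReq are used above; delay is the standard shell sleep command, and the batch size and inter-frame delay are just example values):
plvl -16
100.times {
  phy << new TxFrameReq()     // broadcast one CONTROL frame
  delay(2000)                 // give each frame time to finish before sending the next
}
The fraction of BadFrameNtf out of all RxFrameStartNtf seen on node C then gives the frame-loss rate at that power level.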
Also note that plvl 125 dB isn't valid (the range of plvl is given by phy.minPowerLevel to phy.maxPowerLevel, -96 dB to 0 dB by default), so setting it would not have worked:
> plvl 125
phy[1]: WARNING: Parameter powerLevel set to 0.0
phy[2]: WARNING: Parameter powerLevel set to 0.0
phy[3]: WARNING: Parameter powerLevel set to 0.0
phy: WARNING: Parameter signalPowerLevel set to 0.0

Related

What do the values in 'alsa --dump-hw-params' represent?

I am having problems configuring ALSA on my RHEL 7.5 machine.
Part of my attempted solution is to change settings in /etc/asound.conf. I have tried numerous permutations but I continue to hear "jitter" in my sounds (.raw files).
I am using 'aplay --dump-hw-params' to get the parameters for my sound hardware.
Using this command:
aplay --dump-hw-params Front_Center.wav
These are the results I get:
Playing WAVE 'Front_Center.wav' : Signed 16 bit Little Endian, Rate 48000 Hz, Mono
HW Params of device "default":
--------------------
ACCESS: MMAP_INTERLEAVED MMAP_NONINTERLEAVED MMAP_COMPLEX RW_INTERLEAVED RW_NONINTERLEAVED
FORMAT: S8 U8 S16_LE S16_BE U16_LE U16_BE S24_LE S24_BE U24_LE U24_BE S32_LE S32_BE U32_LE U32_BE FLOAT_LE FLOAT_BE FLOAT64_LE FLOAT64_BE MU_LAW A_LAW IMA_ADPCM S24_3LE S24_3BE U24_3LE U24_3BE S20_3LE S20_3BE U20_3LE U20_3BE S18_3LE S18_3BE U18_3LE U18_3BE
SUBFORMAT: STD
SAMPLE_BITS: [4 64]
FRAME_BITS: [4 640000]
CHANNELS: [1 10000]
RATE: [4000 4294967295)
PERIOD_TIME: (11609 11610)
PERIOD_SIZE: (46 49864571)
PERIOD_BYTES: (23 4294967295)
PERIODS: (0 17344165)
BUFFER_TIME: [1 4294967295]
BUFFER_SIZE: [92 797831566]
BUFFER_BYTES: [46 4294967295]
TICK_TIME: ALL
--------------------
I'd like to know what the values within the parentheses and brackets mean in general.
Are they ranges?
What is the difference between the use of parentheses vs. brackets?
Thanks,
Ian
They are the minimum and maximum values supported by the specific hardware device you are using. The notation follows the usual interval convention: a square bracket means that endpoint is itself allowed, while a parenthesis means the range is open at that end (the value must lie strictly inside it). For example, RATE: [4000 4294967295) means the rate can be anything from 4000 Hz (inclusive) up to, but not including, 4294967295 Hz.
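If it helps, the same ranges can also be queried programmatically through the ALSA hw_params API; a minimal C sketch (using the "default" playback device from your dump, with most error handling omitted):
#include <alsa/asoundlib.h>
#include <stdio.h>

int main(void)
{
    snd_pcm_t *pcm;
    snd_pcm_hw_params_t *hw;
    unsigned int rate_min, rate_max;
    int dir;

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
        return 1;
    snd_pcm_hw_params_alloca(&hw);
    snd_pcm_hw_params_any(pcm, hw);                       /* fill in the full ranges the hardware supports */
    snd_pcm_hw_params_get_rate_min(hw, &rate_min, &dir);  /* lower end of the RATE interval */
    snd_pcm_hw_params_get_rate_max(hw, &rate_max, &dir);  /* upper end of the RATE interval */
    printf("RATE: %u .. %u Hz\n", rate_min, rate_max);
    snd_pcm_close(pcm);
    return 0;
}
The other lines in the dump (PERIOD_TIME, BUFFER_SIZE, and so on) have corresponding snd_pcm_hw_params_get_*_min/max functions.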

GPS fix_type Value = 4

After looking through the logs of a recent test flight, my craft reported a value of 4 for the fix_type variable of the class dronekit.GPSInfo(eph, epv, fix_type, satellites_visible).
eph and epv had no value, and satellites_visible varied between 9 and 12.
The flight was 30 minutes long. The GPS module is a u-blox GPS + compass module.
Indoors I get fix_type 0 or 1 as expected, but outdoors I get 3 or 4. I can find info on a 3D fix, but what does a 4D GPS fix mean?
How is this variable getting set in the source code?
class GPSInfo(object):
    """
    Standard information about GPS.

    If there is no GPS lock the parameters are set to ``None``.

    :param Int eph: GPS horizontal dilution of position (HDOP).
    :param Int epv: GPS vertical dilution of position (VDOP).
    :param Int fix_type: 0-1: no fix, 2: 2D fix, 3: 3D fix
    :param Int satellites_visible: Number of satellites visible.

    .. todo:: FIXME: GPSInfo class - possibly normalize eph/epv? report fix type as string?
    """

    def __init__(self, eph, epv, fix_type, satellites_visible):
        self.eph = eph
        self.epv = epv
        self.fix_type = fix_type
        self.satellites_visible = satellites_visible

    def __str__(self):
        return "GPSInfo:fix=%s,num_sat=%s" % (self.fix_type, self.satellites_visible)
Other u-blox GPS code (the gpsData struct) defines the fix type as having the following values:
GNSSfix Type:
0: no fix
1: dead reckoning only
2: 2D-fix
3: 3D-fix
4: GNSS + dead reckoning combined,
5: time only fix
So the values 4 and 5 do not mean 4D and 5D fixes, as you might assume from the values for 2 and 3; a value of 4 (GNSS + dead reckoning) also makes sense given the scenario you describe.
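As a minimal sketch of how you might read this in your own scripts (the attribute names follow the dronekit API; the connection string and the >= 3 threshold, which treats fix_type 4 as at least as good as a plain 3D fix, are just example choices):
from dronekit import connect

vehicle = connect('127.0.0.1:14550', wait_ready=True)  # example connection string
fix = vehicle.gps_0.fix_type                           # GPSInfo, populated from the MAVLink GPS_RAW_INT message
if fix >= 3:
    print("3D position available (fix_type=%d)" % fix)
else:
    print("No 3D fix yet (fix_type=%d)" % fix)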

BPF write fails with 1514 bytes

I'm unable to write 1514 bytes (including the L2 information) via write to /dev/bpf. I can write smaller packets (meaning I think the basic setup is correct), but I see "Message too long" with the full-length packets. This is on Solaris 11.2.
It's as though the write is treating this as the write of an IP packet.
Per the specs, there are 1500 bytes for the IP portion, 14 for the L2 headers (18 if tagged), and 4 bytes for the frame check sequence.
I've set the feature that I thought would prevent the OS from adding its own layer 2 information (yes, I also find it odd that a 1 disables it; pseudo code below):
int hdr_complete = 1;
ioctl(bpf, BIOCSHDRCMPLT, &hdr_complete);
The packets are never larger than 1514 bytes (they're captured via a port span and start with the source and destination MAC addresses; I'm effectively replaying them).
I'm sure I'm missing something basic here, but I'm hitting a dead end. Any pointers would be much appreciated!
Partial Answer: This link was very helpful.
Update 3/20/2017
Code works on Mac OS X, but on Solaris results in repeated "Interrupted system call" (EINTR). I'm starting to read scary things about having to implement signal handling, which I'd rather not do...
Sample code on GitHub based on various code I've found via Google. On most systems you have to run this with root privileges unless you've granted "net_rawaccess" to the user.
Still trying to figure out the EINTR issue. Output from truss:
27158/1: 0.0122 0.0000 write(3, 0x08081DD0, 1514) Err#4 EINTR
27158/1: \0 >E1C09B92 4159E01C694\b\0 E\005DC82E1 #\0 #06F8 xC0A81C\fC0A8
27158/1: 1C eC8EF14 Q nB0BC 4 V #FBDE8010FFFF8313\0\00101\b\n ^F3 W # C E
27158/1: d SDD G14EDEB ~ t sCFADC6 qE3C3B7 ,D9D51D VB0DFB0\b96C4B8EC1C90
27158/1: 12F9D7 &E6C2A4 Z 6 t\bFCE5EBBF9C1798 r 4EF "139F +A9 cE3957F tA7
27158/1: x KCD _0E qB9 DE5C1 #CAACFF gC398D9F787FB\n & &B389\n H\t ~EF81
27158/1: C9BCE0D7 .9A1B13 [ [DE\b [ ECBF31EC3 z19CDA0 #81 ) JC9 2C8B9B491
27158/1: u94 iA3 .84B78AE09592 ;DA ] .F8 A811EE H Q o q9B 8A4 cF1 XF5 g
27158/1: EC ^\n1BE2C1A5C2 V 7FD 094 + (B5D3 :A31B8B128D ' J 18A <897FA3 u
EDIT 7 April 2017
The EINTR problem was the result of a bug in the sample code that I placed on GitHub. The code was not associating the bpf device with the actual interface, and Solaris was returning EINTR as a result.
Now I'm back to the "message too long" problem that I still haven't resolved.
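For reference, here is a minimal sketch of the interface-association step that was missing, written against the BSD-style BPF ioctls (this is how it looks on macOS, where the code worked; Solaris's BPF also offers BIOCSETLIF with a struct lifreq, so the details may differ there, and the interface name in the usage comment is only an example):
#include <sys/socket.h>
#include <sys/ioctl.h>
#include <net/if.h>
#include <net/bpf.h>
#include <fcntl.h>
#include <string.h>
#include <stdio.h>

/* Usage (example): int bpf = open_bpf("net0"); */
int open_bpf(const char *ifname)
{
    int bpf = open("/dev/bpf", O_RDWR);   /* or /dev/bpf0, /dev/bpf1, ... on systems without a cloning device */
    if (bpf < 0) { perror("open"); return -1; }

    struct ifreq ifr;
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, sizeof(ifr.ifr_name) - 1);
    if (ioctl(bpf, BIOCSETIF, &ifr) < 0) {    /* associate the bpf device with the interface */
        perror("BIOCSETIF");
        return -1;
    }

    int hdr_complete = 1;                      /* we supply our own L2 header */
    if (ioctl(bpf, BIOCSHDRCMPLT, &hdr_complete) < 0) {
        perror("BIOCSHDRCMPLT");
        return -1;
    }
    return bpf;
}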

Numericals on Token Bucket

Question
For a host machine that uses the token bucket algorithm for congestion control, the token bucket has a capacity of 1 megabyte and the maximum output rate is 20 megabytes per second. Tokens arrive at a rate to sustain output at a rate of 10 megabytes per second. The token bucket is currently full and the machine needs to send 12 megabytes of data. The minimum time required to transmit the data is _____________ seconds.
My Approach
Initially the token bucket is full. The rate at which it empties is (20 − 10) MBps, so the time taken to empty the 1 MB bucket is 1/10, i.e. 0.1 sec.
But the answer is given as 1.2 sec.
The token bucket has a capacity of 1 megabyte (maximum capacity C).
Here one byte is considered as one token,
⇒ C = 1 M tokens.
The output rate is 20 megabytes per second (M = 20 MBps).
Tokens arrive at a rate that sustains an output rate of 10 megabytes per second,
⇒ 20 − R = 10
⇒ input rate R = 10 MBps.
Unlike the leaky bucket, idle hosts can accumulate and save up c ≤ C tokens in order to send larger bursts later.
When we begin the transfer, the tokens already present in the token bucket are transmitted to the network at once;
i.e. if the token bucket initially holds c tokens, then those c tokens will instantly be present in the network.
Time to empty the token bucket:
c: the initial number of tokens in the bucket
R: the number of tokens arriving per second
M: the number of tokens transmitted per second (the output rate)
INPUT FLOW: the number of tokens available to enter the network during a time interval t is c + Rt
OUTPUT FLOW: the number of tokens actually sent into the network during a time interval t is Mt
INPUT FLOW = OUTPUT FLOW
⇒ c + Rt = Mt
⇒ t = c / (M − R) = 1 / (20 − 10) = 0.1 sec
Given that the token bucket is full (c = C), we now have two cases:
Case 1: the 1 M initial tokens are transferred instantly (t = 0), or
Case 2: transferring the 1 M tokens takes 1 / (20 − 10) = 0.1 sec.
Case 1: the 1 M initial tokens are transferred instantly (t = 0)
Consider the equation
INPUT FLOW = c + Rt
This means that the c tokens initially contained in the token bucket are transmitted without any delay.
Unlike the leaky bucket, a token bucket keeps accumulating tokens while the sender is idle. Once the sender is ready, the packets take those tokens and are transmitted to the network (that is the c term), and then we add the R tokens arriving per second over time t to finally get the INPUT FLOW.
⇒ 1 MB is transmitted instantly. Now we are left with 11 MB to transmit.
To transfer the remaining 11 MB:
at t = 0 we begin transmitting the 11 MB of data.
at t = 0.1 sec: 1 MB (1 MB transferred)
at t = 0.2 sec: 1 MB (2 MB transferred)
..
..
at t = 1.1 sec: 1 MB (11 MB transferred)
Therefore to transfer 12 MB it takes 1.1 sec + 0 sec = 1.1 sec.
Case 2: transferring the 1 M initial tokens takes 0.1 sec
(if it takes 0.1 sec for 1 MB, one could argue that it will take 1.2 sec for 12 MB)
During that 0.1 sec, 0.1 × 10 MBps = 1 M tokens are refilled.
t = 0 s: begin to transfer the 12 MB of data.
t = 0.1 s: 1 MB (1 MB transferred)
t = 0.2 s: 1 MB (2 MB transferred)
t = 0.3 s: 1 MB (3 MB transferred)
..
..
t = 1.2 s: 1 MB (12 MB transferred)
Therefore to transfer 12 MB it takes 1.2 sec.
The question does not clearly specify which of these cases applies, and it is common practice to follow the best case.
Therefore the answer would be 1.1 sec.
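For what it's worth, a common textbook derivation (treating the burst phase explicitly rather than as an instantaneous transfer of the initial tokens) arrives at the same 1.1 sec figure:
burst time: t_b = C / (M − R) = 1 / (20 − 10) = 0.1 sec
data sent during the burst: M × t_b = 20 × 0.1 = 2 MB
remaining data: 12 − 2 = 10 MB, sent at R = 10 MBps → 1.0 sec
total time: 0.1 + 1.0 = 1.1 sec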
More information: see the Gate Overflow discussion of the GATE 2016 question on the token bucket.

capturing webcam stream with V4L2 failed

I am getting started with the V4L2 framework on Ubuntu 10.04.
Currently I am using a webcam to do some tests. I am following this documentation to get started, and the installation worked fine. I downloaded and compiled the example application. The problem is the video output. I call the executable using:
# modprobe -r pwc
# modprobe -v pwc fps=15 compression=3 mbufs=4 fbufs=4 size=vga
# ./capturer_mmap -D /dev/video0 -w 640*480 -p 0 | ./viewer -w 640*480 -p 0
which gives this output:
Output on terminal:
window size 640*480
Video bytespreline = 1280
Display:
Image byte order = LSBFirst
Bitmap unit = 32
Bitmap bit order = LSBFirst
Bitmap pad = 32
Window:
Depth = 24
Red mask = 0x00ff0000
Green mask = 0x0000ff00
Blue mask = 0x000000ff
Bits per R/G/B = 8
Image byte order = LSBFirst
Bitmap unit = 32
Bitmap bit order = LSBFirst
Bitmap pad = 32
Depth = 24
Red mask = 0x00ff0000
Green mask = 0x0000ff00
Blue mask = 0x000000ff
Bits per pixel = 32
Bytes per line = 2560
IsShared = True
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 431 requests (19 known processed) with 0 events remaining.
root#my-laptop:/home/foo/V4l2_samples-0.4.1# ./capturer_mmap -D /dev/video0 -w 640*480 -p 0 | ./viewer -w 640*480 -p 0
window size 640*480
Video bytespreline = 1280
Display:
Image byte order = LSBFirst
Bitmap unit = 32
Bitmap bit order = LSBFirst
Bitmap pad = 32
Window:
Depth = 24
Red mask = 0x00ff0000
Green mask = 0x0000ff00
Blue mask = 0x000000ff
Bits per R/G/B = 8
Image byte order = LSBFirst
Bitmap unit = 32
Bitmap bit order = LSBFirst
Bitmap pad = 32
Depth = 24
Red mask = 0x00ff0000
Green mask = 0x0000ff00
Blue mask = 0x000000ff
Bits per pixel = 32
Bytes per line = 2560
IsShared = True
XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0.0"
after 101 requests (19 known processed) with 0 events remaining.
I have no idea how to fix this. I believe the problem is in the C code, because I can use the webcam with the Cheese webcam application. Any help is very much appreciated. Thanks a lot!
It looks like you are displaying the image in a completely wrong format.
When working with V4L2, you should definitely check out libv4l (packaged in Debian, so also available in Ubuntu). V4L2 allows a device to output its frames in any of a very large number of video formats, some of which are compressed (e.g. using JPEG).
Core V4L2 does not provide any means to convert the image into a given format your application supports, so in theory your application must support all possible formats.
In order to avoid code duplication (every V4L2-capable application faces the same problem!), libv4l was created: it allows low-level access to the device, but at the same time guarantees that the frames can be accessed in a few standard formats.
E.g. if the device only supports JPEG output and your app requests RGB32 frames, libv4l will transparently convert for you.
You can even use libv4l with an LD_PRELOAD trick to make it work with applications that were compiled without libv4l support (just to check whether my suggestion makes sense).
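For example, a quick way to test this with your existing binaries might be (the exact path of v4l2convert.so varies by distribution; on older Ubuntu releases it is typically under /usr/lib/libv4l/):
# LD_PRELOAD=/usr/lib/libv4l/v4l2convert.so ./capturer_mmap -D /dev/video0 -w 640*480 -p 0 | ./viewer -w 640*480 -p 0
If the image then displays correctly, the problem is indeed the pixel format produced by the camera, and linking capturer_mmap against libv4l2 (v4l2_open / v4l2_ioctl / v4l2_mmap from libv4l2.h) would be the cleaner fix.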
