unetsim: Is there functionality to move a node from one coordinate to another by specifying only the coordinates, not velocity or direction? - unetstack

I'm trying to move a mobile AUV (node) along a particular path by specifying coordinates in the form (x, y, z). As far as I have explored, UnetSim only lets nodes move by specifying velocity and direction. Is there any way to make a node move to a particular set of locations, in order, after being deployed?

The mobility model in UnetSim nodes has a mechanism to define piece-wise location information for various times in the simulation.
The node.motionModel property is a List of HashMaps which can contain any of the following keys:
time: Time at which the mobility action is valid.
location: Coordinates ([-50.m, -50.m, 0]).
speed: Speed in mps (1.mps).
heading: Heading in degrees (30.deg).
turnRate: Rate of turn (1.dps).
diveRate: Rate of diving (-0.1.mps).
So using the time and location keys, we can achieve what you are trying to do.
The UnetIDE comes bundled with an example for simulating mobility. In this example, there are 4 sub-examples of various ways mobility can be simulated.
The 3rd example, Triangular motion (with dive), can easily be updated to specify coordinates at various points in time during the simulation, as follows:
println 'Simulation AUV-3: Triangular motion (with dive)'
simulate 15.minutes, {
  def n = node('AUV-3', location: [-50.m, -50.m, 0], mobility: true)
  n.startup = trackAuvLocation
  n.motionModel = [[time:  0.minutes, location: [ -50.m,  -50.m, 0]],
                   [time:  3.minutes, location: [-100.m,  -50.m, 0]],
                   [time:  4.minutes, location: [-100.m, -100.m, 0]],
                   [time:  7.minutes, location: [ -50.m, -100.m, 0]],
                   [time: 10.minutes, location: [ -50.m,  -50.m, 0]]]
}
This will give the following plot if plotted using the plot-tracks.groovy tool.

Related

How to find available neighbors of a node in unetstack

I am developing an energy-based routing protocol in which a node has to know its available neighbours, so that it can get the energy details of the neighbour nodes and decide its next hop.
a) How to find available neighbours for a node?
b) Among the use of PDU and RemoteGetParamReq, which method suits well to retrieve energy of neighbour nodes?
a) If you are writing your own agent, you could send a broadcast frame to query neighbors, and have your agent on the neighbors respond to the frame with a random backoff (to avoid MAC collisions). An alternative hack could be to use the RouteDiscoveryReq (see https://unetstack.net/svc-31-rdp.html) with the to address set to a non-existent node. This will cause all 1-hop neighbors to re-broadcast your route discovery request, and you will get a RouteDiscoveryNtf for each of those neighbors.
Example script demonstrating the hack (rdpdemo.groovy):
// settings
attempts = 1 // try only a single attempt at discovery
phantom = 132 // non-existent node address
timeout = 10000 // 10 second timeout
println 'Starting discovery...'
n = [] // collect list of neighbors
rdp << new RouteDiscoveryReq(to: phantom, count: attempts)
while (ntf = receive(RouteDiscoveryNtf, timeout)) {
  println(" Discovered neighbor: ${ntf.nextHop}")
  n << ntf.nextHop              // add neighbor to list
}
n = n.unique() // remove duplicates
println("Neighbors: ${n}")
Example run (simulation samples/rt/3-node-network.groovy on node 3):
> rdpdemo
Starting discovery...
Discovered neighbor: 1
Discovered neighbor: 2
Neighbors: [1, 2]
>
b) The answer to this depends on how you expose your energy information. If you expose it as a parameter, you can use the RemoteGetParamReq to get it. But if you are already implementing some protocol in your agent, it is easy enough to have a specific PDU to convey the information.

CNN with RGB input and BW binary output

I am a beginner to deep learning and I am working with Keras built on top of TensorFlow. I am trying to use RGB images at (540 x 360) resolution to predict bounding boxes.
My labels are binary (black/white) 2-dimensional np arrays of dimensions (540, 360), where all pixels are 0 except for the box edges, which are 1.
Like this:
[[0 0 0 0 0 0 ... 0]
[0 1 1 1 1 0 ... 0]
[0 1 0 0 1 0 ... 0]
[0 1 0 0 1 0 ... 0]
[0 1 1 1 1 0 ... 0]
[0 0 0 0 0 0 ... 0]]
There can be more than one bounding box in every picture. A typical image could look like this:
So my input has dimensions (None, 540, 360, 3) and my output has dimensions (None, 540, 360), but if I add an inner dimension I can change the shape to (None, 540, 360, 1).
How would I define a CNN model such that my model fits these criteria? How can I design a CNN with these inputs and outputs?
You have to differentiate between object detection and object segmentation. While both can be used for similar problems, the underlying CNN architectures look very different.
Object detection models use a CNN classification/regression architecture, where the output refers to the coordinates of the bounding boxes. It's common practice to use 4 values per box: vertical center, horizontal center, width, and height. Search for Faster R-CNN, SSD, or YOLO to find popular object detection models for Keras. In your case you would need to define a function that converts the current labels to the 4 coordinates I mentioned.
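For that conversion, here is a minimal sketch (my own assumption, not from the original answer). It uses scipy connected-component labelling and assumes the box outlines do not touch or overlap; the helper name edge_map_to_boxes is just illustrative.
from scipy import ndimage

def edge_map_to_boxes(label_img):
    # label_img: (H, W) array of 0/1 values where 1 marks box edges
    labeled, num_boxes = ndimage.label(label_img)        # one component per box outline
    boxes = []
    for rows, cols in ndimage.find_objects(labeled):     # bounding slices of each component
        h = rows.stop - rows.start
        w = cols.stop - cols.start
        cy = rows.start + h / 2.0                        # vertical center
        cx = cols.start + w / 2.0                        # horizontal center
        boxes.append((cy, cx, h, w))
    return boxes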
Object segmentation models commonly use an architecture referred to as an encoder-decoder network, where the original image is scaled down and compressed in the first half and then brought back to its original resolution to predict a full image. Search for SegNet, U-Net, or Tiramisu to find popular object segmentation models for Keras. My own implementation of U-Net can be found here. In your case you would need to define a custom function that fills all the 0s inside your bounding boxes with 1s. Understand that this solution will not predict bounding boxes as such, but segmentation maps showing regions of interest.
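For that fill-in step, a similarly small sketch (again an assumption on my part), assuming the outlines are closed rectangles so their interiors are holes that scipy can fill:
import numpy as np
from scipy import ndimage

def edges_to_mask(label_img):
    # label_img: (H, W) array of 0/1 values where 1 marks box edges
    mask = ndimage.binary_fill_holes(label_img.astype(bool))  # fill the interior of each closed outline
    return mask.astype(np.uint8)                               # 0/1 mask usable as a segmentation target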
What is right for you depends on what precisely you want to achieve. For getting actual bounding boxes you want to perform object detection. However, if you're interested in highlighting regions of interest that go beyond rectangular windows, segmentation may be a better fit. In theory, you can use your rectangle labels for segmentation, where the network will learn to create better masks than the inaccurate segmentation of the ground truth, provided you have enough data.
This is a simple example of how to write intermediate layers to achieve the output. You can use this as starter code.
# assuming standalone Keras 2 imports; adjust if you are using tf.keras
from keras.models import Model
from keras.layers import Input, Conv2D, BatchNormalization, Activation, MaxPooling2D, UpSampling2D, concatenate
from keras.optimizers import RMSprop

def model_360x540(input_shape=(360, 540, 3), num_classes=1):
    inputs = Input(shape=input_shape)
    # 360x540x3
    downblock0 = Conv2D(32, (3, 3), padding='same')(inputs)
    # 360x540x32
    downblock0 = BatchNormalization()(downblock0)
    downblock0 = Activation('relu')(downblock0)
    downblock0_pool = MaxPooling2D((2, 2), strides=(2, 2))(downblock0)
    # 180x270x32
    centerblock0 = Conv2D(1024, (3, 3), padding='same')(downblock0_pool)
    # 180x270x1024
    centerblock0 = BatchNormalization()(centerblock0)
    centerblock0 = Activation('relu')(centerblock0)
    upblock0 = UpSampling2D((2, 2))(centerblock0)
    # 360x540x1024
    upblock0 = concatenate([downblock0, upblock0], axis=3)
    upblock0 = Activation('relu')(upblock0)
    upblock0 = Conv2D(32, (3, 3), padding='same')(upblock0)
    # 360x540x32
    upblock0 = BatchNormalization()(upblock0)
    upblock0 = Activation('relu')(upblock0)
    classify = Conv2D(num_classes, (1, 1), activation='sigmoid')(upblock0)
    # 360x540x1
    model = Model(inputs=inputs, outputs=classify)
    model.compile(optimizer=RMSprop(lr=0.001), loss=bce_dice_loss, metrics=[dice_coeff])
    return model
The downblock represents the block of layers which perform downsampling (MaxPooling2D).
The centerblock has no sampling layer.
The upblock represents the block of layers which perform upsampling (UpSampling2D).
So here you can see how (360, 540, 3) is transformed into (360, 540, 1).
Basically, you can add such blocks of layers to create your model.
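Note that bce_dice_loss and dice_coeff used in model.compile above are not defined in the answer; a commonly used definition (my assumption, not the answerer's code) looks like this:
from keras import backend as K
from keras.losses import binary_crossentropy

def dice_coeff(y_true, y_pred, smooth=1.0):
    # overlap measure between predicted and true masks, in [0, 1]
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    return (2.0 * intersection + smooth) / (K.sum(y_true_f) + K.sum(y_pred_f) + smooth)

def bce_dice_loss(y_true, y_pred):
    # pixel-wise cross-entropy plus (1 - Dice) to also reward region overlap
    return binary_crossentropy(y_true, y_pred) + (1.0 - dice_coeff(y_true, y_pred))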
Also check out Holistically-Nested Edge Detection, which will help you further with the edge detection task.
Hope this helps!
I have not worked with Keras, but I will provide a more generalized solution approach that can be used with any framework.
Here is the full procedure.
Data preparation: I know your labels are box edges, which will also work, but I recommend that instead of edges you prepare a dataset marking the complete box, as shown in the sample (I have marked two boxes). Your dataset then has three classes (box, box edge, and background). Create two lists, images and labels.
Get a pre-trained model (RESNET-51 recommended), solver, and train prototxt from here. Remove the fc1000 layer and add de-convolution/up-sampling layers to match your input size. Use padding in the first layer to make it square, and crop in the deconvolution layer to match the input and output dimensions.
Transfer weights from the previously trained (original) network and train your network.
Test on your dataset and create bounding boxes from the detected blobs.

TensorFlow learn.Estimator: is it naive to call fit() many times? Because I get ResourceExhaustedError

I am learning machine learning using TensorFlow. I have been through a couple of tutorials, but I still have a hard time figuring out the good ways of training a model. Recently I implemented a CNN model I found in the literature. The model must take a crop of a certain size centered on a given pixel and predict the label of this pixel. It does that for each pixel of the image. I used:
classifier = tf.learn.Estimator(model_fn=cnn_model_fn, model_dir="./cnn")
with cnn_model_fn being a function I implemented.
For each training image, we take 3000 crops randomly, so I can't load all these images and their crops into memory. The way I found is to load one image at a time, extract the 3000 crops, and then call classifier.fit() to train on those 3000 crops, looping over each image in my dataset.
for i in range(len(filenames)):
    ...
    image = misc.imread(filenames[i])
    labels = misc.imread(groundTruth[i])            # labels for each pixel
    input_classifier = preprocess(image, ...)       # extract 3000 crops from the image and do other things
    input_labels = preprocess_labels(labels, ...)   # take the corresponding 3000 labels
    classifier.fit(x=input_classifier,
                   y=input_labels,
                   batch_size=30,
                   steps=100)
It worked fine for 100 images, but when I try it on the whole dataset (2000 images), it always stops and gives a ResourceExhaustedError.
...
[everything goes well]
...
iteration :227/2000
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K40c, pci bus
id: 0000:01:00.0)
INFO:tensorflow:Create CheckpointSaverHook.
I tensorflow/core/common_runtime/gpu/gpu_device.cc:975] Creating
TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K40c, pci bus
id: 0000:01:00.0)
Traceback (most recent call last):
File "train-cnn.py", line 78, in <module>
classifier.fit(x= input_classifier, y=input_labels,batch_size=30, steps=100)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/util/deprecation.py", line 280, in new_func
...
...
...
tensorflow.python.framework.errors_impl.ResourceExhaustedError: cnn/graph.pbtxt.tmp32bcc6311c164c29b91177d17d05d669
I don't see why it runs out of resources... I suspect it is because of the way I call fit() in a loop. After each fit(), a checkpoint is saved and must be restored right after to train on the next image. So is this a bad way to train a model?
Running estimator.fit in a loop with a small number of steps per call is not a good idea. I would put all the input logic into an input_fn, then run estimator.fit only once with more steps.
An example of reading data from different files can be found here: tf.contrib.learn.read_batch_examples.
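As a rough sketch of what that could look like (my own assumption, using tf.data.Dataset.from_generator rather than read_batch_examples; preprocess, preprocess_labels, CROP_H/CROP_W and total_steps stand in for your own code and settings):
import numpy as np
import tensorflow as tf
from scipy import misc

CROP_H, CROP_W = 64, 64          # placeholder crop size; use whatever your model expects

def make_input_fn(filenames, ground_truth, batch_size=30):
    def generator():
        for img_file, gt_file in zip(filenames, ground_truth):
            image = misc.imread(img_file)
            labels = misc.imread(gt_file)
            crops = preprocess(image)                 # your existing 3000-crop extraction
            crop_labels = preprocess_labels(labels)   # the corresponding 3000 labels
            for x, y in zip(crops, crop_labels):
                yield x.astype(np.float32), np.int64(y)

    def input_fn():
        dataset = tf.data.Dataset.from_generator(
            generator,
            output_types=(tf.float32, tf.int64),
            output_shapes=(tf.TensorShape([CROP_H, CROP_W, 3]), tf.TensorShape([])))
        dataset = dataset.shuffle(100 * batch_size).batch(batch_size)
        return dataset.make_one_shot_iterator().get_next()

    return input_fn

# a single fit() call streams over all images instead of looping and checkpointing per image
classifier.fit(input_fn=make_input_fn(filenames, groundTruth), steps=total_steps)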

NetLogo: 2048 bot optimisation

I am trying to make a NetLogo simulation of the 2048 game. I have implemented three heuristic functions determined by weight parameters and want to use BehaviorSpace to run simulations and check which strategy is best for winning this game.
Procedure search uses export/import-world primitives to search over possible moves and chooses the move for which the heuristic function has the highest value.
The problem is that this procedure is very slow (due to the import-world function which is being called four times each turn). Do you have any ideas how to implement this without exporting and importing world so often?
This is a project for my Introduction to AI class. It is due in a couple of days and I can't seem to find any solutions.
The relevant part of the code is below. The move-(direction) procedures all work properly, and the variable moveable? is true if the square can move in the given direction and false otherwise. It is checked in the procedure moveable-check, which is called by move-(direction).
I would very much appreciate your help. :)
to search
  let x 0
  let direction "down"
  export-world "state.csv"
  move-up
  ifelse not any? squares with [moveable?]
    [set h-value -5000]
    [set x h-value
     set direction "up"
     import-world "state.csv"]
  export-world "state.csv"
  move-down
  ifelse not any? squares with [moveable?]
    [set h-value -5000]
    [if h-value > x
       [set x h-value
        set direction "down"]
     import-world "state.csv"]
  export-world "state.csv"
  move-left
  ifelse not any? squares with [moveable?]
    [set h-value -5000]
    [if h-value > x
       [set x h-value
        set direction "left"]
     import-world "state.csv"]
  export-world "state.csv"
  move-right
  ifelse not any? squares with [moveable?]
    [set h-value -5000]
    [if h-value > x
       [set x h-value
        set direction "right"]
     import-world "state.csv"]
  ifelse direction = "up"
    [move-up
     print "up"]
    [ifelse direction = "down"
      [move-down
       print "down"]
      [ifelse direction = "right"
        [move-right
         print "right"]
        [move-left
         print "left"]]]
  if not any? squares with [moveable?]
  [
    ask squares [set heading heading + 90]
    moveable-check
    if not any? squares with [moveable?]
      [ask squares [set heading heading + 90]
       moveable-check
       if not any? squares with [moveable?]
         [ask squares [set heading heading + 90]
          moveable-check
          if not any? squares with [moveable?]
            [stop]]]
  ]
end
The most important, and difficult, information you need to be able to save and restore is the squares. This is pretty easy to do without import-world and export-world (note that the following uses NetLogo 6 syntax; if you're still on NetLogo 5, you'll need to use the old task syntax in the foreach):
to-report serialize-state
  report [(list xcor ycor value)] of squares
end

to restore-state [ state ]
  clear-squares
  foreach state [ [sq] ->
    create-squares 1 [
      setxy (item 0 sq) (item 1 sq)
      set heading 0 ;; or whatever
      set value item 2 sq
    ]
  ]
end
value above just shows how to store arbitrary variables of your squares. I'm not sure what data you have associated with them or need to restore. The idea behind this code is that you're storing the information about the squares in a list of lists, where each inner list contains the data for one square. The way you use this then is:
let state serialize-state
;; make changes to state that you want to investigate
restore-state state
You may need to store some globals and such as well. Those can be stored in local variables or in the state list (which is more general, but more difficult to implement).
A few other ideas:
Right now it looks like you're only looking one state ahead, and at only one possible position for the new square that's going to be placed (make sure you're not cheating by knowing exactly where the new square is going to be). Eventually, you may want to do arbitrary lookahead using a kind of tree search. This tree gets really big really fast. If you do that, you'll want to use pruning strategies such as: https://en.wikipedia.org/wiki/Alpha%E2%80%93beta_pruning . Also, that makes the state restoration stuff more difficult, but still doable. You'll be storing a stack of states rather than a single state.
Instead of set heading heading + 90 you can just do right 90 or rt 90.

Scale radius of circle based on range of string - Mapbox

I'm trying to dynamically set the radius of circles plotted with a dataset that has three columns: Latitude / Longitude / # of Sessions. Data imports fine and all the locations plot correctly with the # of sessions as the label.
Here's the scenario:
The radius should be based on the number of sessions, so a lat/lon pair with 5 sessions is 1px, a lat/long pair with 5,000 sessions is 10px, etc.
Is there a way to have this dynamically set in a dataset? I can create layer "bands" myself by adding multiple instances of the dataset and filtering to 1-10, 11-100, etc., but it'd be great to set a "min" radius and a "max" radius and have it auto-scale based on available data.
Is there a way to do this in Mapbox?
Assuming I understand this right: based on the session # for each lat/long pair, you'd like to draw a bigger or smaller circle at that location.
What you can do is import the data, then take the # of sessions and build a set of features to render on the map (some type of array or similar). You can use Mapbox markers to render these icons or circles and assign each one a size.
Prior to this you will have to define a function that takes the # of sessions and maps it to a radius value, say 1,000 sessions = a radius of 10 and 50,000 sessions = a radius of 500.
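As a rough illustration of such a mapping (a Python sketch just for the math; the 5-session/1px and 5,000-session/10px endpoints come from the question, and the log scaling is only one reasonable choice):
import math

def radius_for_sessions(sessions, min_sessions=5, max_sessions=5000,
                        min_radius=1.0, max_radius=10.0):
    # clamp to the known range, then interpolate on a log scale
    s = min(max(sessions, min_sessions), max_sessions)
    t = (math.log(s) - math.log(min_sessions)) / (math.log(max_sessions) - math.log(min_sessions))
    return min_radius + t * (max_radius - min_radius)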
For example, in an iOS app I created, this is my code, using https://github.com/mapbox/react-native-mapbox-gl:
markersArray = markersArray.concat({
  coordinates: [bList[i].latitude, bList[i].longitude],
  'type': 'point',
  title: bList[i].bname,
  subtitle: bList[i].data,
  id: bList[i].o_ID.toString(),
  startTime: bList[i].startTime,
  endTime: bList[i].endTime,
  annotationImage: {
    url: (bList[i].type === 'drink') ? (drinkUrl) : (foodUrl),
    height: 30,
    width: 30
  },
  rightCalloutAccessory: {
    url: 'image!info-icon',
    height: 20,
    width: 20
  }
});
