NetLogo - transferring data between neighboring turtles

Is there any way to transfer data between turtles?
I would like to send and receive data between a turtle and its neighboring turtles, but I don't know how...

In the Code Examples section of NetLogo's Models Library, check out Communication T-T Example. ("T-T" is short for "turtle to turtle".)

Related

Extract motion vectors from versatile video coding

How do I go about extracting motion vectors into a .txt or .xml file from the VVC VTM reference software? I managed to extract the motion vectors to a text file, but I don't have a proper index indicating which motion vector belongs where. If anyone could guide me on getting a proper index along with the motion vectors, that would be very helpful.
Are you doing it at the encoder side?
If so, I suggest that you move to the decoder side and do this:
Encode the sequence from which you want to extract MVs.
Modify the decoder so it prints the MV of each coding unit, if any (i.e. not intra). To do so, you may go to the CABACReader.cpp file, somewhere inside the coding_unit() function, and find the place where the MV is parsed. There, in addition to the parsed MV, you have access to the coordinates of the ongoing CU.
Decode your encoded bitstream with the modified VTM decoder; it will print the information you added.
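As a rough illustration of step 2, the print statement could look something like the snippet below. This is not standalone code but a fragment meant for insertion into coding_unit(); the member names (CU::isIntra, lumaPos(), lumaSize()) follow common VTM conventions but may differ between versions, so treat them as assumptions. Also note that for merge-coded CUs the final MV is only derived later in the decoding process (e.g. in DecCu), so you may prefer to print there instead.
// Hypothetical snippet for CABACReader::coding_unit() in CABACReader.cpp:
// print an index (position and size) for every inter CU as it is parsed.
if (!CU::isIntra(cu))                                  // intra CUs carry no motion vectors
{
    const Position pos  = cu.lumaPos();                // top-left luma sample of this CU
    const Size     size = cu.lumaSize();               // CU width and height
    printf("CU x=%d y=%d w=%d h=%d\n",
           (int)pos.x, (int)pos.y, (int)size.width, (int)size.height);
    // append the parsed/derived MV components here once you have located them
}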
As Mosen's answer suggests, I recommend extracting any information (including MVs) on the decoder side.
If you just want to extract MVs to a file, you may utilize traverseCU().
VTM's picture class has a CodingStructure, which can traverse all CUs in the picture (even a CTU or CU can be treated as a CodingStructure, so you can use traverseCU() at the block level too).
So I suggest that you:
Access the picture class (its name might differ, e.g. m_pcPic in DecLib.cpp) on the decoder side (insert your code before/after the loop filters are executed).
Iterate over each CU in the picture by using traverseCU().
Extract the MVs from every CU you visit and save that information (MVs, indices, etc.), as in the sketch below.
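A rough sketch of that loop, inserted on the decoder side after the picture has been reconstructed, might look like the following. The container and field names here (pic->cs, cs.cus, CU::traversePUs, pu.mv, pu.refIdx) are assumptions based on typical VTM code and on the description above; they change between VTM versions, so check them against your checkout. mvFile stands for a FILE* you open yourself.
// Hypothetical sketch: one output line per inter CU of the just-decoded picture.
CodingStructure& cs = *pic->cs;                        // 'pic' = decoded picture (e.g. m_pcPic)
for (const CodingUnit* cu : cs.cus)                    // iterate all CUs of the picture
{
    if (CU::isIntra(*cu))                              // skip CUs without motion
        continue;
    const Position pos  = cu->lumaPos();
    const Size     size = cu->lumaSize();
    for (const PredictionUnit& pu : CU::traversePUs(*cu))
    {
        if (pu.refIdx[REF_PIC_LIST_0] >= 0)            // list-0 motion, if present
            fprintf(mvFile, "%d %d %d %d %d %d\n",
                    (int)pos.x, (int)pos.y, (int)size.width, (int)size.height,
                    pu.mv[REF_PIC_LIST_0].hor, pu.mv[REF_PIC_LIST_0].ver);
        // handle REF_PIC_LIST_1 the same way for bi-predicted CUs
    }
}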
Although there might be better ways to answer your question, I hope this answer helps you.

How can I get current microphone input level with C WinAPI?

Using the Windows API, I want to implement something like the following:
i.e. getting the current microphone input level.
I am not allowed to use external audio libraries, but I can use Windows libraries. So I tried using the waveIn functions, but I do not know how to process the audio input data in real time.
This is the method I am currently using:
Record for 100 milliseconds
Select the highest value from the recorded data buffer
Repeat forever
But I think this is way too hacky and not the recommended way. How can I do this properly?
Having built a tuning wizard for a very dated, but well known, A/V conferencing application, I can say that what you describe is nearly identical to what I did.
A few considerations:
Enqueue 5 to 10 of those 100ms buffers into the audio device via waveInAddBuffer. IIRC, when the waveIn queue goes empty, weird things happen. Then, as each waveInProc callback occurs, search the completed buffer for the sample with the highest absolute value, as you describe, plot that onto your visualization, and requeue the completed buffer.
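A minimal sketch of that scheme follows (error handling trimmed; the format, buffer count and printf are placeholder choices standing in for your real setup and visualization). It polls WHDR_DONE instead of using a waveInProc callback, which sidesteps the documented restriction that most wave functions must not be called from inside the callback; with a callback you would typically just compute the peak there and requeue from another thread.
#include <windows.h>
#include <mmsystem.h>
#include <cstdio>
#include <cstdlib>
#pragma comment(lib, "winmm.lib")               // MSVC: link against winmm

int main()
{
    // Capture format: 16-bit mono PCM at 16 kHz (an assumption; pick what suits you).
    WAVEFORMATEX fmt = {};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = 16000;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEIN hwi = NULL;
    if (waveInOpen(&hwi, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL) != MMSYSERR_NOERROR)
        return 1;

    // Eight 100 ms buffers kept queued so the device never runs dry.
    const int NUM_BUFFERS     = 8;
    const int SAMPLES_PER_BUF = 1600;            // 100 ms at 16 kHz mono
    static short   samples[NUM_BUFFERS][SAMPLES_PER_BUF];
    static WAVEHDR hdr[NUM_BUFFERS] = {};
    for (int i = 0; i < NUM_BUFFERS; ++i)
    {
        hdr[i].lpData         = (LPSTR)samples[i];
        hdr[i].dwBufferLength = sizeof(samples[i]);
        waveInPrepareHeader(hwi, &hdr[i], sizeof(WAVEHDR));
        waveInAddBuffer(hwi, &hdr[i], sizeof(WAVEHDR));
    }
    waveInStart(hwi);

    // Poll for completed buffers for ~30 seconds; in a real app this drives your meter.
    for (int iterations = 0; iterations < 3000; ++iterations)
    {
        for (int i = 0; i < NUM_BUFFERS; ++i)
        {
            if (!(hdr[i].dwFlags & WHDR_DONE))
                continue;
            int peak  = 0;                                   // highest absolute sample
            int count = hdr[i].dwBytesRecorded / sizeof(short);
            for (int s = 0; s < count; ++s)
            {
                int v = abs((int)samples[i][s]);
                if (v > peak) peak = v;
            }
            printf("peak: %d / 32768\n", peak);              // replace with your visualization
            hdr[i].dwFlags &= ~WHDR_DONE;                    // mark as handled, then requeue
            waveInAddBuffer(hwi, &hdr[i], sizeof(WAVEHDR));
        }
        Sleep(10);
    }

    waveInStop(hwi);
    waveInReset(hwi);                                        // returns any pending buffers
    for (int i = 0; i < NUM_BUFFERS; ++i)
        waveInUnprepareHeader(hwi, &hdr[i], sizeof(WAVEHDR));
    waveInClose(hwi);
    return 0;
}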
It might seem obvious to map the sample value linearly onto your visualization.
For example, to plot a 16-bit sample:
// convert sample magnitude from 0..32768 to 0..N
length = (sample * N) / 32768;
DrawLine(length);
But then when you speak into the microphone, the visualization won't seem very "active" or "vibrant".
A better approach is to give more weight to the lower-energy samples. An easy way to do this is to replot along the μ-law curve (or use a table lookup):
length = (sample * N) / 32768;                 // linear length in 0..N
length = (N * log(1 + length)) / log(1 + N);   // μ-law style compression, still 0..N
length = min(length, N);                       // clamp to the meter size
DrawLine(length);
You can tweak the above approach to whatever looks good.
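If you go the table-lookup route, one option (a sketch; N and DrawLine are the placeholders from the snippets above) is to precompute the compressed length for every possible 16-bit magnitude once:
#include <math.h>

const int N = 100;                     // meter length; stands in for your value of N
static int lengthLut[32769];           // one entry per 16-bit sample magnitude 0..32768

void initLengthLut(void)
{
    for (int s = 0; s <= 32768; ++s)
    {
        double linear = (double)s * N / 32768.0;                  // linear length, 0..N
        int len = (int)(N * log(1.0 + linear) / log(1.0 + N));    // same compression curve
        lengthLut[s] = len > N ? N : len;                         // clamp
    }
}

// Then, per sample, instead of recomputing the logarithm each time:
//   DrawLine(lengthLut[abs(sample)]);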
Instead of computing the values yourself, you can rely on the values from Windows. These are actually the values displayed in your screenshot from the Windows Settings.
See the following sample for the IAudioMeterInformation interface:
https://learn.microsoft.com/en-us/windows/win32/coreaudio/peak-meters.
It is written for playback, but you can use it for capture as well.
A few remarks: if you open IAudioMeterInformation for a microphone but no application has opened a stream from that microphone, the level will be 0.
That means that to display your microphone peak meter, you still need to open a microphone stream, as you already do.
Also read the documentation on IAudioMeterInformation; it may not be what you need, since it reports the peak value. It depends on what you want to do with it.
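For reference, a minimal C++ sketch of that approach (error handling omitted; eCommunications and the polling interval are arbitrary choices, and the same calls work from C via the COM C macros). As noted above, the meter only reports a non-zero level while some application holds an open capture stream on that microphone.
#include <windows.h>
#include <mmdeviceapi.h>
#include <endpointvolume.h>
#include <cstdio>
#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    IMMDeviceEnumerator* enumerator = NULL;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), NULL, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);

    IMMDevice* mic = NULL;
    enumerator->GetDefaultAudioEndpoint(eCapture, eCommunications, &mic);

    IAudioMeterInformation* meter = NULL;
    mic->Activate(__uuidof(IAudioMeterInformation), CLSCTX_ALL, NULL, (void**)&meter);

    // Poll the peak for ~5 seconds. Remember: it stays at 0 unless some
    // application (possibly this one) has an open capture stream on the mic.
    for (int i = 0; i < 50; ++i)
    {
        float peak = 0.0f;               // normalized 0.0 .. 1.0
        meter->GetPeakValue(&peak);
        printf("peak: %.3f\n", peak);
        Sleep(100);
    }

    meter->Release();
    mic->Release();
    enumerator->Release();
    CoUninitialize();
    return 0;
}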

GradCam Implementation in TFJS

I'm trying to implement GradCam (https://arxiv.org/pdf/1610.02391.pdf) in tfjs, based on the following Keras Tutorial (http://www.hackevolve.com/where-cnn-is-looking-grad-cam/) and a simple image classification demo from tfjs, similar to (https://github.com/tensorflow/tfjs-examples/blob/master/webcam-transfer-learning/index.js) with a simple dense, fully-connected layer at the end.
However, I'm not able to retrieve the gradients needed for the GradCam computation. I tried different ways to retrieve the gradients for the last sequential layer, but did not succeed, as the tf.LayerVariable type of the respective layer is not convertible to the type expected by tf.grads or tf.layerGrads.
Has anybody already succeeded in getting the gradients of a sequential layer into a tf.function-like object?
I'm not aware of the ins and outs of the implementation, but I think something along the lines of this: http://jlin.xyz/advis/ is what you're looking for?
Source code is available here: https://github.com/jaxball/advis.js (not mine!)
This official example in the tfjs-examples repo should be close to, if not exactly, what you want:
https://github.com/tensorflow/tfjs-examples/blob/master/visualize-convnet/cam.js#L49

best method of turning millions of x,y,z positions of particles into visualisation

I'm interested in the different algorithms people use to visualise millions of particles in a box. I know you can use Cloud-In-Cell, adaptive mesh, kernel smoothing, nearest-grid-point methods, etc. to reduce the memory load, but there is very little documentation online on how to do these things.
i.e. I have array with:
x,y,z
1,2,3
4,5,6
6,7,8
xi,yi,zi
with i up to 100 million, for example. I don't want a package like Mayavi/Paraview to do it; I want to code this myself and then load the decomposed matrix into Mayavi (rather than render on the fly). My poor 8 GB MacBook explodes if I try to use the raw particle positions. Any tutorials would be appreciated.
Analysing and visualising complex multi-dimensional data is hard. The best visualisation almost always depends on what the data is and what relationships exist within it. Of course, you probably want to visualise the data precisely to show and explore those relationships. Ultimately, this comes down to trying different possibilities.
My advice is to think about the data, and try to find sensible ways to slice up the dimensions. 3D plots, like surface plots or voxel renderings may be what you want. Personally, I prefer trying to find 2D representations, because they are easier to understand and to communicate to other people. Contour plots are great because they show 3D information in a 2D form. You can show a sequence of contour plots side by side, or in a timelapse to add a fourth dimension. There are also creative ways to use colour to add dimensions, while keeping the visualisation comprehensible -- which is the most important thing.
I see you want to write the code yourself. I understand that. Doing so will take a non-trivial effort, and afterwards, you might not have an effective visualisation. My advice is this: use a tool to help you prototype visualisations first! I've used gnuplot with some success, although I'm sure there are other options.
Once you have a good handle on the data, and how to communicate what it means, then you will be well positioned to code a good visualisation.
UPDATE
I'll offer a suggestion for the data you have described. It sounds as though you want/need a point density map. These are popular in geographical information systems, but have other uses. I haven't used one before, but the basic idea is to use a function to estimate the density in a 3D space. The density becomes the fourth dimension. Something relatively simple, such as counting the particles that fall into each cell of a regular 3D grid, may be good enough.
The point density map might be easier to slice, summarise and render than the raw particle data.
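To make the density-map idea concrete, here is a rough sketch that bins particles onto a regular grid, nearest-grid-point style, and writes the counts to disk so that the fixed-size volume can be loaded and rendered (e.g. in Mayavi) instead of the raw particles. The file names, box bounds and grid resolution are placeholder assumptions.
#include <cstdio>
#include <vector>
#include <cstdint>

int main()
{
    // Grid resolution and box bounds are assumptions; adjust them to your data.
    const int    G  = 256;                       // cells per axis -> G^3 counts (64 MB here)
    const double lo = 0.0, hi = 100.0;           // assumed extent of the box
    const double scale = G / (hi - lo);

    std::vector<uint32_t> density((size_t)G * G * G, 0);

    // Assumed input: a plain text file with one "x y z" triple per line.
    FILE* in = fopen("particles.txt", "r");
    if (!in) return 1;

    double x, y, z;
    while (fscanf(in, "%lf %lf %lf", &x, &y, &z) == 3)
    {
        double gx = (x - lo) * scale, gy = (y - lo) * scale, gz = (z - lo) * scale;
        if (gx < 0 || gy < 0 || gz < 0 || gx >= G || gy >= G || gz >= G)
            continue;                            // drop particles outside the box
        size_t ix = (size_t)gx, iy = (size_t)gy, iz = (size_t)gz;
        ++density[(iz * G + iy) * G + ix];       // nearest-grid-point count
    }
    fclose(in);

    // Dump the grid as raw 32-bit counts for later slicing or rendering.
    FILE* out = fopen("density_256_uint32.raw", "wb");
    if (!out) return 1;
    fwrite(density.data(), sizeof(uint32_t), density.size(), out);
    fclose(out);
    return 0;
}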
The data I have analysed has been of a different nature, so I have not used this particular method before. Hopefully it proves helpful.
PS. I've just seen your comment below, and I'm not sure that this information will help you with that. However, I am posting my update anyway, just in case it is useful information.

Best way to take input for a graph Data Structure in C?

I am working on a basic graph implementation (adjacency-list based) in C so that I can re-use the basic structure to solve all graph-related problems.
To map a graph I draw on paper, I want the best and easiest way.
I'm talking about the way I take the input, rather than how I should go about implementing it! :)
Should I make an input routine which asks for all the node labels first and then asks which edges to connect, based on pairs of labels?
What could be a good and quick way? I want an easy approach which lets me spend less energy on the "input".
Best is to go for input as an edge list,
that is, triplets of
Source, Destination, Cost
This routine can be used to fill either an adjacency list or an adjacency matrix.
With the latter, though, you would need to properly initialize the matrix and set up a convention for marking non-existent edges.
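A minimal sketch of such an input routine, written in a C-compatible style and filling an adjacency list (the names and the exact input format are only illustrative):
#include <stdio.h>
#include <stdlib.h>

// Adjacency list fed by an edge list of "source destination cost" triples.
typedef struct Edge {
    int          to;
    int          cost;
    struct Edge* next;
} Edge;

typedef struct {
    int    nodes;
    Edge** head;     // head[u] is the list of edges leaving node u
} Graph;

Graph* graph_create(int nodes)
{
    Graph* g = (Graph*)malloc(sizeof(Graph));
    g->nodes = nodes;
    g->head  = (Edge**)calloc((size_t)nodes, sizeof(Edge*));
    return g;
}

void graph_add_edge(Graph* g, int u, int v, int cost)
{
    Edge* e = (Edge*)malloc(sizeof(Edge));
    e->to   = v;
    e->cost = cost;
    e->next = g->head[u];
    g->head[u] = e;
    // for an undirected graph, also add the reverse edge v -> u here
}

int main(void)
{
    int nodes, edges;
    // first line: node count and edge count; then one "u v cost" triple per line
    if (scanf("%d %d", &nodes, &edges) != 2) return 1;

    Graph* g = graph_create(nodes);
    for (int i = 0; i < edges; ++i) {
        int u, v, cost;
        if (scanf("%d %d %d", &u, &v, &cost) != 3) return 1;
        graph_add_edge(g, u, v, cost);
    }

    // print the list to verify the input
    for (int u = 0; u < g->nodes; ++u) {
        printf("%d:", u);
        for (Edge* e = g->head[u]; e; e = e->next)
            printf(" ->%d(%d)", e->to, e->cost);
        printf("\n");
    }
    return 0;
}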
Here you can find details about the representation of a graph:
Graph-internal-representaion
Some code in C++ and Java is also given there, which you can easily convert to C.
