I am a new user of OMNeT++ and I'd like to know if someone can help me create the following function: on an intermediate/forwarder node, I need to count the packets arriving for 10 ms, then not count for the next 10 ms, then count again for 10 ms, and so on. Thanks.
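A minimal sketch of one way to do this in OMNeT++: a self-message rescheduled every 10 ms toggles a counting window on and off, and arriving packets are only counted while the window is open. The module and gate names here are made up, and packets are forwarded unconditionally; adapt this to your own module.

```cpp
#include <omnetpp.h>

using namespace omnetpp;

// Hypothetical forwarder that counts arriving packets during alternating
// 10 ms windows: count for 10 ms, pause for 10 ms, count again, and so on.
class BurstCounter : public cSimpleModule
{
  private:
    cMessage *toggleTimer = nullptr;  // self-message fired every 10 ms
    bool counting = true;             // are we inside a counting window?
    long windowCount = 0;             // packets seen in the current window

  protected:
    virtual void initialize() override {
        toggleTimer = new cMessage("toggle");
        scheduleAt(simTime() + 0.01, toggleTimer);  // first window ends in 10 ms
    }

    virtual void handleMessage(cMessage *msg) override {
        if (msg == toggleTimer) {
            if (counting)
                EV << "packets in last 10 ms window: " << windowCount << endl;
            counting = !counting;     // flip between counting and not counting
            windowCount = 0;
            scheduleAt(simTime() + 0.01, toggleTimer);
            return;
        }
        if (counting)
            windowCount++;
        send(msg, "out");             // forward either way ("out" is a made-up gate)
    }

  public:
    virtual ~BurstCounter() { cancelAndDelete(toggleTimer); }
};

Define_Module(BurstCounter);
```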
We are using Flink 1.9.1.
We have a source, a process function, and a sink. The application consumes and produces to kinesis.
The input rate (produced by a simulator) is 20 events per second, but the output rate metric for the process function shows 14 per second. The back pressure metric for the source is shown as OK (green). The event count (number of events sent by the source) and the number of events received by the process function also match, with very little delay.
But this count does not match the event count pushed by the simulator; it matches the 14-per-second rate instead.
Now my question is, does Flink regulate the input rate automatically?
In my case, how is the input rate controlled at 14 per second?
If it is not, is there any other metric that I should be looking at that I'm missing?
It's not possible to force a Flink pipeline to consume events at a particular rate. By design, there is limited buffering in the network stack, and the slowest task in the execution graph will dictate the rate at which the pipeline will consume and process events.
The back pressure monitoring (that green OK signal) is not a definitive guide to whether back pressure is occurring. As long as the job is able to make steady forward progress, it probably won't indicate that there's a problem. You could examine some of the network queue metrics to get more insight, e.g., inPoolUsage, outPoolUsage, and inputQueueLength. See Flink Network Stack Vol. 2: Monitoring, Metrics, and that Backpressure Thing for much more on this topic.
20 events per second seems very slow, so I am a bit surprised that something can't keep up with that rate, but that appears to be what's happening.
Is there a way to enforce a steady poll rate using the google-cloud-pubsub client? I want to avoid scenarios where a spike in the publish rate causes the pull request rate to increase as well.
The client provides FlowControl settings, by setting the maximum number of outstanding messages. From my understanding, this sets the max batch size during a pull operation.
I want to understand how to create a constant pull rate, say 1000 RPS.
Message Flow Control can be used to set the maximum number of messages being processed at a given time (i.e., setting max_messages in the case of the Python client), which indirectly sets the maximum rate at which messages are received.
While it doesn't allow you to directly set the exact number of messages received per second (the steady-state rate works out to roughly the maximum number of outstanding messages divided by the average processing time per message), it should avoid scenarios where a spike in the publish rate produces a matching spike in the pull rate.
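A conceptual sketch of what flow control does (this is not the Pub/Sub client API; kMaxOutstanding plays the role of max_messages): capping the messages in flight caps throughput near the cap divided by the average processing time.

```cpp
// C++20. Conceptual only: a counting semaphore caps in-flight messages,
// which indirectly caps throughput at ~kMaxOutstanding / avg processing time.
#include <chrono>
#include <cstdio>
#include <semaphore>
#include <thread>
#include <vector>

constexpr int kMaxOutstanding = 4;                 // analogous to max_messages
std::counting_semaphore<kMaxOutstanding> slots{kMaxOutstanding};

void process(int id) {
    std::this_thread::sleep_for(std::chrono::milliseconds(100));  // fake work
    std::printf("done %d\n", id);
    slots.release();                               // free a slot for the next pull
}

int main() {
    std::vector<std::thread> workers;
    for (int id = 0; id < 20; ++id) {              // simulated message arrivals
        slots.acquire();                           // blocks once 4 are in flight
        workers.emplace_back(process, id);         // => at most ~40 msgs/s here
    }
    for (auto &w : workers) w.join();
}
```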
If you really need to set a rate in messages received per second, AFAIK it’s not made available directly on the client libraries, so you’d have to implement it yourself using an asynchronous pull and using some timers to acknowledge the messages at your desired rate.
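A sketch of that do-it-yourself pacing, with hypothetical pullOne/acknowledge stubs standing in for the real client calls; using an absolute deadline (sleep_until) rather than a relative sleep keeps the loop from drifting below the target rate.

```cpp
// Sketch only: pullOne()/acknowledge() are hypothetical stand-ins for the
// real asynchronous pull and ack calls of a Pub/Sub client.
#include <chrono>
#include <optional>
#include <string>
#include <thread>

std::optional<std::string> pullOne() { return "payload"; }  // placeholder
void acknowledge(const std::string &) {}                    // placeholder

void paceAt(int ratePerSec, int nMessages) {
    using clock = std::chrono::steady_clock;
    const auto period = std::chrono::nanoseconds(1'000'000'000LL / ratePerSec);
    auto next = std::chrono::time_point_cast<std::chrono::nanoseconds>(clock::now());
    for (int i = 0; i < nMessages; ++i) {
        next += period;
        if (auto msg = pullOne())            // nothing waiting? just skip this slot
            acknowledge(*msg);
        std::this_thread::sleep_until(next); // absolute deadline avoids drift
    }
}

int main() { paceAt(1000, 5000); }           // ~1000 RPS for 5 seconds
```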
In an attempt to make the system time on an ODroid as close to real time as possible, I've tried adding a real-time clock (RTC) to the ODroid. The RTC has an accuracy of ±4 ppm.
Without the real-time clock, I would get results like this (synced with the NTP server every 60 seconds). The blue line is an Orange Pi for comparison. The x-axis is the sample number, and the y-axis is the offset reported by the NTP server in ms.
So what I tried was the same thing (more samples, but the same interval), except that instead of just syncing with the NTP server, I did the following:
Set the system time to the HW clock time.
Sync with the NTP server to update the system time, and record the offset reported by the server.
Update the HW clock to the system time, since it has just been synced to real time.
Then I wait 60 seconds and repeat. I didn't expect it to be perfect, but what I got shocked me a little bit.
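Each cycle boils down to something like this sketch (hwclock and ntpdate stand in for whatever your setup actually calls, and the server name is a placeholder):

```cpp
// Rough sketch of the measurement loop; requires root and the usual
// util-linux / ntpdate tools. The NTP server here is a placeholder.
#include <chrono>
#include <cstdlib>
#include <thread>

int main() {
    for (;;) {
        std::system("hwclock --hctosys");        // 1. system time <- RTC
        std::system("ntpdate -u pool.ntp.org");  // 2. sync with NTP; prints the offset
        std::system("hwclock --systohc");        // 3. RTC <- freshly synced system time
        std::this_thread::sleep_for(std::chrono::seconds(60));
    }
}
```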
What in the world am I looking at? The jitter becomes less and less and follows an almost straight line, but when it reaches the perfect time (about 410 minutes in...), it then seems to keep going, letting the jitter and offset grow again.
Can anyone explain this, or maybe tell me what I'm doing wrong?
This is weird!
So you are plotting the difference between your RTC time and the NTP server time. Where is the NTP server located? In the second plot you are working in a range of a couple hundred ms. NTP has accuracy limitations. From Wikipedia:
https://en.wikipedia.org/wiki/Network_Time_Protocol
NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
Your data is a bit weird looking though.
So I have a map from Key -> Struct.
My key will be a device's IP address, and the value (struct) will hold the device's IP address and a time-to-live; once that amount of time has elapsed, the key-value pair should expire and be deleted from the map.
I am fairly new at this, so I was wondering what would be a good way to go about it.
I have googled around, but I only seem to find material on time-based maps in Java.
EDIT
After coming across this, I think I may have to create a map with the items in it, and then keep a deque in parallel holding a reference to each element. Then I would periodically call a clean function that deletes any entry that has been in the map longer than x amount of time.
Is this correct, or can anyone suggest a more optimal way of doing it?
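Roughly what I have in mind (a sketch with made-up names; steady_clock for the timing):

```cpp
#include <chrono>
#include <deque>
#include <string>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

struct DeviceInfo {
    std::string ip;
    Clock::time_point expiresAt;   // entry is dead once now() passes this
};

class ExpiringMap {
    std::unordered_map<std::string, DeviceInfo> items_;
    std::deque<std::string> order_;              // keys in insertion order

public:
    void put(const std::string &ip, Clock::duration ttl) {
        items_[ip] = {ip, Clock::now() + ttl};
        order_.push_back(ip);
    }

    // Called periodically. With a fixed ttl, entries sit in the deque in
    // expiry order, so we can stop at the first one that hasn't expired.
    // Re-inserted keys leave stale slots behind; those only delay cleanup,
    // they never delete a live entry.
    void clean() {
        const auto now = Clock::now();
        while (!order_.empty()) {
            auto it = items_.find(order_.front());
            if (it != items_.end() && it->second.expiresAt > now)
                break;                           // everything behind is newer
            if (it != items_.end())
                items_.erase(it);                // expired: drop it from the map
            order_.pop_front();
        }
    }
};
```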
I've used three approaches to solve a problem like this.
Use a periodic timer. Once every time quantum, get all the expiring elements and expire them. Keep the elements in timer wheels; see scheme 7 in this paper for ideas. The overhead here is that the periodic timer kicks in even when it has nothing to do, and the buckets have a constant memory overhead, but this is the most efficient thing you can do if you add and remove things from the map much more often than you expire elements from it.
Check all elements for the shortest expiry time, and schedule a timer to kick in after that amount of time. In the timer, remove the expired element and schedule the next timer. Reschedule the timer whenever a new element is added whose expiration time is shorter than the currently scheduled timer's. Keep the elements in a heap for fast lookup of which element expires first. This has a fairly large insertion and deletion overhead, but it is quite efficient when the most common way elements leave the map is through expiry.
Every time you access the map, check whether the element you're accessing has expired. If it has, just throw it away and pretend it wasn't there in the first place (see the sketch below). This can be quite inefficient because of the timestamp check on every access, and it doesn't work if you need to perform some action on expiry.
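A minimal sketch of that third approach, assuming string keys and the expiry check on get():

```cpp
#include <chrono>
#include <optional>
#include <string>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

// Lazy expiry: entries are only removed when they are touched after their
// deadline, so no timer thread is needed.
template <typename V>
class LazyExpiringMap {
    struct Entry { V value; Clock::time_point expiresAt; };
    std::unordered_map<std::string, Entry> map_;

public:
    void put(const std::string &key, V value, Clock::duration ttl) {
        map_[key] = {std::move(value), Clock::now() + ttl};
    }

    // The timestamp check happens on every access: a dead entry is erased
    // here and reported as absent, as if it had never been stored.
    std::optional<V> get(const std::string &key) {
        auto it = map_.find(key);
        if (it == map_.end())
            return std::nullopt;
        if (Clock::now() >= it->second.expiresAt) {
            map_.erase(it);                    // throw it away on first touch
            return std::nullopt;
        }
        return it->second.value;
    }
};
```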
There is a single consumer and a single producer thread. The producer thread's data acquisition is slow: it queries a socket for data, and the time it takes to produce data for the consumer is significantly longer than the time it takes the consumer to process and send the data out. The problem is that I am updating a display, so I want the updates to be spread out so they appear continuous rather than arriving in bursts.
I am using a double buffer right now, but the consumer is waiting too long for the buffers to be swapped because the producer takes too long to produce the data. Perhaps I should slice the data into smaller blocks and use a queue instead? That way the producer would feed the consumer a little at a time. Has anyone ever run into this problem?
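Roughly what I'm picturing (a sketch, not my actual code): a small thread-safe queue that the producer feeds one chunk at a time, so the consumer never waits for a full buffer swap.

```cpp
#include <condition_variable>
#include <deque>
#include <mutex>

// Producer pushes each small chunk as soon as it is acquired; the consumer
// drains chunks as they appear instead of waiting for a whole buffer.
template <typename Block>
class BlockQueue {
    std::deque<Block> q_;
    std::mutex m_;
    std::condition_variable cv_;

public:
    void push(Block b) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push_back(std::move(b));
        }
        cv_.notify_one();
    }

    Block pop() {                              // blocks until a chunk arrives
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        Block b = std::move(q_.front());
        q_.pop_front();
        return b;
    }
};
```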
Why not have a thread that updates the screen once a second? The thread can sleep for a second, wake up, check what the producer and the consumer are doing, and update the screen based on their progress. You would get updates every second; if you want them faster or slower, change the timer interval.
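A sketch of that idea (the counters and the one-second interval are placeholders): the display thread ticks on an absolute schedule, so redraw time doesn't skew the interval.

```cpp
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<long> produced{0}, consumed{0};   // bumped by the real worker threads

// Wake on a fixed interval, read the workers' progress, redraw. sleep_until
// keeps ticks evenly spaced even when a redraw takes a little time.
void displayLoop(std::chrono::milliseconds interval, int ticks) {
    auto next = std::chrono::steady_clock::now();
    for (int i = 0; i < ticks; ++i) {
        next += interval;
        std::this_thread::sleep_until(next);
        std::printf("produced=%ld consumed=%ld\n", produced.load(), consumed.load());
    }
}

int main() {
    std::thread ui(displayLoop, std::chrono::milliseconds(1000), 5);
    for (int i = 0; i < 5; ++i) {             // stand-in for the real workers
        std::this_thread::sleep_for(std::chrono::milliseconds(900));
        ++produced;
        ++consumed;
    }
    ui.join();
}
```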
I am going to lock the send rate to the client to a frequency based on the rate of the data requests. I originally thought the producer was going to be much faster than it turned out to be, which is why I structured this as producer/consumer threads. This is really more of a frame-rate problem, where I need to synchronize the output at a consistent rate.