XBee Series 1: data throughput decreases as the amount of data increases

Hi, I have an XBee connected to my sensors, which sample 14 bytes every 5 ms. The other XBee is connected to a PC. When I send 210K bytes of data at 115200 baud my throughput is about 99%, but when I increase the data to 700K bytes at the same speed, throughput drops to 69%. I am absolutely stumped: why does my throughput drop so drastically when I increase the amount of data sent while keeping the speed constant? Thanks.
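For reference, a back-of-the-envelope calculation of the offered load versus the serial link capacity, using only the figures in the question (the 8N1 framing and the absence of XBee API frame overhead and RF retries are assumptions, not facts from the question):

```python
# Rough throughput arithmetic for the scenario in the question.
# Assumptions (not from the question): 8N1 UART framing (10 bits per byte),
# no XBee API frame overhead, no RF retries.

payload_per_sample = 14          # bytes sampled every 5 ms
sample_interval = 0.005          # seconds
offered_load = payload_per_sample / sample_interval   # bytes/s from the sensors

baud = 115200
uart_capacity = baud / 10        # bytes/s on the serial link, assuming 8N1

print(f"offered load : {offered_load:.0f} B/s")        # 2800 B/s
print(f"UART capacity: {uart_capacity:.0f} B/s")       # 11520 B/s

for total in (210_000, 700_000):
    print(f"{total} bytes needs at least {total / uart_capacity:.1f} s on the UART alone")
```

This only bounds the serial side; the over-the-air rate, buffering, and retransmissions on the radio link are not captured here.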

Related

Speed up inserts into QuestDB using the InfluxDB line protocol

I'm doing a project related to storing live trading data.
Currently we are using QuestDB and insert into it using the InfluxDB line protocol (ILP) over TCP.
Here is the link: https://questdb.io/docs/reference/api/ilp/overview
However, the rate of dropped packets is about 5%, and I want it to be less than 1%.
A packet is likely to be dropped when the number of packets received per minute exceeds 2 million.
Is there any way to speed up the inserts, or another time-series database I could try?
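One common lever is to batch many ILP lines into each TCP send instead of writing them one at a time. A minimal sketch of that idea is below; the table and column names ("trades", "symbol", "price", "amount") are hypothetical, and port 9009 is QuestDB's default ILP/TCP port per the docs linked above:

```python
# Sketch: batching InfluxDB line protocol writes to QuestDB over raw TCP.
# Table/column names are made up for illustration; port 9009 is the QuestDB
# default for ILP over TCP.
import socket
import time

HOST, PORT = "localhost", 9009

def make_line(symbol: str, price: float, amount: float) -> str:
    ts_ns = time.time_ns()   # designated timestamp, in nanoseconds
    return f"trades,symbol={symbol} price={price},amount={amount} {ts_ns}\n"

with socket.create_connection((HOST, PORT)) as sock:
    batch = []
    for _ in range(100_000):
        batch.append(make_line("ETH-USD", 2615.54, 0.00044))
        if len(batch) >= 1000:                    # many lines per syscall
            sock.sendall("".join(batch).encode())
            batch.clear()
    if batch:
        sock.sendall("".join(batch).encode())
```

Whether batching alone gets the drop rate under 1% depends on where the packets are being dropped (client, network, or server-side ingestion), which the question doesn't specify.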

How do transactions per second work on Hedera?

Suppose I send 1000 transactions per second and finality is 2-3 seconds. Will it take 2-3 seconds to finalize all 1000 of those transactions, or will there be a difference in finality delay between the first and the last transaction?
Each individual transaction will finalise within a few seconds, so yes: if the consensus latency is, say, 4 s, each transaction will reach consensus 4 s after it was received by a node. Since it will take you 1 s to send all 1000, it will take about 5 s overall for all 1000 to reach consensus.
The analogy I sometimes use to distinguish throughput from latency is the difference between the speed of light and the speed of sound.
The time it takes for a flash of light or a sound to reach someone is latency.
Now, you can flash or sound once a second or 100 times a second: that's throughput. The fact that sound takes longer to reach its destination (latency) doesn't prevent 100 sounds from being sent in one second.
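A toy model of the arithmetic in the answer above (the 4 s latency and 1000 tx/s figures are taken from that answer, not from Hedera documentation):

```python
# Toy model: 1000 transactions submitted over 1 second, each reaching
# consensus a fixed `latency` seconds after submission.
tx_count = 1000
send_rate = 1000          # transactions submitted per second
latency = 4.0             # assumed consensus latency per transaction, seconds

submit_times = [i / send_rate for i in range(tx_count)]
finalize_times = [t + latency for t in submit_times]

print(f"first finalized at {finalize_times[0]:.3f} s")    # ~4.0 s
print(f"last finalized at  {finalize_times[-1]:.3f} s")   # ~5.0 s
```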

How could a database have worse benchmark results on a faster disk?

I'm benchmarking comparable (2 vCPU, 2 GB RAM) servers (Ubuntu 18.04) from DigitalOcean (DO) and AWS EC2 (t3a.small).
The disk benchmark (fio) is in line with the results of https://dzone.com/articles/iops-benchmarking-disk-io-aws-vs-digitalocean
In summary:
DO --
READ: bw=218MiB/s (229MB/s), 218MiB/s-218MiB/s (229MB/s-229MB/s), io=3070MiB (3219MB), run=14060-14060msec
WRITE: bw=72.0MiB/s (76.5MB/s), 72.0MiB/s-72.0MiB/s (76.5MB/s-76.5MB/s), io=1026MiB (1076MB), run=14060-14060msec
EC2 --
READ: bw=9015KiB/s (9232kB/s), 9015KiB/s-9015KiB/s (9232kB/s-9232kB/s), io=3070MiB (3219MB), run=348703-348703msec
WRITE: bw=3013KiB/s (3085kB/s), 3013KiB/s-3013KiB/s (3085kB/s-3085kB/s), io=1026MiB (1076MB), run=348703-348703msec
which shows the DO disk to be more than 10 times faster than EC2's EBS volume.
However, sysbench (following https://severalnines.com/database-blog/how-benchmark-postgresql-performance-using-sysbench) shows DO slower than EC2, using the Postgres 11 default configuration and the read-write test on oltp_legacy/oltp.lua:
DO --
transactions: 14704 (243.87 per sec.)
Latency (ms):
min: 9.06
avg: 261.77
max: 2114.04
95th percentile: 383.33
EC2 --
transactions: 20298 (336.91 per sec.)
Latency (ms):
min: 5.85
avg: 189.47
max: 961.27
95th percentile: 215.44
What could be the explanation?
Sequential read/write throughput matters for large sequential scans, stuff like data warehousing, loading a large backup, etc.
Your benchmark is OLTP, which does lots of small, quick queries. For this, sequential throughput is irrelevant.
For reads (SELECTs) the most important factor is having enough RAM to keep your working set in cache and not do any actual IO. Failing that, it is read random access time.
For writes (UPDATE, INSERT), the most important factor is fsync latency, i.e. the time required to commit data to stable storage, since the database will only finish a COMMIT when the data has been written.
Most likely the EC2 has better random access and fsync performance. Maybe it uses SSDs or battery-backed cache.
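A crude way to probe the fsync latency this answer points to, on each box, is the sketch below; it is only an illustration of the idea (small writes followed by fsync, roughly what a database does on COMMIT), not a substitute for purpose-built tools such as pg_test_fsync or fio:

```python
# Sketch: measure commit-style latency with small writes followed by fsync.
# The file name is arbitrary.
import os
import time

N = 200
fd = os.open("fsync_probe.dat", os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
latencies = []
for _ in range(N):
    os.write(fd, b"x" * 512)         # small WAL-style write
    start = time.perf_counter()
    os.fsync(fd)                     # wait until data is on stable storage
    latencies.append(time.perf_counter() - start)
os.close(fd)

latencies.sort()
print(f"median fsync latency: {latencies[N // 2] * 1000:.2f} ms")
print(f"p95 fsync latency:    {latencies[int(N * 0.95)] * 1000:.2f} ms")
```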
Sequential bandwidth and latency/IOPS are independent parameters. Some workloads (like databases) depend on latency for lots of small IOs, or on throughput for lots of small IO operations, i.e. IOPS (IOs per second).
In addition to IOPS vs. throughput, which others have mentioned, I also wanted to point out that the two results are pretty similar numbers: 240 tps vs. 330 tps. You could add or subtract almost that much just by doing things like VACUUM or ANALYZE, or by letting the system sit there for a while.
There could be other factors too: CPU speed could be different, performance could differ between short bursts and sustained load if a heavy user gets throttled, there could be presence or absence of huge_pages, different cache timings, memory speeds, or different NVMe drivers. The point is that 240 is not as much less than 330 as you might think.
Update: something else to point out is that OLTP read/write transactions aren't necessarily bottlenecked by disk performance; if you have sync off, then they really aren't.
I don't know exactly what the sysbench legacy OLTP read-write test is doing, but I suspect it's more like a bank transaction touching multiple records, using indexes, etc. It's probably not a raw maximum insertion rate or maximum CRUD-operation-rate benchmark.
I get 1000 tps on my desktop in the write-heavy benchmark against PG 13, but I can insert something like 50k records per second, each around 100 bytes, from a single-process Python client during bulk loads, and nearly 100k with sync off (see a sketch of that kind of load below).
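The kind of single-process bulk load referred to above typically goes through COPY rather than individual INSERTs. A hedged sketch with psycopg2 (the table name, columns, and connection string are made up for illustration):

```python
# Sketch: streaming rows into Postgres with COPY via psycopg2, which is far
# faster than row-by-row INSERTs. Table, columns, and DSN are placeholders.
import io
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")

buf = io.StringIO()
for i in range(50_000):
    buf.write(f"{i}\tsensor-{i % 10}\t{i * 0.1}\n")   # tab-separated rows
buf.seek(0)

with conn, conn.cursor() as cur:
    cur.copy_from(buf, "samples", columns=("id", "name", "value"))
conn.close()
```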

Information about the energy of a node

I want to get information about the energy remaining in a node, so that neighbouring nodes can reroute data packets when a neighbour's energy is low.
Currently the UnetStack simulator doesn't provide energy measurements directly. However, it's not hard to do this yourself for simulations. See this discussion for some suggestions:
https://unetstack.net/support/viewtopic.php?id=81:
The current version of UnetStack does not have any energy model per se. But the trace and logs provide you all the information you'll need (transmit/receive counts, simulation time) to compute the energy consumption. Specifically, you'd want to assign some energy per packet transmission, some energy per packet reception, and some power consumption while idling. If you dynamically adjust power level or packet duration in your protocol, you will need to account for that too.
Practical devices that use UnetStack often have a battery voltage parameter that provides some measure of energy available. However, this may be hard to use as battery voltage does not linearly depend on energy, but is highly dependent on the actual battery chemistry.
Something else that you might want to bear in mind when developing routing protocols that use energy information: transmitting remaining-energy information from a node to its neighbors itself takes energy! Do keep this in mind.
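As a sketch of the bookkeeping described in the quoted discussion, the transmit/receive counts and times from the trace and logs can be folded into an energy total along these lines. All numeric values below are placeholders, not UnetStack defaults:

```python
# Simple post-processing sketch of the energy model described above:
# energy per transmission, energy per reception, plus idle power draw.
E_TX_PER_PACKET = 2.0     # joules per packet transmitted (assumed)
E_RX_PER_PACKET = 0.5     # joules per packet received (assumed)
P_IDLE = 0.1              # watts while idling (assumed)

def energy_used(n_tx: int, n_rx: int, sim_time_s: float, busy_time_s: float) -> float:
    """Energy consumed by a node, computed from simulation trace/log counts."""
    idle_time = sim_time_s - busy_time_s
    return n_tx * E_TX_PER_PACKET + n_rx * E_RX_PER_PACKET + idle_time * P_IDLE

# e.g. 120 transmissions and 300 receptions over a 3600 s run with 450 s busy
print(f"{energy_used(120, 300, 3600.0, 450.0):.1f} J")
```

If you dynamically adjust power level or packet duration, replace the fixed per-packet constants with per-packet values derived from the logs.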

SQL Server 2008 Activity Monitor Resource Wait Category: Does Latch include CPU or just disk IO?

In SQL Server 2008 Activity Monitor, I see Wait Time on Wait Category "Latch" (not Buffer Latch) spike above 10,000ms/sec at times. Average Waiter Count is under 10, but this is by far the highest area of waits in a very busy system. Disk IO is almost zero and page life expectancy is over 80,000, so I know it's not slowed down by disk hardware and assume it's not even touching SAN cache. Does this mean SQL Server is waiting on CPU (i.e. resolving a bajillion locks) or waiting to transfer data from the local server's cache memory for processing?
Background: System is a 48-core running SQL Server 2008 Enterprise w/ 64GB of RAM. Queries are under 100ms in response time - for now - but I'm trying to understand the bottlenecks before they get to 100x that level.
Class                          Count      Sum Time    Max Time
ACCESS_METHODS_DATASET_PARENT  649629086  3683117221  45600
BUFFER                         20280535   23445826    8860
NESTING_TRANSACTION_READONLY   22309954   102483312   187
NESTING_TRANSACTION_FULL       7447169    123234478   265
Some latches are IO, some are CPU, some are other resources. It really depends on which particular latch type you're seeing. sys.dm_os_latch_stats will show which latches are hot in your deployment.
I wouldn't worry about the last three items. The two NESTING_TRANSACTION ones look very healthy (low average, low max). BUFFER is also more or less OK, although the 8 s max time is a bit high.
The AM_DS_PARENT (ACCESS_METHODS_DATASET_PARENT) latch is related to parallel queries/parallel scans. Its average is OK, but the max of 45 s is rather high. Without going into too much detail, I can tell you that long wait times on this latch type indicate that your IO subsystem can encounter spikes (and the 8 s max BUFFER latch waits corroborate this).
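If you'd rather pull the sys.dm_os_latch_stats numbers programmatically rather than from Activity Monitor, a hedged sketch using pyodbc (the connection string is a placeholder; adjust server, driver, and credentials to your environment):

```python
# Sketch: list the hottest latch classes from sys.dm_os_latch_stats via pyodbc.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=master;Trusted_Connection=yes"
)
cur = conn.cursor()
cur.execute("""
    SELECT TOP 10 latch_class, waiting_requests_count, wait_time_ms, max_wait_time_ms
    FROM sys.dm_os_latch_stats
    ORDER BY wait_time_ms DESC
""")
for row in cur.fetchall():
    print(row.latch_class, row.waiting_requests_count,
          row.wait_time_ms, row.max_wait_time_ms)
conn.close()
```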
