How to estimate inference time from average forward pass time in Caffe? - benchmarking

I use this command to benchmark my ConvNet in caffe:
./build/tools/caffe time -model models/own_xx/deploy.prototxt -weights examples/RSR_50k_all_1k_db/snapshot_iter_10000.caffemodel -gpu=0
It runs fine and generates output which ends with:
I0426 16:08:19.345427 15441 caffe.cpp:377] Average Forward pass: 13.5549 ms.
I0426 16:08:19.345484 15441 caffe.cpp:379] Average Backward pass: 10.7661 ms.
I0426 16:08:19.345527 15441 caffe.cpp:381] Average Forward-Backward: 25.2922 ms.
I0426 16:08:19.345579 15441 caffe.cpp:383] Total Time: 1264.61 ms.
I0426 16:08:19.345628 15441 caffe.cpp:384] *** Benchmark ends ***
In some tutorials I have seen people simply infer the classification time from the Average Forward pass, but I cannot find any formula or material explaining how to do this. Is there actually a link between the two entities? Which other factors, e.g. number of iterations and batch size, are involved? My goal is to accurately predict the classification time of my ConvNet on the GPU.
UPDATE: To not appear completely ignorant, I will add that I have a basic idea that the forward pass is the time taken for an input to produce its corresponding output, so it could also be called the inference time. However, what I am interested in is whether that holds irrespective of batch size and number of iterations. I tried, but during benchmarking Caffe does not offer any 'batch' option.

The average forward pass time is the time it takes to propagate one batch of inputs from the input ("data") layer to the output layer. The batch size specified in your models/own_xx/deploy.prototxt file determines how many images are processed in each forward pass.
For instance, if I run the default command that comes with Caffe:
build/tools/caffe time --model=models/bvlc_alexnet/deploy.prototxt --gpu=0
I get the following output:
...
I0426 13:07:32.701490 30417 layer_factory.hpp:77] Creating layer data
I0426 13:07:32.701513 30417 net.cpp:91] Creating Layer data
I0426 13:07:32.701529 30417 net.cpp:399] data -> data
I0426 13:07:32.709048 30417 net.cpp:141] Setting up data
I0426 13:07:32.709079 30417 net.cpp:148] Top shape: 10 3 227 227 (1545870)
I0426 13:07:32.709084 30417 net.cpp:156] Memory required for data: 6183480
...
I0426 13:07:34.390281 30417 caffe.cpp:377] Average Forward pass: 16.7818 ms.
I0426 13:07:34.390290 30417 caffe.cpp:379] Average Backward pass: 12.923 ms.
I0426 13:07:34.390296 30417 caffe.cpp:381] Average Forward-Backward: 29.7969 ms.
The following line:
I0426 13:07:32.709079 30417 net.cpp:148] Top shape: 10 3 227 227 (1545870)
is super important. It says that your input layer is 10x3x227x227-dimensional. In this case, the batch size is 10 images, each of size 3x227x227 (the 3 refers to the RGB channels of the image).
So effectively, a forward pass took 16.7818 ms / 10 images = 1.67818 ms per image, which is the inference time per image.
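To make the arithmetic explicit, here is a minimal sketch (plain Java; the numbers are the ones from the AlexNet run above):

public class InferenceTimeEstimate {
    public static void main(String[] args) {
        // Reported by `caffe time` for one batch:
        double avgForwardPassMs = 16.7818;
        // First "dim" of the input layer (Top shape: 10 3 227 227):
        int batchSize = 10;

        double msPerImage = avgForwardPassMs / batchSize;   // 1.67818 ms
        double imagesPerSecond = 1000.0 / msPerImage;       // ~596 images/s

        System.out.printf("Inference time per image: %.5f ms%n", msPerImage);
        System.out.printf("Throughput: %.0f images/s%n", imagesPerSecond);
    }
}

Keep in mind this is throughput at batch size 10; running with a batch size of 1 will usually give a somewhat higher per-image time, since the GPU is less efficient on small batches.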
Changing the batch size
If you want to change the batch size, look at your .prototxt file. The
models/bvlc_alexnet/deploy.prototxt file that comes with Caffe looks like the following:
name: "AlexNet"
layer {
name: "data"
type: "Input"
top: "data"
input_param { shape: { dim: 10 dim: 3 dim: 227 dim: 227 } }
}
layer { ...
Simply change the first dim: 10 to some other value (say 100) to specify a batch size of 100 images per forward pass.

Related

Based on Gatling report, how to make sure 100 requests are processed in less than 1 second

How can I check my requirement that 100 requests are processed in less than 1 second in my Gatling 3 report? I ran this using Jenkins.
My simulation looks like the below:
rampConcurrentUsers(1) to (100) during (161 second),
constantConcurrentUsers(100) during (1 minute)
Below is my response time percentile graph of two executions for an interval of one second.
What do the min and max here tell us? I am assuming the percentiles 25%-99% are the completion times of the requests.
Those graph sections are not what you're after - they show the distribution of response times and the number of active users.
So min is the fastest response time
max is the longest
95% is the response time that 95% of your requests came in under
and so on...
So what you could do is look at the section of your graph corresponding to the 100 constant concurrent users injection stage. In that part you would require the max response time to always be under 1 second.
(Note: there's something odd with your 2nd report - I assume it didn't come from running the stated injection profile as it has more than 100 concurrent users active)
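If you want this check automated rather than read off the graph, Gatling can also enforce it as an assertion on the simulation, which fails the run (and hence the Jenkins build). A minimal sketch, assuming Gatling's Java DSL (available from 3.7 onward); the target URL and scenario are hypothetical stand-ins for your own simulation:

import static io.gatling.javaapi.core.CoreDsl.*;
import static io.gatling.javaapi.http.HttpDsl.*;

import io.gatling.javaapi.core.ScenarioBuilder;
import io.gatling.javaapi.core.Simulation;
import io.gatling.javaapi.http.HttpProtocolBuilder;

public class MaxResponseTimeSimulation extends Simulation {

    HttpProtocolBuilder httpProtocol = http.baseUrl("http://example.com"); // hypothetical target

    ScenarioBuilder scn = scenario("load")                                 // hypothetical scenario
            .exec(http("request").get("/"));

    {
        setUp(
            scn.injectClosed(
                rampConcurrentUsers(1).to(100).during(161),
                constantConcurrentUsers(100).during(60)
            )
        ).protocols(httpProtocol)
         // Fail the run if any request takes 1 second or more.
         .assertions(global().responseTime().max().lt(1000));
    }
}

Note that this assertion covers the whole run, including the ramp-up stage, not just the constant 100-user stage.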

Cannot repair specific tables on specific nodes in Cassandra

I'm running 5 nodes in one DC of Cassandra 3.10.
To maintain those nodes, I run the following daily on every node:
nodetool repair -pr
and weekly
nodetool repair -full
This is the only table I have difficulties with:
Table: user_tmp
SSTable count: 4
Space used (live): 366.71 MiB
Space used (total): 366.71 MiB
Space used by snapshots (total): 216.87 MiB
Off heap memory used (total): 5.28 MiB
SSTable Compression Ratio: 0.4690289976332873
Number of keys (estimate): 1968368
Memtable cell count: 2353
Memtable data size: 84.98 KiB
Memtable off heap memory used: 0 bytes
Memtable switch count: 1108
Local read count: 62938927
Local read latency: 0.324 ms
Local write count: 62938945
Local write latency: 0.018 ms
Pending flushes: 0
Percent repaired: 76.94
Bloom filter false positives: 0
Bloom filter false ratio: 0.00000
Bloom filter space used: 4.51 MiB
Bloom filter off heap memory used: 4.51 MiB
Index summary off heap memory used: 717.62 KiB
Compression metadata off heap memory used: 76.96 KiB
Compacted partition minimum bytes: 51
Compacted partition maximum bytes: 654949
Compacted partition mean bytes: 194
Average live cells per slice (last five minutes): 2.503074492537404
Maximum live cells per slice (last five minutes): 179
Average tombstones per slice (last five minutes): 1.0
Maximum tombstones per slice (last five minutes): 1
Dropped Mutations: 19 bytes
Percent repaired never goes above 80% for this table on this node and one other, but on the remaining nodes it is above 85%. RF is 3, and the compaction strategy is SizeTieredCompactionStrategy.
gc_grace_seconds is set to 10 days, and somewhere in that period I get a WriteTimeout on exactly this table, but the consumer that got the timeout is immediately replaced by another one and everything keeps going as if nothing happened. It is like a one-time WriteTimeout.
My question is: do you have a suggestion for a better repair strategy? I'm kind of a noob and every suggestion is a big win for me, as is any other advice for this table.
Maybe repair -inc instead of repair -pr?
The nodetool repair command in Cassandra 3.10 defaults to running incremental repair. There have been some major issues with incremental repair, and the community currently does not recommend running it. Please see this article for some great insight into repair and the issues with incremental repair: http://thelastpickle.com/blog/2017/12/14/should-you-use-incremental-repair.html
I would recommend, as do many others, running:
nodetool repair -full -pr
Please be aware that you need to run repair on every node in your cluster. This means that if you run repair on one node per day, you can have a maximum of 7 nodes (since with the default gc_grace you should aim to finish repair within 7 days). You also have to rely on nothing going wrong during repair, since you would have to restart any failing jobs yourself.
This is why tools like Reaper exist. It solves these issues with ease: it automates repair and makes life simpler. Reaper runs scheduled repairs and provides a web interface to make administration easier. I would highly recommend using Reaper for routine maintenance and nodetool repair for unplanned activities.
Edit: Link http://cassandra-reaper.io/

InfluxDB "GROUP BY time" shifts time

I'm having a problem that's limiting me quite a bit. We are trying to sample our data by grouping on time. We have millions of points and want to fetch every Nth point in a given interval. We have implemented a solution that calculates the time difference over this interval and then groups by it to receive the correct number of points.
SELECT last(value) as value FROM measurement WHERE time >= '...' AND time <= '...' GROUP BY time(calculated_time) fill(none)
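Roughly, the interval is derived like this (a simplified, hypothetical sketch - the real code and the target point count differ):

import java.time.Duration;
import java.time.Instant;

public class SampleIntervalSketch {
    public static void main(String[] args) {
        Instant start = Instant.parse("2016-01-01T00:00:00Z");
        Instant end   = Instant.parse("2017-01-01T00:00:00Z");
        long desiredPoints = 23_000; // hypothetical target number of points

        // Pick the GROUP BY interval so the range yields roughly desiredPoints buckets.
        long intervalMinutes = Math.max(1, Duration.between(start, end).toMinutes() / desiredPoints);

        String query = "SELECT last(value) AS value FROM measurement"
                + " WHERE time >= '" + start + "' AND time <= '" + end + "'"
                + " GROUP BY time(" + intervalMinutes + "m) fill(none)";
        System.out.println(query);
    }
}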
The amount of points returned seems to be correct, but the dates are not.
See the results below:
Without sampling
> SELECT value FROM "measurement" WHERE time >= '2016-01-01T00:00:00Z' AND time <= '2017-01-01T00:00:00Z' LIMIT 5;
name: measurement
time value
---- -----
2016-01-01T00:00:00Z 61.111
2016-01-01T01:00:00Z 183.673
2016-01-01T02:00:00Z 200
2016-01-01T03:00:00Z 66.667
2016-01-01T04:00:00Z 97.959
With Sampling
> SELECT last(value) as value FROM "measurement" WHERE time >= '2016-01-01T00:00:00Z' AND time <= '2017-01-01T00:00:00Z' GROUP BY time(23m) fill(none) LIMIT 5;
name: measurement
time value
---- -----
2015-12-31T23:44:00Z 61.111
2016-01-01T00:53:00Z 183.673
2016-01-01T01:39:00Z 200
2016-01-01T02:48:00Z 66.667
2016-01-01T03:57:00Z 97.959
I expect the data returned to have the correct timestamps, as stored in the database, regardless of the interval used in the aggregation. Instead, the timestamps returned seem to be multiples of the aggregation interval. That is, if my aggregation is GROUP BY time(7m), then the returned points seem to be a multiple of 7 minutes apart.
If there is no solution to my problem with Influx, is there an alternative database I can use to accomplish this? The data in this example is uniform and evenly distributed, but that is not always the case. More often than not it will be randomly distributed (spans of seconds to minutes).

Hazelcast vs. Ignite benchmark

I am using data grids as my primary "database". I noticed a drastic difference between Hazelcast and Ignite query performance. I optimized my data grid usage by the proper custom serialization and indexes, but the difference is still noticeable IMO.
Since no one asked it here, I am going to answer my own question for future reference. This is not an abstract (learning) exercise, but a real-world benchmark that models my data grid usage in large SaaS systems - primarily to display sorted and filtered paginated lists. I primarily wanted to know how much overhead my universal JDBC-ish data grid access layer adds compared to raw no-frameworks Hazelcast and Ignite usage. But since I am comparing apples to apples, here comes the benchmark.
I have reviewed the provided code on GitHub and have many comments:
Indexing and Joins
Probably the most important point is that Apache Ignite's indexing is a lot more sophisticated than Hazelcast's. Unlike Hazelcast, Ignite supports ANSI-99 SQL, so you can write your queries at will.
Most importantly, unlike Hazelcast, Ignite supports group-indexes and SQL JOINs across different caches or data types. Imagine that you have Person and Organization tables, and you need to select all Persons working for the same Organization. This would be impossible to do in 1 step in Hazelcast (correct me if I am wrong), but in Ignite it is a simple SQL JOIN query.
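For illustration, here is a minimal sketch of such a cross-cache join using Ignite's SQL API (cache names, fields, and data are made up for the example, not taken from the benchmark):

import java.util.List;

import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;
import org.apache.ignite.cache.query.SqlFieldsQuery;
import org.apache.ignite.cache.query.annotations.QuerySqlField;
import org.apache.ignite.configuration.CacheConfiguration;

public class CrossCacheJoin {
    public static class Organization {
        @QuerySqlField(index = true) Long id;
        @QuerySqlField String name;
        Organization(Long id, String name) { this.id = id; this.name = name; }
    }

    public static class Person {
        @QuerySqlField(index = true) Long orgId;
        @QuerySqlField String name;
        Person(Long orgId, String name) { this.orgId = orgId; this.name = name; }
    }

    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            IgniteCache<Long, Organization> orgs = ignite.getOrCreateCache(
                new CacheConfiguration<Long, Organization>("orgCache")
                    .setIndexedTypes(Long.class, Organization.class));
            IgniteCache<Long, Person> persons = ignite.getOrCreateCache(
                new CacheConfiguration<Long, Person>("personCache")
                    .setIndexedTypes(Long.class, Person.class));

            orgs.put(1L, new Organization(1L, "Acme"));
            persons.put(10L, new Person(1L, "Jane"));
            persons.put(11L, new Person(1L, "John"));

            // All Persons working for the given Organization, in one SQL statement.
            SqlFieldsQuery qry = new SqlFieldsQuery(
                "SELECT p.name FROM Person p, \"orgCache\".Organization o " +
                "WHERE p.orgId = o.id AND o.name = ?").setArgs("Acme");

            List<List<?>> rows = persons.query(qry).getAll();
            rows.forEach(r -> System.out.println(r.get(0)));
        }
    }
}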
Given the above, Ignite indexes will take a bit longer to create, especially in your test, where you have 7 of them.
Fixes in TestEntity Class
In your code, the entity you store in cache, TestEntity, recalculates the value for idSort, createdAtSort, and modifiedAtSort every time the getter is called. Ignite calls these getters several times while the entity is being stored in the index tree. A simple fix to the TestEntity class provides 4x performance improvement: https://gist.github.com/dsetrakyan/6bfe089d53f888448503
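The linked gist has the actual fix; the idea is simply to compute the sort values once when the entity is written and have the getters return a cached field rather than recomputing on every call. An illustrative sketch (field types and the sort-key format are assumptions, not the benchmark's exact code):

public class TestEntity {
    private Long id;
    private java.util.Date createdAt;

    // Precomputed values backing the indexed "sort" getters
    // (modifiedAtSort is handled the same way, omitted for brevity).
    private String idSort;
    private String createdAtSort;

    public void setId(Long id) {
        this.id = id;
        this.idSort = String.format("%020d", id);                // compute once, on write
    }

    public void setCreatedAt(java.util.Date createdAt) {
        this.createdAt = createdAt;
        this.createdAtSort = Long.toString(createdAt.getTime()); // compute once, on write
    }

    // Ignite calls these getters repeatedly while placing the entity into the
    // index tree, so they must be cheap: just return the cached value.
    public String getIdSort() { return idSort; }
    public String getCreatedAtSort() { return createdAtSort; }
}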
Heap Measurement is Not Accurate
The way you measure heap is incorrect. You should at least call System.gc() before taking the heap measurement, and even that would not be accurate. For example, in the results below, I get negative heap size using your method.
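For reference, a minimal sketch of what "at least call System.gc() first" could look like (still only approximate, since System.gc() is just a hint to the JVM):

public final class HeapUsage {
    /** Approximate used heap in bytes. Still inexact: System.gc() is only a
     *  request, and other threads may allocate while we measure. */
    public static long usedHeapBytes() {
        System.gc();
        try {
            Thread.sleep(200); // give the collector a moment to finish
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        System.out.println("Used heap: " + usedHeapBytes() + " bytes");
    }
}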
Warmup
Every benchmark requires a warm-up. For example, when I apply the TestEntity fix, as suggested above, and do the cache population and queries 2 times, I get better results.
MySQL Comparison
I don't think comparing a single-node Data Grid test to MySQL is fair to either Ignite or Hazelcast. Databases have their own caching, and when working with such small data sizes, you are usually testing the database's in-memory cache against the Data Grid's in-memory cache.
The performance benefit usually shows up when doing a distributed test over a partitioned cache: the Data Grid executes the query on each cluster node in parallel, and the results should come back a lot faster.
Results
Here are the results I got for Apache Ignite. They look a lot better after I made the aforementioned fixes.
Note that the 2nd time we execute the cache population and cache queries, we get better results because the HotSpot JVM is warmed up.
It is worth mentioning that Ignite does not cache query results. Every time you run the query, you are executing it from scratch.
[00:45:15] Ignite node started OK (id=0960e091, grid=Benchmark)
[00:45:15] Topology snapshot [ver=1, servers=1, clients=0, CPUs=4, heap=8.0GB]
Starting - used heap: 225847216 bytes
Inserting 100000 records: ....................................................................................................
Inserted all records - used heap: 1001824120 bytes
Cache: 100000 entries, heap size: 775976904 bytes, inserts took 14819 ms
------------------------------------
Starting - used heap: 1139467848 bytes
Inserting 100000 records: ....................................................................................................
Inserted all records - used heap: 978473664 bytes
Cache: 100000 entries, heap size: **-160994184** bytes, inserts took 11082 ms
------------------------------------
Query 1 count: 100, time: 110 ms, heap size: 1037116472 bytes
Query 2 count: 100, time: 285 ms, heap size: 1037116472 bytes
Query 3 count: 100, time: 19 ms, heap size: 1037116472 bytes
Query 4 count: 100, time: 123 ms, heap size: 1037116472 bytes
------------------------------------
Query 1 count: 100, time: 10 ms, heap size: 1037116472 bytes
Query 2 count: 100, time: 116 ms, heap size: 1056692952 bytes
Query 3 count: 100, time: 6 ms, heap size: 1056692952 bytes
Query 4 count: 100, time: 119 ms, heap size: 1056692952 bytes
------------------------------------
[00:45:52] Ignite node stopped OK [uptime=00:00:36:515]
I will create another GitHub repo with the corrected code and post it here when I am more awake (coffee is not helping anymore).
Here is the benchmark source code: https://github.com/a-rog/px100data/tree/master/examples/HazelcastVsIgnite
It is part of the JDBC-ish NoSQL framework I mentioned earlier: Px100 Data
Building and running it:
cd <project-dir>
mvn clean package
cd target
java -cp "grid-benchmark.jar:lib/*" -Xms512m -Xmx3000m -Xss4m com.px100systems.platform.benchmark.HazelcastTest 100000
java -cp "grid-benchmark.jar:lib/*" -Xms512m -Xmx3000m -Xss4m com.px100systems.platform.benchmark.IgniteTest 100000
As you can see, I set the memory limits high to avoid garbage collection. You can also run my own framework test (see Px100DataTest.java) and compare to the two above, but let's concentrate on pure performance. Neither test uses Spring or anything else except for Hazelcast 3.5.1 and Ignite 1.3.3 - the latest at the moment.
The benchmark transactionally inserts the specified number of approximately 1 KB records (100000 of them - you can increase it, but beware of memory) in batches (transactions) of 1000. It then executes two queries, each with ascending and descending sort: four in total. All query fields and the ORDER BY field are indexed.
I am not going to post the entire class (download it from GitHub). The Hazelcast query looks like this:
// Filter (textField LIKE %Jane% AND id > first.getId()), sort by id ascending, 100 entries per page
PagingPredicate predicate = new PagingPredicate(
        new Predicates.AndPredicate(
                new Predicates.LikePredicate("textField", "%Jane%"),
                new Predicates.GreaterLessPredicate("id", first.getId(), false, false)),
        (o1, o2) -> ((TestEntity) o1.getValue()).getId().compareTo(((TestEntity) o2.getValue()).getId()),
        100);
The matching Ignite query:
SqlQuery<Object, TestEntity> query = new SqlQuery<>(TestEntity.class,
"FROM TestEntity WHERE textField LIKE '%Jane%' AND id > '" + first.getId() + "' ORDER BY id LIMIT 100");
query.setPageSize(100);
Here are the results executed on my 2012 8-core MBP with 8G of memory:
Hazelcast
Starting - used heap: 49791048 bytes
Inserting 100000 records: ....................................................................................................
Inserted all records - used heap: 580885264 bytes
Map: 100000 entries, used heap: 531094216 bytes, inserts took 5458 ms
Query 1 count: 100, time: 344 ms, heap size: 298844824 bytes
Query 2 count: 100, time: 115 ms, heap size: 454902648 bytes
Query 3 count: 100, time: 165 ms, heap size: 657153784 bytes
Query 4 count: 100, time: 106 ms, heap size: 811155544 bytes
Ignite
Starting - used heap: 100261632 bytes
Inserting 100000 records: ....................................................................................................
Inserted all records - used heap: 1241999968 bytes
Cache: 100000 entries, heap size: 1141738336 bytes, inserts took 14387 ms
Query 1 count: 100, time: 222 ms, heap size: 917907456 bytes
Query 2 count: 100, time: 128 ms, heap size: 926325264 bytes
Query 3 count: 100, time: 7 ms, heap size: 926325264 bytes
Query 4 count: 100, time: 103 ms, heap size: 934743064 bytes
One obvious difference is the insert performance, which would be noticeable in real life. However, one very rarely inserts 1000 records at once. Typically it is one insert or update (saving a user's entered data, etc.), so it doesn't bother me. The query performance, however, does. Most data-centric business software is read-heavy.
Note the memory consumption. Ignite is much more RAM-hungry than Hazelcast, which may explain the better query performance. Then again, if I have decided to use an in-memory grid, should I worry about memory?
You can clearly tell when the data grids hit indexes and when they don't, and how they cache compiled queries (the 7 ms one), etc. I don't want to speculate and will let you play with it yourself; the Hazelcast and Ignite developers may be able to provide some insight as well.
As far as general performance goes, it is comparable to, if not below, MySQL. IMO in-memory technology should do better. I am sure both companies will take note.
The results above are pretty close. However, when used within Px100 Data and the higher-level Px100 (which relies heavily on indexed "sort fields" for pagination), Ignite pulls ahead and is better suited for my framework. I care primarily about query performance.

How do I measure response time in seconds given the following benchmarking data?

We recently got some data back on a benchmarking test from a software vendor, and I think I'm missing something obvious.
If there were 17 transactions (I assume they mean successfully completed requests) per second, and 1500 of these requests could be served in 5 minutes, then how do I get the response time for a single user? Is this sort of thing even possible with benchmarking? I have a lot of other data from them, including Apache config settings, but I'm not sure how to do the math.
Given the server setup they sent, I want to know how I can deduce the user response time. I have looked at other similar benchmarking tests, but I'm having trouble measuring requests to response time. What other data do I need to provide here to get that?
If only 1500 of these can be served per 5 minutes then:
1500 / 5 = 300 transactions per min can be served
300 / 60 = 5 transactions per second can be served
so how are they getting 17 completed transactions per second? Last time I checked 5 < 17 !
This doesn't seem to fit. Or am I looking at it wrongly?
I presume by user response time, you mean the time it takes to serve a single transaction:
If they can serve 5 per second, then it takes 200 ms (1/5 s) per transaction
If they can serve 17 per second, then it takes 59 ms (1/17 s) per transaction
That is all we can tell from the given data. Perhaps clarify how many transactions are being done per second.
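For completeness, the same arithmetic as a tiny sketch (figures taken straight from the question):

public class ResponseTimeFromThroughput {
    public static void main(String[] args) {
        double served = 1500;                    // requests served
        double windowSeconds = 5 * 60;           // ...in 5 minutes

        double tps = served / windowSeconds;     // 5 transactions per second
        System.out.printf("Measured: %.0f tx/s -> %.0f ms per transaction%n",
                tps, 1000.0 / tps);              // 200 ms

        double claimedTps = 17;                  // the vendor's figure
        System.out.printf("Claimed:  %.0f tx/s -> %.0f ms per transaction%n",
                claimedTps, 1000.0 / claimedTps); // ~59 ms
    }
}

(Strictly, 1/throughput equals per-transaction time only if transactions are served one at a time; with concurrency the two are not the same thing.)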
