LMDB random writes really slow for large data (~1MB/s)

I am testing LMDB performance with the benchmark given at http://www.lmdb.tech/bench/ondisk/, and I noticed that LMDB random writes become really slow once the data grows beyond memory.
I am using a machine with 4GB of DRAM and an Intel PCIe SSD. The key size is 10 bytes and the value size is 1KB. The benchmark code is given at http://www.lmdb.tech/bench/ondisk/, and the command line I used is "./db_bench_mdb --benchmarks=fillrandbatch --threads=1 --stats_interval=1024 --num=10000000 --value_size=1000 --use_existing_db=0".
For the first 1GB of data written, the average write rate is 140MB/s. The rate then drops sharply, to 40MB/s over the first 2GB. By the end of the test, in which 10M values are written, the average rate is just 3MB/s and the instantaneous rate is 1MB/s. I know LMDB is not optimized for writes, but I didn't expect it to be this slow, given that I have a really high-end Intel SSD.
I also notice that the way LMDB accesses the SSD is really strange. At the beginning of the test it writes to the SSD at around 400MB/s and performs no reads, which is expected. But as more and more data is written, LMDB starts to read from the SSD. As time goes on, the read throughput rises while the write throughput drops significantly. At the end of the test, LMDB is constantly reading at around 190MB/s while only occasionally issuing ~100MB of writes at around 10-20 second intervals.
1. Is it normal for LMDB to have such low write throughput (1MB/s at the end of the test) for data stored on an SSD?
2. Why is LMDB reading more data than it is writing (about 20MB read per 1MB written) at the end of the test?
To my understanding, even though there is more data than the DRAM can hold, the branch nodes of the B-tree should still fit in DRAM. So for every write, the only pages we need to fetch from the SSD are the leaf nodes. And when we write a leaf node, we may also need to write its parents. So there should be more writes than reads. But it turns out LMDB is reading much more than it is writing, and I think this may be why it is so slow at the end. I really cannot understand why.
For your reference, here is part of the log given by the benchmark:
2018/03/12-10:36:30 ... thread 0: (1024,1024) ops and (54584.2,54584.2) ops/second in (0.018760,0.018760) seconds
2018/03/12-10:36:30 ... thread 0: (1024,2048) ops and (111231.8,73231.8) ops/second in (0.009206,0.027966) seconds
2018/03/12-10:36:30 ... thread 0: (1024,3072) ops and (125382.6,85019.2) ops/second in (0.008167,0.036133) seconds
2018/03/12-10:36:30 ... thread 0: (1024,4096) ops and (206202.2,99661.8) ops/second in (0.004966,0.041099) seconds
2018/03/12-10:36:30 ... thread 0: (1024,5120) ops and (259634.9,113669.2) ops/second in (0.003944,0.045043) seconds
2018/03/12-10:36:30 ... thread 0: (1024,6144) ops and (306495.1,126984.1) ops/second in (0.003341,0.048384) seconds
2018/03/12-10:36:30 ... thread 0: (1024,7168) ops and (339185.2,139447.1) ops/second in (0.003019,0.051403) seconds
2018/03/12-10:36:30 ... thread 0: (1024,8192) ops and (384240.2,151512.9) ops/second in (0.002665,0.054068) seconds
2018/03/12-10:36:30 ... thread 0: (1024,9216) ops and (385252.1,162465.2) ops/second in (0.002658,0.056726) seconds
2018/03/12-10:36:30 ... thread 0: (1024,10240) ops and (371553.0,172152.9) ops/second in (0.002756,0.059482) seconds
...
2018/03/12-10:36:37 ... thread 0: (1024,993280) ops and (70127.4,142518.0) ops/second in (0.014602,6.969505) seconds
2018/03/12-10:36:37 ... thread 0: (1024,994304) ops and (199415.8,142559.9) ops/second in (0.005135,6.974640) seconds
2018/03/12-10:36:37 ... thread 0: (1024,995328) ops and (75953.1,142431.4) ops/second in (0.013482,6.988122) seconds
2018/03/12-10:36:37 ... thread 0: (1024,996352) ops and (200823.7,142474.0) ops/second in (0.005099,6.993221) seconds
2018/03/12-10:36:37 ... thread 0: (1024,997376) ops and (71975.8,142330.8) ops/second in (0.014227,7.007448) seconds
2018/03/12-10:36:37 ... thread 0: (1024,998400) ops and (62117.1,142142.6) ops/second in (0.016485,7.023933) seconds
2018/03/12-10:36:37 ... thread 0: (1024,999424) ops and (36366.2,141720.2) ops/second in (0.028158,7.052091) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1000448) ops and (61914.3,141533.5) ops/second in (0.016539,7.068630) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1001472) ops and (60985.1,141342.6) ops/second in (0.016791,7.085421) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1002496) ops and (60466.5,141149.8) ops/second in (0.016935,7.102356) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1003520) ops and (60189.3,140956.3) ops/second in (0.017013,7.119369) seconds
2018/03/12-10:36:37 ... thread 0: (1024,1004544) ops and (61731.4,140772.1) ops/second in (0.016588,7.135957) seconds
...
2018/03/12-10:40:15 ... thread 0: (1024,3236864) ops and (5620.5,14373.0) ops/second in (0.182189,225.203790) seconds
2018/03/12-10:40:15 ... thread 0: (1024,3237888) ops and (6098.5,14366.9) ops/second in (0.167911,225.371701) seconds
2018/03/12-10:40:15 ... thread 0: (1024,3238912) ops and (5469.5,14359.5) ops/second in (0.187221,225.558922) seconds
2018/03/12-10:40:15 ... thread 0: (1024,3239936) ops and (5593.9,14352.4) ops/second in (0.183056,225.741978) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3240960) ops and (5806.9,14345.7) ops/second in (0.176342,225.918320) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3241984) ops and (5332.9,14338.1) ops/second in (0.192016,226.110336) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3243008) ops and (5532.3,14330.9) ops/second in (0.185096,226.295432) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3244032) ops and (6108.8,14324.8) ops/second in (0.167626,226.463058) seconds
2018/03/12-10:40:16 ... thread 0: (1024,3245056) ops and (6074.7,14318.6) ops/second in (0.168567,226.631625) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3246080) ops and (5615.2,14311.6) ops/second in (0.182362,226.813987) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3247104) ops and (5529.3,14304.5) ops/second in (0.185194,226.999181) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3248128) ops and (5846.2,14298.0) ops/second in (0.175156,227.174337) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3249152) ops and (5741.5,14291.2) ops/second in (0.178351,227.352688) seconds
2018/03/12-10:40:17 ... thread 0: (1024,3250176) ops and (5640.2,14284.3) ops/second in (0.181555,227.534243) seconds
...
2018/03/12-11:30:39 ... thread 0: (1024,9988096) ops and (1917.2,3074.3) ops/second in (0.534112,3248.860552) seconds
2018/03/12-11:30:39 ... thread 0: (1024,9989120) ops and (1858.9,3074.1) ops/second in (0.550851,3249.411403) seconds
2018/03/12-11:30:40 ... thread 0: (1024,9990144) ops and (1922.8,3073.9) ops/second in (0.532557,3249.943960) seconds
2018/03/12-11:30:40 ... thread 0: (1024,9991168) ops and (1857.2,3073.7) ops/second in (0.551382,3250.495342) seconds
2018/03/12-11:30:41 ... thread 0: (1024,9992192) ops and (1851.3,3073.5) ops/second in (0.553130,3251.048472) seconds
2018/03/12-11:30:41 ... thread 0: (1024,9993216) ops and (1941.0,3073.3) ops/second in (0.527568,3251.576040) seconds
2018/03/12-11:30:42 ... thread 0: (1024,9994240) ops and (1923.1,3073.2) ops/second in (0.532461,3252.108501) seconds
2018/03/12-11:30:42 ... thread 0: (1024,9995264) ops and (1987.6,3073.0) ops/second in (0.515200,3252.623701) seconds
2018/03/12-11:30:43 ... thread 0: (1024,9996288) ops and (1931.2,3072.8) ops/second in (0.530233,3253.153934) seconds
2018/03/12-11:30:43 ... thread 0: (1024,9997312) ops and (1918.9,3072.6) ops/second in (0.533633,3253.687567) seconds
2018/03/12-11:30:44 ... thread 0: (1024,9998336) ops and (1999.0,3072.4) ops/second in (0.512246,3254.199813) seconds
2018/03/12-11:30:44 ... thread 0: (1024,9999360) ops and (1853.3,3072.2) ops/second in (0.552533,3254.752346) seconds
fillrandbatch : 325.508 micros/op 3072 ops/sec; 3.0 MB/s
And here is the read/write rate dumped from iostat at 1s intervals:
Device tps MB_read/s MB_wrtn/s MB_read MB_wrtn
sdb 73.00 0.12 25.52 0 25
sdb 531.00 0.00 495.21 0 495
sdb 15089.00 0.00 488.77 0 488
sdb 27431.00 0.01 463.55 0 463
sdb 13093.00 0.00 478.77 0 478
sdb 53676.00 0.00 413.79 0 413
sdb 16781.00 0.00 483.60 0 483
sdb 22267.00 0.00 323.32 0 323
sdb 23945.00 0.00 164.55 0 164
sdb 22867.00 0.00 152.25 0 152
sdb 22038.00 0.00 146.39 0 146
sdb 23825.00 0.00 263.61 0 263
...
sdb 20866.00 85.81 76.90 85 76
sdb 7684.00 101.75 115.19 101 115
sdb 3707.00 154.48 0.00 154 0
sdb 4349.00 181.41 0.00 181 0
sdb 4373.00 184.70 0.00 184 0
sdb 4329.00 185.04 0.00 185 0
sdb 4338.00 182.30 0.01 182 0
sdb 4364.00 184.27 0.00 184 0
sdb 5310.00 177.32 4.99 177 4
sdb 32130.00 99.07 119.70 99 119
sdb 27010.00 103.26 99.25 103 99
sdb 11109.00 67.18 99.96 67 99
sdb 3931.00 172.51 0.00 172 0
sdb 4112.00 171.28 0.00 171 0
sdb 4202.00 183.03 0.00 183 0
sdb 4119.00 183.79 0.00 183 0
sdb 4232.00 182.77 0.02 182 0
sdb 4224.00 185.90 0.00 185 0
sdb 4304.00 186.17 0.00 186 0
sdb 4279.00 188.83 0.00 188 0
sdb 4087.00 184.38 0.00 184 0
sdb 7758.00 163.86 16.70 163 16
sdb 21309.00 68.95 80.11 68 80
sdb 21166.00 81.66 78.42 81 78
sdb 19328.00 71.56 71.55 71 71
sdb 20836.00 89.08 76.52 89 76
sdb 3211.00 112.01 82.21 112 82
sdb 3939.00 173.40 0.00 173 0
sdb 3992.00 178.03 0.00 178 0
sdb 4251.00 181.49 0.00 181 0
sdb 4148.00 185.63 0.00 185 0
sdb 4094.00 184.12 0.01 184 0
sdb 4241.00 187.38 0.00 187 0
sdb 4044.00 186.60 0.00 186 0
sdb 4049.00 185.47 0.00 185 0
sdb 4247.00 189.17 0.00 189 0
...
sdb 17457.00 105.45 64.05 105 64
sdb 16736.00 82.12 62.35 82 62
sdb 12074.00 108.76 66.21 108 66
sdb 2232.00 194.44 0.00 194 0
sdb 2171.00 187.27 0.02 187 0
sdb 2322.00 197.91 0.00 197 0
sdb 2311.00 194.65 0.00 194 0
sdb 2240.00 187.93 0.00 187 0
sdb 2189.00 191.38 0.00 191 0
sdb 2266.00 192.33 0.01 192 0
sdb 2312.00 198.95 0.00 198 0
sdb 2310.00 199.84 0.00 199 0
sdb 2350.00 198.83 0.00 198 0
sdb 2275.00 198.31 0.00 198 0
sdb 3952.00 185.05 6.79 185 6
sdb 15842.00 59.89 59.67 59 59
sdb 16676.00 88.24 61.79 88 61
sdb 14768.00 75.94 55.00 75 54
sdb 5677.00 141.71 35.03 141 35
sdb 2135.00 184.78 0.04 184 0
sdb 2301.00 197.18 0.00 197 0
sdb 2334.00 198.81 0.00 198 0
sdb 2304.00 198.83 0.00 198 0
sdb 2348.00 198.67 0.00 198 0
sdb 2352.00 198.42 0.01 198 0
sdb 2373.00 199.32 0.00 199 0
sdb 2363.00 197.55 0.00 197 0
sdb 2289.00 198.71 0.00 198 0
sdb 2246.00 189.31 0.00 189 0
sdb 2357.00 198.64 0.01 198 0
sdb 2338.00 197.96 0.00 197 0
sdb 6292.00 177.60 16.56 177 16
sdb 19374.00 93.72 72.16 93 72
sdb 16873.00 101.38 62.01 101 62
sdb 16960.00 98.99 76.84 98 76
sdb 2299.00 189.32 6.16 189 6
sdb 2285.00 195.82 0.00 195 0
sdb 2346.00 198.25 0.00 198 0
sdb 2325.00 198.91 0.00 198 0
sdb 2353.00 197.72 0.02 197 0
sdb 2320.00 198.82 0.00 198 0
sdb 2327.00 200.05 0.00 200 0
sdb 2340.00 198.35 0.00 198 0
sdb 2322.00 199.29 0.00 199 0
sdb 2316.00 197.43 0.01 197 0
sdb 690.00 51.17 0.00 51 0

With the help of Howard Chu, I have identified the problem.
The excessive reads were caused by OS read-ahead. By default, when we read a page, the OS prefetches the pages following it into memory. This is usually undesirable in a database, since we are mostly doing random reads, so read-ahead needs to be disabled to get optimal random-read performance.
For the benchmark, there is a command line option --readahead=0, which in turn sets the MDB_NORDAHEAD option for LMDB.
After disabling read-ahead, the instantaneous write rate at the end of the test grew from 1MB/s to 8MB/s, and the read and write volumes observed in iostat are now almost identical.
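For anyone using the LMDB C API directly rather than the benchmark driver, the same effect is obtained by passing MDB_NORDAHEAD to mdb_env_open. A minimal sketch; the path, map size, and error handling here are illustrative placeholders, not taken from the benchmark:

#include <lmdb.h>
#include <stdio.h>

int main(void)
{
    MDB_env *env;
    int rc = mdb_env_create(&env);
    if (rc) { fprintf(stderr, "mdb_env_create: %s\n", mdb_strerror(rc)); return 1; }

    /* Map size must be large enough for the whole data set (example: 16 GiB). */
    mdb_env_set_mapsize(env, (size_t)16 << 30);

    /* MDB_NORDAHEAD asks the OS not to read ahead on the memory map,
       which avoids the large speculative reads seen in the iostat trace
       above when the access pattern is random. */
    rc = mdb_env_open(env, "./testdb", MDB_NORDAHEAD, 0664);
    if (rc) { fprintf(stderr, "mdb_env_open: %s\n", mdb_strerror(rc)); return 1; }

    /* ... normal mdb_txn_begin / mdb_put / mdb_txn_commit work goes here ... */

    mdb_env_close(env);
    return 0;
}

Note that MDB_NORDAHEAD is only a hint: on operating systems that provide no way to turn readahead off, it has no effect.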

Related

KNN algorithm is giving good accuracy but bad confusion matrix results

I have a multilabel classification dataset and used a KNN model to classify it. There are 15 labels; I computed the accuracy for each label and averaged the results to get the model accuracy, which is 93%.
The confusion matrix, however, shows bad numbers.
Can you tell me what this means? Is it overfitting? How can I solve my problem?
Accuracy and mean absolute error (mae) code
Input:
from sklearn.metrics import mean_absolute_error  # needed for mean_absolute_error below

# Getting the accuracy of the model
y_pred1 = level_1_knn_model.predict(X_val1)
# Per-label accuracy: fraction of correct predictions for each of the 15 labels
accuracy = (sum(y_val1 == y_pred1) / y_val1.shape[0]) * 100
# Average the per-label accuracies into a single model accuracy
accuracy = sum(accuracy) / len(accuracy)
print("Accuracy: " + str(accuracy) + "%\n")
# Getting the mean absolute error
mae1 = mean_absolute_error(y_val1, y_pred1)
print("Mean Absolute Error: " + str(mae1))
Output:
Accuracy: [96.55462575 97.82146336 99.23207908 95.39247451 98.69340807 74.22793801
78.67975909 97.47825108 99.80189098 77.67264969 91.69399776 99.97084683
99.42621267 99.32682688 99.74159693]%
Accuracy: 93.71426804569977%
Mean Absolute Error: 9.703818402273944
Confusion Matrix and classification report code
Input:
# Imports for the metrics and plotting used below
from sklearn.metrics import confusion_matrix, classification_report
import matplotlib.pyplot as plt
import seaborn as sns

# Calculate the confusion matrix (using the most probable label per sample)
cMatrix1 = confusion_matrix(y_val1.argmax(axis=1), y_pred1.argmax(axis=1))
# Plot the confusion matrix
plt.figure(figsize=(11, 10))
sns.heatmap(cMatrix1, annot=True, fmt='g')
plt.show()
# Calculate the classification report
classReport1 = classification_report(y_val1, y_pred1)
print("\nClassification Report:")
print(classReport1)
Output:
Classification Report:
precision recall f1-score support
0 0.08 0.00 0.01 5053
1 0.03 0.00 0.01 3017
2 0.00 0.00 0.00 1159
3 0.07 0.00 0.01 6644
4 0.00 0.00 0.00 1971
5 0.58 0.65 0.61 47222
6 0.39 0.33 0.36 27302
7 0.02 0.00 0.00 3767
8 0.00 0.00 0.00 299
9 0.58 0.61 0.60 40823
10 0.13 0.02 0.03 11354
11 0.00 0.00 0.00 44
12 0.00 0.00 0.00 866
13 0.00 0.00 0.00 1016
14 0.00 0.00 0.00 390
micro avg 0.54 0.43 0.48 150927
macro avg 0.13 0.11 0.11 150927
weighted avg 0.43 0.43 0.42 150927
samples avg 0.43 0.43 0.43 150927

Appengine/ComputeEngine Memory Issue?

I suspect this is a straightforward memory leak (the python27 process has a memory leak with the App Engine libraries running in the Managed VMs GCE containers), but I'm confused about a few things in the data I collected during the OOM issues.
After running fine for most of a day, my "vmstat 1" output suddenly changed drastically:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
r b swpd free buff cache si so bi bo in cs us sy id wa
0 0 0 70116 7612 41240 0 0 0 64 231 645 3 2 95 0
0 0 0 70148 7612 41240 0 0 0 0 164 459 2 2 96 0
1 0 0 70200 7612 41240 0 0 0 0 209 712 2 1 97 0
1 0 0 65432 7612 41344 0 0 100 0 602 820 48 5 47 1
1 3 0 69840 5644 29620 0 0 1284 0 812 797 33 6 34 27
0 1 0 69068 5896 30216 0 0 852 68 362 1052 6 1 0 93
0 1 0 68340 6160 30536 0 0 556 0 547 1355 4 2 0 94
0 2 0 67928 6564 30972 0 0 872 0 793 2173 9 5 0 86
0 1 0 63988 6888 34416 0 0 3776 0 716 1940 3 3 0 94
3 0 0 63696 7104 34608 0 0 376 0 353 1006 4 4 34 58
0 0 0 63548 7112 34948 0 0 332 48 379 916 13 1 84 2
0 0 0 63636 7116 34948 0 0 4 0 184 637 0 1 99 0
0 0 0 63660 7116 34948 0 0 0 0 203 556 0 3 97 0
0 1 0 76100 3648 26128 0 0 460 0 409 1142 7 4 85 4
0 3 0 73452 948 15940 0 0 4144 80 1041 1126 53 6 10 31
0 6 0 73828 84 11424 0 0 32924 80 1135 1732 11 4 0 85
0 6 0 72684 64 12324 0 0 52168 4 1519 2397 6 3 0 91
0 11 0 67340 52 12328 0 0 78072 16 1388 2974 2 9 0 89
1 10 0 65992 336 13412 0 0 79796 0 1297 2973 0 9 0 91
0 15 0 69000 48 10396 0 0 78344 0 1203 2739 2 7 0 91
0 15 0 67168 52 11460 0 0 86864 0 1244 3003 0 6 0 94
1 15 0 71268 52 7836 0 0 82552 4 1497 3269 0 7 0 93
In particular, my memory cache and buff dropped and the I/O bytes-in surged, and it stayed like this for ~10 minutes before the machine died and was rebooted by Google Compute Engine. I assume "bi" represents bytes in from disk, but I'm curious why swpd showed 0 for this instance if there was swapping, and why the "free" memory stat is still unaffected if things are reaching a swapping point?
Second, at the time of the final crash, my top output showed:
top - 15:06:20 up 1 day, 13:23, 2 users, load average: 13.88, 11.22, 9.30
Tasks: 92 total, 3 running, 89 sleeping, 0 stopped, 0 zombie
%Cpu(s): 0.8 us, 8.0 sy, 0.0 ni, 0.0 id, 90.9 wa, 0.0 hi, 0.4 si, 0.0 st
KiB Mem: 1745136 total, 1684032 used, 61104 free, 648 buffers
KiB Swap: 0 total, 0 used, 0 free, 12236 cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
23 root 20 0 0 0 0 R 12.6 0.0 2:11.61 kswapd0
10 root rt 0 0 0 0 S 2.5 0.0 0:52.92 watchdog/0
2315 root 20 0 192m 17m 0 S 2.1 1.0 47:58.74 kubelet
2993 root 20 0 6116m 1.2g 0 S 0.9 70.2 318:41.51 python2.7
6644 root 20 0 55452 12m 0 S 0.9 0.7 0:00.81 python
2011 root 20 0 761m 9924 0 S 0.7 0.6 12:23.44 docker
6624 root 20 0 4176 132 0 D 0.5 0.0 0:00.24 du
140 root 0 -20 0 0 0 S 0.4 0.0 0:08.64 kworker/0:1H
2472 root 20 0 39680 5616 296 D 0.4 0.3 0:27.43 python
1 root 20 0 10656 132 0 S 0.2 0.0 0:02.61 init
3 root 20 0 0 0 0 S 0.2 0.0 2:02.17 ksoftirqd/0
22 root 20 0 0 0 0 R 0.2 0.0 0:24.61 kworker/0:1
1834 root 20 0 53116 756 0 S 0.2 0.0 0:01.79 rsyslogd
1859 root 20 0 52468 9624 0 D 0.2 0.6 0:29.36 supervisord
2559 root 20 0 349m 172m 0 S 0.2 10.1 25:56.31 ruby
Again, I see the python27 process has climbed to 70% of memory (which, combined with the 10% from Ruby, puts me into dangerous territory). But why is kswapd going crazy, taking ~10% of my CPU, when the vmstat above shows 0 swap?
Should I just not trust vmstat's swpd?

Plot this kind of graph from data of an array

Good afternoon,
I am working on a MATLAB project and I have stored some data in an array. I would like to produce a plot like the one shown below. However, I don't know which plotting function I need to use, and how, in order to obtain this kind of image (it won't be identical, but in this style).
My data is an 11x16 matrix.
Thank you guys so much beforehand!
@rayryeng
That was a really useful answer, although I didn't need that exact shape. I need the shape that my own data would create; I've been trying to modify the code you wrote in order to obtain what I need, but I haven't managed to get it...
My data is
data = [ 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 ;
8.00 8.02 8.04 8.07 8.12 8.20 8.30 8.42 8.53 8.63 8.72 8.80 8.86 8.91 8.96 9.00;
6.00 6.03 6.07 6.12 6.22 6.37 6.59 6.83 7.07 7.28 7.45 7.60 7.72 7.83 7.92 8.00;
4.00 4.03 4.07 4.14 4.26 4.48 4.85 5.26 5.63 5.95 6.21 6.43 6.61 6.75 6.88 7.00;
2.00 2.02 2.05 2.10 2.20 2.44 3.08 3.70 4.23 4.67 5.01 5.29 5.52 5.70 5.86 6.00;
0 0 0 0 0 0 1.33 2.24 2.93 3.47 3.88 4.21 4.46 4.67 4.84 5.00;
0 0 0 0 0 0 0 1.01 1.78 2.38 2.84 3.19 3.46 3.67 3.84 4.00;
0 0 0 0 0 0 0 0 0.80 1.43 1.91 2.25 2.51 2.70 2.86 3.00;
0 0 0 0 0 0 0 0 0 0.63 1.10 1.41 1.62 1.77 1.89 2.00;
0 0 0 0 0 0 0 0 0 0 0.44 0.66 0.79 0.88 0.94 1.00;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 ];
This is my matrix of data (sorry, I know it's long). When I try to plot it with:
[x,y] = meshgrid(1:16,1:11);
contourf(x,y,data,20,'LineStyle','none');
colorbar
The result has a different shape from what I need. I want the parts that are 0 (zeros) to appear like the white region of the plot I showed before (with a different shape, of course). I don't really know how to do it (my data should be read properly); if you could help me I would be really thankful.
Thank you so much for your last answer.
It depends on your data; I believe you should use contourf.
This is as close as I could get:
[x,y] = meshgrid(1:16,1:11);
data = - y;
data(end,5:10) = NaN;
data(end-1,6:9) = NaN;
data(end-2,7:8) = NaN;
contourf(x,y,data,20,'LineStyle','none');
colorbar
with,
data = - y .* abs(log(sin(.10 * x - 5.5)+.5));
data(data < -4) = NaN;
So I suppose the code is right; it's a matter of your data,
with data = max(data(:)) - data;
What you have is almost correct. All you need to do is set any data that is 0 to NaN. That way, when you throw it into contourf, those parts are not visualized. As such:
data = [10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 ;
8.00 8.02 8.04 8.07 8.12 8.20 8.30 8.42 8.53 8.63 8.72 8.80 8.86 8.91 8.96 9.00;
6.00 6.03 6.07 6.12 6.22 6.37 6.59 6.83 7.07 7.28 7.45 7.60 7.72 7.83 7.92 8.00;
4.00 4.03 4.07 4.14 4.26 4.48 4.85 5.26 5.63 5.95 6.21 6.43 6.61 6.75 6.88 7.00;
2.00 2.02 2.05 2.10 2.20 2.44 3.08 3.70 4.23 4.67 5.01 5.29 5.52 5.70 5.86 6.00;
0 0 0 0 0 0 1.33 2.24 2.93 3.47 3.88 4.21 4.46 4.67 4.84 5.00;
0 0 0 0 0 0 0 1.01 1.78 2.38 2.84 3.19 3.46 3.67 3.84 4.00;
0 0 0 0 0 0 0 0 0.80 1.43 1.91 2.25 2.51 2.70 2.86 3.00;
0 0 0 0 0 0 0 0 0 0.63 1.10 1.41 1.62 1.77 1.89 2.00;
0 0 0 0 0 0 0 0 0 0 0.44 0.66 0.79 0.88 0.94 1.00;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0];
data(data == 0) = NaN;
[x,y] = meshgrid(1:16,1:11);
contourf(x,y,data,20,'LineStyle','none');
colorbar
This is what I get:
Given your comments, you want the y-axis to be reversed. Simply put axis ij; at the end of the code above to flip the y-axis so that y-down is the positive direction. If you do that, we get this figure:
Credit should go to Kamtal as he figured out where you needed to start. I just helped finish off the requirement.

ncurses - mvwprintw generates whitespace

I have an ncurses project where I use mvwprintw to print a long string to a window.
mvwprintw(traceview_window_flatprofile, 1, 0, "%s", flatprofile_as_str());
the result looks like this:
% self children self children
time time time calls /call /call name
39.86 886 µs 0 ns 32 27697 ns 0 ns addr_translate [13]
25.69 571 µs 1454 µs 1 571 µs 1454 µs main [0]
7.02 156 µs 0 ns 1 156 µs 0 ns addr_fini [66]
6.28 139 µs 55006 ns 1 139 µs 55006 ns addr_init [2]
3.83 85094 ns 21956 ns 2 42547 ns 10978 ns flatprofile_snprintf [43]
2.08 46150 ns 0 ns 1 46150 ns 0 ns addr_read_symbol_table [3]
When I print the same string to stderr, using
fprintf(stderr, "%s\n", flatprofile_as_str());
the result looks like:
% self children self children
time time time calls /call /call name
39.86 886 µs 0 ns 32 27697 ns 0 ns addr_translate [13]
25.69 571 µs 1454 µs 1 571 µs 1454 µs main [0]
7.02 156 µs 0 ns 1 156 µs 0 ns addr_fini [66]
6.28 139 µs 55006 ns 1 139 µs 55006 ns addr_init [2]
3.83 85094 ns 21956 ns 2 42547 ns 10978 ns flatprofile_snprintf [43]
2.08 46150 ns 0 ns 1 46150 ns 0 ns addr_read_symbol_table [3]
Do you know what could cause this difference?
EDIT: in addition to the answer below, the following question solves a related issue.
How to make ncurses display UTF-8 chars correctly in C?
The difference seems to be caused by the special character µ. I am not quite sure how you can fix it, but you will probably have to adjust your flatprofile_as_str() function.
I remember having a similar problem with special UTF-8 characters, and I solved it by using this function to count not the bytes but the actual length of a string:
/* Counts characters (code points) in a UTF-8 string rather than bytes,
   by skipping continuation bytes (those of the form 10xxxxxx). */
int strlen_utf8(char *s) {
    int i = 0, j = 0;
    while (s[i]) {
        if ((s[i] & 0xc0) != 0x80) j++;  /* count only lead bytes */
        i++;
    }
    return j;
}
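For completeness, the fix discussed in the linked question usually comes down to initializing the locale before starting ncurses and linking against the wide-character build of the library. A minimal sketch, assuming a UTF-8 locale and ncursesw being installed; the window setup and the sample string are illustrations only, not the asker's code:

#include <locale.h>
#include <ncurses.h>

int main(void)
{
    /* Must run before initscr() so ncurses treats multibyte UTF-8
       sequences such as µ as single on-screen characters instead of
       several raw bytes, which can throw the column alignment off. */
    setlocale(LC_ALL, "");

    initscr();
    mvwprintw(stdscr, 1, 0, "%s", "25.69  571 µs  1454 µs ...");
    refresh();
    getch();
    endwin();
    return 0;
}

Compile with something like cc demo.c -lncursesw so the wide-character variant of the library is linked.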

Confusing Apache server-status

I have CentOS 5.5 Final with Apache 2.2.3. I was checking for possible misconfigurations or similar, so I set up server-status, which reports the following result when I open only the server-status page:
Apache Server Status for 192.168.3.23
Server Version: Apache/2.2.3 (CentOS)
Server Built: Aug 30 2010 12:32:08
Current Time: Tuesday, 07-Aug-2012 10:10:23 CEST
Restart Time: Tuesday, 07-Aug-2012 10:04:40 CEST
Parent Server Generation: 0
Server uptime: 5 minutes 42 seconds
Total accesses: 5 - Total Traffic: 19 kB
CPU Usage: u0 s0 cu0 cs0
.0146 requests/sec - 56 B/second - 3891 B/request
1 requests currently being processed, 7 idle workers
_____W__........................................................
................................................................
................................................................
................................................................
Scoreboard Key:
"_" Waiting for Connection, "S" Starting up, "R" Reading Request,
"W" Sending Reply, "K" Keepalive (read), "D" DNS Lookup,
"C" Closing connection, "L" Logging, "G" Gracefully finishing,
"I" Idle cleanup of worker, "." Open slot with no current process
Srv PID Acc M CPU SS Req Conn Child Slot Client VHost Request
0-0 28511 0/1/1 _ 0.00 248 0 0.0 0.00 0.00 X.X.X.X 127.0.0.1 GET /server-status HTTP/1.1
1-0 28512 0/1/1 _ 0.00 238 0 0.0 0.00 0.00 X.X.X.X 127.0.0.1 GET /server-status HTTP/1.1
2-0 28513 0/1/1 _ 0.00 225 0 0.0 0.00 0.00 X.X.X.X 127.0.0.1 GET /server-status HTTP/1.1
3-0 28515 0/1/1 _ 0.00 218 0 0.0 0.00 0.00 X.X.X.X 127.0.0.1 GET /server-status HTTP/1.1
4-0 28516 0/1/1 _ 0.00 7 0 0.0 0.00 0.00 X.X.X.X 127.0.0.1 GET /server-status HTTP/1.1
5-0 28517 0/0/0 W 0.00 0 0 0.0 0.00 0.00 X.X.X.X 127.0.0.1 GET /server-status HTTP/1.1
Since I am the only one looking at that page, shouldn't the result show only one row, even if I refresh the page with CTRL+F5 in the browser? If so, do you think there could be some misconfiguration?
