tfjs converter - dtype string24 not supported - tensorflow.js

I have been stuck for several days converting a saved model:
$ tensorflowjs_converter --input_format=tf_saved_model --output_node_names='detection_boxes,detection_classes,detection_scores,num_detections' --skip_op_check=SKIP_OP_CHECK ./saved_model ./web_model
Using TensorFlow backend.
2019-02-20 08:49:48.827375: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
2019-02-20 08:49:52.385385: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:501] Optimization results for grappler item: graph_to_optimize
2019-02-20 08:49:52.385410: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] debug_stripper: Graph size after: 2419 nodes (0), 2842 edges (0), time = 15.828ms.
2019-02-20 08:49:52.385416: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] model_pruner: Graph size after: 2212 nodes (-207), 2635 edges (-207), time = 39.354ms.
2019-02-20 08:49:52.385421: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] constant folding: Graph size after: 1039 nodes (-1173), 1273 edges (-1362), time = 249.663ms.
2019-02-20 08:49:52.385425: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] arithmetic_optimizer: Graph size after: 782 nodes (-257), 1239 edges (-34), time = 108.649ms.
2019-02-20 08:49:52.385429: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] dependency_optimizer: Graph size after: 729 nodes (-53), 1139 edges (-100), time = 19.787ms.
2019-02-20 08:49:52.385495: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] model_pruner: Graph size after: 729 nodes (0), 1139 edges (0), time = 8.716ms.
2019-02-20 08:49:52.385506: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] remapper: Graph size after: 974 nodes (245), 1419 edges (280), time = 50.844ms.
2019-02-20 08:49:52.385512: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] constant folding: Graph size after: 647 nodes (-327), 1056 edges (-363), time = 237.244ms.
2019-02-20 08:49:52.385516: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] arithmetic_optimizer: Graph size after: 647 nodes (0), 1056 edges (0), time = 73.777ms.
2019-02-20 08:49:52.385521: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] dependency_optimizer: Graph size after: 647 nodes (0), 1056 edges (0), time = 16.117ms.
2019-02-20 08:49:52.385525: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] debug_stripper: Graph size after: 647 nodes (0), 1056 edges (0), time = 5.789ms.
2019-02-20 08:49:52.385530: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] model_pruner: Graph size after: 647 nodes (0), 1056 edges (0), time = 7.263ms.
2019-02-20 08:49:52.385534: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] constant folding: Graph size after: 647 nodes (0), 1056 edges (0), time = 74.943ms.
2019-02-20 08:49:52.385538: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] arithmetic_optimizer: Graph size after: 647 nodes (0), 1056 edges (0), time = 78.973ms.
2019-02-20 08:49:52.385543: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] dependency_optimizer: Graph size after: 647 nodes (0), 1056 edges (0), time = 17.675ms.
2019-02-20 08:49:52.385547: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] model_pruner: Graph size after: 647 nodes (0), 1056 edges (0), time = 6.868ms.
2019-02-20 08:49:52.385551: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] remapper: Graph size after: 647 nodes (0), 1056 edges (0), time = 7.687ms.
2019-02-20 08:49:52.385556: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] constant folding: Graph size after: 647 nodes (0), 1056 edges (0), time = 72.413ms.
2019-02-20 08:49:52.385561: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] arithmetic_optimizer: Graph size after: 647 nodes (0), 1056 edges (0), time = 78.796ms.
2019-02-20 08:49:52.385565: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:503] dependency_optimizer: Graph size after: 647 nodes (0), 1056 edges (0), time = 16.332ms.
Writing weight file ./web_model/tensorflowjs_model.pb...
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/2.7/bin/tensorflowjs_converter", line 10, in <module> sys.exit(main())
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflowjs/converters/converter.py", line 322, in main strip_debug_ops=FLAGS.strip_debug_ops)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 294, in convert_tf_saved_model skip_op_check=skip_op_check, strip_debug_ops=strip_debug_ops)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 138, in optimize_graph extract_weights(optimized_graph, output_graph, quantization_dtype)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflowjs/converters/tf_saved_model_conversion_pb.py", line 182, in extract_weights[ const_manifest], path, quantization_dtype=quantization_dtype)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflowjs/write_weights.py", line 119, in write_weights group_bytes, total_bytes, _ = _stack_group_bytes(group)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflowjs/write_weights.py", line 196, in _stack_group_bytes _assert_valid_weight_entry(entry)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflowjs/write_weights.py", line 305, in _assert_valid_weight_entry data.dtype.name + ' not supported.')
ValueError: Error dumping weight Equal/y, dtype string24 not supported.
(tensorflowjs 0.8.0)
Could someone advise me on where or what to search to avoid this error?
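For reference, a small diagnostic sketch (not a fix), assuming a TensorFlow 1.x install matching the Python 2.7 / tensorflowjs 0.8.0 toolchain above and the ./saved_model directory from the command line: it lists the Const nodes whose dtype is a string, i.e. the kind of weight (like Equal/y in the traceback) that write_weights.py refuses to dump, so you can see which ops would need to be pruned or excluded before converting.

# Diagnostic only: list string-typed constants in the SavedModel graph.
import tensorflow as tf

with tf.Session(graph=tf.Graph()) as sess:
    tf.saved_model.loader.load(sess, [tf.saved_model.tag_constants.SERVING], './saved_model')
    string_enum = tf.string.as_datatype_enum
    for node in sess.graph.as_graph_def().node:
        if node.op == 'Const' and node.attr['dtype'].type == string_enum:
            print(node.name)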

Related

How to fix tapered eval taking more nodes

I've just implemented tapered eval, but I'm not sure I've done it correctly because it seems to ruin the move ordering.
I'm using this FEN for testing: r3k2r/p1ppqpb1/bn2pnp1/3PN3/1p2P3/2N2Q1p/PPPBBPPP/R3K2R w KQkq - 0 1
This is the search info with tapered eval:
info depth 1 score cp 18.000000 nodes 65 time 116 pv e2a6
info depth 2 score cp 18.000000 nodes 165 time 402 pv e2a6 b4c3
info depth 3 score cp 18.000000 nodes 457 time 568 pv e2a6 b4c3 d2c3
info depth 4 score cp 18.000000 nodes 3833 time 1108 pv e2a6 b4c3 d2c3 h3g2
info depth 5 score cp 17.000000 nodes 12212 time 1875 pv e2a6 e6d5 c3d5
info depth 6 score cp 17.000000 nodes 77350 time 4348 pv e2a6 e6d5 c3d5
bestmove e2a6 ponder e6d5
And without tapered eval:
info depth 1 score cp 19.000000 nodes 75 time 66 pv e2a6
info depth 2 score cp 19.000000 nodes 175 time 182 pv e2a6 b4c3
info depth 3 score cp 19.000000 nodes 398 time 371 pv e2a6 b4c3 d2c3
info depth 4 score cp 19.000000 nodes 3650 time 947 pv e2a6 b4c3 d2c3 h3g2
info depth 5 score cp 18.000000 nodes 10995 time 1849 pv e2a6 e6d5 c3d5
info depth 6 score cp 18.000000 nodes 75881 time 4334 pv e2a6 e6d5 c3d5
bestmove e2a6 ponder e6d5
You can see that the search without tapered eval actually uses fewer nodes than the one with it. I'm just wondering whether this is expected, or whether I implemented it wrong.
My phase function:
int totalPhase = pawnPhase * 16 + knightPhase * 4 + bishopPhase * 4 + rookPhase * 4 + queenPhase * 2;
int phase = totalPhase;
for (each piece in node) {
    if (piece is pawn)         phase -= pawnPhase;
    else if (piece is knight)  phase -= knightPhase;
    ...
}
return (phase * 256 + (totalPhase / 2)) / totalPhase;
And then I added the interpolation in the eval function:
for (each piece in node) {
    ... score material weights and positional scores, etc.
}
evaluation = ((mgEvaluation * (256 - phase)) + (egEvaluation * phase)) / 256;
I got the formula from this site: Tapered Eval
If this is actually necessary, can someone give me tips to optimize this?
Tapered eval is very useful and worth keeping, since the way you play in the opening/middlegame is very different from the endgame. You don't mention how you sort your moves, but since tapered eval gives you different numbers from the piece-square tables (PST) in a middlegame position, it is only natural that the move ordering will be slightly different than before. The results you are getting are pretty close to each other and seem plausible.
Test the starting position with tapered eval and check that it gives the same result as a plain eval using only the opening PST. Do the same with an endgame position and only the endgame PST, which should also give the same result.
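A minimal sketch of that sanity check in Python (illustrative only; the phase weights pawn 0, knight/bishop 1, rook 2, queen 4 are the usual values from the Tapered Eval page and are assumed here, as are the example mg/eg scores):

# Phase computation and interpolation as written in the question, plus the two checks above.
PAWN_PHASE, KNIGHT_PHASE, BISHOP_PHASE, ROOK_PHASE, QUEEN_PHASE = 0, 1, 1, 2, 4
PHASE_WEIGHTS = {'p': PAWN_PHASE, 'n': KNIGHT_PHASE, 'b': BISHOP_PHASE,
                 'r': ROOK_PHASE, 'q': QUEEN_PHASE}
TOTAL_PHASE = (PAWN_PHASE * 16 + KNIGHT_PHASE * 4 + BISHOP_PHASE * 4
               + ROOK_PHASE * 4 + QUEEN_PHASE * 2)

def game_phase(piece_counts):
    """piece_counts: counts for both sides combined, e.g. {'p': 16, 'n': 4, ...}."""
    phase = TOTAL_PHASE
    for piece, count in piece_counts.items():
        phase -= PHASE_WEIGHTS[piece] * count
    return (phase * 256 + TOTAL_PHASE // 2) // TOTAL_PHASE  # 0 = opening, 256 = endgame

def tapered(mg_eval, eg_eval, phase):
    return (mg_eval * (256 - phase) + eg_eval * phase) // 256

# Full material must give the pure middlegame eval; bare kings the pure endgame eval.
start = {'p': 16, 'n': 4, 'b': 4, 'r': 4, 'q': 2}
bare_kings = {'p': 0, 'n': 0, 'b': 0, 'r': 0, 'q': 0}
assert tapered(30, -70, game_phase(start)) == 30
assert tapered(30, -70, game_phase(bare_kings)) == -70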

How can I perform a matrix interpolation from a linearly spaced axis to a logarithmically spaced axis?

Does anyone know how I can interpolate an energy spectrum matrix that is linearly spaced onto a grid where one of the axes is logarithmically spaced instead of linearly spaced?
The size of my energy spectrum matrix is 64x165. The original x axis represents the energy variation in terms of directions and the original y axis represents the energy variation in terms of frequencies. Both vectors are linearly spaced (the same interval between each vector position). I want to interpolate this matrix to a 24x25 format where the x axis (directions) remains linearly spaced (now a vector with 24 positions instead of 64) but the y axis (frequency) is no longer linearly spaced; it is a vector with different intervals between positions (the interval between position 2 and position 1 is smaller than the interval between position 3 and position 2, and so on up to position 25).
It is important to point out that all vectors (including the new logarithmically spaced frequency vector) are known (I don't want to generate them).
I tried the functions interp2 and griddata. Both returned the same result, but that result is completely different from the original spectrum (which I would not expect, since I only did an interpolation). Can anyone help? I'm using MATLAB 2011 for Windows.
Small example:
freq_input=[0.038592 0.042451 0.046311 0.05017 0.054029 0.057888 0.061747 0.065607 0.069466 0.073325]; %Linearly spaced
dir_input=[0 45 90 135 180 225 270 315]; %Linearly spaced
matrix_input=[0.004 0.006 1.31E-06 0.011 0.032 0.0007 0.010 0.013 0.001 0.008
0.007 0.0147 3.95E-05 0.023 0.142 0.003 0.022 0.022 0.003 0.017
0.0122 0.0312 0.0012 0.0351 0.285 0.024 0.048 0.036 0.015 0.036
0.0154 0.0530 0.0185 0.0381 0.242 0.102 0.089 0.058 0.060 0.075
0.0148 0.0661 0.1209 0.0345 0.095 0.219 0.132 0.087 0.188 0.140
0.0111 0.0618 0.2232 0.0382 0.027 0.233 0.156 0.119 0.370 0.187
0.0069 0.0470 0.1547 0.0534 0.010 0.157 0.154 0.147 0.436 0.168
0.0041 0.0334 0.0627 0.0646 0.009 0.096 0.136 0.163 0.313 0.112]; %8 rows (directions) and 10 columns (frequencies)
freq_output=[0.412E-01 0.453E-01 0.498E-01 0.548E-01 0.603E-01]; %Logarithmically spaced
dir_output=[0 45 90 135 180 225 270 315]; %The same as dir_input
After doing a meshgrid with the freq_input and dir_input vectors, and another meshgrid using freq_output and dir_output, I tried interp2(freq_input,dir_input,matrix_input,freq_output,dir_output) and griddata(freq_input,dir_input,matrix_input,freq_output,dir_output), and the results seem wrong.
The course of action you described should work fine, so it's possible that you misinterpreted your results after interpolation when you said "the result seems wrong".
Here's what I mean, assuming your dummy data from the question:
% interpolate using griddata
matrix_output = griddata(freq_input,dir_input,matrix_input,freq_output.',dir_output);
% need 2d arrays later for scatter plotting the result
[freq_2d,dir_2d] = meshgrid(freq_output,dir_output);
figure;
% plot the original data
surf(freq_input,dir_input,matrix_input);
hold on;
scatter3(freq_2d(:),dir_2d(:),matrix_output(:),'rs');
The result shows the surface plot (based on the original input data) with red squares superimposed on it: the interpolated values.
You can see that the linearly interpolated data values follow the bilinear surface drawn by surf perfectly (rotating the figure around in 3d makes this even more obvious). In other words, the interpolation and subsequent plotting is fine.

An Algorithm Comparing Peaks: are they in phase or not?

I am developing an algorithm for comparing two lists of numbers. The lists represent peaks discovered in a signal using a robust peak detection method. I wish to come up with some way of determining whether the peaks are in phase, out of phase, or neither (cannot be determined). For example:
These arrays would be considered in phase:
[ 94 185 278 373 469], [ 89 180 277 369 466]
But these arrays would be out of phase:
[51 146 242 349], [99 200 304 401]
There is no requirement that the arrays be the same length. I have looked into measuring periodicity; however, in this case I can assume the signal is already periodic.
Another idea I had was to divide all the array elements by their index (or their index+1) to see if they cluster around one or two points, but this is not robust and fails if a single peak is missing.
What approaches might be useful in solving this problem?
One approach would be to find the median distance from each peak in the first list to a peak in the second list.
If you divide this distance by the median distance between peaks in the first list, you will get a fraction where 0 means in phase, and 0.5 means out of phase.
For example:
[ 94 185 278 373 469], [ 89 180 277 369 466]
94->89 = 5
185->180 = 5
278->277 = 1
373->369 = 4
469->466 = 3
Score = median(5,5,1,4,3) / median distance between peaks
= 4 / 94 ≈ 4.3% => in phase
[51 146 242 349], [99 200 304 401]
51->99 = 48
146->99 = 47
242->200 = 42
349->304 = 45
Score = median(48,47,42,45) / median distance between peaks
= 46 / 96
≈ 48% => out of phase
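A short Python sketch of this scoring (my reading of the approach above, not the answerer's exact code); a score near 0 means in phase and near 0.5 means out of phase:

from statistics import median

def phase_score(peaks_a, peaks_b):
    # distance from each peak in the first list to its nearest peak in the second list
    nearest = [min(abs(a - b) for b in peaks_b) for a in peaks_a]
    # typical spacing between consecutive peaks in the first list
    spacing = median(b - a for a, b in zip(peaks_a, peaks_a[1:]))
    return median(nearest) / spacing

print(phase_score([94, 185, 278, 373, 469], [89, 180, 277, 369, 466]))  # ~0.04 -> in phase
print(phase_score([51, 146, 242, 349], [99, 200, 304, 401]))            # ~0.48 -> out of phase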
I would enter the peak locations, used as index locations, into a much larger array (ideally with a length close to an integer multiple of the period of your peaks), and then apply either a complex Goertzel filter (if you know the frequency) or a DFT/FFT (if you don't) to that array. Then use atan2() on the complex result (at the peak-magnitude frequency, for the FFT) to measure the phase relative to the start of the array, and compare the unwrapped phases using some difference threshold.
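A rough NumPy sketch of that idea (my own illustration, with an assumed array length of 1024 and only a simple wrapped phase difference rather than full unwrapping):

import numpy as np

def peak_phase(peaks, length=1024):
    signal = np.zeros(length)
    signal[np.asarray(peaks)] = 1.0          # unit impulses at the peak indices
    spectrum = np.fft.rfft(signal)
    k = np.argmax(np.abs(spectrum[1:])) + 1  # dominant bin, skipping DC
    return k, np.angle(spectrum[k])          # bin index and its phase (atan2 of the bin)

ka, pa = peak_phase([94, 185, 278, 373, 469])
kb, pb = peak_phase([89, 180, 277, 369, 466])
diff = np.angle(np.exp(1j * (pa - pb)))      # phase difference wrapped to (-pi, pi]
print(ka, kb, diff)                          # same bin and a small diff suggests "in phase"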

Represent graph in table and given visited nodes which nodes to visit next

Say I have a graph with N nodes 111, 222, ..., nnn, and the graph is represented in the following table, for example:
NodeID | PredecessorID
222 111
333 111
555 222
555 333
and so on.
Given a list of M nodes that have been visited, how can I find all the nodes that can be visited next?
A node can be visited next if all of its predecessors have already been visited.
If your list M contains all visited nodes and not only a subset of them, you can do it like this:
foreach n in N:
    if n is not marked as visited:
        visitable = True
        foreach predecessor pn of n:
            visitable = visitable and (pn is marked as visited)
        if visitable:
            add n to the visitable nodes
In the worst case the number of predecessors of n is N (complete graph), so the runtime complexity of this is O(N^2).
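A compact Python sketch of the same idea, assuming the table is given as (NodeID, PredecessorID) pairs; note that root nodes such as 111, which never appear as a NodeID, are not reported by this function:

def next_visitable(edges, visited):
    """edges: iterable of (node_id, predecessor_id) pairs; visited: set of visited node ids.
    Returns the unvisited nodes whose predecessors have all been visited."""
    predecessors = {}
    for node, pred in edges:
        predecessors.setdefault(node, set()).add(pred)
    return {node for node, preds in predecessors.items()
            if node not in visited and preds <= visited}

edges = [(222, 111), (333, 111), (555, 222), (555, 333)]
print(next_visitable(edges, {111}))       # {222, 333}
print(next_visitable(edges, {111, 222}))  # {333}  (555 still waits on 333)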

How to pick the right blocks in array matlab?

I have some code written in MATLAB. I have a 5x5 matrix A, where entries (1,1), (2,2), (3,3), (4,4), and (5,5) = 1. I set those entries to 1 only as a border to separate the upper-right region and the lower-left region of the matrix. My question is: how do I pick values only from the upper-right region where A > threshold, without the border values of 1?
This is the example, where the threshold is 0.43, the yellow cells are the border value 1, and the green cells are the result that I want.
I have already tried implementing this, but I still can't pick the right entries. Please help, thank you.
NB:
Threshold = 0.43
A = [1 0.03 0.45 0.25 0.046; 0.03 1 0.32 0.11 0.36; 0.45 0.32 1 0.68 0.42; 0.25 0.11 0.68 1 0.55; 0.046 0.36 0.42 0.55 1]
A(triu(A,1)>Threshold)
Compare the upper-triangular part of the matrix (triu(A,1), which zeroes the diagonal and below) with the threshold to get logical indices, then use those indices to pull the values from the original matrix A.
