Getting faces to merge while merging shapes in OpenCASCADE

I would like to merge two cuboids such that their common faces also get merged. Currently, I am not able to get those common faces merged with the code below:
const TopoDS_Shape b1 = BRepPrimAPI_MakeBox(10, 10, 20);
const TopoDS_Shape b2 = BRepPrimAPI_MakeBox(gp_Pnt(5, 0, 0), 30, 30, 30);
const TopoDS_Shape fused = BRepAlgoAPI_Fuse(b1, b2);
How can I do this?

It is not clear from your question what you want to obtain. The result of the fuse operation is one shell that contains faces from both arguments, some of which are splits of the original faces. The result looks like the picture below:
Not simplified fuse result
Maybe you want the result faces that happen to lie on the same plane to be merged? That would look like the following picture:
Simplified fuse result
In that case, all you need to do is simplify the result. For this, call the method SimplifyResult before retrieving the result:
const TopoDS_Shape b1 = BRepPrimAPI_MakeBox(10, 10, 20);
const TopoDS_Shape b2 = BRepPrimAPI_MakeBox(gp_Pnt(5, 0, 0), 30, 30, 30);
TopoDS_Shape fused;
BRepAlgoAPI_Fuse aFuse(b1, b2);
if (aFuse.IsDone())
{
    aFuse.SimplifyResult();
    fused = aFuse.Shape();
}

Note that there is a difference between BRepAlgoAPI_Fuse (the current Boolean operations API) and the older BRepAlgo_Fuse.
See this test

Related

numpy array difference to the largest value in another large array which is less than the original array

numpy experts,
I'm using numpy. I want to compare two arrays: for each element of one array, find the largest value in the other array that does not exceed it, and calculate the difference between them.
For example,
A = np.array([3, 5, 7, 12, 13, 18])
B = np.array([4, 7, 17, 20])
In this case, I want [1, 0, 4, 2] (4-3, 7-7, 17-13, 20-18).
The problem is that the A and B arrays are so large that it would take a very long time to do this by simple means. I could try to split them into chunks of some size, but I wonder if there is a simple numpy function to solve this problem.
Or can I use numba?
For your information, this is my current, very naive code:
delta = np.zeros_like(B)
for i in range(len(B)):
    index_A = (A <= B[i]).argmin() - 1
    delta[i] = B[i] - A[index_A]
I agree with @tarlen555 that the problem is mostly related to the for-loop. I guess this one is already much faster:
diff = B-A[:,np.newaxis]
diff[diff<0] = max(A.max(), B.max())
diff.min(axis=0)
In the second line, I wanted to fill all negative entries with something ridiculously large. Since your numbers are integers, np.inf doesn't work, but something like it would be more elegant.
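For illustration (this is not part of the original answer), here is a self-contained sketch of the same broadcasting idea applied to the arrays from the question; it masks the negative entries with the integer dtype's maximum value via np.where instead of max(A.max(), B.max()):
import numpy as np
A = np.array([3, 5, 7, 12, 13, 18])
B = np.array([4, 7, 17, 20])
# diff[i, j] = B[j] - A[i]; negative entries mean A[i] > B[j] and must be ignored
diff = B - A[:, np.newaxis]
diff = np.where(diff < 0, np.iinfo(diff.dtype).max, diff)
print(diff.min(axis=0))  # prints [1 0 4 2]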
EDIT:
Another way:
from scipy.spatial import cKDTree
tree = cKDTree(A.reshape(-1, 1))
k = 2
large_value = max(A.max(), B.max())
while True:
    indices = tree.query(B.reshape(-1, 1), k=k)[1]
    diff = B[:, np.newaxis] - A[indices]
    if np.all(diff.max(axis=-1) >= 0):
        break
    k += 1
diff[diff<0] = large_value
diff.min(axis=1)
This solution could be more memory-efficient but frankly I'm not sure how much more.
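As a side note that is not part of the original answer: when A is sorted (as in the example), numpy's searchsorted finds, for each element of B, the index of the largest element of A that does not exceed it, which avoids both the Python loop and the large intermediate matrix. A minimal sketch, assuming every element of B is at least A.min():
import numpy as np
A = np.array([3, 5, 7, 12, 13, 18])  # assumed sorted ascending
B = np.array([4, 7, 17, 20])
# For each b in B, index of the rightmost a in A with a <= b
idx = np.searchsorted(A, B, side='right') - 1
print(B - A[idx])  # prints [1 0 4 2]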

How to set a subarray in rust

Still new to Rust, so sorry if this is a bit of a basic question, but I can't find any good resources. I'm writing some IPC code, and I have the following snippet:
// Header format | "magic string" | msg len | msg type id |
let mut header: [u8; 14] = [105, 51, 45, 105, 112, 99, 0, 0, 0, 0, 0, 0, 0, 0];
I want to set header[10..14] to the native-byte-order encoding of a 32-bit message type. I also want to set header[6..10] to the message length (again a native-byte-order 32-bit int).
So far I have tried:
Using slices: (cannot find a form that compiles)
header[10 .. 14] = msg_type.to_ne_bytes();
Some weird array design (cannot find a form that compiles)
[header[10], header[11], header[12], header[13]] = msg_type.to_ne_bytes();
Saving the result and writing it over element by element. This works, but seems inelegant; if this is the right answer, I understand, it just feels wrong.
let res = msg_type.to_ne_bytes();
header[10] = res[0];
header[11] = res[1];
header[12] = res[2];
header[13] = res[3];
If there is also some way to do this with the array creation, I am happy to look into it. Thanks in advance!
P.S. this is a client for swaywm's ipc messaging if anyone is curious.
Using slices is the way to go. It's an array, so you get all the slice methods on the array as well.
We take your example header[10 .. 14] = msg_type.to_ne_bytes(); and turn it into this, which works:
header[10..14].copy_from_slice(&msg_type.to_ne_bytes());
Note that in a.copy_from_slice(b) you need to ensure that a and b have the same length, otherwise the method call panics.
(An alternative but very similar way would be to use the crate byteorder.)

Creating a list of many ndarrays (different size) in python

I am new to Python. Is there any structure in Python 2.7, similar to MATLAB's multidimensional structure arrays, that handles many ndarrays in a list? For instance, I have 15 of these layers (i.e. layer_X, X = 1..15) with different sizes, but all are 4-D:
>>>type(layer_1)
<type 'numpy.ndarray'>
>>> np.shape(layer_1)
(1, 1, 32, 64)
>>> np.shape(layer_12)
(1, 1, 512, 1024)
How do I create a structure that holds these ndarrays together with their position X?
You can use a dictionary:
layer_dict = {}
for X in range(1,16):
    layer_dict['layer_' + str(X)] = np.ndarray(shape=(1, 1, 32, 64))
This allows you to store arrays of various sizes (and any other data types, for that matter) and to add and remove entries. It also lets you access your arrays efficiently.
To add a layer type:
layer_dict['layer_16'] = np.ndarray(shape=(1, 1, 512, 1024))
To delete one:
del layer_dict['layer_3']
Note that the items are not stored in order, but that does not prevent efficient in-order processing with approaches similar to the one in the initial construction loop. If you want an ordered dictionary, you can use OrderedDict from the collections module.
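For instance, a minimal sketch of the OrderedDict variant (the shape is just the illustrative one used above):
from collections import OrderedDict
import numpy as np
layer_dict = OrderedDict()
for X in range(1, 16):
    layer_dict['layer_' + str(X)] = np.ndarray(shape=(1, 1, 32, 64))
# Iteration now follows insertion order: layer_1, layer_2, ..., layer_15
for name in layer_dict:
    print name, layer_dict[name].shape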
If there is any particular rule for choosing the size of each layer, update your question and I will edit my answer.
This is an example of sequential usage:
for X in range(1,16):
    temp = layer_dict['layer_' + str(X)]
    print type(temp)
The type of temp is an ndarray that you can use as any other ndarray.
A more detailed usage example:
for X in range(1,16):
    temp = layer_dict['layer_' + str(X)]
    temp[0, 0, 2, 0] = 1
    layer_dict['layer_' + str(X)] = temp
Here each layer is fetched into temp, modified, and then reassigned to layer_dict. (Since ndarrays are mutable, the modification through temp already affects the stored array; the reassignment just makes the update explicit.)
You can just use a list:
layers = [layer_1, layer_12]
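If the position X is all you need, a plain list indexed by X - 1 also works; here is a minimal sketch with made-up shapes (you would put your real layer_X arrays in place of the zero arrays):
import numpy as np
# hypothetical shapes, purely for illustration
shapes = [(1, 1, 32, 64)] * 11 + [(1, 1, 512, 1024)] * 4
layers = [np.zeros(shape) for shape in shapes]  # layers[X - 1] holds layer_X
print layers[0].shape   # layer_1  -> (1, 1, 32, 64)
print layers[11].shape  # layer_12 -> (1, 1, 512, 1024)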

Element by Element Comparison of Multiple Arrays in MATLAB

I have multiple input arrays and I want to generate one output array where the value is 0 if all elements in a column are the same and 1 if any of them differ.
For example, if there are three arrays :
A = [28, 28, 43, 43]
B = [28, 43, 43, 28]
C = [28, 28, 43, 43]
Output = [0, 1, 0, 1]
The arrays can be of any size and there can be any number of them, but all the arrays are the same size as each other.
A non-loopy way is to use diff and any to advantage:
A = [28, 28, 43,43];
B = [28, 43, 43,28];
C = [28, 28, 43,43];
% Combine all three (or all N) vectors into a matrix. Use diff to find the
% difference between each element from row to row. If any of them is
% non-zero, return 1, else return 0.
D = any(diff([A;B;C]))
D = 0 1 0 1
There are several easy ways to do it.
Let's start by putting the relevant vectors in a matrix:
M = [A; B; C];
Now we can do things like:
idx = min(M)==max(M);
or
idx = ~var(M);
No one seems to have addressed that you have a variable number of arrays; you have three in your example, but you said the number could vary. I'd also like to take a stab at this using broadcasting.
You can create a function that takes a variable number of arrays and returns an output array with the same number of columns as the inputs, conforming to the output you're describing.
First create a larger matrix that concatenates all of the arrays together, then use bsxfun to broadcast the first row against the whole matrix and find the columns whose elements are all equal. You can use all to complete this step:
function out = array_compare(varargin)
    matrix = vertcat(varargin{:});
    out = ~all(bsxfun(@eq, matrix(1,:), matrix), 1);
end
This takes the first row of the stacked matrix, checks for every column whether that row matches all of the rows in the stacked matrix, and returns a corresponding vector where 0 denotes a column whose elements are all equal and 1 otherwise.
Save this function in MATLAB and call it array_compare.m, then you can call it in MATLAB like so:
A = [28, 28, 43, 43];
B = [28, 43, 43, 28];
C = [28, 28, 43, 43];
Output = array_compare(A, B, C);
We get in MATLAB:
>> Output
Output =
0 1 0 1
Not fancy but will do the trick
Output = nan(length(A), 1); % preallocation and check if an index isn't reached
for i = 1:length(A)
    Output(i) = ~isequal(A(i), B(i), C(i));
end
If someone has an answer without the loop, take that, but I feel like performance is not an issue here.

Mathematica : Conditional Operations on Lists

I would like to average across "rows" in a column, that is, rows that have the same value in another column.
For example :
e= {{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2},
{69, 7, 30, 38, 16, 70, 97, 50, 97, 31, 81, 96, 60, 52, 35, 6,
24, 65, 76, 100}}
I would like to average all the values in the second column that have the same value in the first one.
So here: the average for Col 1 = 1 and for Col 1 = 2.
And then create a third column with the result of this operation, so the values in that column would be the same for the first 10 lines and for the next 10.
Many Thanks for any help you could provide !
LA
Ideal output format:
Interesting problem. This is the first thing that came into my mind:
e[[All, {1}]] /. Reap[Sow[#2, #] & @@@ e, _, # -> Mean@#2 &][[2]];
ArrayFlatten[{{e, %}}] // TableForm
To get rounding you may simply add Round@ before Mean in the code above: Round@Mean@#2
Here is a slightly faster method, but I actually prefer the Sow/Reap one above:
#[[1, 1]] -> Round@Mean@#[[All, 2]] & /@ GatherBy[e, First];
ArrayFlatten[{{e, e[[All, {1}]] /. %}}] // TableForm
If you have many different elements in the first column, either of the solutions above can be made faster by applying Dispatch to the rule list that is produced, before the replacement (/.) is done. This command tells Mathematica to build and use an optimized internal format for the rules list.
Here is a variant that is slower, but I like it enough to share anyway:
Module[{q},
Reap[{#, Sow[#2, #], q@#} & @@@ e, _, (q@# = Mean@#2) &][[1]]
]
Also, general tips, you can replace:
Table[RandomInteger[{1, 100}], {20}] with RandomInteger[{1, 100}, 20]
and Join[{c}, {d}] // Transpose with Transpose[{c, d}].
What the heck, I'll join the party. Here is my version:
Flatten/@Flatten[Thread/@Transpose@{#, Mean/@#[[All, All, 2]]} &@GatherBy[e, First], 1]
Should be fast enough I guess.
EDIT
In response to the critique of @Mr.Wizard (my first solution was reordering the list), and to explore the high-performance corner of the problem a bit, here are two alternative solutions:
getMeans[e_] :=
Module[{temp = ConstantArray[0, Max[#[[All, 1, 1]]]]},
temp[[#[[All, 1, 1]]]] = Mean /@ #[[All, All, 2]];
List /@ temp[[e[[All, 1]]]]] &[GatherBy[e, First]];
getMeansSparse[e_] :=
Module[{temp = SparseArray[{Max[#[[All, 1, 1]]] -> 0}]},
temp[[#[[All, 1, 1]]]] = Mean /@ #[[All, All, 2]];
List /@ Normal@temp[[e[[All, 1]]]]] &[GatherBy[e, First]];
The first one is the fastest, trading memory for speed, and can be applied when keys are all integers, and your maximal "key" value (2 in your example) is not too large. The second solution is free from the latter limitation, but is slower. Here is a large list of pairs:
In[303]:=
tst = RandomSample[#, Length[#]] &@
Flatten[Map[Thread[{#, RandomInteger[{1, 100}, 300]}] &,
RandomSample[Range[1000], 500]], 1];
In[310]:= Length[tst]
Out[310]= 150000
In[311]:= tst[[;; 10]]
Out[311]= {{947, 52}, {597, 81}, {508, 20}, {891, 81}, {414, 47},
{849, 45}, {659, 69}, {841, 29}, {700, 98}, {858, 35}}
The keys can be from 1 to 1000 here, 500 of them, and there are 300 random numbers for each key. Now, some benchmarks:
In[314]:= (res0 = getMeans[tst]); // Timing
Out[314]= {0.109, Null}
In[317]:= (res1 = getMeansSparse[tst]); // Timing
Out[317]= {0.219, Null}
In[318]:= (res2 = tst[[All, {1}]] /.
Reap[Sow[#2, #] & @@@ tst, _, # -> Mean@#2 &][[2]]); // Timing
Out[318]= {5.687, Null}
In[319]:= (res3 = tst[[All, {1}]] /.
Dispatch[
Reap[Sow[#2, #] & @@@ tst, _, # -> Mean@#2 &][[2]]]); // Timing
Out[319]= {0.391, Null}
In[320]:= res0 === res1 === res2 === res3
Out[320]= True
We can see that getMeans is the fastest here, getMeansSparse the second fastest, and the solution of @Mr.Wizard is only somewhat slower when we use Dispatch; without Dispatch it is much slower. Mine and @Mr.Wizard's solutions (with Dispatch) are similar in spirit; the speed difference is due to (sparse) array indexing being more efficient than hash look-up. Of course, all this matters only when your list is really large.
EDIT 2
Here is a version of getMeans which uses Compile with a C target and returns numerical values (rather than rationals). It is about twice as fast as getMeans, and the fastest of my solutions.
getMeansComp =
Compile[{{e, _Integer, 2}},
Module[{keys = e[[All, 1]], values = e[[All, 2]], sums = {0.},
lengths = {0}, i = 1, means = {0.}, max = 0, key = -1,
len = Length[e]},
max = Max[keys];
sums = Table[0., {max}];
lengths = Table[0, {max}];
means = sums;
Do[key = keys[[i]];
sums[[key]] += values[[i]];
lengths[[key]]++, {i, len}];
means = sums/(lengths + (1 - Unitize[lengths]));
means[[keys]]], CompilationTarget -> "C", RuntimeOptions -> "Speed"]
getMeansC[e_] := List /@ getMeansComp[e];
The code 1 - Unitize[lengths] protects against division by zero for unused keys. We need every number in a separate sublist, so we should call getMeansC, not getMeansComp directly. Here are some measurements:
In[180]:= (res1 = getMeans[tst]); // Timing
Out[180]= {0.11, Null}
In[181]:= (res2 = getMeansC[tst]); // Timing
Out[181]= {0.062, Null}
In[182]:= N@res1 == res2
Out[182]= True
This can probably be considered a heavily optimized numerical solution. The fact that the fully general, brief and beautiful solution of @Mr.Wizard is only about 6-8 times slower speaks very well for that general, concise solution, so, unless you want to squeeze every microsecond out of it, I'd stick with @Mr.Wizard's (with Dispatch). But it's important to know how to optimize code, and also to what degree it can be optimized (what you can expect).
A naive approach could be:
Table[
Join[i, {Select[Mean /@ SplitBy[e, First], First@# == First@i &][[1, 2]]}]
, {i, e}] // TableForm
(*
1 59 297/5
1 72 297/5
1 90 297/5
1 63 297/5
1 77 297/5
1 98 297/5
1 3 297/5
1 99 297/5
1 28 297/5
1 5 297/5
2 87 127/2
2 80 127/2
2 29 127/2
2 70 127/2
2 83 127/2
2 75 127/2
2 68 127/2
2 65 127/2
2 1 127/2
2 77 127/2
*)
You could also create your original list by using for example:
e = Array[{Ceiling[#/10], RandomInteger[{1, 100}]} &, {20}]
Edit
Answering @Mr.Wizard's comments
If the list is not sorted by its first element, you can do:
Table[Join[
i, {Select[
Mean /@ SplitBy[SortBy[e, First], First], First@# == First@i &][[1, 2]]}],
{i, e}] // TableForm
But this is not necessary in your example
Why not pile on?
I thought this was the most straightforward/easy-to-read answer, though not necessarily the fastest. But it's really amazing how many ways you can think of a problem like this in Mathematica.
Mr. Wizard's is obviously very cool as others have pointed out.
@Nasser, your solution doesn't generalize to n-classes, although it easily could be modified to do so.
meanbygroup[table_] := Join @@ Table[
Module[
{sublistmean},
sublistmean = Mean[sublist[[All, 2]]];
Table[Append[item, sublistmean], {item, sublist}]
]
, {sublist, GatherBy[table, #[[1]] &]}
]
(* On this dataset: *)
meanbygroup[e]
Wow, the answers here are so advanced and cool looking; I need more time to learn them.
Here is my answer. I am still a matrix/vector/MATLAB-ish guy in recovery and transition, so my solution is not functional like the experts' solutions here. I look at data as matrices and vectors (easier for me than looking at them as lists of lists, etc.), so here it is:
sizeOfList=10; (*given from the problem, along with e vector*)
m1 = Mean[e[[1;;sizeOfList,2]]];
m2 = Mean[e[[sizeOfList+1;;2 sizeOfList,2]]];
r = {Flatten[{a,b}], d , Flatten[{Table[m1,{sizeOfList}],Table[m2,{sizeOfList}]}]} //Transpose;
MatrixForm[r]
Clearly not as good a solution as the functional ones.
Ok, I will go now and hide away from the functional programmers :)
--Nasser
