Having read Roman Lygin's very interesting series of articles on parallel applications with OpenCascade, I decided to run some tests with an often-used OCC class: BRepExtrema_DistShapeShape.
In my case, BRepExtrema_DistShapeShape gives me the projection of a point (vertex) onto a topological shape.
My test is as follows:
1. Load a testing geom curve C (for example from a file)
2. Load a testing topological shape S (for example from a file)
3. Discretize C with GCPnts_UniformAbscissa, getting Points
4. For each point Pi in Points, project Pi on S with BRepExtrema_DistShapeShape
I want to parallelize the fourth step with TBB (Threading Building Blocks 2.1), using tbb::parallel_for().
Here is the code I wrote:
// The Body functor that tbb::parallel_for() will execute in parallel.
struct apply_occ_dist_shape_shape
{
  apply_occ_dist_shape_shape(const TopoDS_Shape& shape,
                             const std::vector<gp_Pnt>& points)
    : _shape(shape), _points(points) {}

  void operator()(const tbb::blocked_range<int>& r) const
  {
    std::cout << "range : " << r.begin() << ", " << r.end() << std::endl;
    for (int i = r.begin(); i != r.end(); ++i)
    {
      const TopoDS_Vertex vertex = BRepBuilderAPI_MakeVertex(_points[i]);
      BRepExtrema_DistShapeShape projector(vertex, _shape);
      // ... use projector.Value(), projector.PointOnShape2(1), etc.
    }
  }

  const TopoDS_Shape&        _shape;
  const std::vector<gp_Pnt>& _points;
}; // struct apply_occ_dist_shape_shape
// 1. Load the testing geom curve (C).
const Handle_Geom_Curve curve1 = // A (long) curve ...
// 2. Load the testing topological shape (S).
const TopoDS_Shape shape1 = // A (complex) shape ...
// 3. Discretize the curve (C).
const double discrStep = 10.;
GeomAdaptor_Curve curveAdaptor(curve1);
GCPnts_UniformAbscissa discretizer(curveAdaptor, discrStep);
std::vector<gp_Pnt> points(discretizer.NbPoints());
for (int i = 1; i <= discretizer.NbPoints(); ++i)
  points[i - 1] = curve1->Value(discretizer.Parameter(i));

// 4. Project each point on (S), in parallel.
const size_t grainSize = 100;
tbb::parallel_for(tbb::blocked_range<int>(0, (int)points.size(), grainSize),
                  apply_occ_dist_shape_shape(shape1, points));
The third parameter of the constructor of tbb::blocked_range() is the grain size.
This value tells TBB to split a range into two sub-ranges if its size is greater than grainSize.
Here are the results:
- There are 334 points in my curve discretization (so points.size() == 334), which forces TBB to split the range [0, 334) and activates parallelization.
- The computation starts with two sub-ranges:
range : 0, 83
range : 167, 250
And it fails with an exception (caught by TBB):
terminate called after throwing an instance of 'tbb::captured_exception'
what(): Unidentified exception
Setting MMGT_OPT to 0 does not help.
Setting grainSize to a value greater than points.size() disables parallelization and then the computation works.
I have not dug deeper into the OpenCascade sources, but as Roman pointed out in his article series, the problem is most likely related to static variables declared in some OpenCascade C++ implementation files. But where?