OpenGL Context in Multithreading

I'm trying to make a separate thread which has to work with visualization via AIS_InteractiveContext. This raises the problem of one OpenGL context shared by two threads. It's not obvious to me how to handle several OpenGL contexts in terms of OpenCASCADE (or maybe even one, somehow shared). Could you please explain what I should do? Which instruments (classes, functions) would be helpful in such a case? I'm working with C++ Standard Library threads for multithreading.

Kirill Gavrilov:

Consider describing your use case in more detail - how exactly you are using AIS_InteractiveContext within multiple threads and for which purpose. Why do you create multiple OpenGL contexts?

In general, an OpenGL renderer is expected to be used from the same thread it was created in.

momooshi:

I want to create a simple program for 2D sketching. My idea is to make a line (or other 2D primitives, but I've started with a line) from points which the user inputs with mouse clicks. So I made a thread which waits for the user's input (an infinite loop) and displays the result dynamically as the user adds points: the user clicks for the first time and the first point appears on the screen; the second point of the line is recomputed as the mouse cursor moves, and the line is displayed with these points until the user clicks a second time, which finishes the line. In the function for this thread I therefore had to call AIS_InteractiveContext::Display() to show my dynamically recomputed line.

When I try to make a line in my program, it usually works for some time and I successfully get the result, but then it crashes. During line creation I receive error messages in the program output about problems with the OpenGL context (the decoded message says that the requested resource was busy). After some research I found that one OpenGL context used from several threads doesn't work properly, so I'm thinking about creating a different context for the separate thread.

Maybe I should also describe the architecture. I have a class for initialization of the OpenCASCADE viewer and a class describing the main window (I'm working with Qt). In the second class I have a slot for the button which activates line-creation mode, and it's exactly in this slot that I'm trying to run the second thread.

Kirill Gavrilov:

This is not how it is usually done.

Normally an application puts the GUI and 3D Viewer logic into the main thread and creates background working thread(s) to execute expensive calculations. A background thread signals the main thread about a finished job, so that the GUI thread may display the results to the user (and update the 3D Viewer content). In some frameworks like QtQuick, the GUI and rendering threads might also be separated, so that you'll have to synchronize between the GUI, 3D rendering and background threads.

Of course, you may create a dedicated 3D Viewer (OpenGl_GraphicDriver, V3d_Viewer, AIS_InteractiveContext) for each working thread, but usually this makes no sense from a performance point of view.

momooshi:

Thank you for the reply. I'll try to revise my approach and I'll post the final solution after success.

momooshi:

Yes, it really was possible to change the solution. I managed to do it with the Qt signal-slot mechanism (I tend to come up with overly complicated solutions at first). Thank you very much.

Marco Balen:

Hi,
I have a similar question about threading and OCC.
I have to display the contents of IGES/STEP files, and to avoid blocking the app with big files I want to move the code that reads the IGES/STEP files to another thread.
The reading thread may need to be interrupted when the user changes the file selection while the reading thread has not yet finished reading the file.
How can I manage the thread?
I have seen the OCC OSD_ThreadPool class, but it doesn't seem to have a method to stop the thread's work.
In Microsoft VC++ there is the concurrency::task_group class, which permits the execution and interruption of work on a thread.
But what happens to the memory used by OCC classes in that thread?
Do you have a suggestion?

Kirill Gavrilov:

This doesn't look related to visualization, only to threading.

Low-level thread interruption might work very fast, but it implies severe side effects, so it should not be used in normal cases: it risks memory leaks or leaving the application in an incorrect state. Threads are expected to communicate with each other and, for instance, signal whether the current long operation should be aborted. This means that the operation itself must be able to handle such an abort flag.

In the case of OCCT, this interface is provided by Message_ProgressIndicator. Algorithms taking a Message_ProgressRange are capable of indicating their current progress through Message_ProgressIndicator::Show() and of stopping execution based on Message_ProgressIndicator::UserBreak(), implemented by the application. Depending on the algorithm and its progress granularity, cancellation might happen instantly or with some delay (the algorithm might keep working on its current step before it sees the abort request).

You may read more about this interface in OCCT documentation and also in this article.

Marco Balen:

Thanks, Kirill,
I will investigate Message_ProgressIndicator.