diff --git a/doc/tutorial/rendering/tutorial-panda3d.dox b/doc/tutorial/rendering/tutorial-panda3d.dox index 73787fa70a..66ab3a0941 100644 --- a/doc/tutorial/rendering/tutorial-panda3d.dox +++ b/doc/tutorial/rendering/tutorial-panda3d.dox @@ -138,7 +138,7 @@ vpPanda3DBaseRenderer, which implements basic functions for a panda renderer. - Now to build ViSP with Panda3D support when `.dmg` file `Panda3D-1.11.0-py3.9.dmg` is installed, you can just run cmake as usual. Note that PCL is not compatible with Panda3D, that's why we disable here PCL usage - (see \ref tutorial-panda3d-issue-macOS). + (see \ref tutorial-panda3d-issue-segfault-macOS). \code{.sh} $ cd $VISP_WS/visp-build $ cmake ../visp -DUSE_PCL=OFF @@ -165,31 +165,81 @@ vpPanda3DBaseRenderer, which implements basic functions for a panda renderer. - Installer are available for Windows browsing the [download](https://www.panda3d.org/download/) page. -\section tutorial-panda3d-usage Rendere based on Panda3D usage +\section tutorial-panda3d-usage Using Panda3D for rendering An example that shows how to exploit Panda3D in ViSP to render a color image with support for textures and lighting, a -depth image, normals in world space and in camera space is given in tutorial-panda3d-renderer.cpp. +depth image, normals in object space and in camera space is given in tutorial-panda3d-renderer.cpp. -Here you will find the code used to create the renderer: + +To start rendering, we first instantiate a vpPanda3DRendererSet. This object allows rendering multiple modalities (color, depth, etc.) in a single pass. +To add different rendering modalities, we will use subclasses that will be registered to the renderer set. +Internally, each sub renderer has its own scene: the renderer set synchronizes everything when the state changes (i.e., an object is added, an object is moved, or the camera parameters change). + +A Panda3D renderer should be instantiated with a vpPanda3DRenderParameters object.
This object defines: +- The camera intrinsics (see vpCameraParameters): As of now, only parameters for a distortion-free model are supported +- The image resolution +- The near and far clipping plane values. Object parts that are too close (less than the near clipping value) or too far (greater than the far clipping value) will not be rendered. + +The creation of the renderer set can be found below: \snippet tutorial-panda3d-renderer.cpp Renderer set -Here you will find the code used to create the sub renderers: +To actually render color, normals, etc., we need to define subrenderers: \snippet tutorial-panda3d-renderer.cpp Subrenderers init -Here you will find the code used to add the sub renderers to the main renderer: +The different subrenderers are: + +- vpPanda3DGeometryRenderer instances allow retrieving 3D information about the object: the surface normals in the object or camera frame, as well as the depth information. +- vpPanda3DRGBRenderer objects perform the traditional color rendering. Lighting interaction can be disabled, as is the case for the second renderer (diffuse only). +- Post-processing renderers, such as vpPanda3DLuminanceFilter or vpPanda3DCanny, operate on the output image of another renderer. They can be used to further process the output data and can be chained together. +In this case, the chain vpPanda3DLuminanceFilter -> vpPanda3DGaussianBlur -> vpPanda3DCanny will perform a Canny edge detection (without hysteresis) on a blurred, grayscale image. + +For these subrenderers to actually be useful, they should be added to the main renderer: \snippet tutorial-panda3d-renderer.cpp Adding subrenderers +\warning Once they have been added, a call to vpPanda3DBaseRenderer::initFramework() should be performed. Otherwise, no rendering will be performed and objects will not be loaded.
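The required call order — register every subrenderer first, then call initFramework() exactly once — can be illustrated with a minimal self-contained sketch. The `RendererSet`/`SubRenderer` types below are hypothetical stand-ins for illustration only, not the actual ViSP classes:

```cpp
#include <memory>
#include <stdexcept>
#include <utility>
#include <vector>

// Hypothetical stand-in types mimicking the call-order contract of
// vpPanda3DRendererSet (register subrenderers, then initialize once).
struct SubRenderer {
  bool initialized = false;
};

struct RendererSet {
  std::vector<std::shared_ptr<SubRenderer>> subRenderers;
  bool frameworkReady = false;

  // Subrenderers must be registered before the framework is initialized.
  void addSubRenderer(std::shared_ptr<SubRenderer> r)
  {
    subRenderers.push_back(std::move(r));
  }

  // Propagates initialization to every registered subrenderer.
  void initFramework()
  {
    for (auto &r : subRenderers) {
      r->initialized = true;
    }
    frameworkReady = true;
  }

  // Rendering without a prior initFramework() call is an error here;
  // in ViSP the symptom is that nothing is rendered and no object is loaded.
  void renderFrame()
  {
    if (!frameworkReady) {
      throw std::logic_error("initFramework() was never called");
    }
  }
};
```

This sketch makes the failure mode explicit by throwing; the real renderer set fails silently, which is why the warning above matters.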
+ Here you will find the code used to configure the scene: \snippet tutorial-panda3d-renderer.cpp Scene configuration +We start by loading the object to render with vpPanda3DBaseRenderer::loadObject, followed by vpPanda3DBaseRenderer::addNodeToScene. +For the color-based renderer, we add lights to shade our object. Different light types are supported, reusing the available Panda3D features. + +Once the scene is set up, we can start rendering. This will be performed in a loop. + +The first step shown is the following: +\snippet tutorial-panda3d-renderer.cpp Updating render parameters + +Each frame, we compute the values of the clipping planes, and update the rendering properties. This will ensure that the target object is visible. +Depending on your use case, this may not be necessary. + +Once this is done, we can call upon Panda3D to render the object with +\snippet tutorial-panda3d-renderer.cpp Render frame + +\note Under the hood, all subrenderers rely on the same Panda3D "framework": calling renderFrame on one will call it for the others. + +To use the renders, we must convert them to ViSP images. +To do so, each subrenderer defines its own *getRender* method, which performs the conversion from a Panda3D texture to the relevant ViSP data type. + +For each render type, we start by getting the correct renderer via vpPanda3DRendererSet::getRenderer, then call its *getRender* method. +\snippet tutorial-panda3d-renderer.cpp Fetch render + +Now that we have retrieved the images, we can display them. To do so, we leverage utility functions defined beforehand (see the full code for more information). These may be required in cases where the data cannot be directly displayed. +For instance, normals are encoded as 3 32-bit floats, but displays require colors to be represented as 8-bit unsigned characters. +The same goes for the depth render, which is mapped back to the 0-255 range, although its values are unbounded.
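As an illustration of this kind of conversion, here is a minimal self-contained sketch. The `depthToDisplay` helper is hypothetical (the tutorial uses its own utility functions); it linearly maps a float depth buffer into the displayable 0-255 range:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical helper: linearly maps float depth values (metric, unbounded)
// to 8-bit values for display, clamping values outside [nearV, farV].
std::vector<std::uint8_t> depthToDisplay(const std::vector<float> &depth,
                                         float nearV, float farV)
{
  std::vector<std::uint8_t> out(depth.size());
  const float range = (farV - nearV) > 1e-6f ? (farV - nearV) : 1e-6f;
  for (std::size_t i = 0; i < depth.size(); ++i) {
    float d = (depth[i] - nearV) / range;        // normalize to [0, 1]
    d = d < 0.f ? 0.f : (d > 1.f ? 1.f : d);     // clamp out-of-range values
    out[i] = static_cast<std::uint8_t>(d * 255.f);
  }
  return out;
}
```

The clipping-plane distances computed each frame are natural choices for `nearV` and `farV`, so the whole visible depth range spreads over the 256 displayable levels.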
+\snippet tutorial-panda3d-renderer.cpp Display + +Finally, we use the snippet below to move the object, updating the scene. +To have a constant velocity, we multiply the displacement by the time that has elapsed between the frame's start and end. +\snippet tutorial-panda3d-renderer.cpp Move object + \section tutorial-panda3d-full-code Tutorial full code The full code of tutorial-panda3d-renderer.cpp is given below. \include tutorial-panda3d-renderer.cpp -\section tutorial-panda3d-run Execute the tutorial +\section tutorial-panda3d-run Running the tutorial -- Once ViSP is build, you may run the tutorial by: +- Once ViSP is built, you may run the tutorial by: \code{.sh} $ cd $VISP_WS/visp-build $ ./tutorial/ar/tutorial-panda3d-renderer @@ -201,67 +251,90 @@ The full code of tutorial-panda3d-renderer.cpp is given below. \endhtmlonly \section tutorial-panda3d-issue Known issues -\subsection tutorial-panda3d-issue-macOS Known issue on macOS - -- Segfault: `:framework(error): Unable to create window` - ``` - % ./tutorial-panda3d-renderer - Initializing Panda3D rendering framework - Known pipe types: - CocoaGLGraphicsPipe - (all display modules loaded.) - :framework(error): Unable to create window. - zsh: segmentation fault ./tutorial-panda3d-renderer - ``` - This issue is probably due to `EIGEN_MAX_ALIGN_BYTES` and `HAVE_PNG` macro redefinition that occurs when building ViSP with Panda3D support: - ``` - $ cd visp-build - $ make - ... 
- [100%] Building CXX object tutorial/ar/CMakeFiles/tutorial-panda3d-renderer.dir/tutorial-panda3d-renderer.cpp.o - In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17: - In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39: - In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:22: - $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtool_config.h:40:9: warning: 'HAVE_PNG' macro redefined [-Wmacro-redefined] - #define HAVE_PNG 1 +\subsection tutorial-panda3d-issue-library-macOS Library not loaded: libpanda.1.11.dylib + +This error occurs on macOS. + +``` +% cd $VISP_WS/visp-build/tutorial/ar/ +% ./tutorial-panda3d-renderer +dyld[1795]: Library not loaded: @loader_path/../lib/libpanda.1.11.dylib + Referenced from: <0D61FFE0-73FA-3053-8D8D-8912BFF16E36> /Users/fspindle/soft/visp/visp_ws/test-pr/visp-SamFlt/visp-build/tutorial/ar/tutorial-panda3d-renderer + Reason: tried: '/Users/fspindle/soft/visp/visp_ws/test-pr/visp-SamFlt/visp-build/tutorial/ar/../lib/libpanda.1.11.dylib' (no such file) +zsh: abort ./tutorial-panda3d-renderer +``` + +It occurs when you did not carefully follow the instructions mentioned in the \ref tutorial-panda3d-install-macos section. + +A quick fix is to add the path to the library to the `DYLD_LIBRARY_PATH` environment variable: +``` +$ export DYLD_LIBRARY_PATH=/Library/Developer/Panda3D/lib:$DYLD_LIBRARY_PATH +``` + +\subsection tutorial-panda3d-issue-segfault-macOS Segfault: :framework(error): Unable to create window + +This error occurs on macOS.
+ +``` +% cd $VISP_WS/visp-build/tutorial/ar/ +% ./tutorial-panda3d-renderer +Initializing Panda3D rendering framework +Known pipe types: + CocoaGLGraphicsPipe +(all display modules loaded.) +:framework(error): Unable to create window. +zsh: segmentation fault ./tutorial-panda3d-renderer +``` +This issue is probably due to `EIGEN_MAX_ALIGN_BYTES` and `HAVE_PNG` macro redefinition that occurs when building ViSP with Panda3D support: +``` +$ cd visp-build +$ make +... +[100%] Building CXX object tutorial/ar/CMakeFiles/tutorial-panda3d-renderer.dir/tutorial-panda3d-renderer.cpp.o +In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17: +In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39: +In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:22: +$VISP_WS/3rdparty/panda3d/panda3d/built/include/dtool_config.h:40:9: warning: 'HAVE_PNG' macro redefined [-Wmacro-redefined] +#define HAVE_PNG 1 + ^ +/opt/homebrew/include/pcl-1.14/pcl/pcl_config.h:53:9: note: previous definition is here +#define HAVE_PNG + ^ +In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17: +In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39: +In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:22: +$VISP_WS/3rdparty/panda3d/panda3d/built/include/dtool_config.h:64:9: warning: 
'HAVE_ZLIB' macro redefined [-Wmacro-redefined] +#define HAVE_ZLIB 1 + ^ +/opt/homebrew/include/pcl-1.14/pcl/pcl_config.h:55:9: note: previous definition is here +#define HAVE_ZLIB + ^ +In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17: +In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39: +In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17: +In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21: +$VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:432:9: warning: 'EIGEN_MAX_ALIGN_BYTES' macro redefined [-Wmacro-redefined] +#define EIGEN_MAX_ALIGN_BYTES MEMORY_HOOK_ALIGNMENT + ^ +/opt/homebrew/include/eigen3/Eigen/src/Core/util/ConfigureVectorization.h:175:11: note: previous definition is here + #define EIGEN_MAX_ALIGN_BYTES EIGEN_IDEAL_MAX_ALIGN_BYTES ^ - /opt/homebrew/include/pcl-1.14/pcl/pcl_config.h:53:9: note: previous definition is here - #define HAVE_PNG - ^ - In file included from $VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17: - In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39: - In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:22: - $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtool_config.h:64:9: warning: 'HAVE_ZLIB' macro redefined [-Wmacro-redefined] - #define HAVE_ZLIB 1 - ^ - /opt/homebrew/include/pcl-1.14/pcl/pcl_config.h:55:9: note: previous definition is here - #define HAVE_ZLIB - ^ - In file included from 
$VISP_WS/visp/tutorial/ar/tutorial-panda3d-renderer.cpp:17: - In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h:39: - In file included from $VISP_WS/visp/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h:42: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandaFramework.h:17: - In file included from $VISP_WS/3rdparty/panda3d/panda3d/built/include/pandabase.h:21: - $VISP_WS/3rdparty/panda3d/panda3d/built/include/dtoolbase.h:432:9: warning: 'EIGEN_MAX_ALIGN_BYTES' macro redefined [-Wmacro-redefined] - #define EIGEN_MAX_ALIGN_BYTES MEMORY_HOOK_ALIGNMENT - ^ - /opt/homebrew/include/eigen3/Eigen/src/Core/util/ConfigureVectorization.h:175:11: note: previous definition is here - #define EIGEN_MAX_ALIGN_BYTES EIGEN_IDEAL_MAX_ALIGN_BYTES - ^ - 3 warnings generated. - [100%] Linking CXX executable tutorial-panda3d-renderer - [100%] Built target tutorial-panda3d-renderer - ``` - The work around consists in disabling `PCL` usage during ViSP configuration - ``` - $ cd $VISP_WS/visp-build - $ cmake ../visp -DUSE_PCL=OFF - $ make -j$(sysctl -n hw.logicalcpu) - ``` +3 warnings generated. 
+[100%] Linking CXX executable tutorial-panda3d-renderer +[100%] Built target tutorial-panda3d-renderer +``` +The workaround consists in disabling `PCL` usage during ViSP configuration: +``` +$ cd $VISP_WS/visp-build +$ cmake ../visp -DUSE_PCL=OFF +$ make -j$(sysctl -n hw.logicalcpu) +``` + */ diff --git a/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h b/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h index 2c69edb298..d0844a4625 100644 --- a/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h +++ b/modules/ar/include/visp3/ar/vpPanda3DBaseRenderer.h @@ -71,17 +71,18 @@ class VISP_EXPORT vpPanda3DBaseRenderer * Will also perform the renderer setup (scene, camera and render targets) */ virtual void initFramework(); + virtual void initFromParent(std::shared_ptr<PandaFramework> framework, PointerTo<WindowFramework> window); + virtual void initFromParent(const vpPanda3DBaseRenderer &renderer); - /** - * @brief - * - * @param framework - * @param window - */ - void initFromParent(std::shared_ptr<PandaFramework> framework, std::shared_ptr<WindowFramework> window); - - + virtual void beforeFrameRendered() { } virtual void renderFrame(); + virtual void afterFrameRendered() + { + GraphicsOutput *mainBuffer = getMainOutputBuffer(); + if (mainBuffer != nullptr) { + m_framework->get_graphics_engine()->extract_texture_data(mainBuffer->get_texture(), mainBuffer->get_gsg()); + } + } /** * @brief Get the name of the renderer @@ -90,6 +91,8 @@ class VISP_EXPORT vpPanda3DBaseRenderer */ const std::string &getName() const { return m_name; } + void setName(const std::string &name) { m_name = name; } + /** * @brief Get the scene root * @@ -101,31 +104,7 @@ class VISP_EXPORT vpPanda3DBaseRenderer * * @param params the new rendering parameters */ - virtual void setRenderParameters(const vpPanda3DRenderParameters &params) - { - unsigned int previousH = m_renderParameters.getImageHeight(), previousW = m_renderParameters.getImageWidth(); - bool resize = previousH != params.getImageHeight() || previousW != params.getImageWidth(); - - 
m_renderParameters = params; - - if (resize) { - for (GraphicsOutput *buffer: m_buffers) { - //buffer->get_type().is_derived_from() - GraphicsBuffer *buf = dynamic_cast<GraphicsBuffer *>(buffer); - if (buf == nullptr) { - throw vpException(vpException::fatalError, "Panda3D: could not cast to GraphicsBuffer when rendering."); - } - else { - buf->set_size(m_renderParameters.getImageWidth(), m_renderParameters.getImageHeight()); - } - } - } - - // If renderer is already initialized, modify camera properties - if (m_camera != nullptr) { - m_renderParameters.setupPandaCamera(m_camera); - } - } + virtual void setRenderParameters(const vpPanda3DRenderParameters &params); /** * @brief Returns true if this renderer process 3D data and its scene root can be interacted with. @@ -143,6 +122,15 @@ class VISP_EXPORT vpPanda3DBaseRenderer */ int getRenderOrder() const { return m_renderOrder; } + void setRenderOrder(int order) + { + int previousOrder = m_renderOrder; + m_renderOrder = order; + for (GraphicsOutput *buffer: m_buffers) { + buffer->set_sort(buffer->get_sort() + (order - previousOrder)); + } + } + /** * @brief Set the camera's pose. * The pose is specified using the ViSP convention (Y-down right handed). @@ -206,8 +194,10 @@ class VISP_EXPORT vpPanda3DBaseRenderer * @param name name of the node that should be used to compute near and far values. * @param near resulting near clipping plane distance * @param far resulting far clipping plane distance + * @param fast Whether to use the axis-aligned bounding box to compute the clipping planes. + * This is faster than reprojecting the full geometry in the camera frame */ - void computeNearAndFarPlanesFromNode(const std::string &name, float &near, float &far); + void computeNearAndFarPlanesFromNode(const std::string &name, float &near, float &far, bool fast); /** * @brief Load a 3D object. To load an .obj file, Panda3D must be compiled with assimp support. 
@@ -250,6 +240,8 @@ class VISP_EXPORT vpPanda3DBaseRenderer virtual GraphicsOutput *getMainOutputBuffer() { return nullptr; } + virtual void enableSharedDepthBuffer(vpPanda3DBaseRenderer &sourceBuffer); + protected: /** @@ -272,16 +264,14 @@ class VISP_EXPORT vpPanda3DBaseRenderer */ virtual void setupRenderTarget() { } - const static vpHomogeneousMatrix VISP_T_PANDA; //! Homogeneous transformation matrix to convert from the Panda coordinate system (right-handed Z-up) to the ViSP coordinate system (right-handed Y-Down) const static vpHomogeneousMatrix PANDA_T_VISP; //! Inverse of VISP_T_PANDA - protected: - const std::string m_name; //! name of the renderer + std::string m_name; //! name of the renderer int m_renderOrder; //! Rendering priority for this renderer and its buffers. A lower value will be rendered first. Should be used when calling make_output in setupRenderTarget() std::shared_ptr<PandaFramework> m_framework; //! Pointer to the active panda framework - std::shared_ptr<WindowFramework> m_window; //! Pointer to owning window, which can create buffers etc. It is not necessarily visible. + PointerTo<WindowFramework> m_window; //! Pointer to owning window, which can create buffers etc. It is not necessarily visible. vpPanda3DRenderParameters m_renderParameters; //! Rendering parameters NodePath m_renderRoot; //! 
Node containing all the objects and the camera for this renderer PointerTo m_camera; diff --git a/modules/ar/include/visp3/ar/vpPanda3DGeometryRenderer.h b/modules/ar/include/visp3/ar/vpPanda3DGeometryRenderer.h index b043145960..c4540b969a 100644 --- a/modules/ar/include/visp3/ar/vpPanda3DGeometryRenderer.h +++ b/modules/ar/include/visp3/ar/vpPanda3DGeometryRenderer.h @@ -36,6 +36,7 @@ #if defined(VISP_HAVE_PANDA3D) #include #include +#include #include BEGIN_VISP_NAMESPACE @@ -60,7 +61,7 @@ class VISP_EXPORT vpPanda3DGeometryRenderer : public vpPanda3DBaseRenderer }; vpPanda3DGeometryRenderer(vpRenderType renderType); - ~vpPanda3DGeometryRenderer() { } + ~vpPanda3DGeometryRenderer() = default; /** * @brief Get render results into ViSP readable structures @@ -71,6 +72,8 @@ class VISP_EXPORT vpPanda3DGeometryRenderer : public vpPanda3DBaseRenderer */ void getRender(vpImage &colorData, vpImage &depth) const; + void getRender(vpImage &normals, vpImage &depth, const vpRect &bb, unsigned int h, unsigned w) const; + /** * @brief Get render results into ViSP readable structures. This version only retrieves the normal data * @param colorData Depending on the vpRenderType, normals in the world or camera frame may be stored in this image. 
@@ -82,7 +85,8 @@ class VISP_EXPORT vpPanda3DGeometryRenderer : public vpPanda3DBaseRenderer */ void getRender(vpImage<float> &depth) const; - GraphicsOutput *getMainOutputBuffer() VP_OVERRIDE { return m_normalDepthBuffer; } + GraphicsOutput *getMainOutputBuffer() VP_OVERRIDE { return (GraphicsOutput *)m_normalDepthBuffer; } + protected: void setupScene() VP_OVERRIDE; @@ -90,10 +94,10 @@ class VISP_EXPORT vpPanda3DGeometryRenderer : public vpPanda3DBaseRenderer private: vpRenderType m_renderType; - Texture *m_normalDepthTexture; - GraphicsOutput *m_normalDepthBuffer; + PointerTo<Texture> m_normalDepthTexture; + PointerTo<GraphicsOutput> m_normalDepthBuffer; - static const char *SHADER_VERT_NORMAL_AND_DEPTH_WORLD; + static const char *SHADER_VERT_NORMAL_AND_DEPTH_OBJECT; static const char *SHADER_VERT_NORMAL_AND_DEPTH_CAMERA; static const char *SHADER_FRAG_NORMAL_AND_DEPTH; diff --git a/modules/ar/include/visp3/ar/vpPanda3DLight.h b/modules/ar/include/visp3/ar/vpPanda3DLight.h index 0e5fa38e8f..c41e535a6b 100644 --- a/modules/ar/include/visp3/ar/vpPanda3DLight.h +++ b/modules/ar/include/visp3/ar/vpPanda3DLight.h @@ -209,7 +209,6 @@ class VISP_EXPORT vpPanda3DDirectionalLight : public vpPanda3DLight PT(DirectionalLight) light = new DirectionalLight(m_name); light->set_color(LColor(m_color.R, m_color.G, m_color.B, 1)); vpColVector dir = vpPanda3DBaseRenderer::vispVectorToPanda(m_direction); - std::cout << m_direction << ", " << dir << std::endl; light->set_direction(LVector3f(m_direction[0], m_direction[1], m_direction[2])); NodePath np = scene.attach_new_node(light); scene.set_light(np); diff --git a/modules/ar/include/visp3/ar/vpPanda3DPostProcessFilter.h b/modules/ar/include/visp3/ar/vpPanda3DPostProcessFilter.h index 2304919ef6..079e4e96a4 100644 --- a/modules/ar/include/visp3/ar/vpPanda3DPostProcessFilter.h +++ b/modules/ar/include/visp3/ar/vpPanda3DPostProcessFilter.h @@ -62,12 +62,21 @@ class VISP_EXPORT vpPanda3DPostProcessFilter : public vpPanda3DBaseRenderer m_renderOrder = 
m_inputRenderer->getRenderOrder() + 1; } + virtual ~vpPanda3DPostProcessFilter() = default; + bool isRendering3DScene() const VP_OVERRIDE { return false; } - GraphicsOutput *getMainOutputBuffer() VP_OVERRIDE { return m_buffer; } + GraphicsOutput *getMainOutputBuffer() VP_OVERRIDE { return (GraphicsOutput *)m_buffer; } + + void afterFrameRendered() VP_OVERRIDE + { + if (m_isOutput) { + vpPanda3DBaseRenderer::afterFrameRendered(); + } + } protected: virtual void setupScene() VP_OVERRIDE; @@ -81,15 +90,14 @@ class VISP_EXPORT vpPanda3DPostProcessFilter : public vpPanda3DBaseRenderer void getRenderBasic(vpImage<unsigned char> &I) const; void getRenderBasic(vpImage<vpRGBf> &I) const; - virtual FrameBufferProperties getBufferProperties() const = 0; std::shared_ptr<vpPanda3DBaseRenderer> m_inputRenderer; bool m_isOutput; //! Whether this filter is an output to be used and should be copied to ram std::string m_fragmentShader; PointerTo<Shader> m_shader; - Texture *m_texture; - GraphicsOutput *m_buffer; + PointerTo<Texture> m_texture; + PointerTo<GraphicsOutput> m_buffer; static const char *FILTER_VERTEX_SHADER; }; diff --git a/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h b/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h index 25a3d6ce20..cadb5c021e 100644 --- a/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h +++ b/modules/ar/include/visp3/ar/vpPanda3DRGBRenderer.h @@ -39,6 +39,8 @@ #include #include +#include "pointerTo.h" + BEGIN_VISP_NAMESPACE /** * \ingroup group_ar_renderer_panda3d_3d @@ -78,6 +80,7 @@ class VISP_EXPORT vpPanda3DRGBRenderer : public vpPanda3DBaseRenderer, public vp */ vpPanda3DRGBRenderer(bool showSpeculars) : vpPanda3DBaseRenderer(showSpeculars ? "RGB" : "RGB-diffuse"), m_showSpeculars(showSpeculars) { } + virtual ~vpPanda3DRGBRenderer() = default; /** * @brief Store the render resulting from calling renderFrame() into a vpImage. 
@@ -92,27 +95,26 @@ class VISP_EXPORT vpPanda3DRGBRenderer : public vpPanda3DBaseRenderer, public vp void setBackgroundImage(const vpImage<vpRGBa> &background); - GraphicsOutput *getMainOutputBuffer() VP_OVERRIDE { return m_colorBuffer; } + GraphicsOutput *getMainOutputBuffer() VP_OVERRIDE { return (GraphicsOutput *)m_colorBuffer; } bool isShowingSpeculars() const { return m_showSpeculars; } - protected: void setupScene() VP_OVERRIDE; void setupRenderTarget() VP_OVERRIDE; + virtual std::string makeFragmentShader(bool hasTexture, bool specular); private: bool m_showSpeculars; - Texture *m_colorTexture; - GraphicsOutput *m_colorBuffer; + PointerTo<Texture> m_colorTexture; + PointerTo<GraphicsOutput> m_colorBuffer; static const char *COOK_TORRANCE_VERT; static const char *COOK_TORRANCE_FRAG; NodePath m_backgroundImage; - DisplayRegion *m_display2d; - Texture *m_backgroundTexture; - + PointerTo<DisplayRegion> m_display2d; + PointerTo<Texture> m_backgroundTexture; }; END_VISP_NAMESPACE diff --git a/modules/ar/include/visp3/ar/vpPanda3DRendererSet.h b/modules/ar/include/visp3/ar/vpPanda3DRendererSet.h index 9c0bded81e..49098952cf 100644 --- a/modules/ar/include/visp3/ar/vpPanda3DRendererSet.h +++ b/modules/ar/include/visp3/ar/vpPanda3DRendererSet.h @@ -71,6 +71,8 @@ class VISP_EXPORT vpPanda3DRendererSet : public vpPanda3DBaseRenderer, public vp * Thus, if a renderer B depends on A for its render, and if B.getRenderOrder() > A.getRenderOrder() it can rely on A being initialized when B.initFromParent is called (along with the setupCamera, setupRenderTarget). */ void initFramework() VP_OVERRIDE; + void initFromParent(std::shared_ptr<PandaFramework> framework, PointerTo<WindowFramework> window) VP_OVERRIDE; + void initFromParent(const vpPanda3DBaseRenderer &renderer) VP_OVERRIDE; /** * @brief Set the pose of the camera, using the ViSP convention. 
This change is propagated to all subrenderers @@ -134,7 +136,7 @@ class VISP_EXPORT vpPanda3DRendererSet : public vpPanda3DBaseRenderer, public vp */ void addNodeToScene(const NodePath &object) VP_OVERRIDE; - void setRenderParameters(const vpPanda3DRenderParameters ¶ms) VP_OVERRIDE; + virtual void setRenderParameters(const vpPanda3DRenderParameters ¶ms) VP_OVERRIDE; void addLight(const vpPanda3DLight &light) VP_OVERRIDE; @@ -147,6 +149,8 @@ class VISP_EXPORT vpPanda3DRendererSet : public vpPanda3DBaseRenderer, public vp */ void addSubRenderer(std::shared_ptr renderer); + void enableSharedDepthBuffer(vpPanda3DBaseRenderer &sourceBuffer) VP_OVERRIDE; + /** * @brief Retrieve the first subrenderer with the specified template type. * @@ -164,6 +168,7 @@ class VISP_EXPORT vpPanda3DRendererSet : public vpPanda3DBaseRenderer, public vp } return nullptr; } + /** * @brief Retrieve the subrenderer with the specified template type and the given name. * @@ -185,12 +190,25 @@ class VISP_EXPORT vpPanda3DRendererSet : public vpPanda3DBaseRenderer, public vp return nullptr; } + void beforeFrameRendered() VP_OVERRIDE + { + for (std::shared_ptr &renderer: m_subRenderers) { + renderer->beforeFrameRendered(); + } + } + + void afterFrameRendered() VP_OVERRIDE + { + for (std::shared_ptr &renderer: m_subRenderers) { + renderer->afterFrameRendered(); + } + } + protected: void setupScene() VP_OVERRIDE { } void setupCamera() VP_OVERRIDE { } -private: std::vector> m_subRenderers; }; END_VISP_NAMESPACE diff --git a/modules/ar/src/panda3d-simulator/vpPanda3DBaseRenderer.cpp b/modules/ar/src/panda3d-simulator/vpPanda3DBaseRenderer.cpp index 13e9a6f9f0..948d999f7a 100644 --- a/modules/ar/src/panda3d-simulator/vpPanda3DBaseRenderer.cpp +++ b/modules/ar/src/panda3d-simulator/vpPanda3DBaseRenderer.cpp @@ -36,6 +36,8 @@ #include "load_prc_file.h" #include +#include "boundingSphere.h" +#include "boundingBox.h" BEGIN_VISP_NAMESPACE const vpHomogeneousMatrix vpPanda3DBaseRenderer::VISP_T_PANDA({ @@ 
-57,11 +59,11 @@ void vpPanda3DBaseRenderer::initFramework() WindowProperties winProps; winProps.set_size(LVecBase2i(m_renderParameters.getImageWidth(), m_renderParameters.getImageHeight())); int flags = GraphicsPipe::BF_refuse_window; - m_window = std::shared_ptr(m_framework->open_window(winProps, flags)); + m_window = m_framework->open_window(winProps, flags); // try and reopen with visible window if (m_window == nullptr) { winProps.set_minimized(true); - m_window = std::shared_ptr(m_framework->open_window(winProps, 0)); + m_window = m_framework->open_window(winProps, 0); } if (m_window == nullptr) { throw vpException(vpException::notInitialized, @@ -74,7 +76,7 @@ void vpPanda3DBaseRenderer::initFramework() //m_window->get_display_region_3d()->set_camera(m_cameraPath); } -void vpPanda3DBaseRenderer::initFromParent(std::shared_ptr framework, std::shared_ptr window) +void vpPanda3DBaseRenderer::initFromParent(std::shared_ptr framework, PointerTo window) { m_framework = framework; m_window = window; @@ -83,6 +85,11 @@ void vpPanda3DBaseRenderer::initFromParent(std::shared_ptr frame setupRenderTarget(); } +void vpPanda3DBaseRenderer::initFromParent(const vpPanda3DBaseRenderer &renderer) +{ + initFromParent(renderer.m_framework, renderer.m_window); +} + void vpPanda3DBaseRenderer::setupScene() { m_renderRoot = m_window->get_render().attach_new_node(m_name); @@ -101,7 +108,35 @@ void vpPanda3DBaseRenderer::setupCamera() void vpPanda3DBaseRenderer::renderFrame() { + beforeFrameRendered(); m_framework->get_graphics_engine()->render_frame(); + afterFrameRendered(); +} + +void vpPanda3DBaseRenderer::setRenderParameters(const vpPanda3DRenderParameters ¶ms) +{ + unsigned int previousH = m_renderParameters.getImageHeight(), previousW = m_renderParameters.getImageWidth(); + bool resize = previousH != params.getImageHeight() || previousW != params.getImageWidth(); + + m_renderParameters = params; + + if (resize) { + for (GraphicsOutput *buffer: m_buffers) { + 
//buffer->get_type().is_derived_from() + GraphicsBuffer *buf = dynamic_cast<GraphicsBuffer *>(buffer); + if (buf == nullptr) { + throw vpException(vpException::fatalError, "Panda3D: could not cast to GraphicsBuffer when rendering."); + } + else { + buf->set_size(m_renderParameters.getImageWidth(), m_renderParameters.getImageHeight()); + } + } + } + + // If renderer is already initialized, modify camera properties + if (m_camera != nullptr) { + m_renderParameters.setupPandaCamera(m_camera); + } } void vpPanda3DBaseRenderer::setCameraPose(const vpHomogeneousMatrix &wTc) @@ -153,7 +188,7 @@ vpHomogeneousMatrix vpPanda3DBaseRenderer::getNodePose(NodePath &object) return vpHomogeneousMatrix(t, q) * PANDA_T_VISP; } -void vpPanda3DBaseRenderer::computeNearAndFarPlanesFromNode(const std::string &name, float &nearV, float &farV) +void vpPanda3DBaseRenderer::computeNearAndFarPlanesFromNode(const std::string &name, float &nearV, float &farV, bool fast) { if (m_camera == nullptr) { throw vpException(vpException::notInitialized, "Cannot compute planes when the camera is not initialized"); @@ -162,16 +197,88 @@ void vpPanda3DBaseRenderer::computeNearAndFarPlanesFromNode(const std::string &n if (object.is_empty()) { throw vpException(vpException::badValue, "Node %s was not found", name.c_str()); } - LPoint3 minP, maxP; - object.calc_tight_bounds(minP, maxP, m_cameraPath); - nearV = vpMath::maximum(0.f, minP.get_y()); - farV = vpMath::maximum(nearV, maxP.get_y()); + if (!fast) { + LPoint3 minP, maxP; + double t1 = vpTime::measureTimeMs(); + object.calc_tight_bounds(minP, maxP); + const BoundingBox box(minP, maxP); + float minZ = std::numeric_limits<float>::max(), maxZ = 0.f; + const vpHomogeneousMatrix wTcam = getCameraPose(); + const vpHomogeneousMatrix wTobj = getNodePose(name) * vpPanda3DBaseRenderer::PANDA_T_VISP; + const vpHomogeneousMatrix camTobj = wTcam.inverse() * wTobj; + for (unsigned int i = 0; i < 8; ++i) { + const LPoint3 p = box.get_point(i); + const vpColVector pv = vpColVector({ 
p.get_x(), -p.get_z(), p.get_y(), 1.0 }); + vpColVector cpV = camTobj * pv; + cpV /= cpV[3]; + float Z = cpV[2]; + if (Z > maxZ) { + maxZ = Z; + } + if (Z < minZ) { + minZ = Z; + } + } + + nearV = minZ; + farV = maxZ; + } + else { + const BoundingVolume *volume = object.node()->get_bounds(); + if (volume->get_type() == BoundingSphere::get_class_type()) { + const BoundingSphere *sphere = (const BoundingSphere *)volume; + const LPoint3 center = sphere->get_center(); + const float distCenter = (center - m_cameraPath.get_pos()).length(); + nearV = vpMath::maximum(0.f, distCenter - sphere->get_radius()); + farV = vpMath::maximum(nearV, distCenter + sphere->get_radius()); + } + else if (volume->get_type() == BoundingBox::get_class_type()) { + const vpHomogeneousMatrix wTcam = getCameraPose(); + const vpHomogeneousMatrix wTobj = getNodePose(object) * vpPanda3DBaseRenderer::PANDA_T_VISP; + const vpHomogeneousMatrix camTobj = wTcam.inverse() * wTobj; + const BoundingBox *box = (const BoundingBox *)volume; + double minZ = std::numeric_limits::max(), maxZ = 0.0; + + for (unsigned int i = 0; i < 8; ++i) { + const LPoint3 p = box->get_point(i); + vpColVector cp = camTobj * vpColVector({ p.get_x(), -p.get_z(), p.get_y(), 1.0 }); + double Z = cp[2] / cp[3]; + if (Z < minZ) { + minZ = Z; + } + if (Z > maxZ) { + maxZ = Z; + } + } + nearV = minZ; + farV = maxZ; + } + else { + throw vpException(vpException::fatalError, "Unhandled bounding volume %s type returned by Panda3d", volume->get_type().get_name().c_str()); + } + } +} + +void vpPanda3DBaseRenderer::enableSharedDepthBuffer(vpPanda3DBaseRenderer &sourceBuffer) +{ + if (isRendering3DScene()) { + GraphicsOutput *buffer = getMainOutputBuffer(); + if (buffer != nullptr) { + buffer->set_clear_depth_active(false); + if (!buffer->share_depth_buffer(sourceBuffer.getMainOutputBuffer())) { + throw vpException(vpException::fatalError, "Could not share depth buffer!"); + } + } + } } NodePath vpPanda3DBaseRenderer::loadObject(const 
std::string &nodeName, const std::string &modelPath) { NodePath model = m_window->load_model(m_framework->get_models(), modelPath); - std::cout << "After loading model" << std::endl; + for (int i = 0; i < model.get_num_children(); ++i) { + model.get_child(i).clear_transform(); + } + model.detach_node(); model.set_name(nodeName); return model; diff --git a/modules/ar/src/panda3d-simulator/vpPanda3DCommonFilters.cpp b/modules/ar/src/panda3d-simulator/vpPanda3DCommonFilters.cpp index 5ef57c4b48..232a2302eb 100644 --- a/modules/ar/src/panda3d-simulator/vpPanda3DCommonFilters.cpp +++ b/modules/ar/src/panda3d-simulator/vpPanda3DCommonFilters.cpp @@ -171,7 +171,7 @@ void main() { sum_v += pix * kernel_v[i]; } - vec2 orientationAndValid = sum_h * sum_h + sum_v * sum_v > 0 ? vec2(atan(sum_v/sum_h), 1.f) : vec2(0.f, 0.f); + vec2 orientationAndValid = sum_h * sum_h + sum_v * sum_v > 0 ? vec2(-atan(sum_v/sum_h), 1.f) : vec2(0.f, 0.f); p3d_FragData = vec4(sum_h, sum_v, orientationAndValid.x, orientationAndValid.y); } else { p3d_FragData = vec4(0.f, 0.f, 0.f, 0.f); diff --git a/modules/ar/src/panda3d-simulator/vpPanda3DGeometryRenderer.cpp b/modules/ar/src/panda3d-simulator/vpPanda3DGeometryRenderer.cpp index 7700089823..41edf3e0f0 100644 --- a/modules/ar/src/panda3d-simulator/vpPanda3DGeometryRenderer.cpp +++ b/modules/ar/src/panda3d-simulator/vpPanda3DGeometryRenderer.cpp @@ -51,19 +51,15 @@ out float distToCamera; void main() { - //gl_Position = ftransform(); - gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex; - // View space is Z-up right handed, flip z and y - oNormal = p3d_NormalMatrix * normalize(p3d_Normal); - // oNormal.yz = oNormal.zy; - // oNormal.y = -oNormal.y; - vec4 cs_position = p3d_ModelViewMatrix * p3d_Vertex; + gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex; + oNormal = p3d_NormalMatrix * normalize(p3d_Normal); + vec4 cs_position = p3d_ModelViewMatrix * p3d_Vertex; distToCamera = -cs_position.z; } )shader"; -const char 
*vpPanda3DGeometryRenderer::SHADER_VERT_NORMAL_AND_DEPTH_WORLD = R"shader( +const char *vpPanda3DGeometryRenderer::SHADER_VERT_NORMAL_AND_DEPTH_OBJECT = R"shader( #version 330 in vec3 p3d_Normal; @@ -73,14 +69,10 @@ uniform mat4 p3d_ModelViewProjectionMatrix; out vec3 oNormal; out float distToCamera; - void main() { - //gl_Position = ftransform(); gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex; - oNormal = normalize(p3d_Normal); - oNormal.yz = oNormal.zy; - oNormal.y = -oNormal.y; + oNormal = vec3(p3d_Normal.x, -p3d_Normal.z, p3d_Normal.y); vec4 cs_position = p3d_ModelViewMatrix * p3d_Vertex; distToCamera = -cs_position.z; } @@ -91,19 +83,12 @@ const char *vpPanda3DGeometryRenderer::SHADER_FRAG_NORMAL_AND_DEPTH = R"shader( in vec3 oNormal; in float distToCamera; -out vec4 p3d_FragColor; -out vec4 fragColor; out vec4 p3d_FragData; - void main() { - vec3 n = normalize(oNormal); - //if (!gl_FrontFacing) - //n = -n; - p3d_FragData = vec4(n, distToCamera); - + p3d_FragData.bgra = vec4(normalize(oNormal), distToCamera); } )shader"; @@ -127,13 +112,13 @@ void vpPanda3DGeometryRenderer::setupScene() PT(Shader) shader; if (m_renderType == OBJECT_NORMALS) { shader = Shader::make(Shader::ShaderLanguage::SL_GLSL, - SHADER_VERT_NORMAL_AND_DEPTH_WORLD, - SHADER_FRAG_NORMAL_AND_DEPTH); + SHADER_VERT_NORMAL_AND_DEPTH_OBJECT, + SHADER_FRAG_NORMAL_AND_DEPTH); } else if (m_renderType == CAMERA_NORMALS) { shader = Shader::make(Shader::ShaderLanguage::SL_GLSL, - SHADER_VERT_NORMAL_AND_DEPTH_CAMERA, - SHADER_FRAG_NORMAL_AND_DEPTH); + SHADER_VERT_NORMAL_AND_DEPTH_CAMERA, + SHADER_FRAG_NORMAL_AND_DEPTH); } m_renderRoot.set_shader(shader); } @@ -153,12 +138,12 @@ void vpPanda3DGeometryRenderer::setupRenderTarget() WindowProperties win_prop; win_prop.set_size(m_renderParameters.getImageWidth(), m_renderParameters.getImageHeight()); // Don't open a window - force it to be an offscreen buffer. 
-  int flags = GraphicsPipe::BF_refuse_window | GraphicsPipe::BF_resizeable;
+  int flags = GraphicsPipe::BF_refuse_window | GraphicsPipe::BF_resizeable | GraphicsPipe::BF_refuse_parasite;
   GraphicsOutput *windowOutput = m_window->get_graphics_output();
   GraphicsEngine *engine = windowOutput->get_engine();
   GraphicsPipe *pipe = windowOutput->get_pipe();
-  m_normalDepthBuffer = engine->make_output(pipe, renderTypeToName(m_renderType), -100, fbp, win_prop, flags,
+  m_normalDepthBuffer = engine->make_output(pipe, renderTypeToName(m_renderType), m_renderOrder, fbp, win_prop, flags,
                                             windowOutput->get_gsg(), windowOutput);
 
   if (m_normalDepthBuffer == nullptr) {
@@ -172,7 +157,7 @@ void vpPanda3DGeometryRenderer::setupRenderTarget()
   m_normalDepthBuffer->set_inverted(windowOutput->get_gsg()->get_copy_texture_inverted());
   fbp.setup_color_texture(m_normalDepthTexture);
   m_normalDepthTexture->set_format(Texture::F_rgba32);
-  m_normalDepthBuffer->add_render_texture(m_normalDepthTexture, GraphicsOutput::RenderTextureMode::RTM_copy_ram);
+  m_normalDepthBuffer->add_render_texture(m_normalDepthTexture, GraphicsOutput::RenderTextureMode::RTM_bind_or_copy, GraphicsOutput::RenderTexturePlane::RTP_color);
   m_normalDepthBuffer->set_clear_color(LColor(0.f));
   m_normalDepthBuffer->set_clear_color_active(true);
   DisplayRegion *region = m_normalDepthBuffer->make_display_region();
@@ -200,15 +185,48 @@ void vpPanda3DGeometryRenderer::getRender(vpImage<vpRGBf> &normals, vpImage<float> &depth) const
+void vpPanda3DGeometryRenderer::getRender(vpImage<vpRGBf> &normals, vpImage<float> &depth, const vpRect &bb, unsigned int h, unsigned w) const
+{
+  normals.resize(h, w);
+  // memset(normals.bitmap, 0, normals.getSize() * sizeof(vpRGBf));
+  depth.resize(normals.getHeight(), normals.getWidth(), 0.f);
+  // memset(depth.bitmap, 0, normals.getSize());
+
+  const unsigned top = static_cast<unsigned>(std::max(0.0, bb.getTop()));
+  const unsigned left = static_cast<unsigned>(std::max(0.0, bb.getLeft()));
+  const unsigned bottom = static_cast<unsigned>(std::min(static_cast<double>(h), bb.getBottom()));
+  const unsigned right = static_cast<unsigned>(std::min(static_cast<double>(w), bb.getRight()));
+  const unsigned numComponents = m_normalDepthTexture->get_num_components();
+  const unsigned rowIncrement = m_renderParameters.getImageWidth() * numComponents; // we ask for only 8 bits image, but we may get an rgb image
+  const float *data = (float *)(&(m_normalDepthTexture->get_ram_image().front()));
+  // Panda3D stores data upside down
+  data += rowIncrement * (m_renderParameters.getImageHeight() - 1);
+  if (numComponents != 4) {
+    throw vpException(vpException::dimensionError, "Expected panda texture to have 4 components!");
+  }
+  for (unsigned int i = 0; i < m_renderParameters.getImageHeight(); ++i) {
+    const float *const rowData = data - i * rowIncrement;
+    vpRGBf *normalRow = normals[top + i];
+    float *depthRow = depth[top + i];
+#pragma omp simd
+    for (unsigned int j = 0; j < m_renderParameters.getImageWidth(); ++j) {
+      normalRow[left + j].R = (rowData[j * 4]);
+      normalRow[left + j].G = (rowData[j * 4 + 1]);
+      normalRow[left + j].B = (rowData[j * 4 + 2]);
+      depthRow[left + j] = (rowData[j * 4 + 3]);
+    }
+  }
+}
+
 void vpPanda3DGeometryRenderer::getRender(vpImage<vpRGBf> &normals) const
 {
   normals.resize(m_normalDepthTexture->get_y_size(), m_normalDepthTexture->get_x_size());
@@ -220,13 +238,12 @@ void vpPanda3DGeometryRenderer::getRender(vpImage<vpRGBf> &normals) const
   float *data = (float *)(&(m_normalDepthTexture->get_ram_image().front()));
   data = data + rowIncrement * (normals.getHeight() - 1);
   rowIncrement = -rowIncrement;
-
   for (unsigned int i = 0; i < normals.getHeight(); ++i) {
     vpRGBf *normalRow = normals[i];
     for (unsigned int j = 0; j < normals.getWidth(); ++j) {
-      normalRow[j].B = (data[j * 4]);
+      normalRow[j].R = (data[j * 4]);
       normalRow[j].G = (data[j * 4 + 1]);
-      normalRow[j].R = (data[j * 4 + 2]);
+      normalRow[j].B = (data[j * 4 + 2]);
     }
     data += rowIncrement;
   }
diff --git a/modules/ar/src/panda3d-simulator/vpPanda3DPostProcessFilter.cpp b/modules/ar/src/panda3d-simulator/vpPanda3DPostProcessFilter.cpp
index 2daa1c9af8..0a123b51f5 100644
--- a/modules/ar/src/panda3d-simulator/vpPanda3DPostProcessFilter.cpp
+++ b/modules/ar/src/panda3d-simulator/vpPanda3DPostProcessFilter.cpp
@@ -66,7 +66,6 @@ void vpPanda3DPostProcessFilter::setupScene()
                           m_fragmentShader);
   m_renderRoot.set_shader(m_shader);
   m_renderRoot.set_shader_input("dp", LVector2f(1.0 / buffer->get_texture()->get_x_size(), 1.0 / buffer->get_texture()->get_y_size()));
-  std::cout << m_fragmentShader << std::endl;
   m_renderRoot.set_texture(buffer->get_texture());
   m_renderRoot.set_attrib(LightRampAttrib::make_identity());
 }
@@ -110,7 +109,7 @@ void vpPanda3DPostProcessFilter::setupRenderTarget()
   //m_buffer->set_inverted(true);
   m_texture = new Texture();
   fbp.setup_color_texture(m_texture);
-  m_buffer->add_render_texture(m_texture, m_isOutput ? GraphicsOutput::RenderTextureMode::RTM_copy_ram : GraphicsOutput::RenderTextureMode::RTM_copy_texture);
+  m_buffer->add_render_texture(m_texture, m_isOutput ? GraphicsOutput::RenderTextureMode::RTM_bind_or_copy : GraphicsOutput::RenderTextureMode::RTM_copy_texture);
   m_buffer->set_clear_color(LColor(0.f));
   m_buffer->set_clear_color_active(true);
   DisplayRegion *region = m_buffer->make_display_region();
@@ -123,7 +122,6 @@ void vpPanda3DPostProcessFilter::setupRenderTarget()
 
 void vpPanda3DPostProcessFilter::setRenderParameters(const vpPanda3DRenderParameters &params)
 {
-  m_renderParameters = params;
   unsigned int previousH = m_renderParameters.getImageHeight(), previousW = m_renderParameters.getImageWidth();
   bool resize = previousH != params.getImageHeight() || previousW != params.getImageWidth();
 
@@ -161,11 +159,11 @@ void vpPanda3DPostProcessFilter::getRenderBasic(vpImage<unsigned char> &I) const
   rowIncrement = -rowIncrement;
 
   for (unsigned int i = 0; i < I.getHeight(); ++i) {
-    data += rowIncrement;
     unsigned char *colorRow = I[i];
     for (unsigned int j = 0; j < I.getWidth(); ++j) {
       colorRow[j] = data[j * numComponents];
     }
+    data += rowIncrement;
   }
 }
 
@@ -184,13 +182,13 @@ void vpPanda3DPostProcessFilter::getRenderBasic(vpImage<vpRGBf> &I) const
   rowIncrement = -rowIncrement;
 
   for (unsigned int i = 0; i < I.getHeight(); ++i) {
-    data += rowIncrement;
     vpRGBf *colorRow = I[i];
     for (unsigned int j = 0; j < I.getWidth(); ++j) {
       colorRow[j].B = data[j * numComponents];
       colorRow[j].G = data[j * numComponents + 1];
       colorRow[j].R = data[j * numComponents + 2];
     }
+    data += rowIncrement;
   }
 }
diff --git a/modules/ar/src/panda3d-simulator/vpPanda3DRGBRenderer.cpp b/modules/ar/src/panda3d-simulator/vpPanda3DRGBRenderer.cpp
index ab6800fdaf..cc429ab31d 100644
--- a/modules/ar/src/panda3d-simulator/vpPanda3DRGBRenderer.cpp
+++ b/modules/ar/src/panda3d-simulator/vpPanda3DRGBRenderer.cpp
@@ -195,6 +195,7 @@ void main()
 
     p3d_FragData += (p3d_LightSource[i].color * attenuation) * nl * (baseColor * vec4(kd, 1.f) + vec4(specularColor, 1.f));
   }
+  p3d_FragData.bgra = p3d_FragData;
 }
 )shader";
 
@@ -274,7 +275,6 @@ void vpPanda3DRGBRenderer::setBackgroundImage(const vpImage<vpRGBa> &background)
   //m_backgroundTexture = TexturePool::load_texture("/home/sfelton/IMG_20230221_165330430.jpg");
   unsigned char *data = (unsigned char *)m_backgroundTexture->modify_ram_image();
-  std::cout << m_backgroundTexture->get_x_size() << ", " << m_backgroundTexture->get_y_size() << std::endl;
   for (unsigned int i = 0; i < background.getHeight(); ++i) {
     const vpRGBa *srcRow = background[background.getHeight() - (i + 1)];
     unsigned char *destRow = data + i * background.getWidth() * 4;
@@ -299,13 +299,15 @@ void vpPanda3DRGBRenderer::getRender(vpImage<vpRGBa> &I) const
 
   for (unsigned int i = 0; i < I.getHeight(); ++i) {
     vpRGBa *colorRow = I[i];
-    for (unsigned int j = 0; j < I.getWidth(); ++j) {
-      // BGRA order in panda3d
-      colorRow[j].B = data[j * 4];
-      colorRow[j].G = data[j * 4 + 1];
-      colorRow[j].R = data[j * 4 + 2];
-      colorRow[j].A = data[j * 4 + 3];
-    }
+
+    memcpy((unsigned char *)(colorRow), data, sizeof(unsigned char) * 4 * I.getWidth());
+    // for (unsigned int j = 0; j < I.getWidth(); ++j) {
+    //   // BGRA order in panda3d
+    //   colorRow[j].R = data[j * 4];
+    //   colorRow[j].G = data[j * 4 + 1];
+    //   colorRow[j].B = data[j * 4 + 2];
+    //   colorRow[j].A = data[j * 4 + 3];
+    // }
     data += rowIncrement;
   }
 }
@@ -349,7 +351,7 @@ void vpPanda3DRGBRenderer::setupRenderTarget()
   m_colorTexture = new Texture();
   fbp.setup_color_texture(m_colorTexture);
   //m_colorTexture->set_format(Texture::Format::F_srgb_alpha);
-  m_colorBuffer->add_render_texture(m_colorTexture, GraphicsOutput::RenderTextureMode::RTM_copy_ram);
+  m_colorBuffer->add_render_texture(m_colorTexture, GraphicsOutput::RenderTextureMode::RTM_copy_texture);
   m_colorBuffer->set_clear_color(LColor(0.f));
   m_colorBuffer->set_clear_color_active(true);
   DisplayRegion *region = m_colorBuffer->make_display_region();
diff --git a/modules/ar/src/panda3d-simulator/vpPanda3DRendererSet.cpp b/modules/ar/src/panda3d-simulator/vpPanda3DRendererSet.cpp
index 10ed67cdff..a59b481812 100644
--- a/modules/ar/src/panda3d-simulator/vpPanda3DRendererSet.cpp
+++ b/modules/ar/src/panda3d-simulator/vpPanda3DRendererSet.cpp
@@ -39,30 +39,25 @@ vpPanda3DRendererSet::vpPanda3DRendererSet(const vpPanda3DRenderParameters &rend
 {
   m_renderParameters = renderParameters;
   load_prc_file_data("", "textures-power-2 none");
+  load_prc_file_data("", "gl-version 3 2");
+  load_prc_file_data("", "no-singular-invert");
 }
 
-
 void vpPanda3DRendererSet::initFramework()
 {
-
-  // load_prc_file_data("", "load-display p3tinydisplay");
-  // load_prc_file_data("", "color-bits 32 32 32");
-  load_prc_file_data("", "gl-version 3 2");
-
-
   if (m_framework.use_count() > 0) {
     throw vpException(vpException::notImplementedError, "Panda3D renderer: Reinitializing is not supported!");
   }
   m_framework = std::shared_ptr<PandaFramework>(new PandaFramework());
+
   m_framework->open_framework();
   WindowProperties winProps;
   winProps.set_size(LVecBase2i(m_renderParameters.getImageWidth(), m_renderParameters.getImageHeight()));
   int flags = GraphicsPipe::BF_refuse_window;
-  m_window = std::shared_ptr<WindowFramework>(m_framework->open_window(winProps, flags));
+  m_window = m_framework->open_window(winProps, flags);
   if (m_window == nullptr) {
     winProps.set_minimized(true);
-    m_window = std::shared_ptr<WindowFramework>(m_framework->open_window(winProps, 0));
+    m_window = m_framework->open_window(winProps, 0);
   }
   if (m_window == nullptr) {
     throw vpException(vpException::fatalError, "Could not open Panda3D window (hidden or visible)");
@@ -74,6 +69,21 @@ void vpPanda3DRendererSet::initFramework()
   }
 }
 
+void vpPanda3DRendererSet::initFromParent(std::shared_ptr<PandaFramework> framework, PointerTo<WindowFramework> window)
+{
+  vpPanda3DBaseRenderer::initFromParent(framework, window);
+  for (std::shared_ptr<vpPanda3DBaseRenderer> &renderer: m_subRenderers) {
+    renderer->initFromParent(m_framework, m_window);
+  }
+}
+
+void vpPanda3DRendererSet::initFromParent(const vpPanda3DBaseRenderer &renderer)
+{
+  vpPanda3DBaseRenderer::initFromParent(renderer);
+  for (std::shared_ptr<vpPanda3DBaseRenderer> &renderer: m_subRenderers) {
+    renderer->initFromParent(*this);
+  }
+}
 
 void vpPanda3DRendererSet::setCameraPose(const vpHomogeneousMatrix &wTc)
 {
@@ -167,10 +177,6 @@ void vpPanda3DRendererSet::addSubRenderer(std::shared_ptr<vpPanda3DBaseRenderer>
     ++it;
   }
   m_subRenderers.insert(it, renderer);
-  for (const auto &r: m_subRenderers) {
-    std::cout << r->getName() << " ";
-  }
-  std::cout << std::endl;
 
   renderer->setRenderParameters(m_renderParameters);
   if (m_framework != nullptr) {
@@ -179,7 +185,14 @@ void vpPanda3DRendererSet::addSubRenderer(std::shared_ptr<vpPanda3DBaseRenderer>
   }
 }
 
-END_VISP_NAMESPACE
+void vpPanda3DRendererSet::enableSharedDepthBuffer(vpPanda3DBaseRenderer &sourceBuffer)
+{
+  for (std::shared_ptr<vpPanda3DBaseRenderer> &subRenderer: m_subRenderers) {
+    if (subRenderer.get() != &sourceBuffer) {
+      subRenderer->enableSharedDepthBuffer(sourceBuffer);
+    }
+  }
+}
 
 #elif !defined(VISP_BUILD_SHARED_LIBS)
 // Work around to avoid warning: libvisp_ar.a(vpPanda3DRendererSet.cpp.o) has no symbols
diff --git a/modules/core/include/visp3/core/vpImage.h b/modules/core/include/visp3/core/vpImage.h
index 5cedde9c8e..59177f746f 100644
--- a/modules/core/include/visp3/core/vpImage.h
+++ b/modules/core/include/visp3/core/vpImage.h
@@ -314,7 +314,11 @@ template <class Type> class vpImage
   vpImage<Type> operator-(const vpImage<Type> &B) const;
 
   //! Copy operator
-  vpImage<Type> &operator=(vpImage<Type> other);
+  vpImage<Type> &operator=(const vpImage<Type> &other);
+#if ((__cplusplus >= 201103L) || (defined(_MSVC_LANG) && (_MSVC_LANG >= 201103L))) // Check if cxx11 or higher
+  //! move constructor
+  vpImage<Type> &operator=(vpImage<Type> &&other);
+#endif
 
   vpImage<Type> &operator=(const Type &v);
   bool operator==(const vpImage<Type> &I) const;
diff --git a/modules/core/include/visp3/core/vpImageFilter.h b/modules/core/include/visp3/core/vpImageFilter.h
index 4039856615..66868f7e2b 100644
--- a/modules/core/include/visp3/core/vpImageFilter.h
+++ b/modules/core/include/visp3/core/vpImageFilter.h
@@ -561,6 +561,31 @@ class VISP_EXPORT vpImageFilter
     }
   }
 
+  /**
+   * @brief Apply a filter at a given image location
+   *
+   * @tparam FilterType Image and filter types: double or float
+   * @param I The input image
+   * @param row The row coordinate where the filter should be applied
+   * @param col The column coordinate where the filter should be applied
+   * @param M the filter
+   */
+  template <typename FilterType>
+  static FilterType filter(const vpImage<FilterType> &I, const vpArray2D<FilterType> &M, unsigned int row, unsigned int col)
+  {
+    const unsigned int size_y = M.getRows(), size_x = M.getCols();
+    const unsigned int half_size_y = size_y / 2, half_size_x = size_x / 2;
+    FilterType corr = 0;
+
+    for (unsigned int a = 0; a < size_y; ++a) {
+      for (unsigned int b = 0; b < size_x; ++b) {
+        FilterType val = static_cast<FilterType>(I[row - half_size_y + a][col - half_size_x + b]); // Correlation
+        corr += M[a][b] * val;
+      }
+    }
+    return corr;
+  }
+
 #if ((__cplusplus >= 201103L) || (defined(_MSVC_LANG) && (_MSVC_LANG >= 201103L))) // Check if cxx11 or higher
   template <typename ImageType, typename FilterType>
   static void filter(const vpImage<ImageType> &I, vpImage<FilterType> &If, const vpArray2D<FilterType> &M, bool convolve = false) = delete;
diff --git a/modules/core/include/visp3/core/vpImage_operators.h b/modules/core/include/visp3/core/vpImage_operators.h
index fbddbdd292..be9ae68025 100644
--- a/modules/core/include/visp3/core/vpImage_operators.h
+++ b/modules/core/include/visp3/core/vpImage_operators.h
@@ -182,15 +182,50 @@ inline std::ostream &operator<<(std::ostream &s, const vpImage<double> &I)
 /*!
   \brief Copy operator
 */
-template <class Type> vpImage<Type> &vpImage<Type>::operator=(vpImage<Type> other)
+template <class Type> vpImage<Type> &vpImage<Type>::operator=(const vpImage<Type> &other)
 {
-  swap(*this, other);
+
+  resize(other.height, other.width);
+  memcpy(static_cast<void *>(bitmap), static_cast<void *>(other.bitmap), other.npixels * sizeof(Type));
+  if (other.display != nullptr) {
+    display = other.display;
+  }
+
+  return *this;
+}
+
+#if ((__cplusplus >= 201103L) || (defined(_MSVC_LANG) && (_MSVC_LANG >= 201103L))) // Check if cxx11 or higher
+
+template <class Type> vpImage<Type> &vpImage<Type>::operator=(vpImage<Type> &&other)
+{
+
+  if (row != nullptr) {
+    delete[] row;
+  }
+  row = other.row;
+  if (bitmap != nullptr && hasOwnership) {
+    delete[] bitmap;
+  }
+  bitmap = other.bitmap;
   if (other.display != nullptr) {
     display = other.display;
   }
+  height = other.height;
+  width = other.width;
+  npixels = other.npixels;
+  hasOwnership = other.hasOwnership;
+
+  other.bitmap = nullptr;
+  other.display = nullptr;
+  other.npixels = 0;
+  other.width = 0;
+  other.height = 0;
+  other.row = nullptr;
+  other.hasOwnership = false;
 
   return *this;
 }
+#endif
 
 /*!
   \brief = operator : Set all the element of the bitmap to a given value \e
diff --git a/modules/gui/include/visp3/gui/vpDisplayFactory.h b/modules/gui/include/visp3/gui/vpDisplayFactory.h
index 713dea9678..59585ed1c6 100644
--- a/modules/gui/include/visp3/gui/vpDisplayFactory.h
+++ b/modules/gui/include/visp3/gui/vpDisplayFactory.h
@@ -175,7 +175,8 @@ std::shared_ptr<vpDisplay> createDisplay()
 */
 template <typename T>
 std::shared_ptr<vpDisplay> createDisplay(vpImage<T> &I, const int winx = -1, const int winy = -1,
-                                         const std::string &title = "", const vpDisplay::vpScaleType &scaleType = vpDisplay::SCALE_DEFAULT)
+                                         const std::string &title = "",
+                                         const vpDisplay::vpScaleType &scaleType = vpDisplay::SCALE_DEFAULT)
 {
 #if defined(VISP_HAVE_DISPLAY)
 #ifdef VISP_HAVE_X11
@@ -199,8 +200,100 @@ std::shared_ptr<vpDisplay> createDisplay(vpImage<T> &I, const int winx = -1, con
   return std::shared_ptr<vpDisplay>(nullptr);
 #endif
 }
-#endif
 
+namespace impl
+{
+struct GridSettings
+{
+  unsigned int rows;
+  unsigned int cols;
+  unsigned int startY;
+  unsigned int startX;
+  unsigned int paddingX;
+  unsigned int paddingY;
+};
+
+void makeDisplayGridHelper(std::vector<std::shared_ptr<vpDisplay>> &res, const GridSettings &settings,
+                           unsigned int currRow, unsigned int currCol,
+                           unsigned int currentPixelX, unsigned int currentPixelY,
+                           unsigned int maxRowHeightPixel)
+{
+  if (currRow != (settings.rows - 1) && (currCol != settings.cols - 1)) {
+    throw vpException(vpException::dimensionError, "Too few images for the grid size");
+  }
+  (void)res;
+  (void)settings;
+  (void)currRow;
+  (void)currCol;
+  (void)currentPixelX;
+  (void)currentPixelY;
+  (void)maxRowHeightPixel;
+}
+
+template <typename T, typename... Args>
+void makeDisplayGridHelper(std::vector<std::shared_ptr<vpDisplay>> &res, const GridSettings &settings,
+                           unsigned int currRow, unsigned int currCol,
+                           unsigned int currentPixelX, unsigned int currentPixelY,
+                           const unsigned int maxRowHeightPixel,
+                           const std::string &name, vpImage<T> &I, Args&... args)
+{
+  if (currRow >= settings.rows) {
+    throw vpException(vpException::dimensionError, "Too many images for the grid size");
+  }
+  if (currCol == settings.cols) {
+    makeDisplayGridHelper(res, settings, currRow + 1, 0, settings.startX,
+                          currentPixelY + maxRowHeightPixel + settings.paddingY, 0, name, I, args...);
+  }
+  else {
+    std::shared_ptr<vpDisplay> display = vpDisplayFactory::createDisplay(I, currentPixelX, currentPixelY, name);
+    vpDisplay::display(I);
+    vpDisplay::flush(I);
+    res.push_back(display);
+    makeDisplayGridHelper(res, settings, currRow, currCol + 1, currentPixelX + I.getWidth() + settings.paddingX,
+                          currentPixelY, std::max(maxRowHeightPixel, I.getHeight()), args...);
+  }
+}
+}
+
+/**
+ * \brief Create a grid of displays, given a set of images.
+ * All the displays will be initialized in the correct location with the content of the associated image and name.
+ * All the images should have been initialized before with the correct resolution.
+ * The display creation and image association will follow a row major order.
+ *
+ * \tparam Args A sequence of display name (const std::string&) and ViSP image.
+ * The name should always come before the image. The image can be vpImage<unsigned char> or vpImage<vpRGBa>
+ * \param rows Number of rows in the grid
+ * \param cols Number of columns in the grid
+ * \param startX The starting left position of the grid
+ * \param startY The starting top localization of the grid
+ * \param paddingX Horizontal padding between windows
+ * \param paddingY Vertical padding between windows
+ * \param args The name => image => name sequence
+ * \return std::vector<std::shared_ptr<vpDisplay>> The allocated displays.
+ *
+ * \throws If the grid dimensions and number of images do not match
+ *
+ */
+template <typename... Args>
+std::vector<std::shared_ptr<vpDisplay>> makeDisplayGrid(unsigned int rows, unsigned int cols,
+                                                        unsigned int startX, unsigned int startY,
+                                                        unsigned int paddingX, unsigned int paddingY,
+                                                        Args&... args)
+{
+  std::vector<std::shared_ptr<vpDisplay>> res;
+  impl::GridSettings settings;
+  settings.rows = rows;
+  settings.cols = cols;
+  settings.paddingX = paddingX;
+  settings.paddingY = paddingY;
+  settings.startX = startX;
+  settings.startY = startY;
+  makeDisplayGridHelper(res, settings, 0, 0, settings.startX, settings.startY, 0, args...);
+  return res;
 }
+#endif
+}
 
 END_VISP_NAMESPACE
 #endif
diff --git a/modules/io/include/visp3/io/vpJsonArgumentParser.h b/modules/io/include/visp3/io/vpJsonArgumentParser.h
index 9a3db027b2..a95bfdf2a7 100644
--- a/modules/io/include/visp3/io/vpJsonArgumentParser.h
+++ b/modules/io/include/visp3/io/vpJsonArgumentParser.h
@@ -55,6 +55,7 @@ nlohmann::json convertCommandLineArgument(const std::string &arg)
   nlohmann::json j = nlohmann::json::parse(arg);
   return j;
 }
+
 /**
  * @brief Specialization of command line parsing for strings: a shell may eat the quotes, which would be necessary for JSON parsing to work.
  * This function thus directly converts the string to a JSON representation: no parsing is performed.
@@ -136,6 +137,13 @@ BEGIN_VISP_NAMESPACE
 class VISP_EXPORT vpJsonArgumentParser
 {
 public:
+
+  enum vpJsonArgumentType
+  {
+    WITH_FIELD = 0,
+    FLAG = 1
+  };
+
   /**
    * @brief Create a new argument parser, that can take into account both a JSON configuration file and command line arguments.
    *
@@ -163,7 +171,6 @@ class VISP_EXPORT vpJsonArgumentParser
    */
   std::string help() const;
 
-
   /**
    * @brief Add an argument that can be provided by the user, either via command line or through the json file.
   *
@@ -181,6 +188,7 @@ template <typename T>
   vpJsonArgumentParser &addArgument(const std::string &name, T &parameter, const bool required = true, const std::string &help = "No description")
   {
+    argumentType[name] = WITH_FIELD;
     const auto getter = [name, this](nlohmann::json &j, bool create) -> nlohmann::json * {
       size_t pos = 0;
       nlohmann::json *f = &j;
@@ -246,6 +254,17 @@ class VISP_EXPORT vpJsonArgumentParser
     return *this;
   }
 
+  /**
+   * @brief Add an argument that acts as a flag when specified on the command line.
+   * When this flag is specified, the boolean passed in argument will be inverted.
+   *
+   * @param name Name of the flag.
+   * @param parameter The boolean to modify when the flag is specified
+   * @param help The description of the argument.
+   * @return vpJsonArgumentParser& returns self, allowing argument definition chaining
+   */
+  vpJsonArgumentParser &addFlag(const std::string &name, bool &parameter, const std::string &help = "No description");
+
   /**
    * @brief Parse the arguments.
    *
@@ -254,12 +273,12 @@ class VISP_EXPORT vpJsonArgumentParser
    */
   void parse(int argc, const char *argv[]);
 
-
 private:
   std::string description; // Program description
   std::string jsonFileArgumentName; // Name of the argument that points to the json file: ./program --config settings.json. Here jsonFileArgumentName == "--config"
   std::string nestSeparator; // JSON nesting delimiter character. Used to access JSON nested objects from a single string
   std::map<std::string, std::function<void(nlohmann::json &)>> parsers; // Functions that update the variables with the values contained in the JSON document (should be used after calling updaters)
+  std::map<std::string, vpJsonArgumentType> argumentType; // Whether each argument expects a value (WITH_FIELD) or acts as a flag (FLAG)
   std::map<std::string, std::function<void(nlohmann::json &, const std::string &)>> updaters; // Update the base json document with command line arguments
   std::map<std::string, std::function<std::string()>> helpers; // Functions that output the usage and description of command line arguments: used when the help flag is given as argument
   nlohmann::json exampleJson; // Example JSON argument file: displayed when user calls for help
diff --git a/modules/io/src/tools/vpJsonArgumentParser.cpp b/modules/io/src/tools/vpJsonArgumentParser.cpp
index bfe68f06c2..b8d5baea35 100644
--- a/modules/io/src/tools/vpJsonArgumentParser.cpp
+++ b/modules/io/src/tools/vpJsonArgumentParser.cpp
@@ -94,6 +94,63 @@ std::string vpJsonArgumentParser::help() const
   return ss.str();
 }
 
+vpJsonArgumentParser &vpJsonArgumentParser::addFlag(const std::string &name, bool &parameter, const std::string &help)
+{
+  argumentType[name] = FLAG;
+  const auto getter = [name, this](nlohmann::json &j, bool create) -> nlohmann::json * {
+    size_t pos = 0;
+    nlohmann::json *f = &j;
+    std::string token;
+    std::string name_copy = name;
+
+    while ((pos = name_copy.find(nestSeparator)) != std::string::npos) {
+      token = name_copy.substr(0, pos);
+      name_copy.erase(0, pos + nestSeparator.length());
+      if (create && !f->contains(token)) {
+        (*f)[token] = {};
+      }
+      else if (!f->contains(token)) {
+        return nullptr;
+      }
+      f = &(f->at(token));
+    }
+    if (create && !f->contains(name_copy)) {
+      (*f)[name_copy] = {};
+    }
+    else if (!f->contains(name_copy)) {
+      return nullptr;
+    }
+    f = &(f->at(name_copy));
+    return f;
+  };
+
+  parsers[name] = [&parameter, getter, name](nlohmann::json &j) {
+    const nlohmann::json *field = getter(j, false);
+    const bool fieldHasNoValue = ((field == nullptr) || (field != nullptr && field->is_null()));
+    if (!fieldHasNoValue && (field->type() == json::value_t::boolean && (*field) == true)) {
+      parameter = !parameter;
+    }
+  };
+
+  updaters[name] = [getter](nlohmann::json &j, const std::string &) {
+    nlohmann::json *field = getter(j, true);
+    *field = true;
+  };
+
+  helpers[name] = [help, parameter]() -> std::string {
+    std::stringstream ss;
+    nlohmann::json repr = parameter;
+    ss << help << std::endl << "Default: " << repr;
+    return ss.str();
+  };
+
+  nlohmann::json *exampleField = getter(exampleJson, true);
+  *exampleField = parameter;
+
+  return *this;
+}
+
 void vpJsonArgumentParser::parse(int argc, const char *argv[])
 {
   json j;
@@ -132,14 +189,19 @@ void vpJsonArgumentParser::parse(int argc, const char *argv[])
     }
 
     if (parsers.find(arg) != parsers.end()) {
-      if (i < argc - 1) {
-        updaters[arg](j, std::string(argv[i + 1]));
-        ++i;
+      if (argumentType[arg] == WITH_FIELD) {
+        if (i < argc - 1) {
+          updaters[arg](j, std::string(argv[i + 1]));
+          ++i;
+        }
+        else {
+          std::stringstream ss;
+          ss << "Argument " << arg << " was passed but no value was provided" << std::endl;
+          throw vpException(vpException::ioError, ss.str());
+        }
       }
-      else {
-        std::stringstream ss;
-        ss << "Argument " << arg << " was passed but no value was provided" << std::endl;
-        throw vpException(vpException::ioError, ss.str());
+      else if (argumentType[arg] == FLAG) {
+        updaters[arg](j, std::string());
       }
     }
     else {
diff --git a/modules/io/test/testJsonArgumentParser.cpp b/modules/io/test/testJsonArgumentParser.cpp
index 9678b4b205..02735d197b 100644
--- a/modules/io/test/testJsonArgumentParser.cpp
+++ b/modules/io/test/testJsonArgumentParser.cpp
@@ -118,9 +118,10 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
     {"b", 2.0},
     {"c", "a string"},
     {"d", true},
-    {"e", {
-      {"a", 5}
-    }}
+    {"flag", true},
+    {"flagDefaultTrue", true},
+    {"e", {{"a", 5} }
+    }
   };
   saveJson(j, jsonPath);
 
@@ -129,6 +130,10 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
   std::string c = "";
   bool d = false;
   int ea = 4;
+  bool flag = false;
+  bool flagInitialValue = flag;
+
+  bool invertedFlag = true;
 
   WHEN("Declaring a parser with all parameters required") {
     vpJsonArgumentParser parser("A program", "--config", "/");
@@ -136,7 +141,9 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
       .addArgument("b", b, true)
       .addArgument("c", c, true)
       .addArgument("d", d, true)
-      .addArgument("e/a", ea, true);
+      .addArgument("e/a", ea, true)
+      .addFlag("flag", flag)
+      .addFlag("flagDefaultTrue", invertedFlag);
 
     THEN("Calling the parser without any argument fails")
     {
@@ -162,7 +169,8 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
       REQUIRE(c == j["c"]);
      REQUIRE(d == j["d"]);
       REQUIRE(ea == j["e"]["a"]);
-
+      REQUIRE(flag != flagInitialValue);
+      REQUIRE(invertedFlag != true);
     }
     THEN("Calling the parser by specifying the json argument but leaving the file path empty throws an error")
     {
@@ -177,26 +185,30 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
     {
       const int argc = 3;
       for (const auto &jsonElem : j.items()) {
-        modifyJson([&jsonElem](json &j) { j.erase(jsonElem.key()); });
-        const char *argv[] = {
-          "program",
-          "--config",
-          jsonPath.c_str()
-        };
-        REQUIRE_THROWS(parser.parse(argc, argv));
+        if (jsonElem.key().rfind("flag", 0) != 0) {
+          modifyJson([&jsonElem](json &j) { j.erase(jsonElem.key()); });
+          const char *argv[] = {
+            "program",
+            "--config",
+            jsonPath.c_str()
+          };
+          REQUIRE_THROWS(parser.parse(argc, argv));
+        }
       }
     }
     THEN("Calling the parser with only the json file but setting a random field to null throws an error")
     {
       const int argc = 3;
       for (const auto &jsonElem : j.items()) {
-        modifyJson([&jsonElem](json &j) { j[jsonElem.key()] = nullptr; });
-        const char *argv[] = {
-          "program",
-          "--config",
-          jsonPath.c_str()
-        };
-        REQUIRE_THROWS(parser.parse(argc, argv));
+        if (jsonElem.key().rfind("flag", 0) != 0) {
+          modifyJson([&jsonElem](json &j) { j[jsonElem.key()] = nullptr; });
+          const char *argv[] = {
+            "program",
+            "--config",
+            jsonPath.c_str()
+          };
+          REQUIRE_THROWS(parser.parse(argc, argv));
+        }
       }
     }
     THEN("Calling the parser with an invalid json file path throws an error")
@@ -223,7 +235,8 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
         "b", std::to_string(newb),
         "c", newc,
         "d", newdstr,
-        "e/a", std::to_string(newea)
+        "e/a", std::to_string(newea),
+        "flag"
       };
       int argc;
       std::vector<const char *> argv;
@@ -234,6 +247,7 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
       REQUIRE(c == newc);
       REQUIRE(d == newd);
       REQUIRE(ea == newea);
+      REQUIRE(flag != flagInitialValue);
     }
 
     THEN("Calling the parser with JSON and command line argument works")
@@ -244,7 +258,8 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
         "program",
         "--config", jsonPath,
         "a", std::to_string(newa),
-        "b", std::to_string(newb)
+        "b", std::to_string(newb),
+        "flagDefaultTrue"
       };
       int argc;
       std::vector<const char *> argv;
@@ -255,7 +270,7 @@ SCENARIO("Parsing arguments from JSON file", "[json]")
       REQUIRE(c == j["c"]);
       REQUIRE(d == j["d"]);
       REQUIRE(ea == j["e"]["a"]);
-
+      REQUIRE(invertedFlag == false);
     }
     THEN("Calling the parser with a missing argument value throws an error")
    {
diff --git a/modules/python/generator/visp_python_bindgen/header.py b/modules/python/generator/visp_python_bindgen/header.py
index 9ce9702cec..b0617e07ae 100644
--- a/modules/python/generator/visp_python_bindgen/header.py
+++ b/modules/python/generator/visp_python_bindgen/header.py
@@ -362,11 +362,16 @@ def add_method_doc_to_pyargs(method: types.Method, py_arg_strs: List[str]) -> Li
       method_name = get_name(method.name)
       params_strs = [get_type(param.type, owner_specs, header_env.mapping) for param in method.parameters]
       py_arg_strs = get_py_args(method.parameters, owner_specs, header_env.mapping)
+      param_names = [param.name or 'arg' + str(i) for i, param in enumerate(method.parameters)]
       py_arg_strs = add_method_doc_to_pyargs(method, py_arg_strs)
 
       ctor_str = f'''{python_ident}.{define_constructor(params_strs, py_arg_strs)};'''
+      if 'simpleAdd' in ctor_str:
+        print(params_strs)
+        print(header_env.mapping)
+        raise RuntimeError
       add_to_method_dict('__init__', MethodBinding(ctor_str, is_static=False, is_lambda=False, is_operator=False, is_constructor=True))
diff --git a/modules/python/generator/visp_python_bindgen/methods.py b/modules/python/generator/visp_python_bindgen/methods.py
index 92164afcba..5e3a064bf5 100644
--- a/modules/python/generator/visp_python_bindgen/methods.py
+++ b/modules/python/generator/visp_python_bindgen/methods.py
@@ -211,6 +211,7 @@ def parameter_can_have_default_value(parameter: types.Parameter, specs, env_mapp
     is_const = t.ptr_to.const
   else:
     type_name = ''
+
   if GeneratorConfig.is_forbidden_default_argument_type(type_name):
     return False
@@ -341,8 +342,6 @@ def make_keep_alive_str(values) -> str:
   else:
     pybind_options = [method_doc.documentation] + pybind_options
 
-
-
   # If a function has refs to immutable params, we need to return them.
   # Also true if user has specified input cpp params as output python params
   should_wrap_for_tuple_return = param_is_output is not None and any(param_is_output)
@@ -361,7 +360,6 @@ def to_argument_name(type: str, name: str) -> str:
   else:
     return type + ' ' + name
 
-
 params_with_names = [to_argument_name(t, name) for t, name in zip(input_param_types, input_param_names)]
 
 # Params that are only outputs: they should be declared in function. Assume that they are default constructible
diff --git a/modules/tracker/klt/include/visp3/klt/vpKltOpencv.h b/modules/tracker/klt/include/visp3/klt/vpKltOpencv.h
index a4103e542b..98bb462da4 100644
--- a/modules/tracker/klt/include/visp3/klt/vpKltOpencv.h
+++ b/modules/tracker/klt/include/visp3/klt/vpKltOpencv.h
@@ -119,7 +119,7 @@ class VISP_EXPORT vpKltOpencv
    * \param color : Color used to display the features.
    * \param thickness : Thickness of the drawings.
    */
-  void display(const vpImage<unsigned char> &I, const vpColor &color = vpColor::red, unsigned int thickness = 1);
+  void display(const vpImage<unsigned char> &I, const vpColor &color = vpColor::red, unsigned int thickness = 1) const;
 
   /*!
    * Display features list.
   *
diff --git a/modules/tracker/klt/src/vpKltOpencv.cpp b/modules/tracker/klt/src/vpKltOpencv.cpp
index 9f137744d2..66a4e4de3d 100644
--- a/modules/tracker/klt/src/vpKltOpencv.cpp
+++ b/modules/tracker/klt/src/vpKltOpencv.cpp
@@ -166,7 +166,7 @@ void vpKltOpencv::getFeature(const int &index, long &id, float &x, float &y) con
   id = m_points_id[(size_t)index];
 }

-void vpKltOpencv::display(const vpImage<unsigned char> &I, const vpColor &color, unsigned int thickness)
+void vpKltOpencv::display(const vpImage<unsigned char> &I, const vpColor &color, unsigned int thickness) const
 {
   vpKltOpencv::display(I, m_points[1], m_points_id, color, thickness);
 }
diff --git a/modules/tracker/mbt/src/vpMbGenericTracker.cpp b/modules/tracker/mbt/src/vpMbGenericTracker.cpp
index 61e4218109..760296c065 100644
--- a/modules/tracker/mbt/src/vpMbGenericTracker.cpp
+++ b/modules/tracker/mbt/src/vpMbGenericTracker.cpp
@@ -253,7 +253,7 @@ double vpMbGenericTracker::computeCurrentProjectionError(const vpImage<vpRGBa> &
                                                          const vpCameraParameters &_cam)
 {
   vpImage<unsigned char> I;
-  vpImageConvert::convert(I_color, I); // FS: Shoudn't we use here m_I that was converted in track() ?
+  vpImageConvert::convert(I_color, I); // FS: Shouldn't we use here m_I that was converted in track() ?

   return computeCurrentProjectionError(I, _cMo, _cam);
 }
diff --git a/modules/tracker/me/include/visp3/me/vpMeSite.h b/modules/tracker/me/include/visp3/me/vpMeSite.h
index 93ef098e3f..e5f94c5dbb 100644
--- a/modules/tracker/me/include/visp3/me/vpMeSite.h
+++ b/modules/tracker/me/include/visp3/me/vpMeSite.h
@@ -146,13 +146,13 @@ class VISP_EXPORT vpMeSite
    * Display moving edges in image I.
    * @param I : Input image.
    */
-  void display(const vpImage<unsigned char> &I);
+  void display(const vpImage<unsigned char> &I) const;

   /*!
    * Display moving edges in image I.
    * @param I : Input image.
    */
-  void display(const vpImage<vpRGBa> &I);
+  void display(const vpImage<vpRGBa> &I) const;

   /*!
    * Get the angle of tangent at site.
@@ -237,6 +237,20 @@ class VISP_EXPORT vpMeSite
    */
   void track(const vpImage<unsigned char> &I, const vpMe *me, const bool &test_contrast = true);

+  /*!
+   * Similar to the track() function, but stores the best numCandidates hypotheses in `outputHypotheses`.
+   * The best matching hypothesis (if it is not suppressed) is assigned to *this* and is stored as the first
+   * element of `outputHypotheses`.
+   * The hypotheses are sorted from best to worst match in the vector.
+   * A match may be in the vector but marked as suppressed. If this is undesired, you should filter them afterwards.
+   *
+   * \throws If `numCandidates` is greater than me.getRange() * 2 + 1.
+   *
+   * \warning To display the moving edges graphics, a call to vpDisplay::flush() is needed after this function.
+   */
+  void trackMultipleHypotheses(const vpImage<unsigned char> &I, const vpMe &me, const bool &test_contrast,
+                               std::vector<vpMeSite> &outputHypotheses, const unsigned numCandidates);
+
   /*!
    * Set the angle of tangent at site.
    *
@@ -306,9 +320,27 @@ class VISP_EXPORT vpMeSite
    */
   inline double getContrastThreshold() const { return m_contrastThreshold; }

+
   /*!
-   * Copy operator.
+   * Get the final computed likelihood threshold value, depending on the likelihood threshold type and the ME settings.
+   *
+   * \return The final threshold value.
    */
+  inline double computeFinalThreshold(const vpMe &me) const
+  {
+    const double threshold = getContrastThreshold();
+    if (me.getLikelihoodThresholdType() == vpMe::NORMALIZED_THRESHOLD) {
+      return 2.0 * threshold;
+    }
+    else {
+      const double n_d = me.getMaskSize();
+      return threshold / (100.0 * n_d * trunc(n_d / 2.0));
+    }
+  }
+
+  /*!
+   * Copy operator.
+   */
   vpMeSite &operator=(const vpMeSite &m);

   /*!
diff --git a/modules/tracker/me/src/moving-edges/vpMeSite.cpp b/modules/tracker/me/src/moving-edges/vpMeSite.cpp
index d7fe541c1e..cc252715c0 100644
--- a/modules/tracker/me/src/moving-edges/vpMeSite.cpp
+++ b/modules/tracker/me/src/moving-edges/vpMeSite.cpp
@@ -39,6 +39,8 @@
 #include <cmath>  // std::fabs
 #include <limits> // numeric_limits
 #include
+#include <map>
+
 #include
 #include
 #include
@@ -46,6 +48,17 @@

 BEGIN_VISP_NAMESPACE
 #ifndef DOXYGEN_SHOULD_SKIP_THIS
+
+struct vpMeSiteHypothesis
+{
+  vpMeSiteHypothesis(vpMeSite *site, double l, double c) : site(site), likelihood(l), contrast(c)
+  { }
+
+  vpMeSite *site;
+  double likelihood;
+  double contrast;
+};
+
 static bool horsImage(int i, int j, int half, int rows, int cols)
 {
   int half_1 = half + 1;
@@ -276,65 +289,47 @@ void vpMeSite::track(const vpImage<unsigned char> &I, const vpMe *me, const bool
   // range = +/- range of pixels within which the correspondent
   // of the current pixel will be sought
   unsigned int range = me->getRange();
+  const unsigned int normalSides = 2;
+  const unsigned int numQueries = range * normalSides + 1;

   vpMeSite *list_query_pixels = getQueryList(I, static_cast<int>(range));

   double contrast_max = 1 + me->getMu2();
   double contrast_min = 1 - me->getMu1();

-  // array in which likelihood ratios will be stored
-  double *likelihood = new double[(2 * range) + 1];
-
-  const unsigned int val_2 = 2;
+  double threshold = computeFinalThreshold(*me);
+
   if (test_contrast) {
     double diff = 1e6;
-    for (unsigned int n = 0; n < ((val_2 * range) + 1); ++n) {
-      // convolution results
-      double convolution_ = list_query_pixels[n].convolution(I, me);
-      double threshold = list_query_pixels[n].getContrastThreshold();
-
-      if (me->getLikelihoodThresholdType() == vpMe::NORMALIZED_THRESHOLD) {
-        threshold = 2.0 * threshold;
-      }
-      else {
-        double n_d = me->getMaskSize();
-        threshold = threshold / (100.0 * n_d * trunc(n_d / 2.0));
-      }
+    for (unsigned int n = 0; n < numQueries; ++n) {
+      // convolution results
+      double convolution_ = list_query_pixels[n].convolution(I, me);

       // luminance ratio of reference pixel to potential correspondent pixel
       // the luminance must be similar, hence the ratio value should
       // lay between, for instance, 0.5 and 1.5 (parameter tolerance)
-      likelihood[n] = fabs(convolution_ + m_convlt);
+      const double likelihood = fabs(convolution_ + m_convlt);

-      if (likelihood[n] > threshold) {
+      if (likelihood > threshold) {
         contrast = convolution_ / m_convlt;
         if ((contrast > contrast_min) && (contrast < contrast_max) && (fabs(1 - contrast) < diff)) {
           diff = fabs(1 - contrast);
           max_convolution = convolution_;
-          max = likelihood[n];
+          max = likelihood;
           max_rank = static_cast<int>(n);
         }
       }
     }
   }
   else { // test on contrast only
-    for (unsigned int n = 0; n < ((val_2 * range) + 1); ++n) {
-      double threshold = list_query_pixels[n].getContrastThreshold();
-
-      if (me->getLikelihoodThresholdType() == vpMe::NORMALIZED_THRESHOLD) {
-        threshold = 2.0 * threshold;
-      }
-      else {
-        double n_d = me->getMaskSize();
-        threshold = threshold / (100.0 * n_d * trunc(n_d / 2.0));
-      }
+    for (unsigned int n = 0; n < numQueries; ++n) {
       // convolution results
       double convolution_ = list_query_pixels[n].convolution(I, me);
-      likelihood[n] = fabs(val_2 * convolution_);
-      if ((likelihood[n] > max) && (likelihood[n] > threshold)) {
+      const double likelihood = fabs(2 * convolution_);
+      if ((likelihood > max) && (likelihood > threshold)) {
         max_convolution = convolution_;
-        max = likelihood[n];
+        max = likelihood;
         max_rank = static_cast<int>(n);
       }
     }
@@ -354,9 +349,6 @@ void vpMeSite::track(const vpImage<unsigned char> &I, const vpMe *me, const bool
     m_normGradient = vpMath::sqr(max_convolution);
     m_convlt = max_convolution;
-
-    delete[] list_query_pixels;
-    delete[] likelihood;
   }
   else // none of the query sites is better than the threshold
   {
@@ -373,16 +365,127 @@ void vpMeSite::track(const vpImage<unsigned char> &I, const vpMe *me, const bool
       m_state = THRESHOLD; // threshold suppression
     }
-
-    delete[] list_query_pixels;
-    delete[] likelihood; // modif portage
   }
+
+  delete[] list_query_pixels;
+}
+
+void vpMeSite::trackMultipleHypotheses(const vpImage<unsigned char> &I, const vpMe &me, const bool &test_contrast,
+                                       std::vector<vpMeSite> &outputHypotheses, const unsigned numCandidates)
+{
+
+  // range = +/- range of pixels within which the correspondent
+  // of the current pixel will be sought
+  unsigned int range = me.getRange();
+  const unsigned int numQueries = range * 2 + 1;
+
+  if (numCandidates > numQueries) {
+    throw vpException(vpException::badValue, "Error in vpMeSite::trackMultipleHypotheses: the number of retained hypotheses cannot be greater than the number of queried sites.");
+  }
+
+  vpMeSite *list_query_pixels = getQueryList(I, static_cast<int>(range));
+
+  // Insert into a map, where the key is the sorting criterion (negative likelihood or contrast diff)
+  // and the value is the ME site + its computed likelihood and contrast.
+  // After computation: iterating on the map is guaranteed to be done with the keys sorted according to the criterion.
+  // A multimap allows multiple values (sites) with the same key (likelihood/contrast diff).
+  // Only the candidates that are above the threshold are kept.
+  std::multimap<double, vpMeSiteHypothesis> candidates;
+
+  const double contrast_max = 1 + me.getMu2();
+  const double contrast_min = 1 - me.getMu1();
+
+  const double threshold = computeFinalThreshold(me);
+
+  // First step: compute likelihoods and contrasts for all queries
+  if (test_contrast) {
+    for (unsigned int n = 0; n < numQueries; ++n) {
+      vpMeSite &query = list_query_pixels[n];
+      // convolution results
+      const double convolution_ = query.convolution(I, &me);
+      // luminance ratio of reference pixel to potential correspondent pixel
+      // the luminance must be similar, hence the ratio value should
+      // lay between, for instance, 0.5 and 1.5 (parameter tolerance)
+      const double likelihood = fabs(convolution_ + m_convlt);
+
+      query.m_convlt = convolution_;
+      const double contrast = convolution_ / m_convlt;
+      candidates.insert(std::pair<double, vpMeSiteHypothesis>(fabs(1.0 - contrast), vpMeSiteHypothesis(&query, likelihood, contrast)));
+    }
+  }
+  else { // test on likelihood only
+    for (unsigned int n = 0; n < numQueries; ++n) {
+      // convolution results
+      vpMeSite &query = list_query_pixels[n];
+      const double convolution_ = query.convolution(I, &me);
+      const double likelihood = fabs(2 * convolution_);
+      query.m_convlt = convolution_;
+      candidates.insert(std::pair<double, vpMeSiteHypothesis>(-likelihood, vpMeSiteHypothesis(&query, likelihood, 0.0)));
+    }
+  }
+
+  // Take the first numCandidates hypotheses: the map is sorted according to the likelihood/contrast difference,
+  // so we can just iterate from the start
+  outputHypotheses.resize(numCandidates);
+
+  std::multimap<double, vpMeSiteHypothesis>::iterator it = candidates.begin();
+  if (test_contrast) {
+    for (unsigned int i = 0; i < numCandidates; ++i, ++it) {
+      outputHypotheses[i] = *(it->second.site);
+      outputHypotheses[i].m_normGradient = vpMath::sqr(outputHypotheses[i].m_convlt);
+      const double likelihood = it->second.likelihood;
+      const double contrast = it->second.contrast;
+
+      if (likelihood > threshold) {
+        if (contrast <= contrast_min || contrast >= contrast_max) {
+          outputHypotheses[i].m_state = CONTRAST;
+        }
+        else {
+          outputHypotheses[i].m_state = NO_SUPPRESSION;
+        }
+      }
+      else {
+        outputHypotheses[i].m_state = THRESHOLD;
+      }
+    }
+  }
+  else {
+    for (unsigned int i = 0; i < numCandidates; ++i, ++it) {
+      outputHypotheses[i] = *(it->second.site);
+      const double likelihood = it->second.likelihood;
+      if (likelihood > threshold) {
+        outputHypotheses[i].m_state = NO_SUPPRESSION;
+      }
+      else {
+        outputHypotheses[i].m_state = THRESHOLD;
+      }
+    }
+  }
+
+  const vpMeSite &bestMatch = outputHypotheses[0];
+
+  if (bestMatch.m_state != NO_SUPPRESSION) {
+    if ((m_selectDisplay == RANGE_RESULT) || (m_selectDisplay == RESULT)) {
+      vpDisplay::displayPoint(I, bestMatch.m_i, bestMatch.m_j, vpColor::red);
+    }
+    *this = outputHypotheses[0];
+  }
+  else {
+    if ((m_selectDisplay == RANGE_RESULT) || (m_selectDisplay == RESULT)) {
+      vpDisplay::displayPoint(I, bestMatch.m_i, bestMatch.m_j, vpColor::green);
+    }
+    m_normGradient = 0;
+  }
+
+  delete[] list_query_pixels;
 }

 int vpMeSite::operator!=(const vpMeSite &m) { return ((m.m_i != m_i) || (m.m_j != m_j)); }

-void vpMeSite::display(const vpImage<unsigned char> &I) { vpMeSite::display(I, m_ifloat, m_jfloat, m_state); }
-void vpMeSite::display(const vpImage<vpRGBa> &I) { vpMeSite::display(I, m_ifloat, m_jfloat, m_state); }
+void vpMeSite::display(const vpImage<unsigned char> &I) const { vpMeSite::display(I, m_ifloat, m_jfloat, m_state); }
+
+void vpMeSite::display(const vpImage<vpRGBa> &I) const { vpMeSite::display(I, m_ifloat, m_jfloat, m_state); }

 // Static functions
diff --git a/tutorial/ar/tutorial-panda3d-renderer.cpp b/tutorial/ar/tutorial-panda3d-renderer.cpp
index a4d26b8793..39cf1d8aba 100644
--- a/tutorial/ar/tutorial-panda3d-renderer.cpp
+++ b/tutorial/ar/tutorial-panda3d-renderer.cpp
@@ -14,6 +14,7 @@
 #include
 #include
+
 #include
 #include

@@ -102,12 +103,13 @@ int main(int argc, const char **argv)
   bool showLightContrib = false;
   bool showCanny = false;
   char *modelPathCstr = nullptr;
-  char *backgroundPathCstr = nullptr;
+  char *backgroundPathCstr = nullptr;
   vpParseArgv::vpArgvInfo argTable[] = {
     {"-model", vpParseArgv::ARGV_STRING, (char *) nullptr, (char *)&modelPathCstr, "Path to the model to load."},
+    {"-background", vpParseArgv::ARGV_STRING, (char *) nullptr, (char *)&backgroundPathCstr, "Path to the background image to load for the rgb renderer."},
     {"-step", vpParseArgv::ARGV_CONSTANT_BOOL, (char *) nullptr, (char *)&stepByStep,
@@ -130,6 +132,16 @@ int main(int argc, const char **argv)
     return (false);
   }

+  if (PStatClient::is_connected()) {
+    PStatClient::disconnect();
+  }
+
+  std::string host = ""; // Empty = default config var value
+  int port = -1;         // -1 = default config var value
+  if (!PStatClient::connect(host, port)) {
+    std::cout << "Could not connect to PStat server." << std::endl;
+  }
+
   std::string modelPath;
   if (modelPathCstr) {
     modelPath = modelPathCstr;
@@ -137,6 +149,7 @@ int main(int argc, const char **argv)
   else {
     modelPath = "data/suzanne.bam";
   }
+
   std::string backgroundPath;
   if (backgroundPathCstr) {
     backgroundPath = backgroundPathCstr;
@@ -144,7 +157,9 @@ int main(int argc, const char **argv)
   const std::string objectName = "object";

   //! [Renderer set]
-  vpPanda3DRenderParameters renderParams(vpCameraParameters(300, 300, 160, 120), 240, 320, 0.01, 10.0);
+  double factor = 1.0;
+  vpPanda3DRenderParameters renderParams(vpCameraParameters(600 * factor, 600 * factor, 320 * factor, 240 * factor), int(480 * factor), int(640 * factor), 0.01, 10.0);
+  unsigned h = renderParams.getImageHeight(), w = renderParams.getImageWidth();
   vpPanda3DRendererSet renderer(renderParams);
   renderer.setRenderParameters(renderParams);
   renderer.setVerticalSyncEnabled(false);
@@ -160,8 +175,7 @@ int main(int argc, const char **argv)
   std::shared_ptr<vpPanda3DRGBRenderer> rgbRenderer = std::make_shared<vpPanda3DRGBRenderer>();
   std::shared_ptr<vpPanda3DRGBRenderer> rgbDiffuseRenderer = std::make_shared<vpPanda3DRGBRenderer>(false);
   std::shared_ptr<vpPanda3DLuminanceFilter> grayscaleFilter = std::make_shared<vpPanda3DLuminanceFilter>("toGrayscale", rgbRenderer, false);
-  std::shared_ptr<vpPanda3DGaussianBlur> blurFilter = std::make_shared<vpPanda3DGaussianBlur>("blur", grayscaleFilter, false);
-  std::shared_ptr<vpPanda3DCanny> cannyFilter = std::make_shared<vpPanda3DCanny>("canny", blurFilter, true, 10.f);
+  std::shared_ptr<vpPanda3DCanny> cannyFilter = std::make_shared<vpPanda3DCanny>("canny", grayscaleFilter, true, 10.f);
   //! [Subrenderers init]

   //! [Adding subrenderers]
@@ -173,7 +187,6 @@ int main(int argc, const char **argv)
   }
   if (showCanny) {
     renderer.addSubRenderer(grayscaleFilter);
-    renderer.addSubRenderer(blurFilter);
     renderer.addSubRenderer(cannyFilter);
   }
   std::cout << "Initializing Panda3D rendering framework" << std::endl;
@@ -201,11 +214,9 @@ int main(int argc, const char **argv)
   rgbRenderer->printStructure();

-  std::cout << "Setting camera pose" << std::endl;
-  renderer.setCameraPose(vpHomogeneousMatrix(0.0, 0.0, -0.3, 0.0, 0.0, 0.0));
+  renderer.setCameraPose(vpHomogeneousMatrix(0.0, 0.0, -5.0, 0.0, 0.0, 0.0));
   //! [Scene configuration]

-  unsigned h = renderParams.getImageHeight(), w = renderParams.getImageWidth();
   std::cout << "Creating display and data images" << std::endl;
   vpImage<vpRGBf> normalsImage;
   vpImage<vpRGBf> cameraNormalsImage;
@@ -231,35 +242,39 @@ int main(int argc, const char **argv)
 #elif defined(VISP_HAVE_D3D9)
   using DisplayCls = vpDisplayD3D;
 #endif
-
-  DisplayCls dNormals(normalDisplayImage, 0, 0, "normals in world space");
-  DisplayCls dNormalsCamera(cameraNormalDisplayImage, 0, h + 80, "normals in camera space");
-  DisplayCls dDepth(depthDisplayImage, w + 80, 0, "depth");
-  DisplayCls dColor(colorImage, w + 80, h + 80, "color");
+  unsigned int padding = 80;
+  DisplayCls dNormals(normalDisplayImage, 0, 0, "normals in object space");
+  DisplayCls dNormalsCamera(cameraNormalDisplayImage, 0, h + padding, "normals in camera space");
+  DisplayCls dDepth(depthDisplayImage, w + padding, 0, "depth");
+  DisplayCls dColor(colorImage, w + padding, h + padding, "color");
   DisplayCls dImageDiff;
   if (showLightContrib) {
-    dImageDiff.init(lightDifference, w * 2 + 80, 0, "Specular/reflectance contribution");
+    dImageDiff.init(lightDifference, w * 2 + padding, 0, "Specular/reflectance contribution");
   }
   DisplayCls dCanny;
   if (showCanny) {
-    dCanny.init(cannyImage, w * 2 + 80, h + 80, "Canny");
+    dCanny.init(cannyImage, w * 2 + padding, h + padding, "Canny");
   }
   renderer.renderFrame();
   bool end = false;
   bool firstFrame = true;
   std::vector<double> renderTime, fetchTime, displayTime;
   while (!end) {
-    float nearV = 0, farV = 0;
     const double beforeComputeBB = vpTime::measureTimeMs();
-    rgbRenderer->computeNearAndFarPlanesFromNode(objectName, nearV, farV);
+    //! [Updating render parameters]
+    float nearV = 0, farV = 0;
+    geometryRenderer->computeNearAndFarPlanesFromNode(objectName, nearV, farV, true);
     renderParams.setClippingDistance(nearV, farV);
     renderer.setRenderParameters(renderParams);
-    //std::cout << "Update clipping plane took " << vpTime::measureTimeMs() - beforeComputeBB << std::endl;
+    //! [Updating render parameters]
     const double beforeRender = vpTime::measureTimeMs();
+    //! [Render frame]
     renderer.renderFrame();
+    //! [Render frame]
     const double beforeFetch = vpTime::measureTimeMs();
+    //! [Fetch render]
     renderer.getRenderer(geometryRenderer->getName())->getRender(normalsImage, depthImage);
     renderer.getRenderer(cameraRenderer->getName())->getRender(cameraNormalsImage);
     renderer.getRenderer(rgbRenderer->getName())->getRender(colorImage);
@@ -269,8 +284,9 @@ int main(int argc, const char **argv)
     if (showCanny) {
       renderer.getRenderer()->getRender(cannyRawData);
     }
-
+    //! [Fetch render]
     const double beforeConvert = vpTime::measureTimeMs();
+    //! [Display]
     displayNormals(normalsImage, normalDisplayImage);
     displayNormals(cameraNormalsImage, cameraNormalDisplayImage);
     displayDepth(depthImage, depthDisplayImage, nearV, farV);
@@ -283,6 +299,7 @@ int main(int argc, const char **argv)

     vpDisplay::display(colorImage);
     vpDisplay::displayText(colorImage, 15, 15, "Click to quit", vpColor::red);
+    //! [Display]

     if (stepByStep) {
       vpDisplay::displayText(colorImage, 50, 15, "Next frame: space", vpColor::red);
@@ -306,10 +323,12 @@ int main(int argc, const char **argv)
       }
     }
     const double afterAll = vpTime::measureTimeMs();
+    //! [Move object]
    const double delta = (afterAll - beforeRender) / 1000.0;
    const vpHomogeneousMatrix wTo = renderer.getNodePose(objectName);
    const vpHomogeneousMatrix oToo = vpExponentialMap::direct(vpColVector({ 0.0, 0.0, 0.0, 0.0, vpMath::rad(20.0), 0.0 }), delta);
    renderer.setNodePose(objectName, wTo * oToo);
+    //! [Move object]
   }
   if (renderTime.size() > 0) {
     std::cout << "Render time: " << vpMath::getMean(renderTime) << "ms +- " << vpMath::getStdev(renderTime) << "ms" << std::endl;