Interactive 3D Plots with Plotly

By Thomas Weng on December 23, 2022

I’ve recently had to plot a lot of 3D elements like point clouds and meshes for my research. Compared to 2D plots, 3D plots benefit greatly from interaction: translating and rotating the plot gives a better sense of the geometry of elements in the plot. I’ve found that plotly is the easiest and most responsive tool for interactive 3D plots, and this post is a primer on how I make them.

Visualizing meshes with plotly

Below is an example of a sphere rendered with Python and plotly.

The code is straightforward: pass the vertices and faces of a mesh to plotly’s create_trisurf() function.
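
A minimal sketch (I build the sphere with scipy’s Delaunay triangulation here for illustration; the gist linked below may construct the mesh differently):

    import numpy as np
    import plotly.figure_factory as ff
    from scipy.spatial import Delaunay

    # Sample a UV sphere and triangulate in (u, v) parameter space.
    u, v = np.mgrid[0:2 * np.pi:24j, 0.01:np.pi - 0.01:24j]
    u, v = u.ravel(), v.ravel()
    x, y, z = np.cos(u) * np.sin(v), np.sin(u) * np.sin(v), np.cos(v)
    simplices = Delaunay(np.stack([u, v], axis=1)).simplices

    # Pass the vertices and faces (simplices) to create_trisurf.
    fig = ff.create_trisurf(x=x, y=y, z=z, simplices=simplices, title="Sphere")
    fig.show()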

Adding other elements to the same plot

Use the add_trace() function to add other elements, e.g. a point cloud, to the same figure.
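
Continuing from the snippet above, a point cloud can be added as a Scatter3d trace (the random points are just stand-ins for real data):

    import plotly.graph_objects as go

    # 500 random 3D points as a stand-in point cloud
    points = np.random.uniform(-1.5, 1.5, size=(500, 3))
    fig.add_trace(go.Scatter3d(
        x=points[:, 0], y=points[:, 1], z=points[:, 2],
        mode="markers", marker=dict(size=2), name="point cloud",
    ))
    fig.show()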

Saving the plot

Besides using fig.show() to show the plot, you can save it as HTML, or even log it to wandb:
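
For example (the wandb project name below is a placeholder):

    # Save as a standalone interactive HTML file
    fig.write_html("sphere.html")

    # Or log the figure to wandb as an interactive plot
    import wandb
    wandb.init(project="my-project")
    wandb.log({"mesh_plot": fig})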

Why not matplotlib or trimesh?

3D plotting in matplotlib is extremely slow. The trimesh Scene visualizer is also slower than plotly. Interactive features for both matplotlib and trimesh become unusable as the number of elements in the plot grows.

Plotly, on the other hand, uses a WebGL backend to render in the browser. If you have hardware acceleration turned on in your browser, it is even faster. I haven’t tried visualizers from other libraries like open3d, as I was pretty happy with plotly once I started using it.


Here is the link to the GitHub Gist with all the code in this post. Hope it helps!

Tags: how-to, data-viz

Deploying Python 3 PyTorch Networks in Python 2 for ROS

By Thomas Weng on December 11, 2022

Scenario: you train a neural network in Python 3 with PyTorch and want to deploy it on a real robot using ROS. However, ROS only supports Python 2 packages, and getting ROS to work with Python 3 is difficult.

While ROS 2 does have support for Python 3, it is still in development. In the meantime, here is a possible workaround for running Python 3 networks on ROS 1.

The workaround is to load the weights of a Python 3 PyTorch model, process the weights into a format compatible with Python 2 PyTorch, then save the output as a text file to be loaded in Python 2.

In the following gist, module_save_util.py processes the Python 3 model and saves the weights for Python 2: link to gist. module_load_util.py can then be called in Python 2 to load the processed weights.
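
To illustrate the idea, here is a rough sketch (the function names are my own, not the gist’s): in Python 3, flatten each tensor in the state dict to plain text alongside its shape; in Python 2, rebuild the state dict from those files without unpickling any Python 3 objects.

    # Python 3 side: dump each state dict tensor to a text file.
    import os
    import numpy as np
    import torch

    def save_weights_as_text(model_path, out_dir):
        os.makedirs(out_dir, exist_ok=True)
        state_dict = torch.load(model_path, map_location="cpu")
        for name, tensor in state_dict.items():
            arr = tensor.cpu().numpy()
            np.savetxt(os.path.join(out_dir, name + ".txt"), arr.reshape(-1))
            with open(os.path.join(out_dir, name + ".shape"), "w") as f:
                f.write(",".join(str(d) for d in arr.shape))

    # Python 2 side: rebuild the state dict from the text files.
    def load_weights_from_text(model, weights_dir):
        state_dict = {}
        for name, tensor in model.state_dict().items():
            flat = np.loadtxt(os.path.join(weights_dir, name + ".txt"))
            with open(os.path.join(weights_dir, name + ".shape")) as f:
                shape_str = f.read().strip()
            shape = tuple(int(d) for d in shape_str.split(",")) if shape_str else ()
            state_dict[name] = torch.from_numpy(
                np.asarray(flat).reshape(shape)).to(tensor.dtype)
        model.load_state_dict(state_dict)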

In our case, we trained networks with PyTorch 1.8.0/1.8.1 and deployed with PyTorch 1.3.1. I can’t guarantee that performance stays the same after porting the weights to Python 2, though in my own experiments I did not see a drop. That said, it has been a while since I last used this workaround, so your mileage may vary.

Others at the Robotics Institute have proposed the following solutions, though I haven’t tried them myself:

Credit goes to Sujay Bajracharya for developing the solution in this post! If there is code in the gist above from other sources, I am happy to give credit as well.

Full gist:



Tags: how-to, robotics

Filter MoveIt cartesian path plans with jump_threshold

By Thomas Weng on December 20, 2020

MoveIt is a useful tool for robot motion planning, but it often lacks documentation for key functions and features. One particularly opaque function is compute_cartesian_path, which takes as input waypoints of end effector poses and outputs a joint trajectory that visits each pose. Although this function appears in the tutorials, they don’t cover the critical role of the jump_threshold parameter when running the function on a real robot.

On real robots, executing the trajectory produced by compute_cartesian_path(..., jump_threshold=0) can sometimes cause the robot to jerk unpredictably away from the desired path. This undesirable motion seems to occur when the arm approaches singularities and can’t continue following the waypoints without reorienting the arm.1

The solution to this problem is to set the jump_threshold parameter, which limits the maximum distance or “jump” in joint positions between two consecutive trajectory points. When this parameter is set, the planner returns a partial solution if consecutive joint positions exceed the threshold, so the robot will no longer move jerkily.

But what should the value of jump_threshold be? Looking at the code, the trajectory points must satisfy the following condition:

distance between consecutive joint positions <= jump_threshold * mean joint position distance

So the distance must be less than or equal to the threshold times the mean distance.2 In practice, I played with the threshold3 until the robot moved without jerking or stopping prematurely.
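
For reference, here is a sketch using the Python moveit_commander API (the planning group name and waypoints are placeholders):

    import sys
    import copy
    import rospy
    import moveit_commander

    moveit_commander.roscpp_initialize(sys.argv)
    rospy.init_node("cartesian_demo")
    group = moveit_commander.MoveGroupCommander("manipulator")  # your planning group

    # Example waypoints: slide the end effector 10 cm along +x.
    start = group.get_current_pose().pose
    target = copy.deepcopy(start)
    target.position.x += 0.10

    plan, fraction = group.compute_cartesian_path(
        [start, target],
        eef_step=0.01,       # interpolation resolution in meters
        jump_threshold=5.0,  # see footnote 3; tune for your robot
    )

    # fraction is the portion of the waypoints successfully planned
    if fraction < 1.0:
        rospy.logwarn("Only %.0f%% of the path planned; aborting.", fraction * 100.0)
    else:
        group.execute(plan, wait=True)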

Tuning the jump_threshold was critical for getting cartesian path planning working on our robot arm. Now, we plan the whole trajectory up front and abort if only a partial trajectory is found. Hope this helps!


Footnotes

  1. In most cases, the cartesian planner will recognize that there is no feasible plan and return a partial trajectory. But sometimes it seems to attempt to reorient the arm quickly, which causes the jerky deviation from the desired path. 

  2. There is code for using an absolute distance condition instead of the mean. Also see this GitHub issue for relevant discussion. 

  3. I use jump_threshold=5.0 but your mileage may vary. 

Tags: how-to, robotics

Compressing Videos with ffmpeg

By Thomas Weng on August 4, 2020

Publication venues in robotics often allow supplementary video submissions, but they usually impose strict size limits. To meet these limits, it’s often necessary to compress the video.

I used to compress my videos using Compressor after editing them in iMovie or Final Cut Pro. Now I just use ffmpeg. I share/export the video from Final Cut Pro and then use this command to compress it:

ffmpeg -i VIDEO.mov -vcodec libx264 -crf 20 VIDEO_COMPRESSED.mov

The -crf flag specifies the constant rate factor (CRF), which ranges from 0 (lossless) to 51 (worst quality), with 23 as the default; lower values give higher quality and larger files. If I’m trying to get under a size limit, I’ll run the command multiple times, adjusting this value until I meet the requirement.

ffmpeg can also be used for converting between video formats, trimming, cropping, etc. For example, to trim a clip without re-encoding (the timestamps here are placeholders):
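
ffmpeg -i VIDEO.mov -ss 00:00:05 -to 00:00:15 -c copy VIDEO_TRIMMED.mov

Hope this helps!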

Tags: how-to

Installing NVIDIA drivers and CUDA on Ubuntu 16.04 and 18.04

By Thomas Weng on July 23, 2020

There are several ways to install NVIDIA drivers. This method using a runfile is the one that works most reliably for me.

  1. Determine the driver version and CUDA version you need. The versions will depend on the GPU you have and the code you want to run. For example, specific pytorch and tensorflow versions are tied to specific CUDA versions.

  2. Download the driver and CUDA version as a runfile from the NVIDIA website: CUDA Downloads

  3. Disable the default nouveau drivers. Create a file at /etc/modprobe.d/blacklist-nouveau.conf and add the following:
     blacklist nouveau
     options nouveau modeset=0
    

    Then regenerate the kernel initramfs: sudo update-initramfs -u

  4. If you already have NVIDIA drivers installed, uninstall them with sudo apt-get purge nvidia-* and reboot your machine.

  5. Upon restart, drop into the tty terminal with Ctrl+Alt+F1 and log in.

  6. Stop the GUI. On Ubuntu 16.04, run sudo service lightdm stop; on 18.04, run sudo systemctl stop gdm3.service.

  7. Find the runfile you downloaded from step 2 and run it: sudo sh ./NAME_OF_RUNFILE.run

  8. Follow the prompts to install the driver and CUDA. If the installation fails, check /var/log/nvidia-installer.log or /var/log/cuda-installer.log to find out why. You can test whether the installation succeeded by running nvidia-smi and checking the reported driver and CUDA versions.

  9. Restart the GUI. On Ubuntu 16.04, run sudo service lightdm start; on 18.04, run sudo systemctl start gdm3.service.

  10. Leave the tty terminal and return to the GUI with Ctrl+Alt+F7.

For more detailed instructions, see the NVIDIA CUDA Installation Guide.

Tags: how-to