Filter MoveIt Cartesian path plans with jump_threshold

By Thomas Weng on December 20, 2020

MoveIt is a useful tool for robot motion planning, but it often lacks documentation for key functions and features. One particularly opaque function is compute_cartesian_path, which takes as input waypoints of end effector poses and outputs a joint trajectory that visits each pose. Although this function appears in the MoveIt tutorials, they don’t cover the critical role of the jump_threshold parameter when running it on a real robot.

On real robots, executing the trajectory produced by compute_cartesian_path(..., jump_threshold=0), i.e. with the jump check disabled, can sometimes cause the robot to jerk unpredictably away from the desired path. This undesirable motion seems to occur when the arm approaches singularities and can’t continue following the waypoints without reorienting the arm.1

The solution to this problem is to set the jump_threshold parameter to a nonzero value, which limits the maximum distance or “jump” in joint positions between two consecutive trajectory points. When this parameter is set, the planner returns a partial solution that stops before the first jump exceeding the threshold, so the robot will no longer move jerkily.

But what should the value of jump_threshold be? Looking at the code, the trajectory points must satisfy the following condition:

distance between consecutive joint positions <= jump_threshold * mean joint position distance

So the distance must be less than or equal to the threshold times the mean distance.2 In practice, I played with the threshold3 until the robot moved without jerking or stopping prematurely.
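
To make this concrete, here is a minimal sketch using the moveit_commander Python interface (ROS 1). It is not the exact code from our system: the "manipulator" group name and the single straight-up waypoint are placeholders for your setup, and jump_threshold=5.0 is just the value that worked for me (see footnote 3).

# Minimal sketch of Cartesian planning with a jump_threshold (moveit_commander, ROS 1).
# The "manipulator" group name and the waypoint are placeholders for your setup.
import sys
import copy
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("cartesian_path_example")
group = moveit_commander.MoveGroupCommander("manipulator")

# One waypoint: move the end effector 10 cm straight up from its current pose.
waypoints = []
target_pose = copy.deepcopy(group.get_current_pose().pose)
target_pose.position.z += 0.10
waypoints.append(target_pose)

# Arguments: waypoints, eef_step (interpolation resolution in meters),
# jump_threshold (0 disables the jump check; 5.0 is what worked for our arm).
plan, fraction = group.compute_cartesian_path(waypoints, 0.01, 5.0)

# Plan the whole trajectory up front and abort if only a partial path was found.
if fraction < 1.0:
    rospy.logwarn("Only %.0f%% of the path was planned; aborting.", fraction * 100)
else:
    group.execute(plan, wait=True)

The fraction returned by compute_cartesian_path is the portion of the requested path that was actually planned, which is what we check before executing.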

Tuning the jump_threshold was critical for getting Cartesian path planning working on our robot arm. Now, we plan the whole trajectory up front and abort if only a partial trajectory is found. Hope this helps!


Footnotes

  1. In most cases, the Cartesian planner will recognize that there is no feasible plan and return a partial trajectory. But sometimes it seems to attempt to reorient the arm quickly, which causes the jerky deviation from the desired path. 

  2. There is code for using an absolute distance condition instead of the mean. Also see this GitHub issue for relevant discussion. 

  3. I use jump_threshold=5.0 but your mileage may vary. 

Tags: how-to, robotics

Compressing Videos with ffmpeg

By Thomas Weng on August 4, 2020

Publication venues in robotics often allow supplementary video submissions, but submissions often come with strict size limits. Meeting those limits usually means compressing the video.

I used to compress videos with Compressor after editing them in iMovie or Final Cut Pro. Now I just use ffmpeg: I share/export the video from Final Cut Pro and then run this command to compress it:

ffmpeg -i VIDEO.mov -vcodec libx264 -crf 20 VIDEO_COMPRESSED.mov

The -crf flag sets the constant rate factor, which ranges from 0 (lossless) to 51 (worst quality); the default is 23. If I’m trying to get under a size limit, I’ll run the command multiple times, adjusting this value until I meet the requirement.
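
If you would rather script that trial-and-error loop, here is a small sketch (not part of my original workflow) that calls ffmpeg with progressively higher CRF values until the output fits under a size limit. The filenames and the 100 MB limit are just examples, and it assumes ffmpeg is on your PATH.

# Sketch: re-encode with increasing CRF until the output fits under a size limit.
# Assumes ffmpeg is on the PATH; filenames and the 100 MB limit are examples.
import os
import subprocess

def compress_under_limit(src, dst, limit_mb=100, start_crf=20, max_crf=35):
    for crf in range(start_crf, max_crf + 1):
        subprocess.run(
            ["ffmpeg", "-y", "-i", src, "-vcodec", "libx264", "-crf", str(crf), dst],
            check=True,
        )
        size_mb = os.path.getsize(dst) / (1024.0 * 1024.0)
        print("crf=%d -> %.1f MB" % (crf, size_mb))
        if size_mb <= limit_mb:
            return crf
    raise RuntimeError("Could not get under the limit; try raising max_crf.")

compress_under_limit("VIDEO.mov", "VIDEO_COMPRESSED.mov")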

ffmpeg can also be used for converting between video formats, trimming, cropping, etc. Hope this helps!

Tags: how-to

Installing NVIDIA drivers and CUDA on Ubuntu 16.04 and 18.04

By Thomas Weng on July 23, 2020

There are several ways to install NVIDIA drivers. This method, which uses a runfile, is the one that has worked most reliably for me.

  1. Determine the driver version and CUDA version you need. The versions will depend on the GPU you have and the code you want to run. For example, specific PyTorch and TensorFlow versions are tied to specific CUDA versions.

  2. Download the driver and CUDA version as a runfile from the NVIDIA website: CUDA Downloads

  3. Disable the default nouveau drivers. Create a file at /etc/modprobe.d/blacklist-nouveau.conf and add the following:
     blacklist nouveau
     options nouveau modeset=0
    

    Then regenerate the kernel initramfs: sudo update-initramfs -u

  4. If you already have NVIDIA drivers installed, uninstall them with sudo apt-get purge nvidia-* and reboot your machine.

  5. Upon restart, drop into the tty terminal with Ctrl+Alt+F1 and log in.

  6. Stop the GUI. On Ubuntu 16.04, run sudo service lightdm stop; on 18.04, run sudo systemctl stop gdm3.service.

  7. Find the runfile you downloaded from step 2 and run it: sudo sh ./NAME_OF_RUNFILE.run

  8. Follow the prompts to install the driver and CUDA. If the installation fails, check /var/log/nvidia-installer.log or /var/log/cuda-installer.log to find out why. You can test whether the installation succeeded by running nvidia-smi and checking the reported driver and CUDA versions; a framework-level check is also sketched after these steps.

  9. Restart the GUI. On Ubuntu 16.04, run sudo service lightdm start; on 18.04, run sudo systemctl start gdm3.service.

  10. Leave the tty terminal and return to the GUI with Ctrl+Alt+F7.
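
As a final sanity check beyond nvidia-smi, you can ask your deep learning framework whether it sees the GPU. The sketch below uses PyTorch purely as an example and assumes you have already installed a CUDA-enabled PyTorch build that matches your CUDA version.

# Quick check that PyTorch sees the GPU; assumes a CUDA-enabled PyTorch build.
import torch

print("CUDA available:", torch.cuda.is_available())
print("CUDA version PyTorch was built against:", torch.version.cuda)
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))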

For more detailed instructions, see the NVIDIA CUDA Installation Guide.

Tags: how-to

Replacing my command prompt with emoji

By Thomas Weng on November 26, 2019

The title says it all:

My new emojified command prompt

Why did I do this? I wanted to see if it was possible, and seeing emoji somehow brings me a lot of joy.

I did this on my Mac with zsh and Oh My Zsh. Here are the steps if you want to do the same in your own terminal.

1. In your .zshrc, add the emoji plugin to the list of plugins.

plugins=(
  git alias-tips autojump emoji
)

2. In your zsh theme, update the PROMPT variable.

You’ll find the name of your current theme in your .zshrc file. The file for that theme is under ~/.oh-my-zsh/themes/.

In my theme file, I replaced > with $(random_emoji animals), which loads a random animal emoji as my terminal prompt each time I open a new terminal. Other options are available in the emoji plugin documentation.

# Old prompt
PROMPT='
${_current_dir}$(git_prompt_info) %{$fg[$CARETCOLOR]%}>%{$resetcolor%} '
# New prompt
PROMPT='
${_current_dir}$(git_prompt_info) %{$fg[$CARETCOLOR]%}$(random_emoji animals)%{$resetcolor%} '

To see changes, reload zsh with source ~/.zshrc or open and close the terminal.

3. Change the command prompt emoji with the random_emoji command

Step 2 replaces your prompt with a random emoji for each new terminal. If you get tired of the current emoji, you can change it with the command random_emoji. It outputs a random emoji and also randomly changes the emoji of your prompt.

Change the command prompt emoji with `random_emoji`

Enjoy!

How to install ROS drivers for Azure Kinect on Ubuntu 16.04

By Thomas Weng on August 31, 2019

Following up on my previous post about installing the Azure Kinect SDK on Ubuntu 16.04, this post provides instructions for setting up ROS drivers for the Azure Kinect. These instructions apply to ROS Kinetic on Ubuntu 16.04.

The credit for figuring out these steps goes to Kevin Zhang!

Installation steps

  1. Install the Azure Kinect SDK executables on your path so they can be found by ROS.
    $ cd path/to/Azure-Kinect-Sensor-SDK/build
    $ sudo ninja install
    
  2. Clone the official ROS driver into a catkin workspace.1
    $ cd catkin_ws/src
    $ git clone https://github.com/microsoft/Azure_Kinect_ROS_Driver.git
    
  3. Make minor edits to the codebase. If you were to build the workspace now, you would get errors relating to std::atomic syntax.

    To fix this, open <repo>/include/azure_kinect_ros_driver/k4a_ros_device.h and convert all instances of std::atomic_TYPE type declarations to std::atomic<TYPE>. Below is a diff of the edits I made.

    @@ -117,11 +117,11 @@ class K4AROSDevice
       volatile bool running_;
    
       // Last capture timestamp for synchronizing playback capture and imu thread
    -  std::atomic_int64_t last_capture_time_usec_;
    +  std::atomic<int64_t> last_capture_time_usec_;
    
       // Last imu timestamp for synchronizing playback capture and imu thread
    -  std::atomic_uint64_t last_imu_time_usec_;
    -  std::atomic_bool imu_stream_end_of_file_;
    +  std::atomic<uint64_t> last_imu_time_usec_;
    +  std::atomic<bool> imu_stream_end_of_file_;
    
       // Threads
       std::thread frame_publisher_thread_;
    
  4. Build the catkin workspace with either catkin_make or catkin build.

  5. Copy the libdepthengine and libstdc++ binaries that you placed in the Azure_Kinect_SDK/build/bin folder (see my previous post) into your catkin workspace.
    $ cp path/to/Azure_Kinect_SDK/build/bin/libdepthengine.so.1.0 path/to/catkin_ws/devel/lib/
    $ cp path/to/Azure_Kinect_SDK/build/bin/libstdc++.so.6 path/to/catkin_ws/devel/lib/
    

    You will have to do this whenever you do a clean build of your workspace.

  6. Copy udev rules from the ROS driver repo to your machine.
    $ sudo cp /path/to/Azure_Kinect_ROS_Driver/scripts/99-k4a.rules /etc/udev/rules.d/
    

    Unplug and replug your sensor into the machine after copying the file over.

  7. Source your built workspace and launch the driver.
    $ source path/to/catkin_ws/devel/setup.bash
    $ roslaunch azure_kinect_ros_driver driver.launch
    

    Note that there are parameters you can adjust in the driver launch file, e.g. FPS, resolution, etc.

  8. Run RViz and you should be able to open Image and PointCloud2 widgets that read topics from the sensor!
Screenshot from RViz
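
If you want a scripted check in addition to RViz, a minimal rospy subscriber like the sketch below can confirm that point clouds are arriving. The /points2 topic name is an assumption on my part; run rostopic list and adjust it to whatever the driver actually publishes on your machine.

# Minimal check that point clouds are arriving from the Azure Kinect ROS driver.
# The "/points2" topic name is an assumption; confirm it with `rostopic list`.
import rospy
from sensor_msgs.msg import PointCloud2

def callback(msg):
    rospy.loginfo("Received a cloud with %d x %d points", msg.height, msg.width)

rospy.init_node("azure_kinect_check")
rospy.Subscriber("/points2", PointCloud2, callback)
rospy.spin()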

Footnotes

  1. I used commit be9a528ddac3f9a494045b7acd76b7b32bd17105, but a later commit may work. 

Tags: robotics, how-to