Introduction
This article shows how to use the IMU Dataset Visualizer/Exporter to generate Pose Datasets that are used in ModelPack 2.0.
Requirements
- Teensy 4.1 Development Board
- BNO085 IMU Sensor
- Arduino IDE 2.0+
Step 1: Configure Arduino and Teensy Libraries
- Install the Arduino IDE 2.0
- Follow the instructions for SPI mode and the proper hardware connections described in the BNO08x guidelines.
- Install the Teensy board support following these instructions.
Step 2: Upload the Arduino Sketch
- Clone the repository containing this application.
- Upload the .ino sketch stored in /quaternion_yaw_pitch_roll/quaternion_yaw_pitch_roll.ino onto the Teensy 4.1 board that is connected to the BNO085 IMU sensor.
- You can confirm that everything is working correctly by opening a serial monitor in the Arduino IDE; you should see print statements of the data collected by the IMU in the form:
Adafruit BNO08x test!
BNO08x Found!
Setting desired reports
Reading events
sensor was reset
Setting desired reports
53748205 0 0.75 0.01 2.76
4238 0 0.75 0.01 2.76
4237 0 0.75 0.01 2.76
3237 0 0.75 0.01 2.76
Once proper functionality is confirmed, close the serial monitor.
Step 3: Install the Requirements
It is recommended to create a fresh Python virtual environment before moving on to the next steps.
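For example, one way to create and activate a fresh virtual environment (commands assume a Unix-like shell; on Windows use venv\Scripts\activate instead):

python -m venv venv
source venv/bin/activate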
Install the requirements using the following shell command:
pip install -r requirements.txt
Step 4: Run the Application
Note: Ensure that a proper serial connection is present between the Teensy board and the host machine running this application, and make a note of the port name and the baudrate. Also ensure that a camera is connected to the host machine to view the live feed that draws the pose orientations.
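One quick way to identify the port name and confirm that IMU data is arriving is from Python (a minimal sketch assuming the pyserial package is installed; the port name below is only an example, matching the application's documented defaults):

import serial
from serial.tools import list_ports

# List the available serial ports to find the one the Teensy enumerates on.
for port in list_ports.comports():
    print(port.device, port.description)

# Open the port with the application's default settings and read one line of IMU data.
with serial.Serial('COM10', 115200, timeout=0.1) as ser:
    print(ser.readline().decode(errors='ignore'))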
python -m headpose.posecapture -c "Port Name" -b "baudrate"
- The command above will start the camera feed and open the visualizer, which shows the compass by default.
- To quit the program, press 'q' on the keyboard.
- To recalibrate the yaw angle, press 'r' on the keyboard.
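For example, using the default port name and baudrate listed in the Appendix below:

python -m headpose.posecapture -c COM10 -b 115200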
Additional Recalibration Information
The recalibration only affects the yaw angle. The orientation displayed at that moment becomes the 0 angle for yaw, and any subsequent yaw angles will be in reference to this angle. The angles for roll and pitch are precalibrated at the start of the process based on the predetermined standard angles for roll and pitch stored in ./standard_orientation.json.
The predetermined standard angles for roll and pitch were chosen with the provided mannequin head wearing the helmet and sitting on an elevated flat surface, facing the camera directly. The angles generated by the IMU in this orientation are 0.01 rad and 0.08159265358979306 rad for roll and pitch, respectively. These standard angles should be regarded as the 0 angles for roll and pitch.
- For roll, the recalibration is: new_roll = (current_roll - 0.01)
- For pitch, the recalibration is: new_pitch = (-current_pitch + np.pi - 0.08159265358979306)
Note: For pitch, the IMU is mounted upside down, so a rotation of 180 degrees needs to be applied; the rotation is also inverted, so the current pitch needs to be negated. A sketch of both corrections is shown below.
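The same corrections expressed in Python (a minimal sketch; the constant names and the recalibrate function are illustrative, not part of the application):

import numpy as np

STANDARD_ROLL = 0.01                  # rad, standard roll from ./standard_orientation.json
STANDARD_PITCH = 0.08159265358979306  # rad, standard pitch from ./standard_orientation.json

def recalibrate(current_roll, current_pitch):
    # Roll: subtract the standard offset so the reference pose reads 0.
    new_roll = current_roll - STANDARD_ROLL
    # Pitch: the IMU is mounted upside down, so negate and apply a 180-degree
    # rotation (pi rad), then subtract the standard offset.
    new_pitch = -current_pitch + np.pi - STANDARD_PITCH
    return new_roll, new_pitch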
Step 5: Export a Dataset
To export the dataset, additional command line parameters need to be set:
python -m headpose.posecapture -c "Port Name" -b "baudrate" -s "path to save the dataset" -m "maximum number of images to save (optional)" -i "interval in seconds to capture the frame (optional)" --model "path to the helmet detection model (optional)"
- The parameter -s accepts a path to store the dataset.
The dataset will be saved in the following structure:
path provided
|---- YYYY-MM-DD--H_M_S
      |---- images
      |     |---- validate
      |           |---- yyyy-mm-dd--h_m_s_ms.jpg
      |---- labels
      |     |---- validate
      |           |---- yyyy-mm-dd--h_m_s_ms.json
      |---- original_angles
      |     |---- yyyy-mm-dd--h_m_s_ms.txt
      |---- labels.txt
- The parameter -m accepts an integer indicating the maximum number of images to save; the program will stop once this limit is reached. This parameter is optional.
- The parameter -i accepts an integer indicating the interval in seconds at which to save an image. This parameter is optional.
- The parameter --model accepts a path to the Keras detection model trained to detect the Arrow helmet. This parameter is optional, but it allows detection annotations to be automatically stored in the dataset. If there are no detection annotations, they will have to be added manually inside the annotation files using an external annotation tool.
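For example, an export run using the default port and baudrate from the Appendix, an illustrative output directory, and a limit of 500 images:

python -m headpose.posecapture -c COM10 -b 115200 -s ./pose_dataset -m 500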
Note: For the best results, do not include the -i parameter and ensure that head rotations are slow and smooth. At the very start of the camera feed, ensure that the helmet is facing directly at the camera for proper calibration of the yaw starting position. Also check for drift errors in the yaw by moving back to the initial position; if drift is present, recalibrate again by pressing and holding the 'r' key on the keyboard.
Additional Requirements
To ensure dataset viability, visualize the generated angles and confirm that they reflect the helmet orientation. If they do not, the dataset will need to be cleaned: delete angles that are incorrect due to drift and keep only the angles corroborated to be correct.
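For example, once the timestamps of drifted samples have been identified, the matching image, label, and original-angle files can be removed by their shared timestamp stem (a minimal sketch based on the directory layout in Step 5; the dataset path and the list of bad stems are illustrative):

from pathlib import Path

dataset = Path("./pose_dataset/2024-01-01--10_00_00")   # illustrative export folder
bad_stems = ["2024-01-01--10_00_05_123"]                 # timestamps identified as drifted

for stem in bad_stems:
    for f in (dataset / "images" / "validate" / f"{stem}.jpg",
              dataset / "labels" / "validate" / f"{stem}.json",
              dataset / "original_angles" / f"{stem}.txt"):
        if f.exists():
            f.unlink()
            print(f"removed {f}")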
Appendix
This section shows all the possible command line parameters to control the application.
- -c, --com: specifies the port name to which the Teensy board is connected. Default is 'COM10'.
- -b, --baudrate: specifies the baudrate of the connection channel. Default is 115200.
- -t, --timeout: specifies the read timeout value in seconds. Default is 0.1 seconds.
- -s, --store: specifies the path to store the frames and the JSON annotations.
- -i, --interval: specifies the interval in seconds at which to save the frame and the annotations.
- -m, --max_files: specifies the maximum number of files to save for each of the frames and annotations. Default is 100 files each.
- -w, --width: specifies the width of the frame to display. The height is determined from the width to maintain the aspect ratio.
- --model: optional; a path to a float Keras detection model used to draw bounding boxes around the helmet.
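For example, an invocation that sets several of these parameters explicitly (all values illustrative apart from the documented defaults; the model path is hypothetical):

python -m headpose.posecapture -c COM10 -b 115200 -t 0.1 -w 640 -s ./pose_dataset -i 2 -m 100 --model ./helmet_detector.h5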