%% Cell type:markdown id: tags:
# Collision Avoidance - Data Collection
If you ran through the basic motion notebook, hopefully you're enjoying how easy it can be to make your JetBot move around! That's very cool! But what's even cooler is making JetBot move around all by itself!
This is a super hard task that has many different approaches, but the whole problem is usually broken down into easier sub-problems. It could be argued that one of the most
important sub-problems to solve is preventing the robot from entering dangerous situations! We're calling this *collision avoidance*.
In this set of notebooks, we're going to attempt to solve the problem using deep learning and a single, very versatile, sensor: the camera. You'll see how with a neural network, camera, and the NVIDIA Jetson Nano, we can teach the robot a very useful behavior!
The approach we take to avoiding collisions is to create a virtual "safety bubble" around the robot. Within this safety bubble, the robot is able to spin in a circle without hitting any objects (or other dangerous situations like falling off a ledge).
Of course, the robot is limited by what's in its field of view, and we can't prevent objects from being placed behind the robot, etc. But we can prevent the robot from entering these scenarios itself.
The way we'll do this is super simple:
First, we'll manually place the robot in scenarios where its "safety bubble" is violated, and label these scenarios ``blocked``. We save a snapshot of what the robot sees along with this label.
Second, we'll manually place the robot in scenarios where it's safe to move forward a bit, and label these scenarios ``free``. Likewise, we save a snapshot along with this label.
That's all we'll do in this notebook: data collection. Once we have lots of images and labels, we'll upload this data to a GPU-enabled machine where we'll *train* a neural network to predict whether the robot's safety bubble is being violated based on the image it sees. We'll use this to implement a simple collision avoidance behavior in the end :)
> IMPORTANT NOTE: When JetBot spins in place, it actually spins about the center point between the two wheels, not the center of the robot chassis itself. This is an important detail to remember when you're trying to estimate whether the robot's safety bubble is violated or not. But don't worry, you don't have to be exact. If in doubt, it's better to err on the side of caution (a big safety bubble). We want to make sure JetBot doesn't enter a scenario that it couldn't get out of by turning in place.
%% Cell type:markdown id: tags:
### Display live camera feed
So let's get started. First, let's initialize and display our camera like we did in the *teleoperation* notebook.
> Our neural network takes a 224x224 pixel image as input. We'll set our camera to that size to minimize the file size of our dataset (we've tested that this size works for this task).
> In some scenarios it may be better to collect data in a larger image size and downscale to the desired size later.
%% Cell type:code id: tags:
``` python
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg
camera = Camera.instance(width=224, height=224)
image = widgets.Image(format='jpeg', width=224, height=224)  # this width and height don't necessarily have to match the camera's
camera_link = traitlets.dlink((camera, 'value'), (image, 'value'), transform=bgr8_to_jpeg)
display(image)
```
%% Output
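%% Cell type:markdown id: tags:
> As an aside, if you did want to follow the alternative mentioned above (collect at a larger size, then downscale), the resize step might look something like the optional cell below. This is just an illustrative sketch, not part of the main workflow; it assumes PIL/Pillow is available, and ``downscale_jpeg`` is a hypothetical helper name.
%% Cell type:code id: tags:
``` python
# Optional sketch (assumes PIL/Pillow is installed): decode a JPEG snapshot,
# downscale it to 224x224, and re-encode it as JPEG bytes ready to write to disk.
import io
from PIL import Image

def downscale_jpeg(jpeg_bytes, size=(224, 224)):
    img = Image.open(io.BytesIO(jpeg_bytes))   # decode the JPEG bytes
    img = img.resize(size, Image.BILINEAR)     # downscale to the target size
    out = io.BytesIO()
    img.save(out, format='JPEG')               # re-encode as JPEG
    return out.getvalue()
```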
%% Cell type:markdown id: tags:
Awesome, next let's create a few directories where we'll store all our data. We'll create a folder ``dataset`` that will contain two sub-folders ``free`` and ``blocked``,
where we'll place the images for each scenario.
%% Cell type:code id: tags:
``` python
import os
blocked_dir = 'dataset/blocked'
free_dir = 'dataset/free'
# we have this "try/except" statement because these next functions can throw an error if the directories exist already
try:
    os.makedirs(free_dir)
    os.makedirs(blocked_dir)
except FileExistsError:
    print('Directories not created because they already exist')
```
%% Cell type:markdown id: tags:
If you refresh the Jupyter file browser on the left, you should now see those directories appear. Next, let's create and display some buttons that we'll use to save snapshots
for each class label. We'll also add some text boxes that will display how many images of each category that we've collected so far. This is useful because we want to make
sure we collect about as many ``free`` images as ``blocked`` images. It also helps to know how many images we've collected overall.
%% Cell type:code id: tags:
``` python
button_layout = widgets.Layout(width='128px', height='64px')
free_button = widgets.Button(description='add free', button_style='success', layout=button_layout)
blocked_button = widgets.Button(description='add blocked', button_style='danger', layout=button_layout)
free_count = widgets.IntText(layout=button_layout, value=len(os.listdir(free_dir)))
blocked_count = widgets.IntText(layout=button_layout, value=len(os.listdir(blocked_dir)))
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
```
%% Cell type:markdown id: tags:
Right now, these buttons won't do anything. We have to attach functions that save an image for each category to the buttons' ``on_click`` event. We'll save the value
of the ``Image`` widget (rather than the camera), because it's already in compressed JPEG format!
To make sure we don't repeat any file names (even across different machines!) we'll use the ``uuid`` package in Python, which defines the ``uuid1`` method to generate
a unique identifier. This unique identifier is generated from information like the current time and the machine address.
%% Cell type:code id: tags:
``` python
from uuid import uuid1
def save_snapshot(directory):
    image_path = os.path.join(directory, str(uuid1()) + '.jpg')
    with open(image_path, 'wb') as f:
        f.write(image.value)

def save_free():
    global free_dir, free_count
    save_snapshot(free_dir)
    free_count.value = len(os.listdir(free_dir))

def save_blocked():
    global blocked_dir, blocked_count
    save_snapshot(blocked_dir)
    blocked_count.value = len(os.listdir(blocked_dir))
# attach the callbacks, we use a 'lambda' function to ignore the
# parameter that the on_click event would provide to our function
# because we don't need it.
free_button.on_click(lambda x: save_free())
blocked_button.on_click(lambda x: save_blocked())
```
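%% Cell type:markdown id: tags:
> If you're curious what these identifiers look like, the optional cell below just prints one. Each call returns a fresh time-and-node-based value, so collisions are extremely unlikely.
%% Cell type:code id: tags:
``` python
# Optional illustration: print a uuid1 value to see the identifier format
print(uuid1())
```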
%% Cell type:markdown id: tags:
Great! Now the buttons above should save images to the ``free`` and ``blocked`` directories. You can use the Jupyter Lab file browser to view these files!
Go ahead and collect some data:
1. Place the robot in a scenario where it's blocked and press ``add blocked``
2. Place the robot in a scenario where it's free and press ``add free``
3. Repeat 1, 2
> REMINDER: You can move the widgets to new windows by right-clicking the cell and selecting ``Create New View for Output``. Or, you can just re-display them
> together as we will below.
Here are some tips for labeling data:
1. Try different orientations
2. Try different lighting
3. Try varied object / collision types: walls, ledges, objects
4. Try different textured floors / objects: patterned, smooth, glass, etc.
Ultimately, the more data we have of scenarios the robot will encounter in the real world, the better our collision avoidance behavior will be. It's important
to get *varied* data (as described by the tips above) and not just a lot of data; you'll probably need at least 100 images of each class (that's not an exact science, just a helpful rule of thumb). But don't worry, it goes pretty fast once you get going :)
%% Cell type:code id: tags:
``` python
display(image)
display(widgets.HBox([free_count, free_button]))
display(widgets.HBox([blocked_count, blocked_button]))
```
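%% Cell type:markdown id: tags:
> Optionally, you can spot-check your labels as you go. ``show_random_snapshot`` below is a hypothetical helper, not part of the original workflow: it displays a random saved snapshot from a class directory so you can verify it was filed correctly. It reuses ``os``, ``widgets``, and ``display`` from the cells above.
%% Cell type:code id: tags:
``` python
import random

def show_random_snapshot(directory):
    # pick a random saved JPEG from the given class directory and display it
    fname = random.choice(os.listdir(directory))
    with open(os.path.join(directory, fname), 'rb') as f:
        display(widgets.Image(value=f.read(), format='jpeg', width=224, height=224))

# assumes you've already saved at least one image in each class
show_random_snapshot(free_dir)
show_random_snapshot(blocked_dir)
```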
%% Cell type:markdown id: tags:
## Next
Once you've collected enough data, we'll need to copy that data to our GPU desktop or cloud machine for training. First, we can call the following *terminal* command to compress
our dataset folder into a single *zip* file.
> The ``!`` prefix indicates that we want to run the cell as a *shell* (or *terminal*) command.
> The ``-r`` flag in the zip command below indicates *recursive*, so that we include all nested files; the ``-q`` flag indicates *quiet*, so that the zip command doesn't print any output.
%% Cell type:code id: tags:
``` python
!zip -r -q dataset.zip dataset
```
%% Cell type:markdown id: tags:
You should see a file named ``dataset.zip`` appear in the Jupyter Lab file browser. Download it by right-clicking the file and selecting ``Download``.
Next, we'll need to upload this data to our GPU desktop or cloud machine (we refer to this as the *host*) to train the collision avoidance neural network. We'll assume that you've set up your training
machine as described in the JetBot Wiki. If you have, you can navigate to ``http://<host_ip_address>:8888`` to open up the Jupyter Lab environment running on the host. The notebook you'll need to open there is called ``collision_avoidance/train_model.ipynb``.
So head on over to your training machine and follow the instructions there! Once your model is trained, we'll return to the robot's Jupyter Lab environment to use the model for a live demo!
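%% Cell type:markdown id: tags:
> As an alternative to the browser download/upload, if your host is reachable over SSH you can copy the archive directly with ``scp``. Run it from a terminal on the robot rather than a notebook cell (it prompts for a password), and substitute your own username; ``<host_ip_address>`` is the same placeholder as above:
```
scp dataset.zip <user>@<host_ip_address>:~/
```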