
Berkeley releases BridgeData V2 dataset for robot learning at scale



The initial and final state of a trajectory with the natural language annotation "close cabinet". | Source: UC Berkeley

A research team at UC Berkeley has released BridgeData V2, a large and diverse dataset of robotic manipulation behaviors. The dataset aims to facilitate research in scalable robot learning.

The updated dataset is compatible with open-vocabulary and multi-task learning methods conditioned on goal images or natural language instructions. Skills learned from the dataset can generalize to novel objects and environments, and across institutions.

The Berkeley team collected data from a wide range of tasks in many different environments, with variations in objects, camera poses, and workspace positioning. These variations are intended to better support broad generalization.

The dataset includes 60,096 trajectories: 50,365 teleoperated demonstrations and 9,731 rollouts from a scripted policy, spanning 24 environments and 13 skills. Each trajectory is labeled with a natural language instruction corresponding to the task the robot is performing.
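As a rough illustration, a minimal sketch of how one labeled trajectory might be represented in Python follows. The field names are illustrative, not BridgeData V2's actual schema.

```python
# Hypothetical representation of a single labeled trajectory.
from dataclasses import dataclass
from typing import List

import numpy as np

@dataclass
class Trajectory:
    """One teleoperated demonstration or scripted rollout."""
    observations: List[np.ndarray]  # per-timestep camera images
    actions: List[np.ndarray]       # per-timestep robot commands
    language_instruction: str       # task label, e.g. "close cabinet"

# A dummy 38-step trajectory labeled with its task
traj = Trajectory(
    observations=[np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(38)],
    actions=[np.zeros(7, dtype=np.float32) for _ in range(38)],
    language_instruction="close cabinet",
)
```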

The 24 environments in the dataset are grouped into four categories. Most of the data comes from seven distinct toy kitchens, each of which includes some combination of sinks, stoves, and microwaves.

Most of the 13 skills in BridgeData V2 come from foundational object manipulation tasks such as pick-and-place, pushing, and sweeping. Some data comes from environment manipulation, which includes tasks like opening and closing doors and drawers. The rest of the data comes from more complex tasks, such as stacking blocks, folding clothes, and sweeping granular media. Some portions of the data come from combinations of these categories.

The team evaluated several state-of-the-art offline learning methods using the dataset. First, they evaluated the methods on tasks seen in the training data. Even though the tasks were seen in training, the methods still had to generalize to novel object positions, distractor objects, and lighting. The team then evaluated the methods on tasks that require generalizing skills in the data to novel objects and environments.

The data was collected on a WidowX 250 6DOF robot arm. The team collected the demonstrations by teleoperating the robot with a VR controller. The control frequency is 5 Hz, and the average trajectory length is 38 timesteps, or roughly 7.6 seconds per demonstration.
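A minimal sketch of what a 5 Hz control loop looks like is shown below, assuming a hypothetical `robot` object with an `apply_action` method; this is not the team's actual control stack.

```python
# Step through a recorded trajectory at a fixed 5 Hz control frequency.
import time

CONTROL_HZ = 5
DT = 1.0 / CONTROL_HZ  # 0.2 s between commands

def replay(actions, robot):
    """Send each recorded action at 5 Hz; a 38-step trajectory takes ~7.6 s."""
    for action in actions:
        start = time.monotonic()
        robot.apply_action(action)  # hypothetical robot interface
        # Sleep off the remainder of the 0.2 s control period
        time.sleep(max(0.0, DT - (time.monotonic() - start)))
```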

For sensing, the team used an RGBD camera mounted in an over-the-shoulder view, two RGB cameras with poses that are randomized during data collection, and an RGB camera attached to the robot's wrist. All images are saved at a 640×480 resolution.
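Under that four-camera setup, a single timestep's observations might look like the hypothetical dictionary below; the key names are illustrative, not the dataset's actual keys.

```python
# Placeholder arrays with the shapes implied by the camera setup above.
import numpy as np

observation = {
    "shoulder_rgbd": np.zeros((480, 640, 4), dtype=np.float32),  # RGB + depth, fixed view
    "side_rgb_0": np.zeros((480, 640, 3), dtype=np.uint8),       # randomized pose
    "side_rgb_1": np.zeros((480, 640, 3), dtype=np.uint8),       # randomized pose
    "wrist_rgb": np.zeros((480, 640, 3), dtype=np.uint8),        # wrist-mounted
}
```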

The dataset can be downloaded here. The data from the teleoperated demonstrations and from the scripted pick-and-place policy are provided as separate zip files. The team provides both model training code and pre-trained weights for getting started with the dataset.
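Unpacking the two archives is straightforward; a hedged sketch follows, with placeholder filenames rather than the actual download links.

```python
# Extract both zip archives into a single local folder.
import zipfile

for archive in ["demos.zip", "scripted.zip"]:  # placeholder names
    with zipfile.ZipFile(archive) as zf:
        zf.extractall("bridgedata_v2/")
```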

This repository provides code and instructions for training on the dataset and evaluating policies. This guide provides instructions for setting up the robot hardware.
