Saturday, February 23, 2008

No Directions Required--Software Smartens Mobile Robots

DARPA initiative to develop self-navigating robots introduces a world of potential for the development of autonomous vehicles, but will the government take advantage of its research or let it wither on the vine?

By Peter Sergo



SMART ROBOT: DARPA's LAGR initiative awarded each of eight teams of scientists $2 million to $3 million to develop software that would give unmanned vehicles the ability to autonomously learn and navigate irregular off-road terrain.
Courtesy of Yann Lecun, N.Y.U.

Computer experts recently gathered in San Antonio, Tex., to test one last time how well their software programs enabled a mobile robot vehicle to think for—and steer—itself. The event wrapped up the Defense Advanced Research Projects Agency's (DARPA) three-year Learning Applied to Ground Robots (LAGR) initiative, which awarded each of eight teams of scientists $2 million to $3 million to develop software that would give unmanned vehicles the ability to autonomously learn and navigate irregular off-road terrain.

Autonomous maneuvering may not seem terribly difficult for a reasonably smart robot on wheels. But although some vegetation, such as short grass on a prairie, is easily traversable, obstacles such as dense bushes and tree trunks are not. To expediently reach point B, the robot must be able to quickly sort through a range of flora and decide which ones it can travel over—or through—and which are rigid, impenetrable barriers.
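In software terms, that judgment amounts to mapping a few sensed properties of each patch of ground to a go/no-go decision. The short Python sketch below is purely illustrative; the feature names and threshold values are hypothetical and are not drawn from any LAGR team's actual system.

# Hypothetical sketch of a terrain-traversability decision.
# Feature names and threshold values are illustrative only.

def is_traversable(patch):
    """Decide whether a patch of ground can be driven over or through.

    `patch` is assumed to carry two crude features estimated from the
    robot's sensors: vegetation height in meters and a stiffness score
    (0 = soft grass, 1 = rigid trunk).
    """
    if patch["stiffness"] > 0.7:     # tree trunks, rocks, dense brush
        return False
    if patch["height"] < 0.3:        # short grass: simply drive over it
        return True
    return patch["stiffness"] < 0.4  # tall but soft: push through it

# Short prairie grass vs. a tree trunk:
print(is_traversable({"height": 0.1, "stiffness": 0.1}))  # True
print(is_traversable({"height": 1.5, "stiffness": 0.9}))  # False

A real LAGR system has to learn distinctions like these from experience rather than rely on hand-set thresholds, which is precisely what made the program's software challenge so hard.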

Researchers initially believed that visual learning—making basic sense of a surrounding based on changes in light—would be easy to implement in computer systems. But Eero Simoncelli, a principal investigator at New York University's (N.Y.U.) Laboratory for Computational Vision, pointed out that humans take vision for granted and overlook its complexity. "For you to avoid an object in your path is trivial," he says. "What's visual input [to a computer]? It's a bunch of pixels. It's a bunch of numbers that tell you how much light fell on each part of the sensor. That's a long way from a description of a cup sitting on a table." Extracting symbolic descriptions from a large set of numeric values, he adds, is much harder than anyone realized.
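Simoncelli's point is easy to make concrete: to a program, a frame from the robot's camera is nothing but a grid of numbers. The toy Python snippet below uses a random array in place of real sensor data.

import numpy as np

# A camera frame is just an array of numbers: one brightness value per
# pixel. An 8-by-8 random grayscale image stands in for real sensor data.
frame = np.random.randint(0, 256, size=(8, 8))

print(frame)         # "a bunch of numbers"
print(frame.mean())  # simple statistics are easy to compute...

# ...but nothing in these values says "cup," "table" or "tree trunk."
# Turning raw intensities into a symbolic description of the scene is
# the hard part of visual learning.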

Classifying natural obstacles was but one of myriad factors that DARPA researchers had to predict and implement in a software program to expand the capacity of a mobile robot to quickly analyze and travel through an environment. "Of course, no one [knew] how to design this," says Yann Lecun, a professor of computer science at N.Y.U.'s Courant Institute of Mathematical Sciences who led the university's team. "So DARPA [was] interested in funding projects that advance the science of [robot] learning and vision."

Lecun, who has a knack for designing computer systems that pick out the key visual features in an environment, was an ideal candidate for the LAGR project. DARPA provided the funding and a standard test vehicle so Lecun and Urs Muller, CEO of software maker Net-Scale Technologies in Morganville, N.J., could focus on writing the software. They set out to push the realm of visual-based navigation forward—or to at least bring it up to speed.

A 2002 study by the Washington, D.C.–based National Research Council found that, once a vehicle's physical capability and the complexity of the course are accounted for, gains in the speed of unmanned ground vehicles from 1990 to 2000 were greatly outpaced by the rapid improvement in computer processing over the same period. Muller points out that over the past decade there has been a 100-fold increase in computing power and a 1,000-fold gain in memory capacity, yet unmanned navigation systems have lagged far behind these advances and will continue to do so without new approaches to visual learning. "The limiting factor in software [design] is the human imagination," he says.

Until LAGR, most self-navigating mobile robots could only scan their immediate surroundings and plot a course over short distances. This made it difficult for robots to figure out an optimum route to any place farther than their own shortsighted universe of about 25 feet (7.6 meters), limiting them to a feel-as-you-go approach that often resulted in time-wasting, circuitous paths to a destination.
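The cost of that shortsightedness is easy to demonstrate with a toy simulation. In the hypothetical Python sketch below, a robot that senses only a couple of grid cells around itself drives into a cul-de-sac and has to back out, while a robot that can survey the whole map plans the shorter route from the start. The grid, sensing radius and replanning strategy are illustrative simplifications, not any LAGR team's algorithm.

from collections import deque

# Toy world: 0 = open ground, 1 = impassable. A U-shaped obstacle opens
# toward the robot, so the straight-line route leads into a dead end.
WORLD = [
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 0, 0, 0],
]
ROWS, COLS = len(WORLD), len(WORLD[0])
START, GOAL = (2, 0), (2, 8)

def bfs(blocked, start, goal):
    """Shortest grid path that avoids every cell known to be blocked."""
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < ROWS and 0 <= nxt[1] < COLS
                    and nxt not in blocked and nxt not in came_from):
                came_from[nxt] = cell
                frontier.append(nxt)
    return None

def drive(sense_radius):
    """Sense nearby obstacles, plan as if unseen ground is open, take one
    step, and repeat. Returns the number of steps actually driven."""
    known_blocked, pos, steps = set(), START, 0
    while pos != GOAL:
        for r in range(ROWS):              # sense obstacles within range
            for c in range(COLS):
                if WORLD[r][c] and abs(r - pos[0]) + abs(c - pos[1]) <= sense_radius:
                    known_blocked.add((r, c))
        path = bfs(known_blocked, pos, GOAL)
        pos = path[1]                      # one step along the best guess
        steps += 1
    return steps

print(drive(sense_radius=2))    # shortsighted robot: 14 steps on this map
print(drive(sense_radius=50))   # sees the whole obstacle at once: 12 steps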
