According to TED.com: “When a very young child looks at a picture, she can identify simple elements: ‘cat,’ ‘book,’ ‘chair.’ Now, computers are getting smart enough to do that too. What’s next? In a thrilling talk, computer vision expert Fei-Fei Li describes the state of the art — including the database of 15 million photos her team built to ‘teach’ a computer to understand pictures — and the key insights yet to come.”

“As Director of Stanford’s Artificial Intelligence Lab and Vision Lab, Fei-Fei Li is working to solve AI’s trickiest problems — including image recognition, learning and language processing.”

“Using algorithms built on machine learning methods such as neural network models, the Stanford Artificial Intelligence Lab led by Fei-Fei Li has created software capable of recognizing scenes in still photographs — and accurately describing them using natural language.”

“Li’s work with neural networks and computer vision (with Stanford’s Vision Lab) marks a significant step forward for AI research, and could lead to applications ranging from more intuitive image searches to robots able to make autonomous decisions in unfamiliar situations.”

“How we’re teaching computers to understand pictures” by Fei-Fei Li


For those of you not familiar with TED Talks, here is a brief summary from www.ted.com: “TED is a small nonprofit devoted to Ideas Worth Spreading. It started out (in 1984) as a conference bringing together people from three worlds: Technology, Entertainment, Design. Since then its scope has become ever broader. Along with two annual conferences — the TED Conference in Long Beach and Palm Springs each spring, and the TEDGlobal conference in Oxford UK each summer — TED includes the award-winning TEDTalks video site, the Open Translation Project and Open TV Project, the inspiring TED Fellows and TEDx programs, and the annual TED Prize.”

——————————————————————————————————–

I look forward to your thoughts and comments!