Computers can now describe images using language you'd understand
The technology has its problems, especially if you don't have a large pool of training images, and it frequently makes mistakes, as you can see in the examples above. However, these are still early days. Larger data sets should improve the detection routine's performance, and refinements to the code itself are likely. Once it gets significantly better, it could have a tremendous effect on everything from artificial intelligence to search. A robot could tell you exactly what it sees without requiring that you look at a camera to check its findings; alternatively, you could search for images using ordinary sentences and get only the results you want. It might be years before you see Google's technique used in the real world, but it's clear that you won't have to deal with stilted, machine-like descriptions for too much longer.
Google Research Blog, Stanford University
By Jon Fingas
Engadget