Search by Voice
Vic Gundotra, Google’s vice president of engineering, introduced several new ways to search on a mobile phone, including by voice. The company already allows people to search from their mobile phones by speaking queries in English and Chinese; Mr. Gundotra said Google was adding Japanese today and would eventually add all major languages.
Demonstrating the tool, a Google employee spoke a fairly long query asking for the best restaurants near Google’s offices in Tokyo. The Droid phone returned a detailed map in response.
“We really do get the sense that we are just now beginning to sense the possibilities,” Mr. Gundotra said, citing improvements in wireless connectivity and cloud computing.
He said speech recognition had improved enough that someone could speak a request in English, and have Google translate it and read it back to the phone owner in Spanish, a capability the company hopes to deliver in 2010.
Search by Location
Mr. Gundotra said search results on mobile phones can be instantly customized based on location. A search typed in Boston that begins “R… e…” will begin turning up Red Sox results, for example.
Smartphones also allow Google to serve up more geographically relevant results. Using the phone’s GPS coordinates, Google can automatically show users things like nearby restaurants, coffee shops and bars. This feature will also be worked into the Google mobile maps application in the Android app store.
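At its core, the "nearby results" behavior described above amounts to ranking candidate places by their great-circle distance from the phone's GPS fix. A minimal sketch of that idea, using the standard haversine formula (the place names and coordinates below are purely illustrative, not Google data):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical candidate places (name, lat, lon) -- illustrative only.
places = [
    ("Sushi Bar A", 35.6660, 139.7310),
    ("Coffee Shop B", 35.6580, 139.7455),
    ("Restaurant C", 35.6895, 139.6917),
]

def nearby(user_lat, user_lon, places, limit=2):
    """Return the `limit` places closest to the user's GPS fix."""
    return sorted(
        places,
        key=lambda p: haversine_km(user_lat, user_lon, p[1], p[2]),
    )[:limit]

# A GPS fix somewhere in central Tokyo (illustrative coordinates).
print(nearby(35.6612, 139.7292, places))
```

A production system would of course query a spatial index rather than sort every candidate, but the ranking signal is the same: distance from the device's reported position.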
Search by Sight
In perhaps the most ambitious demonstration, Mr. Gundotra talked about an experimental new tool called Google Goggles. (No, it does not make people look more attractive.) Users can take a photo and submit the image to Google as a query. For instance, you can take a photo of a landmark in a foreign country, submit it to Google and get information about the site. The tool is now available in Google Labs.
“It’s incredibly impressive when you understand what’s going on,” Mr. Gundotra said. Images are sent to Google’s servers, where Google uses the nascent technology of computer vision to recognize the photo and return information within moments. “Today marks the beginning of this journey.”
The tool could potentially analyze an image of a face and come up with the person’s name. Mr. Gundotra said the company would not introduce that capability until it could work out some privacy issues.
Real-Time Search
Amit Singhal, a Google fellow, took the stage and said Google needed to index the Web every second, not every minute. Pointing to Twitter, he said, “Information is being posted at a pace I have never seen before, and in this information environment, seconds matter.”
Google is adding real-time information to its search results. Search for “Obama” and news stories, Tweets, blog posts and items from Yahoo Answers will cycle onto the page as soon as they are written. Google search pages are about to get a lot more dynamic.
Google signed a deal with Twitter in October, shortly after Microsoft signed a similar one. (Microsoft is demonstrating an early version of its Twitter integration here.)
“This is the comprehensive real time Web,” Mr. Singhal said.
Real-time search will be rolled out to users over the next few days. It will also be available on Android and the iPhone.
Google Trends will also add “hot topics” from the real-time Web, Mr. Singhal said, plucking information about what people are talking about from Twitter, among other sites.
Finally, Google is striking real-time deals with both Facebook and MySpace. Updates from public pages on Facebook will appear in Google real-time search, as will any publicly posted comments on MySpace.