What The Heck Will Google Do With These Scary Military Robots?

From my Forbes blog:

Let’s see, we have a company that already knows everything about us, has possibly the world’s largest computer network, has recently built one of the biggest artificial-intelligence teams in the world–a company so powerful that it feels the need to soften its dominance with the informal motto, “Don’t be evil.”

And now Google–yes, of course we’re talking about Google–has bought a military robot company called Boston Dynamics. Not just any robot maker this time–after all, it has already quietly bought seven others over the past year, apparently to give former Android chief Andy Rubin another chance at a moonshot project. No, unlike the other robot makers, this company makes machines by the names of BigDog, Atlas, and Cheetah that can variously outrun Usain Bolt and hurl cinderblocks 17 feet.

So, we’ve got the potential for killer robots that know where you live and can outrun you when they find you. What’s not to like?

All jokes about Skynet, Terminators, and Robocops aside, the latest acquisition raises a serious question about what Google has in mind. It looks for all the world like it’s pursuing yet another seemingly crazy side project that has nothing to do with its mission to organize the world’s information and make it universally accessible and useful. It’s now trying out self-driving cars, home package delivery, wearable computers, and anti-aging technologies.

Clearly it’s time for Google to update its mission statement, not to mention the “Ten things we know to be true,” a list that includes such outdated gems as “It’s best to do one thing really, really well.” …

Read the rest of the post.

Meet Hummingbird: Google Just Revamped Search To Answer Your Long Questions Better


From my Forbes blog:

Google has updated the core algorithm that determines the answers we get to queries on its search engine, in a bid to handle longer, more complex questions better.

The update, code-named Hummingbird, is the biggest change to the underpinnings of the world’s leading search engine since early 2010, when Google upgraded its algorithm to one it called Caffeine. Google made the change about a month ago, it announced at a press event in the garage of the Menlo Park (Calif.) house where Google started. The event also celebrated the 15th anniversary of Google’s founding, which is tomorrow.

Most people won’t notice an overt difference in search results. But with more people making more complex queries, especially as they can increasingly speak their searches into their smartphones, there’s a need for new mathematical formulas to handle them.

This update to the algorithm focuses more on ranking sites for better relevance by tapping further into the company’s Knowledge Graph, its encyclopedia of 570 million concepts and relationships among them, according to Amit Singhal, Google’s senior VP of search. (For example, there’s a Knowledge Graph “card,” or information box, for the Eiffel Tower, and Knowledge Graph knows it’s a tower, that it has a height, that it’s in Paris, etc., so Google can anticipate you might want to know some of those facts.) Caffeine was more focused on better indexing and crawling of sites to speed results.
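The Eiffel Tower example above can be sketched in a few lines of code. This is a toy illustration of the idea behind a knowledge graph–entities carrying typed attributes and relationships, so fact questions can be answered directly–and an entirely hypothetical data model, not Google’s actual implementation:

```python
# Toy knowledge graph: each entity maps to its attributes and relations.
# (Hypothetical structure for illustration only.)
knowledge_graph = {
    "Eiffel Tower": {
        "type": "tower",
        "height_m": 330,
        "located_in": "Paris",
    },
    "Paris": {
        "type": "city",
        "located_in": "France",
    },
}

def answer(entity, attribute):
    """Return a stored fact about an entity, or None if unknown."""
    return knowledge_graph.get(entity, {}).get(attribute)

print(answer("Eiffel Tower", "located_in"))  # Paris
```

Because the graph knows the Eiffel Tower *has* a height and a location, a search engine can anticipate those follow-up questions rather than just matching keywords.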

After the event, Scott Huffman, a key engineering director at Google currently working on natural language, told me that part of the impetus for the change was that as more people speak searches into phones, they’re doing so in a more natural way than they type queries–which is to say, in more complicated ways. So Google’s search formulas needed to be able to respond to them.

Partly that is through even greater use of the Knowledge Graph, so obvious discrete terms can be identified quickly. But it’s also interesting that although queries are getting more complex, that doesn’t always mean it’s harder to find the right answers. The more terms people use, Huffman says, the more context Google can divine. So those extra words, even if they’re in a more complex query, can give Google better information–but only if the algorithms are adjusted to be able to recognize the relationships among those terms.

Ultimately, he says, “we want to get to a natural conversation” between people and Google search on whatever devices they’re using. …

Read the rest of the story.

YouTube’s Prankster Engineer Keeps Google’s Video Site Humming

YouTube’s Billy Biggs

From Forbes magazine’s annual innovators list:

You probably don’t know his name, but Billy Biggs is one of the people who has helped keep Google on Forbes’ list of the world’s most innovative companies.

In the third annual version of the list out today, Forbes highlights nearly a dozen next-generation innovators who are expected to create the products and services these companies will be counting on to remain innovation machines.

Biggs, a software engineer at YouTube since Google bought the video site in 2006, has had a hand in most of the major projects there already. But at just 35, he will be called upon to create many more. Overall, he says, his work is about “making sure the systems are built for the future and we’re able to build cool things”–even if he doesn’t yet know what they will be. Here’s a closer look at his work:

Billy Biggs likes to say pranks are his full-time job at YouTube, Google’s video service. For April Fool’s Day 2010, for instance, he and a few other software engineers created a new video display format called TEXTp. Ostensibly aimed at cutting network bandwidth costs, it turned YouTube videos into colorful streams of text characters.

Don’t let those hijinks fool you. Labeled a “hidden gem” by a former YouTube executive, Biggs has had a hand in nearly every major technical project there since Google bought it in 2006. His work as principal architect for YouTube’s computer systems and software and its website is credited with helping YouTube reach an industry-leading 6 billion hours of video a month viewed by more than a billion people.

That massive audience has put the site in a position to challenge television for consumer attention and marketer budgets–just as TV faces many new challenges to its reign as the world’s most popular entertainment medium. …

Read the rest of the story.

Meet The Guy Who Helped Google Beat Apple’s Siri

Google’s Jeff Dean

From my Forbes blog:

For all the attention lavished on Siri, the often-clever voice-driven virtual assistant on Apple’s iPhone, Google’s mobile search app lately has impressed a lot more people. That’s partly thanks to Google Now, its own virtual assistant that’s part of that app, which some observers think is more useful than Siri.

But the success of Google’s mobile search stems at least as much from a big improvement over the past year in Google’s speech recognition efforts. That’s the result of research by legendary Google Fellow Jeff Dean and others in applying a fast-emerging branch of artificial intelligence called deep learning to recognizing speech in all its ambiguity and in noisy environments. Replacing part of Google’s speech recognition system last July with one based on deep learning cut error rates by 25% in one fell swoop.

As I wrote in a recent article on deep learning neural networks, the technology tries to emulate the way layers of neurons in the human neocortex recognize patterns and ultimately engage in what we call thinking. Improvements in mathematical formulas coupled with the rise of powerful networks of computers are helping machines get noticeably closer to humans in their ability to recognize speech and images.

Making the most of Google’s vast network of computers has been Dean’s specialty since he joined Google an almost inconceivable 14 years ago, when the company employed only 20 people. He helped create a programming tool called MapReduce that allowed software developers to process massive amounts of data across many computers, as well as BigTable, a distributed storage system that can handle millions of gigabytes of data (known in technical terms as “bazillions.”) Although conceptual breakthroughs in neural networks have a huge role in deep learning’s success, sheer computer power is what has made deep learning practical in a Big Data world.
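The MapReduce model mentioned above can be sketched in miniature: a map step emits key/value pairs, a shuffle groups them by key, and a reduce step combines each group. The real system distributes these phases across thousands of machines; this single-process word count (the canonical MapReduce example) just shows the programming model:

```python
from collections import defaultdict

def map_phase(document):
    # Emit (word, 1) for every word in the document.
    for word in document.split():
        yield word.lower(), 1

def reduce_phase(word, counts):
    # Combine all values emitted for one key.
    return word, sum(counts)

def map_reduce(documents):
    groups = defaultdict(list)          # the "shuffle" step: group by key
    for doc in documents:
        for key, value in map_phase(doc):
            groups[key].append(value)
    return dict(reduce_phase(k, v) for k, v in groups.items())

docs = ["the quick brown fox", "the lazy dog"]
print(map_reduce(docs)["the"])  # 2
```

The appeal of the model is that a developer writes only the map and reduce functions; the framework handles distribution, grouping, and fault tolerance.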

Dean’s extreme geekitude showed in a recent interview, when he gamely tried to help me understand how deep learning works, in much more detail than most of you will ever want to know. Nonetheless, I’ll warn you that some of this edited interview still gets pretty deep, as it were. Even more than the work of Ray Kurzweil, who joined Google recently to improve the ability of computers to understand natural language, Dean’s work is focused on more basic advances in how to use smart computer and network design to make AI more effective, not on the application to advertising.

Still, Google voice search seems certain to change the way most people find things, including products. So it won’t hurt for marketers and users alike to understand a bit more about how this technology will transform marketing, which after all boils down to how to connect people with products and services they’re looking for. Here’s a deeply edited version of our conversation:

Q: What’s “deep” about deep learning?

A: “Deep” typically refers to the fact that you have many layers of neurons in neural networks. It’s been very hard to train networks with many layers. In the last five years, people have come up with techniques that allow training of networks with more layers than, say, three. So in a sense it’s trying to model how human neurons respond to stimuli.

We’re trying to model not at the detailed molecular level, but abstractly we understand there are these lower-level neurons that construct very primitive features, and as you go higher up in the network, it’s learning more and more complicated features.
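The layered structure Dean describes can be made concrete with a minimal forward pass through a deep (multi-layer) network: each layer transforms the previous layer’s output, which is how later layers come to represent more abstract features. The weights here are arbitrary toy values, not a trained model:

```python
def relu(x):
    # A common neuron nonlinearity: pass positive values, zero out the rest.
    return max(0.0, x)

def layer(inputs, weights, biases):
    # One fully connected layer: weighted sum per neuron, then nonlinearity.
    return [relu(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x, layers):
    # "Deep" means stacking several such layers.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Three stacked layers: 2 inputs -> 3 neurons -> 2 neurons -> 1 output.
net = [
    ([[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, 0.0]),
    ([[0.7, 0.2, -0.5], [0.3, 0.9, 0.1]],    [0.0, 0.0]),
    ([[-1.0, 1.0]],                           [0.2]),
]
print(forward([1.0, 2.0], net))  # a one-element list, roughly [1.26]
```

Training such a network means adjusting the weights so the output matches known answers; the pre-training techniques Dean discusses below are what made that feasible for networks with many layers.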

Q: What has happened in the last five years to make deep learning a more widely used technique?

A: In the last few years, people have figured out how to do layer-by-layer pre-training [of the neural network]. So you can train much deeper networks than was possible before. The second thing is the use of unsupervised training, so you can actually feed it any image you have, even if you don’t know what’s in it. That really expands the set of data you can consider because now, it’s any image you get your hands on, not just one where you have a true label of what that image is [such as an image you know is a cheetah]. The third thing is just more computational power. …

Read the full interview.

Interview: How Ray Kurzweil Plans To Revolutionize Search At Google

Google’s Ray Kurzweil (Photo: Wikipedia)

From my Forbes blog:

When Google announced in January that Ray Kurzweil would be joining the company, a lot of people wondered why the phenomenally accomplished entrepreneur and futurist would want to work for a large company he didn’t start.

Kurzweil’s answer: No one but Google could provide the kind of computing and engineering resources he needed to fulfill his life’s work. Ever since age 14, the 65-year-old inventor of everything from music synthesizers to speech recognition systems has aimed to create a true artificial intelligence, even going so far as to predict that machines would match human intelligence by 2029.

Now, as a director of engineering at Google, he’s focusing specifically on enabling computers to truly understand and even speak in natural language. As I outlined in a recent story on deep learning–a fast-rising branch of AI that attempts to mimic the human neocortex to recognize patterns in speech, images, and other data–Kurzweil eventually wants to help create a “cybernetic friend” that knows what you want before you do (that is, if someone else doesn’t get to it first).

Indeed, Kurzweil’s focus is timely from a competitive standpoint as well. Google upped the ante on Apr. 29 by bringing its Google Now voice search app to the iPhone and iPad, in direct competition with Apple’s Siri. And Facebook just revealed that it built a natural-language interface for its Graph Search service announced earlier this year. It’s becoming clear that search is already starting to move beyond the “caveman queries” that characterized effective search techniques until recently.

In a recent interview I conducted for the story, Kurzweil revealed a surprising amount of detail about his planned work at Google. No doubt the nature of that work will evolve as he settles in at the company, but this interview provides possibly the deepest look so far at his plans.

At least initially, that work won’t relate directly to advertising. But marketers will need to understand how profoundly Kurzweil’s and others’ work at Google could change not only what search will become in the age of more and more intelligent machines, but also the way we interact with information and even each other. All that is sure to mean big changes in the nature of advertising and marketing–well before 2029.

Q: In your book, How to Create a Mind, you lay out a theory of how the brain works. Can you explain it briefly?

A: The world is hierarchical. Only mammals have a neocortex, and the neocortex evolved to provide a better understanding of the structure of the world so you can do a better job of modifying it to your needs and solving problems within a hierarchical world. We think in a hierarchical manner. Our first invention was language, and language is hierarchical.

The theory behind deep learning, which I would call hierarchical learning, is that you have a model that reflects the hierarchy in the natural phenomenon you’re trying to learn. If you don’t do that, it’s going to be much weaker and fooled by apparent ambiguities.

Q: How will you apply that theory at Google?

A: What I’ll be doing here is developing hierarchical methods specifically aimed at understanding natural language, extracting semantic meaning … actually developing a way to represent and model the semantic content of documents to do a better job of search and answering questions.

An increasing percentage of queries to Google are in the form of questions. The questions right now can’t have an indefinite complexity to them. But if we can actually model language in a hierarchical fashion, we can do a better job of answering questions and doing search in general, by actually modeling what all these billions of web pages are trying to say. …

Read the rest of the interview.

This Is How Google (And Its Advertisers) Will Really Get Inside Your Head


From my Forbes blog:

Google cofounder Sergey Brin said only half-jokingly back in 2002 that his company aimed to create the equivalent of the intelligent computer HAL 9000 in 2001: A Space Odyssey, but without the bug that resulted in it, you know, killing people.

More than a decade later, Google isn’t nearly there, for better or worse. But lately, it has been aiming much more directly at building HAL, or what’s sometimes called the Google Brain. As I wrote in a recent article, a fast-emerging branch of artificial intelligence called deep learning is helping Google and other companies and researchers produce significant advances in machines that at least approach the way we think. It won’t be long–for better or worse–before their work also has a profound impact on marketing and advertising as well. …

Read the rest of the analysis.

Deep Learning: Artificial Intelligence Finally Gets Smart


From my feature story in Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search. Says Peter Lee, head of Microsoft Research USA, “Deep learning has reignited some of the grand challenges in artificial intelligence.” …

Read the full story.

With Graph Search, Can Facebook Kill LinkedIn, Yelp–Even Google?


Facebook CEO Mark Zuckerberg introduces Graph Search (Photo: Robert Hof)

From my Forbes.com blog The New Persuaders:

Facebook took pains today to tell the world that its new social search service, Graph Search, is only a very limited tool that it will roll out very slowly over a period of months and years.

But CEO Mark Zuckerberg and his search staff couldn’t help but reveal their enthusiasm for the vast possibilities. For all their professed modesty, what struck me at the company’s press event introducing the service was how specific and broad-ranging Zuckerberg and his Graph Search leaders were about what it could provide: just about everything, potentially, that every company from LinkedIn to Yelp to Foursquare to Match.com to … yes, even Google provides today.

That’s an exaggeration, of course, that even Facebook folks surely didn’t intend. All of those companies have distinct, well-developed services with extensive user bases that are unlikely to shrivel up no matter how good Graph Search turns out to be. In most cases, they will probably retain a durable advantage for years to come. And as Zuckerberg said, it’s very, very early for Facebook search, and search is a devilishly complex discipline to do well.

Still, to hear it from Facebook itself, Graph Search will offer ways to provide similar services, sometimes in potentially easier and more effective ways:

* Recruiting: One of the first examples Facebook provided today was that Graph Search could help in finding qualified candidates for jobs. For instance, Lars Rasmussen, the Facebook director of engineering who heads the Graph Search team, mentioned that he could find people from NASA Ames Research Center who are friends of Facebook employees.

As investors, who bid up LinkedIn’s shares a fraction today, no doubt recognize, that company has a pretty good if not exclusive hold on recruiters. And given that finding friends who worked somewhere is a rather specific subset of qualified candidates for a position, there’s not much chance recruiters will abandon LinkedIn for Facebook anytime soon. But Facebook, already used in various ways by recruiters, could siphon off activities that might otherwise have gone to LinkedIn. … Read more at The New Persuaders. But to conclude …

No, Facebook won’t kill any of these companies, certainly not anytime soon. They’re too strong, Facebook has too much still to build and then to prove, and rarely does a company kill another healthy company no matter how good its products are.

Investors may be thinking as much, as they sold Facebook shares to the tune of a 2.7% drop in price today. But if anyone doubted Facebook’s ability to keep disrupting the status quo, they surely shouldn’t doubt it anymore. Even with its baby steps into the search business, Facebook has again set new terms of engagement in the battle for the soul, or at least the cash register, of the Internet.

13 Questions For 2013 In The World Of Online Advertising

Cross-posted at my Forbes.com blog The New Persuaders:

For the past few years, I’ve offered predictions here and on The New Persuaders for what’s likely to come in the next year. This year, I’m going to shake it up and throw out a few questions instead. I think I know the answers to some of them, but if many won’t be answered definitively by year-end, they remain top of mind for me and probably for many others in online media and advertising.

So in this, the first full week of the new year, here are some questions to which I hope to start finding answers:

* Will image advertising finally take off online? I have to believe that as people spend more and more time online instead of reading print publications and watching TV, brand marketers will want and need to reach them there with ads that are aimed at creating consideration for later purchases, not just eliciting an immediate sale like Google’s search ads and too many banner ads. We’re already starting to see signs of such advertising with the early success of Facebook’s Sponsored Stories, Twitter’s Promoted Tweets, and YouTube’s TrueView ads–not to mention the explosion of tablets, which provide a lean-back experience more compatible with image advertising. This won’t be a sudden change, since brand marketers and agencies don’t move quickly, but you can’t tell me there aren’t going to be increasingly compelling ways for brands to influence people online.

* Can advertisers and publishers make ads more personal without scaring people? That’s the $64 billion question, and it likely won’t get answered in full this year. It’s easy for headline-hungry politicians to make a big deal out of Facebook’s latest privacy gaffe or the Wall Street Journal’s or the New York Times’ latest scare story about an ad that followed somebody all over the Web. That’s especially so since Facebook really does push the privacy envelope too far at times, and too many advertisers idiotically chase one more sales conversion at the cost of scaring off hundreds of others or inviting onerous legislation. But making ads more useful to each individual person is not only crucial to online commerce, it’s potentially better for most consumers as well–seriously, I don’t need to see another ad for a fitness center or a new credit card, but that ad for Camper van Beethoven’s new CD had me in a split-second. The answer lies in these two words, everyone: transparency and choice.

* Will mobile advertising work? Well, some of it already does, to hear Google and Facebook tell it. And while those already devalued digital dimes so far turn to pennies when it comes to ads on smartphones and tablets, this still feels more like growing pains than a crisis in online advertising. Sure, the screens are small and people don’t like to be interrupted in their mobile cocoons. So a different kind of advertising is probably needed–clearly, banners don’t cut it on a four-inch screen. But the value to advertisers of knowing your location and maybe the apps you’re using, coupled with knowledge of what your friends like–all with permission, of course–is huge. That permission may be really tough to earn. But if advertisers can offer tangible value, perhaps in the form of useful services related to what you’re doing or looking for or shopping for–and isn’t that the ultimate native ad?–people may loosen their hold on that information.

I have a lot more questions, but I’ve got to stop before too much of 2013 is gone.

Check out more questions at the full post.

Apple CEO Tim Cook Is Blowing Smoke When He Dismisses Rival 7-Inch Tablets


Apple’s iPad mini

From my Forbes.com blog The New Persuaders:

As a company that way more often than not comes out with superior products, Apple rarely appears defensive. Today was an exception.

On Apple’s fourth-quarter earnings call, CEO Tim Cook took repeated potshots at small tablets of the kind that–yes–Apple itself just debuted. The iPad mini is clearly aimed at blunting the appeal of seven-inch tablets such as Google’s Nexus 7 and Amazon.com’s Kindle Fire.

While I think Cook is probably right that the iPad will continue to dominate tablets, and even that it continues to make the best ones, his overenthusiastic criticism of seven-inch tablets struck me as surprisingly defensive. Saying Apple didn’t set out to build a “small, cheap tablet,” he called the competitors “compromised” products. “We would never make a seven-inch tablet,” he sniffed.

Why not? Because they’re too small, he said. The iPad mini is almost an inch larger, which means a 30% larger screen and 50% larger viewing area. I’ll grant that that is noticeable, and appealing.

But c’mon. These are all tablets you can hold in one hand, and acting as if the iPad mini is something utterly unique–“in a whole different league,” as he put it–comes off more than a bit desperate. Apple is clearly playing catch-up here, and trying to position the iPad mini as nothing like the Nexus 7 or Kindle Fire only serves to make us realize that Apple actually does feel threatened by these devices that beat it to what has turned out to be a real market. …

Read the complete post at The New Persuaders.