Google’s Blueprint for Search Domination
Do you really think Google would reveal its plans for how it wants search to evolve? I sure do. If you don’t believe me, just ask Matt Cutts. Or better yet, just watch him answer the question below.
After bypassing the cyborg comments, he makes some pretty profound statements: Google should be a “good assistant,” “understand the context,” and “synthesize information.” More importantly, he goes on to say that Google should be able to handle difficult queries by moving beyond data and knowledge toward analysis and wisdom. Now what does that mean?
Quick Algorithm Recap
In the famous words of Hitch, “You can’t know where you’re going until you know where you’ve been,” so to get a better understanding of the future, let’s back up a few years and look at what Google has done with previous algorithm updates. I am only going to hit the high points, but if you want to go deeper, I would recommend referencing SEOmoz’s Algorithm History.
- Florida Update – November 2003
- Paid Links – October 2007
- Rel Canonical – February 2009
- Social Signals – December 2010
- Panda – February 2011
- Google Authorship – June 2011
- Penguin – April 2012
- Knowledge Graph – May 2012
- EMD Update – September 2012
Many of the algorithm updates and iterations listed above were aimed at dismantling spam, curbing technical manipulation, and improving Google’s infrastructure. It took over a decade of progress before Google was even able to begin tackling the context issue.
Google Authorship and the Knowledge Graph were the catalysts for bringing data together in a sensible format. The Knowledge Graph pulls data from reliable sources to show images, descriptions, background information, people involved, and other related details, while Google Authorship connects content with a specific author. The Knowledge Graph is even more sophisticated than it appears at first glance. Bill Slawski at SEO by the Sea has uncovered that the information in Knowledge Graph panels can change depending on what users are searching for, so not all Knowledge Graph results are created equal.
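To get a feel for the kind of structured entity data the Knowledge Graph holds, here is a minimal sketch that builds a lookup request against Google’s public Knowledge Graph Search API. The endpoint comes from Google’s own documentation rather than anything in this post, and the API key is a placeholder.

```python
from urllib.parse import urlencode

# Public Knowledge Graph Search API endpoint (from Google's documentation).
KG_ENDPOINT = "https://kgsearch.googleapis.com/v1/entities:search"

def build_kg_request(query, api_key="YOUR_API_KEY", limit=1):
    """Return the request URL for a Knowledge Graph entity search.

    The api_key default is a placeholder; a real key is required
    to actually execute the request.
    """
    params = urlencode({"query": query, "key": api_key, "limit": limit})
    return f"{KG_ENDPOINT}?{params}"

# Example: build (but don't send) a lookup for an entity by name.
print(build_kg_request("Matt Cutts"))
```

The response for a query like this is JSON-LD describing the entity: its name, a description, an image, and related types, which is roughly the same bundle of facts you see rendered in a Knowledge Graph panel.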
Back to the Present
So what does Matt Cutts mean when he says that search is moving toward analysis and wisdom? The simple answer is that Google wants to answer every single question the user has on the very first try and, if possible, before the user even asks.
In an article in the Guardian, Google’s CEO Larry Page said that they are trying to reduce every possible friction between the user, their thoughts, and the information they want to find. He even mentions brain implants that could answer questions the moment a thought originates. Maybe Larry and Matt are in cahoots to make us all cyborgs. But I digress…
In order for Google to get to the point where it can answer every possible question, a couple of things have to happen: it needs access to a lot of data and a way to relate that data together. Part of the data-gathering process has already been explained above with Google Authorship and the Knowledge Graph, but let’s continue down the rabbit trail of the other sources Google is using to collect data.
First, there is Google Analytics, which is installed on millions of websites. Have you ever wondered why Google Analytics is free for up to 10 million pageviews a month? It is because of the amount of data that puts at Google’s disposal. Google makes it very easy for you to share your data: when you set up a Google Analytics account, the data sharing settings are conveniently pre-checked for you, even though they are technically optional. This data allows Google to understand user behavior on individual websites but, more importantly, across entire verticals.
Let’s not forget that Google has been fighting for more of the web browser market. But why would they create a web browser when there was already so much competition at launch? The answer is simple: data. Mike King has even presented his case that Chrome is actually Googlebot in disguise. It makes sense if you think about it. Through Chrome, Google can easily follow user behavior even when people never use Google Search. If a webmaster decides not to share data through Google Analytics, Google can still learn page load times, time on page, and other user behavior metrics. Chrome can even surface websites that the regular Googlebot might miss, as long as a user types the domain directly into the browser. No more hiding from the search giant!
(And if that wasn’t enough)
A year ago, Google’s infrastructure would have made it the second-largest Internet Service Provider, if back then it had actually offered Internet access as a service. Fast forward a year, and Google has already established a foothold in Kansas, Missouri, Texas, and Utah for its own broadband service, Google Fiber. Becoming an ISP elevates Google to a completely different level of user data gathering. Now it has the power to track information beyond websites. Just as with Analytics, Google is giving Internet access away: up to 5 Mbps free for at least 7 years. Who would say “no” to saving $30 on their Internet bill? Please don’t be surprised; Google already has the infrastructure in place and spent a billion dollars on it last quarter. What matters to Google is the data it will have at its disposal. Its reach now goes beyond the web browser: it can know what songs you like to listen to on Pandora or Spotify, the messages you send through programs such as Skype or Trillian, the files you transfer through Dropbox or the new Adobe Creative Suite, and the list goes on. Anything that can be sent over the Internet can now be tracked. Googlebot is transforming like Optimus Prime.
I won’t even start on the new Google Maps, Google Glass, Google+, or the largest email network in the world, Gmail…
Back to the Future
When does all this information gathering stop? Google is hoarding companies like there is about to be a capitalism apocalypse, and each acquisition adds more potential ways to get data. It wouldn’t surprise me if Google continues to extend its reach outside the digital realm.
Think about it: after setting itself up as an ISP, Google will be one step away from becoming a legitimate competitor in cable services. Then not only would Google know that you “Like” The Big Bang Theory, it would know exactly how much you like it. Do you watch it every time it comes on, or do you watch the first 10 minutes before changing the channel to the local news? Maybe you are just an occasional Wednesday-night viewer. Before you know it, Google TV will offer personalized TV ads. It won’t be hard: all Google has to do is match its personalized online data with the cable account. No other company would have enough information to compete. Google’s only real threat would be being classified as a monopoly by the U.S. government. Then again, with all this potential information being stored by one company, I am surprised more people are not following the progress of CISPA through Congress.
The once linear algorithm of links and PageRank is going to evolve. It is becoming more dynamic thanks to artificial intelligence. Wil Reynolds has even seen this machine learning in action: Google recognizes his acronym #RCS as “Real Company Stuff,” even though the acronym was only coined a few months ago. I guarantee that mapping was not done manually. The improvements in Google’s algorithms have come from acquisitions like DNNresearch Inc., a machine learning company discovered through its award program. These acquisitions, along with Google’s own AI research, are beginning to create a paradigm shift in how concepts appear to be connected.
One thing is for certain: the search experience will become much more personalized than it already is as Google gets more data to sift through. Search will be the assistant that doesn’t need keywords. Soon, the lopsided algorithm that currently revolves around who has the most authoritative backlinks will begin to be a thing of the past.
Love what you read? Hate what you read? Either way I want to hear from you in the comments below.