The Fred Update does not exist. Algorithm updates are much more serious than you think.

Everyone has heard about the Fred update (some people even speak of a Fred algorithm!). But did you know that no such update exists, and never did? The name is just a joke made by Gary Illyes, who was tired of being asked for a comment every time the rankings of a few pages fluctuate slightly. It is a waste of time to write and think about updates that do not exist while truly serious changes in search results fly right under your radar.


In 2016, Google made 1,653 changes to its search results. As the search engine’s interface is remarkably constant, it can be assumed that most of them were backend-related. Even if just half of that number had anything to do with the ranking algorithms, it would mean that, on average, there were two updates per day. This figure is consistent with the official position of Google’s employees, who claim that there are 2-3 updates per day.
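A quick back-of-the-envelope check of that arithmetic: half of 1,653 is roughly 826 ranking-related changes, and 826 ÷ 365 days ≈ 2.3 changes per day – squarely within the claimed 2-3 range.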

Therefore, since Fred was born in March this year, there should have been 238-257 Fred updates implemented by now.

Even Barry Schwartz from Seroundtable.com admitted that the most recent update had “absolutely no pattern”.

Modern algorithmic updates

The times of Google’s transparency with regard to algorithms have long passed. The statement regarding the September 2016 Penguin update was remarkably generous, but it was also the only such statement issued in the last two years. The previous official post (from April 2016) dealt with the introduction of an algorithm appraising sites’ mobile-friendliness, but its impact has been called into doubt. An even earlier statement was issued in August 2014, announcing an algorithm promoting sites that use the HTTPS protocol, but Google’s own employees have since downplayed the HTTPS ranking boost. This short calendar shows that even Google itself does not treat discussion of updates to particular algorithms very seriously.

The word “core” has become the main theme of statements of varying levels of officiality. The Panda algorithm has been implemented into the core. The Penguin algorithm has been implemented into the core. Whenever Google employees are asked about bigger changes in search results, core-related changes are what they mention most frequently, aside from the jocular Fred.


This is a clear consequence of appointing John Giannandrea, an outstanding machine learning specialist, as the head of the Google search engine. He will soon have held the position for a year and a half. His job essentially consists of transforming old, costly algorithms (requiring input from 25 thousand highly paid engineers who have to, among other things, create updates) into self-reliant algorithms, sometimes even capable of learning, which can be implemented into the core (and which require… but this will be discussed below).

Unlike Fred, machine learning is no joke

Algorithms that become a part of the core are really serious, long-term changes.

For the last year, we have been observing significant fluctuations in site rankings every month, which bring to mind the Google Dance from a decade ago – only much higher in frequency. Of course, we are still dissecting them, which we can do thanks to access to a very large sample, but at this point it is hard to speak of them as updates. The numbers presented a few paragraphs above and the recent humorous statements from Google employees speak for themselves. Are you asking if there was an update? Yes, there was! And there will be another one today. And two or three more tomorrow.

Finally, John Mueller lost his patience and explained that it does not work that way:

So from our point of view there was no Fred update. This is a name that was given externally to a bunch of updates we’ve been doing over time. So it’s not the case where there’s this one thing that is changing in search, we make changes in search all the time and we are always working on trying to find ways to bring more relevant, more high quality content to users, and that can be affecting a variety of websites, a variety of areas where we show content in search.

It is not easy to talk about the core algorithms, mainly because of the scarcity of official statements regarding them. Google’s Andrey Lipattsev explained that the move into the core is connected with an algorithm’s self-reliance. Self-reliance, in the context of algorithms, is hard to define as anything other than the absence of any need for manual correction – and in a world of constantly changing search results, the absence of that need means the algorithm is able to correct its own functioning. This is where machine learning comes in.
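To make that idea of self-reliance concrete, here is a toy sketch – everything in it, from the function names down, is hypothetical and has nothing to do with Google’s actual pipeline – of an algorithm that corrects its own functioning by retraining on fresh human judgments instead of waiting for an engineer-driven update:

```python
import time

def fetch_fresh_rater_judgments():
    """Hypothetical feed of newly labeled (page features, quality verdict) pairs."""
    return []

def train_ranking_model(judgments):
    """Hypothetical training step that returns an updated ranking model."""
    return object()

def deploy(model):
    """Hypothetical hot-swap of the live ranking model."""
    pass

# The "self-reliant" part: no manual correction, just a standing refresh cycle.
while True:
    judgments = fetch_fresh_rater_judgments()  # new labeled data from human raters
    model = train_ranking_model(judgments)     # the algorithm corrects itself
    deploy(model)                              # no named, announced "update" needed
    time.sleep(24 * 60 * 60)                   # a daily refresh rather than a big release
```

Seen this way, there is no discrete event left to announce – the correction cycle simply runs.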

The dependence of an independent algorithm

Of course, it is not as if machine learning were a perfect brain in a processor (although such renders usually accompany articles about RankBrain and the like…) that corrects its functioning through self-reflection. ML algorithms need large amounts of categorized data, in which they find patterns that are later used for ranking search results. At Google, this data is provided by tens of thousands of Google raters.
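As an illustration of that dependence, here is a minimal, hypothetical sketch – the features, labels, and model choice are all invented for this example and are not Google’s actual system – of how categorized data from raters can be turned into a ranking:

```python
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical page features: [content length, inbound links, load time in seconds]
rated_pages = [
    [2400, 35, 1.2],
    [150, 2, 4.8],
    [3100, 50, 0.9],
    [300, 1, 6.1],
]
rater_labels = [1, 0, 1, 0]  # human rater verdicts: 1 = high quality, 0 = low quality

# The model finds patterns in the categorized data...
model = GradientBoostingClassifier().fit(rated_pages, rater_labels)

# ...which are later used to score and rank pages it has never seen.
candidates = {"page_a": [1800, 20, 1.5], "page_b": [200, 3, 5.0]}
scores = {name: model.predict_proba([feats])[0][1] for name, feats in candidates.items()}
print(sorted(scores, key=scores.get, reverse=True))  # pages ordered by predicted quality
```

The point of the sketch is the dependence itself: if the rater labels are noisy, the “independent” model faithfully learns the noise.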

There has been a lot of discussion about Google raters in recent months. Leading media outlets have pointed out that the Google search engine presents some disturbing search results, for instance:

December 2016. After typing “did the holocaust happen” into Google, the first result was “Top 10 reasons why the holocaust didn’t happen”. This meant that when you did a voice search via Google, the answer read out to you was the content of that first result!


Under the threat of large advertisers withdrawing from its programs, Google hired an additional 10 thousand raters.

At the same time there have been reports that point to much more serious problems:

“I’m not saying this is the entire reason for the current crisis,” says a former Google ad rater, who was not authorized to speak with WIRED about the program. “But I do believe the instability in the program is a factor. We raters train AI, but we know very well that human eyes—and human brains—need to put some deliberate thought into evaluating content.” (…) Raters say their workload suggests that volume and speed are more of a priority than accuracy. In some cases, they’re asked to review hours-long videos in less than two minutes. On anonymous online forums, raters swap time-saving techniques—for instance, looking up rap video lyrics to scan quickly for profanity, or skipping through a clip in 10-second chunks instead of watching the entire thing. (…) Multiple ad raters say they have been asked to watch videos with shocking content. “The graphic stuff is far more graphic lately … someone trying to commit suicide with their dog in their truck,” one rater said. The person set the truck on fire, the rater said, then exited the truck and committed suicide with a shot to the head. In the online forums frequented by ad raters, anonymous posters said they had seen videos of violence against women, children, and animals. Several posters said they needed to take breaks after watching several such videos in a row. Ad raters said they don’t know how Google selects the videos they will watch—they only see the title and thumbnail of the video before they rate it, not a rationale. (…) “The communication from Google was totally nonexistent,” one former rater said. “Google is legendary for not communicating.”

If the data used by the algorithms is gathered in conditions that are not conducive to quality, how does that impact the algorithms themselves? Google has found itself in serious (= costly) trouble over inappropriate content in its products and has taken some much-publicized measures to solve it. An analogous problem with the rest of the ranking algorithms does not necessarily have to be a serious (= costly) problem from the point of view of the Internet hegemon. Is the everyday Google Dance a consequence of making ML learn from low-quality data? Although this is a far-reaching question, we are not the only ones who see really large fluctuations in search results.


A moratorium on the word “update”? Not yet, but…

At this point, it is more appropriate to talk about “refreshing” rather than “updating” algorithms. Google will never divulge such confidential secrets, so this will always remain in the realm of speculation, but we can safely state (or even observe) that a large part of the fluctuations happen in a cyclical manner. The Internet is humongous even for Google’s incredible infrastructure with its special TPU units, and more complex calculations may take weeks to process. Reactions that take weeks can be observed after gaining new links, or even after the prosaic redirection of an old domain to a new one. What is the point of calling calculations that last a few weeks an update? And especially a Fred update? 🙂

I would be remiss if I did not end the article on a more positive note – the current state of the search query mentioned above:

[Screenshot: the current results for the search query discussed above]

Image sources: Twitter, IndianExpress.com, Google.com, Strategiq.co. Linked sources: Twitter.com, Google.com, Seroundtable.com, Moz.com, TheSEMPost.com, SearchEngineLand.com, Gadgets.ndtv.com, Wired.com.
