Paweł Nosko

Google Core Update December 2025 – a report from the eye of the storm

On December 11th, Google officially announced another Google Core Update. That the update arrived in December was far from a given: the gap since the previous core update was the longest on record, which had been fueling speculation in the industry for weeks. Google also confirmed a familiar pattern this time: Wednesdays and Thursdays are its favorite days to launch the biggest changes to the algorithm.

However, such a long window of silence between updates is hard to explain by holidays or seasonality. Everything indicates that Google deliberately delayed the release to refine the mechanisms that evaluate websites at a holistic level. The longer the break, the higher the expectations, and the larger the reshuffling we are seeing in the search results today.

From the first day of the rollout, I have been following user reports on Reddit and the WebmasterWorld forum, as well as analyses published on Barry Schwartz's blog, especially the comments under the posts, because that is where raw emotions and the first symptoms of change show up most often. At the same time, I have been analyzing data from dozens of my own websites, covering various business models and content types.

Thanks to this, I can look at this Core Update not only as an observer but also as a practitioner. In the rest of this article, I will share the conclusions that emerge from the eye of this storm: without simplifications, without SEO myths, and without promising quick recipes.

What is a Google Core Update and why does Google release it

Before we move further, it is worth pausing for a moment and answering the basic question: why does Google release core updates at all? These are not minor tweaks or reactions to single abuses. A Core Update is the moment in which Google reviews and updates the way it evaluates the entire internet — in bulk, systemically, and without exceptions.

Changes in ranking mechanisms

A Core Update is not about "punishing" specific pages or switching on one new ranking factor. Google modifies the mechanisms responsible for evaluating:

  • content relevance,
  • its quality,
  • its usefulness for the user,
  • credibility across the entire website.

In practice, this means a change in the weights of signals rather than a simple statement like "from today, X matters." This is exactly why, during a core update, some pages can suddenly gain and others fall, even though technically "nothing has changed on them." What has changed is how the algorithm interprets them.

Site Quality Score update

Site Quality Score (SQS) is an informal website-quality metric, never confirmed by Google, that the SEO industry has been talking about for years. Its existence is suggested by, among other things, analyses published by Barry Schwartz, as well as repeatable patterns visible in the data from successive core updates.

Our analyses at Design Cart clearly suggest that the Site Quality Score is updated only during a Core Update. That is when a massive recalculation of a website's holistic value takes place, covering not only the domain as a whole but also individual subpages. Importantly, between core updates this "quality score" remains practically frozen.

Over the last year, on many of our clients' online stores, we implemented elements that tangibly raised the level of E-E-A-T: hard evidence, video materials, expert content, and extensive informational sections. The effect? For months, silence. Subpages did not react and visibility stood still, even though the quality was objectively growing.

This mechanism is the source of frustration and disorientation for many webmasters. They do "everything right," invest time and resources, and Google seems not to notice. Only when the Site Quality Score is updated during a Core Update are those changes actually taken into account and rewarded, often in leaps rather than gradually.

That is why Core Updates are so turbulent. This is not a day-by-day evolution. This is the moment when Google defrosts the quality evaluation and recalculates it from scratch.

How a core update works "under the hood"

At this point, we enter an area that Google does not officially describe. What you read below is not algorithm documentation, but a set of observations based on data, repeatability from previous Core Updates, and real tests on live websites. It must be clearly stated: this is still a theory — but a theory that proves true surprisingly often.

From my point of view, a Core Update can be logically divided into three distinct phases that follow one another.

Phase I: defrosting of search results

This is the first phase, which can be observed at the moment of the Core Update announcement or shortly after. Search results begin to behave as if they have been "defrosted" — they become significantly more fluid and susceptible to change. Positions can jump from day to day, and sometimes even from hour to hour. In practice, this looks like a total SERP dance.

In this phase, new ranking rules also begin to apply, though still without a final "verdict." This is the moment of greatest chaos and, simultaneously, the greatest stress for webmasters.

During this phase, I noticed a very characteristic correlation: on every site where we had removed fluff content before the Core Update, drops appeared, regardless of the quality of the remaining content. It looked as if:

  • the new content was not yet being taken into account,
  • and the removal of the old content had temporarily cut off the site's existing power or created a "gap" in it.

We observed the same pattern during the June 2025 Core Update. Importantly, in that case, the sites that fell during Phase I returned to higher positions than before once the update ended. The larger and more aggressive the content changes (especially mass removal), the deeper the drop in Phase I.

This is one of the reasons why "cleanup" actions during the rollout can be very risky.

Phase II: recalculating Site Quality Score

After a period of violent fluctuations, a deceptive calm usually follows. Visibility stabilizes, changes are smaller, and many webmasters get the impression that "the worst is over." In practice, this is the quietest, but simultaneously the most computationally heavy phase of the Core Update.

Everything indicates that this is exactly when the Site Quality Score recalculation occurs. If we assume that Google uses advanced AI models for page-quality analysis (often referred to in the industry under the working name MUVERA), the scale of the operation is enormous. These are not single pages; these are millions of subpages of large websites being analyzed from scratch.

This stage could explain:

  • the relative calm in the SERPs,
  • delays and instabilities in Google Search Console,
  • periodic problems with PageSpeed Insights and other Google tools.

From the outside, it looks like the calm before the storm, but in the background, a massive recalculation of the holistic value of websites is underway.

Phase III: ranking with the new Site Quality Score

This is the phase everyone is waiting for — and the one that decides the "winners" and "losers" of this Core Update. After the Site Quality Score recalculation is finished, Google begins to actually rank pages according to the new quality assessment.

Precisely at this moment:

  • sites into which good, consistent work has been put for months or years begin to climb,
  • websites built on superficial quality, mass-produced content, or a lack of real experience lose visibility.

During the June 2025 Core Update, this effect was particularly visible a few days before the official announcement of the update's conclusion and a few days after it. That was when the largest gains and the most severe drops appeared, no longer chaotic but lasting.

Only after Phase III can one speak of the real picture of the situation. Everything that happens earlier is the process of the algorithm reaching a new equilibrium.

What Google pays attention to in a Core Update (what wins most often)

Instead of trying to break the algorithm down into hundreds of signals, it is better to look at the Core Update through a few interpretive filters that Google applies today when evaluating websites. These are not individual ranking factors, but a way of interpreting the quality of the content and of the site as a whole.

Relevance and user satisfaction (content + intent + Helpful Content)

The first and most important question the algorithm asks today is very simple: Does this page really answer the user's question better than the other results? It is not about whether the content contains all the keywords, but:

  • whether it closes the topic,
  • whether it takes context into account,
  • whether it answers real doubts that arise after asking a question.

The influence of the Helpful Content Update is very clearly visible here. "Correct" content, written just so that something is there, is no longer enough. Google is getting better at recognizing whether the user's need was actually met after entering the page, or whether they had to return to the results and keep searching.

Substantive quality and credibility (E-E-A-T in practice)

At this stage, the algorithm looks not only at what is written, but who is writing it and from what perspective. What counts is:

  • author's experience,
  • real expertise,
  • evidence (examples, data, own materials),
  • freshness of content,
  • thematic consistency of the entire website.

This is the moment when empty declarations stop working. The mere claim "expert with 10 years of experience" means nothing if the content does not show that experience in practice. Google is getting better and better at distinguishing knowledge from the mere repetition of knowledge.

"Site-wide" and the quality of the entire service

A Core Update very clearly operates at the level of the entire website, not just individual subpages. If a domain is:

  • full of thin content,
  • glued together from random topics,
  • based on mass-produced "SEO content,"

then even good subpages may struggle to rank.

This is exactly where we can talk about the end of the SEO fluff era. For years, such content worked because it increased the semantic reach of the site. Today, it increasingly acts as ballast that drags the domain down. Google looks at the site holistically: does this place really deserve visibility in a given topic?

UX and technical factors as an "amplifier," not a magic button

Page speed, mobile experience, ads, or pop-ups are rarely the direct cause of large drops. However, they very often worsen the overall assessment of user satisfaction. Ad clutter, aggressive pop-ups, a chaotic mobile layout, or slow loading mean that even good content is:

  • consumed less effectively,
  • abandoned faster,
  • recommended onward less often (directly or indirectly).

Technical factors do not "save" weak content, but they can amplify or dampen the effect of a Core Update.

Originality and uniqueness

At a certain point, all the above filters cease to be enough. If several pages meet high quality standards, Google must ask another question: How does this content differ from the rest? In many analyses published by SEO experts, the conclusion appears that originality and uniqueness are becoming the decisive ammunition in the fight against equally well-prepared competition. Algorithms based on AI models (often referred to in the industry under the working name MUVERA) have a growing ability to detect:

  • repetitive patterns,
  • generic narratives,
  • content that "brings nothing new."

Therefore, pages that win more and more often are those that:

  • show their own experience,
  • have unique examples,
  • present real conclusions instead of compilations of others'.

And this is probably one of the key reasons why in this Core Update some pages skyrocket, while others — despite being correct — stand still.

Who usually gains and who loses (hypotheses from observations + examples)

Every Core Update very quickly divides the internet into two groups: those who gain and those who wonder what went wrong. Based on observations from the current rollout (my own data, industry reports, and discussions on Reddit, among others), fairly consistent patterns are beginning to emerge.

The most common profiles of winners

The sites that are coping best are those that bring something real, rather than just correctly formatted text.

The first group consists of sites built on unique experience:

  • own tests,
  • case studies,
  • comparisons based on real data,
  • original conclusions and observations.

This is content that cannot be copied or mass-generated, because it comes from practice, not from desk research.

The second group includes pages that close the topic. They don't stop at answering "what is it," but lead the user further:

  • they expand on doubts,
  • they answer questions like "what to choose and why,"
  • they contain sensible FAQs,
  • they help the user make a decision rather than just describing the options.

In both cases, the common denominator is high user satisfaction — not because the content is long, but because it is complete.

The most common profiles of losers

On the other side are sites that performed adequately for years but are now starting to be rated worse and worse.

The first group is mass-generated content without added value. Linguistically correct, SEO-optimized, but bringing nothing new. For a long time, such content was "enough." Now, it is increasingly becoming invisible.

The second group consists of sites with a broad thematic mix and no clear identity: sites that "write about everything," hoping something will stick. In core updates, it is increasingly clear that a lack of authority in a specific field weighs down the entire domain.

The third group is aggregation without original input — rewriting the internet, summarizing others' content, compilations without experience. The algorithm is getting better at recognizing that such pages are not a source of knowledge, but its echo.

The end of the copywriter era

Between 2022 and 2025, classic copywriting was doing quite well. Why? Because for a long time, algorithms rewarded:

  • correct language,
  • logical structure,
  • "nice text mush" that looked expert.

This was enough, even if the author had no real experience with the subject. Current core updates are hitting this model more and more clearly. A copywriter, understood as someone who can write well but is not an expert, is no longer enough. Not because the text is weak, but because it lacks experience, evidence, and a perspective of its own.

The hunt for pseudo-experts

At the same time, we see an increasingly sharp fight against pseudo-experts. The internet is full of pages whose authors write that they do this and that, have 120 years of experience, graduated from Harvard, and happen to know Prince Charles personally.

The problem begins the moment we enter such "experts'" blogs and see 100% AI-slop — content without examples, without evidence, without a practical background. There are no case studies, own observations, photos, data, or anything that would confirm the declared expertise.

In the opinion of many industry experts, the current Core Update may be Google's first such clear step toward verifying evidence rather than declarations. It is no longer enough to write that one is an expert. Increasingly, it has to be shown in practice: through experience, original materials, and unique conclusions.

“Testimonies from the battlefield” – what people are saying (Reddit, WebmasterWorld)

During an ongoing Core Update, one of the most valuable sources of insight is reports from the front line. Not official announcements, not tool charts, but the voices of people who live off organic traffic every day. However, to avoid turning this chapter into a collection of rumors and emotions, it is worth being clear about the method: what follows are recurring themes, not isolated anecdotes.

Reddit – the most common themes in user statements

On Reddit, the dominant posts are very “fresh” accounts — emotional, yet surprisingly consistent in terms of themes.

  1. Data delays and chaos (GSC lag)
    “On my end the GSC is pretty slow, last updated basically 48 hours ago. But compared to a few days ago… it def dropped a lot in impressions and clicks.”
    Many users point out massive delays in Google Search Console, which makes analyzing drops “live” practically impossible. This suggests that part of the panic may stem from a lack of up-to-date data, not solely from real losses.

  2. Impact on entire domains, not individual pages
    “Core update December hitted whole website.”
    This is a very common motif — the update doesn’t work in a granular way. Instead, it looks more like a recalibration of trust toward the entire domain, which perfectly aligns with observations about site-wide quality.

  3. Chaos in Organic vs Direct (GA4)
    “GA4 looks to have flipped Organic vs Direct Traffic.”
    Some of the reported “drops” may be caused by traffic attribution issues in Google Analytics 4. During a Core Update, the boundary between organic and direct traffic can become heavily blurred.

  4. Sudden day-to-day drops
    “Lost rankings overnight after the update. Anyone else seeing this?”

These reports fit perfectly with the picture of high volatility in the early phases of an update. Ranking swings every 24–48 hours are currently the norm, not the exception.

WebmasterWorld – tone and observations

Discussions on WebmasterWorld have a completely different character. Less emotion, more long-term frustration and attempts to understand “where this is all heading.”

  1. AI and the feeling of being “used”
    “It’s as if Google is simply saying: ‘Oh, we have your content for our AI. We don’t need you anymore.’” — Micha
    This is one of the strongest threads: a sense that content from small and medium-sized sites has been “absorbed,” and traffic is no longer flowing back to the authors.

  2. The “organic compost” theory
    “Organic results at the bottom of the page which Google treats as compost…” — BigKat
    According to some users, classic organic results are losing significance, and the effort put into ranking is no longer paying off in its previous form.

  3. Impact on small sites
    “What we’re seeing right now affects almost exclusively small websites.” — Micha
    This is a very frequent observation: the Core Update looks less like an algorithm tweak and more like a quality filter that ruthlessly weeds out weaker domains.

  4. Rising importance of alternative search engines
    “90% is now from other SEs. Ranking well on Bing and DuckDuckGo.” — seokees
    “Google traffic has dropped significantly, while traffic from Bing has risen a bit.” — Chris
    An interesting signal: some sites are seeing traffic diversification, which may be the result of both algorithmic changes and shifts in user behavior.

  5. A plague of bots and vulnerability scans
    “90+% WP probes, just hundreds to thousands of page requests per day.” — jmccormac / RedBar
    In the background of the Core Update, many webmasters are noticing increased bot activity, which further complicates the analysis of real traffic.

What does this all mean?

Reports from Reddit and WebmasterWorld show one thing very clearly:
this is not a “normal” Core Update. The scale of changes, data chaos, and strong site-wide effects make many people feel a loss of control.

How to do monitoring during an update (so you don't go crazy)

A Core Update is the moment when it is easiest to make mistakes resulting not from a lack of knowledge, but from emotions. Fluctuations are natural, data is delayed, and charts can look dramatic. Therefore, monitoring during the rollout should be simple, repeatable, and as objective as possible.

What to check daily (15 minutes)

It's not about staring at charts for hours, but about a short, consistent routine.

The first step is Google Search Console. Focus exclusively on:

  • pages and queries with the largest change,
  • separating brand vs. non-brand traffic.

This allows you to quickly distinguish visibility problems from changes in user behavior.
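
For those who prefer to script this routine, here is a minimal Python sketch of the brand vs. non-brand split using the Search Console API. The property URL (sc-domain:example.com), the service-account key file (sa.json), the brand pattern, and the date range are placeholders, not values from this article.

```python
# Minimal sketch: pull query-level data from the Search Console API
# and split clicks into brand vs. non-brand buckets.
import re

from google.oauth2 import service_account
from googleapiclient.discovery import build

SITE = "sc-domain:example.com"      # placeholder property
BRAND = re.compile(r"acme", re.I)   # placeholder brand term(s)

creds = service_account.Credentials.from_service_account_file(
    "sa.json", scopes=["https://www.googleapis.com/auth/webmasters.readonly"]
)
gsc = build("searchconsole", "v1", credentials=creds)

resp = gsc.searchanalytics().query(
    siteUrl=SITE,
    body={
        "startDate": "2025-12-11",  # rollout start, adjust to your window
        "endDate": "2025-12-18",
        "dimensions": ["query"],
        "rowLimit": 5000,
    },
).execute()

brand_clicks = nonbrand_clicks = 0
for row in resp.get("rows", []):
    query, clicks = row["keys"][0], row["clicks"]
    if BRAND.search(query):
        brand_clicks += clicks
    else:
        nonbrand_clicks += clicks

print(f"brand: {brand_clicks} clicks, non-brand: {nonbrand_clicks} clicks")
```

Run with the same window length each day, and the two numbers make it easy to see whether a drop is a visibility problem (non-brand falling) or a change in user behavior (brand falling too).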

The second step is segmentation, without which the data is useless:

  • mobile vs. desktop,
  • country / language,
  • page type: category, product, article.

It often turns out that a "drop" only affects one segment, while the rest of the site remains stable.
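
Segmentation can be layered on top of the same API response. The sketch below works under the same assumptions: it expects rows fetched as above but with dimensions ["date", "device", "page"], a placeholder rollout date, and a hypothetical /blog/ prefix standing in for whatever marks article pages on your site.

```python
# Minimal sketch: before/after comparison per segment.
# `rows` is resp.get("rows", []) from a query with
# dimensions=["date", "device", "page"].
import pandas as pd

ROLLOUT_START = "2025-12-11"  # placeholder rollout date

df = pd.DataFrame(
    [
        {
            "date": r["keys"][0],
            "device": r["keys"][1],
            # Placeholder rule: URLs under /blog/ count as articles.
            "page_type": "article" if "/blog/" in r["keys"][2] else "other",
            "clicks": r["clicks"],
        }
        for r in rows
    ]
)
df["period"] = (df["date"] >= ROLLOUT_START).map({True: "after", False: "before"})

# Clicks per device and per page type, before vs. after the rollout start.
print(df.pivot_table(index="device", columns="period", values="clicks", aggfunc="sum"))
print(df.pivot_table(index="page_type", columns="period", values="clicks", aggfunc="sum"))
```

If one segment drops while the others hold steady, that already narrows the diagnosis considerably.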

The third element is time-stamped notes. Instead of trying to remember everything, write down what you observe on, for example:

  • day 1,
  • day 4,
  • day 9 of the rollout.

After the Core Update ends, these notes create a coherent reporting narrative that allows you to understand what was a temporary fluctuation and what was a real trend change.
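
If keeping these notes in your head or scattered across chats feels fragile, even a tiny helper like the hypothetical one below (the file name and note text are just examples) keeps the timeline machine-readable for the final report.

```python
# Minimal sketch: append dated observations to a CSV so the rollout
# timeline can be reconstructed once the update is over.
import csv
from datetime import date

def log_observation(note: str, path: str = "rollout_journal.csv") -> None:
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow([date.today().isoformat(), note])

log_observation("Day 4: non-brand clicks down on mobile, brand stable")  # example note
```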

What NOT to do during the rollout

The greatest losses during Core Updates do not result from the algorithm, but from hasty decisions.

Do not make a revolution in the structure of the entire site. Overhauls, migrations, and mass changes to internal linking during the rollout can effectively distort the picture of the situation and make it difficult to assess what was actually an effect of the update.

Do not mass-delete content "in a panic" without an audit. As previous observations have shown, aggressive content purging during a Core Update often deepens drops instead of fixing them. Decisions about deleting or merging content should be made after the update ends, based on full data.

During a Core Update, the best strategy is controlled observation, not a nervous reaction. It is precisely calm and consistency that allow you to draw conclusions that will pay off only a few weeks later.

My prediction (from the eye of the storm)

Looking at the scale of the changes, the pace of the rollout, and the signals coming from various niches, everything indicates that the sites most susceptible to further reshuffling will be those built exclusively on generic content: affiliate sites, "SEO how-to blogs," and domains that have grown for years on mass-produced text. Google will most likely keep tightening the screws wherever it is hard to point to the author's real experience or the unique value of the site.

Large fluctuations may also affect News and Discover, where the algorithm is filtering sources ever more aggressively for quality and credibility, not just freshness. On the other hand, specialized sites with a clear thematic identity that have consistently built E-E-A-T over a long period should behave more stably.

It is still too early for full conclusions — we will see the real picture a few days after the official end of the Core Update. Until that moment, everything we observe is part of the algorithm's process of reaching a new equilibrium, and not its final verdict.

Summary

We know that the December Core Update is not a cosmetic adjustment, but a deep recalculation of page quality on the scale of the entire internet. We clearly see that Google is increasingly rewarding real experience, thematic consistency, and content that actually solves users' problems rather than just describing them. We do not know, however, exactly what weights the individual signals have and which elements will prove decisive after the rollout is completed.

Changes should be read with some detachment: short-term drops and gains during the update are rarely final and are very easy to misinterpret. The most sensible strategy is calm monitoring, segment-level analysis, and postponing radical decisions until the algorithm reaches a new equilibrium. What is worth doing right now is consistently building quality: experience, evidence, uniqueness, and trust.

A Core Update is not a punishment, but a selection mechanism. For some, it means a loss; for others — confirmation that the chosen direction was right. In the long run, the winners are not those who react to every chart, but those who understand why Google introduces these changes in the first place.
