DEV Community

When critics advance AI: How Apple's research reminds us why scrutiny matters

Christoph Görn on June 12, 2025

What happens when the world's most valuable technology company publishes research exposing fundamental limitations in AI? If you're Gary Marcus, yo...
david duymelinck

I'm not going to contest the research; I'm not smart enough.
But to me it feels like there was some financial pressure behind the release of the study. Apple was expected to come up with AI integration because everyone else is doing it, and Apple as a brand is still seen as ahead of the curve.

I agree that we need to learn about the opportunities and limitations of technology. But there are many salespeople out there looking for money, and they are dressing every new technology up like a Christmas tree.

Kristofer

Apple tried the whole AI thing (remember Apple Intelligence?) and it turned out to be a complete mess: yet another wrapper around ChatGPT, and half of the promises are still unimplemented or implemented really badly.
Of course they now want to say "AI is lacking, that's why our AI is lacking"; it's in their interest to downplay AI because they are so far behind. Perhaps we should apply a bit of critical thinking to Apple's research.

david duymelinck

I think you misread my comment. I'm not criticizing the research; I'm criticizing shareholder pressure and marketing for setting the timing of the report's release.

If it had been released further from the WWDC event, it would have been less likely to be seen as a ploy to keep the share price high.

Kristofer • Edited

My bad. My comment wasn't supposed to criticize yours; I agree with it. I should have posted it on the main thread instead.

All I want to say is that we should apply critical thinking to Apple's research. Looking at the WWDC 25 highlights, they didn't mention AI once, in the same month their paper came out claiming AI is lacking. Is Apple doing this for the greater good of AI development, or could it be that Apple doesn't want to lose face like Google did, while still selling billions worth of devices?

Today's LLMs are far from perfect, still a developing technology that hasn't found its true purpose, but I think Apple is having the "no one is going to buy devices" moment that Microsoft had back in the early 2000s.

david duymelinck

"Is Apple doing this for the greater good of AI development, or could it be that Apple doesn't want to lose face like Google did, while still selling billions worth of devices?"

This is the sentiment I wanted to avoid by prefacing my first comment with the statement that I don't question the research.
Both things can be true: the research is valid, and it is a marketing trick.

Kristofer

I agree with that. However, I don't think Apple has the authenticity to make these research claims; to be honest, no big tech company does. It's all about the shareholders.

david duymelinck

While I agree that Apple isn't leading AI research, they have enough funds to hire good researchers.

I don't blame you for being skeptical about big companies. But the truth is that they have the deep pockets to invest in the research. I think the problems come from the applications of that research.

Kristofer

Yes, but it's not a genuine effort from Apple to push the development of AI; instead, it's a way to tell their customers, "It's OK to buy our devices even though they have no big AI capabilities."

This reminds me of how MS doubled down on Vista and the Aero Glass UI instead of focusing on mobile devices.

I guess the hype surrounding LLMs is the real issue.

Oscar

Really appreciate the citations -- it's nice to know that something has actually been thought about, and not just written. Apple's research kind of aligns with my personal views on AI: it's great for things that thousands of people have done before, but the moment you throw something new or different at it, it implodes.

Mike Ritchie • Edited

Lately I’ve seen references to LRMs, or “Large Reasoning Models”. Maybe someone is already seeing the flaws in using language models for reasoning and is starting to build models specifically for reasoning?

Christoph Görn

A Call to Action: Beyond Corporate Motivations to Collaborative Progress

The discussion around Apple's AI research reveals a crucial insight that demands our collective response: we cannot let corporate motivations overshadow the vital need for honest AI assessment.

Both David and Kristofer identified the core tension we face - major tech companies control much of AI research, yet their commercial interests inevitably influence what gets published and when. This reality doesn't invalidate good research, but it highlights a dangerous dependency that we must address.

Here's what we must do:

Support Independent AI Research

  • Fund academic institutions and non-profit organizations conducting AI limitation studies
  • Advocate for government research grants that aren't tied to commercial outcomes
  • Promote open-source AI research initiatives that prioritize transparency over profit

Demand Research Transparency

  • Pressure companies to publish methodologies, datasets, and negative results - not just breakthrough claims
  • Support researchers who speak honestly about AI limitations, even when it challenges industry narratives
  • Create industry standards requiring disclosure of commercial motivations behind research timing

Build Critical AI Literacy

  • Educate ourselves and others to evaluate AI research based on scientific merit, not corporate reputation
  • Share resources that help people understand the difference between marketing claims and peer-reviewed findings
  • Encourage skeptical thinking about AI capabilities while remaining open to genuine advances

Foster Collaborative Evaluation

  • Support initiatives like MITRE's AI Incident Sharing and NIST's AI Risk Management Framework
  • Participate in open-source projects that test and validate AI claims independently
  • Create cross-industry standards for AI evaluation that prioritize safety and accuracy over speed-to-market

The future of AI shouldn't be determined by shareholder interests or quarterly earnings calls. When we see valuable research - regardless of who funds it - let's use it as a catalyst for broader, independent investigation.

The question isn't whether Apple (or any tech giant) has pure motives. The question is whether we'll let corporate control of AI research prevent us from building better, safer, more reliable systems.

Let's turn this skepticism into action. Support independent AI research, demand transparency, and help build the critical thinking infrastructure our AI future desperately needs.

What will you do to ensure AI development serves humanity's interests, not just corporate bottom lines?

Ciphernutz

Really insightful research and an outstanding article. Thanks for sharing this—it highlights how critical scrutiny can drive meaningful progress in AI.

Nevo David

Growth like this is always nice to see. Kinda makes me wonder: what keeps stuff going long-term, beyond just the early hype?

Jakub Stetz

Fascinating research and outstanding article. Thank you for sharing.