Introduction
Today I want to talk about something a little different. During my master’s degree, I wrote a short paper on technology acceptance. Seeing parts of that thinking show up in how people are adopting AI today led me to write this piece.
We all have our own way of deciding whether a new piece of technology is worth using. Sometimes it just clicks (PlayStation, anyone?). Other times, we hesitate, question it, or drop it altogether. That’s where models come in—to help explain how and why people adopt new tech.
There are plenty of these models out there. One of the most well-known is the Technology Acceptance Model (TAM). Another is the Innovation Diffusion Theory (IDT). A few years ago, researchers combined them to better understand how people were responding to blockchain technology. Their model, still in the hypothesis stage as of 2017, laid out a bunch of helpful ideas that feel just as relevant today—especially when it comes to AI.
This article doesn’t try to prove the model’s hypotheses, but I’ll use them to explain how certain ideas from the model have shown up in my personal adoption of AI.
The model looks at how useful and easy a technology feels, how well it fits into our existing habits (compatibility), how much better it is than what came before (relative advantage), and how complex it seems (complexity). Together, these things shape our attitude, our intention, and ultimately whether we actually use the technology.
Here’s a quick summary of what the model suggests:
- If something feels useful and easy, we’re more likely to use it.
- If it fits our needs and seems better than what we had, that boosts our interest.
- If it feels too complex, that can get in the way—even if it’s powerful.
Even though the original model focused on blockchain, I’ve found that it reflects a lot of how I’ve come to use AI in my own life and work. So in this article, I want to walk through how these ideas apply to my relationship with AI—what pulled me in, what slowed me down, and what finally made it feel worth using.
Deeper dive
Before diving into how this applies to AI, let’s take a closer look at the model. The Technology Acceptance Model (TAM)—especially when extended with ideas from the Innovation Diffusion Theory—tries to explain why we accept or reject new technologies. It starts with three foundational factors:
Compatibility
This refers to how well the technology fits into your existing habits, tools, or lifestyle. If something aligns with how you already think or work, you're far more likely to adopt it. For example, if you're already using cloud tools, switching to a new cloud-based AI service feels natural—it’s compatible.
Relative Advantage
This is about whether the new technology clearly offers something better than what came before. It could be faster, cheaper, more accurate, or just more convenient. If the benefits are obvious and meaningful, people are more motivated to try it.
Complexity
This looks at how difficult the technology feels to understand or use. The more confusing or overwhelming it is, the less likely people are to embrace it—no matter how powerful it is. Simplicity matters, especially early on.
These three factors shape how we experience the technology in two major ways:
- Perceived Usefulness – Does this actually help me get things done better or faster?
- Perceived Ease of Use – Is this simple enough for me to pick up without frustration?
Together, these perceptions form the backbone of our attitude toward using the technology. If that attitude is positive, it leads to behavioral intention (the decision to try or continue using it), which then leads to actual use.
In short, TAM lays out a kind of emotional and cognitive journey:
Fit → Benefits → Simplicity → Attitude → Intention → Action.
It’s not just about what the tech can do—it’s about how it feels to use it.
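If you like to see things as code (I do), here’s a toy sketch of that chain in Python. The weights, scale, and adoption threshold are made up purely for illustration; the original model states directional hypotheses, not numbers.

```python
from dataclasses import dataclass

@dataclass
class Perception:
    """Scores on a 1-7 scale, like the Likert items used in TAM surveys."""
    compatibility: float        # fits my habits and tools
    relative_advantage: float   # clearly better than what I had
    complexity: float           # how hard it feels (higher = worse)

def attitude(p: Perception) -> float:
    """Toy additive scoring: fit and advantage pull attitude up,
    complexity pulls it down. Weights are illustrative, not from the paper."""
    usefulness = 0.5 * p.relative_advantage + 0.3 * p.compatibility
    ease_of_use = 7 - p.complexity  # simpler feels easier
    return 0.6 * usefulness + 0.4 * ease_of_use

def will_adopt(p: Perception, threshold: float = 4.0) -> bool:
    """Positive attitude -> intention -> actual use."""
    return attitude(p) >= threshold

# Example: how AI scores for me personally
me = Perception(compatibility=6, relative_advantage=6, complexity=2)
print(will_adopt(me))  # True
```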
How do these factors apply to me?
1. Compatibility (How well it fits)
To make sense of how compatibility plays out in my relationship with AI, I’m going to break it down into entity pairs—essentially, how well two things work together. This approach mirrors the basic idea behind compatibility: does this new thing align with what already exists? So I’ll be looking at how AI fits with me as a person, how it fits with the tools and devices I use, and how I, in turn, relate to those tools. These pairings help unpack not just whether AI works, but whether it fits—with my mindset, my habits, and my environment.
First, there’s no inner conflict between me and the concept of AI. I’ve been in software engineering for years, so the idea of feeding structured/semi-structured data into a system and getting meaningful output doesn’t feel foreign—it feels normal. In that sense, AI and I are a good match.
Then there’s the compatibility between AI and the devices I rely on. Whether it’s my phone, MacBook, iPad, or smart speakers, AI already integrates with many of them, often invisibly, through personal assistants and smart apps.
Finally, there’s the link between me and those devices themselves. I spend a lot of time working with technology and virtual assistants, which makes the transition to AI-enhanced workflows almost frictionless. In short, AI fits well into both my mindset and my digital environment—it didn’t need to force its way in.
2. Relative Advantage (How it’s better than before)
Relative advantage is all about whether AI gives me a noticeable improvement over the way I used to work, think, or solve problems. It’s not just about what AI can do—it’s about what it can do better, faster, or more intelligently compared to my previous tools or workflows. In this section, I’ll explore this idea by looking at how AI has helped me in both professional and personal contexts, using a few practical examples to show where the advantage really shows up.
At work, AI has become more than a novelty; honestly, it’s a strategic companion. I use it to break down complex technical problems into more manageable parts. It gives me access to a curated knowledge base that spans domains and industries—something no single search or documentation site can offer. When I'm stuck on a design decision, I can use AI as a sounding board to validate assumptions or explore alternatives. It has helped me spot edge cases I hadn’t considered in architectural discussions, and it has become a go-to tool for code generation, implementation strategies, and even migration planning. In moments when I need to move fast without losing depth, AI offers clarity and speed.
Outside of work, the relative advantage shows up in small but meaningful ways. I often use AI to answer quick questions or explore unfamiliar topics—replacing the old process of jumping through multiple tabs or forums. It helps me refine emails (honestly, this is a big one), especially when I want to strike the right tone or simplify a message. And sometimes, when I need dinner ideas, it becomes a personalized recipe engine that saves me time and mental energy. My current banana cake recipe is AI-generated.
In all of these cases, AI doesn’t just replicate what I used to do—it elevates it, and I think this is one of the most important bits: enhancement, not replacement. That’s what makes it relatively advantageous: it extends my capability, accelerates my thinking, and fills in knowledge gaps I didn’t even know I had.
3. Complexity (How hard it is to understand or use)
AI is, by nature, incredibly complex. Behind the scenes, there are machine learning models, massive datasets, and layers of statistical reasoning that all work together to produce something that feels intelligent. It involves everything from neural networks to natural language processing, and these systems don’t just respond—they learn, adapt, and generate context-aware answers. When you think about the engineering and math that goes into building these systems, it’s no surprise that AI can seem intimidating.
But here’s the thing: all that complexity has been beautifully abstracted. Though I have a fair understanding of how some of these pieces (transformer models, embeddings, vectors, search algorithms) work under the hood, I really appreciate that the apps and platforms I use don’t require me to understand how embeddings represent meaning in high-dimensional space. Instead, I interact with AI through simple input boxes, build context naturally through prompts and clarifications, and receive surprisingly meaningful outputs, whether I’m asking for architectural advice or rewriting a sentence. Pure gold.
This abstraction removes the complexity from my day-to-day experience almost entirely. As a user, I don’t feel overwhelmed or buried in technical detail. I just use it—and it works. In many ways, it’s like driving a high-performance car without needing to know how the engine is tuned.
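To make that contrast concrete, here’s roughly what the “simple input box” looks like when you call a model programmatically. This sketch uses the OpenAI Python client as one example; the model name is a placeholder, and any chat-style API follows much the same shape.

```python
# All the neural-network machinery sits behind one function call.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # model name is provider-specific
    messages=[
        {"role": "user", "content": "Suggest a banana cake recipe for one 9-inch pan."},
    ],
)

# Embeddings, attention layers, sampling strategies: all invisible here.
print(response.choices[0].message.content)
```

That one call is the whole user-facing surface area; everything the previous paragraphs described lives behind it.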
All roads lead to actual use
When I reflect on why AI has become part of my life, it’s because of how well it fits certain tasks, how much better it makes things, and how effortlessly I’m able to interact with it. AI feels like a seamless extension of the tools I already use. I don’t have to jump through hoops to access it or figure out how it works. It all feels familiar: the input-prompt model mirrors how I’ve always used search engines, which makes for a low barrier to adoption. Beyond that, AI actually helps me get where I want to go. In my work, it allows me to reach my goals efficiently and effectively. Outside of work, it’s just as valuable—whether I’m refining an email, learning something new, or figuring out what to cook. The tools I use abstract away all the complexity happening under the hood, so I never feel intimidated or overwhelmed. It doesn’t feel like I’m using advanced machine learning models; it feels like I’m just having a conversation. And thanks to my small background in machine learning, I have an idea of how complex these systems really are, which only deepens my appreciation for how simple they’ve made the experience.
All of that naturally shapes how I feel about using AI. I have a positive attitude toward it because my experience has consistently been smooth, helpful, and efficient. That attitude makes me more likely to keep using it, and more open to exploring new ways it could support my work and life. It’s not a tool I’m testing anymore—it’s a tool I intend to use safely and regularly. And that intention shows up in my behavior. The line between trying it out and actually using it has quietly disappeared. Heck, I pay for it 😂
The Spoof, the Bad and the Fugly
The journey through the TAM model hasn’t always been smooth. One thing that’s easy to overlook is that TAM isn’t a one-time path you walk through and complete. It’s not a straight line from "this is useful" to "I use it forever." Instead, it’s dynamic—your perceptions can shift, and in some cases, the very things that once made a tool feel useful and easy can start to erode. Over time, the good aspects need to be reinforced, or else the bad ones begin to grow, especially as the product evolves—or fails to.
In my experience with AI, there have been clear breaches in that positive flow. I have to admit, there are moments when the system just doesn’t deliver. During research, I’ve encountered hallucinations—responses that sound confident but are completely made up. When I push deeper with more nuanced or technical questions, I start to see the limits in reasoning or coherence. Sometimes, AI forgets the context I’ve already provided, or applies it incorrectly, forcing me to rephrase, reframe, or start again. In those moments, my perception of ease and usefulness takes a hit; the whole experience feels more brittle than smooth.
When that happens, I find myself taking a step back. I double-check everything. I slow down. I cross-reference with my own knowledge or other sources. That extra cognitive load—the need to proofread, verify, and interpret—interrupts the simplicity I had come to expect. Thankfully, things have improved over time. The models have gotten better, and so have the interfaces that help manage these limitations. But those setbacks still remind me that usefulness is fragile and ease of use is conditional, two of the key things to note in my experience with this model.
There’s also something more personal I’ve noticed. I’ve come to believe that keeping a healthy distance from AI is important—not just for accuracy, but for maintaining my own cognitive edge. It’s easy to slip into a mode of over-reliance, especially when a tool is so capable and responsive. But leaning on AI for things I already know—or could figure out with a bit of effort—can dull that edge over time. I think there’s value in doing things the “manual” way sometimes. Recalling knowledge. Struggling a bit. Solving things on my own. That’s not a rejection of AI; it’s a reminder that human ability still matters, and that AI is a tool—not a crutch.
So yes, the TAM can work in reverse. If a system starts to feel less reliable, or too controlling, or too limiting, it can cause attitudes to shift. Intention drops. Actual use fades. And unless those gaps are acknowledged and improved, even the most impressive tech can lose its place in your workflow. For me, staying conscious of these cracks helps me use AI wisely—not blindly. It keeps the relationship healthy.
OK, the Technology Acceptance Model (TAM) matters. Why should we care?
1. Startups and Product Design
For startups, TAM isn’t just a theoretical model—it’s a tactical advantage. In the early stages of building a product, every decision counts. TAM provides a structured way to think about how real users will perceive and interact with the product. Startups can intentionally design their tools to be more compatible with users’ existing habits and workflows, reducing the barrier to entry. They can also focus on delivering a clear relative advantage over existing solutions—whether it's faster execution, better insights, or increased convenience. At the same time, simplifying onboarding and usage helps reduce perceived complexity, allowing users to quickly experience value without feeling overwhelmed.
By keeping these factors in mind—compatibility, advantage, and simplicity—startups are better positioned to shape user perceptions of usefulness and ease. These are critical to early adoption. Additionally, TAM helps startups test whether they’re truly solving a meaningful problem in a way that users will embrace. If users don’t find it useful or easy to adopt, that’s a signal for iteration. TAM trends can also be used to forecast whether the product will gain traction in a competitive market or fall into the trap of being too niche, too hard to use, or just not compelling enough.
2. Understanding Technology Adoption Over Time
TAM, along with the models that have extended or evolved from it, offers a window into how people have historically accepted (or resisted) new technology. What makes TAM especially interesting is that it doesn't isolate adoption to just features or technical specs—it looks at human perception, which is often influenced by much more than just functionality. Over time, researchers have built on TAM to incorporate new dimensions like trust, user experience, hype, and social influence.
These models reveal a lot about how technology acceptance isn’t always driven by individual logic. In many cases, people use new tools because they’re socially pressured to, or because someone they trust has validated the tool. This is especially true in workplace environments or tight-knit communities. Influencers, managers, or even teammates can significantly affect whether a tool is picked up or pushed aside. TAM captures this broader narrative—that adoption is both individual and collective, and shaped by shifting social, technical, and emotional factors. Understanding these patterns helps us predict not only if a tool will be adopted, but why and under what conditions.
3. Enterprise Software Adoption
In larger organizations, where buying and rolling out software affects hundreds or thousands of users, TAM becomes a critical decision-making tool. It allows companies to think beyond cost and features, and instead ask: Will our employees actually use this? Companies can apply TAM to assess internal attitudes toward new tools—either through surveys, small pilots, or feedback loops—and use that insight to design better onboarding, training, and internal communication strategies. For example, if employees perceive a tool as too complex or irrelevant, adoption rates will be low, no matter how powerful the tool is on paper. A clear example of this for me was the time-sheet software at one of my earlier companies.
To counteract this, some companies introduce gamification—reward systems, challenges, or social features—to increase engagement and shift employee attitudes toward the tool. Over time, these tactics can improve perceived ease of use and usefulness, which in turn boosts intention and actual use. These days, businesses don’t have to start from scratch. They can look at how other companies in their industry have applied TAM principles to software adoption, learning from their successes or failures. In this way, TAM becomes not just a research tool, but a practical playbook for driving internal tech adoption at scale.
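One practical way to run that kind of assessment: survey a pilot group with Likert-scale items for perceived usefulness and ease of use (in the spirit of Davis’s original TAM questionnaire) and watch the averages per tool. Here’s a minimal sketch; the sample scores and the threshold are illustrative, not a validated instrument.

```python
from statistics import mean

# Each response: 1-7 Likert scores for perceived usefulness (PU)
# and perceived ease of use (PEOU) of the tool under pilot.
pilot_responses = [
    {"pu": 6, "peou": 5},
    {"pu": 4, "peou": 2},
    {"pu": 5, "peou": 3},
]

avg_pu = mean(r["pu"] for r in pilot_responses)
avg_peou = mean(r["peou"] for r in pilot_responses)

print(f"Perceived usefulness: {avg_pu:.1f}/7, ease of use: {avg_peou:.1f}/7")

# Illustrative rule of thumb: low ease-of-use scores suggest investing
# in onboarding and training before a wider rollout.
if avg_peou < 4:
    print("Flag: tool feels complex; improve onboarding before scaling.")
```

Even something this simple turns “will our employees actually use this?” from a guess into a number you can track across pilots.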
Conclusion
My journey with AI has mirrored many of the ideas in the Technology Acceptance Model—how something feels useful, how easy it is to use, how well it fits into my life, and how much better it makes things compared to what came before. But just like any relationship with technology, it hasn’t been perfect. There have been moments of friction, doubt, and necessary distance. What this model reminds me is that adoption isn’t a single decision—it’s a continuous process. As AI continues to evolve, so will my experience with it. And by staying aware of what makes it valuable—and where it falls short—I can keep using it in a way that supports me, not replaces me.