DevUnionX

5 Things AI Can't Do, Even in JavaScript

AI has been transforming code generation and software development processes in recent years, but there are still limits it hasn't crossed. Many developers now have AI tools write repetitive boilerplate code, yet complex problems, ambiguous tasks, and human-centered design decisions remain obstacles. This becomes especially apparent in the JavaScript world, considering features like dynamic runtime, asynchronous event loops, interactive browser environments, and security constraints. An AI code assistant attempting to remove jQuery and replace it with vanilla JavaScript, for instance, might fail to understand fundamental DOM timing differences. In other words, it can make the technical transformation while missing the meaning.

What follows examines five concrete areas where AI falls short, even in JavaScript. Each section explains why AI remains limited in that domain, provides reasoning based on both general characteristics and JavaScript-specific factors, and offers real-world examples. The technical terminology gets explained in plain language to make the discussion accessible both to developers and general readers. These five topics are: creativity and innovation, complex system design and architectural decisions, asynchronous processing and timing, user experience and empathy-requiring design, and security, ethics, and responsibility.

Start with creativity, one of AI's weakest areas. Current AI models can generate content by imitating patterns learned from large datasets, but they cannot actually propose genuinely innovative ideas or unconventional concepts. AI lacks the ability to set its own goals and develop new strategies. Put simply, while an AI algorithm might optimize a given problem definition, it cannot formulate an entirely new question or develop ideas outside the box.

Data dependency and repetition constrain AI fundamentally. When you ask AI for a new JavaScript user interface sketch, the model produces results by blending similar interfaces it has seen before. Expecting it to design an unexpected, unusual layout or to conceptualize an entirely new JavaScript library from scratch is unrealistic. Researchers like Gary Marcus have long argued that AI systems lack genuine creativity and only generate content by recombining existing patterns; there is no evidence that today's models have reached genuinely creative thinking. So if you tell AI to develop an innovative JavaScript framework from scratch, it will simply reorganize what it learned from existing frameworks.

The purpose-setting deficit matters too. Deciding what's important in the creative process belongs to humans. AI only follows given instructions when solving problems; it cannot choose which goals to pursue. A developer designing a new web application, for instance, first establishes criteria like performance, scalability, and user experience. AI only works when you tell it to produce something fitting those criteria. You might say AI can suggest solutions, but it cannot select the right decision for your specific constraints. In other words, AI cannot develop strategy or product vision on its own.

Consider a design concept example. Suppose a designer tells a developer to use imagination to create a unique color palette and interaction design. Human creativity is necessary here to design original color combinations or user interactions that transcend trends. AI can only produce color palettes it has seen before, based on past visuals or demo sites. It might reach a result resembling a Photoshop filter, but this outcome typically combines existing ideas rather than representing fundamental innovation.

The jQuery removal example illustrates this creativity deficit in the code world too. When a developer tells an AI code assistant to remove jQuery from a project and replace it with vanilla JavaScript, what happens demonstrates the problem. Developer Alex Bilmes's experiment provides evidence: the AI tried to find and convert jQuery code snippet by snippet, but the code's meaning got lost in the transformation. It replaced $(document).ready() with window.onload, yet the two behave differently: $(document).ready() runs as soon as the DOM is parsed, while window.onload waits until everything on the page, images and stylesheets included, has finished loading. This exposes a simple but telling deficiency: the AI made the wrong choice because it didn't understand the logic of DOM readiness. It merely translated code; it showed no ability to grasp and redesign the code's meaning and function.
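
The difference the AI missed can be sketched in a few lines of vanilla JavaScript. The callback logic here is illustrative (any `init`-style function would do); the point is when each mechanism fires:

```javascript
// jQuery's $(document).ready(init) fires once the DOM is parsed,
// even if images and stylesheets are still downloading.
//
// The AI's replacement, window.onload = init, fires much later (only after
// EVERY resource has loaded) and silently overwrites earlier onload handlers.

// A faithful vanilla equivalent of $(document).ready():
function onReady(fn) {
  if (document.readyState !== "loading") {
    fn(); // DOM already parsed: run immediately
  } else {
    document.addEventListener("DOMContentLoaded", fn);
  }
}
```

The `readyState` check matters because `DOMContentLoaded` never fires again once the DOM is parsed; a listener registered too late would simply never run.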
Work that requires creativity, in short, exposes AI's limits. The technology offers solutions through pattern recognition and similar examples, but giving birth to completely new ideas or setting a vision belongs to human intelligence. As noted above, AI cannot create strategy, cannot conceptualize, and cannot choose its own goals. In the JavaScript context, this means that developing original libraries, innovative user interfaces, or plans for unsolved problems requires human creativity.

Moving to the second limitation, AI can help with individual code snippets and error correction, but it cannot independently handle tasks like seeing the big picture, designing system architecture, and creating long-term strategy. Planning the architecture of a complex JavaScript application requires balancing numerous factors: data structures, inter-component communication, load balancing, performance optimization, and maintainability. AI typically falls short at this stage because such decisions involve ambiguous parameters like project context, team constraints, and user needs.

Context and purpose analysis represents one fundamental task AI cannot do. Defining a real project's needs matters. When AI receives a command to create a web service with certain features for your application, it only produces something technically valid. But human developers decide which features are genuinely necessary, which user scenarios matter, and what the priorities should be. Optimizing an e-commerce site's inventory tracking, payment system, and user interface, for instance, requires weighing scalability, security, and timing constraints simultaneously. AI doesn't grapple with such uncertainties. You might summarize it this way: AI provides answers, but humans decide which questions are worth asking. Framing the problem and steering the project with the right questions is a human skill.

Architectural decisions and engineering experience matter too. Drawing a software's architectural lines, choosing frameworks, or defining boundaries between modules are decisions requiring experience. AI can offer simple code suggestions, but it cannot understand which suggestion better fits your technical, functional, or financial constraints. Experts warn that AI can bring suggestions for complex frontend architectures, but engineers remain responsible for the results. Many modern JavaScript applications grow complex through shared design systems, performance optimizations, and continuous updates. Within this complexity, AI only produces syntax-level solutions, while decisions that make the project sustainable are left to human minds.

Long-term planning extends beyond just code. Architecture requires foreseeing future needs, managing technical debt, and minimizing maintenance costs, which demands foresight and vision. While AI provides tools to improve current code, it cannot answer questions like whether this code will be suitable for expansion a year from now. Human developers typically decide by considering growth, user numbers, and integration needs over years. As one Medium analysis notes, AI optimizes code quality, not product success. It doesn't guarantee the product's overall success or sustainability. Designing scalability frameworks so software can handle more users in the future, planning data flow to support multiple channels: these require planning AI cannot handle alone.
The complex refactoring example makes this concrete. The jQuery removal case mentioned earlier actually involves an architectural decision, not mere code refactoring. In Alex Bilmes's experience, the AI didn't just fumble find-and-replace; it failed to understand jQuery's underlying philosophy, such as when $(document).ready() executes. A simple find-and-replace approach proved insufficient; what was needed was understand-and-redesign. That is what engineering is: each new technological step requires evaluating context and risk. AI, by contrast, offers reflections of past data. As the AI News site emphasized, AI can assist with code suggestions, but it cannot assume the engineer's architectural role. Engineering means contextual understanding, decision-making, and risk calculation. In summary, when AI makes a mistake, it gets a line of code wrong; when a human makes a mistake, it's an architectural decision that echoes for years.

System design and similar high-level engineering tasks lie beyond AI's reach. AI can offer several alternative code solutions for the same problem, but without human intervention it cannot decide which requirements your project should prioritize or which path is more appropriate. For advanced JavaScript projects, choosing suitable frameworks, defining interfaces between modules, and determining data flow strategy are areas where human judgment enters.

JavaScript's most distinctive features include asynchronous programming and the event loop. Web browsers are designed to handle many events simultaneously: user interactions, network requests, and timers. Predicting how code will behave in this dynamic and unpredictable environment is difficult. Since AI models don't get the chance to run and observe this environment, AI can encounter unwanted surprises in asynchronous scenarios.

Event loop complexity matters here. JavaScript code must coordinate events arriving from many sources: fetch() responses, setTimeout timers, Promise resolutions, and user clicks. In a real application, which fires first depends on network delays or user behavior. An AI code assistant can provide examples based on a single code block, but it cannot know the event sequence that will actually occur at runtime, and this leads to unpredictable errors. For instance, AI might accidentally make code effectively synchronous or ignore event ordering, causing unexpected failures. In the jQuery example above, AI replaced $(document).ready() with window.onload, but these two events fire under different conditions: the former as soon as the DOM is parsed, the latter only after every resource has loaded. The initialization code therefore ran at a different point in the page lifecycle than the original logic assumed, and errors followed. Such timing differences fall outside the predefined patterns AI learned, so the model struggles to handle them.
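
A minimal, runnable sketch of the ordering rules at stake: synchronous code finishes first, then microtasks (promise callbacks), then macrotasks (timers). This is exactly the kind of sequencing knowledge a model needs but cannot observe from static text:

```javascript
// Event-loop ordering: record which queue each callback came from.
const order = [];

setTimeout(() => order.push("timeout"), 0);          // macrotask queue
Promise.resolve().then(() => order.push("promise")); // microtask queue
order.push("sync");                                  // current call stack, runs first

// After the current tick drains: order === ["sync", "promise", "timeout"]
```

Even with a 0 ms delay, the timer callback runs last, because the microtask queue is fully drained before the event loop takes the next timer.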

Concurrency issues create some of the most insidious problems in asynchronous code. Race conditions exemplify this. When two separate API requests go out simultaneously, the order in which they return cannot be guaranteed. AI generates suggestions without ever compiling or running the code, so it cannot foresee situations in a live environment where one request's result overwrites another's. As a developer, you need to manage promise chains or structure async/await code to handle whichever response arrives first, but AI may not fully grasp this behavior. Similarly, if a Web Worker runs in the background or setInterval keeps firing callbacks, AI cannot simulate the consequences of those interleavings.
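
The race is easy to demonstrate with simulated network calls. Here `fakeFetch` is a stand-in for `fetch()` and the delays are arbitrary; the point is that completion order is not request order, and `Promise.all` is one way to get deterministic results anyway:

```javascript
// Simulate an API call that resolves after a given delay.
function fakeFetch(name, delayMs) {
  return new Promise(resolve => setTimeout(() => resolve(name), delayMs));
}

const completionOrder = [];
fakeFetch("profile", 30).then(r => completionOrder.push(r));  // requested first...
fakeFetch("settings", 10).then(r => completionOrder.push(r)); // ...finishes first

// Promise.all preserves request order in its result array,
// regardless of which request completes first:
async function loadBoth() {
  const [profile, settings] = await Promise.all([
    fakeFetch("profile", 30),
    fakeFetch("settings", 10),
  ]);
  return [profile, settings];
}
```

Code that assumes `completionOrder` matches request order works on a fast network and breaks on a slow one, which is precisely why this class of bug rarely shows up in a static code review.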
Debugging difficulty matters too: AI struggles with asynchronous debugging. Analyzing code statically, AI has difficulty accounting for asynchronous flows. One writer experimenting with a complex error in a Next.js application found that the AI model's initial suggestion usually worked for simple, easily fixed schema errors, but when the error required understanding where and why it occurred in the larger system, the AI failed. An AI assistant can offer generic advice like "check the console error", but understanding why the error happened and preventing similar ones in the future falls to humans. Especially in asynchronous calls, finding the root cause requires genuine understanding. As the jQuery case covered by AI News shows, AI only does pattern matching on code; explaining why code is faulty is beyond it.

Real-time user interaction adds another dimension. Interaction on dynamic web pages can change at any moment: form submission, modal opening and closing, and animations each happen at different times, and AI cannot predict these interactions without testing them. You might say "when the button is clicked, run function X and write the result to modal Y". AI can code this flow, but when atypical user behavior occurs in a live environment, such as a user double-clicking or the network connection slowing, it proves inadequate at foreseeing the resulting errors.
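
One common way to survive the double-click scenario is a wrapper that ignores repeat invocations while an async handler is still in flight. This is a sketch, not the only approach (disabling the button works too), and `submitOrder` in the usage line is a hypothetical handler:

```javascript
// Wrap an async handler so repeated calls are dropped until the first settles.
function singleFlight(handler) {
  let inFlight = false;
  return async function (...args) {
    if (inFlight) return;        // drop clicks while the first call is running
    inFlight = true;
    try {
      return await handler(...args);
    } finally {
      inFlight = false;          // accept clicks again once settled
    }
  };
}

// Browser usage (hypothetical handler):
// button.addEventListener("click", singleFlight(submitOrder));
```

Note the `finally` block: without it, a rejected promise would leave `inFlight` stuck at `true` and the button dead forever.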
JavaScript's asynchronous structure and event-based processing directly challenge AI. AI can help write code, but it cannot predict the application's behavior at actual runtime. Foreseeing unexpected situations, like rendering glitches or race conditions, falls to human scenario imagination and experience.

However technically successful the code may be, a software product's usability and adoption depend crucially on user experience. Good user experience design requires understanding user needs, expectations, and emotions, and AI remains limited here because it lacks empathy: it cannot replicate the empathy that develops naturally between humans, nor the user-centered thinking that flows from it. We can examine this topic from several angles.

Empathy and emotional intelligence deficits matter fundamentally. In website or application design, what users feel and want plays a major role. AI can logically execute "do this" commands, but it cannot sense when a user is bored, excited, or frustrated. In a customer support chat, for instance, AI can answer routine questions but cannot detect that a user is angry or sad and redirect them to appropriate human support. As Independent Turkish emphasized, AI cannot feel empathy or compassion, and cannot connect with users on that level. The same applies to user interface design: determining which color combination will soothe users or which layout will be more intuitive requires evaluating from the user's perspective.

Understanding user priorities requires reading users' goals, difficulties, and habits correctly. Good UX demands this. On an e-commerce site's payment page, for example, understanding whether you're dealing with a user wanting quick payment or one examining every option in detail, then designing accordingly, is possible with human skill. AI can generalize based on user behavior patterns, but it might not determine what motivates or confuses a real user at that moment. Additionally, AI cannot account for nuances in design preferences according to different cultural or geographical contexts.

AI cannot foresee design mistakes either. AI might sometimes suggest technically correct interface code, but this suggestion might not fit the usage pattern users expect. Placing a close button in an unusual location might work technically but surprises users. Human designers instantly detect these subtle differences and receive feedback through user testing, while AI doesn't get this feedback. Consequently, when AI is left unsupervised, resulting interfaces can lack aesthetics or ease of use.

Consider an empathy-requiring form example. Suppose you're designing a sensitive, privacy-related settings page in an application. Choosing clearer, more explanatory language over complex technical terms requires empathy. AI cannot do this with a bare "explain" instruction, because it doesn't know which terms will unsettle users. A human designer thinks about the users and adds explanations, checkboxes, or warnings where needed to make the form understandable.

An educational example helps illustrate this. In an independent analysis of future education environments, teachers' roles get described this way: the human teacher will be a guide directing student groups, supervising projects, and establishing empathy, while AI will be a data-driven tool personalizing the learning process. This emphasizes that emotional intelligence and guidance aren't realistic goals for AI. Similarly in web development, understanding users and shaping accordingly is a competency entirely dependent on humans.
User experience and empathy-requiring design clearly reveal AI's limits. AI cannot experience what users feel; it only makes inferences from past data. Creating user-friendly, intuitive elements in a JavaScript application that meet emotional needs therefore requires human intelligence, and because AI's automatic design suggestions are mostly rule-based, they prove inadequate when facing unexpected user needs.

The final major limitation involves security and ethics. While AI-assisted code generators speed software development, they carry significant risks in security and ethical terms. AI models tend to learn and repeat security vulnerabilities from training data. Additionally, issues requiring ethical and legal responsibility cannot be managed without human oversight. AI cannot be expected to automatically write secure code, especially in JavaScript, or make ethical decisions.

Security vulnerabilities emerge frequently. An AI code assistant might write code without adequately validating or filtering user inputs. When you're developing an application and say "request a name from the user", AI might immediately give you code like document.getElementById("name")..., but without checks against SQL injection, XSS, or other input-handling risks unless you add them. One study found that 62% of AI-generated code solutions contained design flaws or known security vulnerabilities. The reason is that AI cannot understand the application's risk model, internal standards, or threat landscape. AI only tries to make the task work; security checks remain incomplete.

Hidden insecure patterns compound the problem. Because AI models learn from open-source code, they also absorb flawed patterns from those sources. If the training set contains a query built like query = "SELECT * FROM users WHERE id = " + userInput, the model adopts that pattern, and there is a real risk that AI output will contain SQL injection. Similarly, AI might offer dangerous shortcuts like evaluating a mathematical expression with eval(expression); this can lead developers to skip safeguards like input validation. In short, while AI generates code quickly, that code often contains shortcuts or incomplete checks.
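
The difference between the insecure pattern and the parameterized shape fits in a few lines. The query objects below are illustrative; real drivers (e.g. mysql2, pg, better-sqlite3) accept SQL with placeholders plus a separate values array in much this shape:

```javascript
// Insecure pattern often reproduced from training data:
// string concatenation lets input become part of the SQL itself.
function unsafeQuery(userInput) {
  return "SELECT * FROM users WHERE id = " + userInput;
}

// Parameterized shape: SQL and values stay separate; the driver binds
// the placeholder, so input remains data and can never change the query.
function safeQuery(userInput) {
  return { sql: "SELECT * FROM users WHERE id = ?", params: [userInput] };
}
```

With input like `"1 OR 1=1"`, the first function returns a query that matches every row; the second keeps the malicious string inert inside the values array.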
Security check skipping happens regularly. Many vulnerabilities arise not from the code that was written but from the security measures that weren't. AI just gives you working code. When creating a REST API endpoint, for instance, it builds a take-input, query-database, return-result structure, but doesn't automatically add authentication, authorization, or input-validation steps. According to research, AI code assistants skip checks on user input unless you specify them. When designing a payment form, AI will happily call the payment API with the requested parameters, but you must specify card-number format validation, antifraud mechanisms, and HTTPS. AI produces code that works as a shortcut but takes no responsibility for security.
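
The kind of checks an AI-generated endpoint typically omits can be sketched as Express-style middleware. The bearer-token scheme and the one-field body schema here are assumptions for illustration, not a complete security layer:

```javascript
// Reject requests with no Authorization header before the handler runs.
function requireAuth(req, res, next) {
  const header = req.headers && req.headers.authorization;
  if (!header || !header.startsWith("Bearer ")) {
    res.statusCode = 401;
    return res.end("Unauthorized");
  }
  next();
}

// Validate the request body against a minimal schema (a single name field).
function validateBody(req, res, next) {
  const name = req.body && req.body.name;
  if (typeof name !== "string" || name.length === 0 || name.length > 100) {
    res.statusCode = 400;
    return res.end("Invalid input");
  }
  next();
}

// Usage with Express (hypothetical route and handler):
// app.post("/users", requireAuth, validateBody, createUserHandler);
```

Because middleware functions are plain `(req, res, next)` functions, they can be unit-tested with stub objects, with no server running.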
Ethics and responsibility matter beyond security alone. Code generated by AI can produce unwanted results when its ethical consequences aren't considered, because AI can reproduce the biases in its training data exactly. In a hiring application, for instance, it might encode decision biases related to gender or race. AI is also risky regarding data privacy: AI-assisted tools don't reason about how user data is stored or whether it gets shared with third parties. Moreover, responsibility for AI-produced code cannot rest with the model. If there's a critical error or security vulnerability in JavaScript code written by AI, responsibility falls on whoever uses or approves that code, which is a human. Ultimately, AI cannot assume legal or ethical responsibility on its own; that task always falls to the human mind behind it.

Consider a secure code writing example. Suppose a developer asks AI to write Node.js/Express code that saves user inputs to a database. If the prompt didn't say "add all security checks", the result is usually a simple INSERT query with no protection against SQL injection. Or if you didn't specify using textContent instead of innerHTML for a web form, AI might emit innerHTML and open an XSS vulnerability. AI doesn't see such holes; it leaves the required security logic to humans. Ultimately, security thinking is the human developer's job.
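
The innerHTML/textContent difference can be shown in miniature. `escapeHtml` below is a minimal illustration of what `innerHTML` skips, not a production sanitizer (real code should prefer `textContent` or a vetted library):

```javascript
// Neutralize the characters that let user input become markup.
function escapeHtml(str) {
  return str
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;");
}

// Browser usage:
// element.textContent = userInput;             // safe: rendered as plain text
// element.innerHTML  = userInput;              // unsafe: markup is parsed and run
// element.innerHTML  = escapeHtml(userInput);  // markup neutralized
```

A payload like `<img src=x onerror="alert(1)">` executes when assigned to `innerHTML` but renders as inert text once escaped.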
When security and ethics are at stake, AI falls short. While AI's code generation speed is attractive, that code can carry security holes and produce ethically questionable results. Experts warn that AI code assistants are powerful tools, but not security tools. In JavaScript projects, therefore, keeping security safeguards, user privacy policies, and ethical responsibilities in mind always falls to the human developer. AI can be a good guide, but ensuring your software's security requires human engineering expertise.
These five areas reveal important points where AI remains limited. Creativity, architectural decisions, asynchronous logic, user-centered design, and security/ethics are domains where human intelligence still differentiates, while AI offers supporting tools. In these areas, AI can handle low-level tasks and repetitive duties like a code assistant, but final decisions and strategic thinking belong to humans. As software developers, knowing these limits is key to using AI consciously and efficiently.
