<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: nandu 🦦</title>
    <description>The latest articles on DEV Community by nandu 🦦 (@nanduanilal).</description>
    <link>https://dev.to/nanduanilal</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1447642%2F93efd81f-49aa-4ba6-94cd-3d8b3aafbd45.jpg</url>
      <title>DEV Community: nandu 🦦</title>
      <link>https://dev.to/nanduanilal</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nanduanilal"/>
    <language>en</language>
    <item>
      <title>Infrastructure after AI Scaling</title>
      <dc:creator>nandu 🦦</dc:creator>
      <pubDate>Thu, 18 Jul 2024 02:20:52 +0000</pubDate>
      <link>https://dev.to/nanduanilal/infrastructure-after-ai-scaling-3551</link>
      <guid>https://dev.to/nanduanilal/infrastructure-after-ai-scaling-3551</guid>
      <description>&lt;p&gt;Note: this was originally written on Substack &lt;a href="https://nandu.substack.com/p/infrastructure-after-ai-scaling" rel="noopener noreferrer"&gt;here&lt;/a&gt; where you can subscribe to all new posts&lt;/p&gt;

&lt;p&gt;Despite the countless new AI infrastructure startups, Nvidia and OpenAI have swallowed up the lion’s share of value across the AI infrastructure stack:&lt;/p&gt;

&lt;p&gt;Nvidia’s data center revenue grew 427% YoY, with over 60% EBITDA margins. The demand for Nvidia’s market-leading GPUs hasn’t slowed because they are one of the core ingredients to training AI models. To put this growth into perspective, Zoom grew 355% one year after the world launched into a pandemic — while only being &amp;lt;5% of the revenue scale that Nvidia is at.&lt;/p&gt;

&lt;p&gt;OpenAI is the second biggest beneficiary, with recent reports claiming the business grew to a $3.4B revenue run rate, up from $1.6B late last year. OpenAI is continuing to invest in building better models and maintaining their market lead.&lt;/p&gt;

&lt;p&gt;The disproportionate success of these two players can be largely attributed to AI scaling laws, which offer a rough blueprint of how to improve model performance. Since foundation model providers win customers on the basis of model performance, they will continue to build better models until there’s no more juice left to squeeze. At that point, I believe the broader AI infrastructure ecosystem will flourish: AI application builders will move from the experimentation phase to the optimization phase, opening up new opportunities for infrastructure tooling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3pmq8zqgtqp3esu5mqb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn3pmq8zqgtqp3esu5mqb.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Nvidia and OpenAI love AI scaling laws
&lt;/h2&gt;

&lt;p&gt;AI scaling laws show that model performance improvement is driven by 3 main factors:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Language modeling performance improves smoothly and predictably as we appropriately scale up model size, data, and compute. We expect that larger language models will perform better and be more sample efficient than current models. — Scaling Laws of Neural Language Models (January 2020)&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi0lsll5vxbgrboraxpo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvi0lsll5vxbgrboraxpo.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This insight has been the bedrock of improvements in foundation models over the past few years. It’s the reason to push towards bigger and bigger models.&lt;/p&gt;
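&lt;p&gt;For intuition, the quoted paper expresses this as a power law in each input. Below is a rough sketch of the model-size term using the paper’s fitted constants; the numbers are illustrative only, and the relation holds only when data and compute are not the bottleneck:&lt;/p&gt;

```python
# Sketch of the model-size scaling law from "Scaling Laws for Neural
# Language Models" (Kaplan et al., 2020): test loss falls as a power
# law in parameter count. Constants are the paper's fitted values.
def loss_from_params(n_params: float) -> float:
    """Predicted test loss given non-embedding parameter count."""
    N_C = 8.8e13      # fitted scale constant (parameters)
    ALPHA_N = 0.076   # fitted exponent
    return (N_C / n_params) ** ALPHA_N

# Doubling model size shrinks loss by a small, predictable factor:
ratio = loss_from_params(2e9) / loss_from_params(1e9)
print(f"loss ratio after 2x params: {ratio:.4f}")  # about 0.95
```

&lt;p&gt;It’s the smooth, predictable shape of this curve, more than the exact constants, that made “just scale up” a viable strategy.&lt;/p&gt;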

&lt;p&gt;The best way to make money on this scaling law equation has been to either provide one of the inputs (like Nvidia) or combine the inputs yourself (like OpenAI).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdc19mu5dj5i6crwpnex.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvdc19mu5dj5i6crwpnex.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Because models are improving so fast, the rest of the infrastructure stack cannot stabilize around them. Other AI infrastructure startups address the limitations of the models themselves, and therefore, their adoption may be slower during this AI scaling period.&lt;/p&gt;

&lt;h2&gt;
  Why AI scaling breaks
&lt;/h2&gt;

&lt;p&gt;Surely, this can’t continue forever (~cautiously optimistic NVDA holder~). At some point, either 1) these laws no longer apply or 2) we can no longer scale up one of the inputs.&lt;/p&gt;

&lt;p&gt;The first reason AI scaling would break is that increasing these 3 inputs no longer results in a meaningful improvement in model performance — the operative word is “meaningful”.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Scaling laws only quantify the decrease in perplexity, that is, improvement in how well models can predict the next word in a sequence. What matters is “emergent abilities”, that is, models’ tendency to acquire new capabilities as size increases. — AI Snake Oil&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;We want models to develop new abilities for new use cases. This makes the R&amp;amp;D economics here unlike most other technologies. We don’t build better cars hoping that they will also learn to fly, but that’s the magic we’re hoping for with foundation models. And to be very clear, the AI scaling laws tell us that we’ll keep getting better at predicting the next token, but they do not have any predictive power on if and when we get new emergent abilities.&lt;/p&gt;

&lt;p&gt;Okay, but even if we continue to see AI scaling laws hold, we may hit a bottleneck on one of the inputs. In other words, we may no longer be able to scale either model size, training data, or compute. The most credible hypotheses I’ve seen are related to constraints around data and hardware limitations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Data Limitations&lt;/strong&gt;&lt;br&gt;
On the data side, it’s simple — we may just run out of training data.&lt;/p&gt;

&lt;p&gt;Epoch has estimated that there are ~300T tokens of publicly available high-quality text data. Their estimates suggest that we will run out sometime late into this decade. We know that Llama-3 (the best available open source model) was trained on a 15T token training set of publicly available sources. As shown below, models have historically increased dataset size by 2.8x per year, so in ~5 years we will need &amp;gt;100x our current amount. Even with innovative AI techniques and workarounds, depletion of our training data seems inevitable.&lt;/p&gt;
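&lt;p&gt;The arithmetic behind both claims is easy to check; a quick back-of-envelope using the figures above (15T tokens for Llama-3, ~300T tokens available, 2.8x growth per year):&lt;/p&gt;

```python
import math

# Back-of-envelope check on the data-exhaustion argument.
# Inputs: Llama-3 trained on ~15T tokens, ~300T tokens of public
# high-quality text (Epoch's estimate), datasets growing ~2.8x/year.
growth = 2.8
print(round(growth ** 5))  # 172, i.e. over 100x today's dataset in ~5 years

# Years until a frontier training set would consume the entire public stock:
years = math.ceil(math.log(300e12 / 15e12, growth))
print(years)  # 3, which lands squarely "late this decade" from 2024
```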

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzolutmh6r3vu5y5jyo70.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzolutmh6r3vu5y5jyo70.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hardware Limitations&lt;/strong&gt;&lt;br&gt;
Another possibility is that we are unable to scale model size and compute due to hardware limitations, which could take two forms. First, GPUs’ memory and bandwidth capacity may not keep up with the demands of increasingly larger models. Sustaining the current pace of progress requires ongoing innovation in hardware and algorithms.&lt;/p&gt;

&lt;p&gt;Outside of the GPU, there are a lot of novel supply chain, energy, and networking problems associated with scaling up data centers. Meta had to scrap their previously planned data center buildout because they thought, “if we're just starting on construction for that data center, by the time we're done, we might be obsolete [unless we futureproof].” As hard as acquiring GPUs is today, building future-proof data centers is harder.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpm5ntmls05d4i8g49mc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgpm5ntmls05d4i8g49mc.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  Infra in a post-AI scaling world
&lt;/h2&gt;

&lt;p&gt;As AI scaling hits its limits, the constraints on model size, data availability, and hardware capabilities will usher in a new infrastructure landscape. Nvidia’s growth will slow down and OpenAI will hope to solidify their position as the AWS of AI. The diminishing returns of larger models will necessitate a shift in focus towards optimizing existing resources and away from the foundation model layer of the stack.&lt;/p&gt;

&lt;p&gt;I’m excited for new categories of infrastructure to emerge as companies shift from experimentation to optimization. The following are themes I’m interested in:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sfekjabq6gbn8uwzxvp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8sfekjabq6gbn8uwzxvp.png" alt="Image description"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1/ SMALL MODELS&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;The problem with very large models is that you’re carrying the cost of everything that it does for a very particular purpose… Why do you want to carry the cost of this giant model that takes 32 GPUs to serve for one task? — Naveen Rao, CEO of MosaicML&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;There are multiple reasons to run small models. One group of reasons focuses on resource-constrained environments where calling a large cloud-hosted model is not feasible. There has been interesting research focused on optimizing model architecture under the constraints of model size for mobile in particular. Beyond research, companies like Edge Impulse help build and deploy models on power-constrained devices ranging from wearables to IoT.&lt;/p&gt;

&lt;p&gt;Even when you can run large models, it may not be preferable. Smaller models are cheaper (inference cost) and faster (latency). As AI scaling slows down, companies will shift from experimenting with new AI use cases to scrutinizing the cost/performance of the ones they have deployed. There has been a lot of work around quantization and distillation, but these techniques can still be challenging to implement for very large models.&lt;/p&gt;
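&lt;p&gt;As a toy illustration of why these techniques are attractive (not a production recipe), post-training quantization stores weights as 8-bit integers plus a scale, cutting memory roughly 4x versus float32 at a small accuracy cost:&lt;/p&gt;

```python
# Toy post-training quantization: store weights as int8 plus one scale.
# Real systems use per-channel scales and outlier handling; this sketch
# only shows the core idea behind the memory and serving-cost savings.
def quantize_int8(weights):
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.81, -1.27, 0.05, 0.33]
q, scale = quantize_int8(w)
restored = dequantize(q, scale)
err = max(abs(a - b) for a, b in zip(w, restored))
print(err)  # tiny: bounded by half a quantization step (scale / 2)
```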

&lt;p&gt;&lt;strong&gt;2/ FINE-TUNING&lt;/strong&gt;&lt;br&gt;
Today, fine-tuning has fallen out of favor for many app builders. Fine-tuning is considered expensive, time-consuming, and more challenging to adopt than techniques like prompt engineering or basic RAG. For many companies, staying up-to-date with the latest model improvements is more important than the incremental boost from fine-tuning.&lt;/p&gt;

&lt;p&gt;While the research community has introduced innovations like LoRA and prefix tuning, implementing these fine-tuning methods remains non-trivial for the average product team. Companies like Predibase and Smol aim to lower the barrier to fine-tuning while enabling teams to build task and domain-specific models.&lt;/p&gt;
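&lt;p&gt;To see why approaches like LoRA matter, compare parameter counts: instead of updating a full d-by-d weight matrix, you train a rank-r update, so trainable parameters drop from d*d to 2*d*r. The sizes below are hypothetical but typical:&lt;/p&gt;

```python
# Sketch of the LoRA parameter-count argument: a full d-by-d weight
# update is replaced by a low-rank product B @ A of rank r, where r is
# much smaller than d. Sizes here are illustrative, not from any model.
d, r = 4096, 8
full_update = d * d        # trainable params for full fine-tuning
lora_update = 2 * d * r    # trainable params for the low-rank update
print(full_update, lora_update, full_update // lora_update)
# 16777216 65536 256 -> roughly 256x fewer trainable parameters
```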

&lt;p&gt;&lt;strong&gt;3/ MULTI-MODEL SYSTEMS&lt;/strong&gt;&lt;br&gt;
Instead of using the most powerful and expensive model for every query, enterprises are building multi-model systems. The idea is to break complex queries into smaller chunks that can be handled by task-optimized smaller models. Companies like Martian, NotDiamond or RouteLLM focus on the model routing for AI developers. Alternatively, businesses like Fireworks are building these “compound AI” systems themselves and letting end developers use them for inference.&lt;/p&gt;

&lt;p&gt;Companies like Arcee and Sakana are charting a course toward a different multi-model future. They are leveraging model merging to create new models altogether. The former is pushing a paradigm where you merge a small domain-specific model with a large base model, while the latter is focused on merging many smaller models targeting narrow domains or tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4/ DATA CURATION&lt;/strong&gt;&lt;br&gt;
If we run out of data, companies will have to compete on data quality and curation rather than sheer quantity. Because more data typically results in better models, companies like Scale AI are seeing lots of traction today in providing labeling and annotation services to scale data volume.&lt;/p&gt;

&lt;p&gt;Eventually, we think the opportunity will shift towards more deeply understanding the relationship between training data and model outputs. Companies like Cleanlab and Datology help answer questions like which datasets are most valuable for a specific task, where are gaps or redundancies, how should we sequence training data, etc.&lt;/p&gt;

&lt;p&gt;As synthetic data becomes a more important part of training datasets, there has been interesting research tying synthetic data generation to specific model objectives. Because data has so much variability, more vertically-focused solutions may be necessary for niches like healthcare or financial services.&lt;/p&gt;

&lt;h2&gt;
  Closing
&lt;/h2&gt;

&lt;p&gt;Timing is critical for startups. The right combination of market and technology catalysts is necessary to scale against the tide of incumbent solutions. AI scaling laws have proven to be the backbone of growth for both Nvidia and OpenAI. I’d argue that the inevitable slowdown of AI scaling will actually be the “why now” for a new wave of infrastructure startups.&lt;/p&gt;

&lt;p&gt;As the industry transitions from a focus on sheer scale to optimizing and innovating within existing constraints, the companies that can navigate and lead in this next phase are being built today. If you’re building or thinking about AI infrastructure for a post-scaling world, I’d love to chat.&lt;/p&gt;

&lt;p&gt;Nandu&lt;/p&gt;

</description>
      <category>ai</category>
      <category>watercooler</category>
      <category>discuss</category>
    </item>
    <item>
      <title>Why is Meta spending so much on Open Source AI?</title>
      <dc:creator>nandu 🦦</dc:creator>
      <pubDate>Fri, 26 Apr 2024 19:52:45 +0000</pubDate>
      <link>https://dev.to/nanduanilal/metas-open-source-ai-ambitions-554a</link>
      <guid>https://dev.to/nanduanilal/metas-open-source-ai-ambitions-554a</guid>
      <description>&lt;p&gt;Last week, Meta released their latest set of Large Language Models – Llama 3. We continue to see that the combination of larger training datasets, more compute, and new AI research yield better models. &lt;strong&gt;The early data below suggests Llama 3 is the best open source model available, trailing only OpenAI’s GPT-4.&lt;/strong&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;A quick caveat – Meta’s specific license for Llama allows free commercial usage for companies with less than 700 million monthly users (&amp;gt;99.9% of companies).&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgo1ml0hw4qrmiar2dnhe.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgo1ml0hw4qrmiar2dnhe.png" alt="Lmsys"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To put this in perspective, Llama 3 is better than AI models from multiple billion-dollar companies whose entire business is predicated on selling models-as-a-service (Anthropic, Cohere, etc.). Unlike those companies, Meta allows for free commercial use of these models for nearly everyone.&lt;/p&gt;

&lt;p&gt;The obvious question here: why is Meta spending so much money building these models just to give them away for free? You would think that a company that came out of a self-proclaimed year of efficiency would not enter such a capital-intensive technology arms race.&lt;/p&gt;

&lt;p&gt;Zuckerberg’s recent interview illuminated just how consequential AI could become to Meta and how they’re trying to steer its direction. &lt;/p&gt;

&lt;p&gt;Before getting into why Meta is all in on open source AI, let’s consider why they were better positioned than most.&lt;/p&gt;

&lt;h2&gt;
  Talent + Compute = AI
&lt;/h2&gt;

&lt;p&gt;To build powerful models, you primarily need two ingredients: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;AI talent (Engineering and Research)&lt;/li&gt;
&lt;li&gt;GPUs for compute&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Meta was uniquely positioned in that they ended up with these two ingredients prior to their ambitions of building LLMs. &lt;/p&gt;

&lt;p&gt;They’ve had the AI talent for years. The core of their product is AI-powered content feeds and ads, and their AI research group is now ~10 years old.&lt;/p&gt;

&lt;p&gt;How they acquired the GPUs is more interesting. Mark describes how Meta initially acquired the GPUs to support Instagram Reels:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;We got into this position with Reels where we needed more GPUs to train the models. It was this big evolution for our services. Instead of just ranking content from people or pages you follow, we made this big push to start recommending what we call unconnected content, content from people or pages that you're not following.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;As Reels tried to do its best impersonation of TikTok’s For You page, Meta acquired GPUs to train and serve these recommendation models. So even though Instagram Reels was late to the For You page, Meta was early to GPUs.&lt;/p&gt;

&lt;h2&gt;
  Applications vs. Models
&lt;/h2&gt;

&lt;p&gt;Undoubtedly, Meta was well-positioned to get into the LLM game. But why did they? And why did they choose open source? The answer comes down to controlling their value chain by undermining new entrants. &lt;/p&gt;

&lt;p&gt;Like the rest of the industry, Zuckerberg seems to believe that AI and LLMs specifically will yield better products. But they also have the potential to disrupt Meta’s value chain. In that way, this AI shift is similar to the shift to cloud or mobile where new technology architectures and distribution models emerged. While mobile was an overall win for Meta, Zuckerberg describes that it also introduced economic challenges (app store commission) and loss of control (app store approval, user data collection limitations). &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;So the question is, are we set up for a world like that with AI? You're going to get a handful of companies that run these closed models that are going to be in control of the APIs and therefore able to tell you what you can build?&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;AI is here to stay, so Meta is attempting to steer its trajectory to one that is more favorable for them long-term. For the sake of simplicity, we can think of AI as playing out in one of two scenarios. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 1 — value accrues to the models&lt;/strong&gt;&lt;br&gt;
For this to be true, you have to believe that models end up being highly differentiated and having high switching costs. This is what the OpenAIs and Anthropics of the world are hoping for. The analogy here would be that Foundation Models are the new cloud providers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Scenario 2 — value accrues to the applications&lt;/strong&gt;&lt;br&gt;
For this to be true, you have to believe that models end up being commoditized and easily substitutable for one another. This is what Meta is hoping for because they are first and foremost an application company.&lt;/p&gt;

&lt;p&gt;Because Meta’s core assets are its applications, it would prefer scenario 2. In fact, if Meta is unable to add the best AI to its apps, then it may be at the mercy of a specialized model provider like OpenAI. Or worse yet, they could lose users to a new app with better AI. To some degree, these things are already happening with ChatGPT and Character.ai acquiring consumer attention more rapidly than we’ve seen before. Ultimately, Meta has to figure out AI because they have too much to lose. &lt;/p&gt;

&lt;h2&gt;
  Why Open Source AI
&lt;/h2&gt;

&lt;p&gt;Meta’s free, open-source approach is their best bet to make scenario 2 happen. They hope to commoditize the model layer, preventing new players from gaining disproportionate control. And the best way to commoditize models is to make them easily accessible and free.&lt;/p&gt;

&lt;p&gt;This goes back to two of the primary benefits of open source: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ease of adoption (anyone can access, free, independent of all other products) &lt;/li&gt;
&lt;li&gt;Community-driven ecosystem (customizable, compatibility with other tools) &lt;/li&gt;
&lt;/ol&gt;

&lt;blockquote&gt;
&lt;p&gt;I think there are lots of cases where if this [LLMs/AI] ends up being like our databases or caching systems or architecture, we'll get valuable contributions from the community that will make our stuff better.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Put together, Meta creates a flywheel: developers use Llama because it’s accessible, which drives a better ecosystem, which in turn attracts more developers. For Meta, this flywheel should result in better AI powering their applications without relying on a single, overly powerful model provider.&lt;/p&gt;

&lt;h2&gt;
  The Android of Models
&lt;/h2&gt;

&lt;p&gt;The analogy that comes to mind is that OpenAI is trying to build iOS and Meta is trying to build Android. While OpenAI builds proprietary, closed models, Meta is building free, open source models for everyone else. As much power as Apple has in the mobile market, the alternative that Android provides is an important balancing force.  &lt;/p&gt;

&lt;p&gt;In many respects, Meta’s decisions here are a reflection of the past. It is taking steps to avoid another Apple in its value chain or the rise of another TikTok. While their pursuits are in self-interest, I am personally optimistic about what open source AI means for the tech landscape. More open models should enable a richer application and infrastructure landscape. In this case, I think there is incentive alignment between Meta’s AI ambitions and the future AI landscape. &lt;/p&gt;

&lt;p&gt;It’s still early, but the signs suggest that open source is catching up fast!&lt;/p&gt;

&lt;p&gt;Subscribe @ nandu.substack.com&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>ai</category>
      <category>discuss</category>
    </item>
  </channel>
</rss>
