<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jason Corso</title>
    <description>The latest articles on DEV Community by Jason Corso (@jasoncorso).</description>
    <link>https://dev.to/jasoncorso</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1378043%2Fa2239d22-8d88-4131-8e0f-da7fa1135adc.png</url>
      <title>DEV Community: Jason Corso</title>
      <link>https://dev.to/jasoncorso</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/jasoncorso"/>
    <language>en</language>
    <item>
      <title>🎥🖐🎥🖐 New Video GenAI with Better Rendering of Hands --&gt; Instructional Video Generation 🎥🖐🎥🖐</title>
      <dc:creator>Jason Corso</dc:creator>
      <pubDate>Tue, 17 Dec 2024 16:09:49 +0000</pubDate>
      <link>https://dev.to/jasoncorso/new-video-genai-with-better-rendering-of-hands-instructional-video-generation-23po</link>
      <guid>https://dev.to/jasoncorso/new-video-genai-with-better-rendering-of-hands-instructional-video-generation-23po</guid>
      <description>&lt;p&gt;&lt;strong&gt;New Paper Alert&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Instructional Video Generation – we are releasing a new method for video generation that explicitly focuses on fine-grained, subtle hand motions. Given a single image frame as context and a text prompt for an action, our new method generates high-quality videos with careful attention to hand rendering. We use the instructional video domain as the driver here, given its rich set of videos and the challenges instructional videos pose for both humans and robots.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Try it out yourself.&lt;/strong&gt; Links to the paper, project page, and code are below, and a demo page on Hugging Face is in the works so you can try it on your own more easily.&lt;/p&gt;

&lt;p&gt;Our new method generates instructional videos tailored to &lt;em&gt;your room, your tools, and your perspective&lt;/em&gt;. Whether it’s threading a needle or rolling dough, the video shows &lt;em&gt;exactly how you would do it&lt;/em&gt;, preserving your environment while guiding you frame-by-frame. The key breakthrough is in mastering &lt;strong&gt;accurate subtle fingertip actions&lt;/strong&gt;—the exact fine details that matter most in action completion. By designing automatic Region of Motion (RoM) generation and a hand structure loss for fine-grained fingertip movements, our diffusion-based model outperforms six state-of-the-art video generation methods, bringing unparalleled clarity to Video GenAI.&lt;/p&gt;

&lt;p&gt;👉 Project Page: &lt;a href="https://excitedbutter.github.io/project_page/" rel="noopener noreferrer"&gt;https://excitedbutter.github.io/project_page/&lt;/a&gt;&lt;br&gt;
👉 Paper Link: &lt;a href="https://arxiv.org/abs/2412.04189" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2412.04189&lt;/a&gt;&lt;br&gt;
👉 GitHub Repo: &lt;a href="https://github.com/ExcitedButter/Instructional-Video-Generation-IVG" rel="noopener noreferrer"&gt;https://github.com/ExcitedButter/Instructional-Video-Generation-IVG&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This paper is coauthored with my students Yayuan Li and Zhi Cao at the University of Michigan and Voxel51.&lt;/p&gt;

</description>
      <category>genai</category>
      <category>videogeneration</category>
      <category>computervision</category>
      <category>ai</category>
    </item>
    <item>
      <title>💧 📉 💧 Are you wasting money &amp; time: does your data have a leak? 💧 📉 💧</title>
      <dc:creator>Jason Corso</dc:creator>
      <pubDate>Thu, 12 Dec 2024 17:24:41 +0000</pubDate>
      <link>https://dev.to/jasoncorso/are-you-wasting-money-time-does-your-data-have-a-leak-24kg</link>
      <guid>https://dev.to/jasoncorso/are-you-wasting-money-time-does-your-data-have-a-leak-24kg</guid>
      <description>&lt;p&gt;New open source AI feature alert! 💧🔔💧🔔💧🔔💧🔔 &lt;/p&gt;

&lt;p&gt;Generalization in machine learning models is still poorly understood. Because of this, the status quo practice is to heuristically verify our models on holdout test sets and hope that this check has some bearing on performance in the wild. This means faulty testing carries a huge cost, both in critical MLE time and in error-filled data and annotations.&lt;/p&gt;

&lt;p&gt;One common failure mode of testing is when the test split is afflicted with data leakage. When testing on such a split, there is no guarantee that generalization is being verified; in the extreme case, no new information is gained about the model's performance outside the train set. Supervised models learn the minimal discriminative features needed to make a decision, and if those features leak into the test set, a dangerous, false sense of confidence can develop in a model. Don't let this happen to you.&lt;/p&gt;

&lt;p&gt;Leaky splits can be the bane of ML models, giving a false sense of confidence and a nasty surprise in production. The image on this post is a sneak peek into what you can expect (this example is taken from ImageNet 👀).&lt;/p&gt;

&lt;p&gt;Check out this Leaky-Splits blog post by my friend and colleague Jacob Sela:&lt;br&gt;
&lt;a href="https://medium.com/voxel51/on-leaky-datasets-and-a-clever-horse-18b314b98331" rel="noopener noreferrer"&gt;https://medium.com/voxel51/on-leaky-datasets-and-a-clever-horse-18b314b98331&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Jacob is also the lead developer behind the new open source Leaky-Splits feature in FiftyOne, available in version 1.1.&lt;/p&gt;

&lt;p&gt;This function allows you to automatically:&lt;br&gt;
 🕵 Detect data leakage in your dataset splits&lt;br&gt;
 🪣 Clean your data from these leaks&lt;/p&gt;

&lt;p&gt;This will help you:&lt;br&gt;
✔️ Build trust in your data&lt;br&gt;
📊  Get more accurate evaluations&lt;/p&gt;

&lt;p&gt;And, it's open source. Check it out on GitHub.&lt;/p&gt;
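
&lt;p&gt;For intuition, the core idea is simple: find test samples whose embeddings are near-duplicates of training samples. Below is a minimal NumPy sketch of that concept only; it is an illustration, not the FiftyOne Brain API (the threshold and distance metric here are arbitrary choices), so see the linked blog post and repo for the real feature.&lt;/p&gt;

```python
import numpy as np

def find_leaks(train_emb, test_emb, thresh=0.05):
    """Flag test samples whose embedding nearly duplicates a train sample.

    Conceptual sketch of leaky-split detection: compute pairwise
    distances between splits and flag near-duplicates.
    """
    # Pairwise Euclidean distances, shape (n_test, n_train)
    dists = np.linalg.norm(
        test_emb[:, None, :] - train_emb[None, :, :], axis=-1
    )
    nearest = dists.min(axis=1)  # distance to the closest train sample
    return np.flatnonzero(thresh > nearest)  # indices of leaky test samples

# Toy example: the last test embedding is a near-copy of a train embedding
rng = np.random.default_rng(0)
train = rng.normal(size=(100, 8))
test = np.vstack([rng.normal(size=(4, 8)), train[17] + 1e-6])
print(find_leaks(train, test))  # -> [4]
```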

&lt;p&gt;&lt;a href="https://github.com/voxel51/fiftyone-brain/blob/2e673cfbf8fb2c3574cbbcdd0bc3350fb877db33/fiftyone/brain/__init__.py#L826" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;From your friends at Voxel51&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>computervision</category>
      <category>machinelearning</category>
      <category>leakysplits</category>
    </item>
    <item>
      <title>How difficult is your dataset REALLY?</title>
      <dc:creator>Jason Corso</dc:creator>
      <pubDate>Tue, 10 Dec 2024 16:06:26 +0000</pubDate>
      <link>https://dev.to/jasoncorso/how-difficult-is-this-dataset-really-12e3</link>
      <guid>https://dev.to/jasoncorso/how-difficult-is-this-dataset-really-12e3</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81hqyus6st1bbiiia6y3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F81hqyus6st1bbiiia6y3.png" alt="Image description" width="800" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;👽 📈 👽 How difficult is this dataset REALLY? 👽 📈 👽&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;New Paper Alert!&lt;/strong&gt;&lt;br&gt;
Class-wise Autoencoders Measure Classification Difficulty and Detect Label Mistakes&lt;/p&gt;

&lt;p&gt;We like to think that the challenge in training a classifier is handled by hyperparameter tuning or model innovation, but there is rich inherent signal in the data and their embeddings.  Understanding how hard a machine learning problem is has been quite elusive.  Not anymore.&lt;/p&gt;

&lt;p&gt;Now you can compute the difficulty of a classification dataset without training a classifier, using only 100 labels per class.  And this difficulty estimate is surprisingly independent of the dataset size.&lt;/p&gt;

&lt;p&gt;Traditionally, methods for dataset difficulty assessment have been time and/or compute-intensive, often requiring training one or multiple large downstream models. What's more, if you train a model with a certain architecture on your dataset and achieve a certain accuracy, there is no way to be sure that your architecture was perfectly suited to the task at hand — it could be that a different set of inductive biases would have led to a model that learned patterns in the data with far more ease.&lt;/p&gt;

&lt;p&gt;Our method trains a lightweight autoencoder for each class and uses the ratios of reconstruction errors to estimate classification difficulty. Running this dataset difficulty estimation method on a 100k sample dataset takes just a few minutes, and doesn't require tuning or custom processing to run on new datasets!&lt;/p&gt;
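
&lt;p&gt;To make the idea concrete, here is a toy sketch that uses rank-2 PCA as a stand-in for the paper’s lightweight class-wise autoencoders. This is an illustrative simplification, not the released implementation; see the repo for the real thing.&lt;/p&gt;

```python
import numpy as np

def fit_class_pca(X, k=2):
    """Fit a rank-k PCA 'autoencoder' for one class; returns (mean, components)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def recon_error(x, model):
    """Reconstruction error of sample x under a class model."""
    mu, W = model
    z = (x - mu) @ W.T   # encode into the class subspace
    x_hat = mu + z @ W   # decode back
    return np.linalg.norm(x - x_hat)

# Toy 2-class dataset: well-separated Gaussian blobs in a 5-D feature space
rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, size=(100, 5))
X1 = rng.normal(loc=5.0, size=(100, 5))
models = [fit_class_pca(X0), fit_class_pca(X1)]

# Per-sample signal: own-class error divided by the other class's error.
# Ratios well below 1 across the dataset suggest an easy, separable
# classification problem; ratios near 1 suggest a hard one.
ratios = np.array(
    [recon_error(x, models[0]) / recon_error(x, models[1]) for x in X0]
)
print(ratios.mean())  # well below 1.0 for this separable toy data
```

The actual method aggregates such ratios into a dataset-level difficulty score and also uses them to surface likely label mistakes.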

&lt;p&gt;How well does it work? We conducted a systematic study of 19 common visual datasets, comparing the estimated difficulty from our method to the SOTA classification accuracy. Aside from a single outlier, the correlation is 0.78. It even works on medical datasets!&lt;/p&gt;

&lt;p&gt;Paper Link:  &lt;a href="https://arxiv.org/abs/2412.02596" rel="noopener noreferrer"&gt;https://arxiv.org/abs/2412.02596&lt;/a&gt;&lt;br&gt;
GitHub Repo: &lt;a href="https://github.com/voxel51/reconstruction-error-ratios" rel="noopener noreferrer"&gt;https://github.com/voxel51/reconstruction-error-ratios&lt;/a&gt;&lt;/p&gt;

</description>
      <category>machinelearning</category>
      <category>ai</category>
      <category>computervision</category>
    </item>
    <item>
      <title>Data curation is the real backbone of AI success</title>
      <dc:creator>Jason Corso</dc:creator>
      <pubDate>Thu, 17 Oct 2024 22:10:07 +0000</pubDate>
      <link>https://dev.to/jasoncorso/data-curation-is-the-real-backbone-of-ai-success-42lh</link>
      <guid>https://dev.to/jasoncorso/data-curation-is-the-real-backbone-of-ai-success-42lh</guid>
      <description>&lt;p&gt;🚀 Data curation is the real backbone of AI success. 🚀&lt;/p&gt;

&lt;p&gt;In machine learning, choosing a model is important—but it’s the data that makes or breaks your project. The biggest gains come from continuous refinement of both data and models, driven by experts who understand the domain and the nuances of the task.&lt;/p&gt;

&lt;p&gt;In my latest blog, I dive into why data curation is neither automatable nor outsourceable and how it defines the success of your AI projects. I provide a rigorous definition of data curation, discuss why it requires human expertise both in the domain and in ML, and offer a thorough discussion of bias in data curation.&lt;/p&gt;

&lt;p&gt;Read more: &lt;a href="https://medium.com/@jasoncorso/data-curation-the-beast-behind-every-ai-model-d136eac4da6b" rel="noopener noreferrer"&gt;https://medium.com/@jasoncorso/data-curation-the-beast-behind-every-ai-model-d136eac4da6b&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>datacuration</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>So I Built This: Broadening the Impact of What You’ve Built in the Lab</title>
      <dc:creator>Jason Corso</dc:creator>
      <pubDate>Thu, 06 Jun 2024 21:24:43 +0000</pubDate>
      <link>https://dev.to/jasoncorso/so-i-built-this-broadening-the-impact-of-what-youve-built-in-the-lab-1pb3</link>
      <guid>https://dev.to/jasoncorso/so-i-built-this-broadening-the-impact-of-what-youve-built-in-the-lab-1pb3</guid>
      <description>&lt;p&gt;How the non-profit, foundation model may be your solution to the challenge of broadening the impact of your research&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhldi1faiwdz9bw5merd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnhldi1faiwdz9bw5merd.png" alt="Image description" width="800" height="456"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;&lt;center&gt;So I Built This discusses how to align your actions with your interests when considering different ways to broaden the impact of your academic research. Image generated by DALL-E.&lt;/center&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In recent months, I’ve noticed a trend: colleagues approaching me with an eerily similar question that taps into a larger issue facing academics and innovators alike:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“So, I built this. It’s gotten really excellent early feedback. I want to bring it to more people, what should I do?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Now, believe me: as both an academic and a startup founder-operator, I have some pretty interesting colleagues. So when one begins with “So I built this,” I give them my attention. What they’ve built is almost certainly state-of-the-art in some capacity. And they’ve each observed its value in their early user testing.&lt;/p&gt;

&lt;p&gt;In each case, of course, my colleagues shared details about what they had built and their ideas on what they could do next. But, those details don’t matter greatly, and I want to respect the privacy of the individuals who came to me. I do believe in all of these encounters there was genuine interest in broadening the impact of what they built.&lt;/p&gt;

&lt;p&gt;Below, I’ll break down these conversations into key questions and responses, offering insights on whether to start a company, apply for a grant, or consider a non-profit foundation for broadening the impact of such an innovation.&lt;/p&gt;

&lt;p&gt;Interestingly, in these discussions, it turned out that my colleagues really did not have much interest in building a startup company. They sincerely wanted to see their work adopted. But they were ultimately less committed to the daunting prospect of building a startup, a roller-coaster ride of challenges that is neither for the faint of heart nor in alignment with the classical performance indicators of academics. This led us to focus on the idea of creating a non-profit or foundation to support broader adoption.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should I start a company for this?
&lt;/h2&gt;

&lt;p&gt;Essentially, no matter how steeped in the academic world these folks are, they understand that startups and university spinouts are a natural option for academics to bring the fruits of their labor to a broader user base. However, actually operating a company means work that likely none of these folks or their team members were trained for or had a particular interest in doing. Some of it is intellectually stimulating, like building a go-to-market strategy and evaluating product-market fit; some of it is operational labor, like managing a payroll provider. And this is not even getting close to the work required to secure funding from seed or venture investors, which itself takes significant effort.&lt;/p&gt;

&lt;p&gt;After talking about this with each of my inquiring colleagues, it became increasingly clear that although they had passion for their innovation, they did not necessarily have passion for building a company to bring it to market. In each case, we tended to leave this part of the discussion at some form of: “if you are really going to consider starting a company for this, then you need to ensure that you are fully committed to its success.” Starting a company is no small task.&lt;/p&gt;

&lt;p&gt;Oddly, we talked about finding other people to actually start the company for them, which is a classical way of approaching this problem for academics. Yet, none seemed willing to hand over the control, which would ultimately be necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Should I apply for an SBIR?
&lt;/h2&gt;

&lt;p&gt;Inevitably, being academics, we also talked about applying for a grant to fund the work of broadening adoption. Most federal funding agencies in the US have a &lt;a href="https://www.sbir.gov/"&gt;Small Business Innovation Research (SBIR) &lt;/a&gt;grant program that aims to stimulate technological innovation by funding small businesses to help commercialize their innovations, thereby fostering economic growth and addressing specific governmental needs. While the program’s aim of stimulating innovation is important, it’s not obvious to me that it’s a match for this case.&lt;/p&gt;

&lt;p&gt;First, you began the conversation telling me you’ve already built something worthwhile. As far as I know, the SBIR program is for yet-to-be-built extensions of your innovation that you may or may not have much interest in.&lt;/p&gt;

&lt;p&gt;Second, I’ve seen numerous cases over the years where someone starts a company funded by an SBIR with the intention of building a business. However, the lifecycle of one SBIR turns out to be insufficient to actually bootstrap the business. So, the founders then need to secure further funding. They could try to go to seed or venture funding. But, this would be a challenge. Although the SBIR grant mentions an emphasis on a business plan, it seems rare that actual customer outreach and product-market fit are established during the work of a typical SBIR.&lt;/p&gt;

&lt;p&gt;These academic founders would need to emphasize the other half of the startup — building the actual business — which is a muscle not frequently developed during the grant period.&lt;/p&gt;

&lt;p&gt;This makes it difficult to convince investors, especially given the time already invested in the company. Instead, the much easier thing to do is apply for another SBIR, laying the groundwork for what are commonly called “SBIR shops,” i.e., small businesses whose primary go-to-market approach is funding through SBIR programs, rarely turning anything into an actual commercial product.&lt;/p&gt;

&lt;p&gt;Third, the principle of the SBIR grant is to, in good faith, actually attempt to start a business based on the technology you develop in the grant. It is not free money; in fact, it is tax-payer money marked for stimulating technological innovation in new small businesses. Is this something you really want to do? As I said above, it is not a decision to be taken lightly.&lt;/p&gt;

&lt;h2&gt;
  
  
  Have you considered a non-profit or a foundation?
&lt;/h2&gt;

&lt;p&gt;While my colleagues and I explored various options for their business endeavors, such as licensing their innovation to another company, our conversations consistently gravitated towards a different model that seemed to better fit the needs and goals of academic innovators. Since these individuals did not want to focus on the work of building a startup company, the route to a non-profit or foundation seemed more appropriate. This route would enable them to maintain a mission-driven focus emphasizing the adoption and impact of their innovation irrespective of the commercial viability.&lt;/p&gt;

&lt;p&gt;The &lt;a href="https://www.embodiedaifoundation.org/"&gt;Embodied AI Foundation&lt;/a&gt; is probably the first place I learned about this different style of broadening adoption of technical work. It supports the highly popular open source CARLA autonomous driving simulator, for example. This model is, however, becoming more popular. I find it popping up in numerous ways to support various innovations. For example, the &lt;a href="http://www.cvdfoundation.org/"&gt;Common Visual Data Foundation&lt;/a&gt; was created to provide long-term support for the wildly popular &lt;a href="https://cocodataset.org/#home"&gt;Common Objects in Context (COCO) dataset&lt;/a&gt; in computer vision.&lt;/p&gt;

&lt;p&gt;While this may be an attractive model that lets the academic achieve their impact goals without the challenges inherent in creating a for-profit entity, it is not without its own risks. The two biggest are funding and operations. How will you fund the effort? This may indeed mean grant programs like those above, but it also unlocks different types of money that emphasize such non-profits, as long as your mission is in alignment with that of the money source. Operationally, at least once the effort grows, you will likely still need to find someone to help you run the entity. Yet these challenges seem to me a heck of a lot easier than those of starting a for-profit entity.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;In conclusion, while the allure of a grant- or venture-backed startup company might initially seem like the most direct route to broadening the impact of what you’ve built, it’s essential to consider alternative models that align better with your goals and resources. The non-profit or foundation route offers a mission-driven approach that prioritizes adoption and societal impact over commercial gain, mitigating many of the operational burdens and risks associated with for-profit ventures. As we’ve discussed, it’s not without its challenges, particularly in funding and management, but it offers a viable path for academics passionate about their creations yet hesitant to embark on the startup rollercoaster. By focusing on your mission and leveraging the support of grants and philanthropic funding, you can achieve a broader and more sustainable impact with your innovation.&lt;/p&gt;

&lt;h3&gt;
  
  
  Acknowledgments
&lt;/h3&gt;

&lt;p&gt;Thank you to my colleague Michelle Brinich for reviewing and editing this blog.&lt;/p&gt;

&lt;h3&gt;
  
  
  Biography
&lt;/h3&gt;

&lt;p&gt;Jason Corso is Professor of Robotics, Electrical Engineering and Computer Science at the University of Michigan and Co-Founder / Chief Science Officer of the AI startup Voxel51. He received his PhD and MSE degrees at Johns Hopkins University in 2005 and 2002, respectively, and a BS Degree with honors from Loyola University Maryland in 2000, all in Computer Science. He is the recipient of the University of Michigan EECS Outstanding Achievement Award 2018, Google Faculty Research Award 2015, Army Research Office Young Investigator Award 2010, National Science Foundation CAREER award 2009, SUNY Buffalo Young Investigator Award 2011, a member of the 2009 DARPA Computer Science Study Group, and a recipient of the Link Foundation Fellowship in Advanced Simulation and Training 2003. Corso has authored more than 150 peer-reviewed papers and hundreds of thousands of lines of open-source code on topics of his interest including computer vision, robotics, data science, and general computing. He is a member of the AAAI, ACM, MAA and a senior member of the IEEE.&lt;/p&gt;

&lt;h3&gt;
  
  
  Disclaimer
&lt;/h3&gt;

&lt;p&gt;This article is provided for informational purposes only. It is not to be taken as legal or other advice in any way. The views expressed are those of the author only and not his employer or any other institution. The author does not assume and hereby disclaims any liability to any party for any loss, damage, or disruption caused by the content, errors, or omissions, whether such errors or omissions result from accident, negligence, or any other cause.&lt;/p&gt;

&lt;h3&gt;
  
  
  Copyright 2024 by Jason J. Corso. All Rights Reserved.
&lt;/h3&gt;

&lt;p&gt;No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher via direct message on X/Twitter at &lt;em&gt;JasonCorso&lt;/em&gt;.&lt;/p&gt;

</description>
      <category>computervision</category>
      <category>ai</category>
      <category>machinelearning</category>
      <category>datascience</category>
    </item>
    <item>
      <title>Observations on MLOps – A Fragmented Mosaic of Mismatched Expectations</title>
      <dc:creator>Jason Corso</dc:creator>
      <pubDate>Fri, 05 Apr 2024 17:53:05 +0000</pubDate>
      <link>https://dev.to/jasoncorso/observations-on-mlops-a-fragmented-mosaic-of-mismatched-expectations-1mc9</link>
      <guid>https://dev.to/jasoncorso/observations-on-mlops-a-fragmented-mosaic-of-mismatched-expectations-1mc9</guid>
      <description>&lt;p&gt;🚀 New Blog Alert: Observations on MLOps – A Fragmented Mosaic of Mismatched Expectations 🚀&lt;/p&gt;

&lt;p&gt;Diving deep into the heart of MLOps, my latest blog explores the real-world challenges and fragmented beauty within AI/ML production environments. From the perspectives gained through extensive conversations with production teams to the candid exploration of tooling landscapes and the elusive quest for end-to-end solutions, this piece offers a unique lens on the state of practice in MLOps. It really is a skilled outsider (an academic) looking into this vast expanse to learn.&lt;/p&gt;

&lt;p&gt;Discover insights on the intricate dance between data science and IT expertise, the mosaic of MLOps tools, and the quest for flexibility in the fast-evolving AI/ML field. I hope it proves useful to those navigating the complexities of AI/ML production. Join the discussion and share your thoughts!&lt;/p&gt;

&lt;p&gt;🔗 Read the blog: &lt;a href="https://medium.com/@jasoncorso/observations-on-mlops-a-fragmented-mosaic-of-mismatched-expectations-3488685ec0b6"&gt;https://medium.com/@jasoncorso/observations-on-mlops-a-fragmented-mosaic-of-mismatched-expectations-3488685ec0b6&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;#AI #ML #MLOps #DataScience #MachineLearning #TechnologyInsights&lt;/p&gt;

</description>
      <category>mlops</category>
      <category>ai</category>
      <category>aiops</category>
      <category>aistack</category>
    </item>
    <item>
      <title>A Quick Intro To Seed Rejection in Human+AI Teams</title>
      <dc:creator>Jason Corso</dc:creator>
      <pubDate>Sat, 23 Mar 2024 19:38:55 +0000</pubDate>
      <link>https://dev.to/jasoncorso/a-quick-intro-to-seed-rejection-in-humanai-teams-1j2g</link>
      <guid>https://dev.to/jasoncorso/a-quick-intro-to-seed-rejection-in-humanai-teams-1j2g</guid>
      <description>&lt;p&gt;🚀 Excited to share my latest blog: "Getting Rid of the Bad Seeds: A Quick Intro to Seed Rejection in Human+AI Teams"! 🚀&lt;/p&gt;

&lt;p&gt;For more than 40 years, I’ve been interested in how we interact with machines.  This blog dives into the world of human+AI collaboration.  It provides a brief introduction to the problem of seed rejection in human+AI collaboration.&lt;/p&gt;

&lt;p&gt;This piece highlights how my understanding of human+AI interaction has evolved. Humans can prompt AI systems.  AI can prompt humans.  Wait, what?!  Yep!  Or humans and AI can work together.&lt;/p&gt;

&lt;p&gt;Read the &lt;a href="https://medium.com/@jasoncorso/getting-rid-of-the-bad-seeds-a-quick-intro-to-seed-rejection-in-human-ai-teams-7497129323ff"&gt;blog here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;I'd love to hear what the dev.to community thinks about this direction.&lt;/p&gt;

</description>
      <category>humanaiteaming</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
