<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: S🅰️Ⓜ️ 🛋</title>
    <description>The latest articles on DEV Community by S🅰️Ⓜ️ 🛋 (@couch).</description>
    <link>https://dev.to/couch</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F105487%2F90b8c857-1e18-48f2-b417-800bd96b99f5.jpg</url>
      <title>DEV Community: S🅰️Ⓜ️ 🛋</title>
      <link>https://dev.to/couch</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/couch"/>
    <language>en</language>
    <item>
      <title>Smart Cover Letters Made Stupid Simple</title>
      <dc:creator>S🅰️Ⓜ️ 🛋</dc:creator>
      <pubDate>Sun, 26 Jan 2025 00:12:01 +0000</pubDate>
      <link>https://dev.to/couch/smart-cover-letters-made-stupid-simple-2p52</link>
      <guid>https://dev.to/couch/smart-cover-letters-made-stupid-simple-2p52</guid>
      <description>&lt;p&gt;&lt;em&gt;This is a submission for the &lt;a href="https://srv.buysellads.com/ads/long/x/T6EK3TDFTTTTTT6WWB6C5TTTTTTGBRAPKATTTTTTWTFVT7YTTTTTTKPPKJFH4LJNPYYNNSZL2QLCE2DPPQVCEI45GHBT" rel="noopener noreferrer"&gt;Agent.ai&lt;/a&gt; Challenge: Full-Stack Agent (&lt;a href="https://dev.to/challenges/agentai"&gt;See Details&lt;/a&gt;)&lt;/em&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I Built
&lt;/h2&gt;

&lt;p&gt;For the Agent.ai Challenge, I built an intelligent, personalized Cover Letter Writer that streamlines the job application process by generating tailored cover letters based on the user’s LinkedIn profile and a job posting. This agent leverages multiple advanced features of Agent.ai, including Python utilities invoked via serverless functions, secondary agent invocations, and custom prompts, to deliver high-quality, user-focused results.&lt;/p&gt;

&lt;h3&gt;
  
  
  Key Features and Workflow
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Inputs:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;LinkedIn Profile URL:&lt;/strong&gt; The agent accepts the user’s LinkedIn URL to parse and extract the username. Using Agent.ai’s “LinkedIn Profile” action, the agent fetches the user’s public profile data, such as skills, work history, and accomplishments. Anecdotally, I opted to ask for the full URL rather than just the username after testing this with 15 friends: the majority provided the full LinkedIn URL or didn't know what "LinkedIn username" meant when asked. &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A serverless function processes the URL to extract the username seamlessly. (&lt;em&gt;Note: in the future I could imagine a string-parse action that lets you use regex for a simpler approach to this&lt;/em&gt;).&lt;/p&gt;
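&lt;p&gt;A minimal sketch of what that parsing utility might look like in Python (the function name and regex are my own assumptions, not the agent's actual code):&lt;/p&gt;

```python
import re

def extract_linkedin_username(url):
    """Pull the public username out of a LinkedIn profile URL."""
    # Capture everything after "/in/" up to the next slash, query, or fragment
    match = re.search(r"linkedin\.com/in/([^/?#]+)", url)
    return match.group(1) if match else None
```

&lt;p&gt;Handing the agent &lt;code&gt;https://www.linkedin.com/in/janedoe/&lt;/code&gt; would then yield &lt;code&gt;janedoe&lt;/code&gt;, and a non-LinkedIn URL returns nothing rather than a bogus username.&lt;/p&gt;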

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Job Posting URL:&lt;/strong&gt; The agent can process any publicly available job posting. It fetches the page content and sends it to a secondary Agent.ai Agent ("&lt;strong&gt;Job Post Summarizer&lt;/strong&gt;"), which parses and summarizes the posting into an XML format. This modular design ensures that the job summarization logic is isolated, making it easier to improve independently of the cover letter writing logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Output Tone:&lt;/strong&gt; The user can specify the tone of the cover letter, such as formal, friendly, or academic, allowing for customization aligned with the applicant's personality and with the job and company culture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Additional Notes:&lt;/strong&gt; An optional input field allows users to provide specific instructions to the agent, such as emphasizing a particular skill or experience not prominently featured on their LinkedIn profile. This flexibility ensures the final output aligns with the user's goals.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;2. Core Functionality:&lt;/strong&gt;&lt;br&gt;
After collecting the inputs, the agent combines the structured job posting summary and the LinkedIn profile data into a tailored writing prompt for the AI. The prompt ensures that the resulting cover letter:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Highlights the user’s key skills and achievements relevant to the job description.&lt;/li&gt;
&lt;li&gt;Addresses specific company and role details.&lt;/li&gt;
&lt;li&gt;Includes placeholders for missing personal details (e.g., phone number, email), which users can fill in manually.&lt;/li&gt;
&lt;/ul&gt;
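&lt;p&gt;To make the prompt-assembly step concrete, here is a rough sketch of how the inputs could be combined; the field names and template wording are hypothetical, not the agent's actual prompt:&lt;/p&gt;

```python
def build_prompt(profile, job_summary, tone, notes=""):
    """Combine the structured inputs into one writing prompt (hypothetical template)."""
    return (
        f"Write a {tone} cover letter for the job described below.\n"
        f"Job summary:\n{job_summary}\n"
        f"Candidate profile:\n{profile}\n"
        f"Extra instructions: {notes}\n"
        "Use placeholders like [PHONE] and [EMAIL] for missing personal details."
    )
```

&lt;p&gt;The real agent does this inside its custom prompt step, but the shape is the same: every user input lands in a dedicated slot, and the placeholder instruction keeps the model from inventing contact details.&lt;/p&gt;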
&lt;h3&gt;
  
  
  Advanced Features Used
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Invoking Python Utilities via Serverless Functions:&lt;/strong&gt; Serverless functions are used to parse the LinkedIn URL into the username, streamlining the input process for users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Agent.ai Actions:&lt;/strong&gt; Leveraged the “LinkedIn Profile” action to dynamically pull user data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Process Invocations:&lt;/strong&gt; Integrated a secondary agent for job post parsing, maintaining a modular design for scalability and future enhancements.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Custom Prompts:&lt;/strong&gt; The AI’s writing prompt dynamically adapts based on user inputs, ensuring personalized, high-quality outputs.&lt;/li&gt;
&lt;/ol&gt;
&lt;h3&gt;
  
  
  Results
&lt;/h3&gt;

&lt;p&gt;This agent simplifies a common but time-consuming task by automating the creation of professional and personalized cover letters. It not only saves time but also ensures the letters are targeted and effective, helping users stand out in competitive job markets.&lt;/p&gt;

&lt;p&gt;By integrating advanced Agent.ai capabilities, this solution showcases the potential of AI to enhance productivity and deliver meaningful, user-centered tools.&lt;/p&gt;
&lt;h2&gt;
  
  
  Demo
&lt;/h2&gt;

&lt;p&gt;Try the &lt;a href="https://agent.ai/agent/your-cover-letter-writer" rel="noopener noreferrer"&gt;Cover Letter Writer&lt;/a&gt; for yourself!&lt;/p&gt;



&lt;p&gt;&lt;iframe src="https://player.vimeo.com/video/1050399114" width="710" height="399"&gt;
&lt;/iframe&gt;
&lt;/p&gt;

&lt;h2&gt;
  
  
  Agent.ai Experience
&lt;/h2&gt;

&lt;p&gt;My experience with Agent.ai was both rewarding and insightful. Here's a breakdown of my journey:&lt;/p&gt;

&lt;h3&gt;
  
  
  What I Loved
&lt;/h3&gt;

&lt;p&gt;Agent.ai’s interface is clean, intuitive, and well-suited for building dynamic AI agents. I particularly enjoyed the ability to use tools like serverless functions and secondary agent invocations to create flexible and modular workflows. The platform's ability to dynamically pull data (e.g., LinkedIn profiles) and leverage processes like custom prompt generation made building my Cover Letter Writer feel both creative and productive.&lt;/p&gt;

&lt;p&gt;The modular design philosophy stood out to me. By breaking down tasks into smaller agents (e.g., a separate "Job Post Summarizer"), I could build a scalable system that balances simplicity and functionality. This approach aligned well with the flexibility of agentic systems, where the agent takes control over how tasks are completed.&lt;/p&gt;

&lt;p&gt;This allowed me to systematically break down the entire task of writing a well-researched Cover Letter, then add the necessary actions for each step. &lt;/p&gt;

&lt;h3&gt;
  
  
  Challenges I Encountered
&lt;/h3&gt;

&lt;p&gt;As with many early-stage tools, there were some rough edges. Specifically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Documentation Gaps:&lt;/strong&gt; Certain features, like executing serverless functions, accessing agent variables, and defining output variables, lacked detailed documentation. This added some trial-and-error to the process.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging Serverless Functions:&lt;/strong&gt; While the ability to run Python utilities via serverless functions is incredibly powerful, the process for debugging and understanding the execution environment required some reverse engineering. I ended up creating test agents and carefully analyzing outputs to better understand the system's capabilities and constraints.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  How I Worked Through the Challenges
&lt;/h3&gt;

&lt;p&gt;Despite these hurdles, I found the problem-solving process to be a ton of fun! By experimenting with the platform and analyzing the environment, I gained a deeper understanding of how Agent.ai operates. This hands-on exploration was invaluable, and I appreciated how the platform’s design allowed me to iterate quickly.&lt;/p&gt;

&lt;h3&gt;
  
  
  What Could Be Improved
&lt;/h3&gt;

&lt;p&gt;To enhance the developer experience, I’d love to see:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Expanded Documentation:&lt;/strong&gt; More guides and examples for advanced features like serverless functions, variable handling, and output definitions would save time and lower the barrier to entry for new users.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Debugging Tools:&lt;/strong&gt; Built-in tools for debugging serverless functions or visualizing the agent’s environment during execution would be a game-changer for efficiency. The existing debug tab is great; iterating on it and making it more powerful would be awesome! Specifically, being able to "re-run" an agent from a specific step with saved inputs would make debugging and iterating even faster.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Final Thoughts
&lt;/h3&gt;

&lt;p&gt;Overall, Agent.ai is an awesome platform with a lot of potential for building innovative, agent-driven utilities. While there’s room for improvement in terms of documentation and developer support, the core functionality is robust and well designed, and the process of building with Agent.ai is both fun and empowering.&lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>agentaichallenge</category>
      <category>ai</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>How to Build a Fortnite Object Detection Model - Updated</title>
      <dc:creator>S🅰️Ⓜ️ 🛋</dc:creator>
      <pubDate>Fri, 26 Jul 2019 19:46:57 +0000</pubDate>
      <link>https://dev.to/ibmdeveloper/how-to-build-a-fortnite-object-detection-model-4ia6</link>
      <guid>https://dev.to/ibmdeveloper/how-to-build-a-fortnite-object-detection-model-4ia6</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fyflq3gylsbgtfy4l0dwi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fyflq3gylsbgtfy4l0dwi.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In the &lt;a href="https://dev.to/couch/using-video-games-to-improve-machine-learning-2lk0"&gt;previous post&lt;/a&gt; I talked about how video games can be used as a resource for building general machine learning models. Today I want to walk through building an object detection model using the Watson Machine Learning service to identify and track objects in Fortnite. Let's jump right in!&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;p&gt;We'll be using a few different services available on IBM Cloud, so first you'll want to &lt;a href="https://ibm.biz/cloud-annotations-sign-up" rel="noopener noreferrer"&gt;create a new account, or log in&lt;/a&gt; to your existing account.&lt;/p&gt;

&lt;p&gt;The services that we'll be using are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Cloud Object Storage: To store our data and models&lt;/li&gt;
&lt;li&gt;Watson Machine Learning: Environment to train our model&lt;/li&gt;
&lt;li&gt;Cloud Annotations: To quickly label our training data&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You will also need to have a video of Fortnite gameplay that you can use as training and test data (I have provided one, &lt;a href="https://github.com/samuelcouch/fortnite-obj-detect-tutorial/tree/master/training-data" rel="noopener noreferrer"&gt;here&lt;/a&gt;, but the more, the better).&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Object Storage
&lt;/h2&gt;

&lt;p&gt;Once you have logged into IBM Cloud, click on &lt;strong&gt;"Create Resource"&lt;/strong&gt; and search for &lt;code&gt;"object storage"&lt;/code&gt;. Give your instance a name (I chose &lt;code&gt;"couch-fortnite-object-storage"&lt;/code&gt;) and select the &lt;strong&gt;Lite&lt;/strong&gt; plan (the Lite plan is free and allows up to 25GB of storage). Once the service is created, we need to create credentials so that we can use our object storage to store both our test data and model files. Click &lt;strong&gt;New Credential&lt;/strong&gt; and make sure that the &lt;strong&gt;role&lt;/strong&gt; is set to &lt;strong&gt;&lt;em&gt;Writer&lt;/em&gt;&lt;/strong&gt; and the option to &lt;strong&gt;&lt;em&gt;include HMAC credential&lt;/em&gt;&lt;/strong&gt; is checked.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Faovjonxlib2ye6o2557d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Faovjonxlib2ye6o2557d.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once created, click &lt;strong&gt;View Credential&lt;/strong&gt;. You will see a JSON output of your credential; we need a few elements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;apikey&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;cos_hmac_keys&lt;/code&gt; &lt;strong&gt;––&amp;gt;&lt;/strong&gt; &lt;code&gt;access_key_id&lt;/code&gt; and &lt;code&gt;secret_access_key&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;&lt;code&gt;resource_instance_id&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You can either keep this open in a second tab, or save the JSON in a text file to use in a few moments.&lt;/p&gt;

&lt;h2&gt;
  
  
  Watson Machine Learning
&lt;/h2&gt;

&lt;p&gt;The last service we need to create is Watson Machine Learning. Follow the same steps as above, searching for &lt;code&gt;"watson machine learning"&lt;/code&gt;; give it a name and select the &lt;strong&gt;lite&lt;/strong&gt; plan. We need to create credentials for this service as well. Click &lt;strong&gt;New Credential&lt;/strong&gt; and again make sure to select &lt;strong&gt;&lt;em&gt;Writer&lt;/em&gt;&lt;/strong&gt; as the role. Then click &lt;strong&gt;View Credentials&lt;/strong&gt; and make a note of the elements that we will need later:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;instance_id&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;password&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;url&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;username&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  Preparing the data
&lt;/h1&gt;

&lt;p&gt;The goal here will be to train a model that can both identify and track an object in videos of Fortnite gameplay. We'll use a tool called Cloud Annotations to simplify this process. Navigate to &lt;a href="https://cloud.annotations.ai/" rel="noopener noreferrer"&gt;cloud.annotations.ai&lt;/a&gt; – we'll use our &lt;strong&gt;object storage&lt;/strong&gt; credentials to log in. Enter your &lt;code&gt;resource_instance_id&lt;/code&gt; and &lt;code&gt;apikey&lt;/code&gt;, and select &lt;strong&gt;US&lt;/strong&gt; as the region.&lt;/p&gt;

&lt;p&gt;Once logged in, the first thing we need to do is click &lt;strong&gt;Create bucket&lt;/strong&gt; and give it a name. Next, select &lt;strong&gt;&lt;em&gt;localization&lt;/em&gt;&lt;/strong&gt; – this will allow us to label objects in a photo by drawing bounding boxes.&lt;br&gt;
&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fc6lp2vlntz2zs4ki016r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fc6lp2vlntz2zs4ki016r.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Next, click &lt;strong&gt;add media&lt;/strong&gt; and select your Fortnite video (video files will be split into individual frames). Now, click &lt;strong&gt;add label&lt;/strong&gt;; let's name it &lt;strong&gt;&lt;em&gt;baller&lt;/em&gt;&lt;/strong&gt; so that we can label ballers in the video (the Fortnite vehicle).&lt;/p&gt;

&lt;p&gt;Now we can go through our images, drawing boxes around each baller that we see. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F6wdwlmipy2g3p95flpe7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2F6wdwlmipy2g3p95flpe7.png"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You may label as many or as few images as you would like. Only images that have labels will be used in training.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;As a general note about machine learning, and Fortnite specifically, you should use training data that incorporates the full spectrum of environments that you anticipate encountering in testing and general use of your model. What I mean specifically in Fortnite is that there are many different environments you can encounter (cityscapes, trees, snow, lava, etc.), and you should try to incorporate at least a few different environments in your training data to build the best possible model.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Training
&lt;/h1&gt;

&lt;p&gt;We will be using a CLI tool to interface with our labeled training images and train/download our model. The CLI requires Node 10.13.0 or later.&lt;/p&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;npm install -g cloud-annotations&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Once installed, you will have access to a command, &lt;code&gt;cacli&lt;/code&gt; in your terminal.&lt;/p&gt;

&lt;h2&gt;
  
  
  Training our model
&lt;/h2&gt;

&lt;p&gt;I suggest creating a new directory where we can run our training operations and eventually download our model (I created a directory called &lt;code&gt;fortnite-obj-detection&lt;/code&gt;). From that directory, run the command &lt;code&gt;cacli train&lt;/code&gt;. The first time you run this command, it will prompt you for credentials for both your &lt;strong&gt;Watson Machine Learning&lt;/strong&gt; and &lt;strong&gt;object storage&lt;/strong&gt; instances; this allows the tool to access the training data and then train our model.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;cacli&lt;/code&gt; will also ask about training parameters. We'll be using the &lt;code&gt;k80&lt;/code&gt; GPU, which is included in the lite plan of Watson Machine Learning. For the number of steps, I suggest using &lt;code&gt;20 * [number of training images]&lt;/code&gt; as a general rule of thumb.&lt;/p&gt;
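&lt;p&gt;That rule of thumb is simple enough to express in code (a quick sketch of my heuristic, not part of the CLI):&lt;/p&gt;

```python
def suggested_steps(num_training_images, steps_per_image=20):
    # Rule of thumb from above: roughly 20 training steps per labeled image
    return steps_per_image * num_training_images
```

&lt;p&gt;So a run with 500 labeled frames would get about 10,000 steps.&lt;/p&gt;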

&lt;blockquote&gt;
&lt;p&gt;Once run, a configuration file will be created so in the future you can simply retrain the model with new data without providing the service credentials.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Once all of the parameters have been filled in, we're ready to train! The CLI tool should automatically initiate the training job and join the queue. It will provide a &lt;code&gt;model_id&lt;/code&gt;, which we will need to both monitor and download the model. The CLI will ask if you would like to monitor the job once it has started, but if you close the terminal or would like to monitor elsewhere, you can also run &lt;code&gt;cacli progress [model_id]&lt;/code&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Download the model
&lt;/h2&gt;

&lt;p&gt;Once the job has finished, we're ready to use the model, but first we need to download it! Simply run &lt;code&gt;cacli download [model_id]&lt;/code&gt; and it will retrieve the trained model from our object storage bucket and download it locally. The tool will download three versions of the model, ready to deploy in various environments. We will be using the &lt;strong&gt;model_web&lt;/strong&gt; version, which is ready to use in a TensorFlow.js application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Using the model
&lt;/h2&gt;

&lt;p&gt;A &lt;a href="https://github.com/cloud-annotations/standalone-video-react" rel="noopener noreferrer"&gt;standalone React app is available&lt;/a&gt; for you to clone and use! Once you've cloned the repository, copy the &lt;strong&gt;model_web&lt;/strong&gt; directory (the whole directory) into the &lt;strong&gt;public&lt;/strong&gt; directory of the React app. Then add a video named &lt;strong&gt;&lt;em&gt;video.mov&lt;/em&gt;&lt;/strong&gt; to the &lt;strong&gt;public&lt;/strong&gt; directory. Finally, run the app! If all goes well, the video will play and display bounding boxes around the objects that it has identified.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fyflq3gylsbgtfy4l0dwi.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fthepracticaldev.s3.amazonaws.com%2Fi%2Fyflq3gylsbgtfy4l0dwi.gif"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Hopefully this is a great starting point for you to build your own object detection models! Like I said before, I think that video games create a great environment for developing general-purpose machine learning models.&lt;/p&gt;

</description>
      <category>gamedev</category>
      <category>machinelearning</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Using Video Games to Improve Machine Learning (2023)</title>
      <dc:creator>S🅰️Ⓜ️ 🛋</dc:creator>
      <pubDate>Fri, 19 Jul 2019 17:49:09 +0000</pubDate>
      <link>https://dev.to/couch/using-video-games-to-improve-machine-learning-2lk0</link>
      <guid>https://dev.to/couch/using-video-games-to-improve-machine-learning-2lk0</guid>
      <description>&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s---hUiDQfi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/2i8dwev3q2i0dxd2s3cy.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s---hUiDQfi--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_66%2Cw_800/https://thepracticaldev.s3.amazonaws.com/i/2i8dwev3q2i0dxd2s3cy.gif" alt="" width="764" height="464"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One application for machine learning that I've been excited about is gaming. It's useful not only for game developers but for many other applications as well! One area of interest to me is the possibility of using video games to simulate real-world challenges and create solutions that can then be implemented outside of the video game. Here I will walk you through how I built a custom object detection model trained for Fortnite.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are we building a model for?
&lt;/h2&gt;

&lt;p&gt;In machine learning, it's best to start by deciding what question you want to answer. In my case, I decided to build a model that could track the unique vehicle in Fortnite called The Baller. The goal will be to identify and track when a baller is in a player's field of view, including when a player is using the vehicle.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's the point?
&lt;/h2&gt;

&lt;p&gt;The primary thesis of this exercise is to demonstrate how we can use simulated environments to solve real-world, general-purpose AI problems. Rather than having to gather real-world data, we can capture video game data as a testbed to build, test, and refine AI systems. In the case of Fortnite, and specifically the model demonstrated here, imagine being able to create the framework for object avoidance, decision optimization, or planning the best route to travel. The possibilities are endless!&lt;/p&gt;

&lt;p&gt;Hopefully, this post spurs your imagination of what's possible using video games. In the next post, I'll share a guide on how to replicate the model above to detect and track ballers.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>games</category>
    </item>
    <item>
      <title>Analyzing League of Legends Data with R (2023)</title>
      <dc:creator>S🅰️Ⓜ️ 🛋</dc:creator>
      <pubDate>Fri, 28 Jun 2019 15:59:35 +0000</pubDate>
      <link>https://dev.to/couch/analyzing-league-of-legends-data-with-r-2kcc</link>
      <guid>https://dev.to/couch/analyzing-league-of-legends-data-with-r-2kcc</guid>
      <description>&lt;h2&gt;
  
  
  Sourcing the data
&lt;/h2&gt;

&lt;p&gt;If you want to analyze all gameplay from public matches, Riot has a great &lt;a href="https://developer.riotgames.com/" rel="noopener noreferrer"&gt;API&lt;/a&gt;. For me, I like to look at the pros in competitive play; an excellent resource is &lt;a href="http://oracleselixir.com/" rel="noopener noreferrer"&gt;Oracle's Elixir&lt;/a&gt;. Specifically, I use the match data files to start all of my research.&lt;/p&gt;

&lt;h2&gt;
  
  
  The R Part
&lt;/h2&gt;

&lt;p&gt;As I said, we'll be using the match data file provided by &lt;a href="http://oracleselixir.com/match-data/" rel="noopener noreferrer"&gt;Oracle's Elixir&lt;/a&gt;; specifically, the Summer 2019 file. If you're not familiar with R yet, that's OK! We'll keep it simple today. All you need to get started is &lt;a href="https://www.rstudio.com/products/rstudio/download/" rel="noopener noreferrer"&gt;R Studio&lt;/a&gt;. There are two libraries we'll be using: &lt;code&gt;Tidyverse&lt;/code&gt; (a suite of several libraries for data manipulation) and &lt;code&gt;openxlsx&lt;/code&gt; (you may have noticed the file format is an Excel document; this library lets us open it with ease). With these two libraries, we can get started!&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;With one line of code, we're ready to rock! Once you run that, we'll have the data loaded in, but before we can start an analysis, we need to clean it up a bit. For example, R thinks the &lt;code&gt;patchno&lt;/code&gt; is a number (11.1, for example), but we really should treat it as a string; the &lt;code&gt;gamelength&lt;/code&gt; we should use as a double; and the &lt;code&gt;date&lt;/code&gt; is in an Excel-specific format, so we should convert it to a usable date. So let's take care of these things:&lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;You'll notice that the date cleaning looks a little more involved than the others. The reason is that we first need to represent the column as a number, and then convert that number to a date (the trick here is that you need to provide the origin parameter, which in Excel is defined as December 30, 1899... I don't know why, but thanks, Google!).&lt;/p&gt;
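&lt;p&gt;For anyone curious what the origin trick is actually doing, here's the same conversion sketched in Python terms (illustration only, not part of the R script):&lt;/p&gt;

```python
from datetime import date, timedelta

# Excel's day-zero origin: serial N means N days after December 30, 1899
EXCEL_EPOCH = date(1899, 12, 30)

def excel_serial_to_date(serial):
    """Convert an Excel serial day number into a calendar date."""
    return EXCEL_EPOCH + timedelta(days=int(serial))
```

&lt;p&gt;So a serial like 43644 resolves to June 28, 2019 – exactly what the R code does by passing that origin to the date conversion.&lt;/p&gt;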

&lt;h2&gt;
  
  
  Analysis
&lt;/h2&gt;

&lt;p&gt;Finally, let's do a simple analysis using the Tidyverse packages! &lt;/p&gt;


&lt;div class="ltag_gist-liquid-tag"&gt;
  
&lt;/div&gt;


&lt;p&gt;Once we have the data thoroughly cleaned and ready to rock, we first filter the data by region (in this case, the column called "league") to isolate the LCS (the North America region), and then keep only the "Team" rows (as opposed to the results for individual players, the "Team" rows represent aggregate performance for the entire team). Then we group by the actual teams. Finally, we create a summary for each team, telling it to create columns for each of the summaries that we want to see.&lt;/p&gt;
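&lt;p&gt;For readers who don't speak Tidyverse, the same filter–group–summarize pipeline can be sketched in plain Python (the column names follow the Oracle's Elixir file; the data values here are made up):&lt;/p&gt;

```python
# Tiny stand-in for the match data rows (hypothetical values)
rows = [
    {"league": "LCS", "position": "Team", "team": "TL", "gamelength": 30.0},
    {"league": "LCS", "position": "Team", "team": "TL", "gamelength": 40.0},
    {"league": "LEC", "position": "Team", "team": "G2", "gamelength": 35.0},
]

def team_summary(rows):
    """Filter to LCS team-level rows, then average game length per team."""
    grouped = {}
    for r in rows:
        if r["league"] == "LCS" and r["position"] == "Team":
            grouped.setdefault(r["team"], []).append(r["gamelength"])
    return {team: sum(v) / len(v) for team, v in grouped.items()}
```

&lt;p&gt;The shape is the same as the R version: filter, group, then one summary value per group.&lt;/p&gt;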

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtz062st89508z11ydsc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgtz062st89508z11ydsc.png" width="800" height="373"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In the future, I'll be sharing how to do a more in-depth analysis, but hopefully this is a great place for you to get started. Let me know if you have any questions or ideas for types of analysis to perform!&lt;/p&gt;

</description>
      <category>esports</category>
      <category>data</category>
      <category>r</category>
      <category>gaming</category>
    </item>
  </channel>
</rss>
