<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jacky REVET</title>
    <description>The latest articles on DEV Community by Jacky REVET (@misterstach).</description>
    <link>https://dev.to/misterstach</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1236070%2F90113620-2ea0-49b5-9fd7-cb1956e417bc.png</url>
      <title>DEV Community: Jacky REVET</title>
      <link>https://dev.to/misterstach</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/misterstach"/>
    <language>en</language>
    <item>
      <title>Cracking GPT Assistants: Extracting Prompts and Associated Files</title>
      <dc:creator>Jacky REVET</dc:creator>
      <pubDate>Sun, 19 May 2024 17:15:32 +0000</pubDate>
      <link>https://dev.to/worldlinetech/cracking-gpt-assistants-extracting-prompts-and-associated-files-2bp9</link>
      <guid>https://dev.to/worldlinetech/cracking-gpt-assistants-extracting-prompts-and-associated-files-2bp9</guid>
      <description>&lt;p&gt;This article is the first in a series on the security of GPT assistants.&lt;/p&gt;

&lt;p&gt;In today's digital age, artificial intelligence (AI) has become an integral part of our daily lives, with GPT (Generative Pre-trained Transformer) assistants at the forefront of revolutionizing our interaction with technology. However, as with any rapidly evolving technology, security remains a major concern. Recent studies and practical demonstrations have revealed a troubling vulnerability: it is surprisingly easy to hack GPT assistants, allowing malicious actors to retrieve the prompts and associated files of these systems.&lt;/p&gt;

&lt;p&gt;Here we will interact with an assistant that is well known to musicians:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flg754sj8oeb1v219qpdq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flg754sj8oeb1v219qpdq.PNG" alt="Image description" width="800" height="241"&gt;&lt;/a&gt;&lt;br&gt;
Here is the malicious prompt (illustrative equivalents of the prompts used in this walkthrough are sketched below):&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh5pfrddd62xumhzjh0b.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feh5pfrddd62xumhzjh0b.PNG" alt="Image description" width="800" height="81"&gt;&lt;/a&gt;&lt;br&gt;
And the magic happens: we retrieve the assistant's prompt.&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt7xroxz606xd9ij2sz1.JPG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvt7xroxz606xd9ij2sz1.JPG" alt="Image description" width="622" height="574"&gt;&lt;/a&gt;&lt;br&gt;
We observe that external files are being used. Here is the malicious prompt to retrieve the assistant's file list:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud6pss0dkqv2v43gkdpp.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fud6pss0dkqv2v43gkdpp.PNG" alt="Image description" width="800" height="748"&gt;&lt;/a&gt;&lt;br&gt;
Here is the final malicious command to download the files:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh066l0zl1n62z2d34a6.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Frh066l0zl1n62z2d34a6.PNG" alt="Image description" width="527" height="789"&gt;&lt;/a&gt;&lt;br&gt;
We have successfully retrieved the files; here, for example, is the README:&lt;br&gt;
&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9t38m0zh6mv8hs1ytua.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fq9t38m0zh6mv8hs1ytua.PNG" alt="Image description" width="800" height="550"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;In the next article, we will look at ways to prevent these leaks!&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Warning: this article is for educational purposes only and must not be used with malicious intent.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>hacking</category>
    </item>
  </channel>
</rss>
