DEV Community

Evan Lin

Posted on • Originally published at evanlin.com on

Today I Learned: AI Safety and the Age of Dislightenment

---
title: [TIL] AI Safety and the Age of Dislightenment
published: false
date: 2023-07-11 00:00:00 UTC
tags: 
canonical_url: http://www.evanlin.com/dislightenment-note/
---

![image-20230713095738135](http://www.evanlin.com/images/2022/image-20230713095738135.png)

I saw this post from fast.ai: [AI Safety and the Age of Dislightenment](https://www.fast.ai/posts/2023-11-07-dislightenment.html#first-do-no-harm)

# Summary:

Strict licensing and surveillance requirements for AI models may be ineffective or even counterproductive. Striking a balance between defending society and enabling society to defend itself is delicate. The article argues for openness, humility, and broad consultation, so that our responses stay aligned with our principles and values and keep evolving as we learn more about what this technology can do.

# Thoughts:

The paper discussed in the post is FAR (Frontier AI Regulation: Managing Emerging Risks to Public Safety): [https://arxiv.org/abs/2307.03718](https://arxiv.org/abs/2307.03718)

1. Tightening regulation of general-purpose AI (GPAI) would mainly raise the barrier to entry, leaving fewer competitors.
2. Regulating usage, rather than development, would do more for public safety.
3. Open source increases diversity and reduces the risk of power concentrating in a few large companies.

Much of the current progress happened precisely because the field was open and lightly regulated, so we should be careful to avoid unnecessary restrictions driven by panic over a new technology. Reference: [The EU's attempt to regulate open source AI is counterproductive](https://www.brookings.edu/articles/the-eus-attempt-to-regulate-open-source-ai-is-counterproductive/)
