
# safety

Discussions on childproofing, online safety, and keeping kids safe.

## Posts

- How Digital HSEQ Systems Are Making Ships Safer (And Why Developers Should Care) · 4 min read
- Non-decision-making AI governance with internal audit and stop conditions · 1 min read
- When High-Pressure Testing Becomes a Safety Engineering Problem · 1 reaction · 2 min read
- Why Your AI Needs Both Intuition and Rules · 3 min read
- A Formal Verification of the XRP Ledger · 1 reaction · 6 min read
- Institutional audit of a non-decision AI framework (27-document corpus) · 1 min read
- Non-decision AI: stop conditions as a first-class control surface · 1 min read
- AI Safety Isn’t About Better Answers. It’s About Knowing When to Stop. · 1 min read
- If AI Doesn’t Produce Measurable Improvement, It Should Stay Silent · 1 min read
- If AI Doesn’t Improve Anything, It Should Stop Talking · 1 min read
- If AI Doesn’t Improve Anything, It Should Stay Silent · 1 min read
- If AI Doesn’t Produce Measurable Improvement, It Should Stay Silent · 1 min read
- If AI Doesn’t Improve Anything, It Should Stop Talking · 1 min read
- DELTΔX: A non-decision AI governance framework with explicit stop conditions · 2 comments · 1 min read
- Between Safety and Value: Defining 'Correctness' Through Nine Years of Journey · 8 min read
- LLMs + Tool Calls: Clever But Cursed · 7 reactions · 2 min read
- Hallucinating Help · 1 reaction · 9 min read
- Safety vs Security in Software: A Practical Guide for Engineers and Infrastructure Teams · 9 min read
- Autonomous Vehicle Reality Check: Smarter AI Through Self-Verification · 2 min read
- LLM Context Window Stress Testing: Reliability Under Load · 9 reactions · 1 comment · 4 min read
- 🐧 Hardening Linux: a practical guide to working securely · 2 reactions · 2 min read
- AI Chatbot Developers: What's the "Other Safety" We Should Be Thinking About Now? User Protection. · 5 reactions · 24 min read
- Trust & Transparency: Why we updated our review system at mobile.de · 2 min read
- Google Shibuya - AI Safety: how do you control what’s smarter than you? · 1 min read
- Safer sandboxing in Rails · 2 min read