LLMs read words; risk lies beyond them.

AI systems can sound competent about suicide—while missing the risk.

I help organizations identify where their AI systems mishandle suicide risk and high-risk decision states—especially the transition from thinking to action—so teams can reduce harm, improve safety, and avoid false confidence.

Request a consult

The Problem

Most AI systems assess suicide risk by analyzing what users say—keywords, sentiment, and explicit ideation. That approach is incomplete, and it fails in predictable ways:

  • Missing high-risk users who are not expressing distress directly
  • Overreacting to low-risk users
  • Providing responses that are misaligned with actual risk

The most dangerous moments are often the least visible in language—and they surface in your system’s real-world use, not in test prompts.


What I Do

I evaluate how AI systems handle high-risk psychological states—especially where your system may be misreading or missing them.

  • Decision-state transitions (thinking → action)
  • Collapse of perceived options
  • Temporal narrowing
  • Calm or resolved states that mask elevated risk

Services

Practical, safety-focused consulting to identify where your system mishandles suicide risk and high-risk decision states—especially the transition from thinking to action.

AI Risk Review

Identify where your system misses high-risk users.

  • Review of 25–50 interactions or scenarios
  • Identification of missed risk and false reassurance
  • Analysis of over- and under-escalation
  • Clear, actionable recommendations

Engagement: Fixed-scope review — 3–5-page report plus optional consult call

Failure Mode & Red Teaming

Find where your system breaks under real psychological conditions.

  • Scenario-based stress testing (high-risk, ambiguous, and edge-case behavior)
  • Identification of vulnerabilities and misclassification patterns
  • Examples of high-risk misses with clear explanations
  • Targeted recommendations to improve safety and performance

Engagement: Defined project scope tailored to your system

Safety Design & Advisory

Improve how your system handles risk.

  • Response strategy and alignment
  • Escalation logic calibration (when to act, when not to)
  • Reduction of false positives and false negatives
  • Design for real-world decision-state transitions

Engagement: Defined direction with flexible scope — project-based or ongoing advisory

AI Harm & Expert Review

Analyze what went wrong—and why it matters.

  • Transcript and system behavior analysis
  • Identification of missed or misinterpreted risk
  • Opinion on foreseeability and failure points
  • Consultation and testimony (as needed)

Engagement: Case-based or hourly consultation


How to choose

  • New or early system? Start with AI Risk Review
  • Preparing to launch or scale? Choose Failure Mode & Red Teaming
  • Already live and improving? Use Safety Design & Advisory
  • Something went wrong? Request AI Harm & Expert Review

Not sure where to start?

If you’re unsure, your system likely has blind spots worth examining. Let’s talk.


AI systems don’t fail because they lack information.

They fail because they misread what matters.

If your system is interacting with real people, this is worth getting right.

Request a consult