Akshyae Singh

Ideas

Things I want to see in the world. Some are half-formed. That's fine.

  • AI systems that can explain their own uncertainty
    Not just confidence scores, but something closer to a reasoned account of what a model doesn't know and why. Right now we have outputs; we barely have anything that resembles epistemic honesty.
  • Safety research that takes capability seriously
    Most alignment work assumes a system that's already powerful. I want to see more work done earlier in the stack — understanding how dangerous capabilities emerge before they show up in deployed systems.
  • A clearer shared vocabulary for AI risk
    Researchers, policymakers, and the public are all talking about the same thing in completely different language. The confusion isn't accidental, but it isn't permanent either. Better ontologies for this problem would help enormously.
  • Institutions that move at the speed of the technology
    Every serious governance effort I've seen operates on a timescale that assumes we have more time than we probably do. I'd like to see a new kind of institution — small, fast, technically credible — that can actually keep up.
  • More researchers who can write
    The people who understand this problem best are often the worst at explaining it to the people who need to understand it. That gap matters. Writing is a form of thinking and a form of influence — it shouldn't be treated as optional.