🦾 Against Safetyism • April 26, 2023
Byrne Hobart & Tobias Huber:
Over the past half-century, even leading AI researchers have consistently failed in their predictions of AGI timelines. So, instead of worrying about sci-fi paperclips or Terminator scenarios, we should be more concerned with, for example, all the diseases we won't cure and the scientific breakthroughs that won't materialize because we've prematurely banned AI research and development based on the improbable scenario of a sentient AI superintelligence annihilating humanity.
A great case for dynamism and progress as our best insulation against regulatory capture, stagnation risk, and exploitation.