
The AI Dilemma

March 24, 2023

You may have heard about the arrival of GPT-4, OpenAI's latest large language model (LLM) release. GPT-4 surpasses its predecessor in reliability, creativity, and the ability to process intricate instructions. It can handle more nuanced prompts than previous releases, and it is multimodal, meaning it was trained on both images and text. We don't yet fully understand its capabilities, but it has already been deployed to the public.

At the Center for Humane Technology, we want to close the gap between what the world hears publicly about AI from splashy CEO presentations and what the people who are closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you're about to hear is the culmination of that work, which is ongoing.

AI may help us achieve major advances like curing cancer or addressing climate change. But the point we're making is this: if our dystopia is bad enough, it won't matter how good the utopia is that we want to create. We only get one shot, and we need to move at the speed of getting it right.

Episode Highlights

Major Takeaways

  • Half of AI researchers believe there's a 10% or greater chance that humans will go extinct from their inability to control AI. When we invent a new technology, we uncover a new class of responsibility. If that technology confers power, it will start a race, and if we don't coordinate, the race will end in tragedy.
  • Humanity's 'First Contact' moment with AI was social media, and humanity lost. We still haven't fixed the misalignment caused by broken business models that encourage maximum engagement. Large language models (LLMs) are humanity's 'Second Contact' moment, and we're poised to make the same mistakes.
  • Guardrails you may assume exist actually don't. AI companies are quickly deploying their work to the public instead of testing it safely over time. AI chatbots have been added to platforms children use, like Snapchat. Safety researchers are in short supply, and most of the research that's happening is driven by for-profit interests instead of academia.
  • The media hasn't been covering AI advances in a way that allows you to truly see what's at stake. We want to help the media better understand these issues. Cheating on your homework with AI or stealing copyrighted art for AI-generated images are just small examples of the systemic challenges that are ahead. Corporations are caught in an arms race to deploy their new technologies and gain market dominance as fast as possible. In turn, the narratives they present are shaped to be more about innovation and less about potential threats. We should put the onus on the makers of AI, rather than on citizens, to prove its danger.

Take Action

Share These Ideas