AI is a unique tool, a new general-purpose technology. And as with the steam engine, electricity, or the internet, seizing its potential will require public and private stakeholders to collaborate to bridge the gap from AI theory to productive practice. Together, we can transition from the “wow” of AI to the “how” of AI, so that everyone, everywhere can benefit from AI’s opportunities.
Seven principles for responsible regulation
Companies in democracies have thus far led advances in AI capabilities and fundamental AI research. But we need to continue to aim high, focusing on future AI advances, because while America leads in some AI fields, we’re behind in others.
To complement scientific innovation, we’d suggest seven principles as the foundation of bold and responsible AI regulation:
- Support responsible innovation. The Senate’s Bipartisan AI Working Group starts its roadmap with a call for increased spending on both AI innovation and safeguards against known risks. That makes sense, because the goals are complementary. Advances in technology actually increase safety, helping us build more resilient systems. While new technology involves uncertainty, we can still incorporate good practices that build trust and don’t slow beneficial innovation.
- Focus on outputs. Let’s promote AI systems that generate high-quality outputs, while preventing or mitigating harms. Focusing on specific outputs lets regulators intervene in a focused way, rather than trying to manage fast-evolving computer science and deep-learning techniques. That approach grounds new rules in real issues, and helps avoid overbroad regulations that could short-circuit broadly beneficial AI advances.
- Strike a sound copyright balance. While fair use, copyright exceptions, and similar rules governing publicly available data unlock scientific advances and the ability to learn from prior knowledge, website owners should be able to use machine-readable tools to opt out of having content on their sites used for AI training.
- Plug gaps in existing laws. If something is illegal without AI, then it’s illegal with AI. We don’t need duplicative laws or reinvented wheels; we need to identify and fill gaps where existing laws don’t adequately cover AI applications.
- Empower existing agencies. There’s no one-size-fits-all regulation for a general-purpose technology like AI, any more than we have a Department of Engines, or one law to cover all uses of electricity. We instead need to empower agencies and make every agency an AI agency.
- Adopt a hub-and-spoke model. A hub-and-spoke model establishes a center of technical expertise at an agency like NIST that can advance government understanding of AI and support sectoral agencies, recognizing that issues in banking will differ from issues in pharmaceuticals or transportation.
- Strive for alignment. We’ve already seen dozens of frameworks and proposals to govern AI around the world, including more than 600 bills in U.S. states alone. Advancing American innovation requires intervention at points of actual harm, not blanket restrictions on research. And given the national and international scope of these scientific advances, regulation should reflect truly national approaches, aligned with international standards wherever possible.
Looking down the road
AI is driving advances from the everyday to the extraordinary, from improving the tools you use every day — Google Search, Translate, Maps, Gmail, YouTube, and more — to changing the way we do science and tackle big societal challenges. Modern AI is not just a technological breakthrough, but a breakthrough in creating breakthroughs — a tool to make progress happen faster.
Think of Google DeepMind’s AlphaFold program, which has already predicted the 3D shapes of nearly all proteins known to science, and how they interact. Or using AI to forecast floods up to seven days in advance, providing life-saving alerts for 460 million people in 80 countries around the world. Or using AI to map the pathways of neurons in the human brain, revealing newly discovered structures and helping scientists understand fundamental processes such as thought, learning, and memory.
AI can drive more stunning breakthroughs like these — if we stay focused on its long-term potential.
That will take being consistent, thoughtful, and collaborative — and keeping our eyes on the benefits everyone stands to gain if we get it right.