How Do We Best Govern AI? — Microsoft's Blueprint for the Future (Foreword by Brad Smith)
Credibility Rating
High quality. Established institution or organization with editorial oversight and accountability.
Rating inherited from publication venue: Microsoft
Brad Smith's foreword to Microsoft's 'Governing AI: A Blueprint for the Future' report outlines policy recommendations for governments and Microsoft's internal responsible AI commitments, relevant to AI governance and deployment safety.
Metadata
Summary
Brad Smith introduces Microsoft's AI governance report, arguing that AI's transformative power requires both government regulation and corporate responsibility. The report proposes five policy areas for governments and details Microsoft's internal responsible AI practices, including nearly 350 dedicated staff. It frames AI governance as a generational challenge requiring decisive action.
Key Points
- Microsoft proposes five areas for government AI policy, laws, and regulations in its 'Governing AI: A Blueprint for the Future' report.
- Microsoft has ~350 people working on responsible AI, implementing ethical principles across engineering and governance systems.
- Brad Smith argues AI offers more potential for human good than any prior invention, but guardrails are essential.
- The report reflects lessons from Microsoft's responsible AI work since Satya Nadella's 2016 call to focus on values in AI development.
- AI governance is framed as requiring both broad public conversation and decisive, effective policy action from governments and companies.
Cached Content Preview
This post is the foreword written by Brad Smith for Microsoft’s report Governing AI: A Blueprint for the Future. The first part of the report details five ways governments should consider policies, laws, and regulations around AI. The second part focuses on Microsoft’s internal commitment to ethical AI, showing how the company is both operationalizing and building a culture of responsible AI.
“Don’t ask what computers can do, ask what they should do.”
That is the title of the chapter on AI and ethics in a book I co-authored in 2019. At the time, we wrote that “This may be one of the defining questions of our generation.” Four years later, the question has seized center stage not just in the world’s capitals, but around many dinner tables.
As people have used or heard about the power of OpenAI’s GPT-4 foundation model, they have often been surprised or even astounded. Many have been enthused or even excited. Some have been concerned or even frightened. What has become clear to almost everyone is something we noted four years ago – we are the first generation in the history of humanity to create machines that can make decisions that previously could only be made by people.
Countries around the world are asking common questions. How can we use this new technology to solve our problems? How do we avoid or manage new problems it might create? How do we control technology that is so powerful?
These questions call not only for broad and thoughtful conversation, but decisive and effective action. This paper offers some of our ideas and suggestions as a company.
These suggestions build on the lessons we’ve been learning from the work we’ve been doing for several years. Microsoft CEO Satya Nadella set us on a clear course when he wrote in 2016 that “Perhaps the most productive debate we can have isn’t one of good versus evil: The debate should be about the values instilled in the people and institutions creating this technology.”
Since that time, we’ve defined, published, and implemented ethical principles to guide our work. And we’ve built out constantly improving engineering and governance systems to put these principles into practice. Today, we have nearly 350 people working on responsible AI at Microsoft, helping us implement best practices for building safe, secure, and transparent AI systems designed to benefit society.
New opportunities to improve the human condition
The resulting advances in our approach have given us the capability and confidence to see ever-expanding ways for AI to improve people’s lives. We’ve seen AI help save individuals’ eyesight, make progress on new cures for cancer, generate new insights about proteins, and provide predictions to protect people from hazardous weather. Other innovations are fending off cyberattacks and helping to protect fundamental human rights, even in nations afflicted by foreign invasion or civil war.
Everyday activities will benefit as well. By acting as a copilot …
... (truncated, 18 KB total)