7 Top Questions Government Leaders Are Asking About AI
Reflections from Ernie Fernandez, Microsoft VP for U.S. State and Local Government, on the common questions he’s hearing from government leaders.
July 2024
As the Vice President at Microsoft covering U.S. State and Local Government, I am deeply involved in conversations about the role of AI in government.
Recently, at Smart City Expo USA in New York City, I had the opportunity to participate in a panel on Public Sector AI. While preparing for the panel, I began to think about the best ways we could answer some of the questions we’ve been hearing from government leaders at Microsoft.
I enlisted the help of Keith Bauer, the National Data/AI and Application Innovation Leader for Microsoft U.S. State and Local Government. Keith has traversed the country speaking with governors, executive cabinet members, and AI policy makers. Through these interactions, a consistent and meaningful dialogue has emerged.
Together, we have compiled this list of seven of the most common questions that we hear from government leaders about AI.
Question 1: What is Microsoft doing to support Responsible AI?
Microsoft has actively advocated for the safe and responsible use of AI since 2017. We initiated these efforts by launching the Aether committee, which brought together researchers, engineers, and policy experts to craft our AI principles. In 2018, we formally adopted these principles, and in 2019 we formed the Office of Responsible AI. Soon after, we released our Responsible AI Standard, which provides a framework for translating the AI principles into actionable guidance.
Most recently, we published our inaugural Responsible AI Transparency Report which outlines how Microsoft responsibly builds and releases generative AI applications, supports customers in developing their AI applications, and continually evolves its responsible AI program. We are committed to sharing our learnings early and often so everyone can contribute to the responsible use of AI.
Question 2: How are governments addressing data privacy, intellectual property, and security concerns when implementing AI?
Data privacy and security are paramount when adopting new technologies or evolving any part of a technology infrastructure. During our conversations with state and local governments, we are discussing how they currently address these critical AI implementation concerns through traditional measures, including data encryption, access control, compliance and certifications, data residency and sovereignty, and auditing and monitoring.
As governments integrate generative AI technologies, it is important to consider the following measures, which instill trust through transparency, protect privacy, and secure data in RAG (Retrieval Augmented Generation) architectures:
- Transparency Notes: These documents help users and developers understand how AI technologies work – including their capabilities, limitations, and choices that impact performance and behavior. This promotes informed use and deployment of AI systems and empowers governments with a resource to share with constituents.
- Privacy Protection:
- Copilot: Designed to protect sensitive information. When commercial data protection is enabled, Copilot doesn’t retain chat histories, prompts, or responses, and that data is not available for usage reporting or auditing.
- PII Detection: AI solutions include mechanisms to detect and protect Personally Identifiable Information (PII), ensuring sensitive data isn’t used in AI-generated responses or inputs.
- Intellectual Property Protection: We believe in standing behind our customers when they use our products and services. The Customer Copyright Commitment (previously the Copilot Copyright Commitment) is a provision in the Microsoft Product Terms that describes Microsoft’s obligation to defend customers against certain third-party intellectual property claims relating to Output Content. This commitment is part of our broader effort to ensure our State and Local Government customers can use generative AI technologies with confidence.
- Security Measures with Retrieval Augmented Generation: Governments can extend existing security best practices to AI solutions, such as leveraging authentication and user privileges to ensure data access is authorized. In an architecture that combines an LLM (Large Language Model) with RAG (Retrieval Augmented Generation), this approach limits the retrieved context, and therefore the generated response, to data the authenticated user is authorized to access. This ensures that sensitive data remains safe, secure, and available only to those with appropriate privileges.
These approaches ensure that AI implementations in government maintain high standards of data privacy and security, protecting both sensitive information and the integrity of AI systems.
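The security-trimmed retrieval pattern described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not Azure-specific: the document store, group tags, and function names are hypothetical, and a production system would enforce the same filter inside the search index itself, before relevance ranking.

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    allowed_groups: set  # security groups authorized to read this document

# Hypothetical document store: each record is tagged with the groups
# permitted to read it, mirroring the ACLs on the source data.
STORE = [
    Document("Public hours: permits office opens at 8am.", {"public"}),
    Document("Internal: pending enforcement cases list.", {"legal-staff"}),
]

def retrieve(query: str, user_groups: set) -> list[str]:
    """Security-trimmed retrieval: return only passages the
    authenticated user is entitled to see."""
    return [
        d.text for d in STORE
        if d.allowed_groups & user_groups and query.lower() in d.text.lower()
    ]

def build_prompt(query: str, user_groups: set) -> str:
    """Ground the LLM prompt only in authorized context, so the
    generated answer cannot leak data the user may not access."""
    context = "\n".join(retrieve(query, user_groups)) or "(no authorized context)"
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

A member of the public asking about permits receives context only from publicly readable records; the internal document is never placed in the prompt, so the model cannot reveal it.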
Question 3: How is Microsoft enabling governments to manage and mitigate risks – such as hallucinations or harmful content – with generative AI?
To ensure the safe implementation of AI technologies, we support public sector organizations with tools that uphold the highest standards of data privacy and security. Some of these tools include:
- Azure AI Content Safety: Azure AI Content Safety helps customers detect and filter harmful content in applications, with customizable settings to address content risks and severity levels
- System Message Frameworks: These templates assist customers in crafting effective system messages that align AI behavior with a set of expectations and mitigate risks
- Prompt Shield and Groundedness Detection: A response is “grounded” when it is supported by the source data provided to the AI system, rather than invented by the model. Prompt Shield and Groundedness Detection address generative AI risks by blocking prompt injection attacks and flagging ungrounded AI-generated statements
- Risks & Safety Monitoring: Introduced in March 2024 within Azure OpenAI Service, this feature provides real-time harmful content detection to help public sector customers adjust configurations that serve their specific business needs, safety measures, and responsible AI principles
As we improve our own tools at Microsoft to map, measure, and manage generative AI risks, we make those tools available to our customers to enable an ecosystem of responsible AI development and deployment.
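The configurable severity filtering these tools perform can be illustrated with a simplified sketch. The category names, scores, and thresholds below are hypothetical stand-ins: a real content-safety service classifies text with trained models and returns per-category severity levels, which the application compares against its configured limits.

```python
# Hypothetical per-category severity scores (0 = safe, 7 = most severe),
# standing in for what a moderation model would return for a given text.
def classify(text: str) -> dict[str, int]:
    # Toy keyword heuristic for illustration only; the real service
    # uses machine-learning classifiers, not string matching.
    return {
        "hate": 6 if "hateful-example" in text else 0,
        "violence": 0,
        "self_harm": 0,
    }

# Agency-configured thresholds: block content at or above these levels.
THRESHOLDS = {"hate": 2, "violence": 4, "self_harm": 2}

def is_blocked(text: str) -> bool:
    """Compare model severity scores against configured thresholds,
    mirroring how a content-safety filter gates harmful content."""
    scores = classify(text)
    return any(scores[cat] >= limit for cat, limit in THRESHOLDS.items())
```

The design point is that thresholds live in configuration, not code, so each agency can tune how strict the filter is for its own risk tolerance and responsible AI policies.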
Question 4: What impact will AI have on the world’s carbon footprint, and what is Microsoft doing about it?
Microsoft is committed to reducing the environmental impact of AI operations, reflecting our broader commitment to a sustainable future. We’re determined to tackle this challenge so the world can harness the full benefits of AI. In addition to our ambitious sustainability commitments, which include our aim to be carbon negative by 2030, there are three areas where we’re deeply invested and increasing our focus.
- Optimizing datacenter energy and water efficiency:
- Energy management: The energy intensity of advanced cloud and AI services has driven us to accelerate our efforts to drive energy reductions. In addition, we have expanded our support to increase the availability of renewable energy, both for our own operations and for the communities in which we operate. We’re focused on the path to 100% zero-carbon electricity in the way we design, build and operate our datacenters.
- Water intensity: We take a holistic approach to water reduction across our business, looking for immediate opportunities through operational usage in the short term and, in the longer term, through design innovation to reduce, recycle and repurpose water.
- Advancing low carbon materials: Innovations in green steel and lower-carbon cement are rapidly emerging; however, these markets are still nascent and need significant investment to scale up and bring supply online. With our $1 billion Climate Innovation Fund, we’re investing to increase the development and deployment of new climate innovations, especially for underfunded sectors and supply-constrained markets like lower-carbon building materials.
- Improving energy efficiency of AI and cloud services: As a founding member of the Green Software Foundation, we collaborate with other industry-leading organizations to help grow the field of green software engineering, contribute to standards for the industry and work together to reduce the carbon emissions of software. Across our cloud services, we’re working to ensure IT professionals have the information they need to better understand and reduce the carbon emissions associated with their cloud usage.
Recognizing AI as a critical tool for environmental sustainability, Microsoft released the Playbook for Accelerating Sustainability with AI to support global efforts to develop and deploy sustainability solutions more efficiently.
Climate change is one of the defining issues of our generation – as such, sustainability is core to everything we do. Read the Microsoft Sustainability Report.
Question 5: OpenAI recently released a new model, GPT-4o. What is the ‘o’? And why is it such a big deal?
The ‘o’ in GPT-4o stands for ‘omni,’ highlighting the model’s ability to handle audio, text, and visual data. This feature represents a major leap forward in real-time AI interactions and applications.
Previously, developing an AI voice assistant required separate models for voice-to-text transcription, text-based reasoning, and text-to-voice synthesis. GPT-4o combines these functions into a single, efficient process that facilitates real-time, seamless interactions, mirroring human conversations by handling interruptions and providing instant clarification. This is especially beneficial for governments using generative AI chatbots to deliver constituent services.
The introduction of GPT-4o signifies a substantial advancement in AI technology, offering a groundbreaking interaction experience that will be increasingly valuable for governments in serving their constituents.
Question 6: How is AI being implemented in government operations? What are some impactful use cases?
AI is transforming government services, from enhancing call center operations to aiding policy development, improving efficiency and the delivery of public services. While thousands of use cases are surfacing across government, they fall into two general categories: Generative AI for Constituents and Generative AI for Employees.
- Generative AI for constituents: This provides a generative AI chat experience for the public using your city, county, or state’s data. Think of a chatbot that uses Microsoft’s Azure AI services to provide information in response to questions a constituent may have about a specific use case, such as how to start a business.
- Generative AI for employees: This is an enterprise generative AI experience designed to empower your workforce. Some examples include: call center and 311 assistance; license and permitting; training and simulation; automation (e.g. data redaction or triage); policy interpretation; analysis support; language translation; proposal or RFP content creation; personalized learning plans; resource allocation; shift turnover notes; emergency response plans; briefing documents; and FOIA Response Support.
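One of the employee scenarios listed above, automated data redaction, can be sketched with a simple pattern-based pass. This is a toy example with hypothetical patterns; production redaction (for example, model-based PII detection) relies on trained classifiers rather than regular expressions, which miss PII that doesn’t follow a fixed format.

```python
import re

# Hypothetical patterns for two common PII shapes in U.S. records.
PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}-\d{3}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII spans with labeled placeholders so a document
    can be shared (e.g., in a FOIA response) without exposing the data."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text
```

Keeping a labeled placeholder, rather than deleting the span outright, preserves an audit trail of what kind of information was removed and where.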
In one example, Axon’s ‘Draft One’ provides an innovative AI-based solution that streamlines police report writing by using audio transcriptions from body-worn cameras. This software leverages generative AI to create high-quality draft reports in seconds, saving officers around one hour per day and reducing paperwork time by up to 82% during trials. It requires human review and approval, ensuring accuracy and accountability.
We can see a lot of creativity among governments contributing to significant productivity gains, reduction in backlogs, and general improvement of services. With AI, the possibilities for government can truly be limitless.
Question 7: What challenges do state and local governments face when integrating AI into existing systems, and how are they overcoming these obstacles?
When integrating AI into existing systems, state and local governments may face several challenges.
- Technical: Integrating AI with outdated legacy systems is a significant hurdle. Governments are partnering with technology providers like Microsoft to modernize their IT infrastructure, utilizing cloud computing for scalable, flexible environments that support AI applications
- People: Ensuring the workforce is trained and skilled in AI tools is essential. Governments are addressing skilling by offering comprehensive, no-cost AI education programs, many of which are available through LinkedIn Learning
- Policy: Establishing robust AI guidelines and governance frameworks is crucial. Governments are collaborating with experts to share best practices and insights, such as those outlined in Microsoft’s 2024 publication “Global Governance: Goals and Lessons for AI,” to develop effective AI policies.
Governments that proactively address these challenges are better positioned to successfully integrate AI into their systems and enhance public services.
The takeaway:
We at Microsoft are continuing to learn and grow from these conversations and are committed to empowering government organizations through every step of AI adoption.
No matter where you are in your digital transformation journey, AI can act as a strategic asset to significantly enhance public service capabilities. At Microsoft, we are here to help answer your questions and be your committed partner as you take on the challenges and promises of AI.
About the Center of Expertise
Microsoft’s Public Sector Center of Expertise brings together thought leadership and research relating to digital transformation in the public sector. The Center of Expertise highlights the efforts and success stories of public servants around the globe, while fostering a community of decision makers with a variety of resources from podcasts and webinars to white papers and new research. Join us as we discover and share the learnings and achievements of public sector communities.
Questions or suggestions?