AI Policy in Focus: Key Learnings from NVIDIA GTC
As an AI policy advisor for the Ranking Member of the Health, Education, Labor and Pensions Committee in the U.S. Senate, staying ahead of the curve is crucial to ensuring that policies keep pace with rapidly evolving technologies and anticipate regulatory needs across diverse areas of application. That's why I attended NVIDIA GTC, the leading AI conference for developers, even though I'm not a developer. Here are a few key takeaways on the evolving landscape of AI, with a specific focus on policy considerations.
- Government Taking the Lead in Regulation and Broadening Access
Government interest in AI is on the rise and shows no signs of slowing. The conference featured representatives from federal agencies ranging from the Department of Defense to the Department of Transportation to the IRS! One of the most interesting sessions was a fireside chat with Kathi Vidal, Director of the United States Patent and Trademark Office (PTO), who provided fascinating insights into the intellectual property challenges we face in the age of generative AI, including copyright, trade secrets and patents. For example, recent PTO guidance does not require inventors to disclose the AI systems they used. However, an AI-assisted invention is eligible for patent coverage only if it reflects a 'significant' human contribution, and a person "who only presents a problem to an AI system" might not be eligible for a patent.
The government is already doing big things to set standards and create opportunities, among them:
- Funding research grants for AI development: The National Science Foundation (NSF) spearheaded the launch of the National AI Research Resource (NAIRR) pilot, a program offering researchers access to computational resources, data, software, models, training and user support. This initiative aims to bolster U.S. participation in AI research by connecting researchers with government-funded, industry-donated, and other resources traditionally only available to the highest-funded academic research labs and tech companies.
- Establishing national AI strategies: The AI Executive Order, released in October 2023, sets the stage for a national AI strategy that, while not a replacement for legislation, reflects the government’s AI priorities.
- Creating regulatory frameworks for AI use: These include the OMB AI memo, which directs agencies to advance AI governance and innovation while managing risks from AI use in the federal government, as well as the NIST AI Risk Management Framework and many other AI governance frameworks.
- AI's Transformative Impact on Healthcare
The conference emphasized the vast potential of AI in healthcare. A panel with executives from Oracle, Medtronic and other health tech companies discussed the promise of AI in diagnostics, drug discovery and personalized medicine. For instance, AI can analyze medical images (X-rays, MRIs) to assist doctors in diagnosing diseases earlier and recommending more effective treatments, which could significantly improve patient outcomes and reduce healthcare costs. Moreover, AI can analyze patient data to personalize treatment plans and predict potential health risks, while also creating opportunities to significantly lessen the administrative burden on healthcare workers. Imagine how much that could impact human longevity and vitality!
However, challenges like data privacy and ensuring fairness in algorithms need to be addressed to ensure responsible implementation. Data privacy concerns revolve around protecting sensitive patient information used to train and operate AI systems. There's a need for clear guidelines on data collection, retention, storage and usage to ensure responsible AI development in healthcare. Additionally, ensuring fairness in algorithms is critical to avoid perpetuating existing biases in the healthcare system.
- Data: The Fuel of AI Development
Data is the lifeblood of AI. The quality, quantity, and accessibility of data all play a vital role in training effective AI models. AI algorithms learn and improve based on the data they are trained on. More data allows AI to identify patterns, make predictions and perform tasks with greater accuracy.
Robust data governance and privacy frameworks are essential to ensure responsible AI development and build trust. A state-by-state approach to data privacy laws creates confusion for businesses operating across multiple states and hinders the creation of a consistent national framework. A unified approach would streamline compliance for companies, pave the way for effective AI regulations, and offer stronger data protection for individuals.
- The Challenge of Big Tech's Dominance
A stark reality is the disparity in resources between Big Tech companies and academia, non-profits and the public sector. These tech giants have immense financial resources, cutting-edge technology and deep talent pools at their disposal, which allows them to move quickly and aggressively in AI, potentially stifling competition and crowding out diverse perspectives.
Dr. Fei-Fei Li made the point that academia and non-profits are currently struggling to compete with Big Tech's lucrative salaries, extensive resources (including infrastructure) and fast-paced environment, making it difficult to attract top researchers. Public-private partnerships could leverage resources and expertise from both sectors to bridge this gap. Additionally, increased government funding for research grants and initiatives in AI could help level the playing field.
- The Importance of Diversity in All Aspects of AI
The conference highlighted the critical need for greater diversity in AI, across conferences, the workforce and policy development. A lack of diversity can not only lead to biased algorithms and limited perspectives, but also to missed opportunities for innovation.
For instance, AI systems trained on data sets lacking adequate diversity can perpetuate, even exacerbate, existing societal biases. Imagine an AI-powered facial recognition system trained primarily on data from one ethnicity – it might struggle to accurately recognize faces from other ethnicities, or misidentify individuals even within the group it was trained on, leading to grave consequences. Additionally, a homogenous field of AI developers may overlook important ethical considerations or miss out on innovative solutions that could arise from incorporating varied perspectives and experiences.
Implementing diversity and inclusion initiatives at conferences and workplaces, establishing mentorship programs connecting experienced professionals with aspiring individuals from underrepresented groups, and creating scholarships for underrepresented students pursuing careers in AI are crucial steps toward a more inclusive AI landscape.
In sum, the NVIDIA GTC conference emphasized the importance of collaboration for responsible AI. This includes working together on multi-stakeholder policy development, establishing best practices, and fostering public-private partnerships. Building a diverse AI workforce and ensuring transparency in AI decision-making are also crucial aspects of responsible AI development. Finally, continuous learning and adaptation are essential to keep pace with the rapidly evolving field of AI.
After attending, I am excited and eager to incorporate these takeaways into my work in AI policy, helping to develop responsible AI guidelines that balance innovation, safety and equity in any future legislation.