
Debating the Government’s Role in Artificial Intelligence Innovation

Artificial Intelligence (AI) is poised to transform life as we know it. From healthcare delivery and transportation to marketing, customer service, and a plethora of other applications, AI will have a profound impact on society, particularly on the workforce. Today’s AI capabilities are largely limited to codifying common skills, but eventually what’s being called general AI will replace many traditional job roles, creating the need for new ones.

What the government’s role should be at this nascent but critical stage of AI development was the focus of Politico’s July 11 discussion with distinguished panelists from both sides of the political aisle. Kudos to event sponsor Intel for advancing this important conversation. Per Naveen Rao, Corporate Vice President and General Manager of Intel AI, “advances without accounting for societal effects are not sustainable.”

Among the panelists there was clear consensus on the need for strong public-private partnerships, as well as broad government policy that accounts for societal impacts with ethics in mind. Of course, there were differing opinions on how that should happen. A few highlights:

  • Walter Copan, Under Secretary of Commerce for Standards and Technology, and Director, National Institute of Standards and Technology (NIST), naturally advocated for standards that are open, transparent and focused on industry needs. He explained that NIST is already working on a common taxonomy to demystify AI and create a foundation of trust for human-machine interface. Noting AI’s strategic importance to our national competitiveness, he also called for a research agenda that would ensure coordination and synergy across the federal sector.
  • Rep. John Delaney (D-MD), Co-chair of the Congressional AI Caucus, also called for a national AI strategy, saying “the toothpaste is already out of the tube on how AI affects our lives.” Likening the situation to the post-Sputnik strategy the U.S. pursued during the Cold War, he wants the country aligned toward common goals and a shared vision for keeping the U.S. as competitive as possible, while protecting citizen privacy and keeping AI programming consistent with the values American citizens share. While national regulations will take time given political realities, he noted that “realistically, citizens are already pushing back at state levels”; he’s concerned that if we don’t act at the federal level, we’ll end up with a patchwork of diverse state laws, much like early cell phone coverage that varied state to state. That’s confusing for everyone.
  • Dean Garfield, President and CEO of the Information Technology Industry Council, an industry trade group, supported Congressman Delaney’s view that the private sector, government and academia should work together to make sure accountability mechanisms are built into development. But he noted that, given different cultural and business norms in China, Chinese AI developers (perhaps collaborating with the government) will go beyond the boundaries of what Americans would consider acceptable. Given that the two countries are evenly matched at this early stage, he encouraged the U.S. to quickly step up efforts toward a national strategy and standards that will sharpen our competitive industrial edge.
  • Echoing concerns about China, Rep. Will Hurd (R-TX) noted that the Chinese government can and does compel its private sector to work with the government, without the concern for privacy and civil liberties we see in the U.S., enabling China to move more quickly. In fact, he noted that last year’s top ten AI venture deals were all done in China. Still, he believes the U.S. has the “best and brightest minds” and should direct federal research funds to advance AI development, then use the resulting tools within the government as a test case.
  • Rashida Richardson, Director of Policy Research at the AI Now Institute at New York University, spoke to the civil liberties concerns that AI introduces, emphasizing that industry self-regulation won’t work. Unconscious bias can easily be programmed into AI (think profiling on insurance applications), so AI is a social as well as a technical issue, and systems must be developed from a fair, ethical, human-centric perspective. She passionately advocated for involving civil society in any discussion of a national AI policy and agenda: “decisions that impact people’s lives require accurate output from algorithms.”

For companies creating technologies in the AI space, now could be the best time to join this critical national conversation. There is a ground-floor opportunity to take a leadership role in shaping policy direction around this highly disruptive technology. That’s good for the country, and good for brands too.

For more information, refer to Intel’s Five Core AI Public Policy Principles.

Kathy Stershic, CIPM and CIPP-US, is a Senior Director of Content for W2 Communications.