On September 17, Politico hosted its second summit on artificial intelligence policy. The significance of the topic was evident in the conference’s growth, from last year’s inaugural session, a three-hour affair with a modest audience, to a full-day event with a standing-room-only crowd at the Newseum. AI is a hot topic.
The day opened with Michael Kratsios, Chief Technology Officer at the White House Office of Science and Technology Policy, explaining the Trump Administration’s perspective on where we stand with the beginnings of a national AI policy. Noting Trump’s 2019 Executive Order and the Administration’s decision to name AI among its budget priorities, he sketched a high-level outline of policy ideas that forms an early roadmap for what may eventually be implemented. While other speakers throughout the day challenged the notion that a national policy exists today, politics aside, everyone acknowledged this is a serious issue that needs to be addressed proactively, before AI implementation in the commercial, federal, law enforcement and academic sectors proceeds much further.
Privacy and security are integral to the AI policy discussion. One interesting insight: DARPA, always on the leading edge, is now working on homomorphic encryption, which allows analysis to be run on large data sets without exposing the underlying data. The goal is strong protection of the growing volumes of data being fed into the ‘big data’ analysis on which AI depends. Data privacy was also discussed in terms of the need for a common agreement among nations about what acceptable standards should be. The beginnings exist in the OECD AI Principles adopted in Paris earlier this year. But since attitudes toward privacy, innovation and human rights vary across cultures, panelists noted that it will take some time for even a basic universal position to be adopted.
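For readers curious what “analysis without exposing the data” can look like, here is a minimal, purely illustrative Python sketch of additively homomorphic encryption using the classic Paillier scheme. It is a toy with tiny hard-coded primes, not a representation of DARPA’s actual work (which targets far more capable, fully homomorphic schemes); it only shows the core trick, that arithmetic performed on ciphertexts carries over to the hidden plaintexts.

```python
# Toy demonstration of additively homomorphic encryption (the Paillier scheme):
# an untrusted party can add encrypted numbers without ever decrypting them.
# The primes below are deliberately tiny for readability -- real deployments
# use keys thousands of bits long.
from math import gcd

p, q = 293, 433                    # demo primes only (insecure, for illustration)
n = p * q
n_sq = n * n
g = n + 1                          # a standard, simple choice of generator
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)

def L(u):
    """The Paillier 'L' function: L(u) = (u - 1) / n over the integers."""
    return (u - 1) // n

mu = pow(L(pow(g, lam, n_sq)), -1, n)          # modular inverse (Python 3.8+)

def encrypt(m, r):
    """Encrypt message m with randomness r coprime to n: c = g^m * r^n mod n^2."""
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c):
    """Recover the plaintext: m = L(c^lambda mod n^2) * mu mod n."""
    return (L(pow(c, lam, n_sq)) * mu) % n

c1 = encrypt(20, 17)
c2 = encrypt(22, 31)
c_sum = (c1 * c2) % n_sq           # multiplying ciphertexts adds the plaintexts
print(decrypt(c_sum))              # -> 42, computed without exposing 20 or 22
```

The appeal for AI is plain even in this toy: the party doing the arithmetic never sees the numbers it is combining, which is exactly the property policymakers want for the sensitive data feeding large-scale analysis.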
All speakers agreed that, at least for the time being, the U.S. enjoys “dominance” as the world leader in AI. Many comparisons to China were made, suggesting a kind of next-gen space race is on. Because the U.S. has the greatest number of scientists and technologists working on AI development, the prevailing sentiment of the day was that the country must keep up its current pace in order to retain first position.
Many other topics were addressed: the imperative of workforce retraining; public trust in AI technologies and the transparency that will be needed to sustain it; the applications of AI that currently hold the highest value (back-office functions involving repetition and standardization); the need for more government research investment; spectrum management, particularly with the imminent arrival of 5G; and much more.
There was an interesting discussion around the social impacts of AI as an emerging force. Diversity was emphasized, as panelists discussed the need for more women and minorities in the field to bring a fair cross-section of perspectives to algorithm development. Integrating diverse points of view will help avoid bias and ensure the economic opportunity emerging from AI is available to all.
Another compelling insight was that students, who come to the field as a kind of ‘clean slate’, show enormous creativity and imagination about what could be possible with AI. Just one example offered was the idea of using natural language processing technology to create an app that would help people with autism communicate with each other.
Perhaps the most intriguing part of the day was the ‘future of the future’ panel, in which several Ph.D. researchers discussed forward-looking AI applications that sound like science fiction but are already being explored. Lynne Parker, an expert in robot-human symbiosis, was the voice of calm, encouraging us not to adopt Terminator-style fears around AI but to view it as a practical tool that can help humans perform work better. Several panelists, however, discussed in realistic terms a likelihood that at least this writer finds uncomfortable: the physical connection of AI to humans through chips implanted in the brain. While brain prosthetics offer great promise for people with neural disabilities, other uses, such as accelerating cognitive performance, expanding the human capacity to create, and driving occupational productivity, are discomfiting propositions. As one panelist put it, “Do you really want Facebook inside of your head?”
Is speciation from merging humans with AI in our future? One panelist noted that the human brain has limits, and we’re running up against them. I have to wonder whether that’s such a bad thing. At the dawn of AI, we are certainly starting an unprecedented journey. Thoughtful policy will help us get it right. The time to act is now.