Once a backwater filled with speculation, artificial intelligence is now a burning, “hair on fire” conflagration of hopes and fears about a revolutionary technological transformation. A profound uncertainty surrounds these intelligent systems—which already surpass human capabilities in some domains—and their regulation. Only by making the right choices about how to protect or control the technology will hopes about the benefits of AI—for science, medicine and better lives overall—win out over persistent apocalyptic fears.
The public introduction of AI chatbots such as OpenAI’s ChatGPT over the past year has led to outsize warnings. They range from one given by Senate Majority Leader Chuck Schumer of New York State, who said AI will “usher in dramatic changes to the workplace, the classroom, our living rooms—to virtually every corner of life,” to one from Russian president Vladimir Putin, who said, “Whoever becomes the leader in this sphere will become the ruler of the world.” Industry leaders have added their own warnings of dire consequences from unconstrained AI.
Legislative efforts to address these issues have already begun. On June 14 the European Parliament voted to approve a new Artificial Intelligence Act, after adopting 771 amendments to a 69-page proposal by the European Commission. The act requires “generative” AI systems like ChatGPT to implement a number of safeguards and disclosures, such as on the use of a system that “deploys subliminal techniques beyond a person’s consciousness” or “exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability,” as well as to avoid “foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law.”
A pressing question worldwide is whether the data used to train AI systems requires consent from authors or performers, who are also seeking attribution and compensation for the use of their works.
Several governments have created special text and data mining exceptions to copyright law to make it easier to collect and use information for training AI. These exceptions allow some systems to train on online texts, images and other works owned by other people. They have recently met with opposition, particularly from copyright owners and from critics with more general objections who want to slow or degrade the services. They add to the controversies raised by an explosion of reporting in recent months on the technology’s potential to pose threats of bias, social manipulation, losses of income and employment, disinformation and fraud, including catastrophic predictions about “the end of the human race.”