The United States and the European Union are separated by thousands of miles of the Atlantic Ocean, and the gap between their approaches to regulating AI is just as wide. Both regulatory landscapes remain in flux, with the latest change on the U.S. side set to roll out today, about seven weeks after a major move in the EU.
The stakes are high on both sides of the Atlantic, with repercussions for practices as disparate as determining prison sentences and deciding who gets hired.
The European Union’s Artificial Intelligence Act (AIA), which was approved by the Council of the EU on Dec. 6 and is set to be considered by the European Parliament as early as March, would regulate AI applications, products and services under a risk-based hierarchy: The higher the risk, the stricter the rule.
If passed, the EU’s AIA would be the world’s first horizontal—across all sectors and applications—regulation of AI.
In contrast, the U.S. has no federal law specifically regulating the use of AI. It relies instead on existing laws, blueprints, frameworks, standards and regulations that can be stitched together to guide the ethical use of AI. While business and government can be guided by these frameworks, they are voluntary and offer no protection to consumers who are wronged when AI is used against them.
Adding to the patchwork of federal actions, state and local governments are enacting laws of their own: New York City and the state of California are addressing AI bias in hiring, and Colorado has passed a law covering AI in insurance. No proposed or enacted state or local law addressing the use of AI in jail or prison sentencing has been reported. However, in 2016, a Wisconsin man, Eric Loomis, unsuccessfully sued the state over a six-year prison sentence that was based, in part, on AI software, according to a report in The New York Times. Loomis contended that his due process rights were violated because he could not inspect or challenge the software’s algorithm.
“I would say we still need the foundation from the federal government,” Haniyeh Mahmoudian, global AI ethicist at DataRobot, told EE Times. “Things around privacy that pretty much every person in the United States is entitled to, that is something that the federal government should take care of.”
The latest national guideline is expected to be released today by the National Institute of Standards and Technology (NIST).
NIST’s voluntary framework is designed to help U.S.-based organizations manage AI risks that may impact individuals, organizations and society in the U.S. The framework does this by incorporating trustworthiness considerations, such as explainability and mitigation of harmful bias, into AI products, services and systems.
“In the short term, what we want to do is to cultivate trust,” said Elham Tabassi, chief of staff in the Information Technology Laboratory at NIST. “And we do that by understanding and managing the risk of AI systems so that it can help to preserve civil liberties and rights and enhance safety [while] at the same time provide and create opportunities for innovation.”
Longer term, “we talk about the framework as equipping AI teams, whether they are primarily people designing, developing or deploying AI, to think about AI from a perspective that takes into consideration risks and impacts,” said Reva Schwartz, a research scientist in NIST’s IT lab.
Prior to the release of NIST’s framework, the White House under President Joe Biden issued its “Blueprint for an AI Bill of Rights” in October. It lays out five principles to guide the ethical use of AI:
- Systems should be safe and effective.
- Algorithms and systems should not discriminate.
- People should be protected from abusive data practices and have control over how their data is used.
- Automated systems should be transparent.
- Opting out of an AI system in favor of human intervention should be an option.
Biden’s regulation-lite approach seems to follow the light regulatory touch favored by his immediate predecessor.
Don’t wait for legislation
There’s no AI law in the U.S. because the technology is changing so fast that lawmakers cannot pin it down long enough to write legislation, Danny Tobey, a partner at the law firm DLA Piper, told EE Times.
“Everyone’s putting out frameworks, but very few are putting out hard and fast rules you can plan around,” he said. “We were promised an AI Bill of Rights, yet we got a ‘Blueprint for an AI Bill of Rights’ with no legal force.”
Tobey sees regulatory proposals globally cohering around third-party audits and impact assessments to test AI-based applications for safety, non-discrimination and other key aspects of ethical AI. These are tools companies can already use, he said.
“The solution is for companies to begin testing AI technologies for these expected criteria even before the legislation is final, to aim to build future-proof, compliant AI systems anticipating coming regulations,” he said.
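Such testing can start small. The sketch below shows, in Python, one kind of check an internal audit might run: a “four-fifths”-style disparate-impact ratio across groups of model decisions. The column names, sample data and 0.8 threshold are assumptions for illustration only, not requirements drawn from any particular law or framework.

```python
# Illustrative sketch only: a minimal disparate-impact check a team might run
# as part of an internal AI audit. The 0.8 ("four-fifths") threshold, group
# labels and sample outcomes are assumptions for illustration.

def selection_rate(outcomes):
    """Fraction of positive outcomes (e.g., loans approved, candidates hired)."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def disparate_impact_ratio(outcomes_by_group, reference_group):
    """Ratio of each group's selection rate to the reference group's rate."""
    ref_rate = selection_rate(outcomes_by_group[reference_group])
    return {
        group: (selection_rate(outcomes) / ref_rate if ref_rate else float("nan"))
        for group, outcomes in outcomes_by_group.items()
    }

# Hypothetical model decisions grouped by a protected attribute (1 = approved).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],
}

ratios = disparate_impact_ratio(decisions, reference_group="group_a")
for group, ratio in ratios.items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: ratio={ratio:.2f} -> {flag}")
```

Running the sketch prints a ratio of 0.50 for group_b, which falls under the illustrative 0.8 threshold and would prompt a closer look in this kind of self-audit.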
At least one company in the EU aligns with Tobey’s thinking: NXP Semiconductors, based in the Netherlands, has developed its own AI ethics initiative.
Does U.S. need a specific AI law?
Could it be that the U.S. already has the laws it needs to protect the public from the unethical use of AI?
In September, Gary Gensler, chair of the U.S. Securities and Exchange Commission, addressed this question at the MIT AI Policy Forum Summit to Explore Policy Challenges Surrounding AI Deployment.
“Through our legislative bodies, we’ve adopted laws to protect the public,” he said. “It’s safety, health, investor protection, financial stability. And those are still tried and true public policies.”
Rather than thinking we need a new law because we have a new tool—AI—lawmakers and others should focus on how existing laws apply, he said.
The SEC looks to existing investor-protection laws, while banking’s corollary is the Equal Credit Opportunity Act. The Fair Housing Act protects people from discrimination when they apply for a mortgage.
Is self-policing the answer?
Leaders at Diveplane, which develops AI-powered software for business and defense, want Biden’s blueprint, the EU’s AIA and more.
“This is going to help protect consumers,” Michael Meehan, Diveplane’s general counsel and chief legal officer, told EE Times. “People think of that as being contrary to what the companies may want. But the truth is that most companies, Diveplane included, want guidance.”
Meehan noted that neither government provides for “safe harbors” in AI law or regulation that would reduce a user’s risk.
A safe harbor is a provision in a law that grants protection from punishment or liability if certain conditions are met. For example, if a properly implemented, instance-based AI loan-approval system detects potential bias and flags the matter for human review, that step could satisfy a safe-harbor condition.
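As a minimal sketch of what that flag-and-review step might look like, the Python snippet below routes a hypothetical loan decision to a human reviewer when an internal bias monitor’s score crosses a threshold. The data structures, bias score and threshold are invented for illustration and are not part of any actual safe-harbor provision.

```python
# Minimal sketch, assuming a hypothetical loan-approval pipeline: when a bias
# signal crosses a threshold, the decision is routed to a human reviewer
# instead of being returned automatically. Names and values are illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    application_id: str
    approved: bool
    bias_score: float          # output of some internal bias monitor (assumed)
    needs_human_review: bool = False

def route_decision(decision: Decision, bias_threshold: float = 0.2) -> Decision:
    """Flag the decision for human review if the bias monitor raises a signal."""
    if decision.bias_score > bias_threshold:
        decision.needs_human_review = True
    return decision

result = route_decision(Decision("app-123", approved=False, bias_score=0.35))
print(result.needs_human_review)   # True -> escalate to a loan officer
```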
Diveplane CEO Mike Capps welcomes regulation, too, but he is also a proponent of self-policing in the industry.
To illustrate why, he points to the patient-privacy law in the U.S. The 1996 Health Insurance Portability and Accountability Act (HIPAA) offers a safe harbor for users who scrub identifying information from medical records. Unfortunately, cross-referencing that scrubbed database with another data trove can help tease out the identity of people who should otherwise be anonymous.
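The re-identification risk Capps describes can be shown in a few lines. The sketch below, with entirely invented records, joins a “scrubbed” medical table to a hypothetical outside dataset (such as a voter roll) on quasi-identifiers like ZIP code, birth year and sex, recovering probable names.

```python
# Illustrative sketch of the re-identification risk described above: a
# "de-identified" medical table joined to an outside data source on
# quasi-identifiers. All records and column names here are invented.

scrubbed_records = [
    {"zip": "27513", "birth_year": 1984, "sex": "F", "diagnosis": "asthma"},
    {"zip": "27601", "birth_year": 1972, "sex": "M", "diagnosis": "diabetes"},
]

# A hypothetical outside dataset (e.g., a voter roll) that carries names
# alongside the same quasi-identifiers.
outside_data = [
    {"name": "Jane Doe", "zip": "27513", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "27601", "birth_year": 1972, "sex": "M"},
]

def link(records, reference):
    """Match scrubbed records to named records on shared quasi-identifiers."""
    keyed = {(r["zip"], r["birth_year"], r["sex"]): r["name"] for r in reference}
    return [
        {**rec, "probable_name": keyed.get((rec["zip"], rec["birth_year"], rec["sex"]))}
        for rec in records
    ]

for row in link(scrubbed_records, outside_data):
    print(row["probable_name"], "->", row["diagnosis"])
```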
That is a risk 20th-century lawmakers likely did not anticipate.
“If you set something hard in stone about how computers work today, you don’t have the ability to adapt to … technology that didn’t exist when you wrote it,” Capps said.
That thinking prompted Diveplane to co-found the Data & Trust Alliance, a nonprofit consortium whose members “learn, develop and adopt responsible data and AI practices,” according to its website. Capps sits on the alliance’s leadership council with representatives of such entities as the NFL, CVS Health and IBM.
The group is working on standards for ethical AI.
“And those rules will continue to change and evolve because they have to,” Capps said. “Would I enshrine it in law? No, I sure wouldn’t, but I would certainly look at it as an example of how to build a flexible system for minimizing bias.”
Mahmoudian said the EU’s AIA includes language for revisiting the risk level assigned to an application as new data emerges. That is important in cases like Instagram, which was once considered innocuous but was shown, years after its launch, to negatively affect teenagers’ mental health, she said.