Artificial intelligence liability: the rules are changing
Artificial intelligence (AI) use has blossomed. The AI market was valued at $27.3 billion in 2019 and is projected to grow to $266.92 billion by 2026. AI applications have grown alongside it. The market for facial recognition technology, much of which relies on AI, was valued at $3.72 billion in 2020 and is forecast to reach $11.62 billion by 2026. At the same time, AI-driven facial recognition has been known to misidentify faces, among other failures. If you are an AI investor or entrepreneur, you need to know whether, and under what circumstances, an AI company can be held liable in the US or EU for malfunctioning AI.
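For context, those projections imply steep compound annual growth rates. A back-of-the-envelope check, using only the figures cited above:

```python
# Back-of-the-envelope check of the growth rates implied by the market
# figures cited above (values in billions of USD).

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate implied by a start and end value."""
    return (end_value / start_value) ** (1 / years) - 1

# Overall AI market: $27.3B (2019) -> $266.92B (2026), 7 years.
print(f"AI market CAGR: {cagr(27.3, 266.92, 7):.1%}")         # ~38.5%

# Facial recognition market: $3.72B (2020) -> $11.62B (2026), 6 years.
print(f"Facial recognition CAGR: {cagr(3.72, 11.62, 6):.1%}")  # ~20.9%
```

In other words, the cited forecasts assume the AI market grows by roughly a third every year, with facial recognition growing by about a fifth annually.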
The benefits of AI applications have also grown immensely. In 1996, for example, Lynn Cozart disappeared just days before a Pennsylvania court was to sentence him to years in prison for molesting three children. Investigators searched for him for years, but the case went cold. Then, in 2015, Facial Analysis, Comparison and Evaluation Services, the FBI team responsible for face recognition searches, matched Cozart's mug shot to the face of one "David Stone," who lived in Muskogee, Oklahoma, and worked at a local Wal-Mart. "After 19 years," FBI program analyst Doug Sprouse says, "[Cozart] was brought to justice."
AI has also been used to flag "fake news" and "deep fakes." Cheq, based in Tel Aviv, for example, weighs a range of variables to determine the authenticity of content, including a site's reputation and whether the content originates from a bot. This can assist with online digital reputation management.
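Cheq's actual model is proprietary, but a system of this general kind can be thought of as a weighted score over signals like the ones just mentioned. The sketch below is purely illustrative: the signal names, weights, and threshold are assumptions for explanation, not Cheq's method.

```python
# Illustrative sketch only: a toy authenticity score combining signals
# like those mentioned above (site reputation, bot likelihood).
# Signals, weights, and threshold are hypothetical, not Cheq's method.

from dataclasses import dataclass

@dataclass
class ContentSignals:
    site_reputation: float  # 0.0 (unknown/poor) to 1.0 (well established)
    bot_likelihood: float   # 0.0 (likely human) to 1.0 (likely bot)

def authenticity_score(s: ContentSignals) -> float:
    """Weighted combination of signals; higher means more likely authentic."""
    return 0.6 * s.site_reputation + 0.4 * (1.0 - s.bot_likelihood)

signals = ContentSignals(site_reputation=0.2, bot_likelihood=0.9)
if authenticity_score(signals) < 0.5:  # score here is 0.16
    print("Flag for review as possible fake content")
```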
Notwithstanding these benefits, AI-driven facial recognition technology can misidentify subjects. For example, a 2012 study titled "Face Recognition Performance: Role of Demographic Information," co-authored by the FBI, found that females were more difficult to recognize than males. It also found that the commercial algorithms tested had their lowest matching accuracy on subjects aged 18-30. These error rates can be strikingly high: the algorithm behind the London Metropolitan Police's facial recognition technology was at one time reported to have an error rate as high as 81%.
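Headline figures like that 81% are typically computed as the share of false matches among all the matches a system flags, rather than errors across everyone scanned. A minimal illustration of that calculation, with invented counts (not the Met's actual data):

```python
# Illustration of how a headline "error rate" for facial recognition is
# often computed: false matches as a share of all matches flagged.
# The counts below are invented for illustration, not the Met's data.

false_matches = 38  # people flagged who were not actually on the watchlist
true_matches = 9    # people flagged who were correctly identified

error_rate = false_matches / (false_matches + true_matches)
print(f"Error rate among flagged matches: {error_rate:.0%}")  # ~81%
```

Under this metric, a system can post an alarming error rate even when it flags only a small fraction of the people it scans, which is worth keeping in mind when assessing liability exposure.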