Companies developing AI technologies might have to consider and embed data protection issues into the design process
Washington:
After spending billions of dollars on cutting-edge artificial intelligence technologies, Europe's banks and insurers face tougher scrutiny of the tools they use to help root out fraud, check borrowers' creditworthiness and automate claims decisions. New European Union rules stress human oversight and consumer protection, which may hamper companies trying to build the tools of the future.
"Companies developing AI technologies will have to consider and embed the data protection issues into the design process," David Martin, senior legal officer at Brussels-based consumer advocate BEUC, said in an interview. "It's not something where they can just tick a box at the end."
The rules could present an obstacle to coders looking to design ever more sophisticated algorithms. That may handicap EU firms that are competing with rivals in the U.S. and Asia to develop new technologies, according to Nick Wallace, a Brussels-based senior policy analyst at the Center for Data Innovation, a nonpartisan and nonprofit research institute.
"For an algorithmic model to be transparent to a human, even a human with a fairly good understanding of algorithms, it needs to be kept within a certain level of complexity," Wallace said. "The more abstractions you have, let alone the more data points, the harder it's going to be for any human being to sit down, read through all of it and scrutinize the decision."
Regulators worldwide are trying to catch up with the financial industry's rush to automate everything from trading desks to lending decisions and customer help-desks. The banking industry will invest $3.3 billion in AI and related technologies this year, making it the second-biggest spender after retail, research firm International Data Corporation estimates. Overall spending on the technologies will grow to $52.2 billion by 2021 from about $19 billion this year, according to IDC.
The EU's General Data Protection Regulation, scheduled to take effect on Saturday, generally requires firms to get consent from people when their personal data is used to fully automate certain types of decisions that have significant effects, such as whether to award a loan. Clients will have the right to demand that a human employee at the firm intervene and review a decision, and they will have the power to obtain details about an automated process to help guard against discriminatory practices.
"Major corporations recognize that this is a challenge and that privacy rights and data protection rights need to be given full consideration during the design and development of any kind of product or service," John Bowman, London-based senior principal at IBM Corp.'s Promontory Financial Group subsidiary, said in an interview.
As policymakers ironed out the details of the regulations over the past year, financial industry lobbies including the Association for Financial Markets in Europe and U.K. Finance pressed authorities to tread softly and to acknowledge ways the technologies can benefit consumers. In a 24-page letter to policymakers, the European Banking Federation said "profiling activities should not necessarily be perceived as having a negative impact on customers."
The law is being closely watched by the insurance industry, where four out of five executives say that AI systems will be used alongside human staffers within the next two years, consultant Accenture said in a report this year.
"Much more than just a technological tool, AI has grown to the point where it often has as much influence as the people putting it to use, both within and outside the company," Accenture executives said in the report.
The U.K. arm of Ageas, a Brussels-based insurer, is looking to speed up the handling of thousands of claims for car insurance by using AI software to review images of vehicle damage and help estimate a repair job. GDPR won't affect the current technology, and the insurer has included the law's requirements in its processes, an Ageas spokeswoman said.
"Companies developing AI technologies will have to consider and embed the data protection issues into the design process," David Martin, senior legal officer at Brussels-based consumer advocate BEUC, said in an interview. "It's not something where they can just tick a box at the end."
The rules could present an obstacle to coders looking to design ever more sophisticated algorithms. That may handicap EU firms that are competing with rivals in the U.S. and Asia to develop new technologies, according to Nick Wallace, a Brussels-based senior policy analyst at the Center for Data Innovation, a nonpartisan and nonprofit research institute.
"For an algorithmic model to be transparent to a human, even a human with a fairly good understanding of algorithms, it needs to be kept within a certain level of complexity," Wallace said. "The more abstractions you have, let alone the more data points, the harder it's going to be for any human being to sit down, read through all of it and scrutinize the decision."
Regulators worldwide are trying to catch up with the financial industry's rush to automate everything from trading desks to lending decisions and customer help-desks. The banking industry will invest $3.3 billion in AI and related technologies this year, making it the second-biggest spender after retail, research firm International Data Corporation estimates. Overall spending on the technologies will grow to $52.2 billion by 2021 from about $19 billion this year, according to IDC.
The EU's General Data Protection Regulation, scheduled to take effect on Saturday, generally requires firms to get consent from people when their personal data is used to fully automate certain types of decisions that have significant effects, such as whether to award a loan. Clients will have the right to demand a firm's human employee intervene and review a decision, and they will have the power to get details about an automated process to help guard against discriminatory practices.
"Major corporations recognize that this is a challenge and that privacy rights and data protection rights need to be given full consideration during the design and development of any kind of product or service," John Bowman, London-based senior principal at IBM Corp.'s Promontory Financial Group subsidiary, said in an interview.
As policymakers ironed out the details of the regulations over the past year, financial industry lobbies including the Association for Financial Markets in Europe and U.K. Finance pressed authorities to tread softly and to acknowledge ways the technologies can benefit consumers. In a 24-page letter to policymakers, the European Banking Federation said "profiling activities should not necessarily be perceived as having a negative impact on customers."
The law is being closely watched by the insurance industry, where four out of five executives say that AI systems will be used alongside human staffers within the next two years, consultant Accenture said in a report this year.
"Much more than just a technological tool, AI has grown to the point where it often has as much influence as the people putting it to use, both within and outside the company," Accenture executives said in the report.
The U.K. arm of Ageas, a Brussels-based insurer, is looking to speed up the handling of thousands of claims for car insurance by using AI software to review images of vehicle damage and help estimate a repair job. GDPR won't affect the current technology, and the insurer has included the law's requirements in its processes, an Ageas spokeswoman said.
(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)