AI ethics should be hardcoded like security by design

Businesses need to think about ethics from the ground up when they begin conceptualising and developing artificial intelligence (AI) products. This will help ensure AI tools can be implemented responsibly and without bias.

The same approach is already deemed essential for cybersecurity products, where a “security by design” development principle drives the need to assess risks and hardcode security from the start, so that piecemeal patchwork and costly retrofitting can be avoided later.

This mindset should now be applied to the development of AI products, said Kathy Baxter, principal architect for Salesforce.com’s ethical AI practice, who underscored the need for organisations to meet fundamental development standards for AI ethics.

She noted that there were many lessons to be learned from the cybersecurity industry, which had evolved over the decades since the first malware surfaced in the 1980s. For a sector that did not even exist before then, cybersecurity had since transformed the way companies protected their systems, with an emphasis on identifying risks from the start and developing basic standards and regulations for implementation.

As a result, most organisations today would have put in place basic security standards that all stakeholders, including employees, should observe, Baxter said in an interview with ZDNet. All new hires at Salesforce.com, for instance, go through an orientation process during which the company outlines what is expected of employees in terms of cybersecurity practices, such as using strong passwords and a VPN.

The same applied to ethics, she said, adding that there was an internal team dedicated to driving this within the company. 

There were also resources to help employees assess whether a task or service should be carried out based on the company’s ethics guidelines, and to understand where the red lines were, Baxter said. Salesforce.com’s AI-powered Einstein Vision, for example, can never be used for facial recognition, so any salesperson who unknowingly tries to sell the product for such a deployment would be violating the company’s policies.

And just as cybersecurity practices are regularly reviewed and revised to keep pace with the changing threat landscape, the same should apply to policies related to AI ethics, she said.

This was critical because societies and cultures change over time; values deemed relevant 10 years ago might no longer be aligned with the views a country’s population holds today, she noted. AI products needed to reflect this.

Data a key barrier to addressing AI bias

While policies could mitigate some risks of bias in AI, other challenges remained, in particular access to data. A lack of volume or variety could result in an inaccurate representation of an industry or population segment.

This was a significant challenge in the healthcare sector, particularly in countries such as the US that have no socialised medicine or government-run healthcare system, Baxter said. When AI models were trained on limited datasets drawn from a narrow subset of a population, it could affect the delivery of healthcare services and the ability to detect diseases among certain groups of people.
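
As a rough illustration of the representation problem Baxter describes, a team might profile its training data before any modelling begins. The sketch below assumes a hypothetical patient table with a demographic `group` column; the column names, data, and 5% threshold are illustrative only, not anything Salesforce.com has described.

```python
import pandas as pd

# Hypothetical patient records; "group" is a demographic attribute.
df = pd.DataFrame({
    "group": ["A"] * 900 + ["B"] * 80 + ["C"] * 20,
    "diagnosed": [1, 0] * 450 + [1, 0] * 40 + [1, 0] * 10,
})

# Share of each demographic group in the training data.
shares = df["group"].value_counts(normalize=True)
print(shares)

# Flag groups below an illustrative 5% representation floor.
under_represented = shares[shares < 0.05]
if not under_represented.empty:
    print("Under-represented:", list(under_represented.index))
```

A model trained on this table would see group C only 2% of the time, which is the kind of skew that can degrade disease detection for that group.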

Salesforce.com, which cannot access or use its customers’ data to train its own AI models, plugs the gaps by purchasing data from external sources, such as the linguistic data used to train its chatbots, and by tapping synthetic data.
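
The article does not describe how that synthetic data is produced. As a generic sketch only, one naive approach is to fit simple per-column distributions to a small real sample and draw new rows from them; production-grade generators model joint structure and privacy far more carefully.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Small, hypothetical real sample of numeric features.
real = pd.DataFrame({
    "age": rng.integers(20, 80, size=50).astype(float),
    "blood_pressure": rng.normal(120, 15, size=50),
})

def synthesize(df: pd.DataFrame, n: int) -> pd.DataFrame:
    """Draw n synthetic rows by sampling each column independently
    from a normal distribution fitted to the real data. A naive
    sketch: it ignores correlations between columns."""
    return pd.DataFrame({
        col: rng.normal(df[col].mean(), df[col].std(), size=n)
        for col in df.columns
    })

synthetic = synthesize(real, n=500)
print(synthetic.describe())
```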

Asked about the role regulators played in driving AI ethics, Baxter said mandating the use of specific metrics could be harmful, as there were still many open questions around the definition of “explainable AI” and how it should be implemented.

The Salesforce.com executive is a member of Singapore’s advisory council on the ethical use of AI and data, which advises the government on policies and governance related to the use of data-driven technologies in the private sector.

Pointing to her experience on the council, Baxter said its members quickly realised that defining “fairness” alone was complicated, with more than 200 statistical definitions in circulation. Furthermore, what was fair for one group would sometimes inevitably be less fair for another, she noted.
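
To make that tension concrete, here is a minimal sketch contrasting two common statistical definitions of fairness on made-up decisions: demographic parity (equal selection rates across groups) and equal opportunity (equal true-positive rates). The data is contrived so that the first definition is satisfied while the second is not.

```python
import numpy as np

# Contrived outcomes and model decisions for two groups of 8 people.
# Group A has more actual positives (y_true) than group B.
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0,  1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 1, 0, 0, 0, 0,  1, 1, 1, 1, 0, 0, 0, 0])
group = np.array(["A"] * 8 + ["B"] * 8)

for g in ("A", "B"):
    mask = group == g
    selection_rate = y_pred[mask].mean()       # demographic parity
    tpr = y_pred[mask & (y_true == 1)].mean()  # equal opportunity
    print(f"group {g}: selection rate={selection_rate:.2f}, TPR={tpr:.2f}")
```

Both groups are selected at the same 50% rate, yet group A’s qualified members are approved only two-thirds of the time versus all of group B’s, so the model is “fair” under one definition and unfair under the other.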

Defining “explainability” was also complex, since even machine learning experts could misinterpret how a model worked based on pre-defined explanations, she said. Any policies or regulations should be easily understood by anyone who used AI-powered data, across all sectors, including field agents or social workers.
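
As an example of what such a pre-defined explanation can look like in practice, the sketch below computes permutation importance, one widely used explanation technique, on a toy model; the data and model choice are illustrative and unrelated to any product mentioned here. The single number per feature shows why such summaries are easy to over-read.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Toy data: only the first of three features drives the label.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much accuracy drops when each
# feature's values are shuffled, breaking its link to the label.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")
```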

Realising that such issues were complex, Baxter said, the Singapore council determined it would be more effective to establish a framework and guidelines, including toolkits, to help AI adopters understand the technology’s impact and be transparent about their use of AI.

Singapore last month released a toolkit, called A.I. Verify, that it said would enable businesses to demonstrate their “objective and verifiable” use of AI. The move was part of the government’s efforts to drive transparency in AI deployments through technical and process checks.

Baxter stressed the need to dispel the misconception that AI systems were fair by default simply because they were machines and, hence, devoid of bias. Organisations and governments must invest the effort to ensure AI’s benefits were equally distributed and its application met certain criteria of responsible AI, she said.
