Ethics and AI

For as long as Artificial Intelligence has existed as a concept, people have had concerns about the ethics of its uses and implementations. As AI reaches further into our lives, affecting everything from our economic and political decision-making, to how traffic flows through our cities, to how good we look when we take a selfie, the ways in which these technologies are controlled, developed, implemented and deployed grow in importance.

Academic institutions, technology companies, government agencies and non-governmental organisations are all currently directing significant resources towards building frameworks within which AI can grow and thrive. Although no two of these frameworks agree entirely, there is enough overlap for us to examine how our own systems, and our roadmap for future development, measure up against their requirements.

As a company, SAI has always taken great care when developing our systems and training our models to ensure that we not only “do no harm”, but that we actively “do what is right”. As we grow and our platform delivers more complex applications to our customers, we will continue to test our systems and processes against stringent ethical criteria.

The European Commission's High-Level Expert Group on AI has laid out a framework, the Ethics Guidelines for Trustworthy AI, that our systems conform to. It consists of seven key requirements:

  1. Human agency and oversight: AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights.
  2. Technical robustness and safety: AI systems need to be resilient and secure. They need to be safe, ensuring a fallback plan in case something goes wrong, as well as being accurate, reliable and reproducible.
  3. Privacy and data governance: Besides ensuring full respect for privacy and data protection, adequate data governance mechanisms must also be ensured, taking into account the quality and integrity of the data and ensuring legitimised access to it.
  4. Transparency: The data, system and AI business models should be transparent. Moreover, AI systems and their decisions should be explained in a manner adapted to the stakeholder concerned.
  5. Diversity, non-discrimination and fairness: Unfair bias must be avoided, as it could have multiple negative implications, from the marginalisation of vulnerable groups to the exacerbation of prejudice and discrimination.
  6. Societal and environmental well-being: AI systems should benefit all human beings, including future generations. It must therefore be ensured that they are sustainable and environmentally friendly.
  7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

We hold ourselves to very high standards in this regard. The technologies being developed now and in the future hold great potential, but that potential can be lost if there is no basis for trust from our customers, partners and the public at large.

Chris Bell
Toronto, Canada