
The Times commentary on trusting AI and value-alignment testing



This is a question of automation rather than “forced” actions. The question is the same as for the autonomous car: how much can we trust its semi-autonomous decision-making? The stakes change when physical activity could result in physical harm, if the device’s operation is not fully tested or cannot handle events that its training data did not cover. It is a matter of trust and explainability in actions.

The question of manipulation and coercion, as seen in fake news and deepfakes, through to the involuntary loss of, or inability to take, control of these automations, relates again to the safeguards and overrides that may or may not be in place. An extreme and sad example is the Boeing 737 MAX, where the faulty automation was complex and human override interventions failed to regain control. It is a matter of risk and degree in how the operating envelope of automation overlaps with human control to prevent out-of-control conditions, but this boundary is often less clear when speed and inappropriate “trust” are handed over to these automations. A key question is how automation safety is certified, and whether that certification is reproducible and verifiable.
