The why question. It's so seductive; the answers, so elusive. One of the disconcerting revelations about writing fiction was grasping a reader's hankering for cause and effect within our made-up worlds. We experience a pleasurable frisson when an early clue pays off later in the book. We want to know WHY things happen. "Just because" isn't good enough. "Dunno" is even worse. Even if it's true.
So why write about AI and empathy? It's partly the ick factor. The idea of robots deciding to act by themselves - pulling a trigger, for example - provokes in us a profound sense of wrongness. What is that about? Many reasons - but one objection to autonomous robots in war zones rests on a robot's lack of empathy.
We currently can't code empathy into robots. They don't feel - period. Though they might produce an appropriate response to our cues if they're given enough examples to learn from.
How could a machine make a life-or-death decision if it can't understand us or share our feelings?
Even when we have empathy, we don't always make good decisions. Would creating robots more like us make them better? Better for what? Better for whom? Who decides?
I wanted to explore the ethics of this technological conundrum and wrap it up in a story.
The debate about autonomous robots is live and it's real. We'll all have to live with the consequences of that debate - everyone will. How awesome and scary is that?
Hence the book.