Even well-meaning interventions can have devastating and heartbreaking consequences, as this story reminds us.
One of the well-known tropes in science fiction — especially the sort of sci-fi you see in comics and superhero films — is the mad scientist. The person who fancies himself (it’s usually a him) as ruler of the world and invents some dastardly device to aid in his devilish designs.
But in sensible science fiction, the scientists are often not so much mad or ill-intentioned as driven. Or, simply, their inventions and breakthroughs lead to unintended consequences.
Flowers for Algernon falls into that last category. Someone makes a biological breakthrough and finds a way to increase IQ. As always, it's tried out on animals first, and it seems to be a miracle cure. It's a long time since I read the story, so the specific details of how this comes about are somewhat hazy, but a person of low intelligence undergoes the treatment and his intelligence improves. The story is written in the first person, in the form of a diary, and as the protagonist's IQ improves, so do his spelling, thinking and writing.
He falls in love with his special school teacher, and she with him, and they become research colleagues. But then he notices that the effects of the treatment on the animals are only temporary. They revert to their original state, and he realises that the same thing is going to happen to him. It does, and it's reflected in the worsening of his writing and his memory.
I realise that this has more to do with science than computing, but the common thread, I think, is the ethical one. Should there be some sort of ethical oversight committee to evaluate new developments, in much the same way as new drugs have to undergo extensive trials? And even if that were to happen, who would be qualified to sit on such a committee? Why would its members have any more insight into the possible consequences than anyone else?
We're already in an undesirable situation with artificial intelligence. An AI system comes up with a solution to something, and nobody knows how it did so. So you have what is, in effect, a black box making decisions that affect people's lives.
Under the General Data Protection Regulation (GDPR) in the UK and EU, companies making automated decisions about people are supposed to provide meaningful information about how those decisions are reached. Well, good luck with forcing companies like Google to do that. But even if they wanted to comply, would the companies always even know?
I don’t have the answers to such questions, but I think they might be interesting to discuss.
For more articles in this series, please see the Dystopian Fictions index.
I've never read this one, though it's been on my list as a SF Classic that I'd like to read at some point.
I didn't know the GDPR covered that. I thought it was only about how they use the data. I doubt many big tech companies want there to be any transparency over their algorithms. I think AI will fast outpace any measures that are meant to be put in place. Soon the AI will be generating its own AI (only half joking...)!
It's alive! It's alive! It's alive! Such a classic! Really made me laugh this morning, Terry. Another film with a similar theme to Flowers for Algernon is Awakenings (Robert De Niro and Robin Williams). As for your AI question... I have a feeling we will not know the true danger until it is too late, and it is already too late. Reminds me of the truism that the first symptom of heart disease is cardiac arrest...