
Your current AI approach is broken
By Alex Barrera • Issue #28
Welcome everyone to another issue of The Aleph Report! Today I’m writing a much more personal piece. I’m sure some will disagree, and I would love for everyone to have a good discussion on the issue. Feel free to ping me with your opinions! Happy week, people!
4 minute read.

Photo by Mika on Unsplash
Some weeks ago, a friend asked me how to automate and achieve scale for his mental health startup. It’s not the first time someone has asked me this. Under the technology creed, scale and automation are synonyms for Artificial Intelligence.
I was hesitant to answer. I told him that I’m a believer in AI, but that you can’t apply AI, as it stands today, to mental health problems.
Most AI methods are based on function optimization. In layman’s terms, the algorithm looks for the best (optimal) solution to a given goal. The problem is that such a goal rarely encodes any information about its moral worth.
“A system that is optimizing a function of n variables, where the objective depends on a subset of size k < n, will often set the remaining unconstrained variables to extreme values; if one of those unconstrained variables is actually something we care about, the solution found may be highly undesirable.”

Stuart J. Russell. Of Myths and Moonshine. The Myth of AI. Edge. 2014
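Russell’s point can be sketched in a few lines of toy Python. The scenario and all the names here are invented for illustration: imagine a news feed whose objective counts only sensationalism, while the accuracy of the content (something we care about, but which never appears in the objective) trades off against it. The optimizer dutifully drives the unconstrained variable to its extreme.

```python
# Toy sketch of Russell's "unconstrained variable" problem.
# Hypothetical example: the objective depends only on
# sensationalism; accuracy is a variable we care about
# but never told the optimizer about.

def engagement(sensationalism, accuracy):
    """The objective the system optimizes: only sensationalism counts."""
    return sensationalism

# A content "budget": a piece can't be maximally sensational
# and maximally accurate at once (sensationalism + accuracy <= 1).
candidates = [(s / 100, a / 100)
              for s in range(101)
              for a in range(101)
              if s + a <= 100]

# Exhaustive search for the engagement-optimal point.
best = max(candidates, key=lambda p: engagement(*p))
print(best)  # → (1.0, 0.0): accuracy is pushed to its extreme
```

Nothing in the search is malicious; the optimizer simply has no reason to preserve a variable its objective never mentions, which is exactly the moral blind spot the quote describes.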
In other words,
“Building an agent to do something which (in humans) correlates with the desired behavior does not necessarily result in a system that acts like a human.”

Nate Soares. The Value Learning Problem. Machine Intelligence Research Institute. 2016
And this is the key to the disturbing stage of technological evolution we’re experiencing. As I’ve pointed out before, I believe we’re living through a major moral crisis. One of its consequences is that we’re blindly applying mathematical formulas that don’t factor in human moral values. The goal is to automate a problem, and to do it fast and efficiently. The issue is that the human mind can’t be reduced to a set of equations (yet).
Human psychology is incredibly complex. We’re governed by dynamic systems that defy most rational explanations. Over the last few decades, we’ve witnessed the failure of the Keynesian doctrine in economics. Led by behavioral scientists like Dan Ariely, more voices are demanding better models, models that account for human irrationality.
However, we’re making the same mistake once more. We’re applying sophisticated algorithms that grossly oversimplify the reality they deal with. One could argue that most AI systems are applied to non-human, repetitive, tedious tasks. And they’d be right. Still, many of those AI systems make decisions that do affect humans and their mental models.
An excellent example of this is the filter bubbles and echo chambers produced by algorithmic news feeds. As much as it pains us, Deep Learning techniques omit any moral judgment of the outcome. The surprising thing is that this effect, nicknamed the “Sorcerer’s Apprentice” problem by Norbert Wiener, has been known since 1960: systems that follow their instructions but misinterpret the developer’s intent.
But maybe the most striking thing about the current AI wave is the lack of awareness of the problem. The recent demo of Google Duplex is a perfect example of this. Natasha Lomas wrote a brilliant piece on the issue.
“Yet despite all the thoughtful ethical guidance and research that’s already been produced, and is out there for the reading, here we are again being shown the same tired tech industry playbook applauding engineering capabilities in a shiny bubble, stripped of human context and societal consideration, and dangled in front of an uncritical audience to see how loud they’ll cheer.”

Natasha Lomas - Duplex shows Google failing at ethical and creative AI design
It’s discouraging to watch history repeat itself. In 1962, Rachel L. Carson published one of the most impactful books in science, Silent Spring. In it, she criticised the indiscriminate use of chemical pesticides (DDT) and their poisonous effects on the ecosystem, humans included.
“Technology, she feared, was moving on a faster trajectory than mankind’s sense of moral responsibility.”

Linda Lear’s introduction to Silent Spring by Rachel L. Carson. 2002
At the time, the US chemical industry, one of the largest beneficiaries of the Cold War, operated largely unchecked. Science and chemists were considered the top of the food chain. No one questioned their knowledge. No one questioned their products. What Carson uncovered, documented, and publicised was the other truth: lethal ignorance, greed, and Capitalism.
“Carson questioned the moral right of government to leave its citizens unprotected from substances they could neither physically avoid nor publicly question.”

Linda Lear’s introduction to Silent Spring by Rachel L. Carson. 2002
I can’t help but draw parallels with our current situation. I question the moral right of Google or Facebook to leave their users unprotected. But this isn’t a problem only of the prominent technology corporations; it affects most AI-powered solutions. And it lies not so much with the solutions themselves as with the lack of awareness of the developers and designers behind them.
All this said, I’m bullish on AI. I’ve been a defender and ardent believer of the field. This is why I’m so vocal about its current misdirection.
Yes, we need autonomous agents. We need to apply AI, but we need to incorporate moral values into the equation. This in itself is a considerable challenge. It’s becoming one of the newest research lines in AI, but most advances are, so far, theoretical. New companies deploying new systems should be aware of the problems. They should try to apply new AI models and build moral safeguards.
“The systems will need some method for learning and adopting prosocial preferences, in light of the fact that we cannot expect arbitrary rational actors to exhibit prosocial behavior in the face of large power disparities.”

Nate Soares. The Value Learning Problem. Machine Intelligence Research Institute. 2016
As I told my friend: yes, you need to scale your mental health approach. But you can’t do what everyone else is doing. The stakes are too high to gamble morally with our minds.
“Man has lost the capacity to foresee and to forestall. He will end by destroying the earth.”

Albert Schweitzer
If you like this article, please share it and invite others to follow the newsletter; it really helps us grow!