As we come to rely more and more on the output of machines, we tend to forget that they are intrinsically binary: the output is either black or white.
One small mistake or variation in the output, driven by something as simple as the placement of a punctuation mark in your prompt, can deliver a very different outcome.
When you string a series of tasks together, even simple ones, each relying on the output of the one before, any error that is not picked up will be compounded.
My contention, therefore, is that the greatest problem with AI is not technical: it is our belief in the output, combined with the fact that any error anywhere in the system compounds.
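To make that compounding concrete, here is a minimal sketch in Python; the 95 per cent per-step reliability, and the assumption that steps fail independently, are illustrative figures, not measurements of any real system:

```python
# Illustrative sketch: assumes every step in a chain succeeds
# independently with the same probability, which real pipelines
# rarely guarantee.
def chain_reliability(per_step: float, steps: int) -> float:
    """Probability that every step in a sequential chain succeeds."""
    return per_step ** steps

# A per-step accuracy of 95% sounds reassuring, until it is chained.
for steps in (1, 5, 10, 20):
    print(f"{steps:2d} steps: {chain_reliability(0.95, steps):.1%} end-to-end")
# Output:
#  1 steps: 95.0% end-to-end
#  5 steps: 77.4% end-to-end
# 10 steps: 59.9% end-to-end
# 20 steps: 35.8% end-to-end
```

Even under these generous assumptions, ten chained steps drop end-to-end reliability below 60 per cent.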
The danger of compounding errors is obvious. A simple O-ring worth a couple of dollars, not malleable enough to accommodate the cold weather on that January morning in 1986, led to the end of the space shuttle Challenger. The unusually low temperature that morning had not been anticipated, and so was not included among the variables that determined the specifications of every piece in that multi-billion-dollar vehicle.
Boom.
The only antidote to these misleading, erroneous and potentially nasty outcomes is human scrutiny. We may have been released from the repetitive and mundane parts of our jobs, but that release comes at a price: we simply must be highly sceptical of the outputs, and subject them to rigorous scrutiny, starting from first principles.