The paradox of AI reliability | by Cassie Kozyrkov | Science and Data


Too good to fail? The surprising way a high-performing system can hurt you.

  • Careless Daniel is a constant disappointment: he performs well on assigned tasks 70% of the time and produces utter embarrassment the rest of the time. Watching Daniel make 10 attempts is more than enough to elicit an “oh my gosh” from you.
  • Reliable Carlos is another story. You have seen Carlos in action more than a hundred times, and he has always impressed you. (The quick calculation sketched below shows how little that actually proves.)
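To make the intuition concrete, here is a back-of-the-envelope sketch. Daniel’s 70% comes from the story above; Carlos’s 99.5% success rate is an invented number, chosen only to illustrate how an imperfect performer can sail through a hundred observations without a single visible slip.

```python
# Back-of-the-envelope: chance of witnessing at least one failure.
# Daniel's 70% is from the story; Carlos's 99.5% is a made-up illustration.

def chance_of_seeing_a_failure(success_rate: float, observations: int) -> float:
    """Probability of witnessing at least one failure across independent observations."""
    return 1 - success_rate ** observations

print(chance_of_seeing_a_failure(0.70, 10))    # ~0.97: you will almost surely catch Daniel slipping
print(chance_of_seeing_a_failure(0.995, 100))  # ~0.39: odds are you never see Carlos fail at all
```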

Overconfidence is the problem. When a system is obviously flawed, you plan around its errors; you never count on perfect execution. It is the system that looks flawless that tempts you to stop planning at all.

When you increase the scale, you will run into the long tail: failure modes so rare that you never saw them while you were merely watching.
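The same arithmetic gets uncomfortable once the system handles real traffic. Assuming, purely for illustration, a 99.99% per-request success rate and a million requests a day:

```python
# Illustrative numbers only: both the reliability and the traffic are assumptions.
success_rate = 0.9999          # the system handles 99.99% of requests correctly
requests_per_day = 1_000_000   # hypothetical daily volume

expected_failures_per_day = (1 - success_rate) * requests_per_day
print(round(expected_failures_per_day))  # ~100 failures, every single day
```

A system good enough that no one ever sees it fail during casual observation can still fail a hundred times a day in production.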


A lowered *chance* of observing bugs does not rule out the *possibility* of bugs.

The “unsinkable” Titanic is a famous example of this kind of mental rounding. According to NBC, “the phrase was originally ‘virtually unsinkable’ and came from an obscure engineering magazine, but after a while it didn’t matter.” One person even claimed to have heard the ship’s captain, Edward John Smith, say, “God himself could not sink this ship.”

Just because you haven’t noticed an error yet doesn’t mean your system is perfect. Plan for failures and create safety nets!

Relying on perfection is dangerous. Treat flawless performance as a bonus, but never count on it.

Errors *will occur.*

  • What safety nets are in place to protect people from the consequences of those mistakes?
  • If the whole system fails – safety nets and all – what is the plan to fix things?

Better is not the same as perfect.

As long as the tasks are complex or the inputs are varied, errors will occur.
