
Geoffrey Hinton's AI worries

Using technology that humans create can have intended and unintended consequences. The consequences are intended when we truly understand how a technology works: we can predict its behaviour and explain why a certain outcome will or won't happen. The consequences are unintended when we don't understand how a technology works: we can't explain or predict its behaviour. Powerful technologies like AI are no different. Either we understand them or we don't, and either way they will have consequences.

Creating new technologies has always followed the same logic: we want to achieve a goal, so we build a system to achieve it. Or we build a system and, by using it, discover what goals it can achieve. The same holds for AI.

What is different about AI is that the system meant to achieve goals is not something we design; we don't encode the system's exact behaviours. Instead, it "evolves" through learning. We neither design it nor fully understand it, so we can't precisely predict how it will behave in novel circumstances.

Understanding this allows us to relate to Geoffrey Hinton's two "bad outcome" scenarios for AI. In the first, AI is a powerful system that gives incredible power to the people using it: the AI is understood well enough to be predictable, and people with bad intentions can use it to harm others. In the second, AI is an even more powerful system that we don't fully understand, and it does something unpredictable, such as killing all humans, while pursuing a goal we gave it.

#ai #technology #geoffrey #hinton