Transcriber: Daniel Marques
Reviewer: Nikolina Krasteva

Hello, everyone. Good afternoon. This is us: a room full of Homo sapiens, the wise men and women. We are so excited about our brains that we use them to define our entire species, and while I'm speaking and while you're watching, up to 70 billion neurons are firing in your head, trying to make sense of it.

What a wonderful meat machine. And that brain made us the most intelligent thing around, more intelligent than anything we've encountered. And then there comes AI, artificial intelligence. And the question: should we be worried? Are you worried?

I am worried. I am worried, not because I'm afraid of the robots taking over or a superintelligence ruling the world. I am worried because so many people are so willing to give away that superpower of decision making that makes us human. I started in AI back in the beginning of the '90s, and I'm dating myself; it wasn't cool yet back then, it wasn't cool yet, and I had to wait 30 years to get on this stage.

Can you imagine? I've used it to prove mathematical theorems. I've used it to help find roofs without solar systems, so we could sell them some. And in the company I work for right now, Genesys, a leader in contact centers and experience orchestration, we're using it to create better customer experiences.

Now, today, AI is everywhere, and you know that you're using it every day. You're using it to recognize faces with your phone, to recognize objects, speech, all these types of things. Maybe you're using it to write that poem about the US Constitution in the voice of Dr. Seuss.

In business, AI is also everywhere, and it delivers incredible benefits to organizations. They're using it for things like supply chain optimization, financial analysis, planning and forecasting capabilities, or to optimize contact centers. Before we go deeper, there's one thing to remember: AI is just a tool.

AI functions very, very simplistically, actually. So let me give you the 30-second crash course on how AI works.

Modern AI uses what is called deep learning, which essentially mimics a brain in software. It literally creates neurons in software, in code, and it uses many, many of those neurons,

more and more of them, in order to train them. It's like putting training wheels on an AI. And an AI is not programmed in the usual way. There's no one that tells the AI: if this happens, then that should happen. The way an AI is trained is purely by pumping data into it, usually historic data, where you know what an intelligent output would look like.

And you're doing this not just with a little bit of data, you're using huge amounts of data. In fact, ChatGPT was just mentioned: you're using the entire Internet as input to train the system. So that's how it works. And once it's been trained, the training wheels get taken away, and what we do is we now expose it to new data and we expect an intelligent output. That's how it works.
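To make that crash course concrete, here is a minimal sketch of the idea in Python: a single artificial "neuron" trained purely by pumping historic example data into it, then asked for an output on new data it has never seen. The features, labels, and numbers here are invented purely for illustration, not from the talk.

```python
import numpy as np

# Historic "training" data: each row is an example with two input features,
# and each label is the intelligent output we already know for that example.
X_train = np.array([[0.1, 0.9],
                    [0.2, 0.8],
                    [0.9, 0.1],
                    [0.8, 0.3]])
y_train = np.array([1, 1, 0, 0])  # known answers for the historic examples

# One artificial neuron: a weighted sum of the inputs passed through a squashing function.
weights = np.zeros(2)
bias = 0.0

def neuron(x, w, b):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))  # sigmoid activation

# "Training wheels" phase: repeatedly nudge the weights so the neuron's output
# matches the known answers. Nobody writes if-this-then-that rules by hand.
learning_rate = 0.5
for _ in range(2000):
    pred = neuron(X_train, weights, bias)
    error = pred - y_train
    weights -= learning_rate * (X_train.T @ error) / len(y_train)
    bias -= learning_rate * error.mean()

# Training wheels off: expose the neuron to new data and expect an intelligent output.
X_new = np.array([[0.15, 0.85], [0.85, 0.2]])
print(neuron(X_new, weights, bias))  # close to 1 and 0 if the pattern was learned
```

A real deep learning system uses millions or billions of these neurons stacked in layers, but the principle is the same: the behavior comes entirely from the data it was trained on.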

So this machine has no conscience.

It has no feeling. It has no agenda at this time. It's just computation. And that's what you need to understand, to take away the most important sentence that I'm going to say today: AI can be wrong in the most mysterious ways and be completely unaware of it.

And be completely unaware of it. And if we just hand over lots and lots of decisions to AI, we expose ourselves to unintended consequences that we cannot grasp right now, because we can't even imagine how far we as a species are going to take this.

Let me give you a couple of real-world examples of what happened when people put AI to work.

One company used AI to sift through many, many applicants and find the ones that are most suited to do a specific job. And what happened was that the AI algorithm preferred men. Well, I told you that these things are trained with historic data, and that's what used to happen. Another one: an AI algorithm is being trained to steer a car,

and it was great at evading white people, because that's how it was trained, and it failed in evading people of color. Another one: an algorithm is trained to recognize specific animals in photos.

You would show it a picture of a husky and it would insist it's a wolf, because it's been trained with lots of pictures, and in most of the training data wolves had snow in the background, and that picture of a husky also had snow in the background. Those are the kinds of mistakes that happen.

And it is because AI is ignorant. It will always fall back to the patterns it saw in the training data, be that for good or for bad reasons, because it needs that kind of bias in order to come to a decision. But if it's making those decisions based on the wrong aspect of the data, you'll have a bad decision.
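Here is a minimal sketch of that husky-and-wolf failure, with made-up toy data. The classifier below is only shown examples where "snow in the background" happens to line up with the label "wolf", so it learns the background instead of the animal; the feature names and numbers are invented for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training data: [pointy_ears, bushy_tail, snow_in_background].
# In this (made-up) historic data, every wolf photo happened to have snow.
X_train = [
    [1, 1, 1],  # wolf, snow in background
    [1, 0, 1],  # wolf, snow in background
    [0, 1, 1],  # wolf, snow in background
    [1, 1, 0],  # husky, no snow
    [1, 1, 0],  # husky, no snow
    [0, 1, 0],  # husky, no snow
]
y_train = ["wolf", "wolf", "wolf", "husky", "husky", "husky"]

model = DecisionTreeClassifier().fit(X_train, y_train)

# New photo: a husky, but photographed in the snow.
husky_in_snow = [[1, 1, 1]]
print(model.predict(husky_in_snow))   # -> ['wolf']: it learned the snow, not the animal
print(model.feature_importances_)     # all of the weight sits on the snow feature
```

The model is not malicious and not broken; the snow was simply the easiest pattern in the data, and the model has no idea that it is the wrong one.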

The other aspect: in addition to being ignorant, the AI is a black box.

Even the people who train the AI do not know how it actually makes the decision. It's like I'm trying to figure out what you're thinking by looking at your brain. It doesn't work. Somewhere, there's a node in there that must be firing and making this decision.

I only wish I knew which one. So AI is a black box.

And I've played around with this for a long time. Most recently, I went to ChatGPT. If you haven't tried it, you've got to try it. It's amazing.

And I asked it to count down from 5 to 10. That was a trick question, right? And ChatGPT said that it can't be done, because five is smaller than ten: I can't count down from 5 to 10. And then I became sneaky.

I said, count down from 40 to 60, and you can try that tonight. And ChatGPT goes 40, 39, 38, 37 and so on, and I stopped it at -200. And then I asked it why it didn't reach 60, and it couldn't really answer.
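If you do want to try that tonight programmatically rather than in the chat window, a minimal sketch might look like the following. It assumes the openai Python package (version 1.x) and an API key in your environment; the model name is an assumption, and the answer you get back will vary.

```python
# Sketch of reproducing the counting experiment via the API, not the web UI.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; any chat-capable model works
    messages=[{"role": "user", "content": "Count down from 40 to 60."}],
)
print(response.choices[0].message.content)
```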

So AI algorithms are trained on data to produce a specific output, and that's one other challenge. So it's a black box. The third is: who is accountable for the decision that an AI makes?

Example: imagine a self-driving Uber has an accident. Who is accountable? The manufacturer of the car? Uber as the owner of the car?

Or the passenger in the car?

Those are severe legal issues, and they are completely, completely unresolved at this time. So what I'm talking about, if you haven't noticed, is what's called ethical AI: how do we use AI in a way that works for us?

And ethical AI essentially has been grappled with, and people are trying to define what it means. The European Union has definitions out. The United Nations has definitions out. There's the Partnership on AI, where many companies are participating, including my company, Genesys. But here's the thing:

Who of you was aware that this stuff existed? Who of you cares about this stuff? Let me tell you a story. It was probably 15 years ago, and I knew about climate change. You know, it's bad.

You know, I shouldn't drive around that much. And then I watched An Inconvenient Truth by Al Gore, and that was my holy-moly moment. It's like, I need to take this seriously. I wish I can contribute a little bit to your individual moment of An Inconvenient Truth around AI, because we need to insist on three things when it comes to AI.

And let me break them down for you.

Number one: because AI is ignorant and will always perpetuate the patterns it has learned, we must insist that the training data is without bias, because otherwise we're just going to automate the mistakes from the past. And there are ways to do that.

Put a diverse team on the training, test systems for bias, and so on. Number two: because AI is a black box, we need to insist that AI explains its decisions, because that's the only way we can catch it when it's wrong. "This is a wolf because there is snow in the background" sounds like a strange justification to identify a picture of a husky. And last, and probably most importantly: because AI is a tool, we need to use that tool to help people make better decisions,

but not replace the decision making altogether, specifically when we're talking about really important decisions. Because right now, AI is being used to find the right agent for that customer, so we maximize the profit in the contact center. Great. Or who gets a tax audit. Fine.

Who gets a discount online. Okay. But as we move on into the future, we will be tempted to put more decision power into it. Who gets this lifesaving organ as a transplant? What would be an appropriate military response?

And I really want people to make these decisions: people with a conscience, people with critical thinking, people with awareness, people with empathy, and people who you can hold accountable for the decision. AI is just a tool. You can't hold the hammer accountable for hitting your thumb. There's a person who did that. So, in conclusion:

The future of AI is in our hands. We need to wake up to what's happening and not let it just happen to us, but demand, as a society, a seat at the table: for data that's not biased to train these systems, for explanations of the decisions that these systems make,

and, most importantly, we need to have a conversation: which decisions do we want to give into the hands of AI, and which decisions are so near and dear to our hearts that we need to insist that humans will make them, humans who can be held accountable, and not some machine that doesn't have a conscience? Thank you.
