Follow The Brand Podcast with Host Grant McGaugh

AI Governance: Balancing Innovation and Responsibility | with Hernan Londono of Dell Technologies

Grant McGaugh CEO 5 STAR BDM Season 3 Episode 31

Send us a text

Discover the extraordinary journey of Hernan Londono, a renowned AI strategist at Dell Technologies, as he takes us from the mountains of Colombia to the forefront of technological innovation in the United States. Hernan's nearly three decades of experience, including his pivotal roles as Chief Technology Officer and Chief Information Security Officer at Barry University, provide a unique lens into the world of AI governance. This episode promises to enlighten you on how to balance cutting-edge AI innovations with ethical responsibility, ensuring alignment with strategic organizational goals.

We venture into the provocative realm of AI's role in strategic games and its real-world ramifications. Hernan discusses the historic matches of AlphaGo and AlphaZero, highlighting AI's unprecedented capabilities in generating novel strategies. But the conversation takes a serious turn as we address the ethical and responsible use of AI in military contexts, where decisions can literally be a matter of life and death. This chapter emphasizes the necessity for inclusive, diverse discussions and continuously evolving governance frameworks to ethically harness AI's power.

The complexities of AI, free speech, and legal ramifications unfold as we dissect a real-world incident involving OpenAI and Scarlett Johansson. This episode doesn't shy away from addressing the urgent need for clear guidelines and regulations as AI technology continues to permeate our daily lives. Hernan also explores the transformative impact of AI across industries, from AI-generated music to business strategy, urging the development of governance frameworks that ensure transparency, accountability, and equity. Join us for this compelling exploration of AI's future and its governance within business and beyond.

Thanks for tuning in to this episode of Follow The Brand! We hope you enjoyed learning about the latest marketing trends and strategies in Personal Branding, Business and Career Development, Financial Empowerment, Technology Innovation, and Executive Presence. To keep up with the latest insights and updates from us, be sure to follow us at 5starbdm.com. See you next time on Follow The Brand!

Speaker 1:

Welcome to another episode of Follow the Brand. I am your host, Grant McGaugh, CEO of 5 Star BDM, a five-star personal branding and business development company. I want to take you on a journey that takes another deep dive into the world of personal branding and business development, using compelling personal stories, business conversations, and tips to improve your personal brand. By listening to the Follow the Brand podcast series, you will be able to differentiate yourself from the competition and build trust with prospective clients and employers. You never get a second chance to make a first impression. Make it one that will set you apart, build trust, and reflect who you are. Developing your five-star personal brand is a great way to demonstrate your skills and knowledge. If you have any questions for me or my guests, please email me at grantmcgaugh, spelled M-C-G-A-U-G-H, at 5starbdm.com: B for brand, D for development, M for masters. Now let's begin with our next five-star episode on Follow the Brand. Welcome to another exciting episode of the Follow the Brand Podcast. I am your host, Grant McGaugh, CEO of 5 Star BDM, where we help you build a five-star brand that people will follow, and today we have an extraordinary guest who brings a wealth of knowledge and experience to our discussion on AI and governance. Our guest is Hernan Londono, a man whose journey has taken him from the mountains of Colombia to the forefront of technology innovation in the United States. Hernan's love for technology was ignited during his college days in Miami, where he found himself drawn to the dynamic world of computer science, and little did he know then that this passion would shape his remarkable career spanning nearly three decades.
Hernan is currently the Distinguished Chief Technology and Innovation Strategist at Dell Technologies, but before joining the corporate world, he spent 24 years at Barry University, where he held pivotal roles, including chief technology officer and chief information security officer. It was during this time that he earned his doctoral degree from Nova Southeastern University, further cementing his expertise in the field. Beyond his professional accomplishments, Hernan is a true renaissance man. He has published articles in prominent IT and cybersecurity trade magazines and has spoken at national conferences, captivating audiences with his disruptive thinking and insights on mitigating industry-related crises. Today we are diving deep into the critical challenge of balancing innovation and responsibility in AI governance. Hernan will share his unique perspectives on the ethical and practical considerations of AI, the importance of diverse and inclusive conversations in shaping governance models, and how organizations can align AI initiatives with their strategic goals while fostering innovation. You will not want to miss Hernan's compelling analogy about AI and the battlefield, which sheds light on the profound implications of this technology. So get ready to engage and share your thoughts using the hashtag #AIGovernance, and join me in welcoming the insightful and thought-provoking Hernan Londono to the Follow the Brand Podcast, where we are building a five-star brand that you can follow.

Speaker 1:

Welcome to the Follow the Brand Podcast. This is Grant McGaugh, your host, and today is a special day. We're here live at the Levan Innovation Center. We're going to talk about artificial intelligence, a big topic, a huge topic, but we're going to look at it through a different lens. We're going to talk about AI and governance, and I can't think of anyone more qualified to have this type of conversation, because it's so, so important as the entire planet is consumed with artificial intelligence and what its use cases are as we now go forward on this new platform. So I'd like to introduce you to Hernan Londono, whom I have known for a few years. He comes from the world of academia and he's made the transition over into enterprise, and we're going to have an in-depth conversation. So, Hernan, would you like to introduce yourself?

Speaker 2:

Grant, fantastic, thank you. Thank you for having me on your show. Couldn't agree with you more. AI is the topic, and you happen to have picked an area that is quite interesting within AI, which is governance. Somewhat philosophical, I will argue, and not entirely settled at this point. You know, a lot of things are yet to be discovered in AI governance, but happy to be here. Yeah, Hernan Londono. I am an AI strategist. I've been a technologist for 30 years, and now I'm with Dell Technologies as an AI strategist. So happy to be here.

Speaker 1:

Well, happy to have you here. We're going to step back just a little bit before we get too deep into it, because you have an interesting past. We were just talking about that a little bit, that you've come from the mountains of Colombia, all the way to New York, then to Miami, but you were at Barry University for a very long time. I want to give the audience a better understanding of your background, where you're coming from, and how you got to this point in time.

Speaker 2:

Yeah, yeah, well, listen, long journey, you know.

Speaker 2:

Like I was mentioning, I've been in the technology landscape for now almost 30 years, 24 of those almost 30 years spent at Barry University. Originally I came as a transfer student to the computer science program, and my idea was, hey, I will stay, finish my degree, and then move back to New York.

Speaker 2:

And I found this great place, a great program, very innovative. This was a while back, when I entered the program, and I was hired in the IT department. I didn't know that was going to happen, and I had the fortune of being in that university and on the IT side of things for a very long time. I had the privilege to work with amazing people, you know, our first CTO back then, John Bobran, the current VP of IT, Yvette Brown, and had the privilege of serving as CISO and as CTO for a long time. I learned a lot about academia and also learned a lot about the application of technology. And part of my academic background, by the way, fitting to say since we are at Nova Southeastern University: I was able to do my PhD in computer science here at Nova Southeastern. So my understanding of this topic of AI comes from the application of AI while I was CTO at Barry University, but also from my academic studies, both at Barry University and Nova Southeastern University.

Speaker 1:

And that is a storied background, and I'm glad it has led you right here at this point in time. As we talked about earlier, AI is a moving target. It's not just an emerging technology. I would describe it as a fountain of data, of possibilities, of things that are happening. It's in use by so many different people now. How do you want to use it? Do you want to use it for text? For video? For audio? For automation? For predictive analytics? There are so many use cases, I don't know how we're keeping up. So, AI and governance: give us your take on how you see the landscape right now and how it might progress in the near future.

Speaker 2:

Yeah, yeah, well, listen. You know, in our opening we mentioned that this is an interesting topic, right, governance. And before we really started this interaction today, you and I were talking about emergent technologies from the past, and, just to bring an analogy to the table as we continue to talk about AI governance, we remember the times of cloud and even cybersecurity. These are emerging fields where, at some point or another, we have all grappled with governance. The emergent technology or the emergent field gets underway and begins impacting society. Everybody's worrying about the practical application of something specific, let's say cloud, and then in the background there is this need to understand what is responsible, what is ethical, you know, what is fair, and that leads to a governance discussion. And that's what we're doing right now. You and I talked earlier about the fact that we're probably between 20 and 25 years into cloud, and cloud has evolved at its own pace and velocity, and we're still grappling with cloud governance. So it is always a moving target. I see this AI governance movement now, understanding that AI has a higher velocity.

Speaker 2:

Some have already indicated that technology's evolution moves at an exponential rate; however, AI moves at a double exponential rate. The rate of evolution is much higher. And so, understanding that velocity, there is a conflict between governance and the evolution of the technology. The moving target that you talked about is moving a lot faster, so it is an interesting dynamic. How do we catch up? Or should we catch up? Maybe the idea is, should we create governance models that are very elastic and flexible? You know, that's an idea, but definitely it is a moving target.

Speaker 1:

This is interesting. As you were talking, I was thinking through mobile computing and social media, which really took off in '09, '10, right after the big recession. All of a sudden, social media. And you think about it, here we are in 2024, talking about governance. We just now had some government hearings around, you know, Facebook and Snapchat and Instagram, and the use cases and what's really happening from a human experience, good and bad, and what that looks like. It can influence elections. I mean, these are really big things. And I remember talking to another guest of mine who brought up a very interesting topic: you have the nation states, like the United States, the European bloc, the African nations. Then you have tech states. Well, these tech states are more influential than the nation states, and in between that, you have power and people. So you talk about governance. Now, how do you govern all of that? So that's a big wow. I'm not going to step you into that.

Speaker 2:

It's a big wow.

Speaker 1:

But I want to bring you into this. We have a lot of technologists on the call, they're listening, and what they want to know is: as I implement AI, am I going to shoot myself in the foot because governance comes in and then I've got to change all my protocols and policies? I mean, how do I move forward with this model?

Speaker 2:

Yeah, yeah, well, listen. Absolutely, and it's a good question, because everybody is in a different place in the journey of implementing AI. Some organizations and some people decided to start very early, and AI back then was not what it is today. So, to your point about having to maybe change and maybe recycle some things in terms of governance that were adopted in the past, the answer is yes, because the technology is evolving at a pace and scale that we did not imagine. A quick example: think about artificial intelligence probably five years ago. You were looking at a lot of predictive analytics, but you weren't looking at the ability to produce images based on your prompt. You weren't looking at the ability to produce deepfakes, somebody cloning your voice and cloning your image. So any governance structures that may have existed then maybe cover some portion of AI, but not all of it. And what I'm seeing now, at least when I speak to organizations trying to implement AI, is a real desire to understand from the ground up: do we have the foundational elements? Forget about AI for a second. Do we have the foundational elements that can support the implementation of AI in a more organic way? I'll give you an example.

Speaker 2:

We started today in this podcast talking about AI governance, but before you get there, you really need to have data governance, because without data, you're not going to have outcomes in AI. You're going to need data. And so how do you govern that data? The responsible use of the data, even in the absence of AI, may be important. How do you respect privacy? How do you protect the security of that data? What do you mean when you say responsible use of the data? If you go back and start thinking about your data governance, you have a good chunk that can contribute towards AI governance, right? So the idea here is to approach the conversation from the foundational elements, understanding that data is the most foundational block to enable AI. That's probably the best place, and the first place, where we should all go to begin to figure out governance.

Speaker 1:

Now you bring up another great topic: the foundational layer. Me, I'm more of an infrastructure guy. I started in infrastructure and telecom, got into local area networks and wide area networks, which led to cloud computing as we know it, and then led me to cybersecurity, data center, all of that. Right, I understand data, and I also understand there's good data and bad data. Meaning, just like me, I probably have in my phone more than 50 percent of what I would call bad data: I don't need it, I don't use it, it's junk. So how do you, if you are sitting there with this data warehouse, this data lake, and somebody comes to you and says, I want you to actualize all my data and make it AI responsive, how do you get the data integrity to then be responsive? And then, with what you just talked about with governance, that's a tall task.

Speaker 2:

Yeah, yeah, well, listen. When it comes to data, I think there is a little bit of a misconception about good data and bad data. I will probably say there is data that you can use for a purpose, and data that you can't use for that same purpose, but all data in some way may be good, and so we have to think about that.

Speaker 2:

The value of the data in many cases is intrinsic, and it is intrinsic because, prior to AI and prior to having this amount of compute power, it was very difficult for us as humans to discover patterns in the data. And so we may have seen data sets we couldn't make sense of, and we couldn't extract value, because we didn't have the capacity on the compute side and the speed to do it.

Speaker 2:

We probably could have done it if we had been given years to analyze the data set, but we don't have years, right? So what we're seeing right now is, yeah, we have these data sets and we have the ability to correlate an incredible number of attributes and find patterns that perhaps we were never able to see. And that paradigm shift in terms of compute power and velocity is enabling organizations and people and companies to leverage those data sets in new ways. Right, not the traditional way, but in new ways, because the patterns are being discovered, and that's the interesting part about AI: what do you do with those patterns that you did not know existed?

Speaker 1:

Wow, you just opened up a big can of worms there. I mean, this is why this is so explosive. I remember, I think it was in Taiwan or China, the Master of Go. Go is like a chess game, very, very advanced, complicated, centuries old. Someone who can operate at that level is the Master of Go. No one can beat the Master of Go. And so they had a contest between the Master of Go and the AI, and the AI, I don't know if it won the first time or the second time, but it actually won the series. And what it was able to do was exactly what you just talked about: it started to present patterns that had never been seen before.

Speaker 2:

Absolutely. You know, there's a good case, by the way, I think it was before the Go case, and this was probably one of the very first good, practical demonstrations of generative AI. That was a machine called AlphaZero, one of the very first generative AI machines. It was programmed by DeepMind, and it was programmed to play chess. The undisputed champion up until that point was another AI called Stockfish 8, and that was a reinforcement learning AI machine, so it was being fed back with the new things it was learning while playing the game. But it was originally fed with all the moves of all the grandmasters. That's how it was programmed. That was the data set: all the grandmasters, their moves, given to Stockfish 8. It ingested, it learned, and it was undisputed. Nobody could actually win against that machine.

Speaker 2:

AlphaZero came, and AlphaZero, being a generative AI tool, was not given any moves. None, zero. The only thing it was given was the rules of chess and a mission, and the mission was: you have to win. And it generated all the moves, it trained itself, and it played 100 matches against Stockfish 8. It won 26, and the rest were draws. Stockfish 8 was never able to defeat AlphaZero at playing chess. And the grandmasters were pulled in to analyze the moves, and they were wowed: we have never seen this strategy ever, including sacrificing the most important piece on the chessboard to win a match.

Speaker 2:

And a lot of people were paying attention to that, because a chessboard is, in some cases, a battlefield, right? So the implication of that was: okay, what if we put a generative AI to operate in a real battlefield? Would that AI sacrifice some of our own pieces? Because it understands only the mission, right? And we're getting into the ethical, yeah, the responsible. So the military has been, I'm sure everybody listening to the podcast knows, the military has been exploring the use of generative AI on the battlefield. There's an article that came out maybe two days ago. There was a demonstration of a very high-ranking official from the government sitting in a modified F-16 being piloted by a generative AI engine and doing dogfights against a human pilot. And that's not classified; it has been published. Two days ago I saw the article, and everybody's just mind-boggled. It's a place where you now have to really define responsible and ethical use in a context where lives are at stake, and there's nothing higher than that.

Speaker 1:

That is so key. We've heard these scenarios about autonomous vehicles, right? Does the AI, you know, avoid an accident? Does it run over someone else to protect, you know, depending on the mission? What is the mission, right? But what we're really talking about, from a governance standpoint, is how are you going to govern something where you're not really sure of its complete capability, right, or its parameters? It might say, yeah, that's the governance for those parameters, but I am now operating outside of that construct, yep, because it's part of the mission.

Speaker 2:

What do you do? Yeah, well, listen, there are a few things that can be done. The first is there are very smart and committed people all around the world having this discussion, and they are coming up with frameworks, with guidelines, in some cases with regulation that is not very stringent but provides a place where everybody can begin to have a conversation around it. And there's GDPR, right? GDPR was not originally intended to govern AI, but what it was intended to govern was the use of data. So a lot of people who are looking for frameworks for AI governance are already looking at GDPR, because, to begin with, it provides a very good foundational guideline for the responsible, ethical, and even compliant use of data. So we do have examples now. That's one thing that we can do, right: we can begin to look at things that exist, adopt them, and, if we need to, adapt them to fit the AI model. The other thing that we can do, which I believe is exactly what we're doing here, is have conversations about this, right? Today we're having conversations around some of the technicalities of AI and the use of AI, but we're also having conversations about what it means to govern AI. And the conversations that we need to have, I would argue, need to happen in the context of a lot of diversity of opinions, a lot of inclusive opinions.

Speaker 2:

You and I may think differently, and this is a moving target. That's how we started today. So, if it's a moving target, it's difficult for me to say that you're wrong. It's more important for me to say, let me listen to Grant and see what he's thinking, and continue to shape this conversation in a way that, ultimately, will lead to a more practical application of a governance model, for example.

Speaker 2:

Right, you need to do that externally, but you also need to do that internally. You have your companies, your organizations, where traditionally emerging technology has been a responsibility of the IT team. I think that model is old and needs to be destroyed. I think Gartner said this maybe six or seven years ago: every company is an IT company. And then they further said: every budget is an IT budget. So the responsibility for emerging technology in an organization cuts across all lines of business. So having internal conversations, you know, the business unit, the IT unit, the marketing unit, the legal counsel, all together around this governance, is absolutely super important. Those are some things that people can begin to do, and should already be doing at this point.

Speaker 1:

It's an interesting discussion, because information technology, as you just said, has morphed and exceeded just the traditional, you know, I have to maintain the computer systems of the organization. It is a competitive advantage. How do I utilize this competitive advantage in my business objective to reach my mission, my vision, and my goals? Because this is a tool. Now, getting back to what you were saying earlier, which I love, what you just stated about governance. Think about what I talked about earlier with the nation states. Nation states are governed, like in the United States, by a constitution. You have certain amendments there, right? You have the right of free speech. So then, how does that work when you start thinking about AI? Does the human who created the AI have the freedom of speech, or does the AI itself have the freedom to create outside the context of that technology? I've heard people ask, what happens when the technology is smarter, so to speak, than the technologist?

Speaker 2:

Right, and I will say it in a different way, because this is a fantastic question that you're posing, and I don't claim to have the answer for it. I'm going to be very transparent. But let's not talk about what happens when the technology is smarter than the human. Let's ask the question: what happens when it is very difficult to discern whether it's a human or it's technology? Right, the fidelity of the experience of interfacing with the technology is so high that you don't know if you're interfacing with a human or with an AI. So I'll give you an example.

Speaker 2:

I was coming over here today very excited about this podcast, and I was listening to the news. And what was in the news today is that it is now known that OpenAI approached Scarlett Johansson twice to be the voice of the OpenAI digital assistant, and she declined. She said, no, I don't want to be the voice. And OpenAI did a pilot of a digital assistant, and she started getting phone calls from everybody: hey, your voice is in the pilot of the digital assistant for OpenAI. And she couldn't figure out, how did that happen? Well, how that happened is anything can be digitized today: the pattern of your voice, the inflection, the tone. And so we are in an era, again, of an incredible amount of compute, an incredible amount of capacity. We can begin to create models that can mimic your voice, right? So what they did is they created a voice that sounded very similar, not exactly, but very similar, to the voice of Scarlett Johansson. It's in the news today, and so she sent some letters to OpenAI saying, I really want to know, how did you do this?

Speaker 2:

And that speaks to intent, right? You know, talking about free speech, what was the intent? To create a voice similar to hers, or was it just a random occurrence? And are there legal grounds? What if they had done a random voice for that digital assistant and it just happened to sound like your voice, randomly? What then? What is the legal recourse here? And, by the way, I said 'her.'

Speaker 2:

So all of this happened because there is a movie called Her, right? I'm sure you're familiar with it, from 2013, I believe. And just for the audience, in case they don't know, it is a movie about a person who engages in a relationship with an AI voice, and it becomes a romantic relationship at some point. The fidelity of the AI is so high that it lends itself to a romantic relationship, and the voice of 'her' in the movie is Scarlett Johansson. And that's how the whole thing came about, using her as the voice of the OpenAI digital assistant. So we're in a world where these things are happening, and we still don't understand the legal grounds for one thing or the other.

Speaker 1:

Ready to elevate your brand with five-star impact? Welcome to the Follow the Brand Podcast, your gateway to exceptional personal growth and innovative business strategies. Join me as I unveil the insider strategies of industry pioneers and branding experts. Discover how to supercharge your business development, harness the power of AI for growth, and sculpt a personal brand that stands out in the crowd. Transform ambition into achievement. Explore more at 5starbdm.com for a wealth of resources. Ignite your journey with our brand blueprint and begin crafting your standout five-star future.

Speaker 1:

Today, we've got to re-look at these things foundationally. What is what, and what are your rights and what are not? There are copyright laws. Everybody knows that you can't just take a jingle or a song; you'll probably get sued, or they'll come along after you.

Speaker 1:

I just used an application called Suno, S-U-N-O, that is able to generate AI music from just a few prompts. We're talking about a sentence. I had a sentence just on brand identity. A prompt, just a simple prompt. It made a minute-and-a-half song out of it that was beautiful. Yeah, beautiful.

Speaker 1:

Did it sound like someone else? As far as I know, it doesn't, because I didn't put a prompt in there to say I want you to sound like a certain artist or something like that. I just gave it simple text, and it came back, and this is revolutionizing our world of art, our world of music, our world of journalism, our world of tech. Like you said earlier, it's affecting every facet of business and life. Now here's the question: has the horse left the barn? I mean, can you reel this back in, or is it just too late?

Speaker 2:

No, I don't know. We need to catch up. I mean, it happens, right? This is the way things happen with emerging technology. I don't know, at least I don't know of a case of an emerging technology where people sat at the table at the beginning of the creation or the visioning of that technology and said, okay, before we create the technology, let's create a governance framework. To my knowledge, it has never happened. It's always a catch-up game. So, yeah, listen, the technology is moving, and everybody begins to see how it's affecting different elements in society. And then, as a human race, we are reasonable in many ways, and we say, okay, well, there's something we need to do about it. You know, this is not going the way we want, so we need to do something about it. And that's normally the governance piece.

Speaker 1:

Remember Napster? Remember that? Yeah, for music. Right, it got out there. People were like, wow, I can download any music from anybody at any time. People were doing it right and left, and then finally it got to a congressional stage, right? Yeah. And then they passed some laws, but the technology is still in existence. The streaming services: you can stream, everybody can be on Pandora, you can speak to your Alexa and say, I want to listen to this or that. The technology didn't change, absolutely.

Speaker 2:

So there's the setting of the technology, and then there's the setting of the governance. But you know, for AI, I'll say what we need to do is: if you're an organization and you are already using AI, or you are beginning to think about using AI, you probably need to take a little bit of a step back, right? And one of the things that people can do, we did this at Dell Technologies, is create, at the very least, principles for the use of AI. Things like: the AI that is used or created needs to be beneficial, but it also needs to be equitable, transparent, responsible, accountable. These are principles that will guide your governance, so you don't end up in a case where, you know, OpenAI is probably going to have to honor the request from Scarlett Johansson and say, this is how we created this voice. That is explainable AI. That's what we're talking about today: do you have a process internally in your company where, if you need to explain something to somebody, you can? Right?

Speaker 2:

Algorithmic transparency: how is this algorithm creating this output from these inputs? How do we know that the algorithm is not biased? How do we identify if the bias is intrinsic in the data, and what can we do about it? You know, one of the things that you and I have not talked about today in relationship to the impact of AI is the magnifying and amplifying effect it has because of the capacity that we have for computing, the velocity. So if there is intrinsic bias in data and we unleash an unchecked AI on that data, because of its exponential capacity to grow, we know that if that bias exists in that data set, it will be amplified many times and very fast through the AI. And so how do we put checks and balances in the production of that output to make sure that we don't amplify biases that are intrinsic, and maybe historic, in the practices of companies and that exist in that data?
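The kind of pre-training bias check described here can be made concrete with a small sketch. This is a hypothetical illustration, not Dell's or any specific framework's process; the data, function names, and the 0.8 threshold (the common "four-fifths rule" of thumb) are illustrative assumptions:

```python
# Hypothetical sketch: measuring disparate impact in historical data
# before letting an AI learn from (and amplify) it.

def selection_rate(outcomes):
    """Fraction of positive outcomes (1s) in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower group's selection rate to the higher one's.
    A common rule of thumb flags ratios below 0.8."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high > 0 else 1.0

# Toy historical hiring data: 1 = hired, 0 = not hired.
group_a = [1, 1, 1, 0, 1, 1, 0, 1]  # 75% selected
group_b = [1, 0, 0, 0, 1, 0, 0, 0]  # 25% selected

ratio = disparate_impact_ratio(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("WARNING: skew in the data; an AI trained on it may amplify it")
```

The point of a check like this is exactly the one made above: catch the intrinsic skew before the model's speed and scale multiply it.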

Speaker 2:

And that's governance, that's creating audit trails. Do we create this output through AI and just feel good about it? Or should we have a place in that journey where we do an audit and say, okay, let's see what happened here specifically: what data was analyzed, how did it happen, why was the output this and not that? And are we maybe negatively affecting, let's say, a segment of the population because our training set only reflected this other segment of the population? All these are questions related to that governance model and the principles that we need to have to govern how we use AI in companies.
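An audit trail of the kind described above can start very simply: one record per AI output, capturing what went in, what came out, and what the model learned from. A minimal hypothetical sketch (all field and model names are illustrative, not any real system):

```python
import json
from datetime import datetime, timezone

# Hypothetical minimal AI audit trail: one record per model output,
# so a later review can answer "what data was analyzed, and why
# was the output this and not that?"
audit_log = []

def record_decision(model_version, inputs, output, training_set_id):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "training_set_id": training_set_id,  # which data the model learned from
    }
    audit_log.append(entry)
    return entry

entry = record_decision(
    model_version="loan-scorer-1.3",   # illustrative name
    inputs={"income": 52000, "region": "north"},
    output="approved",
    training_set_id="applications-2019-2023",
)
print(json.dumps(entry, indent=2))
```

Keeping the training-set identifier alongside each output is what lets an auditor later ask whether a segment of the population was missing from the data behind a given decision.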

Speaker 1:

There's no question about it. Right now, even when you use Google these days, it's biased in a way that it'll give you results that aren't necessarily reflective of who you are from a cultural standpoint, just automatically. You haven't given any cultural indications, but that's just what comes back. So you already know that the data is somewhat skewed, because it's going to reflect whoever built it, right? And whoever built it, usually their influence is going to be more amplified than anyone else's.

Speaker 1:

We have to understand that and understand how we're going to govern it to a certain degree. There's no doubt about it. But what is that going to look like? We don't want to limit the freedoms of creativity that are out there. We might be able to discover things that we're grappling with, let's say with cancer, right, or with any other kind of human element. You know, there might be a solution for a lot of the scarcity that's on this earth that could be found through some of these algorithms. So you don't want to just shut everything down, and you don't want to dumb it down, let's say, but there's got to be certain ways. Are there any specific use cases that you are aware of where you said, you know what, that was a great use of AI and governance, and you got a positive result?

Speaker 2:

Wow, that is a very interesting question. What I can say about that is, I've seen some good governance models and I have seen some really good AIs historically doing really good things. So you think about Watson by IBM. It's one of the very long-standing AIs on the scene, and because they've been around for a while, they have gone through a lot of different elements of governance. If you go to the IBM website and look at what IBM has on AI governance, you're going to find that it's really well explained. But they have had the time to do it, because Watson began to be created in the 90s. So they've gone through a number of iterations, and you have Watson doing incredible things, like playing chess and winning against Kasparov back in the day. But you also have Watson, to your point, discovering things related to cancer cures and the unfolding of proteins and things like that. So, yes, it exists.

Speaker 2:

You can have, and I think your question is along the lines of: is governance in conflict with innovation? Can you have both at the same time? And I think you can. I honestly think you can. Would it slow you down at the beginning? Perhaps it may slow you down a little bit at the beginning, but I think the general innovation premise of what you can do with AI is not going to change, and I'll tell you why.

Speaker 2:

Eventually, whether you do it at the beginning of the journey or at the end of the journey, you're going to have to worry about the ethical and responsible use of AI, and you have to govern it. Because in today's day and age, if the AI does something bad, everybody's going to know, and so it's your responsibility. How do you want it done? Do you want to find out the hard way at the end, when somebody comes down on the company like a ton of bricks? Or do you want to give everybody the sense that what you're doing is responsible, right, and begin to implement equitable AI, fair AI, explainable AI? These are things that maybe require a little bit of thinking.

Speaker 2:

But also, you know, one thing that we haven't talked about today: strategically, does the AI align to the mission of your specific organization? Because almost every company that I know has a mission, and sometimes a mission and a vision, and they have strategic goals, and then they have operational elements that support the strategic goals, and AI happens to be one of those operational elements. Can you connect the good use of AI to your strategic goals? Take public companies: that's what everybody's looking at. Shareholders are saying, okay, we want to make sure that you're a good company. Can you explain whether AI is supporting that in a responsible way? Right? And it doesn't mean you're going to have to kill innovation. It means good alignment. It takes a little bit of doing, but it's not impossible. So I think there is no real conflict between the two.

Speaker 1:

This reminds me of that movie, I, Robot. Right, you have the three laws, and those three laws governed what the robots could and could not do. You couldn't go outside of that, except you had somebody who programmed one a little differently than the others because of something else that was happening. There's always going to be anomalies, no matter what. But I wanted to change the subject a little bit. You are at Dell, and at Dell you have a different role now. I want the audience to understand, because this is almost like, are you the first person to be in that role? Just explain to us: what exactly are you doing at your company?

Speaker 2:

Yeah, no, no, no. Listen, Dell Technologies has, I think, right now probably 130,000 employees globally. Right now, as we speak, we're in the middle of what is called Dell Technologies World, and that is the biggest customer conference for Dell, where all the big announcements are being made.

Speaker 2:

And through that fabric of human capacity, there are different roles. The roles evolve according to the needs of the company, and so one of the things the company probably realized a long time ago is that there is a gray area between what the company does, how it aims at solutions and products, and what the application of those solutions and products is in the customer base. And that depends on very specific things in every one of the verticals that the product plays a role in. So you think about a server as a server, but the application of that server is different in the financial sector than it is in the state and local government sector, and the velocity with which things happen in the financial sector is a little different than the velocity with which they happen in state and local government, and the outcomes are different. Right? In the financial sector the outcome may be, we need to enable stock trading faster. In state and local government, the outcome may be, we want to be equitable about making services available to all citizens in a state or a county or a municipality. So when you think about that server, it's the same server. How do you then bridge the gap between what it does here and what it does there? And normally what it requires is somebody with that vertical expertise to contextualize for everybody, the company and the customer, what that server is good for. I'm oversimplifying the role, obviously, but the role of a strategist in a company like Dell is to bridge that gap.

Speaker 2:

All of us who have that strategist role have deep vertical expertise. So I come from the higher education side of the house, and from an institution that had both an academic practice and a healthcare practice, so I have kind of those two, and I'm able to contextualize the use of technology, the application of technology, in a particular vertical. When the company is looking at what should be responsible use of this technology in this vertical, the strategist understands that very well, and we can contextualize internally why doing it this way is not good, and maybe why we should do it this other way.

Speaker 2:

And even why we would say something like this in this fashion and not in that fashion, because it means different things in different verticals. So, you know, my role is not unique. There are different layers of the strategist role, and I happen to be now in one of the fastest, most dynamic units, which we call AI acceleration, as if AI was not already accelerated. It's a very fast-moving life, you know, when it comes to that, but I enjoy every bit of it, every minute.

Speaker 1:

I can see that your passion is definitely radiating throughout the podcast. Right? I want you to help us with this last question. Talk to us from that contextual layer. If you were talking to the audience, they want to understand: I've been tasked to now implement AI in my organization, and I want to make sure that I'm making some fundamentally good choices. If you were going to advise a person, let's just say they're from an academic institution, like we are right now, what should they be doing and why?

Speaker 2:

Well, I think there are some simple things. I mean, we talked about it throughout this podcast today. When it comes to governance, by the way, there are two fundamental models, I think: there is informal governance and there is formal governance. The difference between the two is, the informal governance happens almost coincidentally; it's not very consistent, it's not bound by protocols. The formal governance is entrenched in formality. There are consistent ways to track it, there are metrics associated with that governance process, there are protocols that people can follow, there is training that is provided to all layers of an organization. Are you a consumer of the technology? Are you a provider of the technology? What are your responsibilities when it comes to that?

Speaker 2:

So I'll say the first thing is, think about whether you're in a place where you can adopt formal governance, right? That requires decision makers at the table to sit down and say, okay, we're going to do this, and let's frame it. If you're not there, it is okay to start with informal governance. Maybe do your first audit: ethical use of AI in my organization. Have we ever done an audit? Do we understand how it's being used, and are we doing it ethically? And if you don't have the expertise to do it in-house, there are people externally who can do this for you.

Speaker 2:

The idea here is: start somewhere. Don't let it linger. Don't let the technology continue to advance in your organization, perhaps creating value for your organization and for your users, without doing these checks and balances.

Speaker 2:

Start getting acquainted with the governance frameworks and initiatives that exist in governments, whether external, like the GDPR, or here in the States. You know, we didn't talk about this too much, but the GDPR is a very comprehensive European Union regulation. Data governance plays a heavy role in it, and that's why people are looking at it for AI. In the States it's a little bit of a fragmented approach, because we have a federal government and we also have states with their own autonomy to pass their own laws. So we have probably more than a dozen states, for sure, that have created guidance or laws related to the use of AI. They have commonalities and they have differences. We should begin to understand what those do, educate ourselves about what the differences are, and try to apply things internally in our companies. That would be the best advice I can give people at this point.

Speaker 1:

That is excellent advice, Hernan, and I want to thank you for being on the Follow Brand Podcast. This has been wonderful. I'm sure the audience will want to follow up and ask their questions when we air this, so yes, send in your questions, because this is very, very important. What's the best way to contact you?

Speaker 2:

Right. I'm easily found on LinkedIn. Obviously, Hernan Londono, and I don't think there are too many Hernan Londonos on LinkedIn, so you'll see my picture, myself with my glasses and a blue suit. That's me, Hernan Londono from Dell Technologies. And feel free to email me, open door policy: hernanlondono at dell.com.

Speaker 1:

This has been wonderful. Thank you again for being on the show. I want to encourage your entire audience to tune in to all the episodes of Follow Brand. You can tune in at 5 Star BDM: that is the number 5, star S-T-A-R, B for brand, D for development, M for masters, dot com.

Speaker 2:

Grant, it was a pleasure to be here with you. Thank you for the great conversation.

Speaker 1:

Thanks for joining us on the Follow Brand Podcast. Big thanks to Full Effect Productions for their incredible support on each and every episode. Now the journey continues on our YouTube channel. Follow Brand TV Series. Dive into exclusive interviews, extended content and bonus insights that will fuel your success. Subscribe now and be a part of our growing community sharing and learning together. Explore, engage and elevate at Follow Brand TV Series on YouTube. Stay connected, stay inspired. Till next time, we will continue building a five-star brand that you can follow.