Entrepreneurs: How AI Is Boosting Human Potential with Kevin Surace

Episode 215 | November 16, 2023 | 00:22:23
Passage to Profit Show - Road to Entrepreneurship



Show Notes

Richard Gearhart and Elizabeth Gearhart, co-hosts of The Passage to Profit Show, along with Kenya Gipson, interview Kevin Surace, generative AI expert and multi-field inventor.
 
 
Step into the realm of innovation and artificial intelligence with our guest, Kevin Surace, a Silicon Valley luminary known for his groundbreaking work! A serial entrepreneur and the CEO and CTO of Appvance.ai, with a technical background boasting 94 worldwide patents, Kevin has been at the forefront of AI's transformative power. In this episode, explore the intersection of AI, ethics, and bias, unraveling the challenges and promises that come with these technologies. From edutainment to groundbreaking developments in software testing, Kevin provides a unique perspective on the evolving landscape of artificial intelligence. Read more at: https://kevinsurace.com 
 
 
Whether you're a seasoned entrepreneur, a startup, an inventor, an innovator, a small business or just starting your entrepreneurial journey, tune into Passage to Profit Show for compelling discussions, real-life examples, and expert advice on entrepreneurship, intellectual property, trademarks and more. Visit https://passagetoprofitshow.com/ for the latest updates and episodes.

Episode Transcript

[00:00:01] Speaker A: Want to protect your business? The time is near. You've given it heart, now get it in gear. It's Passage to Profit with Richard and Elizabeth Gearhart. [00:00:14] Speaker B: I'm Richard Gearhart, founder of Gearhart Law, a full service intellectual property law firm specializing in patents, trademarks and copyrights. [00:00:20] Speaker C: And I'm Elizabeth Gearhart. Not an attorney, but I work at Gearhart Law doing the marketing, and I have my own startups. [00:00:25] Speaker B: Welcome to Passage to Profit, everyone, the Road to Entrepreneurship, where we talk with startups and small businesses and discuss the intellectual property that helps them flourish. [00:00:35] Speaker D: So that said, I think it's time now to pick up again with Kevin Surace. He's a Silicon Valley innovator, a serial entrepreneur, CEO, TV personality, and edutainer, which is a word that we use a lot around here, right? Edutainment. Kevin has been featured by BusinessWeek, Time, Fortune, Forbes, CNN, ABC, MSNBC, Fox News, and has keynoted hundreds of events, from the Inc. 5000 to TED to the US Congress. I'm sure that was quite an event. He was also Inc. Magazine's Entrepreneur of the Year, a CNBC Top Innovator of the Decade, a World Economic Forum Tech Pioneer, and chair of the Silicon Valley Forum. He has a technical background with 94 worldwide patents and has built multiple startups from ground zero to a $1 billion valuation. So that's really an amazing resume, Kevin, and we're really pleased to have you here. [00:01:29] Speaker A: So happy to be here. Thank you for having me. [00:01:31] Speaker D: In preparation for the show, I went to ChatGPT to generate questions for this interview. Excellent. The first question is, does AI understand dad jokes? [00:01:44] Speaker A: With the correct prompting, you can ask it how it would interpret this joke: as a good joke, a bad joke, a fair joke, something that people would laugh at. And it's going to give an opinion on that. Now, even when I say the word opinion, of course I'm anthropomorphizing the darn thing. It doesn't actually have an opinion. Again, it's a math model. These large language models, they are math models, and they are guessing at the probability of one word coming after the next, based on your prompt and based on what it's learned. And it's learned everything we've ever written, virtually, right? So yes, it will opine on that. But the best use of a large language model like ChatGPT is to give you ideas that you didn't otherwise have. This would be the case even in a legal case. Give me some ideas that I might not have thought of, and it'll give you twelve of them. You wouldn't use them verbatim. They may be wrong, they may not be correct, they may not apply, but, wow, I've got ideas that I didn't have. It's like this assistant sitting next to me. Oh, absolutely. [00:02:40] Speaker D: When I asked that question, it came back with 32 potential questions to ask you. And if I had sat down and thought about it, I maybe would have come up with ten. So there's a lot more content there now than I would have been able to generate on my own. [00:02:55] Speaker A: Exactly. [00:02:55] Speaker C: Right. Well, I did go on your Appvance website, and that is a use of AI that I don't think a lot of people have talked about. Everybody knows. Well, not everybody. Most people know ChatGPT. But what you're doing with AI is kind of quality assurance. [00:03:10] Speaker A: Yeah, software quality assurance. Finding bugs in software. That's right.
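Kevin's description of a large language model as a math model "guessing at the probability of one word coming after the next" can be made concrete with a toy sketch. The snippet below is illustrative only: a tiny bigram counter over a made-up corpus, nothing like a real transformer-based LLM, but it shows the same next-word-probability idea in miniature.

```python
from collections import Counter, defaultdict

# A toy "training set". Real models learn from trillions of words;
# this is just enough text to illustrate the idea.
corpus = (
    "the model guesses the next word "
    "the model guesses the probability of the next word "
    "the next word follows the prompt"
).split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def next_word_probabilities(word):
    """Estimate P(next word | word) from the toy corpus."""
    counts = following[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# Which words are most likely to come after "the"?
print(next_word_probabilities("the"))
# Roughly: {'model': 0.29, 'next': 0.43, 'probability': 0.14, 'prompt': 0.14}
```

A real model does the same kind of scoring over an enormous vocabulary, conditioned on the whole prompt rather than a single preceding word, which is why prompting matters so much.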
[00:03:14] Speaker C: So when your software identifies the bugs, then what happens? Do they think similar? [00:03:19] Speaker A: Let me baseline this for a second, and then I'll answer the question, if you don't mind. Over the last year, people have come to think AI is ChatGPT, and ChatGPT is AI. That is one instantiation of work that has been done since the 1940s and 1950s in artificial intelligence. There are literally hundreds of algorithms, all of which can be applied, what we call applied, to a variety of fields. Right. And in fact, AI in most large businesses has been highly available for a decade or more to analyze big data. So we've been doing this for a long time. Facial recognition on Facebook was AI, is AI, right? All of a sudden, ChatGPT has become the soup du jour, the AI of the day. But it's just one version of a type of AI. It's just a very huge neural net built on a trillion phrases. So to answer that question, what we do at Appvance, and I'm involved in a number of companies, but Appvance is fascinating because millions of people worldwide try and test software, and most software is behind the firewall, like your ERP system, meaning it's for your internal use. A large bank may have 10,000 to 15,000 applications. Almost all of them run the bank, and maybe eight of them go to the outside world. It's really fascinating. So, of course, you want to test your ecommerce sites and things, but you've got to test the stuff that runs your company. And this is a really hard problem to solve. We've been working at it for twelve years and introduced the first product that uses AI about five years ago. And the idea is to generate automation scripts, call it automation or test automation, automatically, with virtually no human involvement. So you train the AI on what's important in your application, what the outcomes are, what you're looking for, and just let it go, generating thousands and thousands of flows trying to look for problems. And to date, AI finds way more problems than people writing test scripts themselves or people testing. Now, what's interesting about that is it can write these tests about 100,000 times faster than humans could. That's a big number, and it sounds like a marketing number, but it's a measurement, actually. And the bottom line is servers in the cloud are just way faster than our human brains, right? So we're going to get to a point, I think in the next five to ten years, where all software bugs are really found by AI. And people can analyze which ones are the most important to fix, but they're all going to be found by AI. It would be ridiculous to think that we're still sitting there writing test scripts in some kind of code like Selenium and hoping that it finds bugs for us. Right? It's going to be ridiculous. Already, with GitHub Copilot and also Codex from OpenAI, there are tools now that are making programmers about 50% to 60% more productive than they were just three months ago. It's amazing. And that already completes some of your code. Now, even that completion of code, that automatically generated code, isn't perfect, but we're getting to the point where we will be able to find the bugs. Then the next step is to find the piece of software that is causing those bugs, automatically generate new code that replaces the code that caused the bugs, and close the gap. This is fascinating. Now, a lot of you will be thinking, what do we do with the people? Well, we start focusing more on what it is we want our software to do and less about making it do it. Right.
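For contrast with the AI-generated flows Kevin describes above, here is roughly what one hand-written test script looks like in Selenium, the kind of code he predicts teams will stop writing by hand. This is not Appvance's product or method; the URL and element IDs are invented for illustration, and it covers exactly one path through one hypothetical application.

```python
# A minimal, hand-written Selenium script: one login flow, one assertion.
# The site URL and element IDs below are made up for illustration.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example-erp.internal/login")

    # Drive the UI step by step, exactly as a person scripted it.
    driver.find_element(By.ID, "username").send_keys("test.user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login-button").click()

    # One hard-coded check; every other flow needs its own script like this.
    assert "Dashboard" in driver.title, "Login flow did not reach the dashboard"
finally:
    driver.quit()
```

An AI-driven approach of the sort described here would instead explore the application and generate thousands of such flows automatically, which is where the claimed speed advantage comes from.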
What do we want it to do? And so some people are saying, what happens to all the programmers? We've got these millions of people who write code. There will still be code to write, but you will now be ten or 20 times more productive than you are today, able to generate far more features far faster. And we all want features, we want them faster, we want our software to do more, and we want it to be bug free. Yeah. [00:07:03] Speaker D: Every time I start to feel uncomfortable about AI, because a lot of what you're saying, honestly, does make me a little uncomfortable, I also hear the positive side. And then I look at a database for a business I'm familiar with that has all sorts of problems and inconsistent data, and I'm thinking, well, wow, wouldn't it be great if you could go through there and clean all that up? Because it would be just about impossible for a team of humans to do that. You look at the benefits and they're just irresistible. But then there's a price for that. And the price is we don't know what the world's going to be like if we make all of those changes. We can guess, but we don't really know. [00:07:41] Speaker A: We can look at history. When the wheel came out, if you were a person who carried things on their back and then there was a wheel, you'd go, my life is over. What will the world possibly be like if everyone has two wheels and then four? It's over, right? And if you were an accountant in 1985 and the spreadsheet came out, you said, I'm going to resist this horrible thing. It's going to take my job. There are more accountants employed today than there were when the spreadsheet came out. And they've all become spreadsheet experts, right? So all of these tools that we have put out over the years have made humans more productive. Thus the net result of all of this is that GDP goes up. And yes, there's a long tail there to make that happen. But the more productive companies are and people are and countries are, the higher the GDP, and ultimately sort of a better living comes out of that. Right? So if you as a lawyer could handle twice as many clients as you have, and we're not there with AI, but if you could, that's pretty good for your law firm. It's probably good for the client. It's really good for everybody. Everybody gets faster service. You've got more clients. Life is good. [00:08:51] Speaker D: Well, I'm glad to hear somebody saying that more lawyering is actually a positive thing. But anyway, we have to take a break. We'll be right back. Fascinating discussion here with Kevin Surace. Passage to Profit with Richard and Elizabeth Gearhart. We'll be right back. I'm Richard Gearhart, founder of Gearhart Law. We specialize in patents, trademarks and copyrights. You can find out more at gearhartlaw.com. We love working with entrepreneurs and helping their businesses grow. And here is our client, Ricky, to tell it like it is. [00:09:20] Speaker E: Hi, I'm Ricky Frango, founder and CEO of Prime Six. We manufacture high performing, clean and sustainable fuels like charcoal and logs. We've been working with Gearhart Law since the beginning, really, and they've helped us figure out the trademarks, the patents, everything that has to do with product development and how to protect our inventions. And we're extremely grateful for the wonderful team that has been supporting our business since day one. [00:09:45] Speaker D: Thank you, Ricky.
To learn more about trademarks, go to learnmoreabouttrademarks.com and download our free Entrepreneur's Guide to Trademarks, or book a free consultation with me to discuss your patent and trademark needs. That's learnmoreabouttrademarks.com for your free booklet about trademarks and a free consultation. [00:10:01] Speaker A: Now back to Passage to Profit, once again, Richard and Elizabeth Gearhart. [00:10:06] Speaker C: And our special guest today, Kevin Surace. This guy will blow your mind with what he knows and what he's done. And listening to him is such a pleasure. Richard and I have hogged the conversation, so now I'm going to throw it to Kenya. Kenya, do you have a question for Kevin? [00:10:19] Speaker F: Oh, it was a great conversation. And I actually came across an article in Forbes magazine about just some of the downfalls and the pitfalls of AI. And one of the things that they bring up in the article is bias and discrimination. So it says that AI systems can inadvertently perpetuate or amplify societal biases due to biased training data or algorithmic design. And then there was also an issue with ethical dilemmas. [00:10:45] Speaker C: Right? [00:10:46] Speaker F: So instilling moral and ethical values in AI systems, especially in decision making contexts with significant consequences, presents a considerable challenge. So I just kind of wanted to see what your take was on all that. [00:10:59] Speaker A: Great. It's a really great question. So let me separate AI systems that you're building within your business, say HR or whatever, from the large language models, okay? And we'll just talk about them really quickly, separately. If I'm building AI, and I'll give you a great example: a lot of people have to go through all of your HR data for all of your employees and make a judgment call on who makes it to the top in the company versus who doesn't. This is a fascinating thing, right? We all want to study that. Who makes it to president? Who makes it to VP? Because you might bring in 3,000 people a year, and only one every five years makes it to vice president. Why is that? What makes them special? Now, that data is highly biased. You didn't mean to make it biased, but it is. And I'll give you an example. Let's say at the VP level, you had someone that graduated, say, from my alma mater, Rochester Institute of Technology, and they interview all candidates, or all candidates in this division. Well, if a candidate comes in and happens to say, oh, by the way, I went to your alma mater, the chance of them getting hired is much higher than someone who didn't go to your alma mater, just because you already bond on something. You bond on, it's Rochester, it's Rochester Institute of Technology, did you have this professor, et cetera, et cetera. And again, not getting into race or creed or anything else, we have a bias. And so that bias overemphasizes people that went to RIT, not because they're better students. I think they are, but I'm biased. But so did that VP. And by the way, we all do this. Without trying to introduce bias, we introduce bias, and so those biases are stuck in the data, and now you have to figure out how they got stuck in there. Why is that? And the chance of figuring out that that one VP 20 years ago interviewed and mostly hired people from RIT, it'd be hard to figure all that out. Right. So that's one area, and you can think of 15 others. So that's a challenge.
Now, as for large language models, they've gone out and learned from everything we've ever written that's on the web, and that has a bell curve of representation. Well, that bell curve itself is biased. So you could get certain people of certain countries, the United States would be an example, that put far more, or have put far more, onto certainly the English web than any other country, for lots of reasons: population, and access to the Internet earlier, and GDP, and things like that. So we are overrepresented. And so if you naturally ask even an image generator to generate an image of a beautiful human or beautiful woman or beautiful man or whatever it is, it's going to generate right at the middle of that curve, which is unfortunately likely white, likely thin, likely looks like a model, because it's right in the middle of the curve. It doesn't know any better. Now, if you prompt it differently and say, what I want is something over here, of course it has the capability to generate that. But if you don't pre-prompt it and you're not careful, you're getting something down the middle, because that's what a model does. It doesn't know any better unless you ask it. So these biases are built in, and it's hard to get rid of them. Now, here's the bigger problem, if you want to talk about the problem: the more we use these models, for example, generating content for our blogs and blog posts and our advertising, the more that people don't realize that they can prompt these things to go to the edges and do some really wonderful things. You could prompt it to generate an image of, rather than just a man, an older man with gray hair who's a little overweight, blah, blah, blah, right? You could do that, but people don't. So what happens is they'll start generating the same thing, and that'll make the middle of the bell curve bigger and bigger and bigger when the model goes back out to learn from the web, because it's now learning from its own generated content, not knowing it generated it or another model generated it. So it could end up overemphasizing the middle of that bell curve and continuing to de-emphasize the breadth of the human experience. All of the models that we've got access to today, including Bard and ChatGPT and Llama and others, have a huge rules engine at the output, and I mean millions of rules. So when you say, do you love me? It now has a rule that says, even though I recognize how to construct sentences that would reply to that, because I read all these novels, I'm not going to do that. I'm going to say, I'm a model that is incapable of love, because before those rules were in place, people took it to say, oh, this thing is sentient, it knows how to love. It doesn't know how to love. It just puts sentences together. That's all it is, right? It's not sentient at all. I can guarantee it. It's just math. So lots of rules, like, how to build a nuclear bomb? I'm sorry, I'm not able to discuss that. Right. Or things that we don't think are appropriate for our society. So we put millions of rules on the output of these things. So you don't always see the original output. You see that filtered by a set of rules. [00:15:45] Speaker D: So the people who control ChatGPT, or the AI engines, or the large language engines, are the people who set the rules. [00:15:52] Speaker A: Yeah, that's right. [00:15:53] Speaker D: And who is doing that now?
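What one of those output rules can look like in practice: the sketch below is a deliberately simplified illustration of a rules engine applied to a model's raw text before the user sees it. It is not how OpenAI, Google, or anyone else actually implements guardrails; the patterns and canned replies are invented for illustration, and real systems rely on far more than keyword matching.

```python
import re

# Illustrative only: a tiny output-side "rules engine" of the kind described
# in the conversation. Real guardrail systems are far larger and smarter.
GUARDRAILS = [
    # (pattern checked against the request or the raw output, canned reply)
    (re.compile(r"\bbuild (a|an)? ?nuclear (bomb|weapon)\b", re.IGNORECASE),
     "I'm sorry, I'm not able to discuss that."),
    (re.compile(r"\bdo you love me\b", re.IGNORECASE),
     "I'm a model and I'm not capable of love."),
]

def apply_guardrails(prompt: str, raw_model_output: str) -> str:
    """Return the model's raw output, or a rule-mandated replacement."""
    for pattern, canned_reply in GUARDRAILS:
        if pattern.search(prompt) or pattern.search(raw_model_output):
            return canned_reply
    return raw_model_output

# The raw model text never reaches the user when a rule fires.
print(apply_guardrails("Do you love me?", "Of course I love you!"))
# -> "I'm a model and I'm not capable of love."
```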
[00:15:55] Speaker A: Actually, OpenAI and Google and others have hired people overseas in all kinds of crazy countries, from Turkey to Vietnam, and they've given them a set of areas that they never want the AI to be able to respond in. And we hire people over there because they're a dollar an hour instead of $25 an hour, basically. Now, it's a hard job, because you are looking at the requests that people are making, and every day you go, we don't want that response to ever be there again. Let's put a guardrail in, call them guardrails. Let's put a guardrail in to not allow you to get that response. So people try to break these rules all the time and try to get the thing to jailbreak, basically. And then their job is to put it back in jail. So for OpenAI, it was over a thousand people writing rules for a year. Over a thousand people for a year. And they were each writing potentially 100 rules a day. Think about that. [00:16:48] Speaker C: Is anybody reviewing them? Ultimately, somebody is controlling this. [00:16:53] Speaker A: That is true. That's true. You're right. You can choose to use those models. There are models that are out in the open source world today that have no rules. You could write your own rules. They're just free, and they will tell you that they love you. We dealt with this. I built the first AI virtual assistant models back in the 90s that ultimately got licensed and became things like Siri and General Motors' OnStar and Alexa and all of those things. So the core technology was developed in the late nineties at a company called General Magic. And her name was Mary. Actually, her literal name. The person who recorded the voice, her name is Mary Mac. Literally, Mary Mac. And so Miss Mary Mac would record the voices, and she recorded thousands and thousands and thousands of words and phrases and all the things to have her literally talk to you on your phone. It wasn't one day before people said, oh, Mary, I'm so in love with you. Will you marry me? And we had to say, I don't know if these guys are stupid or whatever, but how are we going to respond to that? So we decided not to block that. Instead, we'd respond the way a human would, which is, oh, I'm already taken, or, oh, I'm not available. And she had multiple random responses that would come back. And we didn't make any bones about it. We just said, you want to play that game? We'll play along. We can absolutely create a large language model that has no guardrails around that and will interact with you in every way of a relationship. Right. Forget robots right now, just on the screen. Every way of a relationship. And the reason it can do that is this: remember, these models were trained on fact and fiction. If they've read enough fiction novels, they clearly can describe and can reply to loving kinds of things, right? So if they're programmed right, and they've got a large enough data set, again from novels, they could be very convincing as a partner. And especially in a country where there might not be enough women, where there are a lot of men and not enough women for a whole bunch of reasons you could imagine, for those guys this is the only true, I hate to say true partner, because that's not fair, but you know what I'm saying. It's really the only true kind of interaction they're likely to have, if the guy wants one with a woman. [00:19:01] Speaker C: That's really sick. [00:19:05] Speaker A: It may be, but let me tell you about the good use of that.
How about older people? We're all getting older in this country, and we're getting to a time where, in the next 20 years, we'll have more people over 80 than we have under 20. Of course, the problem with that is there's no one to take care of them. So they need companionship, but you just can't; even as their child, you can't be there 24/7. So a digital companion, of which there are already several available, is amazing for these older people. They feel that they have a relationship with this digital companion. They're not stupid. They know it's digital, but at least it's a relationship. And I'll tell you what, that digital companion will listen to your story, the same one, over and over again. [00:19:52] Speaker D: We have time for one more question before we wrap up. Where do you see the future of AI going? [00:19:57] Speaker A: Regular AI, outside of LLMs, is going to continue to get better at sorting through our data, fixing our data, and giving us real insights into that data, including pattern recognition. Pattern Computer, another company I work with up in the Pacific Northwest, and what they're doing to look at drugs that can treat certain kinds of cancers and finding the patterns to match what can happen, is just unbelievable. It's unbelievable. So these are huge breakthroughs. Humans would have never said that molecule, really wouldn't have tried that. And that's going to change medicine forever. And this is very exciting. The second thing is, when you look at large language models, five years from now, maybe two years from now, it's just going to be a tool that we all use. Of course we use Excel for math; we use an LLM for language. It's what we do. If you write blog posts all day, of course you're going to use an LLM to give you the first start of writing that blog post. You'll probably edit it, you'll probably change it, but you might have gone from taking two days to write a blog post and edit it and think about it and sleep on it to like 20 minutes. So we may have made you 10, 20, 40, 50 times more productive. And in the end, I call this, actually, I'm borrowing this from Reid Hoffman, so I will borrow it from him: amplified intelligence. AI. Amplified intelligence. What we're doing is amplifying your intelligence, because in the end, you're in charge. You own it. You decide what the prompts are, and you decide what you use from its outcome. But we're amplifying your intelligence. We can make you 5, 10, 20 times the number of brains that you had. So instead of one brain power, you could have 20 or 30 or 50 brain power. [00:21:32] Speaker D: That sounds good to me. [00:21:33] Speaker C: Pretty scary. [00:21:34] Speaker D: Kevin Surace, Silicon Valley innovator, entrepreneur, thanks so much for joining us. [00:21:39] Speaker B: Before we go, I'd like to thank the Passage to Profit team: Noah Fleischman, our producer, and Alicia Morrissey, our program director. Our podcast can be found tomorrow anywhere you find your podcasts. Just look for the Passage to Profit Show. And don't forget to like us on Facebook and Instagram. And remember, while the information on this program is believed to be correct, never take a legal step without checking with your legal professional first. Gearhart Law is here for your patent, trademark, and copyright needs. You can find us at gearhartlaw.com and contact us for a free consultation. Take care, everybody. Thanks for listening, and we'll be back next week.

Other Episodes

Episode 188

January 23, 2023 01:09:40

Tips for Effective Marketing in 2023 with Mark Drager, 01-22-2023

Richard Gearhart and Elizabeth Gearhart, hosts of The Passage to Profit Show along with Kenya Gipson interview Mark Drager from Phanta Media, Mark Priddy...


Episode 170

September 05, 2022 00:53:33

Develop a Career in the Spotlight with Amy Scruggs, Media Coach, TV Host & Vocalist, 09-04-2022

This episode of The Passage to Profit Show features Media Coach, TV Host and Vocalist, Amy Scruggs, Tesa Harster from The Angel Campaign and ...


Episode 222

March 07, 2024 00:06:57

Trademark Smackdown: Dwayne 'The Rock' Johnson vs. WWE

Join us as we dive into the electrifying world of trademarks and wrestling, spotlighting none other than the legendary Dwayne 'The Rock' Johnson. From...
