Hi everybody, happy Wednesday! Welcome to our webinar. My name is Ann with XLT, and I’m here with my colleague, Dr. Dan Gron. Today, he will be presenting “Generative AI for Executives,” where he will cut through the hype and give you some real talk on generative AI. Sound about right, Dan?
Dan: Yep, awesome!
Alright, we will start in just a few minutes. I just wanted to welcome you all on behalf of Excel Instructor-Led Training. Together, we are Accelerate, ExitCertified, and Web Age Solutions. Among the three of us, we have more than 60 years of combined IT experience, and we’re here for every moment of your employee technical training lifecycle.
We each have our specialties:
Accelerate specializes in customized courses for teams.
ExitCertified focuses on vendor-authorized certification courses.
Web Age Solutions offers upskilling programs for entire organizations, as well as public training.
No matter who you get in touch with—Accelerate, ExitCertified, or Web Age Solutions—we’ll connect you with the right person to get you the training you need, whether it’s public for individuals, private customized, or even on-demand self-paced learning.
We teach a wide range of topics and courses, including:
Generative AI
AI/ML
Data Science
Python
Data Visualization
Cloud
Microsoft
And much more
As a thank you for joining us today, we are offering 25% off all courses exclusively for webinar attendees. You’ll receive more information after this presentation, and I’ll also put this in the chat. This session is being recorded, so you’ll have access to it whenever you want.
Now, let’s meet our presenter, Dr. Dan Gron. Dan has been working in tech since 2006 and is a seasoned AI researcher and cybersecurity expert. Thankfully, he also teaches with XLT. He spent over a decade working with the Department of Defense and the US Intelligence Community to deploy cutting-edge AI intelligence applications. His current research focuses on AI and cybersecurity, which is very topical and important these days.
Dan doesn’t just teach this stuff—he lives and breathes it. He’s won multiple awards for his publications on AI and responsible AI. What really makes him stand out is his excellent communication skills. If you’re not sure where to start with AI/ML for your organization, he’s great at pinpointing what you need and guiding you to the right curriculum. He’s a sought-after trainer, speaker, and panelist, and he develops and teaches many of our courses, including ethical and responsible AI.
I couldn’t think of a more perfect person to present today. Thank you so much for doing this, Dan.
Dan: Thank you! Would you just confirm that you’re able to see my screen?
Ann: Yes, I can see the “Practical AI for Executives” slide. Fantastic!
Dan: Awesome. I’m really excited to talk today. As I prepared this, I realized there are so many different avenues to explain generative AI, what’s happening with the technology, and how it’s impacting the world today. One of the things I fundamentally believe is that knowledge is the antidote to fear and uncertainty. If we take the time to look a little deeper at how some of this technology works, it will give us practical skills to think through the different applications and ethical implications of the technology. Not just in a rote, checklist manner, but to get a real intuitive sense of what is happening with the technology and how it can be applied.
With that in mind, I’ll beg your indulgence to go a little deeper than is often done with an executive audience, but for the purpose of explaining this technology and seeking to demystify it.
I’ll begin with the question: What is intelligence? Intelligence comes from the Latin “intellectus,” from the verb “intelligere”: “inter” (between) plus “legere” (to choose), so at its root it means “to select between.” I love this definition because it implies that the core of intelligence is simply selecting between options. Today, intellect is often conflated with understanding, but if we boil it down, let’s think about intelligence, and AI, as selecting between options.
I demonstrate intelligence when I select which route to drive to the store, which of the 50 brands of toothpaste to buy, or what to watch on Netflix tonight. If this is intelligence, then artificial intelligence is simply intelligence demonstrated by machines or non-human entities.
AI works by learning from data. There’s a big set of information available, and we have algorithms that build up some representation of the patterns in the data so that we can use those patterns to make selections. There are a couple of different ways AI can be trained:
Supervised learning: Learning from examples with input and output pairs (e.g., 2 + 2 = 4).
Unsupervised learning: Learning patterns from data without explicit input-output pairs (e.g., clustering similar items).
Reinforcement learning: Learning in an environment where actions have long-term rewards or penalties (e.g., autonomous vehicles).
On the right in this animation is something called linear regression, one of the simplest examples of AI. It fits a line to a cluster of points. AI has advanced so much that we don’t even really think of linear regression as AI anymore, but it does fit under the category of supervised learning.
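The linear-regression fit Dan describes can be reproduced in a few lines. This is just an illustrative sketch in Python with NumPy: the data points are synthetic (invented here, not taken from the webinar’s animation), scattered around a known line so the fit has something to recover.

```python
import numpy as np

# Synthetic "cluster of points" scattered around the line y = 2x + 1.
rng = np.random.default_rng(seed=0)
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(scale=1.0, size=x.size)

# Supervised learning at its simplest: learn slope and intercept
# from input/output pairs by least squares.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"learned line: y = {slope:.2f}x + {intercept:.2f}")
```

Because the noise is small, the fit lands close to the line the data was generated from, which is exactly the sense in which this counts as “learning from data.”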
So, what can AI do? It can classify inputs, such as identifying spam emails. It can also capture meaning. For example, there was a Charli XCX post where she used the word “brat” in a positive sense. The embedding of the word “brat” in AI models captures this usage, even though it’s not in the dictionary: because large language models (LLMs) have seen “brat” used in so many ways, the embedding captures the positive sense of “brat” in this context.
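To make the embedding idea concrete, here is a toy Python sketch. The two-dimensional vectors and the word list are pure inventions for illustration; real models learn embeddings with hundreds or thousands of dimensions. The point is only that closeness between number-vectors can stand in for closeness in meaning, measured here with cosine similarity.

```python
import math

# Toy, hand-made embeddings (invented for illustration -- real models learn
# high-dimensional vectors from text, they are never written by hand).
embeddings = {
    "brat":      [0.9, 0.8],   # pretend usage data pulled it toward "confident"
    "confident": [0.8, 0.9],
    "misbehave": [0.9, -0.7],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, -1.0 means opposite."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# In this made-up space, "brat" sits closer to "confident" than to "misbehave".
print(cosine(embeddings["brat"], embeddings["confident"]))
print(cosine(embeddings["brat"], embeddings["misbehave"]))
```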
Embeddings, even though they are just numbers, have the ability to represent the meaning of text, and that is incredibly important to the way LLMs work. Think about a good memory you have. Is that memory one singular moment, or is it a collection of small moments? A memory for us is very much like an embedding for a model: it captures the holistic context of the tokens. At the same time, our memory can be faulty. When I think back on things that have happened to me, it is very possible for me to misremember them. The same goes for embeddings: because the model has been trained on so much text, it is very easy for it to misremember, and to do what we call hallucinate. So while there is a holistic understanding baked into the embeddings, they can’t capture all the specifics, just as our brains can’t. It’s more of a gestalt, an overall understanding.

Now, one cool thing is that there is a direct connection between the embeddings LLMs learn and fMRI scans of human brains. In fact, you can map them back and forth and say, “This word in embedding space corresponds to this pattern in a brain.” What that suggests is that something more fundamental is happening. There is hype going around that generative AI is already sentient or conscious. I don’t think it’s there yet. It’s possible it gets there one day, and that’s a whole conversation, but for now it is a complex algorithm working on these embeddings. That said, generative AI is not just fancy math. Something is happening with these learned structures that reveals something fundamental about our reality. If our brains are learning an embedding, in chemistry and neural pathways, that is similar to what LLMs learn, even though we are two entirely separate kinds of systems, then we are converging on something more fundamental than the pattern of any one brain. It might be a mathematical representation of our reality. How cool is that?

So: we’ve taken in our text, converted it into identifiers, and converted those into embeddings. We could do math on those embeddings to analyze their semantic meaning, but instead we pass everything through the model and simply predict the next token. This is where the math gets complicated. In reality, we’re not just predicting one token. There’s a whole set of candidate tokens, each with a probability, and we sample the next one almost at random, using strategies that bias the choice toward more likely tokens. For instance, take the input text “It’s a beautiful day, don’t let it get ___.” If I stop there and ask the model to predict the next token, it might give me probabilities like: a 21% chance it’s “you,” 19% “to,” 16% “in,” 6% “any,” 6% “too,” and 3.5% “away.” Any of these could be a valid next word. Humans do something similar: we start a sentence without necessarily having the ending in mind, building it on the fly. That’s what the models are doing. If this model picked “you” as the next token, it doesn’t know what the next five tokens will be, but it knows “you” is a good next word to say. And once we have this, we can keep predicting tokens. We can let it say “away” and continue word after word, and as it builds up those tokens, that is how we get the output of generative AI, of an LLM, of ChatGPT. That is the process: from input, to tokenization, to embedding, to prediction, and back.

The core thing I want you to understand here is that this is mathematics. It is not magic, and it is not indistinguishable from magic. It is a very real process by which we learn patterns in data and then reproduce those patterns, in ways that are similar to our brains in some respects, and dissimilar in others.
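The token-sampling step described above can be sketched in a few lines of Python, using the candidate words and probabilities from the slide. (`random.choices` treats the weights as relative, so they don’t need to sum to 1, which is convenient since the listed options cover only part of the full distribution.)

```python
import random

# Candidate next tokens and probabilities from the slide
# ("It's a beautiful day, don't let it get ___").
candidates = ["you", "to", "in", "any", "too", "away"]
weights    = [0.21, 0.19, 0.16, 0.06, 0.06, 0.035]

rng = random.Random(42)  # seeded so the run is repeatable

# Sampling "almost at random", biased toward the more probable tokens --
# one step of the loop that generates a whole reply token by token.
next_token = rng.choices(candidates, weights=weights, k=1)[0]
print(next_token)
```

Run it repeatedly without the seed and “you” comes out most often, but every candidate appears sometimes, which is why the same prompt can produce different replies.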
So, if we look at what the full generative AI architecture looks like, the model is actually a small component of it. In this diagram, the model is all the way over on the right, and it’s just one little piece. If I go online and interact with ChatGPT or Gemini, I’m not putting my text directly into the model. There’s a whole series of steps before it and after it that make the system better than the model on its own. There are some caches in here that help make sure we’re not overusing the GPUs or other system resources. When I, as the user, enter text, there’s an enhancement stage, with options to do things like a web search or a look through documents, and to provide those results along with my request to the model, in something called retrieval-augmented generation. You can think of that as a test with open notes. The model isn’t just answering a user’s question from memory; it gets to see results from the internet or from a document alongside the question. The embeddings alone might not contain enough information to answer every question out there, like “What was the population of Algeria in 1986?” But if the system can look that up, find a database that includes population figures for Algeria, and give it to the LLM alongside the question, then the LLM can select that number and give it to the user far more reliably than if it were relying only on those embeddings, those memories of things it learned in the past. There are also safety guardrails on the input and output to make sure the model isn’t doing anything harmful or anything that would be a liability, whether in security or ethics. And it has some options to take actions and update things on the output.
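The “open notes” idea behind retrieval-augmented generation can be sketched in miniature. Everything here is invented for illustration: the documents, the figure inside them, and the crude word-overlap scoring. A production system would use embedding search over a vector store and then send the assembled prompt to an actual LLM.

```python
# A toy sketch of retrieval-augmented generation (RAG). Documents and the
# scoring are invented for illustration; real systems use embedding search
# over a vector store, then call an actual LLM with the assembled prompt.
documents = [
    "Algeria population 1986: roughly 22 million (illustrative figure).",
    "Router reset procedure: hold the button for 10 seconds.",
]

def retrieve(question, docs):
    """Return the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "What was the population of Algeria in 1986?"
context = retrieve(question, documents)

# The "open notes" prompt: retrieved text rides along with the user's question.
prompt = f"Context: {context}\n\nQuestion: {question}\nAnswer using the context."
print(prompt)
```

The model never has to dredge the number out of its embeddings; it just has to select it from the notes it was handed, which is why RAG answers this kind of lookup question much more reliably.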
On the output side, for example, it could send a chat message via Slack just by hitting an API. So that’s what the generative AI architecture looks like. At its core, it’s a model generating text through next-token prediction, but in totality it’s something beyond that: it includes the enhancement, the context around the model, and the safety and guardrails built in. If this is beginning to look a lot like a complicated software architecture, that’s because it is. There’s a lot of hype that generative AI is an easy button for new features, that you can simply throw AI at a problem and it will solve everything. While AI can do some really amazing things, it takes skill and training to implement successfully, because it is very much a new way of thinking about software and a new way of doing development.

Speaking of implementing AI, there is a specific kind of AI team. It’s not the same as a software development team; it takes a somewhat different set of skills. Over time, I’m sure the roles, titles, and responsibilities I’m about to share will become more standardized across companies, but for the time being they vary widely: one company might call a role one thing, another might use a different term. Roughly, there are four main roles on an AI team. There’s a data engineer, who works with the data to make sure it’s accessible, that it’s the right data, and that there are no problems in the pipelines bringing the data from point A to point B. There’s a data scientist, who works to turn the data into models. When I say “models” here, it’s not just that they’re building LLMs or traditional predictive AI systems; it’s that they’re using the data to get to some output. The LLM may already be available, but they’re going to use that LLM to solve some problem. Then there’s an ML engineer, who helps take the model the data scientist has produced and get that solution into production.
That’s really useful, because the ML engineer has a split of expertise between a traditional DevOps expert and a data scientist. They understand a bit more about how these unique AI systems work, but they also know how to hook them up to the modern architectures we use to serve applications. And lastly, to keep everyone in the executive and director roles happy, there’s a project manager, who smooths coordination and ensures timely delivery. I think that role is core to an AI team, because you have such diverse people working on it.

Now, how do AI teams compare to traditional software teams? While a software team focuses on specific functionality, an AI team is likely focused on building intelligent models or intelligent systems. Where the software team might implement an API to check the status of a support ticket someone logged, the AI team might build a chatbot that shares the status of those issues with the user and provides updates and context. Certainly the AI team can develop specific features that use AI, but that is not necessarily their focus. In terms of skills, the software team has software skills, while the AI team has a deeper understanding of AI, and certainly of statistics and algorithms, because those are at the core of what’s happening. You need statistics to evaluate the output of these systems and to calculate metrics that are meaningful for judging how well the models perform. In terms of development, we have been doing software development for a long time now. I have a Fortran book on my shelf back there that was published in 1964, and I have actually used it as a reference; it still works. I won’t tell you what I had to look up in a book from 1964, but I did.
On the other hand, we’ve only been doing AI in production for a few decades, maybe two, and really in earnest only over the past couple of years. So the development process is much more iterative and experimental, and it is grounded in the data and the results. It’s not just that we’re hitting a requirement; the AI team will say, “We need to get to this level of accuracy, or this level of performance, to consider the result a success.” And when I say it’s iterative, I mean you start with baseline functionality, and that baseline might not be great. It might be accurate only 60% of the time. But from there you build, and eventually you have a model that is accurate 70% of the time, then a system that works 80% of the time, and hopefully, eventually, 99%. That’s what I mean by iterative. The deliverables are different too: the AI team delivers algorithms and insights, where software teams deliver applications. And the challenges for the AI team revolve much more around data quality, model bias, and very complex deployment, especially if you are in a highly regulated environment, or working with high-risk models in the EU and subject to the EU AI Act, where you’ll need to provide some explainability for these models.

You know what, let me pull up a Google page and just ask, “What is the EU AI Act?” We can see a Google Search Labs result. You’ve probably seen this before; this is generative AI. If I look here, I can click “show more,” and I can see that generative AI is experimental. It tells me that generative AI is being used, and if I click through to “learn more,” I can learn more about how AI is being used here. That hamburger menu isn’t pulling up right now, but typically it will show me that I can give feedback, that there’s a source listed, and that these results are not personalized.
So one of the challenges of deploying AI is not just explainability, but making sure your users know how AI is being used, and being transparent about its usage. Some hype and reality here: the hype is that generative AI can be implemented by anyone. It is certainly true that anyone could implement it, but to really ensure success, it will probably take upskilling some of your software teams, to make sure the investments are worthwhile and to avoid some of the common pitfalls that come with the implementation of AI.
I want to talk briefly about the challenges of AI project management. I think there is a big difference between AI project management and software project management, and it really does require a different way of thinking. With AI and generative AI project management, data is critical. Software might depend on data, but data is rarely the thing that determines the success or failure of a software project. There can certainly be highly complex software, especially in simulation; I used to do atmospheric simulations, and some of those can be more complex than AI models. But AI models are almost always complex, and in some cases unpredictable. Experimentation is necessary, and requirements change all the time. One thing that’s often ignored is that AI systems need to be upgraded. Think about a software product: I’m sure all of us have interacted with a software system that is 10 or 20 years old and still chugging along. Sure, you might have to go turn on a Windows 2000 machine to use it, but it plays a core role in the business, nobody has paid to replace it yet, so you put up with it and let it keep operating. AI models don’t have that kind of dependability, because the data coming in is ultimately tied to the real world. By the real world, I mean human interaction and the natural, physical world, not the digital world. As the real world goes on, it changes. An AI model trained 10 years ago probably doesn’t capture how language works today, or how certain words interact today, because language has evolved over those 10 years. So the models have to change continually. Explainability can be incredibly challenging. And one of the really big differences is evaluation and deployment.
To be able to say “Is my generative AI working?”, you need appropriate metrics, and you need to make sure you’re not just measuring accuracy, but also checking that when there are failures, those failures aren’t significant enough to have a major impact. That sort of evaluation isn’t just running unit tests or regression testing; it’s a complicated process of making sure these models will actually work for the business purpose. One of the things I love to use in evaluating models is a cost-effectiveness ratio, an idea drawn from health care. You say, “Here’s our support system that relies on humans for tech support; this is what it costs, and this is its value.” Then you ask: if we replace those humans with an AI system, what is the cost difference, how much cheaper is it, and what is the value difference? What is the trade-off between cost and benefit? You want to be able to say, “It might be more expensive to use the humans, but we can’t afford the risk, or the decrease in quality, that comes with the AI system right now.” That is the kind of evaluation you really have to do to understand it.

Now, one piece of hype: generative AI is just an API, so it’s just a software project. In reality, generative AI is experimental and requires time for research. I like to say that AI is both research and development, because you are going to need time to explore, to experiment, to do hypothesis testing. If you don’t build that time into the project, if you just assume this is software development, then you are setting the team up for failure. There are different approaches to managing research projects than development projects, and when the two have to coincide and live together, it takes a lot of knowledge and a lot of careful balancing to make sure you don’t put too much emphasis on one or the other.
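A cost-effectiveness comparison like the one described above can be written out with a few made-up numbers. Every figure here is illustrative, not real data; the structure, dollars saved per unit of quality given up, is the part borrowed from health economics.

```python
# Cost-effectiveness comparison for a support desk, with made-up numbers.
# "quality_score" stands in for resolution quality (1.0 = perfect).
human = {"annual_cost": 500_000, "quality_score": 0.95}
ai    = {"annual_cost": 120_000, "quality_score": 0.80}

cost_saved   = human["annual_cost"] - ai["annual_cost"]
quality_lost = human["quality_score"] - ai["quality_score"]

# Dollars saved per point of quality given up -- the trade-off Dan describes:
# the AI system is cheaper, but is the drop in quality worth it?
ratio = cost_saved / quality_lost
print(f"${cost_saved:,} saved at a cost of {quality_lost:.2f} quality points")
print(f"about ${ratio:,.0f} saved per quality point sacrificed")
```

Whether that ratio is acceptable is a business judgment, not a math question; the point of computing it is to make the judgment explicit instead of implicit.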
So all of this has been laying the groundwork. We have generative AI, we understand a little about how it works, and we know that it’s not ordinary software and not an easy button that fixes every problem we have. In these last few minutes, I want to talk a little about ethics. At its core, AI ethics is about ensuring that the use of AI technologies aligns with human values and benefits society as a whole. What I really mean is that we want to be proactive rather than reactionary. If you look at the history of technology, there are things like DDT that companies were built on and wildly successful with, but when DDT’s ethical standing shifted, those companies suddenly lost a huge amount of revenue, fell out of public favor, and some shut down over it. To me, AI ethics is about making sure that when we deploy AI, it benefits society, but just as importantly, that we’re not deploying AI that in two years will be banned, or so controversial that it has to be pulled back, causing reputational harm. The hype here is that AI ethics is just about preventing some doomsday scenario where all the humans die at the hands of Skynet. In reality, AI ethics is practical: it serves business goals, it protects the return on investment, and, hopefully, it also stops AI from taking over the planet.

To do that, we have really practical tools. We have the NIST AI Risk Management Framework. It has been adopted by the federal government, by the Department of Defense, and by California. It is a wildly successful framework, even in its early phases, for planning out AI risk, and it involves these functions: you set up governance to make sure you have appropriate systems for AI risk management; you map out where your risks are; you measure them, so you can point to a metric that defines each risk; and then you manage them.
them whether it's Insurance whether it's um say I'm going to mitigate that risk I'm going to avoid it entirely and do something else um and what's really useful about the nist AI risk management framework is that it's practical it's broken down if you go to your their website you can see what each of those phases is about and get actions about how to follow and Implement them at different levels of fidelity but more importantly the AI RMF talks about the characteristics of trust things that are necessary for us have trust in AI systems and most importantly the systems are going to be reliable they're going to work and they're going to be accurate um they're going to be accountable and transparent like what I showed on Google where it said generative AI is experimental it's you can see that AI is being used you don't want your AI to do things that are unsafe whether it's to actively harm people or to break systems kind of go Rogue and use so much of your GPU that it catches fire you want to make sure that the AI is secure that it's not accessible to malicious users that if there's an attack on it that it's going to be resilient you want to make sure that it's not leaking people's private information and you want to make sure that any of the biases in the model are carefully managed so that you're not outputting um
All of these are characteristics of trust that aren’t only true of AI; they’re true of companies in general, and of humans in general. You want your people to be accurate, transparent, and safe; you want them to keep things private. Sometimes I think about what it takes for me to trust a human, and it’s much the same as what it takes for me to trust an AI system.

Now, when it comes to generative AI, there is a unique set of risks, and the NIST AI Risk Management Framework provides a good list of them. There is the risk of making chemical, biological, radiological, and nuclear information more accessible; hopefully that doesn’t apply to many of us, and if we’re using LLMs through APIs, it’s a much lower risk. Hallucination is a big one, called confabulation here: a confident assertion of truth with nothing to ground it on. Don’t get me wrong, humans are perfectly capable of hallucinating too, stating something as accurate with absolutely no basis in reality, but models do it as well, and we need to make sure we are not simply reliant on their output. There is the risk of generating text that makes violent recommendations; some of these LLMs have at times recommended self-harm to users, which is absolutely a problem. There is keeping data safe. And if you have environmental goals as a business, consider how you roll out generative AI in a way that’s environmentally friendly. It could be something as simple as saying, “We know this particular cloud region runs on 100% renewable energy, so we’ll do all our generative AI processing in that region or that set of regions.” Having some way of thinking through the environmental aspects is incredibly useful. Information integrity is one we think of in terms of misinformation and disinformation.
Information security is my all-time favorite: making sure we don’t make hacking too easy for people with generative AI. Intellectual property is a huge one. At most of our levels, we won’t have to worry about the kind of lawsuits playing out between OpenAI and The New York Times over the use of content. The bigger concern is that, right now, anything generated by AI does not have a copyright; it is uncopyrightable. So there are concerns both that you might be reproducing someone else’s intellectual property, and that the output of the LLMs isn’t intellectual property of your own. You always want to make sure there’s nothing obscene in your generative AI uses; there are huge problems right now with generated CSAM and non-consensual intimate imagery. Then, as always, there is bias. And as you integrate generative AI into your supply chain, make sure it’s transparent and isn’t introducing risks you aren’t aware of. I think that covers most of the risks unique to generative AI. I could go into a lot more specifics, but it’s such a broad category that which risks apply really depends on your application.

One of the hypes you’ll hear around generative AI risk is that it takes an advanced degree in ethics to understand it, that you have to know all these things to make sure you’re not engaging in some form of bias or producing harmful material. In reality, we have tools that are helping us take steps toward effective AI ethics, and tools and training are really all you need to do well at AI ethics today.

I have one more hype and reality. The hype is that generative AI will revolutionize everything. I don’t think that’s necessarily wrong, but I’ll say the reality is that AI, not just generative AI but all of artificial intelligence, is already revolutionizing everything, and it has been since at least the early 2000s.
The transformation we see in the world around us, with Google Home, with Siri, with ChatGPT, with the self-driving cars that are coming and in some places already here: all of these things are going to revolutionize the world, but it won’t happen overnight. Just as electricity wasn’t rolled out to every city and every home right away, just as indoor plumbing still isn’t available to everyone in the world, we don’t see these technologies rolled out overnight. It takes time, but they are revolutionary. So I would say: if you feel behind as a business leader, if you’re thinking, “Oh no, our company is doing nothing with generative AI, we don’t have a generative AI team, we don’t know anything about responsible AI,” that’s okay. From my perspective, the difference between now and three months ago, or three months from now, is not going to be make-or-break when it comes to AI. I’m sure I could argue around the edges of that, but: get started. Build an AI team, build a generative AI team, get some training in responsible AI (I’d love to give it to you). Everything takes time.

A few concluding thoughts. I don’t think generative AI is going to replace human creativity; I have yet to see that spark of brilliance in AI art. AI may augment human intelligence, but I believe our intelligence is indispensable, because, simply put, we’re still better than the machines right now. And while generative AI can produce great results, people are becoming sensitive to its outputs, so I’d be careful about sending out generative AI outputs without carefully vetting them. And, as I already clicked through, it is spooky season, so I went a little SNL there. I’m happy to take whatever questions you have about any of the topics we covered today, or anything on your mind about AI or generative AI.
Dan: Feel free to throw questions in the chat. I don’t know if you’re able to unmute yourselves, but I am watching the chat.

Ann: Yep, I’m watching the chat too, Dan. Go ahead and throw your questions in there. Maybe in the meantime, while the questions are coming in, I can show some of our courses, especially the ones you talked about. Oh, I think one just came in... no, sorry, that was a typo on my part. Dan, would it be all right if I took over and shared real quick?

Dan: Absolutely.

Ann: Thank you. One second; I feel a little strange being a disembodied voice here. There we go. I’m hoping you can see my screen for AI and generative AI training. Can you see that, Dan?

Dan: Yep.

Ann: Awesome. Just as Dan was talking about the different AI roles, we have courses for developers, DevOps, data scientists, managers, and end users. So if you’ve got an organization that needs total upskilling, we can take every part, or if you just have a team here and there, that’s fine.
fine getting them to the right place um and I'm just going to go down here to end users we've got an introduction to generative AI course that is coming up I'm going to just go ahead and view details um so this is available for for private um customized training but we also have a public date here October 14th this is guaranteed to run so if you just have one or two people to train you might want to go with uh this public class and a lot of the courses um have public dates associated with them too so let me just go back and see any questions that oops let me just stop my share for a moment um let's see okay so we do have a couple questions in here I yeah we have a couple questions I'll start with that one on pricing and I'm going to pull up the um the pricing for open AI GE and the reason why I want to do that is to kind of emphasize the the difference in pricing that we've seen over time here if I look at um I apologize for scrolling down I was trying to find 3.5 but it looks like they've reorganized this recently the input tokens for uh GPC 3.5 cost $3 per 1 million input tokens and $6 per 1 million output tokens uh GPT 4 here is 250 for input and one um that's the batch and $10 for output so in some regards it is slightly more on the output side on the input it is cheaper that said a million tokens is about the length of War and Peace So if you are interested in getting started you don't have first you don't have to start with the top models you can easily start with something like GPT 40 mini which is much cheaper and experiment around with that I think the primary expense is going to the time of your individuals and what to kind of get started since I don't know the pricing of the time of your individuals I would take some of your top top people who are able to adapt quickly to different Technologies get maybe get them some training just have to sell what we do but also find a use case which seems attainable in your company and have them try that out just do 
something um small scale and useful even if it doesn't get rolled out that will start to develop the process and that institutional memory and you'll be able to kind of feel out what are the pitfalls what are the um things that you can do and I would say if you Scope that well enough and keep it small enough you might be looking at uh four 4 we to three mon e month effort to just kind of get started um but kind of the mythical man month I wouldn't go too big too soon I I would scale it to make sure that you're scaling well um I know that was kind of a non-answer on pricing so I'll just say um budget a$1 thousand uh so a question here I'm assuming most use of gen will be via Services provided by Major players IE Google open AI Etc what factors what will factor into decision to self host um there are models which are open weights um the big ones are llama which is put out by meta um and unless you are like Tik Tock you can use it fine or the dod it you can't use it for military purposes um another one is Microsoft's fe3 which is an incredibly small model model it's only 7 billion parameters I would say if you have a bunch of gpus available if you have a good team that knows how to set up those gpus to run models and especially if you have a unique regulatory requirement to say keep data inside of certain networks then it might be appropriate to sell most now do keep in mind that some of the open weight models are they're not going to be as robust as the API models because the apis aren't just the models themselves they have some structure around them and so um it can be it my preferences towards apis first and then I would fall back to selfhosted if there are particular reasons too um at this point I think it's probably most cost effective for the apis for most companies um how would I rate chat GP versus Claude versus
Gemini I I don't know I use I'd use Gemini one of those things it's preference log into something else and just do a whole separate thing doesn't really appeal to me I hear really good things about Claude though um I'd say go with what you're comfortable with and one of the things that I do like um as an option I believe both chat GPT and Jim and I have this now is the ability to create your custom system prompts and to provide input to say hey this is how I want you to interact with me um do we have ai tools that measure ethics and responsibilities so yes there is the Microsoft responsible AI tool kit tool box uh this is one of these tools um that I will pull up here um it
is right here is kind of an example of it this is a way to go through your model and kind of find out where there could be biases in it um um this is one example um I took a a class through this recently and we took real world data from the CDC developed a model to predict the likelihood of diabetes and then ran it through this and without me doing anything to artificially bias the data all the models that the students produced were if if you were over a certain age it was like yeah you have diabetes and if you were under a certain age it was like no you don't have diabetes just absolutely not and so there was this bias based on a built into the data um that was really really fascinating to see so we have tools like that there are lots of specific metrics and specific benchmarks that you can do to test the ethics um it really comes down to your use cases and what you want to allow and putting guard rails around it to say hey if there's a particular word let's say you are Pepsi and you don't want your uh model to every say ever say Coca-Cola you put a guardrail around there to just say if it gives this output say I'm sorry I can't help with that instead so um there are tools this is just one example there's lots of even Services which will help out with the ethics of it and filtering
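The brand guardrail Dan describes (refuse rather than emit a banned term) can be sketched in a few lines of Python. The banned terms and refusal text below are illustrative placeholders, not part of any real product:

```python
# Minimal output guardrail: scan a model's reply for banned terms and
# substitute a refusal before anything reaches the user.
BANNED_TERMS = {"coca-cola", "coke"}  # placeholder list for the Pepsi example
REFUSAL = "I'm sorry, I can't help with that."

def apply_guardrail(model_output: str) -> str:
    """Return the model output unchanged, or a refusal if it mentions a banned term."""
    lowered = model_output.lower()
    if any(term in lowered for term in BANNED_TERMS):
        return REFUSAL
    return model_output

print(apply_guardrail("Our flagship cola outsells Coca-Cola."))  # blocked, prints the refusal
print(apply_guardrail("Our flagship cola is refreshing."))       # passes through unchanged
```

Real guardrail services layer on classifiers and semantic checks, but the control flow (inspect the output, replace it if a rule fires) is the same idea.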
Let me see: building your own solution versus using out-of-the-box solutions like ChatGPT Team, or creating custom GPTs for a ChatGPT-like solution? It depends: on the size of your company, and on what sort of budget you're able to allocate toward it. If you choose a SaaS solution, I would make sure you run it through a smaller team first, because there are gen AI companies out there (not ChatGPT, but others) that are a little more hype than they'd like to admit, kind of riding the bubble. So make sure the tool actually works. But so much time and effort goes into development that, unless there's a specific feature you really need that no out-of-the-box solution meets, the out-of-the-box option is probably going to work pretty well. There's a reason everybody uses GitHub or GitLab, and a reason lots of people use Jira: they work pretty well. That said, there will be times when you do need to build gen AI yourself, and those are typically specific use cases. Something like: "We get emails coming in from ten different vendors, they all have their own way of writing, and we just need a way to rank their priority." Gen AI may rank that priority really well, but it's a particular use case, and you're probably going to have to write some code, or at least some prompting, around it.

Okay, any examples of low-hanging fruit to tackle first? I am sure your company has people with entire lists of things they want to automate; some of those automations are rule-based, but some can be gen AI. If you're in customer service, an out-of-the-box chatbot is probably low-hanging fruit. Another might be an internal chatbot that can query, say, your SharePoint: hook it up, try it out, run it internally. I'm trying to keep this generic, since there are lots that are particular to industries. Another might be a bot that answers questions about specific projects inside Slack or similar, where you type "hey @gen, what's up with, I don't know, this security project?" and it retrieves something and gives you a summary. Things like that, where you're working with your own data, and especially internally, because the risk level is low when it's internal instead of customer-facing. Those, plus discrete automation tasks, are the low-hanging fruit; they can be very quick and very beneficial.

A question from Marcus: how do you ensure that company data exchanged with gen AI will not be processed or stored out of the country? I don't know how doable that is with the OpenAI API in particular. I do know that Azure OpenAI Service, which connects to the OpenAI models, allows you to select regions and do the processing in those regions, and if you have specific requirements around, say, FedRAMP levels, you can go to the specific FedRAMP levels offered for Azure OpenAI Service. So I would go to the cloud providers rather than straight to OpenAI. I think GCP has an equivalent offering, but you would definitely have to check with GCP and look at their regions. Generally, if you're in something like Azure or GCP, it's going to fall under their standard processes.

There's a question: is there any paid or free tool that gives access to ChatGPT, Gemini, Claude, Llama, and other LLMs in one interface, with one payment plan? Yes, there is, and I do not remember the name of it off the top of my head.
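Whatever the specific product, the pattern behind these all-in-one front ends is simple: one common chat payload, routed to a provider by model name. Here is a hedged sketch of that routing idea; the base URLs and model names are made-up placeholders, not real endpoints:

```python
# Sketch of a "one interface, many models" layer: a single OpenAI-style
# chat payload, routed to the right backend by model name.
# All URLs and model names below are illustrative placeholders.
PROVIDERS = {
    "gpt-4o-mini":    "https://api.example-openai.test/v1",
    "claude-3-haiku": "https://api.example-anthropic.test/v1",
    "gemini-flash":   "https://api.example-google.test/v1",
    "llama-3-8b":     "https://llama.example-selfhost.test/v1",
}

def build_request(model: str, prompt: str) -> dict:
    """Pick the provider for `model` and build one common request payload."""
    if model not in PROVIDERS:
        raise ValueError(f"unknown model: {model}")
    return {
        "url": PROVIDERS[model] + "/chat/completions",
        "json": {
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

req = build_request("claude-3-haiku", "Summarize this contract.")
print(req["url"])
```

The aggregators add billing, logging, and fallbacks on top, but swapping models becomes a one-string change, which is what makes them handy for evaluation.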
Yeah, the name is escaping me right now. There are several different companies that do it, putting one interface over the models behind it. That said, they're going to add a little overhead; but if you're just evaluating models to determine which one to use, it's definitely a way to go.

Justin asks: are there ethical frameworks to select from? He recently learned of a constitutional model, which is Anthropic's version. In addition, are there ways to keep a model from hallucinating or providing vague information when it doesn't know, and how are mistakes removed from a model? Ethical frameworks: yes. There are constitutional models, and a lot of these come down to fine-tuning, to how you force the weights of the model not to produce harmful things. So there are lots of ways to do it at the training stage. If you're not training or fine-tuning an LLM, a lot of what you'll do is on the input, the output, and the prompting: making sure you don't put bad inputs in or let bad outputs out. That's sanitization, just like you'd do for any chat interface. You can even use an AI model to ask, "Hey, is this safe to send to a customer?" That's possible, very doable, and it happens a lot. Then, inside the prompt that goes to the model, you can give it instructions like "do not advocate for self-harm" or "do not give health advice." Another common instruction is: "If you do not know, or if the answer is not supported by the information you have, do not answer." As for the question of AI hallucination: retrieval-augmented generation (RAG) is among the best ways we have right now, and there are tweaks and advanced methods on top of it to get the model to reflect and double-check, so you can verify the output against the input. But it is not 100%; things will still get through. To which I would ask: do you assume your humans have 100% accuracy? Because we don't. So yes, there are ways to get it to hallucinate less, but it's not going to be perfect.
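The RAG idea just described, combined with the "don't answer if it's unsupported" instruction, can be sketched as a toy pipeline. The documents and the word-overlap retriever below are illustrative stand-ins, not a production setup:

```python
# Toy retrieval-augmented generation (RAG): retrieve the most relevant
# snippets, then build a prompt that restricts the model to them.
DOCS = [
    "The security project ships its first milestone in November.",
    "The HR handbook was last updated in 2022.",
    "Expense reports are due on the 5th of each month.",
]

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question (stand-in for a real retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Stuff retrieved context into the prompt, with the 'do not answer' guard."""
    context = "\n".join(retrieve(question, DOCS))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not supported by the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("When does the security project ship?"))
```

A real system would use embeddings and a vector store for `retrieve`, but the shape is the same: the model never sees your whole knowledge base, only the retrieved slice plus the guard instruction.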
And yes, I think that covered the AI ethics one. I don't think there are any more questions, but I'm happy to stay on if there are.
Ann: Well, Dan, I really enjoyed that; I was on the edge of my seat. Really, this is one of my favorite webinars, and I don't say that after every webinar.

Dan: For those who are still here, I'm going to drop my email in the chat (note there are not two Ms in there). Feel free to reach out directly with your AI questions; I'm happy to answer them. I think I have two more questions here. Was there anything else you wanted to say, Ann, before I finish up with the questions?

Ann: I was just going to put the URL I showed into the chat, the Web Age Solutions one. It's in there now, showing the AI training, and there's a roadmap in there. And Dan's a great person to get in touch with to help point you in the right direction or set something up.

Dan: Can I speak to the difference between RAG and training models? Yes. RAG versus training; let me think of a good example. Training updates the weights; that's the technical answer. When you train, you update the numbers that make up the model. RAG does not update those numbers, those weights. RAG is more like not studying for an open-book test and expecting to get all the answers from the information in the open book, whereas training is like studying and actually learning the content of the book. What you'll usually see is fine-tuning: not doing the full training from absolute scratch, but taking a good model and fine-tuning it on something like medicine or law, so it does better on, say, health tasks. Even then, you'd still use RAG in addition, to help the reliability further. So they're not either/or; they can be used together or separately. At a fundamental level, RAG supplies information inside the prompt that the model can use, while training updates the weights, and you won't see that training data at the level of an individual prompt.

From Subas (I apologize for the name): what's the best platform, tool, or solution for creating a chatbot over all of a company's knowledge base? I'm not going to recommend one right off the top of my head, but let me see; I'm going to pull this aside for now and bring up my chat UI sheet. There are a lot of options out there for building your own. Here it is: this is something I've been evaluating, a big spreadsheet of the different solutions for providing a ChatGPT-like interface. Some of them are pretty well known, like GPT4All, which lets you run a model locally and interact with it in that interface. The ones that have been standing out to me are Chatbox and LibreChat; these are UIs, with maybe a little tooling around them, that you can then hook into your own systems. I think that's as far as I'll go with a recommendation, because there are just a lot of providers out there, and I don't feel confident enough to say this is the one I would go to.

What trainings would I recommend to get started: more exposure, how to develop a use case, how to calculate the ROI? We have trainings for business users, data scientists, developers, ML engineers, and so on. What I've been recommending to some of our clients is a tiered approach. Have a couple of your real thought leaders in the business take a couple of days of training to dig into generative AI, then have them brainstorm and develop use cases where they think generative AI can be applied in your particular company and industry (I'm happy to help with that brainstorming). Then, once you have that, bring in more of your developers and data scientists for the generative AI training, and we can customize the examples we provide specifically toward those use cases, so we're really tailoring it to you. Because ultimately, if you just want generic training, you're not going to come to us; you come to us because we provide something tailored to your use cases, your industry, and your company, and really high quality. So yes, we have a whole bunch of classes, and I think that link is in the chat; any of those would be great. And yes, I can share that link to the AI UI sheet; let me pull that up now. I'm just going to share the Google Doc. I believe it's public: anyone with the link can access it, no sign-in required. Let me make sure I have the right one. You should be able to access that. For the record, it is not mine and I take no credit; it was found online, someone had already done the work, so I stand on the shoulders of giants. I think that's all the questions.

Ann: Yeah, those were great questions, great questions; some of them I was wondering about myself. Perfect. Okay, awesome. And since I got the question a couple of times, just in case you're still wondering: yes, this session is being recorded, and we'll send you the link to the video as soon as everything is processed, either today or tomorrow at the latest. Feel free to share it around and watch it. Also, for attending this webinar, we're offering 25% off our public or private courses. As I said before, Dan is a great resource and can help point you in the right direction for an upskilling program, or just training for your team, and customize it, which is what makes us unique: as Dan said, we can use your data and your use cases and tailor it to exactly what you're looking for.

Dan: Yeah, thanks for all the kind words about the presentation in the chat; we appreciate it. Thanks, everyone. Oh, I have another one here: blocking apps like ChatGPT in an enterprise to protect our data. At my last company, I wrote the generative AI policy. We were in the government contracting realm, and I did say you can't use it for anything business-critical, because of the risks associated with it. I would say that if you are going to block it, provide an alternative, because people are going to use it; people want to use it, and I would hate to simply stifle that creativity by blocking it outright. Find a way to make it accessible and still protect the data; that would be my goal.
How prevalent are products that have it embedded, and is there transparency there?
I would say it is safe to assume a company is using AI now; that's my general assumption. When I'm interacting with chat support, I do not assume it's a rule-based system; I assume it's generative AI until it says I have a human, and if it says I have a human, I assume it's a human. I have not yet hit a case where I thought, "This doesn't seem like a human." So far I've seen pretty good transparency; I have not had a case where I suspected generative AI was being used without it being disclosed to me, with one exception. There was an article sent around recently (the article will remain nameless) about AI, and as I started reading through it, I was about 95% sure. And that kind of passing LLM work off as your own bothers me. There are times when I use an LLM to get me from point A to point B faster, but that's not a substitute for me actually getting from point A to point B, because if I rely on the model entirely to, say, generate some text or write an email, that email might be really bad. In general, I can still write better than the AI right now, or at least I can express my intentions better. That was kind of long-winded.

"We've had difficulty: we ban AI because of the data we process, but vendors seem to have it embedded in their products." Yeah, I would love to have a one-on-one conversation about your particular use cases, but what I suspect is that this is something you'll have to write into the terms and conditions of your contracts, saying this data cannot be used in this way. Government, health, IT, yes. I once did a research project on infant mortality rates with CDC data and got the data to a level where I could de-anonymize it and actually identify people in it, so I had to go through that security level. I think it will have to be in the terms and conditions, so you can hold people to account if they misuse the data. That said, most companies are used to data restrictions. Perhaps you could also put some penalties around misuse. Those would be my thoughts, but I am, hashtag, not a lawyer.
Dan: Well, I feel like if I say there are no more questions, there will be more questions. I continue to be happy to take them, but no one needs to feel like they have to stay around.

Ann: No, this has been great, and it doesn't look like anything else is coming in, so maybe we'll send everyone out into the world with all this new "generative AI, cutting through the hype" business. Thank you so much, Dan, and thank you to all 28 of you who hung on till the bitter end. Oh, there's Dave, one of our trainers; hi, Dave! So happy to see so many people here, some we know and a lot of new faces. Anyway, we hope to see you at the next webinar; we'll be sending that out, along with this recording. All right, thank you, everybody, have a wonderful rest of your Wednesday, and we will see you next time. Ciao!