In this episode, hosts Erik Brown, EJ, and Ryan Elmore dive into how large language models (LLMs) work and provide a behind-the-scenes look at "Nigel," a generative AI platform developed by West Monroe. The hosts analyze various LLMs, their functionalities, and their real-world implications while offering insights into the balance between technical prowess and practical utility. They also highlight generative AI's current capabilities and debate its future potential across various sectors.
Compare Different LLMs: The hosts delve into the functionalities and effectiveness of various large language models, providing insights into their comparative advantages and suitable use cases. This segment helps listeners understand how to choose the right model for specific applications.
Understand Current Limitations in AI: The episode covers the current state of generative AI, including its limitations and potential future advancements. This discussion is aimed at providing a comprehensive view of where AI technology stands today and what developments might be on the horizon.
Get a Behind-the-Scenes Look: Dive into how generative AI and large language models are applied in real-world scenarios, including a detailed look at the interactive platform "Nigel" by West Monroe, and why West Monroe created it.
We have a great assessment of which LLMs are the right ones to start with. If you're a publisher that is reviewing and editing novels, you want a huge context window, meaning you can upload or interact with more tokens as part of a question and response. You can upload an entire novel and then ask questions about it. I haven't tried this, but I imagine it would be better at saying, hey, are there some flow issues with the story, or with the movement from chapter to chapter? It can find common editing errors; it can do that easily. But you have this large context window to work with, so a publisher might want to start with that.
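To make the context-window point concrete, here is a minimal sketch of how a publisher might check whether a full manuscript fits inside a model's context window before uploading it. The token-per-word ratio and the context-window size are illustrative assumptions, not figures from the episode; real tokenizers vary by model.

```python
# Rough check: will an entire novel fit in an LLM's context window?
# Assumption: the common heuristic of roughly 4 tokens per 3 English words.

def estimate_tokens(text: str) -> int:
    """Estimate token count from word count (~4 tokens per 3 words)."""
    return round(len(text.split()) * 4 / 3)

def fits_in_context(text: str, context_window: int, reserve: int = 2000) -> bool:
    """Leave `reserve` tokens of headroom for the question and the model's answer."""
    return estimate_tokens(text) + reserve <= context_window

# A typical 90,000-word novel against a hypothetical 200k-token context window:
novel_text = "word " * 90_000
print(estimate_tokens(novel_text))                            # ~120,000 tokens
print(fits_in_context(novel_text, context_window=200_000))    # fits
print(fits_in_context(novel_text, context_window=8_000))      # does not fit
```

A manuscript that fits can be sent in one request with the editing questions appended; one that doesn't would need to be chunked chapter by chapter, which is exactly why a large context window matters for this use case.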
A lot of these platforms that organizations are building are prescriptive. And what we're doing here is we're trying to build a culture of iterative product improvement and innovation across the organization with generative AI, and help users identify places where they can see strong efficiency, strong value, or really great support in their day-to-day work.
We just finished up our knowledge training last week, where we showcased some of this. One of the pieces we talked about was: be conversational with it. You don't just say, hey, I need this thing, cool, copy, paste, go. You say, no, you piece of garbage, you don't know what you're talking about. Let's try it again.