
AI Agents – What they are and our experience | Ep. 143


Unleashing the Power of AI Agents in Modern Technology | Ep. 143

Welcome back to the intriguing world of artificial intelligence, where breakthroughs are more frequent than ever! In today’s episode of the NTIS Anarchy series, I, Adam, CIO and co-founder of NTIS, am thrilled to dive deep into an innovative aspect of AI technology known as AI agents. These agents are changing how we interact with and utilize AI, extending capabilities far beyond the simple, direct commands we’re used to with traditional large language models (LLMs) like ChatGPT, Meta’s Llama, and others.

What are AI Agents?

AI agents are specialized AI-driven entities designed to perform specific tasks or processes autonomously. Unlike the typical LLMs that process one-off requests, AI agents operate continuously, learning and adapting as they go. They can be likened to a virtual team, each member with unique expertise and responsibilities, working collaboratively to achieve complex objectives.

The Evolution from Zero-Shot Requests to AI Agents

Traditionally, interacting with AI has been a linear process: input a request and receive an output (zero-shot request). The limitations are clear—each interaction is isolated, and the AI does not retain context or learn from past interactions. This is where AI agents deliver a significant advantage. By orchestrating multiple agents with specialized roles, you can create a dynamic system where tasks are delegated and processed in a more intelligent, cohesive manner.

Example of AI Agents in Action

Imagine a scenario where you want to deploy a robust cloud infrastructure based on an architecture diagram. One AI agent, acting as a Python and AWS expert, writes the necessary code. Another agent then verifies the code, running it and feeding back errors or suggestions for improvement. This inter-agent communication continues until the code is polished and functional, greatly simplifying and speeding up the development process.

Leveraging Speed and Efficiency with Advanced AI Tools

The Role of Platforms like Groq

With platforms like Groq, which is renowned for its inference speed, the interaction between AI agents becomes nearly instantaneous, further enhancing productivity and reducing the time from development to deployment.

Utilizing Different LLMs for Tailored Tasks

The flexibility of AI agents allows them to utilize different underlying LLMs based on the task requirements. For instance, one agent might use OpenAI’s model for intensive reasoning tasks, while another might use Google’s LLM for quicker, less complex queries.
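A minimal sketch of this per-agent model routing might look like the following. The model names and the `call_llm` stub are illustrative assumptions, not any provider's real API:

```python
# Hypothetical per-agent model routing: each agent role is pinned to the
# LLM best suited to its task. Model names are illustrative only.

MODEL_FOR_ROLE = {
    "reasoner":   "gpt-4",          # heavy reasoning: slower, stronger model
    "classifier": "gemini-flash",   # quick, low-complexity queries
}

def call_llm(model: str, prompt: str) -> str:
    """Stub for a real provider API call; here it just echoes the routing."""
    return f"[{model}] {prompt}"

def run_agent(role: str, prompt: str) -> str:
    return call_llm(MODEL_FOR_ROLE[role], prompt)

print(run_agent("reasoner", "Plan the AWS deployment"))
```

The same routing table could mix providers freely, which is the flexibility the episode highlights.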

Building AI Agents with Modern Tools

Exploring Leading Agent Development Platforms

  1. Google Vertex AI: Though it primarily caters to enterprise solutions like chatbots and customer service, its agent development capabilities are growing.

  2. Microsoft AutoGen: Known for its robustness and positive reviews, it is a reliable platform for developing sophisticated AI agents.

  3. LangChain: Provides a versatile framework with extensive pre-built functionality, making it a de facto standard in many AI applications.

  4. CrewAI: An open-source, user-friendly tool that allows local deployment, ensuring data privacy and security—ideal for internal, confidential operations.
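These platforms share a common vocabulary of agents, tasks, and a crew-like orchestrator. As a rough illustration of that shape (loosely modeled on CrewAI's Agent/Task/Crew naming, but deliberately a toy, not the real SDK), the pattern looks like this:

```python
from dataclasses import dataclass
from typing import Callable

# Toy stand-in for the Agent/Task/Crew pattern popularized by CrewAI.
# This is NOT the real SDK; the run callable stands in for an LLM call.

@dataclass
class Agent:
    role: str
    run: Callable[[str], str]

@dataclass
class Task:
    description: str
    agent: Agent

class Crew:
    def __init__(self, tasks: list[Task]):
        self.tasks = tasks

    def kickoff(self) -> list[str]:
        # Sequential process: each task's output becomes context for the next.
        context, results = "", []
        for task in self.tasks:
            output = task.agent.run(f"{task.description}\n{context}".strip())
            results.append(output)
            context = output
        return results

writer   = Agent("writer",   lambda p: f"CODE for: {p.splitlines()[0]}")
reviewer = Agent("reviewer", lambda p: f"REVIEWED: {p}")

crew = Crew([Task("write the deploy script", writer),
             Task("review the code", reviewer)])
results = crew.kickoff()
print(results)
```

The real frameworks add tool access, memory, and LLM configuration on top, but the agent-plus-task-plus-process decomposition is the core idea.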

Integration and Orchestration of AI Agents

The real magic happens in the orchestration of these AI agents, where they are managed in a structured process flow. This includes setting up tasks, assigning tools, and managing outputs—all flowing smoothly through a well-coordinated orchestration layer.
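A hierarchical variant of that orchestration, where a "manager" splits a job, delegates subtasks to specialist agents, and assembles the outputs, can be sketched as follows. The specialist functions are hypothetical stand-ins for LLM-backed agents:

```python
# Hypothetical hierarchical orchestrator: a manager delegates subtasks to
# specialist agents and assembles their outputs into one result.

SPECIALISTS = {
    "code":     lambda task: f"code({task})",       # code-writing agent stub
    "research": lambda task: f"articles({task})",   # web-research agent stub
}

def orchestrate(job: list[tuple[str, str]]) -> str:
    """job is a list of (specialist, subtask) pairs chosen by the manager."""
    outputs = [SPECIALISTS[name](subtask) for name, subtask in job]
    return " + ".join(outputs)   # final assembly step

result = orchestrate([("code", "deploy script"), ("research", "AWS pricing")])
print(result)   # code(deploy script) + articles(AWS pricing)
```

Serial chaining is the degenerate case of this: a job list with one specialist feeding the next.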

Memory and Learning Across Tasks

AI agents can share insights and learn from each interaction, utilizing a shared memory system. This enables the system to evolve and provide increasingly refined outputs based on previous interactions.
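One way to picture that shared memory is a common store that every agent writes into, split into short-term results for the current run and long-term knowledge carried across runs. The class below is a minimal hypothetical sketch, not any framework's actual memory API:

```python
# Minimal sketch of a shared memory bank: every agent writes its result to a
# common store, and a final step can infer an answer from all of it.

class SharedMemory:
    def __init__(self):
        self.short_term: list[str] = []   # results from the current run
        self.long_term: list[str] = []    # carried across runs

    def remember(self, agent: str, result: str) -> None:
        self.short_term.append(f"{agent}: {result}")

    def consolidate(self) -> None:
        """Move the current run's results into long-term memory."""
        self.long_term.extend(self.short_term)
        self.short_term.clear()

memory = SharedMemory()
memory.remember("coder", "deploy script v2")
memory.remember("tester", "all checks pass")

# A final agent would infer the overall answer from everything in memory:
summary = " | ".join(memory.short_term)
memory.consolidate()
print(summary)
```

In a production system the final inference step would itself be an LLM call over the accumulated memory rather than a simple join.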

Why AI Agents are the Future

AI agents represent the next leap in artificial intelligence, moving beyond simple question-and-answer setups to complex, multi-layered interaction systems that mimic human efficiency and capabilities. They enable businesses and developers to automate processes in a more reliable, sophisticated manner, potentially reducing operational costs and increasing efficiency.

In conclusion, as we continue to explore the capabilities of AI agents, it’s clear that we are just scratching the surface of what’s possible with AI. Whether it’s speeding up software development, enhancing customer service, or creating new ways to manage data, AI agents are at the forefront of this exciting frontier. Dive into the world of AI agents and start building your own intelligent systems—the possibilities are limitless.

[h3]Watch this video for the full details:[/h3]


We get into AI agents on this episode, including Google’s Vertex AI, Microsoft AutoGen, LangChain, and CrewAI, laying down the basics of what they are and what they can be used for.

Be sure to subscribe to the show on all podcast platforms and follow us on all social media @Nyedisiam

00:00 Intro
00:19 Not the kind of AGENT you know about
00:41 Incorporating AI Agents
03:14 What makes these agents awesome
05:01 Major companies developing Agent software
06:35 What these systems are composed of
10:01 Advantages of using different LLMs
10:43 Browsing the different agent tools
13:44 Human as a Tool
16:16 Epilogue

#ai #aiagents #llm

[h3]Transcript[/h3]
Hey everybody, welcome back to the NTIS Anarchy series. I’m your host Adam, CIO and co-founder of NTIS, and today I want to talk about something I just found out about in AI that’s amazing, and they’re called agents. They’re not secret agents, like corporate-espionage-type stuff; these agents are like building a virtual AI crew of your own, little people to delegate tasks to, to do all kinds of crazy stuff. It’s almost like finding AI all over again, because it really just opens up the world to making AI way more powerful than just using an LLM. Most of us are familiar with using things like ChatGPT, Meta’s Llama 3, Claude, all these other fun ones that are out there, Gemini from Google if you want to (it’s really not that good, though), but we always do zero-shot, right? We’re always going in, typing something, hitting the button, and it gives you the result. That’s called a zero-shot request: you put your prompt in, the LLM just spits out the response, and you’re like, okay, this is good enough, or you might tweak it back and forth. Now, you’ve probably seen me talk before about how I develop with AI. In that case I said, hey, here’s an architecture diagram; now I actually want to go build out the entire infrastructure as code and deploy it to AWS, all from a picture. And it did it, and it’s amazing, but it took like an hour, because what I had to do was: it would give me code, I’d take the code, put it into Python or wherever the interpreter was, run it, it would dump an error, I’d take the error, put it back into the LLM, it would tell me how to fix the code, give me the code, and I’d put it back. So that’s amazing; first off, let’s not undermine how freaking awesome that is. Now, when we incorporate agents, this is when things go crazy, and this is the obviously logical next step. An agent is just a particular AI bot that you build out that will do a specific task and run certain processes for you, and it will do whatever you tell it. As an example of two agents: let’s say we were to take that “how I develop with AI” video and I made agents for it. I’d have one agent where I say, hey, you’re an expert-level Python developer, an infrastructure-as-code CDK developer, you know everything about AWS, and your goal is to build the most robust multi-threaded architecture for any type of cloud infrastructure. That’s that agent; that’s his job, and every time I tell that LLM to do something, that’s what it’s going to spit out. Second, I’m going to create another agent, and this agent is going to be the verifier. This agent is going to say, hey, you’re going to receive code; I want you to run this code, verify that it works, and spit any errors or issues back to the code writer. So now these two agents will talk to each other using another agent that would be a processor (we’ll go into what all these are in a minute), but the idea is this: the code guy kicks off and writes code, he spits that result to the verifier, the verifier runs the code (we’ll get to how that works in a minute too), and he spits the errors back, and they just keep going back and forth until the code is refined, doing exactly what it’s supposed to do, and works. Then that result is delivered to me, the end user. That’s what’s crazy. Now, remember we were talking about speed being important, right? If we’re using, let’s say, GPT-4 or Claude Opus, they give really good results but they’re a little bit slower, which means every time you go through these iterations it’s going to take a few minutes to get the full result back. Now enter Groq. We’ve talked about Groq before, and how blazingly fast Groq is, so we can actually leverage Groq’s APIs and do this all over their cloud network, with a Groq-hosted LLM set up as the agent’s LLM. Now these responses are instant back and forth, so instead of it taking minutes to generate code between these agents and refine it, it’s seconds, and it might even feel instantaneous; really the lag is just the network communication between them. What’s also awesome is you can have every individual agent run a different LLM. I could have the code LLM running on Claude Opus but the verifier LLM running on GPT-3.5, because I want something super speedy where “good enough” is fine, and you can spread it all out. This is why agents are crazy: you can have an unlimited number of agents and chain them all together, and there are different ways to do it. You can do it serially, so one agent does this, then the next one, then it passes it on and on. Or you can do it hierarchically, where you basically say, okay, I have a task, I’m going to delegate these tasks out to different agents based on what they do, get all the results back, piece it all together, and deliver the result. There are a few other methods too; like I said, I’m new to this, so I’m definitely not an expert in setting up agents in AI, but I really wanted to get this information out to you guys, because this is where it’s all going. Zero-shot LLM is so yesterday; it’s all about agents now. So to help out with this, there are four major companies out there that have agent software you can use with these LLMs. The first one: Google just announced theirs, it’s called Vertex AI, and they have an agent builder in it. From what I’ve seen it’s really not the best, but they do have a lot of big corporate clients using it, all for pretty much the same thing, which is
online help chatbots or call-center stuff, so they’re not really doing anything too crazy yet, and from the reviews I’ve seen it’s not really the greatest one. The next one is Microsoft; they have one called AutoGen. From my brief playing around, that one’s actually pretty good, and it also has a lot of good reviews. It’s been out for a while, since I think 2023, so this is not a new concept, it’s just new to me; let me make that clear, I’m not discovering something amazingly brand new. But Microsoft AutoGen is a really good agent system to put together and use. The other one that’s super popular, although it wasn’t really meant to be an agent builder, is LangChain, and it has a ton of pre-built stuff in it, which we’ll go over in a second. LangChain is massively popular and has almost become a standard in a lot of places for a lot of this agent stuff and tools. And then the other one, which is the one I’ve been leaning into a lot more recently, is called CrewAI. CrewAI is very easy to implement; it’s free, it’s open source, so you can use it all locally. That’s my big thing: I’m trying to build everything locally so nothing’s going to the cloud, I’m not sharing any proprietary, confidential information, and I can do everything internally as quick as I can. So that’s why CrewAI is a big one for me. Now, all of these systems essentially do the same thing; they all consist of certain components. You have agents, which we kind of talked about; those are the individual personas that are going to do certain things. We have tasks, which is what those agents are going to do: your task is to write code, your task is to review code, your task is to go search the web and find relevant articles. Each agent, you can assign tasks to. Then there’s tools, and this is the part where it gets crazy; tools are the endless ability for AI to reach out and do anything. You can make a tool that will run code, a tool that will search the web, a tool that runs something locally in your bash shell, a tool that parses CSV files. You can have all these different tools and make them available to your agents to perform the tasks. So you can have an LLM where you say, hey, you’re the researcher agent, and your task is to find every news article on the web that’s relevant and current about whatever is fed into you, and it could use a tool like a search tool to go find it, and a markdown editor to flip everything into markdown. This agent will then scour the web using existing API calls, like the Google API, to return search results, process them into markdown, and return it all back. Which brings us to the next part, which is processes. Processes are how everything’s put together, and you’ll also see the term orchestration thrown around. Everyone knows my feelings on identity orchestration; it’s amazing, because it lets you seamlessly plug in whatever you want, you have this middleware layer that can then do anything, and that’s why orchestration is rad. These agent builders have an orchestration layer; it’s called processes, and this is AI orchestration. It’s exactly what you think: if you’re familiar with identity orchestration, it works the same way. You have an orchestrator, and this orchestrator agent is responsible for all the other agents; that’s where the hierarchical stuff comes into play. You can have one agent that’s kind of like your project manager, and you build out that agent to say, hey, look, your job is to first get this information from this agent, and when this agent comes back with the data, hold on to it and then spit out this data to the next agent, and you chain a whole flow, if you will, of these AI agents to do whatever tasks you set up. The last piece of this is memory. You have all these different agents, which are all individual LLMs, and they could be talking to completely different providers: one could be using OpenAI, one could be using Gemini, one could be using Claude, one could be using a local LLM. And there’s an internal memory system, so as all these agents return results, you can have a local short-term and long-term memory system within these agent builders. What that means is you can hold all the information coming back from all these agents, and then from all that info infer the solution or the result or the answer you asked for from the very beginning. Your agent that’s writing the code, your agent that’s getting the latest news articles online, the other agent that’s going to test the code: all of that comes back into a single memory bank that can then be inferred upon to get your results. So you get really highly detailed, very specific answers that a single zero-shot LLM could never produce. Now, what’s cool about using these different LLMs: I think GPT-4 has an 8K-token input limit, but Claude Opus I believe has something like a 200,000-token limit, so you could load entire massive code blocks and research articles into Claude. Even if OpenAI can’t take that huge chunk to infer from, you could throw it through Opus and get your final result through them. So it’s really awesome, it really makes it flexible; think of it like every AI agent could be talking to a different IdP, except those different IdPs are just different hosted LLM providers. So these agents are crazy, you can literally do anything you want, and I’m going
to show you here real fast: I have a couple of these things bookmarked, and I just want to show you what’s already built out. So if we switch on over here, first, this is Microsoft’s AutoGen, free by the way; all these tools are free to use. The AutoGen tool is really rad; there’s a little video here that explains what agents are and what AutoGen does and how it works, definitely good to go through. From my little bit of playing around, and from the reviews, and from listening to other people that are way smarter than me talk about it, AutoGen is a good product. Vertex AI, this is Google’s agent builder; Vertex AI is kind of a conglomerate of tools, and the agent builder is one piece of it. Like I said, not really the greatest, but it is being used by a lot of corporate clients, so if you’re getting into the AI business you’re going to want to know it, because people are going to say, hey, we’re looking for someone that knows how to use Google Vertex AI to come and build agents for us, because they’re a Google shop. So you’re going to want to learn it for that reason, really. Then we have CrewAI; this is the one I’ve been using the most, because it’s really easy, they’ve got a really simple SDK to plug and play, and there are a ton of YouTube videos out there that explain how this stuff works and give you a bunch of demos. I definitely recommend hands-on playing; this is something where you really need to get in there and just start making and doing to really understand and grasp it; just reading and watching some videos isn’t enough. The last one here is LangChain. These guys have been around for a long time; they were kind of agents before agents were considered a thing, and they have a ton of resources. What I want to show you is the tools. Remember, agents are just specific LLMs that we’ve given a role, essentially; processes are another agent that controls how they work together; tasks are what we tell the agents to do; but tools are what we give agents to use to do things outside of a regular LLM query. So to show you what’s available on the LangChain side, here’s the LangChain tool set right here. If we look at the tools, they’re all open, anyone can grab these and use them, and what’s awesome is I believe in CrewAI, and I think AutoGen as well, you can take the LangChain tools and actually use them, so you don’t have to use just LangChain. That’s what I’m saying: LangChain is so widely used that it’s like Hugging Face, everyone has incorporated it into their tools already. So definitely get familiar with LangChain, and just check out this list of tools here: ChatGPT plugins, Bing search, bash shell, AWS Lambda commands, file system stuff, Google Drive connections, Google Lens for image searching, obviously plain Google searching, GraphQL, Hugging Face Hub tools, IFTTT (if-this-then-that) webhooks. Now, what’s interesting is this one right here in the middle; it’s called human as a tool. What you’re going to hear a lot when we’re talking about agent building is human in the loop. Zero-shot, like I said, is when you just say do the blah blah blah to the LLM and it gives you a result, a zero-shot answer. With agents, when we have this whole workflow, this orchestration using these processes of the agents’ tasks, let’s say you have all these agents doing something but you want a human prompt in the middle. So an agent comes back and says, hey, what do you want me to do with this, or hey, I have these five different options, which one do you want, or hey, this is what I have, does this look good or do you want me to refine it? Whenever you want prompting injected into that orchestration flow of these agents doing things, that’s called human in the loop, and you do that with this right here: this plugin from LangChain is human as a tool, so you become the tool for the agent. And what’s hilarious, actually, is that to me this is the first time the human has become an unfortunate necessity to complete an AI task, where it’s like, oh, I guess we’re going to need a human to do this, so let’s employ human-as-a-tool so that I can finish my job. You don’t want to incorporate it too much, because then you’re just doing it yourself, but there are obvious reasons why injecting human in the loop is going to be important to refining and getting better answers. There’s also memorize, for memory utilization of things that have already come back, weather maps, Reddit search, requests for making raw REST calls, SQL database connections, Twilio for text-message sending, Wikipedia connections, Wolfram Alpha, YouTube, Zapier. There are so many different things, and that’s just the tools part; if you go through, you can see there’s a bunch of different retrievers you can pull from that they’ve already built out, and retrievers obviously go and retrieve data. There’s so much information out there, it’s great. Now, CrewAI also has their tools; here’s their GitHub repo of all the tools they’ve built out that you can use right away: CSV searching, directory reading, reading .docx files, so Word files and stuff like that. Let’s say you have a whole corporate repo of all your documentation; you could suck all that in and reference it with a RAG. I need to talk about RAGs; RAGs are amazing, we’re not going to go into that right now, but RAGs are a huge way to infer answers from an LLM from your local data store; we’ll talk about that later. So yeah, there are tons of tools already built out there for all these agents, and like I said, all this stuff is open source and free; even Microsoft is giving away their AutoGen tool, it’s awesome. So if you really want to get into AI and do some power building and make actual applications that do stuff using AI, this is what you’re going to use: agents. Get familiar with it. I highly recommend starting out with CrewAI just because it’s really simple; if I had to pick two, I would say CrewAI and LangChain. Just watch some YouTube videos; there’s tons and tons of information out there. This is where AI is going, so if you haven’t started playing with agents, you’re already behind. Check it out; it’s not terribly difficult, and it’s a ton of fun to see what you can do with these things. If you’ve already played with agents, I’d love to hear what you’ve been doing down below, because like I said, I’m new to this too; I’m up till two or three in the morning just reading documentation and playing with scripts to see what I can do with this stuff, because it’s so awesome. Every time you get these new tools you’re like, this is crazy, and then you figure out the next thing and it’s like, this is even crazier. So play around with agents; you’re going to have a ton of fun. It’s going to be a crazy world. I’ll see you guys around later. Hey everyone, are you going to Identiverse this year, May 28th and 29th in Las Vegas? We’re going to be there and we’re going to make it awesome. We’re having a scavenger hunt where you can win awesome prizes: hats, cool bottle-opener butterfly knives, awesome whiskey stones that are all custom branded, leather posters, a real weapon, and cool fun stuff like an invite to our private party. The way you’re going to find it is to find these little poker chips all over the convention, tap your phone to them, and that’s how you’re going to start the scavenger hunt. Lots of cool prizes, and not everyone’s going to be able to make it; hope you can do it. I’ll see you guys there later.