How To Use AutoGen STUDIO with ANY Open-Source LLM Tutorial


Introduction

Navigating the world of Open-Source Large Language Models (LLMs) can be both exhilarating and challenging. With the advancements in AI technologies, tools like AutoGen Studio are becoming indispensable for developers and AI enthusiasts looking to leverage the power of LLMs in their projects. Today, we'll dive into an in-depth tutorial on how to integrate AutoGen Studio with two prominent tools for running open-source LLMs locally: Ollama and LM Studio. This comprehensive guide will walk you through every step, from installation to creating intelligent AI agents.

Setting Up AutoGen Studio

Before diving into the complexities of connecting with LLMs, the initial step is to get AutoGen Studio up and running on your system. AutoGen Studio simplifies the process of connecting and managing LLMs, providing a user-friendly interface to interact with different AI models.

  1. Installation: Start by installing AutoGen Studio using the Python package manager pip. If AutoGen Studio is already installed, simply upgrade to the latest version:

    pip install autogenstudio
    pip install --upgrade autogenstudio
  2. Starting the Server: Launch the AutoGen Studio server using the following command:

    autogenstudio ui --port 8081

    This command starts a local web server, typically at http://localhost:8081, which you can open in your browser.

Integrating Ollama with AutoGen Studio

Ollama, an open-source tool for running LLMs locally, can be used from AutoGen Studio by following the steps below:

  1. Downloading and Running Ollama: Visit the Ollama website (ollama.com), download the software, and run it. Ensure that it is active (indicated by an icon in your menu bar or system tray).

  2. Downloading a Model: Open a new terminal in your coding environment (e.g., PyCharm) and download an Ollama model with the pull command:

    ollama pull <model_name>

    For this example, we used the 'phi' model, a small model that runs comfortably on modest hardware.

  3. Creating a Model in AutoGen Studio: Navigate to the 'Build' tab in AutoGen Studio, click on 'New Model,' and enter the name of the model you downloaded (here, phi). Use 'ollama' for the API key (the local server ignores it, but the field cannot be left empty) and set the base URL to http://localhost:11434/v1, the local Ollama server's OpenAI-compatible endpoint. Test the connection to ensure everything is set up correctly; a quick standalone check is sketched below.
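
If you would like to sanity-check the Ollama endpoint outside of AutoGen Studio before clicking 'Test Model,' a short script like the following can confirm the local server is answering. This is a minimal sketch, assuming the default Ollama port (11434) and the 'phi' model pulled above; the openai Python package is used here only as a generic client for the OpenAI-compatible API.

    # Minimal sketch: verify the local Ollama server responds on its
    # OpenAI-compatible endpoint before wiring it into AutoGen Studio.
    # Assumes the default Ollama port (11434) and that `ollama pull phi` was run.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # same base URL entered in AutoGen Studio
        api_key="ollama",                      # Ollama ignores the key, but the field must be non-empty
    )

    response = client.chat.completions.create(
        model="phi",  # must match the model name pulled with Ollama
        messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    )
    print(response.choices[0].message.content)

If this prints a reply, the same model name, API key, and base URL should work in AutoGen Studio's model form.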

Setting Up LM Studio

LM Studio is another powerful tool in the AI developer’s arsenal. Connecting it with AutoGen Studio opens up even more possibilities for developing sophisticated AI applications.

  1. Installing LM Studio: Download LM Studio tailored to your system specifications from its official site. Run the application and ensure it’s correctly initialized by viewing the latest version and available models on the dashboard.

  2. Choosing and Loading a Model: Within LM Studio, select a model that fits your requirements. For instance, continuing with the 'phi-2' model, load the model and make sure the local server is started.

  3. Connecting to AutoGen Studio: Back in AutoGen Studio, set up a new model corresponding to what you've loaded in LM Studio. This includes entering the exact model identifier (e.g., TheBloke/phi-2-GGUF), the API key (lm-studio), and the base URL shown in LM Studio's server panel (by default http://localhost:1234/v1). Validate the setup by testing the model connection; the sketch below shows one way to look up the exact model identifier.
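
AutoGen Studio needs the exact model identifier that LM Studio exposes, and it is easy to mistype. If you are unsure what to paste into the model name field, you can ask the LM Studio server directly. This is a rough sketch, assuming LM Studio's default local server port (1234) and the requests package:

    # Minimal sketch: list the model identifier(s) LM Studio is serving so the
    # exact name can be copied into AutoGen Studio's model configuration.
    # Assumes LM Studio's local server is running on its default port (1234).
    import requests

    resp = requests.get("http://localhost:1234/v1/models", timeout=10)
    resp.raise_for_status()

    for model in resp.json().get("data", []):
        print(model["id"])  # e.g., TheBloke/phi-2-GGUF; use this string as the model name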

Creating Agents and Workflows

With both models configured in AutoGen Studio, you can now focus on building agents and workflows to utilize the capabilities of these LLMs fully:

  1. Creating Agents: Navigate to the ‘Agents’ tab, and create new agents for each model. Specify the appropriate model for each agent to ensure correct data handling and AI responses.

  2. Designing Workflows: Set up workflows that define how user inputs are processed and responded to by the agents. This setup controls how the agents interact within your applications.

  3. Testing and Deployment: Use the ‘Playground’ tab to send queries and test the responsiveness of your AI configurations. Tweak settings and workflows based on the responses to optimize performance.
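
If you later run into the LM Studio two-token completion issue mentioned in the video notes below, keep in mind that the same local endpoints can also be driven from plain pyautogen (pip install pyautogen), where that limit does not apply. The following is a rough sketch, not the exact workflow built in the video, assuming the Ollama endpoint and 'phi' model configured above:

    # Rough sketch: the same local model driven from pyautogen instead of the
    # Studio UI. Assumes the Ollama server and the 'phi' model configured above.
    from autogen import AssistantAgent, UserProxyAgent

    config_list = [
        {
            "model": "phi",                           # name of the model pulled with Ollama
            "base_url": "http://localhost:11434/v1",  # local Ollama server (LM Studio uses :1234)
            "api_key": "ollama",                      # placeholder; the local server ignores it
        }
    ]

    assistant = AssistantAgent("phi_assistant", llm_config={"config_list": config_list})
    user_proxy = UserProxyAgent(
        "user_proxy",
        human_input_mode="NEVER",      # run unattended for a quick test
        code_execution_config=False,   # no code execution needed for this check
        max_consecutive_auto_reply=1,
    )

    user_proxy.initiate_chat(
        assistant,
        message="List the top five rivers in Africa as a markdown table.",
    )

This mirrors the playground test in the video (the markdown table prompt), just without the Studio UI in between.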

AutoGen Studio provides a robust platform for integrating and managing open-source LLMs. By following this guide, you can harness the power of Ollama and LM Studio within your projects, leading to enhanced AI-driven applications. Whether you are developing complex AI solutions or experimenting with new models, AutoGen Studio equips you with the necessary tools to succeed.

[h3]Watch this video for the full details:[/h3]


Step-by-step setup guide for a totally local LLM with LM Studio and Ollama using AutoGen Studio. One thing to note: there is still a small issue where AutoGen Studio only receives two tokens back from LM Studio completions. I hope this gets fixed soon, but in the meantime I show you how to make the connection.

You can download the IDE I use, and set up the Conda environment, with the following downloads:

* PyCharm Download: https://www.jetbrains.com/pycharm/download
* Anaconda Download: https://www.anaconda.com/download

— — — — — — — — —

Don't forget to sign up for the newsletter below to get updates on AI, what I'm working on, and struggles I've dealt with (which you may have run into too!):

=========================================================
Newsletter Sign-up: https://bit.ly/tylerreed
=========================================================

— — — — — — — — —

Join me on Discord: https://discord.gg/Db6e8KkHww

Connect With Me:
* X (Twitter): @TylerReedAI
* GitHub: https://github.com/tylerprogramming/ai
* Instagram: TylerReedAI
* LinkedIn: https://www.linkedin.com/in/tylerreedai/

— — — — — — — — —

31 Day Challenge Playlist: https://youtube.com/playlist?list=PLwPL8GA9A_umryTQCIjf3lU6Tq9ioNe36&si=4XCDtT8ep1U6KjkR

GitHub 31 Day Challenge: https://github.com/tylerprogramming/31-day-challenge-ai

— — — — — — — — —

* Ollama Download: https://ollama.com/
* LM Studio Download: https://lmstudio.ai/

— — — — — — — — —

Chapters:
00:00 Welcome to the Course!
00:47 Studio Start
01:27 Ollama
02:04 Model, Agent, Workflow
04:53 LM Studio
06:54 Model, Agent, Workflow
10:11 Outro

If you have any issues, let me know in the comments and I will help you out!

[h3]Transcript[/h3]
Today I'm going to show you how to connect Ollama and LM Studio and create agents with them in AutoGen Studio. Let's see how it's done. All right, well, the first connection we're going to deal with is Ollama. Now just open up any project, and we first need to install AutoGen Studio. You can use this command, pip install autogenstudio, or if you already have it installed and maybe just need to upgrade, you type in --upgrade autogenstudio, hit enter, and let it upgrade. Once that's done, we need to actually run AutoGen Studio, so you type in autogenstudio ui --port and then any port number (you can use 8081 by default), hit enter, and it will open up a local server for you, so we'll just click this.

All right, now that we have it open, the first thing we need to do is actually run the Ollama server. In order to do that, you need to download Ollama at ollama.com; there's a big download button in the center, so just go and click that, download it, install it, and then run it. I'll have a couple of links in the description to videos that I've already done about Ollama, but once you download it, just run it; for instance, up here on my Mac I have a little icon showing that it's running. Now before we go back to AutoGen Studio, we actually have to download a model, so I'm coming back to PyCharm, and in my terminal (the first one is being used by AutoGen Studio) I can click the plus button to open up a new terminal. The command to download a model is called pull, so we'll type in ollama pull, and now we need to give the name of the model. There is a list on their website of all the models they have, but for my machine, which isn't the best, I'm going to download the phi model, so I'll just type in phi and it shows you it downloads. I already had it downloaded, so it only took me a few seconds, but it might take you a little bit longer depending on the size of the model.

Okay, now our setup is complete: we have started Ollama (which runs a local server for you), and we downloaded a model. So now we need to go back to AutoGen Studio and create a model for AutoGen. Back in AutoGen Studio, go to the Build tab, then go to Models, and at the top right click New Model. It'll have something default in this first input; just erase that, because this is going to be the model name, and what goes here is the name of the model that we just pulled through Ollama. I used the phi model, so I'll just type in phi, and that's it. For the API key, because we're using Ollama, you just type in ollama, and for the base URL you'll just copy this, so it's localhost port 11434/v1, and this is the connection to the local Ollama server. You can test that this works by clicking the Test Model button; it may take more than a few seconds, but it should come back that it succeeded, and there we go, model tested successfully. If you get a failure, make sure you have the URL for Ollama correct (you may have the one for LM Studio), make sure you typed in ollama for the API key, and that you have the correct name of your model. So we're going to choose Save. Now that we have our phi model, we're going to go to Agents, click Create New Agent, and I'll just call this phi assistant. All this stuff here is okay, but for the model we don't want to use the GPT-4 preview, we want to add our new phi model. Okay, and the last thing we need to do is create a workflow, so go to the Workflow section. At the top right, click New Workflow; we can name this Ollama workflow. We already have the user proxy agent, and that's fine, so we don't need to do anything with that, but for the receiver, the primary assistant, we don't want to use that one, we want to use the phi assistant. One thing you have to do here is click on another assistant, then go back and click the new one, and that registers it to actually use the phi assistant. Then we're going to come down here and click OK, then click OK again, and we have our workflow. So we created the model, then the agent for the model, and now we have a workflow. We're going to go to the Playground tab, under Sessions on the sidebar to the left, click New, choose our Ollama workflow, and click Create. You can type a message here and ask it any question (we're just testing it, right?), or you can come down here and try out one of the example prompts. I just want to do the markdown one, which says list out the top five rivers in Africa, their length, and return it back to us as a markdown table. Okay, so it finished, it took about 3 minutes, and it created this markdown table. Now I don't know if this is actually correct, and I'm not going to fact-check it, but the goal is that you were able to create a connection with Ollama, and we created the model, the agent, and a simple workflow.

Now let's move on to LM Studio. I do have other videos where I go a little bit more in depth with LM Studio, but if you just go to lmstudio.ai, there are three download links depending on the machine you have; click the one for yours and then run the software. Once you have the software running, it will look something like this; as of right now the latest version is 0.2.22. The front page has some of the newer models that it generally updates with the software, so here's Llama 3, the newer version, the 8 billion parameter instruct, here's Stable Code, here's Google's, here's Hermes 2, and so forth; there's a bunch here, but you just go and download one by clicking Download. Or you can search for models up here; we can type in llama, and once you type that in it's going to give us a bunch, so go ahead and choose one of these. Whenever you click Download, at the bottom here there will be a spot for downloads, and you can see the ones that I've already downloaded. Then on the left-hand side there's a sidebar button called Local Server; click that, and at the top here is where we start our local server to connect our agent to. At the top, select a model to load; choose the model that you want. Again, I'm just going to use phi-2 just to show you the connection, so I'm going to load phi-2; it's going to take a second, and then it automatically starts the server. If it doesn't start automatically for you, Start Server is a big green button, so just click it and it'll start. A couple of pieces of information you'll need here: as you can see, this is the URL we'll need to connect to the LM Studio server (it's going to be different from Ollama's, it's not the same), and then we'll need the actual name of the model. Like with Ollama I used phi, here you're going to need the exact name, which, if you scroll down a little bit in this example (the AI assistant Python example), here is the model, so you'll copy and paste this one: TheBloke/phi-2-GGUF.

Okay, so here's where I left off. I'm going to go to the Build tab, go back to Models, and now we need a new model, so at the top right click New Model. For the model name, this is going to be what I just showed you in the software, so you'll just paste that here; mine was TheBloke/phi-2-GGUF. The API key is lm-studio, and you can confirm it by clicking this, lm-studio. For the base URL, if we come back to the software again, like I mentioned, this is the base URL that we need, so we can just copy it, go back to AutoGen Studio, and paste it here. Then just test the model again to make sure that it's connecting to the LM Studio server, and here it is, it connected successfully. If you get a failure, make sure that you have the name of the model correct and the URL correct as well. Once you're done, hit Save. Now we need to go to the Agents tab on the sidebar on the left and create a new agent. We'll call this LM Studio assistant (it does not like spaces, by the way), and for the model we're going to get rid of this one and add TheBloke/phi-2-GGUF; this is the LM Studio model, that's the name of it. Once you're done, click OK. Now we have the new one right here, LM Studio assistant. Now go down to the Workflow section; we're going to create a new workflow, so at the top right click New Workflow, and we can name this LM Studio workflow. Again, the sender, user proxy, is fine, but for the receiver we need to change that, since we're not using the primary assistant, so click the dropdown; remember, just choose some other random assistant, click it again, and then choose the one that you want (until that's fixed, that's what you need to do right now). Then click OK, click OK again, and now we're done. We can go to the Playground and create a new session, so click New over here on the left, choose LM Studio workflow, and click Create. I just want to test that the connection is running properly, so I can say something like 'what is 5 squared' (if I can spell it correctly) and press enter. I know that it's actually connecting to LM Studio because it didn't fail yet, and also you can go back to the software and see the last message, 'what is 5 squared', so it got the message and it's trying to come up with a reply from the phi-2 model. And just so you know, with LM Studio and AutoGen Studio, you can see that the assistant only came back with '5 *'; I assumed it was going to say 5 * 5 = 25. There is a bug with LM Studio where it will only give you two tokens as the completion, meaning the output from the model that we downloaded in LM Studio; that's an issue right now. The connection works, we know that the connection works, but with AutoGen Studio there's this problem that they're fixing right now. There's a workaround where you can go into their database and fix the max tokens property, but I'm just going to wait until that's actually fixed so we don't have to worry about it. However, the connection does work, and if you use it with pyautogen, there is no issue with only getting two tokens back, just so you're aware.

Okay, thank you for watching; that is how you connect Ollama and LM Studio to AutoGen Studio. If you tried this a month ago, things have changed, so it might be a little different; this is the updated video. I have a Discord community in the description that you're more than welcome to join; we talk and discuss things AutoGen or just AI related. There's also a newsletter in the description where, if you sign up, you get a free newsletter every Sunday at noon. Here are some more videos on AutoGen; thanks for watching and I'll see you in the next video.