
Bridge the chasm between your ML and app devs with Semantic Kernel | BRK250

Bridging the Chasm between ML and App Developers with Semantic Kernel

In today’s rapidly evolving technological landscape, where artificial intelligence (AI) and machine learning (ML) continue to transform industries, the gap between AI researchers and application developers significantly impacts the deployment and integration of AI innovations into practical, scalable applications. This gap often results in siloed teams and hinders organizations from unleashing the full potential of AI technologies. Recognizing this critical barrier, Semantic Kernel offers a robust solution designed to bridge this divide, promoting a seamless collaboration between AI researchers and app developers. Here’s how Semantic Kernel is revolutionizing the interaction between machine learning and application development, enhancing productivity and innovation.

Introduction to Semantic Kernel

First announced publicly at Microsoft Build last year, Semantic Kernel has transformed the way organizations approach AI integration. Despite the dynamic changes in the AI landscape over the year, the vision of Semantic Kernel remains steadfast—empowering every app developer to enhance productivity and deliver exceptional services. Maintaining its commitment to accessibility and innovation, Semantic Kernel continues to be open source and MIT licensed, reflecting its community-driven approach to solving real-world problems.

A Year of Growth and Listening

Since its launch, Semantic Kernel has seen significant adoption, with over 1.1 million downloads reflecting the tech community's enthusiastic response. A major milestone was the v1.0 release of the .NET SDK late last year, demonstrating a commitment to keeping pace with cutting-edge technology. The ongoing dialogue with the community has been pivotal, shaping enhancements to the platform that include crucial enterprise components like security, monitoring, and compliance, ensuring that Semantic Kernel is ready for enterprise-level deployment right out of the box.

Seamless Integration Across Diverse Programming Languages

One of the fundamental challenges in AI application development is the disparity in the programming environments used by AI researchers and app developers. While AI researchers prefer Python for its extensive libraries and flexibility, app developers lean towards languages like .NET and Java, which are traditionally used for enterprise applications. Semantic Kernel addresses this challenge head-on by offering SDKs in Python, .NET, and Java, and ensuring compatibility across these platforms to maintain consistency in functions and processes.
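To make the cross-language idea concrete: because Semantic Kernel prompt templates are plain text in a shared `{{$variable}}` syntax, the same asset file can be loaded by the Python, .NET, or Java SDK without modification. The sketch below is a simplified, self-contained Python illustration; the template content is hypothetical, and the tiny renderer is a stand-in for the SDKs' real template engines, not the library API itself.

```python
import re

# A hypothetical prompt asset in Semantic Kernel's {{$variable}} template
# syntax. Because it is plain text, the identical file could be consumed
# by the Python, .NET, or Java SDKs.
SUMMARIZE_PROMPT = """Summarize the following text in one sentence:
{{$input}}"""

def render(template: str, variables: dict) -> str:
    """Substitute {{$name}} placeholders with values from `variables`.
    A minimal stand-in for the SDKs' template engines."""
    return re.sub(
        r"\{\{\$(\w+)\}\}",
        lambda m: variables.get(m.group(1), m.group(0)),
        template,
    )

prompt = render(SUMMARIZE_PROMPT, {"input": "Semantic Kernel is open source."})
```

Rendering the same template from any of the three SDKs produces the same final prompt, which is what makes the assets portable between teams.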

Simplifying the Transition from Development to Deployment

The transition from a proof of concept in AI to a fully functional application in a production environment is often fraught with challenges, primarily due to the differences in operational environments between researchers and developers. Semantic Kernel simplifies this transition by ensuring that assets created by AI developers can easily be handed over to app developers for deployment, without the need for extensive rework or understanding of complex AI models. This not only speeds up the deployment process but also reduces the potential for errors that can creep in during the translation of concepts.
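One concrete handover mechanism highlighted in the session is the OpenAPI spec: the AI team describes its service once, and the app team (or the model itself, via function calling) consumes that description. The sketch below uses plain Python and a made-up lights API (names are illustrative, not a real service) to show the kind of function metadata an LLM needs for tool selection; the real SDKs derive this automatically when you import an OpenAPI plugin.

```python
# Hypothetical minimal OpenAPI-style spec for a lights service, similar in
# spirit to the one demoed in the session. All names are illustrative.
LIGHTS_SPEC = {
    "paths": {
        "/lights": {
            "get": {"operationId": "get_lights",
                    "summary": "List all lights and their current state."},
        },
        "/lights/{id}": {
            "post": {"operationId": "change_light_state",
                     "summary": "Turn a light on or off, or set its color."},
        },
    }
}

def describe_operations(spec: dict) -> list:
    """Flatten a spec into (name, description) records: exactly the
    information an LLM needs to decide which function to call."""
    ops = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            ops.append({"name": op["operationId"],
                        "description": op["summary"],
                        "method": method.upper(),
                        "path": path})
    return ops
```

Because the spec is the shared artifact, the AI researcher's prototype and the app developer's production deployment consume the same description with no rework.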

Enhancing Collaboration through Community Feedback

Community feedback has been integral to the evolution of Semantic Kernel. Regular interactions through forums, webinars, and direct consultations have provided valuable insights that have been instrumental in shaping the roadmap and features of Semantic Kernel. This ongoing conversation ensures that the platform remains responsive to the needs of both AI researchers and app developers, fostering a community that is engaged and invested in the platform’s success.

Conclusion: Empowering Developers with Tools for the Future

Semantic Kernel stands at the forefront of a transformative movement in AI and application development. By providing tools that integrate seamlessly across different programming environments and simplifying the deployment of AI models into production systems, Semantic Kernel is not just bridging a gap but also paving a path towards a more collaborative and efficient future in technology development. Whether you are an AI enthusiast, a seasoned developer, or an enterprise looking to integrate AI into your applications, Semantic Kernel provides the tools and support to make the process as smooth and effective as possible. Join the revolution and start building smarter and more responsive applications today.

[h3]Watch this video for the full details:[/h3]


AI experts and enterprise development teams often use different tech stacks with different concepts and resource types. Now that Semantic Kernel is v1.0 in Python, C#, and Java, your teams have the opportunity to speak the same language. With Semantic Kernel, ML teams can create the same assets for plugins, planners, and agents, evaluate them, and then hand them to app dev teams to deploy. Come to this Cozy AI Kitchen session to learn just how easy it is to reuse AI assets.
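The planner/agent flow described above can be pictured as a loop: the model picks a tool by name, and the runtime dispatches it. The toy sketch below stubs out the model with a keyword match purely for illustration; in Semantic Kernel the real model response drives automatic function calling, so none of these names are the library's API.

```python
# Toy sketch of the automatic function-calling loop behind planners.
# `fake_model` is a stand-in for an LLM choosing a tool; the tool names
# and behaviors are hypothetical.
def fake_model(request: str) -> str:
    """Pretend-LLM that maps a request to a tool name."""
    return "change_light_state" if "light" in request else "no_tool"

TOOLS = {
    "change_light_state": lambda: "lamp turned on",
}

def run(request: str) -> str:
    tool = fake_model(request)
    if tool in TOOLS:
        return TOOLS[tool]()  # dispatch the planned tool call
    return "no suitable tool found"
```

The point of the sketch is the separation of concerns: the ML team owns the tool descriptions and prompts, while the app team owns the dispatch loop and its safety controls.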

𝗦𝗽𝗲𝗮𝗸𝗲𝗿𝘀:
* Matthew Bolanos
* Evan Chaki
* Hiro Kobashi
* John Maeda
* Dan Schocke
* Adam Tybor

𝗦𝗲𝘀𝘀𝗶𝗼𝗻 𝗜𝗻𝗳𝗼𝗿𝗺𝗮𝘁𝗶𝗼𝗻:
This video is one of many sessions delivered for the Microsoft Build 2024 event. View the full session schedule and learn more about Microsoft Build at https://build.microsoft.com

BRK250 | English (US) | AI Development

#MSBuild

[h3]Transcript[/h3]
[Music] All right. 363 days ago, we were right here at Build, announcing Semantic Kernel publicly to the world. A lot has changed with AI in the last year, but a lot has stayed the same: Semantic Kernel is still open source and MIT licensed, and our mission, to empower every app dev to increase productivity and deliver exceptional service, remains the same. We started our journey with .NET, and the response has been amazing: 1.1 million downloads of Semantic Kernel since we launched. Late last year we released v1.0 of Semantic Kernel for .NET to the world. We've also spent a lot of time listening to you, our community, and really learning along the way. With your feedback, we ensured the enterprise components you needed to go to production were available: you can deploy Semantic Kernel at enterprise scale with security, monitoring, and compliance hooks ready to go for you to use today. Our listening included things like our office hours, which we still host via Teams every week, one-on-ones with you, and on-site visits to really see how AI works at enterprise scale. And we saw something: we saw a divide. We saw that AI development includes more than just AI developers; it includes AI researchers who are pushing the envelope of what AI means today. To support that entire life cycle, though, AI developers need an easy way to take their POCs and transition them over to their app-dev counterparts, whose day-to-day includes things like supporting livesite, monitoring, and deploying to production. AI libraries today don't allow this transition between the two teams, and because of this there's a huge chasm between them when it comes to delivering real AI value to enterprises. Until today. We've spent the last six months closing this chasm, giving AI and app developers a seamless development experience that brings them together and leverages the best skills from both teams. So let's bring Matthew out to the stage so we can see what's possible today. [Applause] Well, thank
you, Evan, for sharing why we are so passionate about bridging this divide, this chasm between AI researchers and app developers, because I fundamentally believe it shouldn't be that way. There shouldn't be this divide; in fact, they should be working hand in hand as one team, because together they can best use their own capabilities. AI researchers are looking into the future, trying out the latest models and the latest strategies. Enterprise app developers keep the lights on; they build performant, secure, reliable services that create customer trust. These are two different skill sets that should work together, but they don't work together today because of the tools they use. AI researchers are predominantly in Python; your app developers are using .NET and Java. So the Python devs, once they get to the point where they're releasing an application, go, "Oh shoot, I have to do livesite; I've never done that before." Enterprise app developers, you feel like you're on the sidelines; you don't know how to do all this AI stuff as well as some of your counterparts. So there needs to be a bridge. We don't want you to have to learn a whole new set of skills to become enterprise app developers, or to learn the latest cool AI stuff in Python. We want you to work together and have a bridge so that y'all can talk to each other and collaborate. That bridge at Microsoft is Semantic Kernel. And this week is really exciting, not just because of Build, but because today we are announcing, with two blog posts out, the v1 versions of both Python and Java. That means all of our SDKs, Python, .NET, and Java, are all v1-plus now. Concretely, one: that means we're not breaking you. I don't know any other AI SDK that is committed to v1 and not breaking folks. That means you can build something today, and a year or two from now you're not going to have to do a refactor. Two: if you do use us across your different teams, you can align on concepts. We figure out what an agent is; we figure out what a plugin is, so y'all don't have to have those semantic arguments; you can focus on building things for your customers. And the last one, going back to that bridge metaphor: we made sure that each of our v1s has the same inputs and the same outputs. So if you have a prompt, or you have a plugin, and you use any of these, they'll work the same. That's huge, because your AI developers can do what they do best: build POCs, figure out what's coming next, make sure your company stays relevant. And once it's ready to go to production, they don't have to figure out DevOps, and they're not angering their enterprise devs by asking them to refactor and rebuild everything from scratch; they're just passing over assets. So to show you just how easy it is to go from Python to .NET, I'd like to invite John Maeda onto the stage, and we're going to have a special Mr. Maeda's Cozy AI Kitchen on stage today. Hello everyone, honored to be here, because, you know, my function has been to help more people understand AI and be less scared of it. You may know the famous Semantic Kernel toaster from Mr. Maeda's Cozy AI Kitchen. You may know that the GPT-4 model has been upgraded (notice the eyeballs), and we also have the jumbo jar of personas to represent the agent era, embeddings, etc. But in the process of really creating more AI chefs out there, I am humbled, because I have sat two desks away from Matthew over here, truly a master chef. All of you know people out there who learn a craft; one day someone can learn the craft better than you and kick your behind. That is Matthew. Matthew, I hand over to you the AI Chef apron, officially, to continue with your cooking. Thank you. I'm actually kind of shocked; I knew this was going to happen, and I actually feel kind of emotional, so thank you, John. So yeah, let's do some cooking. I'm going to open up my computer here and we're going to switch, so theoretically you should now... oh, y'all can see my screen. So, like any great cooking demonstration, we are going to take a
look at the ingredients. I am pretending to be an AI researcher, so I'm coding in Python; don't worry, we'll get to .NET and Java in a bit. Here we have our kernel, our toaster. We can add in our different services, and we're going to be adding in plugins in a bit. And this is kind of the secret sauce: this is where we send in requests to... oh no, you're actually seeing the final stuff, go away, go away. So this is where the magic happens, where we actually send requests to the AI and have it do things. Everything else on this screen is just loading chats from MongoDB, saving them, and powering my UI. Speaking of which, let's go ahead and just do a quick hello. You've seen this a thousand times; nothing special here. Now, you'll notice on the stage I have a lot of props, a bunch of lights, and what I'm going to do is actually throw a party on stage in celebration of our v1 of Python and Java. To help, I want the AI to turn on the lamp. Now, as many of you know, LLMs are just LLMs; they don't have capabilities on their own, so it's going to say, "I don't know what to do." And this is where plugins come in. That code that you shouldn't have seen, but you did, so you got a sneak peek, is just that. This is what it's like to load in a plugin. There are many different ways of loading plugins, but this is the way to grab an OpenAPI spec, and it's my favorite way because, I don't know about y'all, but this is what I'm familiar with as an app dev; it's the way to document enterprise APIs. Here I have my lights API: I can retrieve lights, I can change lights. It's everything my AI needs to know. And just as we create these specs so that humans can understand how to use the APIs, it works exactly the same way for the AI: we know the description of what it does, we know what the inputs are, we know what the outputs are, and we take all of that information, in these few lines of code, and give it to the AI. So now, when we ask it to turn on the lamp, if we pray to the demo gods... it's on! It's on.
So I want to have it turn on the rest of the lights. Y'all are going to have to tell me if these things actually turn on, where I can't see. Okay, there we go, there we go, great. But no party is great with very harsh white lights; we need to set the mood. And this is where I want to introduce RAG and how we think about it within Semantic Kernel. RAG is, obviously, retrieval-augmented generation: you retrieve information in order to improve the response of the AI. In this case, we need to retrieve some really lovely colors so we can set the mood. For that, again pretending I'm an AI researcher, I have built another (I'm just going to delete some stuff) API. This is a Flask app; I have a scene endpoint, and what it does is generate an image with generative AI: it takes in a prompt, creates the image, and then I can grab colors from the generated image. Because this is an API that was built by a Python dev, I can go to its own OpenAPI UI and even test it out. You can see the inputs here; I'm asking it to generate some colors based on the theme of Microsoft Build, how fitting. So I can go ahead and execute this; it's going to think a bit, and what I hope you notice is that it's kind of slow, because we're asking this thing to generate an image just to give us some colors, and if we ran this at scale, that's not good; all we did is spend all that time just to generate, and I purposely used a bad model. This... oh no, can y'all see it? Y'all can't see it? Okay, so for some reason y'all can't see that picture, which, I guess, is fine. Lordy. Okay, we will keep going. There was a really bad picture, but it had the colors that I need, and that's all that matters. Now, to improve the actual speed of things, this is where the concept of vector DBs comes into play. How many of you have heard of vector DBs and actually use them? Okay, great. We are going to be using this to store this
information, to make it faster to retrieve this image. It's just like how you use a vector DB to retrieve information from your documents and use it to populate the context of the AI. So what I'm going to do here is add those bits back. At the top here I'm generating an embedding, which is a way of understanding how semantically similar something is and then trying to find a match; it's kind of like a cache. And at the end here we are actually saving whatever the images are. So if we come back to our service and run the scene one more time... looks like you are able to see my screen now? Oh, thank you. It comes back instantly, so that's great. And since y'all missed it last time... I think it's just this one page that kills it... we are not going to look at that image; the image is cursed, it's a cursed image, y'all. Okay, you know, if that's the worst thing that happens during an AI demo, I will be happy. One thing to note is that, as an AI dev, I probably want to run things locally; I want to develop fast, so I am using Weaviate. Weaviate has this really great capability of letting you create a Docker container, so you can get it spun up locally, and that makes sense. But once you pass it over to your enterprise devs, that's when you would use either the Weaviate service or Azure AI Search. So we've set all this up; we can come back and load in some additional plugins. Obviously we now need to add the plugin that will retrieve the scene, and I'm going to add a speaker plugin, because we also want some music. So I'm now going to ask it: "please play python.wav, but first set the mood with the lights." Okay, so we're going to send this over and give it a bit. This is using the native planning capabilities of the model; it's using automatic function calling under the hood. What it's doing is first asking, "hey, what colors should I use for the mood?" (hopefully it comes back with colors that fit Python), and then it's going to play the music. So... oh, it changed the lights, great. And it should be playing a song; I don't hear it, so maybe the audio is busted, but it does say it succeeded. I know for a fact that if it had failed it would clearly tell me; the AI is really good at letting me know when it did not do a good job. So we're having some fun AV equipment issues, but that is okay. Alrighty, so that's Python, and now we want to move it into some of our other enterprise applications, like Java. So I'm going to pull open what Java looks like, and it should start to look similar: we have the same ingredients, we have the kernel, we have our plugins, we have our services. When I first proposed this talk, I thought, "oh, I'm going to literally copy and paste files," but where I imagine the future actually moving is that instead you have either monorepos, or you use Git actions to create pipelines that aren't MLOps and DevOps but a combined ops, where all these things come together. Here you can see I have this folder of plugin resources that all of my devs, whether they're Python, .NET, or Java, can use. And again, that reinforces that, as an AI dev, I can make these changes, push them up to Git, and then everyone else on my team can use them in their respective language. That's really, really powerful. Okay, I'd love to get to .NET, but we're going to take a pause and hear from one of our partners about how they are using Semantic Kernel. So without further ado, I would like to invite Hiro Kobashi from Fujitsu to speak about composite AI. Thank you very [Applause] much. Thank you very much, Matthew, for having me here. It's a very great pleasure to talk about the collaboration between Fujitsu and Microsoft. Just in case you are not familiar with Fujitsu: Fujitsu is one of the largest ICT companies in the world, and we are number one in the Japan market. Our main business is B2B, and we provide many kinds of ICT services to customers. These days the trend in customer requirements is about AI, of course, and especially after the advent of generative AI the demand is getting higher and higher. Therefore, our challenge is how to streamline AI delivery to customers. That is the reason we recently developed the Fujitsu Kozuchi platform, which makes AI delivery efficient, from research and development through to product, and we are already providing many kinds of technology through this platform. That is good; however, there are two issues in front of us. One is the number of AI technologies: as you know, every single day we encounter new AI technologies, so even when we get a customer request for AI, it's very difficult to identify which AI technology is right for the customer. How to solve that is one issue for us. The second issue is the diversity of customer requests. As I said, Fujitsu is a big B2B company, which means our customer
base is spread across multiple business domains. Even if we create a good AI system for one customer, that system doesn't work for the others, so we somehow need to solve these issues in our AI solutions. That is the reason we are collaborating with Semantic Kernel, and we have developed a new AI system called composite AI. Composite AI is a system that understands the business challenge via chat, I mean, in natural language, and automatically suggests solutions using the best AI models and data. On the right-hand side is a very simple architecture diagram of composite AI. It consists of three parts: the main part of composite AI, the model lake, and the data lake. As you can imagine from the words, the model lake and data lake are stores: the model lake stores AI technologies, and the data lake stores data. By utilizing these AI technologies and data, we create very nice solutions on top of Kozuchi; that's the composition. I want to give you more detail on how we are utilizing Semantic Kernel with this slide. On the left there are the users, and the user gives composite AI a request. The request goes through the planner, which is implemented with Semantic Kernel functionality, of course, and once we ask the planner, we get a plan. That plan is passed to the composition part, which tries to create an AI solution. In this case, I assume the plan consists of three subtasks: the first is AutoML, which is traditional machine-learning technology; the second step is optimization, which we call Opto AI; and the third step is visualization. By combining these modules we can provide a good solution. These subtasks are assigned to task agents, which are implemented with Semantic Kernel again, and the task agents are connected to the model lake and data lake. Once all the results have been produced, they are combined into one and passed back to the users. This is a very simple sequence, but this is pretty much everything we are doing in composite AI. [Matthew:] I love fusing new LLMs with these traditional models, because LLMs are not perfect at everything, but you're able to fuse them together. With composite AI, you're using Semantic Kernel planners to decide which traditional models to use to get the final answer? Yes, everything is based on the customer request: the customer gives us context, and we decide which one is best to utilize. And that is very good, actually, because no one knows how to use the AutoML or Opto AI tools, but Semantic Kernel helps us proceed. The most important thing I want to say today is that composite AI is not a concept; it's a functional reality. I brought two use cases, two real use cases, and I'll show you the demo video later, but let me explain the use cases in detail first. The first use case is Nakayama Transport, a logistics company in Japan. Logistics companies in Japan are facing the so-called "logistics 2024 issue": last April, new rules were introduced in Japan, with many complex regulations for logistics companies. If they want to create an efficient driver assignment or plan, it is very difficult to do manually. That is the reason they came to us; we utilized composite AI to create a plan, and they use that plan for their purposes. The second use case is our own: Fujitsu is of course utilizing composite AI in our business. As I said, Fujitsu is a big B2B company, and we have a service desk among our capabilities. At the service desk we receive many incidents from customers, and we need to carefully think about which incident must be assigned to which agent; otherwise we may violate the SLA and need to pay a penalty to the customers. For such things, composite AI suggests a very nice solution: a composition of prediction and optimization. In prediction, we do incident-completion prediction, meaning we can estimate how long each incident will take, and based on that result we optimize the assignment of the incidents. So, demo time? Yep, let's do the demo. This is the start page of composite AI, and as I said, there are the two use cases here. Okay, great. There are two data sources in the data lake and 34 AI technologies in the model lake now; these 34 technologies are stored in Kozuchi, which is another AI system at Fujitsu, so composite AI utilizes these technologies based on the request. Let's go inside the Nakayama Transport case first. This use case is already done, so this is the completion page: you can see the overview here, the task division here, and the constraints here; they were used as the context to create the plan for the optimization. This is the result we got from composite AI: the vertical axis means trucks, each truck has boxes, and the boxes are driving sections, so one truck covers several sections here, and this is well aligned with the schedule, so Nakayama Transport can use that plan for their purposes. For the second use case, as you can see, it's not started yet; there's just a little context here, so I want to give more context to composite AI through the chat. You can give it more context on what you want to do, and based on the input, composite AI will understand what it can do. However, sometimes composite AI
needs more context, so it replies back to us, and I give it more context about what I want to do. In this case I give some agent characteristics and some concepts of the jobs, and then composite AI understands what it should do; once it understands, it starts the planning, something like this. This is the plan, produced in JSON format, and of course you can see it in the dashboard. This pipeline is quite simple: it loads data from the data lake, passes it to the AutoML, and optimizes it using the Digital Annealer. [Matthew:] And I love this, because you're basically making planners your own; you have a UI on top of it, and I guess what we're seeing right now is that you can edit it? That's right, changing the module: if you don't want to utilize the Digital Annealer, you can change it to another one, something like this; it's very easy and very quick to do. And once everything is complete, you just say yes, and the process goes on. That process runs with the planner and the task agents, which try to execute everything. Once everything happens, in this case, the data is loaded from ServiceNow, passed to the AutoML, and the AI does the optimization for us. This is pretty much everything, and of course we can dig inside, like this one, the AutoML part: this code was synthesized by generative AI. [Matthew:] So repeat that: the AI wrote this code and executed it, so no human is trying to figure out how to use these traditional AI models; the AI is doing it? The AI did it. And also, we did the formalization: you know, in optimization we need a formalization beforehand, but that is quite difficult for non-experts. In this case it's generated, so composite AI can do it for you automatically. The most important thing I want to show here appears on the right-hand side; go down a little bit, to Opto AI. As you can see, there are some
status entries here, and it finished successfully, but you can see some errors here; this error was automatically revised inside composite AI, which is a very nice piece of functionality. And once everything has been done, we get this kind of result. This is the assignment of agents for each incident; the tops are aligned, meaning these five agents can finish at the same time. That is everything I wanted to show you here. Oh, and one more thing: we published a Microsoft blog post yesterday; if you are interested in the Semantic Kernel collaboration with composite AI, please check it out; we also have a white paper. [Matthew:] Well, thank you. Before you go, though, we did have... yes, planners and personas, so you are now truly an AI Chef. Thank you very much, thank you. So let's go back, and as I promised, we'll quickly do a .NET demo. What I want to show off is actually really inspired by what Hiro just demoed for us. At the very end he showed how the AI was able to write Python to complete some fairly complicated tasks. So what I have done is spend some time talking back and forth with my AI yesterday. Let's see. What we can see here is me having a back-and-forth conversation with the AI about how it can help me with this particular demo. What I wanted to do is have it synchronize the lights to the music; because audio is not working, we're going to skip that part of the demo. But what I really want to highlight is how the AI is able to use Python. One thing we've learned from customers like Fujitsu, and from projects like TaskWeaver, is that the best way to have the AI create and execute plans is with Python. As we scroll through this chat, we can see how the AI is able to use things like traditional models like librosa, how it can use my plugins, like my lights down below, and when it runs into issues, yeah, sometimes I course-correct it every so often, but it's typically smart enough to fix those problems itself. Now, traditionally Semantic Kernel has shied away from using Python as the language for that planning capability, but I am proud that Microsoft, in the last week, has shipped as part of Azure Container Apps a new feature called dynamic sessions. How many of you know what Code Interpreter is? Okay, so we have a few people. In the Assistants API, the AI can actually write and execute code. What's special about dynamic sessions is that it's the technology that powers all of that, and they have now released it as a service that all of you can use. So all of this code that was run, executed, and tested by the AI was actually done by that service, and that's something you can now use. It's fully locked down; it can't make HTTP requests unless you want it to. But what's powerful with Semantic Kernel is that you can make it talk to your plugins, so you can do things like change the lights. Okay, so with that, one last thing. Even if we did see the demo working, I'm not going to risk it, because clearly there are very evil spirits in this room. If we were doing this, there is a risk I could have the lights actually strobe and give someone a seizure, and that's not good. So one of the pieces that's important as part of responsible AI is having control over what the AI ever does in the real world. What we've recently been introducing to Semantic Kernel is this concept of hooks, or filters, and this is key to enterprise production deployments; it's hard to do in many other SDKs, because they don't think about these things. Every single time the AI tries to invoke a function, this filter will be called, and you can see I have this nice little case: if the function being called is change_light_state, I can actually do a detection, like, "is this too fast? am I strobing? am I doing something dangerous?", and I can stop it. And if you think about enterprise scenarios, let's take a step back: you have your Swagger APIs that you can use; that's something you already have. In terms of safety, you can use these filters to make sure the AI isn't doing something inappropriate, like spending too much money or sending an email when it's not supposed to; you can loop in a human. This is powerful; this is basic 101 stuff if you want to ship a safe AI application. And I think that's kind of a motif we're hearing here: hooks, safe Python dynamic-session containers; it's about the safest way you can do things, using your own APIs. This is the enterprise story that Semantic Kernel is trying to provide. So with that, we're going to round things off with one last customer: we're inviting Accenture to the stage. They had a quick video they wanted to play first; I want to go ahead and play it, and hopefully the audio works... are we having any audio? No. So in that case, I'm going to pause this. Please help me welcome Adam and Dan to the stage. Thank you, thank you. So, Adam, you're the Chief AI and Data Architect for all of Accenture; Dan, you're a director. It's great to have you. Can you tell me about how you're using it, since our video unfortunately did not work? Yeah, absolutely. So I lead our strategy and architecture for Accenture Global IT, meaning what we do internally, how we run the Accenture business, and a lot of the time what we try to do is become the first, best credential for what we can then go deliver to our clients. Our flagship gen AI product internally is called Amethyst, and we're using Semantic Kernel at
that orchestration, planner level to really build out an AI assistant. Accenture is a very large company, about 750,000 people, with a lot of different groups and a lot of different organizations. You can imagine navigating a company that large can become very cumbersome and challenging; trying to identify the right human to ask is very, very difficult. So we were able to use Semantic Kernel and gen AI to develop that RAG, and we have quite a few plugins to be able to go get quick tasks done. In the video we were going to show: if you need to go identify your PTO time, if you need to identify what case studies or clients we have done work with in specific industries or areas, or who the experts are we need to identify, we're able to really build up this dynamic set of requests and responses and plans, leveraging a lot of our existing APIs throughout the enterprise.

That's great. So Accenture is a big place; you'll have a lot of different development teams. I'm assuming folks use Python, Java, .NET. Is that right? What's the AI development story across all these very different teams?

Yeah, for sure. I think like a lot of other companies, we've had our ups and downs in trying to figure out the right balance between our data scientists, our AI researchers, and our app developers. My group spends a lot of time building automated templates, CI/CD tools, and gated pipelines that just don't let bad code into production. And often there's a lot of friction between my team and our AI researchers, who are very used to things like notebooks and workspaces and running a little Python script. They produce some really wonderful stuff, and then they get really upset with me when I say, that didn't pass a security scan, you can't release it to production.

Yeah, and it's developing that culture, right, and making
the two types of developers kind of understand each other.

Yeah. So for us, something like Semantic Kernel being available in multiple languages matters. Our AI researchers like working in Python; it's their number one. And the same exact code, the same exact features, are available in something like C#, which I much prefer in terms of scale and efficiency and energy usage: I can spin it up quickly, I can run it in serverless mode, so that's way better from my perspective. What we ended up relying on a lot of the time was a very small set of developers who knew both, could put a foot in each world, and do that translation from Python methods to something more like C#, a traditional web API. With Semantic Kernel releasing on both sides, that's a much more natural fit. We don't have to do transpilers; it's one less thing for folks to have to learn to create that trusted culture. Take away the technology bit, and now it's about processes.

So maybe one last question. Y'all have customers too; they probably use a bunch of different languages and have different processes. Can you tell me more about how Accenture is helping customers work with AI?

Yeah, absolutely. As I mentioned before, a lot of times we go out and we're that first best credential for our clients, so a lot of the work we're doing here is turning into assets that we can then go deliver on. The beauty of Semantic Kernel is its ability to span all the different needs of our diverse clients. Internally we do a lot of .NET, a lot of our clients do Java, and the vast majority of the AI work happening is happening inside of Python, so I think we're able to accommodate a lot of that. And as you all know,
when you get into these enterprises and these IT shops, there are a lot of existing processes already designed for things like .NET and Java. Being able to have the patterns and the assets to start treating these plugins and these services and these agents as just microservices, which can adapt and leverage all the existing operational tooling, performance, and security that already exists, is a huge win for us and for our clients.

Perfect. Well, thank you for joining us at Build. A big round of applause for Dan and Adam. [Applause] This apron is for you. And I love the Accenture story, because y'all are able to build assets over time that can work across all of your customers, no matter what language they're using. So thank you.

We have a few minutes left, so I'm going to be inviting Evan, my boss, back to talk about how you can join our community and how you can become a master chef yourself. Evan, take it away.

Awesome, thank you, Matthew. All right, we'll try this and see. We had an amazing community video, so we'll post it online on our blog; you have to check it out. A lot of people responded to our call to tell you about what's going on with Semantic Kernel, so you can check out our blog and see the whole thing; we'll post the long video. Thank you to the community for doing this. It's really amazing to hear these stories: it's awesome to hear directly from customers, it's awesome to hear from the community and to see what they're creating with AI. Now it's your turn. You can be an AI chef, and we want to see what you create. It's never too late to start that AI journey. Start today, get involved, join the community, and we can't wait to see what you're doing and showcase it on stage a year from now, to have you on the videos, to have you on stage talking about your
AI journey. So thank you again, and thanks for coming. [Applause]
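To make the hooks-and-filters idea from the talk concrete, here is a minimal Python sketch of a filter that runs on every function invocation and can veto dangerous calls, such as strobe-speed light changes. This is not the real Semantic Kernel API; the `MiniKernel`, `anti_strobe_filter`, and `change_light_state` names are invented for this illustration.

```python
import time


class FunctionBlockedError(Exception):
    """Raised by a filter to veto a function call."""


class MiniKernel:
    """Toy kernel: every invocation passes through registered
    filters, which can inspect the call and block it by raising."""

    def __init__(self):
        self._functions = {}
        self._filters = []

    def register(self, name, fn):
        self._functions[name] = fn

    def add_filter(self, filter_fn):
        self._filters.append(filter_fn)

    def invoke(self, name, **kwargs):
        for f in self._filters:
            f(name, kwargs)  # a filter raises to block the call
        return self._functions[name](**kwargs)


# Safety filter: block light changes that arrive faster than 500 ms
# apart, which could produce a strobe effect.
_last_change = {"t": 0.0}


def anti_strobe_filter(name, args):
    if name == "change_light_state":
        now = time.monotonic()
        if now - _last_change["t"] < 0.5:
            raise FunctionBlockedError("light changes too rapid; blocked")
        _last_change["t"] = now
```

A filter like this sits between the AI's plan and the real world: the first `invoke("change_light_state", on=True)` goes through, while an immediate second call raises `FunctionBlockedError` and the light never changes. The same shape covers the talk's other examples, such as spending caps or routing an email to a human for approval.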
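The code-interpreter pattern behind dynamic sessions (the model emits code, and a locked-down executor runs it and returns the results) can be sketched very roughly as follows. This is only a local illustration using a restricted `exec`; a real deployment would use an isolated sandbox service such as Azure Container Apps dynamic sessions, never in-process `exec`. The `run_untrusted` helper is invented for this sketch.

```python
def run_untrusted(code: str) -> dict:
    """Run model-generated Python with a minimal builtin surface
    and return the variables it defined. A toy stand-in for a real
    sandbox: no file, network, or import access is exposed."""
    allowed_builtins = {"range": range, "len": len, "sum": sum,
                        "min": min, "max": max, "abs": abs}
    # With __builtins__ replaced, open() and __import__ are absent,
    # so file access and `import` statements fail inside the snippet.
    namespace = {"__builtins__": allowed_builtins}
    exec(code, namespace)
    namespace.pop("__builtins__", None)
    return namespace
```

For example, if the model emits `result = sum(x * x for x in range(5))`, the executor hands back a namespace containing `result`, while a snippet containing `import os` raises `ImportError` because `__import__` is not exposed. The managed service adds the pieces this sketch cannot: process isolation, resource limits, and the optional outbound-network lockdown mentioned in the talk.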