Sidecar Sync

The Death of Prompt Engineering (?) & OpenAI’s Operator Agent | 57

Amith Nagarajan and Mallory Mejias Episode 57

Send us a text

In this episode of Sidecar Sync, hosts Amith and Mallory explore whether prompt engineering is becoming obsolete with advancements like Stanford's DSPy and Zenbase, and discuss OpenAI's upcoming Operator agent, which promises to revolutionize how AI integrates with workflows. From understanding effective communication with AI to weighing the risks and rewards of emerging AI agents, this episode is packed with actionable insights for professionals navigating AI-driven transformations.

🔎 Check out the NEW Sidecar Learning Hub:
https://learn.sidecarglobal.com/home

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecarglobal.com/ai

🛠 AI Tools and Resources Mentioned in This Episode
Stanford's DSPy ➡ https://stanford.edu/dspy
Zenbase ➡ https://zenbase.io
OpenAI Operator ➡ https://openai.com
ChatGPT ➡ https://openai.com/chatgpt
Claude ➡ https://anthropic.com

Chapters
00:00 - Introduction
05:45 - Is Prompt Engineering Dead?
07:59 - The Evolution of AI Communication
14:15 - Cultivating Communication Skills & Creativity
18:42 - OpenAI Operator and its Upcoming Capabilities
23:42 - Privacy and Security Concerns with AI Agents
28:16 - Guidelines for Using AI Tools Safely
33:16 - Microsoft Copilot and Integrations for Associations
35:24 - Closing

🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Amith:

I think this is where people are going to get their first taste of automation, but when the model starts weaving its way into other aspects of their workflow, working for them in various ways, that's going to be, I think, a pretty interesting kind of general-purpose capability that will get exciting. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights, and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Greetings everybody, and welcome to the Sidecar Sync, your home for all the content you can possibly stand related to artificial intelligence and associations. My name is Amith Nagarajan and my name is Mallory Mejias.

Amith:

And we are your hosts and we can't wait to get into another exciting episode with two fantastic and super timely topics in the world of AI and associations. But before we do that, let's take a moment to hear a quick word from our sponsor.

Speaker 3:

Introducing the newly revamped AI Learning Hub, your comprehensive library of self-paced courses designed specifically for association professionals. We've just updated all our content with fresh material covering everything from AI prompting and marketing to events, education, data strategy, AI agents, and more. Through the Learning Hub, you can earn your Association AI Professional certification, recognizing your expertise in applying AI specifically to association challenges and operations. Connect with AI experts during weekly office hours and join a growing community of association professionals who are transforming their organizations through AI. Sign up as an individual or get unlimited access for your entire team at one flat rate.

Mallory:

Start your AI journey today at learn.sidecarglobal.com. Amith, how is it going today? We haven't recorded a podcast together in a couple of weeks here.

Amith:

Yeah, it's been a bit of time, so I'm doing great. How about you?

Mallory:

I'm doing really well. I have some big news on the personal front that I wanted to share with our Sidecar Sync listeners and viewers: I got married on November 9th (not me forgetting the date already), and then I went on my honeymoon with my husband to Mexico City. So that's where I've been for the past week or so. I just got back into the office yesterday. We had a fantastic time. The wedding was great. That was back in New Orleans with all our friends and family, and then we had a truly exciting, amazing few days in Mexico City as well, eating some of the best food ever.

Mallory:

I will say, Amith, I don't know if you've been to Mexico City, but the food is fantastic.

Amith:

I was in Mexico City once when I was a kid, like single-digit years. I don't remember exactly when.

Mallory:

We had a few days where we didn't think about anything else. But I'm happy, I will say, to have the wedding behind me. As fun as it was, it was quite a feat to plan so many little details, and when we're already dealing with events, you know, in the world of Sidecar, it's kind of a crazy thing to also have personal events going on simultaneously. So I'm happy that November 9th is behind me.

Amith:

Yeah, it's pretty amazing. You managed to pull off Digital Now and then, a couple of weeks later, got married. And if my planner brain was kicking in if I was doing something like that, I might say, hmm well, I wonder if I can combine the two and get a better deal.

Mallory:

No, no, no, you're hearing it right now.

Amith:

We could have had a session just for you guys at Digital Now.

Mallory:

That might sound like my worst nightmare, Amith. I don't want our first one-year anniversary to be combined with Digital Now in Chicago. It might have been a good deal, but what I will tell you is that it's all relative, because so many of my friends said, oh, a wedding must be a lot of details to take care of. But I said, well, after putting on Digital Now, honestly, the wedding was easy. I hate to say that, and I'm sure for some people it is not easy at all, but not having to coordinate all these sessions and speakers, honestly, it was one night, a few hours. So I would say it's all relative there with the event stuff.

Amith:

Now, mallory, the most important question in the context of this podcast I can ask you about that is did you use AI at all to help you with your wedding?

Mallory:

Oh man, I was not expecting this question. I don't know that we did. I hate to put it out there, but I don't know that we used any AI. My husband is an avid AI user. He loves a good ChatGPT and Claude session, so maybe he did on his end. We did write our own vows, but I explicitly stated we were not allowed to use any AI models to write them. So allegedly we should be good there. But no, I don't think we really did in the planning process.

Amith:

What a shame. Well, maybe not. You know, there are definitely domains where it makes sense, because you want to really get into all the details. I think perhaps the opportunity with AI is that what makes us uniquely human is what we get to focus on, right? Where we get to spend more of our time, because AI takes care of a lot of the other things.

Mallory:

So interesting. That's true, and a really good segue into our first topic of the day, which asks a kind of hard question: is prompt engineering dead? We're going to follow that up with topic two of the day, OpenAI's Operator agent, which should be released soon. So, the first question: is prompt engineering dead?

Mallory:

We've all seen how important prompt engineering has become in the AI world. If you're not familiar with that phrase, prompt engineering is the art of knowing exactly how to phrase your requests to get the best results from AI models. It's become a skill that some people have built entire businesses around, so it's been pretty important over the last few years. But now we're seeing something fascinating: the emergence of tools that could automate this process. Tools like Stanford's DSPy and startups like Zenbase are developing systems that use AI to optimize prompts automatically. These tools can test thousands of variations, learn from successful outputs, and systematically improve results without human intervention. So, instead of manually crafting prompts, developers can now show these systems examples of good outputs, and the AI will reverse-engineer the optimal prompt.
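The loop Mallory is describing, showing the system examples of good outputs and letting it search for the prompt that best reproduces them, can be sketched in a few lines. This is a toy illustration, not DSPy's or Zenbase's actual API: the model call, the scoring metric, and the two prompt variants are all made up for the example.

```python
# Toy sketch of automatic prompt optimization: generate prompt variants,
# score each against known-good example outputs, keep the best scorer.
# run_model is a deterministic stand-in for a real LLM call; a real
# optimizer would call an actual model and search far more variants.

def run_model(prompt: str, text: str) -> str:
    # Hypothetical stub: a "first sentence" style prompt extracts the
    # first sentence; any other prompt just echoes the input.
    if "first sentence" in prompt:
        return text.split(". ")[0] + "."
    return text

def score(output: str, expected: str) -> float:
    # Crude F1 overlap between output words and expected words.
    out_w, exp_w = set(output.lower().split()), set(expected.lower().split())
    common = len(out_w & exp_w)
    if common == 0:
        return 0.0
    precision, recall = common / len(out_w), common / len(exp_w)
    return 2 * precision * recall / (precision + recall)

def optimize_prompt(variants, examples):
    """Return the prompt variant whose outputs best match the examples."""
    def avg_score(prompt):
        return sum(score(run_model(prompt, x), y) for x, y in examples) / len(examples)
    return max(variants, key=avg_score)

examples = [
    ("AI agents are coming. They will change workflows.", "AI agents are coming."),
]
variants = [
    "Summarize the text.",
    "Reply with only the first sentence of the text.",
]
print(optimize_prompt(variants, examples))
# → Reply with only the first sentence of the text.
```

Real optimizers work similarly in spirit, but they typically use an LLM to propose the candidate prompts (and few-shot examples) and evaluate them against a metric the developer supplies.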

Mallory:

So this raises interesting questions about the future of prompt engineering. If AI can optimize its own prompts, will human prompt engineering become obsolete, or will it evolve into something new, perhaps focusing more on defining what good looks like rather than crafting the perfect instructions? Amith, this is an interesting question for us at Sidecar because, as you all know, we have an AI prompting course. So I hope, in this moment, that prompt engineering is not totally dead. What are your initial thoughts?

Amith:

Well, we do have our AI prompting course for only $24. People can jump on and check that out. It's a great introduction to AI, and the reason we still have it up there is because it's still relevant. Prompting as a defined discipline, if you will, probably won't exist down the road. But will you need to be able to communicate effectively with AI? The answer to that question is absolutely yes. What's happening with AI and prompting is the same thing that's happened with computing since the very beginning.

Amith:

You know, computing started out with very esoteric and difficult ways of communicating with computers. The first computers you programmed with physical switches, and then a big innovation came out and there were punch cards. After that there were some basic ways of inputting text, and programming languages evolved, but they started off very low level and super cumbersome to use, with things like assembly language, and then we went into higher and higher levels of languages. If you think about the history of computer languages, back in the 60s we had something called BASIC, which was the first kind of human-readable computer language that really took significant market share, and people who weren't deep computer scientists started using it. That progress has continued and continued, and computer languages have kept getting easier to use and smarter. The big deal back in October-November 2022, when the ChatGPT moment occurred, was that the general public first had a taste of what it's like to communicate at a pretty high level of resolution with a computer through natural language. That first interaction was a major step change for everyone to say, hey, wait, you can actually talk to these things without knowing how to code or without knowing the correct sequence of clicks in a particular application. You could just literally type something and get a response.

Amith:

Now, of course, the first versions of that were quite rudimentary, and so our prompt skills were super important, because if you were good at prompting, the computer, or really the AI model I should say, was capable of producing a useful output. But in the earlier days, if you didn't have really good prompting skills, it would be difficult for you to get a whole lot of value. A great example of that is image generation models. Using the early versions of Midjourney through Discord, or the early versions of DALL-E, it was very, very difficult to get anything useful. Then, when OpenAI integrated DALL-E into ChatGPT, I think that was a little bit over a year ago, they actually added kind of a meta-prompting layer like what you just described, where you say, hey, I want an image that shows two people getting married in New Orleans and then flying off to Mexico City for their honeymoon. You'd get something if you prompted the original DALL-E with that, but what happens now is ChatGPT takes that and goes through a little bit of what I'd call meta-reasoning or meta-prompting, where it then sends a prompt to the DALL-E model that was actually written by ChatGPT, and all of it is totally transparent to you. If you actually click on the image, you can see the prompting that's going on behind the scenes.
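The meta-prompting layer Amith describes, where the chat model silently expands a terse request into a detailed prompt before the image model ever sees it, can be sketched as a two-stage pipeline. The function names and the expansion rule here are invented for illustration; this is not OpenAI's actual pipeline.

```python
# Two-stage "meta-prompting" sketch: a chat model first rewrites the
# user's terse request into a detailed prompt, and only that rewritten
# prompt reaches the image model. Both model calls are hypothetical stubs.

def chat_model_rewrite(user_request: str) -> str:
    # Stand-in for the chat model's rewriting step: add style and
    # composition detail the image model responds well to.
    return (
        f"A high-resolution photograph of {user_request}, "
        "natural lighting, detailed background, wide shot."
    )

def image_model(prompt: str) -> dict:
    # Stand-in for the image model; returns metadata instead of pixels.
    return {"prompt_used": prompt, "image": "<bytes>"}

def generate_image(user_request: str) -> dict:
    expanded = chat_model_rewrite(user_request)  # hidden from the user
    # Like clicking the image in ChatGPT, the expanded prompt stays inspectable
    # in the result, even though the user never wrote it.
    return image_model(expanded)

out = generate_image("two people getting married in New Orleans")
print(out["prompt_used"])
```

The design point is that the user's request and the prompt the image model actually receives are different artifacts, with the chat model acting as translator between them.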

Amith:

What you're describing, mallory, is just more of that. It's going to be more and more and more of this higher order stuff and the models themselves are just going to become smarter. So if you kind of look at that progression that I've tried to paint the picture of, from the earliest days of competing through now, we have had essentially higher and higher taxes that we paid early on in terms and tax in the sense of metaphorical tax in terms of pain or knowledge requirements to interact with the computer. To now, almost nothing. Right. It's super fluid and it's getting better.

Amith:

So I would actually say that the most important thing you have to look at, if you want to know where the puck is going to be, so to speak, is communication skills. If you want to be effective in getting what you want from an AI, you have to be good at articulating yourself. Just like if I go to an employee and say, hey, I really want you to produce this project for me, and I do kind of a lousy job of defining the requirements, I don't provide a lot of detail, or I'm ambiguous in the language I use, I might not get something super great back, right? With AI it's the same thing. I think AI systems are going to get better at coming back to you, refining the knowledge or the ideas with you, and then executing on them. But that's my two cents.

Amith:

I don't think prompting as its own skill is something people are going to be talking about five years from now. But I do think understanding how to communicate with AIs is going to be pretty important, and the AIs are going to get better at understanding our fairly bad communication skills, generally as a population, right? There will always be people at the frontier doing really cool things who have the greatest skills in this area. Zooming back to late 2024, where we are today as we record this episode: prompting is super important. You have to know the basics. If you understand what you are trying to do and put it in terms the AI can understand well, you'll get so much more out of it, which, going back to the earlier discussion, is why we have an AI prompting course in the fall of 2024.

Mallory:

So, then, it's safe to say that we're all heading in the direction of prompting being baked into these models for us. But if you want to move quickly, if you want to move right now on this, prompt engineering is still an important skill.

Amith:

If you're better at prompting, you're going to get a significantly different output. You know, the smartest models are already quite good at interpreting what it is you're trying to do and asking follow-up questions, things like that. But we're going to have AI everywhere. We're going to have AI on our phones. We're going to have AI on our watches. We might have it somewhere else, you know, like on our refrigerator or something. And a lot of these other environments are not going to have frontier AI. They're going to have OK AI, and even OK AI is getting pretty good.

Amith:

But I would say that just being thoughtful about what you're asking for and how you interact with the AI will make you a better communicator in general. The kind of weird things you've had to do, like "think step by step", you know, that's a prompt engineering tactic that's available and actually still quite useful in many environments, that's going to go away. Those kinds of things that are more about how you seed the model to behave in a certain way, the model is going to figure that stuff out. You're not going to have to be that explicit. But again, if I say, hey, I want the model to write a blog post for me, and I'm not very specific in what I ask for, I'm going to get, you know, not the greatest response.

Mallory:

So prompting is kind of like searching the web. Most people will be able to do it without kind of even realizing what strategies they're using, but then some people will be better at it than others.

Amith:

Totally.

Mallory:

And then I also want to loop back to this idea you mentioned, Amith, of cultivating communication skills on your team, maybe not necessarily prompt engineering skills, over the next few years. How do you recommend going about that, especially as it pertains to communicating with technology?

Amith:

Well, you know, probably the most important skill we all can work on is getting better at listening. And so then you might say, well, how's that related to AI? Do you want to just listen to the AI do its thing? The reason I bring it up is that listening requires you to be fully present and to think about what the other party, whether it's a human or AI, is trying to communicate to you, and then be more thoughtful in the way you respond.

Amith:

A lot of people, their version of listening is essentially waiting for the other person to stop talking so they can speak, and to the extent that they're thinking much at all while the other person is talking, they're thinking about what they are going to say. They're not really fully activating their listening skills. And so, in a way, that's super critical here, because if you really pay attention to what the AI is doing, I think you'll be a better partner to it, in a sense, and you'll ultimately get a better result from it. But I think it's kind of cool, because the things that would make me better at using the AI systems of 2026 are probably the same things that would make me a better teammate in 2026, or even today.

Mallory:

That's a really interesting take on it. We've gotten a question before in the Intro to AI webinar that I'll present to you now. Do you think that using AI models in this way, especially as they get more sophisticated and we don't have to prompt them as much, let's say if we're brainstorming for a new project that we want to launch, will in some ways make us lazy, in the sense that we'll do less thinking on the front end because we know we can kind of pass off this duty to a technology? What do you think about that?

Amith:

Well, I think it's a great question. If you broadly look at kind of the bell curve of people, you're probably going to see more of that than not. I also think that people who really want to look ahead and be the best versions of themselves are going to say, well, what can I do with my time? How can I actually use those free cycles to work on the things I feel I can contribute the most to, or the things I get the most value from personally? What is that intersection, right, of value creation, happiness, and passion slash purpose?

Amith:

You know, when people have talked about that, it's usually the most generic advice: hey, where do you want to focus your energy, what do you want to be when you grow up, where should you go with your career, or whatever. The most generic advice is to focus on skills at that intersection of what brings you joy and what can create value for the world. And in reality, unfortunately, throughout human history, the areas where most people are able to create value that's economically viable for them, in terms of income for themselves and their family, don't usually create a lot of joy.

Amith:

But this is what's interesting here: with AI, potentially, there is the opportunity to say, hey, let's delegate all this stuff, and therefore we're being lazy. The flip side is, I do think a lot of people are going to be able to switch on a whole bunch of circuits in their brains. There will be new categories of jobs created, new types of work that people are doing, but it's going to take people wanting to actually participate in that. And a lot of people are going to say, this is awesome, I'm just going to go watch more Netflix. And that's a problem, because if you have that kind of mindset, you're probably going to quickly find yourself not working. That's a general societal issue I have no answer for, but I think there's going to be a lot of that as well.

Mallory:

Yeah. At the end of the day, we're not rats on a wheel, even though maybe some of the tasks we do can feel like that sometimes. I think we're all innately creative, and how we tap into that is different for everyone, but I agree with you. I think there's the opportunity for people to be lazy, but there's also a very exciting opportunity in the near term for us to be much more creative, which is exciting. Moving to topic two today: OpenAI's Operator agent, which could be a good title for a movie, I think. OpenAI is set to launch an AI agent, codenamed Operator, in January 2025, designed to perform tasks on behalf of users with minimal human intervention. Some notable capabilities would include booking travel arrangements, managing emails and schedules, conducting online research, completing online transactions, and automating administrative tasks. This AI agent is designed to operate directly on user devices, function within web browsers, use a computer to take actions on behalf of users, and process information in real time. Now, it's said that OpenAI plans to release Operator as a research preview through their API for developers in January 2025, so we would likely see a full release a few months after that.

Mallory:

Amith, we've talked about Claude computer use on the podcast. Now we're talking about OpenAI's Operator. We haven't talked about Google's Jarvis project, but it is similar in the sense that it's an AI agent that can use a computer as well. We, maybe a bit too early, predicted 2024 would be the year of the AI agents, but it's looking like 2025 will actually be that year. I'm curious what your initial thoughts on this announcement are.

Amith:

So, Mallory, I think this is one of the most exciting areas for everyone, because this is a type of agent that's not going to require any customization, configuration, or setup. It's just basically saying yes to OpenAI or Claude or somebody else having access to resources on your computer. That might be to actually control the computer, so the Operator can do stuff for you. You can prompt it, and it can go into Excel or Word or Google Sheets and actually operate the computer for you. But another aspect of this is just having what's on the screen and what you're doing be visible to the AI model, which by itself, even if the AI model doesn't actually take the action for you, has a lot of value, and I'll come back to that in a second. But I think this is where people are going to get their first taste of automation. They've been getting a taste of automation in the sense that the model has created a lot of artifacts for them, whether it's blog posts or code or whatever. But when the model starts weaving its way into other aspects of their workflow, working for them in various ways, that's going to be, I think, a pretty interesting kind of general-purpose capability that will get exciting. So, you know, some of the use cases you mentioned, I think, are great.

Amith:

One thing I would mention, though, before you even think about the Operator doing things on your computer: what if you were to say, hey, I'm just going to let it see what's on my computer, and then I'm going to have a conversation with it? So let's say, for example, Mallory, you were working on our website, which is in the HubSpot platform, and you were trying to figure out how to do something. You might say to ChatGPT or Claude, hey, I'm running into this problem. I'm using the HubSpot CMS, I am trying to position this particular element over here, but it keeps appearing over in this other spot. What can I do about it? It's kind of an inherently visual thing that you're trying to explain. If you're particularly advanced with current AI, you might say, I'm going to take a screenshot of that and drop the screenshot into Claude or ChatGPT. But what if you just said to those apps, OK, I'm going to let you see my screen? Then you're not even necessarily prompting it. You're basically, through audio, and it could be through text as well, of course, just saying, hey, Claude, listen, as you can see on my screen, I'm trying to move this element from here to here. I can't do it. What do you think? And then Claude might respond and say, well, actually, Mallory, you need to click this other option over here. Would you like me to do that for you? And Claude does it for you, or not, you know, you can do it yourself. But even just the idea of the computer being given the right to see what's on your screen, or a portion of your screen, perhaps, is pretty interesting in terms of workflows, particularly in the world of programming.

Amith:

There's massive value creation there, where you're working on something in a database or in a website or whatever it is, and the AI model can see what you're doing. There are also massive productivity gains, because a lot of times people are going back and forth. They have a model open in one window, like ChatGPT, and in the other window, side by side, they're actually doing their work, and they're going back and forth. But they're having to pay the tax, right, as I like to describe it, of describing to the model what's going on and what they're trying to accomplish. Whereas if you imagine the model just always being there, and you can talk to it in audio, which all these things support, right? That could get really interesting, really fast. That's even before the model turns into an agent and takes control of your device. It just gives the model more senses, so it can be more effective in helping you. And then, of course, the layer on top of that is letting the thing actually control your computer.

Mallory:

I think sharing your screen with an AI model is one thing, but you mentioned providing access to files that you have as well. I'm sure there are people listening to the pod or watching on YouTube who think, I'm probably not going to sign up for this Operator agent until the masses try it out and let us know if it's safe and secure. What are your thoughts on how to approach those kinds of privacy and security concerns?

Amith:

Well, this is a little bit of a side note, but I wouldn't necessarily want anyone to think that just because the masses are going to go do something, it's safe and secure, because there are lots of people who do a lot of foolish things in the world, right, both online and in the world of atoms as opposed to bits. So I think that's one thing to always be thinking about with AI. But I agree, it's the same privacy and confidentiality question you would ask of yourself before you upload a document to ChatGPT manually. But if you give it access to your full computer and you say, hey, you can do whatever you want to do with my computer, then of course you're giving it, like, sysadmin-level privileges to do anything. There are probably shades of gray within that. I haven't used this particular tool; there are probably various settings you have. But ultimately you are giving that tool a tremendous amount of power. Keep in mind, though, the flip side of that is you do this routinely already.

Amith:

You first of all use an operating system that's produced by a company. It might be Google's Android on your phone, it might be Apple's iOS, or it could be the Mac operating system or Windows or Linux or whatever, and fundamentally, just by virtue of running on an operating system, those companies have access to a lot of stuff. A lot of us also use cloud-based storage, whether it's Dropbox or FileShare or Microsoft or Google, so we are accustomed to using a lot of digital services that have very wide access to our digital lives. So the AI having wider access to our digital lives isn't necessarily, to me, the problem. The question is, is the company trustworthy? And that is an open question for all of these businesses, because they're all brand new and don't really have much of a track record. OpenAI, Anthropic, Mistral, they are all fairly new companies. Mistral is like a year and a half old. OpenAI is a handful of years more than that. So I'm not suggesting I think they're not trustworthy. I'm simply saying that one of the reasons people are so comfortable giving Google or Microsoft so much of their data is that these companies have been around a really long time. They're obviously very large. They say they've done all these things to safeguard your data, and they have.

Amith:

But these other companies that are earlier in their journey, younger, less capable in some ways, or perhaps less focused on security, that's where you have to think about it. So it's 100% a topic you should be thinking about deeply. The thing I would point out, and you know I mentioned operating systems and file-sharing tools and stuff like that, I've mentioned this on the pod a number of times, and if you've heard me speak, you've probably heard me say it on stage: a lot of people just let any AI note-taker jump right into their meetings. It's like, hey, I'm going to have this super confidential meeting, I'm going to talk about something I definitely don't want outside the confines of a handful of folks, but I'm going to let them all bring their MeetGeek and their Read AI and whatever else they want to bring with them, without really thinking about it.

Amith:

And I don't do that, not because I'm paranoid about this stuff, but because I have no idea who those companies are. Right? I don't know who they are. I don't know if they're on the free plan, which typically means the data you provide is essentially you contributing to that company's future, or if they have things buttoned up. So I think we make these decisions.

Amith:

There's this dissonance between reality and our perception of reality that we just kind of block off, and so I think people need to be aware of all this stuff. I'm not saying don't use these tools. What I'm saying is just have a consistent approach: hey, I'm going to think about how much diligence I do before I use Read AI or MeetGeek or any one of these other tools. In a similar fashion, with a tool that's going to have access to your computer, think about it a little bit, because everybody and their cousin is going to have a similar tool, like the Operator tool and like Anthropic's tool that has access to your computer. Everyone's going to have that. It's going to become super commoditized, and you'll find stuff for free. A lot of that free stuff will probably be malware, and you know there's going to be a lot of collateral damage, unfortunately.

Mallory:

And that's from bad actors. But even with companies that have good intentions, you're going to have problems. So let's use the example of going to Blue Cypress, our parent company, and saying, you know, we really want to try out this Operator thing. Blue Cypress as a whole will have to kind of go through this decision-making process of: will we allow it, what kinds of actions will we allow Operator, or another tool, to take, and who will be at fault if Operator does something incorrectly? So I would like to hear a little bit about what you think that thought process will look like in a few months when, as a business leader, you have to make that decision.

Amith:

Well, I'm going to look at it from my perspective. I'm going to look at it as let's try it out with a handful of people who are a little bit more savvy, who are a little bit more cybersecurity conscious. We do a lot of work around Blue Cypress to give people basics in cybersecurity, but some people are obviously both a lot more knowledgeable but also a lot more thoughtful like day-to-day about that type of stuff. A good example is the person who uses a VPN when they're on the airport Wi-Fi versus the person who's like whatever. The latter case, that's not going to be one of our early adopters. So we kind of have a good sense of that. Our company's not super big. We have a pretty good idea of who our people are and what their skills are, so we'll probably start with a fairly small number of such users and then expand it.

Amith:

There's risk with everything in life. The easiest thing is to say: I will mitigate my risk by not using any AI at all. Of course, then you're creating the biggest risk of all, which is actually not even a risk. It's a fact that you're going to be out of business pretty soon. So I think you have to balance these things, and for us, I sense that we will probably operationalize it gradually, particularly because this is a pretty big potential security hole if you don't manage it reasonably well. I also think that part of it has to be just going back to education.

Amith:

People don't spend time thinking deeply about these things as much as maybe we'd all like them to in theory, but we also want them to do their jobs, right? And so we need to provide guidelines and policies to people to say: look, this is what you're allowed to put on your computer, and this is what you're not.

Amith:

With our work-issued laptops, we can control what's allowed to be installed on there, so people can't just install garbage. But at the same time, people use a lot of personal devices that an IT department doesn't have control over. So you might feel all safe and secure because IT has policies on your work laptops, but then someone might be logging in on their home computer or whatever, and they might have computer use turned on and that computer may have access to a terminal window, to something in the corporate environment, or that particular computer might just have access to files that are business files. Right, that happens all the time, and how do you control that? The short answer is you really can't. So you have to double down on education with your team.

Mallory:

What do you think about controlling or I shouldn't say controlling, but creating guidelines around what types of activities these AI agents could be used for, because that can be so vague as well. You can't quite possibly list out all of the activities it can be used for, but what do you think about that?

Amith:

Well, what I think we should be trying to do is give people good examples and bad examples. So the more we're able to say, look, by example: these are good use cases, and here's why. Everything has a little bit of risk to it, or sometimes a lot of risk, but the reward is such that it's a good use case. And we provide examples of vendors, and we have agreements with vendors like OpenAI or Anthropic or whoever, where we say, hey, we trust these vendors, so use only them. That way you have guardrails on the tool use to some extent. And then also share bad examples, like the thing I just mentioned, which is random note taker tools popping into your Zoom call or whatever. Don't allow that. Don't feel bad about declining entry to your Zoom meeting to an AI notetaker. It's not like you're denying access to a person.

Amith:

I think stuff like that needs to be reinforced constantly. There's no end to the education process. This world is moving so fast. You've got to invest in educating your people and in policies. Policies are going to be a little bit different by organization, but there are some general-purpose, kind of common-sense things. I think the biggest thing is just that everyone needs to understand these tools at some level, and if they have no concept whatsoever and view it as just a magical machine, you're not going to have good results; you're going to have a problem.

Mallory:

We know Microsoft owns 49% of OpenAI, so we will surely be seeing something like an Operator agent pop into Microsoft Copilot, probably sometime in 2025. But based on what we've seen thus far, Copilot tends to be a little less strong than what comes directly out of OpenAI, at least initially. So what are your thoughts there? Should associations kind of wait until they see this appear in Copilot? Should they start experimenting beforehand? What do you think?

Amith:

You know, copilot within the Microsoft Office world and just in general, across all the Microsoft products, I think, is a super interesting thing. It is an AI within the context of your existing work environment. So if you sign up for Copilot, it has access to your SharePoint, it has access to your OneDrive, it has access to all your stuff and so, yeah, clearly, because it's coming from Microsoft, you're going to look at it and say that they're going to be compliant to the same level of security and standard that they have for all of your other stuff. So I think that's probably a reasonable perspective to have. My experience with the Microsoft Copilot first generation has been both positive and negative. It's generally not as capable as ChatGPT or Cloud or even Gemini, but it is quite useful to have that AI sitting in your Word document or sitting in your Excel document, and this new wave of capabilities that they've announced over the last couple months are quite exciting. I think it's great marketing right now because it's not something that we actually see in practice people are using, but the opportunities are immense. They're on the right track with it and Microsoft is famous for this, but it usually takes them three major iterations of a product before it really is the product people want, and what I like about that is they go out there in the market and they put something out there and see what happens.

Amith:

You know, people talk about the minimum viable product and the lean startup mindset and all this stuff as if it's some brand new invention that really smart young people in Silicon Valley came up with. Well, Microsoft's been doing this since 1975, and a lot of other people have done similar things before them. So the key is just to understand where these products are in the life cycle. You're buying into a vision more than you are buying into a set of capabilities today. So, you know, we're a big Microsoft shop. I think they're doing a really good job with what they're doing, but they're doing it at a scale that is really unprecedented. So part of the reason the limitations exist is that they are rolling it out for millions and millions of users in a way that's a lot deeper than a consumer product like ChatGPT, which doesn't have to consider the context of your whole SharePoint and things like that.

Mallory:

Well, that is a wrap, everyone on episode 57 of the Sidecar Sync podcast. We will see you all next week on holiday week.

Amith:

Awesome. Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to bootcamps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.