Sidecar Sync

Exploring AI Audio – Part 2: Association Engagement Strategies | 55

Amith Nagarajan and Mallory Mejias Episode 55

Send us a text

In part two of this special audio-focused episode of Sidecar Sync, Amith and Mallory dive into the innovative ways associations can leverage audio and AI to engage members and enhance accessibility. Amith discusses real-world applications of text-to-speech AI tools like ElevenLabs, exploring how associations can maintain a consistent brand voice across platforms, convert complex technical content into audio formats, and create interactive knowledge modules. The conversation also touches on the future potential of intelligent voice support and real-time translation, making association content more accessible worldwide. With real-world examples, they share how AI-driven audio solutions open doors to engagement, from enhanced sponsor content to democratized language access at global conferences.

🔎 Check out the NEW Sidecar Learning Hub:
https://learn.sidecarglobal.com/home

📕 Download ‘Ascend 2nd Edition: Unlocking the Power of AI for Associations’ for FREE
https://sidecarglobal.com/ai

🎬 Sidecar Sync Ep. 47: Project Strawberry, Hugging Face Speech-to-Speech Model, & AI and Grid Infrastructure
https://youtu.be/lXcnx_HECes?si=7kVStK8JO09Qylfo

🎬 Sidecar Sync Ep. 47: Previewing digitalNow 2024, Google NotebookLM, and xRx Framework Explained
https://youtu.be/bpINBxVSM4s?si=vTx01nZyUWde22MT

🛠 AI Tools and Resources Mentioned in This Episode:
ElevenLabs ➡ https://elevenlabs.io/
Notebook LM ➡ https://notebooklm.google.com
HeyGen ➡ https://heygen.com

Chapters:

00:00 - Introduction to Part Two
03:21 - Consistent Brand Voice Across Content
07:58 - Making Technical Content Accessible 
13:16 - Sponsor Content Enhancement
15:32 - Capturing SME Knowledge Through Interactive Audio
18:35 - Intelligent Voice Support 
26:07 - Breaking Language Barriers at Conferences
32:20 - AI as a Personal Communication Coach

🚀 Follow Sidecar on LinkedIn
https://linkedin.com/sidecar-global

👍 Please Like & Subscribe!
https://twitter.com/sidecarglobal
https://www.youtube.com/@SidecarSync
https://sidecarglobal.com

More about Your Hosts:

Amith Nagarajan is the Chairman of Blue Cypress 🔗 https://BlueCypress.io, a family of purpose-driven companies and proud practitioners of Conscious Capitalism. The Blue Cypress companies focus on helping associations, non-profits, and other purpose-driven organizations achieve long-term success. Amith is also an active early-stage investor in B2B SaaS companies. He’s had the good fortune of nearly three decades of success as an entrepreneur and enjoys helping others in their journey.

📣 Follow Amith on LinkedIn:
https://linkedin.com/amithnagarajan

Mallory Mejias is the Manager at Sidecar, and she's passionate about creating opportunities for association professionals to learn, grow, and better serve their members using artificial intelligence. She enjoys blending creativity and innovation to produce fresh, meaningful content for the association space.

📣 Follow Mallory on Linkedin:
https://linkedin.com/mallorymejias

Speaker 1:

If you could call a number and it's instantly answered and it has a real expert who's wonderful to work with, who's very quick at understanding you, it's safe and secure and it's easy, I'd make a phone call a lot more frequently. Welcome to Sidecar Sync, your weekly dose of innovation. If you're looking for the latest news, insights and developments in the association world, especially those driven by artificial intelligence, you're in the right place. We cut through the noise to bring you the most relevant updates, with a keen focus on how AI and other emerging technologies are shaping the future. No fluff, just facts and informed discussions. I'm Amith Nagarajan, chairman of Blue Cypress, and I'm your host. Welcome to the Sidecar Sync, your source for all things associations plus AI. My name is Amith Nagarajan.

Speaker 2:

And my name is Mallory Mejias.

Speaker 1:

And we are your hosts. And today's episode is the second half of a two-part episode all about audio. We're really excited to complete this half of the equation for you. Before we dive in, let's take a moment to hear a word from our sponsor.

Speaker 3:

Introducing the newly revamped AI Learning Hub, your comprehensive library of self-paced courses designed specifically for association professionals. We've just updated all our content with fresh material covering everything from AI prompting and marketing to events, education, data strategy, AI agents and more. Through the Learning Hub, you can earn your Association AI Professional Certification, recognizing your expertise in applying AI specifically to association challenges and operations. Connect with AI experts during weekly office hours and join a growing community of association professionals who are transforming their organizations through AI. Sign up as an individual or get unlimited access for your entire team at one flat rate. Start your AI journey today at learn.sidecarglobal.com.

Speaker 2:

Amit, how are you today?

Speaker 1:

I am doing great. How about yourself?

Speaker 2:

I'm doing really well. I'm excited to be back for part two of our AI and audio lesson. I know it's a topic that you love.

Speaker 1:

I'm excited about it. I think there are so many use cases here that we're going to go over that are just the beginning, and hopefully they will inspire people in the association community to experiment. That's always my hope: that the work we do at Sidecar is a catalyst for people going out there and trying stuff out. Between our content and trying to get people to learn the basics, ultimately what we're trying to do is catalyze them taking action, and the best type of action when you're learning something is to go play with these tools and see what you think. So I'm excited to hear about these use cases and discuss them with you, and then hopefully we'll get some great feedback from our listeners and viewers that they've gone out and done all these cool things with audio.

Speaker 2:

Absolutely. And we tried to pick a mix of use cases here, some that I think are fairly easy, I'll say, to implement, or things that you could go out and do very soon, and some that might be a little more difficult to implement. But we can kind of talk about the process for each of those. The first use case I want to talk about, which we touched on a little in the part one episode, is consistent brand voice across content. So I'm going to frame this as kind of a problem and solution for each of these use cases.

Speaker 2:

The problem is that you as associations produce content across multiple channels and formats, potentially leading to inconsistent voice and tone. The cost and complexity of maintaining consistent voice talent across all your content is prohibitive, but a potential solution here is creating a custom ElevenLabs voice, or just selecting one of their templates, that embodies your association's brand personality and can be used consistently across all your audio content. This enables scalable, professional audio production while maintaining your brand's integrity. Now, Amith, this one seems like a nice-to-have. It would be great. But I want to talk about the tangible business benefit that you see from this kind of scalable consistency.

Speaker 1:

Well, what I see is, there's the voice dubbing, where you can take videos or audio that you do have, that humans have recorded, and translate it language-wise, change the voice, all sorts of things like that. But I think the other thing you could do is imagine if every piece of content that you have on your website had a listen button, and there are some news sites that do that. The Wall Street Journal has that on quite a few of their articles, if not all of them at this point, I think. The New York Times has that as well. And so being able to listen to an article, I think, is incredibly useful for a lot of people for lots of reasons. One, it might be that they're on the go or they just prefer audio, but I think it's something that is now democratized and is extremely inexpensive.

Speaker 1:

So the way we're doing it, or suggesting that people do it now, is with ElevenLabs, which happens to be, at the moment, I think, the leading audio-specific model in the world. But I anticipate that there will be widgets you can drop on your website that just do that for you automatically. In fact, I know another company, HeyGen, that we've talked about more in the video realm, has a partnership with HubSpot that they just announced at HubSpot's Inbound conference this year, where if you turn on the HeyGen plugin for HubSpot, you can generate a little video about the page automatically. And that's, I think, even the next level beyond what we're talking about here. But just a listen button, I think, can be fairly easy to do, and I'm sure there'll be tons of WordPress plugins and other tools that make this super, super easy. Now, using a tool like ElevenLabs with a professional voice that's consistent across the brand, it's probably a good idea to go through that little bit of effort to get a higher quality product.
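
For readers who want to experiment with the listen-button idea, here is a minimal sketch of calling ElevenLabs' text-to-speech REST API from Python to narrate an article in a single brand voice. The endpoint and field names reflect ElevenLabs' public documentation at the time of writing, and the voice ID and model choice are placeholders; treat this as an illustration rather than production code.

```python
# Illustrative sketch: generate a "listen to this article" MP3 in one brand voice
# via the ElevenLabs text-to-speech REST API. Verify endpoint and fields against
# the current ElevenLabs docs before relying on this.
import os
import requests

ELEVENLABS_API_KEY = os.environ["ELEVENLABS_API_KEY"]
BRAND_VOICE_ID = "your-brand-voice-id"  # placeholder: the custom voice you created

def article_to_audio(article_text: str, out_path: str = "article.mp3") -> str:
    """Convert article text to an MP3 narrated in the association's brand voice."""
    url = f"https://api.elevenlabs.io/v1/text-to-speech/{BRAND_VOICE_ID}"
    response = requests.post(
        url,
        headers={"xi-api-key": ELEVENLABS_API_KEY, "Content-Type": "application/json"},
        json={
            "text": article_text,
            "model_id": "eleven_multilingual_v2",  # assumption: pick whichever model fits
        },
        timeout=120,
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)  # the API returns raw audio bytes
    return out_path

# Example: article_to_audio(open("member-update.txt").read(), "member-update.mp3")
```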

Speaker 2:

And we've talked about on the pod many times that associations do tend to have really strong brands. So do you think this is something that would kind of bolster that, having that same voice or same few voices across all content?

Speaker 1:

I mean, I think whether it's the same or even different voices, there are lots of different reasons why one or the other or a combination would be better. I think part of what ends up happening in any organization is they might have dependence on, like, a star member producing a lot of content or something else like that, and so this potentially provides a little bit of insulation there. But ultimately, part of it is just that getting people into the audio modality can potentially get more engagement.

Speaker 1:

You know, it's like: think about how much time you spend, on average, reading articles on a website versus how much time you might listen to a podcast. Now we know, of course, for the Sidecar Sync, those of you watching and those of you listening always listen to the entire episode, end to end. We know that is a certainty. However, podcast statistics at large tend to suggest that people listen to roughly a third of a podcast on average, sometimes more.

Speaker 1:

And so think about that for a minute. If the typical podcast is, say, half an hour to 45 minutes, that's 10 to 15 minutes of consumption of content, which is far longer than most people's time spent reading an article on a site. And then usually, I think, especially for frequent listeners, there's a lot more of a connection, almost like a relationship, that people say they form with podcast hosts. So I think that's part of the allure of audio: it's a richer, more human-like experience. So part of it is, yes, accessibility, and you're right that it's a nice-to-have, but at the same time, I think it opens up another door for a richer form of engagement.

Speaker 2:

Our next use case is making technical content more accessible. The problem here is that members lack time to read dense technical papers but need to stay current. Complex technical content often goes unread or underutilized, despite its importance. The solution here: convert your technical content into audio formats with a consistent brand voice, which enables your members to learn while commuting or multitasking, and add interactive elements for better engagement. Now, Amith, this makes me think of the example you gave in part one, where you dumped an email in there and a PDF and a contract and you had this nice audio. Would you trust AI at this point to transform technical papers, where accuracy is paramount?

Speaker 1:

Well, if accuracy is paramount, then the answer is no, as of late 2024. I think I'll be able to say yes within a year. But if accuracy is not as important, because I'm looking at it as a somewhat crude summarization tool, like in the case I described with NotebookLM, where I just wanted a quick summary in audio and I wasn't worried about it being perfect and I knew that, then it's amazing right now, and there are a lot of things where getting 90% of the information is better than zero. So think about this. Think about clinicians in oncology. They're working hard to serve their patients every day. They can't possibly keep up with all the research that's being published, even in their sub-discipline. Most people are specialists and sub-specialists. That's why there's an association for every possible branch of the medical tree, and that's true for other professions as well. No matter how diligent you are, how hard you work, it is hard to stay current in any field. So what if you could get these little three-to-five-minute audio overviews of new research that's coming out that may be relevant to you?

Speaker 1:

So if there's 500 new papers a quarter coming out in cancer research, even with audio overviews, you can't really listen to all of them. But what if the AI was able to recommend the best papers for you based on you and your profile, and from there it says, hey, these are the five that you should really pay attention to out of the 500? And for those five, first of all, it creates an audio summary of all of them, and then perhaps, if you drill into one you want a little bit more information on, it gives you a longer audio summary of that one. And I think if it was in a more engaging format that was consumable on the go, that would help people stay aware.
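
As a thought experiment, here is a minimal Python sketch of that recommendation-plus-digest flow: score each new paper against a member's interest profile with embedding similarity, keep the top five, and hand each one to a summarization and text-to-speech step. The embed, summarize, and text_to_audio callables are hypothetical stand-ins for whatever embedding model, LLM, and TTS service you choose; nothing here reflects a specific vendor's API.

```python
from math import sqrt
from typing import Callable

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def top_papers(
    member_profile: str,
    papers: list[dict],                        # [{"title": ..., "abstract": ...}, ...]
    embed: Callable[[str], list[float]],       # hypothetical embedding function
    k: int = 5,
) -> list[dict]:
    """Return the k papers most relevant to the member's stated interests."""
    profile_vec = embed(member_profile)
    scored = [(cosine(profile_vec, embed(p["abstract"])), p) for p in papers]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for _, p in scored[:k]]

def audio_digest(
    member_profile: str,
    papers: list[dict],
    embed: Callable[[str], list[float]],
    summarize: Callable[[str], str],           # hypothetical LLM summarization step
    text_to_audio: Callable[[str, str], str],  # hypothetical TTS step -> output file path
) -> list[str]:
    """Produce one short narrated overview per recommended paper."""
    files = []
    for paper in top_papers(member_profile, papers, embed):
        summary = summarize(paper["abstract"])
        files.append(text_to_audio(summary, f"{paper['title'][:40]}.mp3"))
    return files
```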

Speaker 1:

And, of course, if they want to, particularly in a field like anything in healthcare, you'd obviously go read the actual paper or dig deeper in some other way, or ask someone else to help you dig deeper into this paper, and say, oh, there's this new clinical trial that's starting that's super relevant to one of my patients who's suffering right now.

Speaker 1:

I wonder if that's a good fit for them, right? And then you have an AI agent go off and do the research and say, let's, in a very secure way, take the patient's EHR and combine it with insights from the research behind this clinical trial and determine whether or not my patient might be a good fit for this trial. Right, I mean, that could be unbelievable. But I have to know about the trial to begin with if I'm the doc helping the patient. And that's just one example. I keep coming back to this particular field partly because we work with tons and tons of medical societies, and there's so much opportunity to improve quality of care, get better outcomes, lower costs, and all these great things, but it's applicable to every field.

Speaker 2:

And when we're talking about accuracy being paramount, you said that at this point, in late 2024, you don't think AI could give you 100% accuracy. Is that because of hallucinations? Or is that because you can't be sure that AI is going to pull every single piece of information into the summary that you would need? Or both?

Speaker 1:

I just think that it's hard for me to say that 100% is available, even with the best human translator.

Speaker 1:

So if I hire a really knowledgeable person in that domain to create an audio overview for me from a technical paper, I can't say that's going to be 100% perfect either. I think it would be 99% good if it's a leading person in the field. So that's the standard I think we have to hold this to. In my mind, perfection is not really the goal, because we're not there; the goal is being at human quality. That's where I think we'll be. We might already be there.

Speaker 1:

Actually, I'm probably being a little bit conservative in this particular respect, but definitely in the next year we're going to have capabilities that are at that level. I mean, we didn't talk about o1 in a ton of detail, but GPT-o1, or just o1, from OpenAI is this new reasoning model, and it has PhD-level capability across a wide variety of different domains. So you think about something like that and then scale it down, put it in small models that are available everywhere, and then add audio to them, and yeah, you're going to get to 98%, 99%, 99.9% quality very, very soon. I just don't think we're quite there yet. That doesn't mean that the utility is zero. It just means that you have to have awareness of the fact that this is a summary and it's audio and it's AI-generated, and so you work through that, but the utility can still be enormous.

Speaker 2:

Yep. For our next use case, we're talking about sponsor content enhancement, which could be a potential revenue generator. Sponsors struggle to reach members effectively through traditional content formats. Written white papers and case studies often go unread, leading to poor ROI on sponsorship investments. The idea here is transforming sponsor content into professional audio series using the association's brand voice, maybe with some multilingual versions for global reach as well, and packaging this as a premium sponsorship offering. Now, this is not something I had ever thought about, but I'm curious, Amith: looking at it from the vendor side, would this be something that would be attractive to you?

Speaker 1:

For sure. I mean, I think a vendor is ultimately going to look at it and say what am I buying and how many leads does it generate? And then how good are those leads as well? So if it's more engaging, if it's more helpful for the consumer of that content and therefore I get more leads, or better qualified leads because they understand my offering better, if they're more aligned with the types of topics that I'm an expert in, that's great. And you could also say well, the vendor might have their own AI solution for customer service and sales and various things, and if those AIs can collaborate with the association's AIs in solving these problems.

Speaker 1:

There could be a very natural transition from the experience that people have on the association's content to the vendor's side of it once they actually go over to the vendor's website or whatever other channel they're using. So I think it's a very interesting use case.

Speaker 2:

And I think there's also opportunity, if an association has a podcast, to use something like ElevenLabs text-to-speech and generate really quick ads on the fly as well. The next use case is subject matter expert knowledge capture. The problem here: subject matter expert knowledge is trapped in lengthy videos or written documents that members rarely access fully, and the time and cost of producing professional educational content limits how much expert knowledge can be shared. So a solution here is to convert SME presentations and insights into interactive audio learning modules with consistent voice quality, enabling easy knowledge transfer while maintaining professional delivery. How would you see this playing out in the world of associations?

Speaker 1:

You know, two things come to mind. One is learning management systems. A lot of associations say that they have an LMS chock-full of all sorts of great content, when in reality what they have is a bunch of proceedings or recordings from a whole bunch of sessions from a lot of conferences. Usually the audio is pretty poor quality. A lot of times the presentations aren't that great either, even if the presenter was brilliant; a lot of times it's just not their strength. So what if we could do what you're describing, which is to transform that content, maybe shorten it to make it more consumable, or more quickly consumable, but deliver a more impactful and relevant educational experience through an LMS? That would be one way to do what you're describing, I think. And then, more broadly, if you just think about the overall knowledge cycle for an association or any organization, you have the constant creation of knowledge.

Speaker 1:

That happens, of course, through the modalities you described, which is videos and written documents. It also just happens in daily life. You're having a phone conversation or a Zoom meeting with someone, or you're emailing them back and forth, and there's all this knowledge accrual happening. There's also a lot of noise, right, but there's knowledge accrual happening in little bits and pieces all the time. And if the AI can help us parse out the true knowledge accrual from the noise, which it can, and also help us continuously increase the aggregate knowledge base, which it can, and then if we can create audio and video and text and other modalities from that on demand for different kinds of people based on their needs and based on where they're at in their learning journey, I think that's a pretty tremendous opportunity. Because what I just said is somewhat abstract, but it essentially means that you have this concept of, let's say, a learning management system that just asks a "what do you want to learn?" question, and the user starts talking and says, hey, I'm doing this and this and this and I want to learn these topics, and instantly they have a live, interactive course which was created for them based on the content.

Speaker 1:

The association has that content, of course, and the course is grounded on it, with an interactive, multi-party kind of experience where it's really them plus the AI as the leader. So there are a lot of opportunities here. It could be done through voice, it could be done through text. But I think that whole idea of the knowledge cycle to me is an incredibly important idea, and we already are capturing a tremendous amount of knowledge; we're just not actually using it, right? We're putting it into one of our giant digital storage buckets like email or SharePoint or Dropbox or one of these other things, or video archives, but then we don't activate it. And so, to me, the bigger opportunity is the general idea of managing the knowledge cycle and activating it. But, to your point, I think the idea of audio summaries that give people better access is a really compelling possibility.
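
To make that "what do you want to learn?" idea a little more concrete, here is a rough, hedged sketch: take a member's free-text goal, retrieve the most relevant pieces of the association's existing content, and assemble an ordered module list whose summaries could then be narrated with text-to-speech. The retrieve callable is a hypothetical stand-in for whatever search or retrieval layer sits over your content library; this illustrates the shape of the workflow, not any particular product.

```python
from typing import Callable

def build_learning_path(
    goal: str,
    retrieve: Callable[[str, int], list[dict]],   # hypothetical: (query, k) -> [{"title":..., "summary":...}]
    modules: int = 5,
) -> list[dict]:
    """Return an ordered module list grounded in the association's own content."""
    hits = retrieve(goal, modules)
    return [
        {
            "order": i + 1,
            "title": hit["title"],
            "summary": hit["summary"],            # candidate text for an audio overview
        }
        for i, hit in enumerate(hits)
    ]

# Usage: build_learning_path("I want to get up to speed on AI data strategy", retrieve=my_search_layer)
```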

Speaker 2:

And I guess we're really talking about two things here. One of those is audio summaries or kind of like listening to a podcast on a certain topic, but then what you're talking about is that interactive audio element where you can actually go back and forth and have a conversation.

Speaker 1:

Yep.

Speaker 2:

Next up is intelligent voice support. So the problem, and this sounds familiar, right? Member support teams are overwhelmed with repetitive questions and limited by time zones. International members often need support outside of standard business hours, and written FAQs go unread. So the solution here would be to implement speech-to-speech AI, trained on the association's knowledge base, to handle member inquiries 24/7. The system can understand context, provide accurate information, and maintain a consistent professional tone, while connecting members with pieces of content that they might be interested in and are looking for. Now, this is one that I would group in the "might be difficult to implement" bucket, Amith, but at the end of part one we talked about how this is already in the works, so can you talk a little bit about that?

Speaker 1:

Well, in the case of our family of companies, our knowledge agent, Betty, has a voice mode coming, and at digitalNow, which by the time you listen to this will have already occurred, we unveiled a preview of Betty's voice mode, which is literally exactly what you just described. And while that voice mode is not yet available, Betty is available, of course, and has been for two years; the voice mode I'm describing is in preview, and we expect to have it out there imminently, in the next several months. And it'll just get better and better, like the curve keeps doing in general. So I don't think this is far away.

Speaker 1:

There will be lots of ways to do this. There's lots of tools that you can buy. You can assemble solutions. There'll be other products that you can buy, so I think this is an opportunity for associations immediately.

Speaker 1:

One of the things that I talked about in my digitalNow keynote was customer service technology as a general kind of trend line: why does customer service technology exist?

Speaker 1:

To date, the basic reason is really simple: it's about saving money. It's about the company being able to serve their customers at a tolerable level, kind of the lowest tolerable level, honestly, for most brands, at the least possible cost per interaction. And that is why for many years you have been subject to phone trees, these interactive voice response type systems that are just terrible, that put you in loops and on endless holds. And when you have dealt with chatbots on the web, historically it's been very formulaic and menu-ish, because those technologies were never capable of really improving the experience of the individual on the other end. But that wasn't the objective anyway. The objective was to save the company money and to provide different modalities of the experience. Obviously, people generally don't view customer service as a good experience, with the rare exception of a handful of brands who've really deeply invested in it and made it part of their brand promise. But for the most part that's not true.

Speaker 1:

Think of airlines, think of banks, think of really any brand that you want to deal with. You don't really want to pick up the phone and call them. You want to try to do it asynchronously through a website if you can, if you're more technical, or you just don't want to have to deal with it. It's horrible. But what if you could flip that script on its head and say, look, you call a number and it's instantly answered, and it has a real expert who's wonderful to work with, who's very quick at understanding you, it's safe and secure, and it's easy? I'd make a phone call a lot more frequently.

Speaker 1:

I hate making phone calls to companies because it's such a painful process just to even authenticate who I am. So if we can use AI to support this, it's going to create, I think, a modality people are going to actually enjoy, especially if the knowledge assistant is more than just solving, like billing problems and stuff like that. It's like, hey, let me actually help you in your work. There's something there. There's something really interesting. Like what if you could, you know, include that knowledge agent with voice capability in your Zoom meeting and just get their feedback, you know? So I think there's a lot of interesting possibilities.

Speaker 2:

And if, in the next year, we see a product like Betty work with call centers, so you pick up the phone and you get some version of Betty that can answer all of your member support questions, but also knowledge questions as well, do you think there are any considerations or concerns that you need to think about before envisioning that future?

Speaker 1:

Yeah, sure, first, I mean, people are going to be concerned with is it correct, is it accurate, Is it giving people the wrong information?

Speaker 1:

So there's an important layer of testing you have to go through before you roll these things out, and you also have to have, in my opinion, human oversight and a human in the loop on certain kinds of inquiries. So you wouldn't want to just put an AI knowledge agent out there as the only line of support, and you wouldn't want to put it out there unmonitored. You'd want to have a system in place where there's a way for the user to ask to speak to an agent or a person, which is a possibility, obviously, and to have some kind of intelligent routing in your architecture where not everything goes to the bot. You may say, oh well, I've got an inbound email, and that email is about a kind of sensitive topic; I don't want the bot to handle it, I'm going to forward it to an actual person who can deal with this customer or this member. So I think there has to be some nuance that's respected, and you have to think through some of these processes.
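
As a purely illustrative sketch of that intelligent-routing idea, the snippet below sends sensitive or low-confidence inquiries to a human queue, sends routine ones to the AI agent, and always honors an explicit request for a person. The keyword list and confidence threshold are made-up assumptions for the example, not a recommendation or a description of any particular product.

```python
# Minimal routing sketch: decide whether an inbound member inquiry goes to the
# AI voice agent or to a human. Thresholds and keywords are illustrative only.
from dataclasses import dataclass

SENSITIVE_TOPICS = ("complaint", "refund", "legal", "harassment", "billing dispute")

@dataclass
class Inquiry:
    text: str
    caller_requested_human: bool = False

def route(inquiry: Inquiry, ai_confidence: float) -> str:
    """Return 'human' or 'ai_agent' for an inbound member inquiry."""
    text = inquiry.text.lower()
    if inquiry.caller_requested_human:
        return "human"                       # always honor an explicit request
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return "human"                       # sensitive topics bypass the bot
    if ai_confidence < 0.7:                  # assumed threshold; tune with real data
        return "human"
    return "ai_agent"

# Usage:
# route(Inquiry("How do I reset my member portal password?"), ai_confidence=0.93)  -> "ai_agent"
# route(Inquiry("I have a billing dispute on my renewal"), ai_confidence=0.95)     -> "human"
```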

Speaker 1:

I do think the promise is a substantial improvement in member experience, which is exciting, while at the same time reducing your operational overhead in terms of how much time you have to spend dedicated to the basics of this task. In my mind, the question, and I think it's both a concern and an opportunity, is what happens to the people. So if you have a team of 10, 20, 30, 50, 100 people who do this kind of customer service work, what do they do? First of all, not all of this customer service is going to be automated. A lot of it can be, and there will probably be more volume of inquiries because people will have a good experience, but I think there still is a need for really high-caliber customer service agents to be involved, because when you do get that human, you don't want to get the person who doesn't know how to answer your questions. So there's that piece, which might mean that the handful of people out of the total pool that are still doing that direct role are really the best in that role, and you pay them extremely well because they're extraordinary at what they do, and they're your SWAT team to handle the most complex problems that AI can't handle. As for everyone else who's part of a team like that, what do they do?

Speaker 1:

I do think there's opportunities in general with AI displacement for retraining. I think the most important thing is to learn, learn and learn, but AI is absolutely going to do a lot of the customer service in the world. We see that already with a lot of companies deploying AI. So I don't think it's a question of if this is going to happen in association land it is happening and it's a question of how do you do it in a way that is actually a great thing for everyone long term and I don't have all the answers to this, or even close, because I think there's some fundamental questions we have to ask but from a member experience perspective, I think it has to be a massive increase in value. It can't just be parity and internally it obviously has to be a big win too.

Speaker 2:

Exactly, and this is already happening right now with text, right?

Speaker 1:

This isn't necessarily an audio-specific thing, but audio is something that you would say is kind of phase two, or near-term phase two. Think about when you're going back and forth over email and we're like, hey, let's just get on a quick call, and you'll call me, or I'll call you, and we'll talk about it for a minute. Why do we do that, right, versus text? It's because it's a more natural and richer modality. And certainly video is even more than that, and screen sharing is another layer on top of that, right? So there's all this richness that comes from the modality that makes it more likely to actually help the person in their journey.

Speaker 2:

Our next use case is breaking language barriers at annual meetings. The problem: international members struggle to fully participate in conferences due to language barriers, and traditional translation services are expensive and logistically complex. A solution here, or two solutions: deploy AI-powered real-time translation, or create post-event dubbed versions of key sessions. This preserves speaker dynamics while making the content accessible across languages. Now, Amith, this is another one I would group in the, on one hand, could be easy to implement, on the other hand, sounds really tough. I think post-event translation is feasible; you can do that right now. Real-time translation, I don't think we're there yet, but what would you say to that?

Speaker 1:

I think we're extremely close to that. Depending on your tolerance for latency, you might already be there. There's just some really interesting stuff here. I mean, you can do this; it's technically feasible. You take the audio stream of the incoming language, you translate that, and then you generate speech back from it. So there's a little bit of an audio-to-text-to-audio thing going on. So it might be like 500 milliseconds or 700 milliseconds of latency, which is not quite perfect. But if it's a continuous stream of talking, like in a session at your conference, that latency may not be a big deal; it's slightly delayed. If it was like three or four seconds, that's a problem, then it's not synchronized anymore and it's problematic. But I think that is a possible solution fairly soon.
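
Here is a minimal sketch of that relay, just to make the pipeline shape concrete: chunk the incoming audio, transcribe it, translate the text, and synthesize speech in the target language, timing each hop. The transcribe, translate, and synthesize callables are hypothetical stand-ins for whatever speech-to-text, machine translation, and text-to-speech services you would actually wire in; real latency depends on those services and the chunk size, not on this loop.

```python
import time
from typing import Callable, Iterator

def live_translate(
    audio_chunks: Iterator[bytes],            # e.g. ~1-second chunks from the room feed
    transcribe: Callable[[bytes], str],       # hypothetical: speech -> source-language text
    translate: Callable[[str, str], str],     # hypothetical: (text, target_lang) -> translated text
    synthesize: Callable[[str], bytes],       # hypothetical: translated text -> audio bytes
    target_lang: str = "es",
) -> Iterator[bytes]:
    """Yield translated audio for each incoming chunk, logging per-chunk latency."""
    for chunk in audio_chunks:
        start = time.monotonic()
        text = transcribe(chunk)
        if not text.strip():
            continue                           # skip silence
        translated = translate(text, target_lang)
        out_audio = synthesize(translated)
        latency_ms = (time.monotonic() - start) * 1000
        print(f"chunk latency: {latency_ms:.0f} ms")   # aim for well under a second
        yield out_audio
```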

Speaker 1:

And again, these models keep getting better. You want that to truly be done by a multimodal, multilingual model, rather than having multiple models involved, because then you're going to have no loss of information, like we keep talking about, where it's audio to audio. That's going to be the best experience, and I don't think we're quite there yet, but we're pretty close. There was actually a demo of OpenAI's advanced voice mode a number of months ago where they were doing this. They were saying, hey, I'm going to speak in English, I want you to repeat that in some other language, and then the other person spoke back in the other language, and then immediately it translated to English. It wasn't quite real time; it was more like, wait for the person to stop talking, then it translated, and then it goes back and forth. But it was kind of like that universal decoder kind of idea. So we're very, very close to that opportunity.

Speaker 2:

Yeah, I don't know if you know the answer to this. Are we doing real-time dubbing as well? Or when I say dubbing, I mean real-time voice cloning, where you're not just hearing a translation, but you're hearing it in that person's voice. Is that possible?

Speaker 1:

That would be the ideal scenario. Of course, there are all sorts of rights and privacy-type issues you have to deal with for that, in terms of people opting in, and that would be, I think, the value-add from the association engaging in this in some way, where you're providing something in an app that provides that mechanism. I think the other part of it, too, is that consumer-grade technology is going to kill this. If you think about it, what you'll have available in your earbuds and in your phone is going to do exactly this in the next few years. It might not be 2025, but 2026, 2027? I'd be shocked if there isn't an AirPods release from Apple, or an equivalent from the Google world, that does this in your ear continuously, across all languages, in real time.

Speaker 1:

I mean, that's where we're going to be. And then you speak, and the assumption is the other person has an earbud in as well that also does the same thing, right? It's very sci-fi. So I don't think that associations will necessarily have to solve this in the context of natural language to natural language.

Speaker 1:

There may be a reason why you'd still do it, but I think there will be consumer-grade technology that will completely solve this in the next few years, which is really exciting. The other part of translation is something we talk about in Ascend, our book on AI for associations, for those of you that aren't familiar, which, by the way, is available for free download on our website, and we'll put that link in the show notes if you haven't gotten it. It's just sidecarglobal.com/ai; you can get the book there. So we talk about language translation in the context of natural languages, human languages, one to the other.

Speaker 1:

We also talk about content transformation in the context of different knowledge levels. So imagine you have a meeting, and you have people from a particular field attending, but they all have different areas of specialization. Maybe some people are deeply expert in field A, versus others who are deeply expert in field B, and maybe those fields aren't that similar; maybe there's quite a bit of differentiation between them.

Speaker 1:

Well, what if we could make it possible for someone who isn't an expert in a particular field to comprehend and consume and participate in a subject that is not their primary expertise? A lot of that is really inaccessible to people. If you go to a conference on AI, for example, you might think, oh, everyone can flow back and forth, but there are people in all sorts of subdomains of AI, and even an expert in one branch of AI might have a hard time following a paper being presented by someone in another. The reason I find that interesting is that if you connect people who have different backgrounds, and it's less of an echo chamber, then you're more likely to find novel solutions because of the diversity of thought that comes from those different backgrounds. But there are some significant knowledge barriers that prevent people from collaborating that way.

Speaker 2:

That sounds so sci-fi, though. I can't imagine that type of real-time translation. I can imagine language translation being, for whatever reason, easier to comprehend, but translating across different expertise levels or education levels in real time, that sounds crazy.

Speaker 1:

In that scenario, you might not even want it to be continuously translated. It might be a scenario where your earbud is connected to the association's app, and someone on stage is presenting something, and I just tap a button on my phone and say, didn't understand, and so it grabs the last topic and quickly summarizes it and recaps it in language I do understand.

Speaker 2:

That's bizarre. I mean, it's exciting. It's definitely an exciting use case, but just crazy to think about.

Speaker 1:

Well, entrepreneurs are really good at making things up, so that's, you know, what I do. I'm kind of like a language model.

Speaker 2:

You really are skilled in that.

Speaker 2:

I mean, I feel like sometimes you'll say things and I'm like, I never even thought about that, but I think that's just... that's an Amith LLM right there. That's right. I did want to talk about one more potential use case. You and I met with the founder of a company called Yoodli recently, which is an AI public speaking coach. You can either get real-time feedback or drop in recordings of yourself speaking and get AI-generated feedback on things like speed and tone and, I want to say, even word choice. I think this is kind of a totally different audio use case, in the sense of having AI help you become a better communicator. So I just wanted you to share a little bit of your thoughts on that.

Speaker 1:

I think, you know, communication is a skill we can all always strive to improve upon, and having an accessible tool that will give us feedback so that we can all raise our game is really important. You have folks that are very comfortable speaking in public forums, and you have some who are terrified of it. You have some that are objectively really good at it and others who are not. And so, depending on the background of the person, it might be a severe limitation in their career and their opportunities when they have great ideas but it's hard to get them out. So I think the idea of democratizing access to great speech coaching is an unbelievably exciting idea. The founder of this company came out of Google and had an experience in his own life that led him to build this company, which I find really deeply compelling, and I think this category of application really does what I just said: it democratizes access to a world-class communications coach that can help people do a better job of conveying their thoughts, whether that's in an interview or in a public speaking scenario. I know, Mallory, in your background, in your other world as an actor, you've probably had lots of coaching over time to be really effective as a communicator, and that comes through in everything that you do. But that's an uncommon skill set for a lot of people.

Speaker 1:

In my own experience, probably the only thing I did in high school was speech and debate. I was one of those speech geeks, and I traveled around and did competitive debate, and it was kind of the only thing I enjoyed in high school, and it was super fun. But it prepared me extremely well for a career in entrepreneurship, because literally just a year or so later I started my first company, and I was on the road pitching people in boardrooms when I was 17 years old. So it was kind of crazy, but that helped me refine my skills, and I'm always looking to level up and be a better speaker. But I guess the question I have for you is: when you think about the opportunities you've had to improve your communication skills, and then you think, hey, this tool potentially could help people in a similar way but at scale, how do you feel about that?

Speaker 2:

I think it must be incredibly frustrating for individuals who feel as though they struggle with public speaking, but who have so much great stuff in their minds and in their brains that they feel like they can't communicate to others properly.

Speaker 2:

And, as humans, that is kind of a key thing that we must do: communicate with one another. I grew up in the theater, so I've had a lot of exposure to that world, and I competed in public speaking in high school, so it's something I'm comfortable with.

Speaker 2:

But at the same time, to your point, it's something I'm always trying to improve. It's not something that I feel like I don't have to work at. I feel like I practice a ton if I have to speak, because that's what works for me. But the idea of having AI assist in this process and being available to anyone at a really low cost is incredibly exciting. And it's interesting because, in many ways, we think of AI as this thing that's going to hurt us as humans, or some people take that perspective, but in this regard, it would help us be better communicators, help us connect with others, help us share our thoughts more eloquently, and I think that is an exciting way to flip the paradigm a little bit.

Speaker 1:

I think you said that really well. I'm very excited about this. I think it opens up a lot of doors for people and you know, democratization is to me about access. People have to choose to walk through the door, so a tool like this could make it possible for people to go and have access to a free communications coach. Doesn't mean they're going to go use it and a lot of people won't, because it will take effort and it's kind of painful in a way.

Speaker 1:

You know, to have a coach say, yeah, you use all these filler words, and your hand gestures make you look like a robot, and all this other stuff, right? That's the kind of stuff I've heard over time, and I know I still do a lot of things like that. Ultimately, it's uncomfortable, but it helps you get better, and not everyone's going to want that. But for those who do but don't have access to it, because a high-quality public speaking coach costs thousands of dollars, not everyone can have that. So that's exciting. To me, the democratization of that kind of capability is going to be world-changing.

Speaker 2:

And if that sounds of interest to all of our viewers and listeners, stay tuned, because we might be talking about Yoodli a bit more in a future episode. Well, Amith, that wraps up part two of our AI and audio episode. To all of our listeners, thank you for joining us on this journey, and we will see you next week.

Speaker 1:

Thanks for tuning into Sidecar Sync this week. Looking to dive deeper? Download your free copy of our new book, Ascend: Unlocking the Power of AI for Associations, at ascendbook.org. It's packed with insights to power your association's journey with AI. And remember, Sidecar is here with more resources, from webinars to boot camps, to help you stay ahead in the association world. We'll catch you in the next episode. Until then, keep learning, keep growing, and keep disrupting.