Beyond Social
Welcome to the premier Social Media Marketing podcast, Beyond Social by Vista Social! We're the sensation turning the social game on its head – faster than you can double-tap a Gary Vee motivational post!
In this corner of the podcast world, tech talk is far from dry and dreary; it’s as thrilling as scrolling through your feed at midnight, always in sync with what's trending. Tune in every Wednesday with our super hosts, Reggie and Vitaly (who could give late-night TV hosts a run for their money) along with the crème de la crème of the Vista Social team and surprise marketing expert guests!
Whether you're a social media master, a digital marketing newbie, or just in it for the memes, you're not going to want to miss it! Why? We're serving Social Media strategy with a side of SaaS. Tune in!
Team-Produced Content: Building or Breaking Your Brand?
Can your team truly represent your brand online, or is the risk too great?
In this insightful episode of Beyond Social, host Reggie Azevedo teams up with Vista Social’s Head of Product, Vitaly, to dive into the delicate balance between creative freedom and brand safety in social media management. They explore how brands are navigating compliance, team-produced content, and the ever-present risks of human error in a fast-paced digital world.
From the challenges of enforcing brand safety policies across distributed teams to the game-changing role of AI in preventing costly mistakes, this episode covers it all. Whether you’re managing content for a restaurant chain, financial institution, or startup, the conversation offers actionable insights on safeguarding your brand’s voice without stifling creativity.
So, what’s the cost of a single misstep in your social strategy, and can you afford it? Tune in to find out how to protect your brand’s reputation while empowering your team to thrive!
- Try Vista Social for FREE today
- Book a Demo
- Follow us on Instagram
- Follow us on LinkedIn
- Follow us on YouTube
I mean, we're not gonna 100% Karen-proof your socials, but we're gonna help de-escalate those Karen situations by not allowing people to respond to certain things, according to policies and whatnot.

Hey, welcome back to the Beyond Social Podcast. My name is Reggie. This is the show where we go behind the scenes on how marketers are doing amazing things on social and how we're building a tool to support them. I've got Vitaly here today.

Yes, I'm Vitaly, Head of Product here at Vista Social. We wanted to talk about how brands are treating and thinking about content that is produced on their behalf by their social media management teams, and how many safety checks exist to prevent content that doesn't quite align with the brand's perspective on what the brand voice should be, or that crosses compliance or legal boundaries. For example, in the medical industry, patient data is obviously an issue: how do you make sure patient data doesn't accidentally get published to your socials, or in a response to a message? And for brands that aren't subject to any compliance laws, more broadly: how do you make sure your team responds the way you'd like them to respond, avoiding any profanity or derogatory terminology? How do you control that? Absent an actual system, absent any features that enable this, you're really relying on training, and on trusting that your team can implement whatever policy you have in place.

So let me paint a picture for our listeners, because I think it's easy to think through financial services, right?
And the legal compliance around that. But let's say I'm a brand, maybe a chain of restaurants, and corporate has pretty strict guardrails around not only what should and shouldn't get posted on social, but beyond that, how the brand is perceived on social, which comes down to what we engage with. DMs coming in, comments coming in: what does my social media manager respond to, what shouldn't they respond to, and when they do respond, what tone should they take and what shouldn't they say? So walk me through a few examples of how this plays out.

Your typical relationship between a brand and a marketer, or a team of marketers managing social media on behalf of that brand, is one of a guided understanding of what the brand is about. People come together and formally or informally decide on the overall spirit of the messaging they put out, and the overall approach to the way they respond to comments and questions. Do they joke? Do they try to be a little unhinged when they respond? Do they try to be a little extra familiar with the person? And in these conversations, brands and marketers agree on do's and don'ts. For example, maybe they don't make guarantees or promises. Maybe they don't quote people on prices. And it goes without saying: no profanity, no derogatory terms. If you have people who aren't as proficient in English, maybe you want to make sure nothing accidentally gets out there. So all these agreements would exist, but they would exist in the air, right?
I mean, some companies do create policies, make people sign them, and run training around them, but enforcement really falls on the people doing the work and the trust you have in them. Right. And the mistakes are quite costly, because once the messaging is out, it's out.

Yeah. Arguably, a lot of content publishing tools have workflows around the content itself, but those are still humans, still part of that same team, who may still make that mistake, who may misunderstand your intended brand safety level and what sort of content you want to be producing.

It almost sounds like it would save steps and time too, because traditionally that workflow is: a social media manager at the brand schedules some content, which eventually gets approved or rejected by someone. But we could almost skip that step by having the system automatically warn the person scheduling, the social media manager: "Hey, listen, you shouldn't say this. You're not going to be able to say that. This is our policy; here's a reminder." Then they can make those changes before they even send it out for approval, hopefully.

Really, it decreases both, because realistically the content approval workflow doesn't always exist for the sole purpose of ensuring brand safety or compliance. It could exist for the overall scheduling of content: are we announcing a feature before the feature has launched? Are we offering a discount before the discount is due? There could be lots of reasons why content needs to get approved.
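The draft-time warning described here can be sketched as a simple rule-based stand-in for the AI check (the policy rules, messages, and function name below are hypothetical illustrations, not Vista Social's actual implementation):

```python
import re

# Hypothetical policy: each rule pairs a pattern with the reminder shown to the drafter.
POLICY_RULES = [
    (re.compile(r"\bguarantee[ds]?\b", re.IGNORECASE), "Policy: don't make guarantees or promises."),
    (re.compile(r"\$\s*\d"), "Policy: don't quote prices in public posts."),
]

def check_draft(text):
    """Return the list of policy warnings a draft post triggers."""
    return [message for pattern, message in POLICY_RULES if pattern.search(text)]

# The scheduler sees these warnings before the post ever enters the approval workflow.
for warning in check_draft("We guarantee results, starting at $99!"):
    print(warning)
```

A real system would replace the regex rules with an AI engine evaluating the full policy text, but the workflow shape is the same: warn at drafting time, before approval.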
And having a system in place that prevents content from even getting drafted, or put into the workflow, when it doesn't comply with whatever rules you want to comply with, that's one extra helpful step. If you rely solely on content approval, it still may not give you that safety, because approval happens within the very teams you ultimately want to police. You're asking teams to police themselves, in some regard. Whereas, just like a spellchecker highlights errors in your text, you want a similar system in place at the moment a person is making the mistake of phrasing a response, or creating content, in a way they shouldn't. Right.

Is it fair to say that for an agency listening to us, this could almost be like training wheels when they onboard a new client? Maybe the first 30 or 90 days, while they're getting a sense of what's okay and what's not. Some of these policy documents are pretty black and white, but some are kind of broad, so this could almost be a training wheel: you get a warning from the system every time you draft something that could be considered out of policy or out of compliance, so that by the time it gets in front of the client for approval, or just in general, you're getting a little better at understanding that client's brand tone and their requirements.

I think it's in the interest of agencies to have a system like that in place, because I would imagine a lot of agencies outsource or employ people to work on behalf of their customers. Say a customer has handed an agency their policy. Some call it a compliance policy, or a brand safety policy.
The cool thing is that as an agency, you can literally take that policy and feed it into, in this case, Vista Social. As is, no interpretation. An AI engine would then assess whether a post or a reply is in compliance with it. Would you, as an agency, want to take the risk of your consultant, employee, outsourced person, freelancer, or intern not quite interpreting it correctly? And mind you, a lot of compliance and brand safety documents are quite dense legal jargon. So feeding that into the AI and having it serve as a gatekeeper, effectively, to me that's self-preservation: you want to do it for your own good, not just for your customers, although you do it for them too. I would even go as far as saying that if your customer doesn't have a policy, you should probably create one for them. Just a basic one that offers the right brand voice and the right parameters around what's allowed and what's not. The most basic one would prevent profanity, derogatory statements, belittling the customers, offensive jokes. That's a totally solid policy for any brand. And it's a great service, and a great level of confidence you can give your customers, by indicating you have a system like that in place so these kinds of errors can never happen. It makes you look good: you come in as an agency and say, "Hey, listen, we noticed you don't have a compliance document or a brand guardrail doc. We've got one we usually use; let's collaborate and come up with something that makes sense for you." I think that definitely makes you look good.
And I think to them, it's risk mitigation from day one. They start saying, "Oh, wow, they're legit. They care about the outcome and the risk tolerance of my brand." It can also cover the basics: misspellings, a list of words that can never be used. You can prevent your content from referencing politics or, I don't know, sexual behavior, things that are completely outside of what it's even possible to talk about for your brand. So it can do quite a lot, and very simply. It's literally a safety mechanism that can keep content from going out with something as innocuous as a misspelling or something as severe as profanity.

So let me ask you this, because the profanity cases all make sense, but since we're talking about AI, I'm going to bring this up: what happens if the AI makes a mistake? Let's say the compliance policy says we can't use the dollar sign, that we're not allowed to talk about money or give quotes to customers. Or maybe I can't include a percentage sign, because in my industry we're not allowed to talk about interest rates in a certain format. What happens if the context of what I'm writing makes a dollar sign okay, but it gets flagged? What happens in that case?

Yeah, so you're alluding to one side of it, where the compliance system is too strict and misfires, flagging something as non-compliant when in reality it is compliant. Since it is AI, there's probably also a possibility of compliance checks letting through some small percentage of things, due to the trickiness of language, or of the particular words used. In our tests, it's very hard to bypass the AI-based compliance system that we've built.
It's very difficult to trick it into allowing content through, so that part is less likely. What about the flip side? Right, the false positives are more likely. And you have two choices there. One, you can configure the system, in this case Vista, to simply issue warnings when content is being published. That way you still allow your people to publish it, but you get alerts. To me, that doesn't really reduce the risk of the problem we're actually trying to address here, so I wouldn't necessarily recommend it. In reality, if it does misfire, you just go back to the actual policy you've created, revise it, and inform the algorithm why this particular case is okay. For example, take the word "percentage." You can literally teach the AI: the word "percentage" is not appropriate when it comes to pricing or quotes, but it's okay in all other matters. AI is super smart; you provide that level of context to it. This goes back to some of the other discussions we've had on prompt engineering. For those of you who don't know the term, prompt engineering is essentially a way of having a conversation with AI so that the AI gives you the best possible answer. It's no different than engineering a Google search query: you formulate the query to your search engine better. So I think false positives will be addressed with better prompt engineering, which is a fancy term for just writing a more elaborate policy. And I imagine the workflow would go along these lines: you get a report from your employee that they can't post a certain text, or write a certain post, because it's getting flagged.
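The "percentage" exception can be illustrated with a tiny rule-based sketch (in the real system this exception would be written into the policy prompt for the AI; the term list and function name here are assumptions for illustration):

```python
import re

# Hypothetical exception taught back to the engine: "%" is fine in general,
# but not in a pricing or quoting context.
PRICING_TERMS = re.compile(r"\b(price|pricing|quote|rate|rates|interest|discount)\b", re.IGNORECASE)

def percent_flagged(text: str) -> bool:
    """Return True only when '%' appears alongside pricing language."""
    return "%" in text and bool(PRICING_TERMS.search(text))

print(percent_flagged("98% of our followers loved the new menu!"))   # False: no pricing context
print(percent_flagged("Lock in a 3% interest rate before Friday!"))  # True: pricing context
```

The design point is the same one Vitaly makes: the rule isn't "ban the symbol," it's "ban the symbol in this context," and the policy has to say so explicitly.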
And by the way, Vista's compliance verification would explain why: not only would it say a post isn't compliant, it would explain why. Then you can take that feedback to your admin, and the admin can revise the compliance policy or the brand safety policy to inform the engine why this needs to be an exclusion, or provide the necessary level of context.

That's awesome. So for a brand or an agency looking to implement something like this, it sounds like we've built Vista Social, and I'll put that shameless plug in there, to be as friendly as possible on the setup side, where you can ingest your company's existing policy. I'm sure there will be resources available for someone trying to put one together, maybe a couple of examples, and guidance on the best way to train the model in case you end up seeing those false positives.

Yeah, absolutely. And again, designing a policy that is very broad in nature, say, "don't use humor," is clearly going to result in a lot of false positives. You have to be realistic about what you're asking the policy to do. By the way, we also offer a policy generator: you can literally tell us what you want out of a policy and we will generate one for you. That's a great way to start. And last I looked, there are plenty of online resources on what brand safety policies are, plenty of examples, so chances are you won't even have to start from scratch. I do also want to point out that here we're talking about content going out on behalf of a brand and how it needs to be verified for compliance, but there are so many other areas, such as responding to DMs, responding to comments, internal team communication on content, and generating ideas.
You also have the ability to understand what your teams are having to answer. There's an incoming DM: is your team allowed to respond to that DM? Right now, that part of social media management is also covered by some sort of training. If your customer is asking for, I don't know, their account number, you're not supposed to provide that account information to them. Well, that's a big ask of a social media person, or in this case maybe a support agent, to always be careful and evaluate whatever they're being asked. And these phishing schemes are sometimes quite sophisticated. But if you have a system that checks the incoming messages, your agents will simply be prohibited from responding to anything marked as not compliant. It's such a complex problem, and it's present in so many different corners of social media management. It's content, it's responding, it's looking at reviews, it's looking at the results of your listeners and monitors. You want to make sure the content coming in is compliant, that the comment going out is compliant, that your responses are compliant, and notice that a lot of this is going to be handled by different teams.

Oh, yeah.

So without an automated system of verification, your challenge is going to be a lot of training and auditing. Let's be honest, and I'm sure you know from the support standpoint: mistakes are made. Information is divulged. People try to respond to questions they shouldn't be responding to.
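The inbound-message gate described here can be sketched the same way (a hypothetical rule-based stand-in; the patterns, reasons, and function name are illustrative assumptions, not Vista Social's API):

```python
import re

# Hypothetical triage rules: inbound messages an agent must not answer directly.
BLOCKED_REQUESTS = [
    (re.compile(r"\baccount (number|balance|details)\b", re.IGNORECASE), "asks for account data"),
    (re.compile(r"\b(password|social security|ssn)\b", re.IGNORECASE), "asks for credentials or PII"),
]

def triage_inbound(message: str):
    """Return (allowed, reason). When allowed is False, the inbox UI
    would lock the reply box and show the reason to the agent instead."""
    for pattern, reason in BLOCKED_REQUESTS:
        if pattern.search(message):
            return False, reason
    return True, ""

print(triage_inbound("Hi, can you tell me my account number?"))  # (False, 'asks for account data')
print(triage_inbound("Love the new fall menu!"))                 # (True, '')
```

The key design choice is that the check runs on the incoming message, so the agent is never put in the position of judging a sophisticated phishing attempt on their own.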
And it's a nightmare to deal with those situations, and it's hard to place blame on people, because if you have an agent responding to a few hundred messages a day, it's a human error situation.

I mean, we're not gonna 100% Karen-proof your socials, but we're gonna help de-escalate those Karen situations by not allowing people to respond to certain things, according to policies and whatnot. But I think we'll get very close. If you have the policy in place, it will flag accurately in 99+% of cases.

Nice. So essentially we can deliver a solution that's over 99% accurate at preventing you and your teams from making mistakes. To your point, will there ever be false positives? I would imagine there will be, and through training the system you can deal with them. But again, which is better: to flag a few posts as false positives, or to have a post that's clearly wrong, detrimental, and hurtful to your brand get out?

Yeah, the brand equity. It can take decades to build something that can get destroyed overnight. Right.

Yeah, it sounds like an amazing tool. I'm excited to get our teams trained and looking into this, and into all the resources we're going to be able to provide brands and agencies who want to explore building something.

Yeah. And to your very first point: brand safety, and particularly compliance, is sometimes perceived as something only larger teams and larger companies need. That's quite far from the reality of it. It applies to even the smaller brands, especially when you've outsourced, especially when you have dispersed teams handling this.
Even if you have internal teams, the amount of effort it takes to enforce standards, from documenting them to training on them, is quite a laborious undertaking that will never yield a perfect result. So, luckily for all of us, AI does these tasks extremely well. By simply using a product that enforces it all for you as part of your social media management, you've essentially arrived: you have that safety both from the brand perspective, "I want to be safe," and from the agency perspective, "I don't want to be responsible." So maybe you don't even continue offering your social media management services unless there's a brand safety policy in place. And if your customer doesn't think that way yet, go out there and find a very basic one that is sturdy and will protect you against the most egregious violations. That way, you're never going to get a call from the customer one day saying that somebody, through autocomplete or something, accidentally used some really bad word in a message.

Yeah. It's definitely a risk mitigator, but definitely a trust builder as well with that client. Absolutely.

Hey Vitaly, thank you for being here. We're excited to check out the product. And for y'all listening or watching us on YouTube, make sure to check out the resources; we're going to link them down below in the YouTube description. Thanks for watching this week's episode. As always, if you want to hang out with Vitaly and me on an upcoming episode, head over to vistasocial.com/podcast. If you have any tips or specific topics you want to see covered, that's where you can drop us a line as well. And we'll see you next week. Awesome.