The response to our recent Measurement Masterclass was overwhelming. It confirmed what we already knew: communications professionals across Australia and New Zealand are ready to move beyond “clip counts” and start proving real business value.
We had so many questions flooding the chat that we couldn’t get to them all live. But every question asked is linked to a barrier that might be keeping you from demonstrating your value – so I wanted to take the time to answer them here.
From battling “no budget” constraints to weaning conservative executives off AVEs/ASR, here are my answers to the most common questions!
“We have no budget for research!”
We received several questions about measuring sentiment, trust, and reputation without the budget for large-scale market research or brand tracking surveys.
This is the most common hurdle I hear. If you can’t afford the “Michelin Star” research, it doesn’t mean you can’t cook a great meal. You just need to be scrappier with the data you do have.
If you can’t afford external surveys, look for proxy metrics that already exist inside your organisation (the “Result” metrics we discussed in the webinar):
- Customer Service Data: Are complaints going up or down? What is the tone of the feedback in the CRM?
- Sales/Frontline Feedback: Ask your sales or customer service teams: “Are people mentioning XX media story or campaign?” – and think about how they could record this feedback for you.
- Owned Channel Analysis: You don’t need a survey to see what people are saying. Manually reviewing comments on your owned channels is a valid form of qualitative sentiment analysis.
- Retention: Trust is often best measured by behaviour. Are your members renewing? Are donors donating again? High retention usually equals high trust.
Q. Do you recommend small ‘Survey Monkey’ style questionnaires to audiences who have received a campaign… to ask if they found it valuable?
Yes, absolutely.
In our webinar framework, this is the perfect way to start measuring the “Reaction” (Outcome). A small questionnaire is an easy first step and gives you a pulse check on a specific audience.
Don’t let the pursuit of “perfect” data stop you from gathering useful data. And remember that a lack of engagement (maybe no one responds!) is valuable information in itself. You can combine these results with other data – traffic, other calls to action, SRM or CRM data, social conversation – all of which help paint the picture of exposure and impact.
“My execs are stuck in the past”
Q. I’m in a very conservative organisation that is reluctant to let go of KPIs like ‘Distribute X press releases’ or ASR (Advertising Space Rates). How do I encourage change in execs who have been around for 10+ years?
Use the “AND” strategy. If you take away their security blanket (ASR/clip counts) cold turkey, you’ll get pushback.
For the next 3 months, give them exactly what they want. Put the Volume and ASR on the first slide. AND, right next to it, put one new metric that links more closely to impact.
Then, add a simple narrative: “While our ASR declined slightly, our message penetration on the topic of ‘Innovation’ jumped 10%, which drove a record number of visits to our new product page and increased enquiries.”
Over time, they will realise that the ASR number doesn’t tell them anything useful, while the new metrics demonstrate a more tangible contribution to the organisation. Eventually, they will stop asking for the old numbers as you continue to flesh out this better story.
Measuring policy and advocacy
Q. Outcomes for policy/advocacy are tough. Any tips for measuring outcomes… other than through surveys?
Policy is a long game, and often the “Result” can take years. In the meantime, you need to focus on measuring influence.
If you can’t survey the public, look at the language of the decision-makers (Parliament/Government).
- Hansard/Record of Proceedings: Are politicians using your specific phrases or key messages in debates?
- Submissions: Are other organisations referencing your research or arguments in their submissions?
- Access: Are you getting meetings with higher-ranking officials than last year?
These are all measurable forms of “Key Message Penetration” and “Influence” that prove your advocacy is working long before policy is drafted, enacted and understood. I would also argue that most policy links to a broader topic or theme, and it’s always worth understanding how those narratives are shifting in public spaces (traditional or social media) and what that means for policy design or communication. Sometimes the best way to look at influence isn’t as linear as claimed survey data; it’s about broader shifts in understanding.
Methodology and accuracy
Q. Can you discuss sentiment and how reliable/accurate it is?
Sentiment analysis has come a long way, but treating it as a simple “Positive/Negative/Neutral” score often misses the point. Sentiment frequently gets used as a proxy for perception, and that’s not what a standard sentiment score measures. It is one input into assessing the quality of coverage or conversation, but it needs context and nuance alongside it.
For example, 100 “Negative” posts about a minor website glitch are very different from 10 “Negative” posts about your CEO’s ethics. One is operational noise; the other is a strategic risk. A simple aggregate score might hide that distinction.
When you go a little deeper, you can consider (a rough scoring sketch follows this list):
- Who is speaking? (A random bot vs. a key stakeholder)
- What is the topic? (A product feature vs. corporate reputation)
- What is the intensity? (Mild annoyance vs. outrage)
- Who would have seen this? (A key stakeholder forum vs. a low-follower account on X)
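To make that concrete, here is a minimal sketch of what a context-weighted sentiment score could look like if you, or an analyst on your team, wanted to move beyond a flat average. It is illustrative only – the weights, categories and field names are assumptions for the example, not a standard methodology – but it shows how the same “negative” mention can carry very different weight depending on who said it, what it is about, how intense it is and how far it travelled.

```python
# Illustrative sketch only: context-weighted sentiment vs. a flat average.
# All weights, categories and field names are assumptions for this example.

MENTIONS = [
    # sentiment: -1 negative, 0 neutral, +1 positive
    {"sentiment": -1, "topic": "website_glitch", "author": "general_public",  "intensity": 0.2, "reach": 300},
    {"sentiment": -1, "topic": "ceo_ethics",     "author": "key_stakeholder", "intensity": 0.9, "reach": 20_000},
    {"sentiment": +1, "topic": "product_launch", "author": "general_public",  "intensity": 0.5, "reach": 1_500},
]

TOPIC_RISK = {"ceo_ethics": 3.0, "product_launch": 1.5, "website_glitch": 0.5}      # assumed strategic weight
AUTHOR_WEIGHT = {"key_stakeholder": 3.0, "journalist": 2.0, "general_public": 1.0}  # assumed voice weight

def flat_average(mentions):
    """The naive aggregate score that hides the distinction."""
    return sum(m["sentiment"] for m in mentions) / len(mentions)

def weighted_score(mentions):
    """Weight each mention by topic risk, author, intensity and (dampened) reach."""
    total, weight_sum = 0.0, 0.0
    for m in mentions:
        w = (TOPIC_RISK.get(m["topic"], 1.0)
             * AUTHOR_WEIGHT.get(m["author"], 1.0)
             * m["intensity"]
             * (1 + m["reach"] / 10_000))
        total += m["sentiment"] * w
        weight_sum += w
    return total / weight_sum if weight_sum else 0.0

print(f"Flat average sentiment:     {flat_average(MENTIONS):+.2f}")   # roughly -0.33
print(f"Context-weighted sentiment: {weighted_score(MENTIONS):+.2f}") # roughly -0.93, driven by the CEO mention
```

Whatever weights you choose, the point is the same: the aggregate number should reflect who is speaking, what they are speaking about and who saw it, not just a raw count of positives and negatives.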
One of the easiest swaps to make in measurement and reporting is to sharpen what “positive” actually means to you – drop “90% positive coverage” and replace it with something like “40% positive message penetration”.
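To show the arithmetic behind that swap, here is a small, hypothetical example – the coverage items and the key-message tagging are invented for illustration – of how the same ten items can honestly be reported as “90% positive coverage” or, more usefully, “40% positive message penetration”.

```python
# Hypothetical coverage items: sentiment plus whether a key message landed.
ITEMS = [
    {"positive": True,  "carries_key_message": True},
    {"positive": True,  "carries_key_message": False},
    {"positive": True,  "carries_key_message": False},
    {"positive": True,  "carries_key_message": True},
    {"positive": True,  "carries_key_message": False},
    {"positive": True,  "carries_key_message": False},
    {"positive": True,  "carries_key_message": True},
    {"positive": True,  "carries_key_message": False},
    {"positive": True,  "carries_key_message": True},
    {"positive": False, "carries_key_message": False},
]

positive_coverage = sum(i["positive"] for i in ITEMS) / len(ITEMS)
positive_message_penetration = sum(
    i["positive"] and i["carries_key_message"] for i in ITEMS
) / len(ITEMS)

print(f"Positive coverage:            {positive_coverage:.0%}")              # 90%
print(f"Positive message penetration: {positive_message_penetration:.0%}")   # 40%
```

The first number says people weren’t hostile; the second says whether your story actually landed.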
Q. With the increase in publicly available social content, have you found that engagement is more, or less, predictable?
It is definitely less predictable organically. Social algorithms are increasingly punishing “corporate” content and prioritising “human” content.
What worked yesterday might get zero engagement today because the algorithm changed. This reinforces why we shouldn’t rely solely on “Vanity Metrics” (likes/shares) as a measure of success. We need to focus on what happens after the click (the “Reaction” and “Result”).
Q. Does Isentia measure brand awareness with target audiences? I know that traditionally tracking brand awareness is a significant investment.
I love that you asked this, because “Brand Awareness” is often used as a catch-all term for three very different things:
- Awareness (Salience): Do people know you exist?
- Understanding (Clarity): Do people know what you do?
- Reputation (Trust): Do people trust you?
Yes, we measure brand awareness, but usually through the lens of public narratives, which is often a more actionable way for comms teams to track their impact. Brand awareness metrics are frequently used as a substitute for reputation, even though the two aren’t measured the same way. If awareness is important to you, there are plenty of building blocks you can start with outside of traditional survey-based methods – space, voice and the impact of your messages and campaigns are a good starting point.
Interested in viewing the whole recording? Watch our webinar here.
Alternatively, contact our team for more insights into meaningful measurement, KPIs and communicating with the right dataset.

