Ever since the coining of the adage about half of marketing spend being wasted, marketers have been on a quest to measure the impact of their work. The advent of digital marketing was meant to have revolutionised this endeavour, making it possible to track every click and interaction to its source.
This has not quite turned out to be the case, of course, and attribution remains a vexing challenge for anyone seeking to establish return on marketing investment for their business. It also frequently falls to marketing operations (MOps) teams to put in place the means to make this possible. That, however, is not what I’m going to talk about today.
While MOps are often the custodians of marketing measurement, the question then arises: how to measure the measurers? Mindful of another adage, “you can’t manage what you can’t measure”, it’s vital that MOps teams have a set of metrics that can be used to demonstrate that they are delivering what is required of them. This is easier for other elements of the marketing function, such as demand gen, where there is a much more direct relationship with leads, conversion and pipeline. When it comes to MOps, though, which is more likely to be managing the tech stack and audience data or delivering the aforementioned analytics, the connection with marketing outcomes isn’t as clear-cut. True, many MOps teams will be handling marketing automation campaign set-up and execution; this is likely to be at the direction of other teams, though, who determine the approach, content, messaging and so on. Putting MOps on the hook for the outcome of this activity doesn’t seem entirely fair.
As such, I decided to approach a few MOps leaders who I respect and find out how they do it. What emerged was a remarkably consistent picture of how heads of MOps monitor their teams’ activities and outputs to ensure consistent delivery and quality. In my conversations, I asked about any commercial or marketing outcome measures for MOps, along with other aspects such as project objectives, work completion, operational monitoring and any other ways in which MOps activities are tracked.
“Everyone has the same number, and that’s pipeline,” responded Ian Bennison, director of marketing operations at outsourced services specialist Intertrust Group, when I put to him the question of connecting MOps performance to the wider commercial marketing picture. His team, with slightly wider-than-the-norm responsibilities spanning aspects of demand generation and digital marketing, is included in the collective measure of pipeline value along with the rest of the marketing team. “This is our ‘lighthouse’ metric,” agreed Helene Hornecker, head of marketing operations at developer security platform Snyk, whose team actually sits in a central services group outside of marketing itself. This group sets commercial targets for the entire business, which everyone then works towards. (This in turn raises the question of how much input marketing, and for that matter sales, have in defining these targets rather than simply being handed them. A topic for another time…)
More widely, the link between MOps teams and high-level metrics like this seems to be fairly loose, with limited remuneration or bonuses tied to them. “Overly quantitative measures can be too restrictive,” suggests Helen Abramova, director of marketing operations at ecommerce shopper experiences service provider Bazaarvoice, as they fail to reflect the diversity of MOps responsibilities and activities.
A more widely adopted measure, though, revolves around throughput in some way. Most of the MOps teams in my limited survey operate some kind of workflow solution, such as Asana or Workfront, with a backlog of requests that are scheduled into sprints, agile-style. This makes it readily possible to track quantitatively the number of requests raised and completed, time-to-completion and queue wait times at both an individual and team level. This moves the discussion in another interesting direction: that of MOps task management and planning. Although not completely in scope here, many views were expressed on ensuring visibility of MOps workloads, upcoming deliverables and backlogs. There’s a balance to be struck though, cautions Helen, adding that plans and roadmaps “should be for communication and not micromanagement.”
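As a purely illustrative sketch (not drawn from any of the teams mentioned, and with made-up field names rather than any particular workflow tool’s export format), throughput figures like these can be computed from a simple dump of ticket data:

```python
from datetime import datetime
from statistics import mean

# Hypothetical ticket export: (raised, completed) timestamps; None = still open
tickets = [
    (datetime(2023, 5, 1), datetime(2023, 5, 4)),
    (datetime(2023, 5, 2), datetime(2023, 5, 3)),
    (datetime(2023, 5, 5), None),  # still sitting in the backlog
]

# Keep only closed tickets for the time-to-completion calculation
completed = [(r, c) for r, c in tickets if c is not None]

completion_rate = len(completed) / len(tickets)
avg_days_to_complete = mean((c - r).days for r, c in completed)

print(f"Completed: {completion_rate:.0%}, "
      f"avg time-to-completion: {avg_days_to_complete} days")
```

The same ticket data can be sliced per team member or per request type, which is how the individual- and team-level views described above would fall out of one export.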
Another area of consensus concerns data maintenance and stack performance. Most teams have metrics for data quality across aspects such as completeness, recency and email deliverability, with individuals targeted or monitored on their success in achieving specified targets. Slightly less common were measures around system performance, such as MAP/CRM sync queues, campaign hierarchy maintenance and general troubleshooting. In some cases, these measures are compared to industry benchmarks, especially email delivery rates and engagement. Mostly though, these are relatively “internal” metrics that MOps leaders use to monitor their teams and are not reported more widely. Ian summed this up, saying, “I set individual targets for members of my team depending on their role, which are not circulated outside of marketing.” In this way, they contribute to individuals’ performance ratings in a semi-quantitative way.
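To make “completeness” and “recency” concrete — again a hypothetical sketch with invented field names, not anything the interviewees described — such checks over a contact-record export might look like:

```python
from datetime import date

# Hypothetical contact records from a MAP/CRM export
contacts = [
    {"email": "a@example.com", "country": "UK", "last_activity": date(2023, 4, 1)},
    {"email": "b@example.com", "country": None, "last_activity": date(2021, 1, 15)},
    {"email": None, "country": "US", "last_activity": date(2023, 3, 20)},
]

required_fields = ["email", "country"]

def completeness(records, fields):
    """Share of required field values that are actually populated."""
    filled = sum(1 for r in records for f in fields if r.get(f))
    return filled / (len(records) * len(fields))

def recency(records, cutoff):
    """Share of records showing activity on or after the cutoff date."""
    return sum(1 for r in records if r["last_activity"] >= cutoff) / len(records)

print(f"Completeness: {completeness(contacts, required_fields):.0%}")
print(f"Recency: {recency(contacts, date(2023, 1, 1)):.0%}")
```

Scores like these, tracked over time, are the sort of internal measure that can then be compared against targets or industry benchmarks.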
An interesting area of much more quantitative measurement arises for those MOps teams with responsibility for their companies’ web presence. Websites provide the opportunity to adopt hard metrics such as search performance, visitors and conversion. Clearly, there is a plethora of web analytics tools, from the ubiquitous Google Analytics to more specialised ones like HotJar or SEMrush, making it straightforward to measure many aspects of website performance. While some aspects, like overall visitor numbers, may again fall outside MOps’ direct ability to influence, on-site elements are more readily impacted by design and functionality. “My team routinely analyses website pages to improve engagement and conversion performance,” Lauren Sanyal, director of marketing operations at healthcare communication platform provider TigerConnect, told me. “We also use Zoominfo to improve webform completion rates and overall data quality,” Lauren added. While responsibility for websites does not universally lie with MOps teams (even though it should, if you ask me), where it does, the opportunity to adopt robust metrics should be seized.
The mechanism for measuring MOps that most definitely enjoys universal adoption among the MOps leaders with whom I spoke is good old-fashioned MBOs (Management By Objectives) or their trendier update, OKRs (Objectives and Key Results). Whatever you call them, setting specific goals and determining whether they have been reached fits well with the type of work that MOps typically undertakes. “We work on OKRs that are project-based,” Emily Gravel, manager, marketing operations and shared services at cloud computing provider VMware, told me, while Lauren added that she tracks her team’s progress on day-to-day operational agility and strategic projects that drive impact. This is where role specialisation within the MOps team can be useful, Helene suggests, making it easier to define and monitor objectives robustly. “I set quarterly ‘big ticket’ objectives for my team and review them at the end of each period,” she added. Overall, achievement of MBOs or OKRs, together with other more quantitative measures, tends to contribute to individual performance ratings, which, together with a smaller corporate element, in most cases determine bonuses where they are paid.
A topic that came up in a couple of my conversations, though not strictly relating to measurement, is nonetheless worth mentioning: the idea of advocacy and education, both within and beyond MOps teams. Helen is a keen proponent here, emphasising the importance of educating stakeholders and leadership on “what MOps is doing and avoiding being taken for granted”. Helen adds, of course, that “stakeholders must be willing to listen”. This visibility of achievements, even if on a more qualitative basis, is important for conveying MOps’ contribution, Helen asserts. This also comes back to the MOps roadmap already mentioned and ensuring it is widely circulated and understood. Emily also recommends tracking training within MOps to ensure ongoing skills development. “It’s important to constantly ‘uplevel’ and build confidence, which I review every six months,” she said.
The final question I put to my MOps interlocutors was whether they had adopted any kind of “net promoter score” type measure to get a sense of how their MOps teams are doing overall among wider marketing colleagues or across their businesses. Mostly not, I found, and the idea met with a mixed response. Helene had discussed an approach like this before concluding it wouldn’t work in her situation. Emily similarly could see the limited sample size and desire not to cause offence leading to a score that settled at a given level and didn’t really tell her much. “We do run a quarterly satisfaction survey,” she added, though, which performs a similar role and provides useful feedback. Helen also felt that there was too much outside her control for a measure like this to prove meaningful. Ian was more positive, though, and thought the idea merited consideration. Still, across the board, everyone I spoke to took care to obtain regular feedback and input from peers and stakeholders. “I have an open door to any complaints,” said Helene, emphasising that she would always prefer to know about any problems so they can be tackled. “I look to have candid conversations,” said Emily, obtaining qualitative feedback from her stakeholders. At TigerConnect, Lauren benefits from a company-wide pulse check that provides a picture of satisfaction with various aspects of the business. A word of warning came from Emily, though, in respect of soliciting feedback on MOps requests. “We used to ask for a rating on every Workfront ticket we closed,” Emily explained, “but people quickly got annoyed being asked after every request they made!”
All told, it’s clear that MOps leaders pay close attention to the performance of their teams, employing a range of techniques and approaches both quantitative and qualitative. Many of these metrics are internal to MOps themselves, although there are certainly a number that are shared more widely. It’s worth considering whether these measures could be shared more extensively, allowing anyone interested to follow them. As discussed, MOps teams are by nature project orientated, working to complete initiatives such as new martech deployment or implementing a new lead routing process. Clearly, these undertakings can be judged on the basis of their timely, effective and on-budget delivery.
Similarly, the more day-to-day deliverables that come MOps’ way can be tracked and measured quantitatively, especially when using a workflow tool. Crucially though, MOps must not fall into the trap of seeing the closing of request tickets as the sole mark of success. While part of MOps’ role is without doubt to provide a service to marketing, it is not there only to do whatever it is told. “Just following orders” has never been a good defence, and should requests be made that do not represent the optimum course of action, we should say so and provide the consultative guidance to ensure the best possible outcome. Shrugging our shoulders and saying, “I could have told you that wouldn’t work,” is not acceptable.
There is also scope for improving the connection between MOps and marketing’s overall commercial objectives. Ensuring MOps have “skin in the game” in relation to pipeline and demand generation targets, even having a bearing on individual bonuses, engenders alignment and focus on these outcomes. Another option is to target MOps on incremental improvements across go-to-market metrics. Rather than explicitly requiring campaign engagement or conversion rates to reach a specific level, MOps should be constantly looking at how to improve these measures. This again will ensure they are encouraging colleagues across marketing to raise their game through better use of technology, data and process.
At the end of the day, as in any aspect of management, success is in no small part predicated on leaders staying close to their teams and mission, keeping a finger on the pulse. “I have a pretty good sense of how my team is performing,” Helene said to me. “As the Italians like to say,” she added, “I know my chickens!” And that’s a good position to be in, whether measuring the performance of MOps teams or marketing more widely. Commercial alignment, quantitative metrics and qualitative feedback are all vital. Just make sure, in addition, that you know your chickens.
Many thanks to Ian Bennison, Helene Hornecker, Helen Abramova, Lauren Sanyal and Emily Gravel for their contributions to the piece.