Securely Connected Everything S4-6: The Race for Resilience: Matt Maw on Modernising the Gaming Experience

Join us as Matt Maw shares his transformative journey from bricks-and-mortar operations to spearheading digital change in the gaming industry, promising insights into maintaining revenue streams amidst diverse regulations.

Join us as Matt Maw shares his transformative journey from bricks-and-mortar operations to spearheading digital change in the gaming industry, promising insights into maintaining revenue streams amidst diverse regulations. You’ll discover how Matt navigated the complexities of transitioning Australia’s largest gaming and wagering business into a digital powerhouse while simultaneously revolutionising aged care with innovative digital care planning. This conversation provides a unique glimpse into Matt’s leadership strategies, highlighting the importance of balancing traditional and digital infrastructures.

Experience the high-stakes world of event infrastructure management as Matt takes us behind the scenes of major events like the Melbourne Cup. Imagine the pressure of managing a network spanning over 11,000 physical locations, where 80% of bets are placed in the moments before a race begins. Matt reveals the meticulous preparation and risk management necessary to handle such spikes in transactions and underscores the critical nature of seamless execution to avoid irreversible revenue loss. Discover the role of private cloud implementation in ensuring operational resilience during these peak periods.

In a world driven by data and innovation, Matt sheds light on the strategic decision-making involved in infrastructure management and team integration. He discusses the balance between private and public cloud services and the innovative “one TATS program” fostering a culture of collaboration. Learn about career growth opportunities that empower and advance professionals within and beyond the organization. Matt also shares his insights on building resilient infrastructures and teams, emphasising proactive planning and stress-testing to ensure operational stability during unforeseen disruptions.

Matt Maw:
0:02

But in 2008 the primary means of communication to the branch network was via dial-up modem. I literally had banks of Netcom modems and US Robotics 56K modems. That was the single biggest connection, and those that needed slightly more bandwidth were on frame relay. So at the time, Tats was the single reason why Telstra was continuing to run the frame relay network. They couldn’t shut it down until we migrated off.

Michael van Rooyen:
0:29

Today I have the pleasure of having a chat with Matthew Maw, known as Matt Maw. He is the Chief Delivery Officer for our critical infrastructure business, Orro Critical Infrastructure, and today we are releasing this podcast around the Melbourne Cup event. Matt has a great history of working in critical environments. In particular, we’re going to talk today about his experience and time at TATS Group, where he was CTO between 2008 and 2016. And we thought it’d be brilliant to have a chat about the challenges of running such a major event. It’s pretty critical to the country. It’s not critical infrastructure as we think about it today, but I think it’s important we talk about it.

Michael van Rooyen:
1:03

Once you gave me the history around the event and its criticality, it was fascinating, and I thought we should let customers and people who listen to this hear it. So welcome, Matt. Thanks very much, MVR. Before we get started, give us a bit about your journey. I know that you spent time at Cisco, Nutanix, et cetera, but you have quite a deep history. Do you mind giving us a bit about your career journey, and then what led you to become the CTO of Tats, which is the largest gaming and wagering business, particularly between 2008 and 2016?

Matt Maw:
1:30

Yeah. So look, my journey has been one of finding organisations and industries that are ripe for digital change. Fundamentally, my career has really been driven around helping organisations adopt digitalisation and driving the outcomes associated with that. Early in my career, I spent time at pharmaceutical manufacturing organisations where we literally built factories and delivered physical pharmaceutical products. I’ve spent time in the mining industry, and I’ve also spent time in the delivery of health services.

Matt Maw:
2:04

I was the first CIO for one of Australia’s largest aged care providers and was instrumental in putting in one of Australia’s first digital care planning systems within the aged care environment, certainly very topical these days as we get an older population and more and more of us start to enter into that space. Driving digitalisation in that space literally saved lives. We were losing patients due to poor handwriting, due to not getting access to medical records. So, yeah, driving the digitalisation of that space was pretty critical. The highlight of that digital transformation really was Tats Group, and I was both, I guess, lucky and unlucky, depending on how you look at it, to be at Tats right at the heart of the digitalisation of the entire industry. So when you think about today, we’ve got a plethora of online gambling providers; I’m sure you can rattle off half a dozen straight off the top of your head.

Matt Maw:
2:56

When I first started at Tats, they didn’t exist. There’d just been a legal change to the environment, and they were now flooding the market. So I took over an organisation that was predominantly a bricks-and-mortar business, one that still earned billions of dollars through bricks-and-mortar outlets. That’s the typical TABs, and I’m sure you’ve all got the image of the smelly TAB and the people generating, you know, bets through that environment. But then how do you digitalise that environment whilst not losing that revenue base?

Matt Maw:
3:25

It’s very easy for people to sort of look at a digital journey when you don’t have to worry about that sort of incumbency. And then how do you do so in a heavily regulated environment? TATS was regulated at a state level, not a federal level. So although we ran central systems, we were still regulated by New South Wales, by Victoria, by Queensland. Everybody had their own individual regulatory environment that we had to meet, and they were incredibly onerous. So you know, how do you drive a digital agenda in that space? So yeah, it was a fascinating journey to get to that point. I’m glad I did it, and glad I’m not doing it anymore.

Michael van Rooyen:
4:00

Well, I guess we’re on the cusp of Melbourne Cup again and, yeah, it would have been certainly a very interesting journey to go through the digital transformation, but also to have these multiple events running and the race that stops the nation. And you know, I guess I think about the race you had to run to get prepared for that day. I mean, what did you call it earlier when we chatted? The most stressful?

Matt Maw:
4:19

boring day ever is the outcome that we were chasing.

Michael van Rooyen:
4:23

I really like that, and I guess it’s fascinating for me, the amount of preparation and how you had to plan and scale for that. You mentioned one point I’m sure we’ll touch on: the time you had a major lotto draw on at the same time as the Melbourne Cup, so even a double bubble, which would be fascinating. So off the back of that, managing infrastructure for large-scale organisations, especially during high-stakes events such as the Melbourne Cup, is certainly no small task on its own. Can you give us an overview of your responsibilities when you were the CTO, and how your team supported these major gaming events?

Matt Maw:
4:54

Yeah. So it’s important to remember that Tats Group wasn’t just a wagering business. It also owned all the lotteries, bar Western Australia, across the country. That’s your typical Aus 7s, your Powerballs, your typical bloc game lotteries. We also had one of the country’s largest radio networks in RadioTAB, so we had over 300 transmitters across the country.

Matt Maw:
5:14

That we looked after. We also had a very extensive network of gaming within the pubs and clubs environment. So we did all the monitoring for the poker machines. We did things like gaming services. So if you’re in Queensland, you know, if you have to put your licence in to get into a pub or club, that system was part of our environment. So in all we had a little over 11,000 physical locations on the network, which made it the largest corporate network in the country, and, you know, it literally was a true 24 by 7 operation.

Matt Maw:
5:44

So when you look at my responsibilities, even though we had independent owners of each of those different lines of business, the decision was made to consolidate all of that infrastructure into a central, single platform, single hosted capability for economies of scale and for efficiencies, and so my role was literally to own that entire plethora of infrastructure and capability to deliver out those requirements.

Matt Maw:
6:09

And when you start to look at some of the interesting challenges that we had there, it wasn’t just scaling up to the likes of Melbourne Cup Days and Aus7 Super Draws which, yes, we had a Super Tuesday where we had both on one day and literally two-thirds of the adult population interacted with my systems at least once on that particular Tuesday. We also then had to scale down to a couple of transactions per second on a quiet Monday. So it’s not just good enough to scale up, but we also had to scale down and still be profitable at very small transaction rates, but then be able to scale up to very high transaction rates at the same time. The other thing that we typically tend to forget when we think about Melbourne Cup is that it’s right in the middle of Spring Carnival, and Spring Carnival constitutes nearly 50% of wagering’s total revenue for a 12-month period.

Michael van Rooyen:
6:58

Really.

Matt Maw:
6:59

So there’s six weeks of Spring Carnival. When you think of Cox Plate, Caulfield Cup, all those sort of big race days, they literally constitute about 50% of wagering’s total business. We also forget that Melbourne Cup is MR Race 7. That is burned into my brain. But the second biggest race for the year is MR Race 6, and the third biggest race is MR Race 5. So that race day, those spikes, you watch it during the day. You can see it through the transaction load: there’s race one, there’s race two, there’s race three. And then just about every racetrack across the country runs their own race day for that particular year. So Brisbane, you know, Ascot will run one, and then they’ll run one at Randwick, et cetera, et cetera, across the country. So you have all of those people betting on those local events at the same time. So it’s a lot more than that single race that stops the nation, which we all know and love. It is actually a very extensive race day, and you can imagine the sort of revenue being generated from that perspective.

Matt Maw:
7:59

One of the things that’s critical to understand is that a lot of organisations talk about downtime and the loss of revenue they attribute to downtime. The bank network is a classic example: if the ATMs go down, there are significant millions of dollars lost per hour. But the loss per hour is an interesting metric, in that it’s not really lost money, it’s delayed money. Most people, if they need to get money out of an ATM and the ATM is not available, will come back in an hour’s time, two hours’ time, three hours’ time.

Matt Maw:
8:26

Sure, there’ll be brand damage, there might be a loss of reputation, and it’s not a zero cost. But for us, from a Tats perspective, once that horse race jumps, once the balls drop in the lotto, there is no ability to get that money back. That is gone. You no longer can get more revenue on that. So if you are down, then you’re not getting back those outcomes. What we also know is that nearly 80% of bets, Melbourne Cup’s a little bit different, because it’s Melbourne Cup and that’s always a bit different, but nearly 80% of bets are placed within the last five minutes before a horse race jumps. So the spikes that we get, even on a normal Saturday, are tremendous. Managing those spikes, managing those workloads, was a critical factor in what we understood, and our Melbourne Cup journey started literally the day after Melbourne Cup. We would spend 364 days preparing for Melbourne Cup and, ultimately, the Spring Carnival that went around it.

Matt Maw:
9:20

So, yeah, it was a hell of a day. It was, as I said, hopefully the most stressful, boring day that we get, and if it wasn’t boring, then we had problems.
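
The arithmetic behind that last-five-minutes spike is worth sketching. A hypothetical back-of-envelope calculation (the bet volumes and window sizes below are illustrative assumptions, not Tats figures) shows how an 80%-in-five-minutes pattern turns a modest daily volume into a four-digit transactions-per-second peak:

```python
# Illustrative capacity sketch: why an 80%-in-5-minutes spike dominates sizing.
# All volumes here are hypothetical, not actual Tats figures.

def peak_tps(total_bets: int, spike_share: float = 0.80,
             spike_window_s: int = 5 * 60) -> float:
    """Average TPS inside the pre-race spike window."""
    return total_bets * spike_share / spike_window_s

def offpeak_tps(total_bets: int, spike_share: float = 0.80,
                betting_day_s: int = 12 * 3600) -> float:
    """Average TPS across the rest of the betting day."""
    return total_bets * (1 - spike_share) / (betting_day_s - 5 * 60)

bets = 400_000  # hypothetical number of bets on one race
print(round(peak_tps(bets)))        # roughly a thousand TPS in the final five minutes
print(round(offpeak_tps(bets), 1))  # well under 2 TPS the rest of the day
```

With these invented numbers the spike runs at several hundred times the off-peak rate, which is why, as Matt says, scaling down cheaply matters as much as scaling up.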

Michael van Rooyen:
9:29

Yeah, I was going to say, that’s what you build for, right? So in that preparation, the 364 days of preparing, what were some of the unique challenges the team faced in preparing and managing the infrastructure for such high-demand events?

Matt Maw:
9:41

So critical to the journey was to actually understand the outcome. I know that sounds really simple, but we took a lot of time and effort to break down the Melbourne Cup day that had just gone, to understand those spikes, those loads, those requirements, and to really then look at what our data had predicted we were going to get to, and did we get there? So we used to take a whole lot of statistics and readings leading up to the event: what was Caulfield Cup, what was Cox Plate, what were some of the State of Origin games, et cetera, so that we could try and get a bit of predictive analysis as to what those outcomes would look like. What was the betting traffic going to look like? What was the competition doing? And then we would do a deep dive as to how closely we predicted it, so that we could keep improving moving forward. Once we understood what the journey looked like, we then had to say, right, what is the most efficient way for us to get to those outcomes without an unlimited budget? Because ultimately, you know, it’s very easy to drive high transaction volume if you’ve got an unlimited budget.

Matt Maw:
10:48

But we were still an organisation, and an organisation that had very slim profit and operating margins. You know, wagering typically operates at about a two to 3% profit margin on its business by the time it pays its regulation, pays its fees, pays its staff. So it’s not a business that had unlimited budgets. So we had to make sure that we used that budget wisely in order to achieve those results. So it very much became about how do we get efficient at what we do, how do we utilise our scale, how do we use the economies of scale? And then what we started to look at is how can we use our other businesses to give us some of that buffer capability?

Matt Maw:
11:26

So we then drove a standardised approach that set our common infrastructure across the lottery business, the gaming business and the wagering business. We could then reutilise that infrastructure and move it from place to place. Today we would call that cloud, as most people have come to know and love it. But back in 2008, there was no such thing as the public cloud, so we essentially had to build our own private cloud capability to be able to move those workloads. And it was a great plan, right up until Super Tuesday, when we had an Aus7 Super Draw on the same day as Melbourne Cup. Our DR plan for wagering was to use the lottery infrastructure, and the lotteries’ DR plan was to use the wagering infrastructure. We had both peaks on the same day.
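
The Super Tuesday problem can be seen as a simple capacity check: mutual DR between two businesses only holds while their peaks don’t coincide. A minimal sketch, with invented capacity numbers:

```python
# Sketch of a mutual-DR capacity check. If wagering's DR plan borrows the
# lottery pool and vice versa, the plan only works while the two businesses'
# peaks don't coincide. All figures are illustrative.

def dr_plan_holds(primary_load: float, dr_pool_capacity: float,
                  dr_pool_current_load: float) -> bool:
    """Can the DR pool absorb the primary workload on top of its own load?"""
    return primary_load <= dr_pool_capacity - dr_pool_current_load

lottery_capacity = 100.0
# A quiet Tuesday: the lottery pool is nearly idle, so wagering can fail over.
assert dr_plan_holds(primary_load=60, dr_pool_capacity=lottery_capacity,
                     dr_pool_current_load=10)
# Super Tuesday: an Aus7 Super Draw fills the lottery pool and the plan breaks.
assert not dr_plan_holds(primary_load=60, dr_pool_capacity=lottery_capacity,
                         dr_pool_current_load=90)
```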

Michael van Rooyen:
12:06

So that was an interesting and fun day! Super complex and a super bit of planning. I just want to go back on a point you made earlier, the massive spike. It’s a good point. Most people just think of Melbourne Cup, you know, Race 7: it happens and it’s done. But you made a good point about the lead-up and wind-down. I think about a statistical view of this, a big bell curve that you guys would go through, pretty sharp up and down, a very short, sharp bell curve. So if I think about spikes in traffic, thousands of transactions a second, et cetera, what measures did you take to ensure reliability and performance, you know, with your own systems, vendors, good architecture, et cetera?

Matt Maw:
12:39

So resiliency by design became an important part of everything that we did, and we were maniacal about designing our systems to guarantee resilience from the ground up. We went to the nth degree in a lot of respects. When we were deploying our data centres, we ran our own data centres within our environments, and we had one particular pit where the diverse path from one facility to the other worked out to have about three metres’ worth of common conduit between the two. That was not good enough for us, so we went and deployed nearly another kilometre’s worth of pits that we literally had to plough into the ground to make sure the cables ran through diverse paths and did not share that same three-metre stretch of pit in the road. So that’s the sort of length we went down to and, in fact, on Melbourne Cup Day.

Matt Maw:
13:37

So in general, we used to run dual-dual everywhere. We’d have a primary host system, we would run a dual within the data centre, and then in the redundant data centre we would run another set of dual hosts. So essentially, for every one production host, we would have three DR hosts. And on Melbourne Cup Day we worked with our vendor community, and we essentially had a fifth data centre’s worth of capability stored in the back of a Pantech, and I hired two guys to sit in that truck, halfway between the two data centres, so that in the event of any equipment failure or any challenges at either data centre, we were able to drive that truck to the location and deploy that extra set of equipment. So those were the levels that we used to go to in order to make sure of the outcomes on that particular day. But when you’re running at 1,200, 1,300 transactions per second through the systems, that’s the sort of thing that you can do.

Michael van Rooyen:
14:33

Wow, wow. But how did you approach risk management and disaster recovery planning to ensure that operations were seamless? Was it just, again, that meticulous planning, thinking through failure scenarios? What sort of opportunities did you have to simulate failovers, et cetera? Was that part of the weekly or monthly process?

Matt Maw:
14:49

So, look, it’s a really interesting question. It’s one of those things where you can’t just get paranoid; you can’t think, oh, what about if an asteroid hits? There is a limit to what you need to think about. What you also need to do is be prepared to be brave throughout the year. Yep, so things like a cable cut test. Most organisations are pretty reluctant to do that, because it inevitably comes at a cost: there’ll be downtime because of something you haven’t thought about.

Matt Maw:
15:17

We were prepared to suffer some of those challenges throughout the year to ensure that, on the event of Melbourne Cup Day, we had those true capabilities in place.

Matt Maw:
15:25

We were also meticulous and maniacal about learning from our mistakes and learning from our failures, so that every single time we had an issue throughout the year, we would do a root cause analysis.

Matt Maw:
15:38

We’d do a deep dive as to exactly what went wrong, what happened, what the potential impact would be on things like Melbourne Cup Day or Super Draws, and then really make sure that we drove a learning experience out of those exercises that fed into continual improvement programs. So lots of data, lots of analysis, lots of learnings, and we needed to have a culture of learning from our mistakes. We didn’t try and hide them, we didn’t try and cover them up. Mistakes were going to happen, issues were going to occur, challenges were going to be part of our everyday life. We needed to make sure that our culture was that we would own it, we would uncover it, we wouldn’t take a persecutorial perspective on it, and that we would have an open and comfortable place for people to put their hand up and say, this went wrong. And so culture was as much about the outcomes we were able to achieve as good design and good architecture were.

Michael van Rooyen:
16:35

Yeah, fair enough too. You touched on building your own data centres. Now, I know we’re talking about 2008 to 2016, and there’s a lot of discussion about public cloud, et cetera, and I’m curious to understand. You really led that transformation to building your own private cloud, which made a lot of sense at Tats, and it may still be valid. So what motivated you to make that change and build your own, and what benefits did you get at the time?

Matt Maw:
16:59

So there’s two elements to it: a technical element and a commercial element. We used to have a running joke that we kept undated resignation letters, so that if we ended up on the front page of the Courier-Mail on Melbourne Cup Day, that was an easy answer. But all joking aside, what it meant was that there was accountability and ownership associated with the delivery of those capabilities. The problem is, from a public cloud perspective, how do you write a commercial contract that says if you’re down for 30 seconds, I want an SLA cheque written back to me for the up to $1 million worth of lost revenue that we’re going to get?

Michael van Rooyen:
17:37

That’s a fair point.

Matt Maw:
17:38

That’s a very difficult commercial structure for anybody to sign up to. So by owning our own data centres and having our own capability, we had control of those sorts of elements. Now, that’s not to say that I don’t think public cloud plays a very important role for those burst capabilities. One of the things we were looking at towards the end of my tenure there was: can we take our test and our development environments? We were a very big development shop. We had over 400 developers within the organisation. That requires a lot of infrastructure, a lot of sandpits, a lot of development environments. Can I pick those workloads up out of our private data centres, move them into the public cloud for the six or eight weeks of Spring Carnival, allow our developers to continue to operate off the public cloud, get through Spring Carnival with the infrastructure we had on-prem, and then bring them back down off the public cloud onto that private infrastructure to reduce our costs as we move forward? So it was about looking at our totality of workloads, not just production but all the other workloads that exist within an organisation, and how we drive that out. One of the critical things we did was classify all our workloads as either revenue generating, business critical or business important, so that everybody knew what those particular workloads were and where we would put them. Revenue generating never went into the cloud. Business important, the public cloud was its position of first choice.
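
Matt’s three workload tiers map naturally onto a placement rule. The sketch below uses the tier names from the conversation, but the burst-to-public-cloud behaviour for the middle tier is an assumption drawn from his test-and-development example, not a documented Tats policy:

```python
# Sketch: encoding the workload tiers Matt describes as a placement policy.
# Tier names come from the conversation; the burst logic for business-critical
# workloads is an illustrative assumption.
from enum import Enum

class Tier(Enum):
    REVENUE_GENERATING = "revenue_generating"
    BUSINESS_CRITICAL = "business_critical"
    BUSINESS_IMPORTANT = "business_important"

def placement(tier: Tier, spring_carnival: bool = False) -> str:
    if tier is Tier.REVENUE_GENERATING:
        return "private"   # revenue generation never went into the cloud
    if tier is Tier.BUSINESS_IMPORTANT:
        return "public"    # public cloud as its position of first choice
    # Business critical: private by default, burst out during Spring Carnival
    # (assumption modelled on the test/dev example, not a stated rule).
    return "public" if spring_carnival else "private"

print(placement(Tier.REVENUE_GENERATING))                  # private
print(placement(Tier.BUSINESS_CRITICAL, spring_carnival=True))  # public
```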

Matt Maw:
19:02

What did we get from owning our own infrastructure? We ultimately had control. We ultimately knew that Melbourne Cup Day is the second Tuesday in November. Sorry, first Tuesday in November, we don’t talk about that slip. Everybody knows that, but big cloud providers don’t. If you’re an Amazon or an Azure, you don’t know that Melbourne Cup Day exists on a Tuesday. So how do we know if they’re going to do a core infrastructure upgrade? How do we know if they’re going to do a network change? We went to extreme lengths to make sure that we could lock our environment down, that we didn’t make changes. We even went as far as getting Telstra, our core network provider at the time, all the way up to the CEO, to ensure that Telstra made no changes on Melbourne Cup Day.

Matt Maw:
19:43

So we ultimately gained control of our own destiny and made sure that we knew and could predict and control as many of the variables as possible, which you don’t typically get in a public cloud environment.

Michael van Rooyen:
19:53

You touched on Telstra as your provider of the network. You also touched on 11,000 sites to deploy this network, so people can buy a ticket anywhere or, I should say, have a gamble or place a bet. If I think again about the timeline when you did that, 2008 to 2016, really pre-NBN, you would have had to do a lot of hard work with Telstra and the carriers at the time. Obviously, you must have made a significant increase in bandwidth and reduced costs as part of any WAN upgrade; there are motivators for why you’d do it. Can you share a little bit about this upgrade? But also, if you had to reflect on doing that again today, with the NBN, would that play a role? Would you look at it differently from an SD-WAN point of view? Can you just tell me your thoughts around that?

Matt Maw:
20:40

Yeah, it might be surprising for listeners to hear, but in 2008, the primary means of communication to the branch network was via dial-up modem. I literally had banks of Netcom modems and US Robotics 56K modems. That was the single biggest connection, and those that needed slightly more bandwidth were on frame relay. So at the time, Tats was the single reason why Telstra was continuing to run the frame relay network. They couldn’t shut it down until we migrated off. Again, I mentioned earlier in the podcast the change that was happening from a macro perspective. We were getting more and more competitors where we hadn’t had them in the past, with monopolistic retail licences, and so various parts of the organisation were trying to drive a richer experience. One of the things they wanted to do was drive real-time video into those branch networks. Now you can imagine my feeling when they spoke to me and said, I need to deploy real-time video over dial-up modem. It was never going to fly.

Matt Maw:
21:47

So we also had a Telstra contract that was an amalgamation of some 30 to nearly 40 different Telstra contracts. We didn’t actually know exactly how many connections we had. We had well over 30,000 Telstra services at the time. We didn’t have backup for most of the sites, so it was a significantly antiquated environment. But at the same time, you’ve got to remember that the primary thing being transmitted was actually very small bits of data. A bet is literally sometimes as small as eight characters. It’s very small bits of data, just a lot of it being transmitted at the same time.

Matt Maw:
22:17

So we had to look at that and say, you know, how do we drive an efficient outcome that was going to allow us to meet today’s requirements and then scale up? So, yeah, we looked at things like Ethernet Lite and, you know, the forerunners to NBN, ADSL-type services. We drove a consistency of router. As I was saying before, how do we standardise our core infrastructure so we can utilise it across multiple environments? We did the same thing with the routers in our retail network.

Matt Maw:
22:46

So rather than trying to find the best shoe for each foot, we went with a common shoe that every foot would fit into. That meant that some sites got more than what they needed, but what it did mean was that we could roll out the exact same piece of equipment across every one of those 11,000 sites, which helped our sparing, our standard operating procedures, our efficiency. It just drove down the price of maintaining and operating that environment. Yes, it cost us a little bit more from a CapEx perspective, but our OpEx went through the floor. And then, once we had that base level, what we could start to do was look at those individual sites that needed more capability. We could then, shall we say, use the golden screwdriver and drive up those network capabilities as needed. So it was very much about driving for the outcome, driving for the end game, and looking at our operating capabilities as we moved forward.
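
The common-shoe trade-off, slightly higher CapEx bought back many times over in OpEx, can be illustrated with a toy fleet-cost model (every dollar figure here is invented purely to show the shape of the argument):

```python
# Sketch of the common-router trade-off Matt describes: paying more per site
# up front for one standard SKU, in exchange for cheaper ongoing operations.
# All dollar figures are invented.

def fleet_cost(sites: int, capex_per_site: float,
               opex_per_site_yr: float, years: int) -> float:
    """Total cost of ownership for a fleet over a number of years."""
    return sites * (capex_per_site + opex_per_site_yr * years)

SITES, YEARS = 11_000, 5
# Best-fit gear per site: cheaper boxes, but mixed spares and procedures.
best_fit = fleet_cost(SITES, capex_per_site=800, opex_per_site_yr=300, years=YEARS)
# One common SKU everywhere: dearer boxes, much cheaper to spare and operate.
one_sku = fleet_cost(SITES, capex_per_site=1_000, opex_per_site_yr=180, years=YEARS)
print(one_sku < best_fit)  # standardisation wins over the fleet's life
```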

Michael van Rooyen:
23:37

Yeah, fair enough, and I suspect now, with NBN everywhere, it’s probably just your choice. You know, multi-carriage probably would have been a consideration. Obviously you want to hold the carrier to account because of the criticality of your SLAs, but I guess it would have given you a bit more freedom to have those discussions, possibly.

Matt Maw:
23:59

Look, SD-WAN is an absolute game changer when it comes to that sort of capability. The ability to deploy different carriage services, different capabilities, wrap it all together.

Matt Maw:
24:01

Redundancy, you know, even something like Starlink or some of those sorts of capabilities that you can now wrap up, while still maintaining that same level of operational consistency. You don’t need to worry about whether a site is on carrier A or carrier B; it operates the same, it looks the same, it all just happens in the background. So SD-WAN from that perspective. And that’s where, again, you’ve got to think about those operational characteristics: how do you consistently drive an outcome at the lowest cost of operations you can get your hands on?
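
The SD-WAN behaviour Matt describes, where any carrier looks the same once wrapped in the overlay, comes down to per-site path selection. A minimal sketch, with hypothetical link names and a simple latency-based health metric:

```python
# Sketch of SD-WAN-style path selection: per-site carrier diversity with
# automatic failover, while operations stay identical everywhere.
# Link names and the health metric are illustrative assumptions.

def pick_path(links: list[dict]) -> str:
    """Prefer the healthiest available link; any carrier (NBN, Starlink, 4G)
    looks the same to operations once wrapped in the overlay."""
    up = [link for link in links if link["up"]]
    if not up:
        raise RuntimeError("site isolated: all links down")
    return min(up, key=lambda link: link["latency_ms"])["name"]

site_links = [
    {"name": "carrier_a_nbn", "up": True, "latency_ms": 12},
    {"name": "starlink", "up": True, "latency_ms": 45},
]
print(pick_path(site_links))   # prefers the lower-latency NBN link
site_links[0]["up"] = False    # primary carrier fails
print(pick_path(site_links))   # transparently fails over to Starlink
```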

Michael van Rooyen:
24:33

Yeah, fair enough. Moving slightly from the technology stack, I just want to talk a little bit about technology leadership. I know that you’ve obviously run large teams, driving innovation, et cetera. If I think about your time running across many organisations, from integrator to vendor to customer, at TATS you were leading a technology function of, I think, over 160 people. How did you manage and align such a large, diverse team? Because no doubt you had all sorts of people in it. And then, how did you combine those separate business units into a single cohesive team for the mission of delivering, obviously, experience to customers?

Matt Maw:
25:14

Yeah, look, so TATS was an amalgamation of a number of different entities, and the integration of teams before I took over the group was to have them all reporting to the same manager or the same chief executive, and then that’s integration done. And we actually ended up with a situation where one of the teams didn’t trust one of the other teams, so they put a firewall on the internal network so that they couldn’t see them.

Matt Maw:
25:42

And then the other team decided, well, if you’re going to do that, so will we, and so I ended up having two different firewalls on internal networks.

Matt Maw:
25:45

And you’ve got to remember that our firewall infrastructure was regulated, so I couldn’t make changes to the firewalls without the regulators pre-approving them. I couldn’t even do internal changes without involving the regulators. So you can imagine how inefficient that particular setup was. So we actually embarked on a couple of things. The first was what we called the One Tats program.

Matt Maw:
26:04

The really simple way to explain it is, when we used to ask people who they worked for, we started off with “I work for Golden Casket”, “I work for Tattersall’s”, “I work for UNiTAB”, all the different entities. By the time that I left, when we’d ask somebody who they worked for, they’d say they worked for TATS Group. That was the change of behaviour we wanted to drive, and there was a whole raft of things we did to get there: build an environment of safety, build an environment of career growth and opportunities. We used to celebrate turnover, if that makes any sense, in that we’d celebrate when people grew their careers to the point where they could no longer achieve what they wanted within the organisation, and then we would help them move into industry, move into other areas. So I used to lose people to the likes of Microsoft, Avaya and Dell.

Matt Maw:
26:54

One of my guys, my lead PABX engineer, went and became the lead PABX engineer for Avaya for Asia Pacific, based out of Hong Kong. We lauded that as a fantastic outcome, and so we would get people who wanted to join an organisation where they could grow their careers, grow their skills and build those capabilities. One of the key things I used a lot was guiding principles. We spent a lot of time and effort developing our guiding principles, and then a lot of time on training to make sure people understood them. What that allowed people to do was be very autonomous, very low down in the organisation.

Matt Maw:
27:33

You know, help desk people knew what they needed to do and how to lean in the right direction without any guidance from management or senior leaders, and so they felt very empowered to do what they needed to do on a day-to-day basis. They didn’t feel micromanaged, they didn’t feel like there was the big weight of the hand, but they knew when they needed support they could get it. It was very much about helping people understand that growth and development was what we wanted, was the outcome that was lauded as success, and giving them the tools to drive that. And then promote from within, all that sort of good fun stuff, is what we did to make it work.

Michael van Rooyen:
28:15

Yeah, great. And are those the same sort of principles and approach that you took to fostering a culture of innovation and collaboration within the teams, particularly during those periods of transformation and transition, while also having these high-pressure events at the same time?

Matt Maw:
28:34

We used to have a concept of push workloads. The architecture team had a requirement that said: how can I find things that we can do better, more efficiently? And they’d push that into the project teams to deliver. The project teams then delivered against it and pushed it into level three operational support, who pushed it down to level two, who pushed it down to level one. And then, really interestingly, level one pushed back to the architecture team. They would find the frequent flyers, the ones where, if we made some architectural changes, we could gain more efficiency. And so we created that flywheel of push, which allowed people to genuinely feel that they weren’t a dumping ground, they weren’t just the ones left holding the can; they had the opportunity to take new workload at the top, push workload off at the bottom, and create that flywheel of innovation throughout the business.

Matt Maw:
29:27

We also did something similar with our operational and project teams. We put in a process where every two months we would sit down as a management team, look at the individuals, look at the projects that we had on, and then divvy up those people between the operational team and the project team. It meant that you never knew, as an engineer, whether you were going to be in the operational team or the project team, which meant you didn’t end up throwing things across the fence that weren’t operationally efficient, because you could potentially be the person who ended up having to support that or drive it. But it also meant that you weren’t always in one camp: you got to play with new things, you got to do new things. So it really created that balance between innovation and operational efficiency, because ultimately, that’s what we were measured on: uptime, resilience, reliability, all those sorts of things.

Michael van Rooyen:
30:14

Yeah, that’s a great way to do it. And then, rounding that out, knowing that you’ve been a CTO in a number of roles, what advice would you give aspiring CTOs who are looking at taking on leadership roles, whether in large-scale environments and high-demand industries like the ones you’ve worked in, or at vendors, integrators, et cetera? Any comments for people wanting to get into those sorts of roles?

Matt Maw:
30:33

I’d say a couple of things. The first is that it’s not all beer and skittles. There are moments in my career when I’ve had the phone call at one o’clock in the morning where the person’s gone, “We unfortunately turned the power off to the data center and then turned it back on, and everything came up in the wrong order and corrupted a whole lot of data”, and all that sort of good fun stuff.

Matt Maw:
30:55

So the duck-on-water analogy is a good one: sometimes what it looks like on the surface isn’t what it’s like underneath. It is a journey. A few tips or tricks I would offer: take the time to bring the team on the journey.

Michael van Rooyen:
31:11

Yes.

Matt Maw:
31:12

You are never going to do it on your own, and if you think you can do it on your own, then you’re in the wrong job. It is a collaborative effort, whether that’s your own internal team or your broader ecosystem of partners and capabilities. Nobody does it all on their own. So take the time to bring people on that journey. Give them the tools, give them the way you want them to operate. Build an environment where people feel as though they can take a few challenges and a few risks, but do so in a safe way, with appropriate guidelines and guardrails, so that it doesn’t have an adverse impact on the overall organisation. Now, that’s easier said than done, but if you can create that culture, that right environment, then what you will find is that people will thrive, and when people thrive, they do amazing things. And that’s probably my biggest piece of advice: don’t try and be the smartest person in the room.

Matt Maw:
32:06

Hire a whole lot of people that are a whole lot smarter than you, and they will do amazing things.

Michael van Rooyen:
32:10

If I then just turn to the role you’re in today, Chief Delivery Officer for the critical infrastructure business. I want to touch on this for a couple of minutes, and we’ll certainly do a further session at some point around it. Maybe you could wrap up those experiences of running these high-value events, the criticality of them, understanding risk, all the things you’ve talked about over the last period, and how that relates to the critical infrastructure industry: the skills, what we’re developing and building for those customers, and how they connect. Can you draw those parallels together?

Matt Maw:
32:44

Yeah, look, absolutely. The world of critical infrastructure is actually expanding into various different industry sectors, driven by the need for highly resilient, highly capable platforms to deploy the application sets that essentially make the world work.

Matt Maw:
33:06

Now the opportunity to run something as simple as a meat processing plant or a food distribution centre without technology is basically zero, and if the technology goes down then those operations stop. And if we think about our time during COVID, even the perception of a lack of toilet paper, for example, drove huge disruption into our supply chain and into our society. So critical infrastructure is something that is a passion of mine, and we need to think about it with the same mindset that I had at TATS, at RSL Care, at a number of different organisations I’ve been at, which is resiliency by design. We need to think about how we’re going to design our capabilities, how we do that in a cost-efficient and effective manner, how we operate it for the long term, and what happens when things go wrong, because inevitably things will go wrong, whether that’s bad actors doing bad things, or internal people making changes for all the right reasons that unfortunately go wrong.

Matt Maw:
34:10

There will be issues and things that happen within those environments and we need to make sure that they are resilient and that they can self-heal, and they are easy to resolve and bring back to an operational state.

Michael van Rooyen:
34:21

Just reflecting on some of this work you’re doing, what are some of the most memorable or challenging moments you experienced in managing the infrastructure for, like a Melbourne Cup and other large scale events, and what key lessons did you learn and take away from it?

Matt Maw:
34:32

Look, I remember vividly one Melbourne Cup. We had a memory leak in one of our applications that was consuming the resources in the environment. And you probably don’t know, but the horse race runs for a couple of minutes, and during that time we literally drop to zero transactions, because everyone’s watching the race and seeing what happens.

Matt Maw:
34:54

So I remember vividly that we actually went through an entire web farm reboot during that period, a rolling reboot, to try and get it back up, to solve the memory leak and keep us up and running for the day. So yeah, there were certainly some moments of clenching. And from an external perspective, no one knew that it happened. But we timed it down literally, and we came back up about six seconds before the end of the race, before everyone started logging back on.

Matt Maw:
35:23

So yeah, there were some moments. One of the key takeaways for me is the old Scouts motto: be prepared.

Matt Maw:
35:29

You know, do your homework, put in the effort, make sure that you do it right. For us, Melbourne Cup Day used to start at four o’clock in the morning, where we’d come in, have breakfast, have the teams ready to go. Nothing was ever done on Melbourne Cup Day that we hadn’t done before. Everything was prepared: if this happened, then we did this; if that happened, then we did that.

Matt Maw:
35:50

We had structures, we had processes; no one had to think on Melbourne Cup Day, and that was a key element. We made sure that we had role-played and stress-tested everything beforehand. So the key takeaway and learning is: do your thinking during the middle of the day, do your thinking when the pressure is off. Don’t try and think at one o’clock in the morning. Don’t try and think in the middle of a critical incident. Make sure that you can rely on muscle memory in those moments, and that means you’ve got to do your homework.

Michael van Rooyen:
36:25

Of course, of course. And then one last question for today’s session. Tell me about the most significant technology change or shift you’ve been involved with or seen in your time in the industry or in your life even.

Matt Maw:
36:39

Wow. I think back, I don’t know, nearly three decades in the industry. I was certainly there when voice and data collapsed into the network, and we went through virtualisation coming in, so we went from centralised back to decentralised, back to centralised, back to decentralised. We’ve been around that roundabout many times.

Matt Maw:
37:02

You know, probably the most significant technology shift for me really was the rise of the internet. I know that’s going to sound very old school of me, but fundamentally it has driven an interconnectivity that we never saw before, and we really are still seeing the implications of it. I mean, without the internet we wouldn’t have had the cloud; without the cloud we probably wouldn’t have AI. Connecting organisations together, taking them out of essentially analogue paper and really driving digitalisation, was the first domino, if you like, of what we’re now seeing as a multi-domino fall. So yeah, I’ll go a little bit old school and say the rise of the internet and the interconnectivity of organisations.

Michael van Rooyen:
38:00

That’s a good one. I mean, again, it comes down to plumbing, right? I think we’re now at the point, and have been for a while, where the internet is stable. It’s there, it’s the hyper-connected network, and everyone’s transitioned to using it as their backbone. So I agree. As a digital plumber at heart, I think the internet is absolutely one of the wonders of the world, effectively, to some point. And people who have grown up with it, particularly the new generation, it’s just there. Connectivity is there; they don’t think about it.

Matt Maw:
38:26

It just works, right? Yeah, the days of not being connected are gone. What we can now do, what we have done with it and what we are doing with it is quite amazing. But yeah, technologist at heart, I guess that’s probably it, and I agree with you.

Michael van Rooyen:
38:43

If I think about when the guys created ARPANET, I think it was the late 60s. If you consider how it fundamentally still runs today, that’s a pretty impressive engineering design concept, end to end. So I completely agree, it’s a great one. Matt, I appreciate your time today. Thanks for the great insights into the major event that everyone in Australia knows about, and I look forward to talking to you more about critical infrastructure in the future.

Matt Maw:
39:09

Thanks, MVR.
