Written by Abhinov Gulati, CIO, Profectus Group

We’re in a rush to simplify everything that seems complex, and rightly so. Complexity is a nightmare to manage in the business world, and decision-makers will often develop a headache trying to find a needle in a field of haystacks.
But the process of simplification is itself complex. And when you’re talking about the analysis of data across an ecosystem of disparate data sources, particularly when it comes to contract compliance, it requires not only access to and interoperability of all of those sources but also data governance, data cleanliness and, most of all, data consistency. When this level of disparity sits in a non-tech-native field, like supply chain and logistics, you’ve got little hope of simplifying things quickly, affordably and without adding further complexity to an already crowded environment.
Take the example of a national retail store with an active delivery operation: something in the order of six million customer deliveries annually and around 60,000 store replenishments per annum. With that comes an incredible number of consignments and supplier invoices, and a need to ensure every contract has been adhered to as agreed. This usually requires at least one person doing the job full-time, and that’s before accounting for running the accruals process accurately for reporting purposes, which is often required even when a supplier hasn’t yet submitted an invoice. With this number of agreements and consignments to review, errors naturally occur, particularly when the process is manual.
To simplify this complexity, you need data consistency across your supplier list of couriers, product vendors and myriad others, which can number in the hundreds; you need quick and easy ways to cross-reference completed jobs against what suppliers have invoiced, so that suppliers are paid on time and accurately (a toy sketch of that kind of reconciliation follows below); and you need all of this to be in unison with the other data formats and reporting functions within your business. Otherwise, logistics teams will spend all of their time on paperwork and recovery.
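To make the cross-referencing concrete, here is a minimal sketch of that kind of reconciliation in Python. The field names (consignment_id, contracted_rate, invoiced_amount) and the flat records standing in for your delivery and invoicing systems are illustrative assumptions, not a description of any particular platform.

```python
# Minimal reconciliation sketch: match supplier invoice lines against
# completed consignments and their contracted rates, and flag discrepancies.
# Field names and data shapes are illustrative assumptions only.

TOLERANCE = 0.01  # ignore sub-cent rounding differences


def reconcile(consignments, invoices):
    """Return invoice lines that don't match a completed, correctly priced job."""
    completed = {c["consignment_id"]: c for c in consignments if c["status"] == "delivered"}
    discrepancies = []
    for line in invoices:
        job = completed.get(line["consignment_id"])
        if job is None:
            discrepancies.append((line["consignment_id"], "invoiced but no completed delivery"))
        elif abs(line["invoiced_amount"] - job["contracted_rate"]) > TOLERANCE:
            discrepancies.append(
                (line["consignment_id"],
                 f"billed {line['invoiced_amount']} vs contracted {job['contracted_rate']}")
            )
    return discrepancies


# Example usage with toy data
consignments = [
    {"consignment_id": "C-1001", "status": "delivered", "contracted_rate": 42.50},
    {"consignment_id": "C-1002", "status": "delivered", "contracted_rate": 18.00},
]
invoices = [
    {"consignment_id": "C-1001", "invoiced_amount": 42.50},  # matches contract
    {"consignment_id": "C-1002", "invoiced_amount": 21.00},  # overbilled
    {"consignment_id": "C-9999", "invoiced_amount": 10.00},  # no such delivery
]

for cid, reason in reconcile(consignments, invoices):
    print(cid, "->", reason)
```

The logic itself is trivial; the hard part in practice is getting consignment, invoice and contract data from hundreds of suppliers into a consistent shape in the first place, which is exactly the governance and consistency problem described above.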
So how can this be done? Well, it’s not easy, particularly in this economic environment of high inflation, low unemployment and higher wages.
1) Find a data scientist. A business needs data skills to farm, clean and combine its data, and it needs people to execute on that. The best person for the role from an internal perspective would be a data scientist – so it’s incumbent on your business to start recruiting a data scientist, stat (pun intended). Sounds simple, right? Well, it’s not.
For one, the jobs market is as tight as it’s ever been in Australia – as of May, the unemployment rate is just 3.5 per cent. This means that to entice someone into your role, you’ll likely need to coax them out of a role they’re already in, so you’d need to offer a pretty strong package, and then have the capacity to guide and manage them.
And when it comes to data scientists, the package had better be pretty good. According to Seek, the average salary for a data scientist is between $110,000 and $130,000, while other data suggest it averages out at $165,000. So get the wallet ready if you want to go down this path.
2) Open up a new business function. This department will be charged strictly with the analysis of the data your scientist can glean from the myriad sources. Yep, you’ve just added a role and significant salary to your business, and you need to add more to make sense of their work.
This department features a skilled analyst who understands the nuances of different types of contracts, from logistics to fuel and rebates, as well as a data engineer to implement and maintain the data architecture, including databases and processing systems. And if you think it’s a challenge to find a data scientist who can work independently, it’s even harder with a data engineer: LinkedIn recently ranked data engineer as the 13th fastest-rising role in Australia, and the average salary, according to Seek, is between $120,000 and $140,000.
Suddenly, salary and headcount are really starting to add up, aren’t they? And that’s not factoring in the cost of implementing the data architecture required to make this all hum.
3) Implement a recovery module. Now comes the fun part. You’ve uncovered significant lost revenue through your data scientist and data analysis team, so let’s get that money back!
But who does it? Is it Accounts Payable, who are already undertaking similar functions across the non-facilities management arms of the business? Is it the data analysis team, whose expertise is better served, you know, analysing data?
Often, responsibility for recovery is shared across the team, and this can lead to inconsistencies, a lack of process and more headaches than originally planned.
As you can see, simplification is actually quite complex. And sadly, there isn’t a one-size-fits-all, off-the-shelf simplification toolset – at least not yet.
Regardless, there is a role technology can play in taking the grunt work out of this aspect of a business’s everyday functions and reducing complexity – it’s just best sourced externally rather than built within, as this can be done for a relative pittance compared with building a function from scratch.
The recent emergence of artificial intelligence (AI) is proving a boon, enabling technology to do much of the heavy data crunching, cross-referencing and analysis in real time, which saves countless hours. AI also enables those external suppliers to act as the recovery module on behalf of the business more than ever before, speeding up the time to recovery and lessening the load on the internal team that would otherwise do the chasing.
We all want things to be simple – but simplification is akin to untangling wires behind your TV set. The goal makes sense, but in attempting the feat more tangles emerge, the blood starts to boil, and the hair starts to get pulled out.
Ultimately, you’re better off handing the untangling task to others so you can just get on with your day.

Abhinov Gulati is the Chief Information Officer for Profectus Group.