For many years, I was under the impression that Finance was the only corporate function that used data to make decisions. Of course, that wasn’t necessarily true, and it definitely shouldn’t be true of modern corporations. There are a number of questions we could ask about data today versus, say, ten years ago when I was just getting into corporate finance: Has data fully transitioned from a liability or risk into an asset that actually drives value? Are all of the corporate functions still consuming data, producing it, or both? Have data scientists and the like taken over as the main “data drivers”? Are buzzwords like AI and machine learning what really matter? The list goes on, but I’d like to touch on the pain points I experienced while using data to drive business decisions with financial impact, and how those same pains persist in today’s data-driven world.
My story starts when I began working as a financial analyst for a large international semiconductor company. With $4 billion in revenue, hundreds of customers, and 20,000 employees across the globe, data was surely plentiful… However, let’s keep the scope limited to my specific role, which focused heavily on global revenue reporting for North America. In an ideal situation, someone on the executive leadership team could open a dashboard and press “refresh” to get all the information they needed, with the click of a button. Similarly, an account manager could go into a portal, press that same “refresh” button, and have the relevant sales and orders for their accounts visible within seconds. But in reality, is that how it works? If you have ever worked in corporate finance, especially 10+ years ago, you know there was typically a ton of manual data manipulation required to appease all the stakeholders.
First off, the data came from disparate sources with no one-size-fits-all format, so this “raw revenue data” ended up in (at least) two different buckets. Granted, for direct sales we had fairly clean data with up to 40 different fields that could be mined. In terms of raw data this was pretty good, and it was usable for a number of different revenue reports: a simple download from BO into Excel, a quick cleanse, and a drop into an Excel template with linked dashboards. Even as I write that, it sounds awful… and for anyone reading this and trying to imagine what it would take to make that data consumable, it is very unclear. Now add in the various POS data from our distributors, and the entire process gets 10x more complicated. POS data was generally reported just once per month (as opposed to daily for direct sales), and it came in any number of different formats. Thinking back, I am not even sure how we managed to publish some of the reports we did… Good thing they were not for public consumption, or there could have been audit implications.
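To make the consolidation problem concrete, here is a minimal sketch of that step: daily direct-sales records arrive in one clean schema, while a distributor’s monthly POS file uses its own column names and has to be mapped onto a common layout before the two can be combined. Every column name and sample row below is a hypothetical illustration, not the company’s actual schema.

```python
# Common schema: customer, revenue_usd, period, channel.

def normalize_pos_row(row):
    """Map one distributor's (hypothetical) POS column names onto the common schema."""
    return {
        "customer": row["Reseller End Cust"],
        "revenue_usd": float(row["POS $"]),   # POS files often report amounts as text
        "period": row["Month"],
        "channel": "pos",
    }

# Daily direct-sales extract -- already in the common schema.
direct_rows = [
    {"customer": "Acme",   "revenue_usd": 125000.0, "period": "2014-03", "channel": "direct"},
    {"customer": "Globex", "revenue_usd": 98000.0,  "period": "2014-03", "channel": "direct"},
]

# One distributor's monthly POS file, in its own layout.
pos_rows_raw = [
    {"Reseller End Cust": "Initech", "POS $": "42000", "Month": "2014-03"},
]

combined = direct_rows + [normalize_pos_row(r) for r in pos_rows_raw]
total = sum(r["revenue_usd"] for r in combined)
print(f"{len(combined)} rows, ${total:,.0f} total")  # 3 rows, $265,000 total
```

In practice, each distributor would need its own `normalize_pos_row` mapping, which is exactly why a shared reporting format would have saved so much manual cleansing.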
Even with all the raw data “cleansed” and dropped into the desired Excel file, you’d think my job was done. With revenue reported bi-weekly, business partners from the CFO down to the account manager for a specific business line were eagerly awaiting that email: “Revenue Report Update - please click link to access on shared drive”. Without fail, the day a new revenue report was published, I would get at least ten emails asking for an explanation, or a complaint that something was missing, or a phone call from a screaming manager telling me I wasn’t doing my job right… It was always easy to blame data quality, or the APAC team, or any number of other excuses, but the problem remained: not everyone was getting what they wanted out of reporting. My corporate finance team knew we had poor-quality data and spent too much time manipulating it, yet after five years of different variations on the same thing, nothing really changed.
And the business was never constructively engaged: were they giving us clear requirements, or just receiving whatever the system produced?
Imagine the financial impact of that. Imagine how many person-hours were spent reporting data that was not 100% correct. Then imagine how many business decisions were made, rationally or not, on the basis of that manipulated data. The implications are significant, and probably ran into the millions of dollars, if not more. Thinking back, I wish we had had an expert tell us how to do things differently. We spent far too much time in team and interdepartmental meetings trying to come up with solutions, only to spin our wheels with no real fix. Most companies want to spend millions on tools like Salesforce or Tableau, thinking this will fix their data and reporting issues. But in reality, businesses need to identify the gaps between three pillars - people, process, and technology - and there needs to be a grassroots data strategy with specific owners who can be held accountable for driving that strategy toward tangible results.
The anecdote above about finance is just one small use case where data caused distrust and potentially led to misinformed business decisions, but tell me your story; it could be in HR, IT, sales and marketing, operations, etc. There are horror stories across the business. But you don’t need a horror story to improve your data management and strategy… you might just want a second pair of (expert) eyes. Data is all around us these days, and if we don’t start treating it as a corporate asset rather than a risk or liability, we will fail or be left behind by our more advanced competitors.