It was the end of August when a colleague reached out to me, saying there was a new project in the pipeline that I might be interested in. It turned out the project was for a new client, so I didn’t have to think much before accepting. It was the perfect opportunity to build something from scratch and learn new things.
Long story short, the client was planning to launch a mobile payment solution for its B2B customers. Since the project had been in the pipeline for some time, by the time we joined they already had a pretty clear view of the technical options available for the roll-out. However, the client was unsure of the impact of these options on their business, and of which one was best from a business point of view. We therefore suggested a data visualization solution to answer these questions.
To make this project a success, it was crucial for us to learn more about the client and the project they were undertaking. Unsurprisingly, the best way to achieve this is by meeting people and discussing the project, the business, and the data available. This part of a project is extremely valuable for two reasons: on one hand, you gain insights that help you deliver a useful solution for the end users; on the other, you meet new people and learn a lot on a personal level. In my opinion, this is what makes the consulting job really interesting.
After digging a bit into the initial ideas, we concluded that the client was mainly interested in an overview of its physical assets (sites) that combined geographical, technological, and transactional information. In particular, the client wanted to:
Identify countries where to first implement the mobile payment solution
Identify customers for the early roll-out of the mobile payment solution
Analyse the data for asset management purposes
To answer these questions, we had at our disposal non-aggregated, structured transaction data from the back office, as well as various Excel files with data about the sites and customers (identifiers, names, classifications…).
The biggest challenge during this project was ensuring that the data about the sites (physical assets) and customers was reliable and clean. The transaction data came from a reliable source, so there were no particular issues with it. However, the information about sites and customers originated from multiple sources, none of which was thoroughly maintained. We therefore started preparing the data in Alteryx and met with the client to discuss data issues and the assumptions to be made. After a couple of such sessions, we managed to create several reference files for the site and customer data we needed, allowing us to move on to the last stage: data visualization.
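Alteryx workflows are built visually rather than in code, but the underlying reference-file logic — stacking the poorly maintained site sources, keeping one value per site, and flagging conflicts to raise in the client sessions — can be sketched in pandas. All column names and values below are hypothetical, purely for illustration, not the client’s actual schema:

```python
import pandas as pd

# Two hypothetical, unmaintained source files describing the same sites.
sites_a = pd.DataFrame({
    "site_id": ["S1", "S2", "S3"],
    "country": ["BE", "FR", None],
})
sites_b = pd.DataFrame({
    "site_id": ["S2", "S3", "S4"],
    "country": ["FR", "NL", "DE"],
})

# Stack both sources, keep the first non-missing value per site,
# and count distinct non-missing values to flag conflicts.
combined = pd.concat([sites_a, sites_b], ignore_index=True)
reference = combined.groupby("site_id", as_index=False).agg(
    country=("country", lambda s: s.dropna().iloc[0] if s.notna().any() else None),
    n_values=("country", lambda s: s.dropna().nunique()),
)
reference["conflict"] = reference["n_values"] > 1

print(reference[["site_id", "country", "conflict"]])
```

Rows where `conflict` is true are exactly the cases worth bringing to a data-discussion session with the client, while the rest can go straight into the reference file.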
For this stage, we used Tableau to create dashboards that the client could use to explore its data and find answers. Self-service data visualization is an interesting approach for two reasons. On one hand, the developer doesn’t need all the business knowledge required to interpret results, which accelerates delivery. On the other hand, users can explore the data with very little technical knowledge of the tool, while their business background allows for meaningful interpretation. In this particular case, we created several dashboards combining charts and geographical maps to help users answer the initial questions.
The initial project is now over, but this isn’t the end of the story. Through this exercise, we successfully raised data awareness and sparked other teams’ interest in data analytics, which, at the end of the day, might be the most rewarding aspect of any project!
Any questions? Don’t hesitate to contact me: email@example.com
Today, you take a picture of a paper bill and it is suddenly processed by your banking app without you doing anything but confirming through Face ID recognition. Today, you speak to your car’s microphone while driving and it calls someone from your contact list. Today, you are probably old-fashioned if you’ve never used Google Translate to process a sentence in another language, right?
In 2014, one of our clients (a leading worldwide provider of packaging) sought a solution to bring structure to their customer base. They reached out to Keyrus, who designed and developed the Customer Data Integration (CDI) tool.
Appropriate action is a combination of marketing automation and the personal touch of your frontline staff. Make it data-driven.
Data Science means running complex machine learning algorithms on ever-growing datasets. The promise to business stakeholders is to replace gut feeling and experience with objective, ever-improving algorithms. But is machine learning the only tool data scientists need to support business decision-making?
And why you still can’t replace your employees with software completely...
I have been asked a few times what exactly I do as a data scientist, and managers and potential data scientists especially are interested in the common struggles we data scientists have to deal with. Just listing all the issues we come across would not make for an interesting read, so I will present them in the form of an analogy you’re all familiar with: baking a cake.
You might have heard the saying “Data is the new oil.” This mainly refers to their potential value: in both cases, the value lies not merely in the raw product but in the way it is processed. In this article we present a commonly used classification of data analytics into descriptive, diagnostic, predictive, and prescriptive analytics. We’ll discuss each of these separately, including some commonly used methods, and then look at how these four types of data analytics relate to each other. First, however, we’ll explain what exactly we mean by data and analytics.
You’ve made it to the third and final part of our series ‘The human behind the data’. This one is all about (illusory) patterns and the importance of some good old probability theory.
In part 1 of this series you read all about the difficulty of staying objective when selecting the data you want to work with. Simpson’s paradox, multicollinearity, Robinson’s paradox, survivorship bias, and cherry-picking all showed how important your decisions as a processor of data are. In this second part we’ll show that you yourself can become data, which will seriously influence the outcome of your research, and how critical it is to choose the right measurement tool.
Humans are not very rational beings, even though we think we are. This impacts both our personal and professional lives, and the latter is of particular importance if you often work with data. If you work in business intelligence, data science, or any related field, people expect you to deliver an objective truth. In this article we’ll discuss many pitfalls that undermine this goal; knowing them will help you avoid these mistakes and spot them in other people’s work. Many of the topics covered involve pitfalls that can be classified as biases or paradoxes.