Well, I believe we are only seeing the tip of the iceberg! In the very near future, it will be common to request information on how to fix something, or to get it fixed, just by taking a picture of it. It will be common to go to your favorite supermarket and ask your questions to a robot that answers you in its own personalized, human-like voice. It will be common to use screens integrated into the front windows of retail stores to pick out the item you're looking for.
Some of you might say I exaggerate (in that case, e-mail me quickly, because you definitely need a serious introduction to the capabilities of AI), but for the rest of you, I would guess it's no surprise that all of this is just around the corner! And that's great, right?!
If there is one term behind all of this… it is Deep Learning! Of course, a purist would say there is also reinforcement learning… but let's not go there for now. Of course, a negative soul would say that not everything is standardized yet, not everything works smoothly yet, and not everything is ready to be industrialized. I tend to agree with all of that, but the pace at which this field is evolving is tsunami-like!
It has been a little over a year since I started my journey into the world of Deep Learning, and I would be thrilled to share with you the things I've learned, the things we've recently been working on, and the capabilities we have at #Keyrus.
First of all, if you are a technical person reading this: how do you get up to speed quickly and avoid that "everyone talks about it but no one really knows what it is about" feeling? Here are my three tips to make you a Deep Learning champion:
Andrew Ng's Deep Learning Specialization on Coursera: I finished it just last weekend, as I was discovering the results of our elections. In a nutshell, the before/after disruption was rather comparable!
'Deep Learning' book (Goodfellow, Bengio, Courville): Written by some of the greatest minds in the field, and acknowledged by our beloved Elon Musk, this is a book that can captivate both technical and non-technical readers and teaches you every foundation you need to address contemporary challenges.
CS231n Stanford class: all videos are available on YouTube. The class is oriented towards computer vision, but it simply gives you wings with Deep Learning! Many use cases are presented, and contemporary infrastructure challenges (hardware, programming frameworks, …) are also discussed.
OK, back to the main point I wanted to develop…
The way I prefer to look at Deep Learning is in terms of structured vs. unstructured data. A piece of structured data is, for example, a float64 number. You know (well, you might not remember at this very moment) what kind of number to expect when you see such a format, and you can be sure you'll be dealing with such a number because a specific format was enforced. Deep Learning was initially designed to address unstructured data, i.e. everything that is not structured data! And what do we want to do when we face unstructured data? Transform it, one way or another, into structured data.
Too cryptic for you? Well, in simple terms, we want to process images, sound, and raw text, and get something out of them! Something, for a computer, is generally a quantity that can be stored and that has a specific format… well, of course: structured data! Maybe you want to know whether a Twitter message contains your brand name and conveys a positive or negative sentiment? Or maybe you want to prevent fraud by running a face recognition algorithm on pictures provided by your clients?
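To make this concrete, here is a minimal sketch of that "unstructured to structured" idea: a raw tweet goes in, a small structured record comes out. The brand name, the word lists, and the `TweetRecord` type are illustrative placeholders of my own; a real system would use a trained Deep Learning model rather than keyword matching.

```python
from dataclasses import dataclass

# Toy sentiment lexicon -- purely illustrative, not a real model.
POSITIVE = {"love", "great", "awesome", "happy"}
NEGATIVE = {"hate", "terrible", "broken", "awful"}

@dataclass
class TweetRecord:
    """The structured output we extract from unstructured text."""
    mentions_brand: bool
    sentiment: str  # "positive" / "negative" / "neutral"

def structure_tweet(text: str, brand: str) -> TweetRecord:
    # Normalize the raw text into a bag of lowercase words.
    words = {w.strip(".,!?").lower() for w in text.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    sentiment = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return TweetRecord(mentions_brand=brand.lower() in words, sentiment=sentiment)

record = structure_tweet("I love my new Acme phone!", brand="Acme")
print(record)  # TweetRecord(mentions_brand=True, sentiment='positive')
```

The point is not the (deliberately naive) keyword rule, but the shape of the pipeline: free text in, a fixed, queryable record out.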
(Image: neural network created in Python using the ANN Visualizer library, which builds on Graphviz.)

Among other analytics-based cases, at #Keyrus we have recently been involved in several initiatives dealing with computer-vision-based Deep Learning, i.e. the transformation of images into structured data. I hope I'll find the time to write about this in a future post… but you can always contact me if you can't wait to hear about it.
Actually, since 2014 and the work of Goodfellow et al. on adversarial networks, another trend has been surging in Deep Learning… and that is generative modelling. What if what you are actually interested in is generating unstructured data from structured data?
Can you see the potential of such an approach? GAN stands for "Generative Adversarial Network": a Deep Learning model that takes structured data as input and outputs unstructured data. If we can go one way, we can go the other way too, right?
No ideas coming to mind? Well, imagine that, given a customer's structured data, you could create a semantically correct e-mail "out of the blue", or picture the new product you should focus on. Imagine that, given your internal structured data, you could formulate expert opinions as sentences for reporting purposes… Imagine that, given a specific supplier or customer on the phone, you could generate a customized answering machine?
Well, this is the time to think about it! As mentioned above, all these techniques exist today; people in the field are well aware of how powerful they are, and everyone is equally aware they will need refinement. However, you don't want to be the last to investigate these techniques just because they appear "black box" or "untrustworthy", as you might read on the internet… They are indeed somewhat complicated, and they require, and will always require, a substantial amount of investment… but the gains you stand to make in your customers' experience, and the added value you will bring to your business in terms of revenue increases, cost reductions, …, are just crystal clear!
In summary, can you deal with Deep Learning? Well, I would say: "Yes you GAN"! And if you need some help implementing your ideas, or just some thoughts on moving forward (learning experience, roadmap definition, …), well, you know who to contact, right?
Written by Francois Dehouck. Don’t hesitate to contact Keyrus for further information (email@example.com)