Interview with our CEO Philipp Descovich

Humai Technologies works with its industrial customers and partners to continuously enhance human abilities (human augmentation) through artificial engineers and assistants.

How did your story with Humai Technologies start?

In December 2012, Peter Pichler, the CEO of Berndorf AG, asked me if I could help him with a strategic planning exercise for a small software company they had invested in. The company indeed had no real strategy, but I quickly found out that the small team consisted of extremely talented technicians specialized in an exciting area: the combination of computer vision and augmented reality. During the strategy process, I fell in love with the potential that lay dormant within this great team. So, during the final strategy presentation, I proposed to Berndorf that I buy shares in the company and step in as CEO.

What’s the vision of Humai Technologies?

Our vision is to use artificial intelligence and visualization technologies to create solutions that help humans become more effective in their daily activities. We believe that what we do is create superpowers for humans.

Do you have real customers?

Of course, we do. We developed the first superpowers together with the Bosch Group and their Bosch Cognitive Services unit. The main focus was to improve the way spare parts are managed and ordered in large production or construction environments.

What is the actual potential in this environment, today?

There’s enormous potential. Thanks to cheaper and more powerful computing, especially in the area of graphics cards, computer vision and artificial intelligence have reached a solid maturity level. I would say the tipping point was sometime in 2015. After that, we were able to build use cases that solve the problem of accurately recognizing a specific component or product, e.g. “this is SAP article AN47-S3471i.” Before that, we were more at the level of “this could be a drill, or maybe a dog.” Processing speed made the difference, not just in terms of recognition performance but also in response times. An industrial field technician can’t wait 15 minutes for an answer; he needs to attend to the issue within a couple of seconds.

The superpowers we currently activate for our customers include capabilities like “know all your 100,000 spare parts” or “view inside your machine with x-ray eyes.” We are currently working with a large customer to develop the capability to “go back in time” based on their monitoring data. “Travel into the future” is the next big thing. Predictive and prescriptive maintenance are big buzzwords, but in this area we need to maintain realistic expectations. The entire field is still very much in a research phase. The main problem is the large amount of historical sensor data one needs to collect and annotate before one can even start to work on solid, reliable predictions. There’s also a human issue: the technology could advance much faster if companies started to share their error logs with others. But who really wants to share their problems with their competition? It would demand a complete change in the way the leading industry players think, and excellent technology to anonymize the data without losing valuable information.

Why is spare part recognition so difficult?

Over millions of years, our brains have become brilliant vision machines. The signals they receive from our eyes are quite weak compared to modern cameras, but the context-sensitive processing and the projection of expected results are incredible, and they allow us to be very good visual animals. For a computing system, getting even close to this kind of performance is very hard. In addition, many of the big platforms focus on so-called category-based recognition. They deliver results like “this is a drill, but it could also be a screw.” This kind of recognition is fine if you want to scroll through the different offerings on Amazon, but it’s not enough in an industrial environment. Actuator 2345 should under no circumstances be mixed up with actuator 2346. This level of detailed recognition is a much tougher nut to crack. Even the human brain is rather powerless at this kind of detailed recognition among thousands of parts.

Does this mean that these technologies will replace humans?

I believe that AI is not really about replacing humans. Let me give you an example. Autonomous driving is currently all over the news. At first, one would think that the driver of a school bus will lose his job sooner or later, because it very much looks like autonomous buses will one day be safer than human-driven ones. But then think of all the other tasks the bus driver performs that are not listed in his job description. He ensures a certain level of discipline on the bus, and he also has to make sure the bus is clean. In the case of an emergency or an accident, he will call for help and, more importantly, provide first aid. In other instances, he will help a kid who is being bullied by other kids on the bus. None of that will be covered by AI systems in the near future.

Coming back to the subject of Humai, why is Artificial Engineering now possible?

We believe that industrial engineering and maintenance environments are not that different from the bus driver case. Thanks to AI, we will soon have systems that predict an outage within the next 24 hours with very high precision. Today it takes humans weeks to compute such predictions. However, what about the reaction to the issue? Humans are just so much more flexible, context-sensitive and creative. I believe our “Artificial Assistants” will become better and better at showing highly relevant information at the right place at the right time, but in the end the decisions will be taken by humans.

What do you think in general about Artificial Intelligence?

Well, there is, of course, an ongoing discussion about this topic, and many people believe it’s only a question of time until superintelligent systems take over the planet. This has a lot to do with a lack of understanding of the realistic capabilities of these new technologies. If I showed an iPhone to a person from the 18th century, it would be considered an incredibly powerful, magical device of almost divine power: you can talk to people who are not present in the room, you can even see them, and you can create bright light without much heat. Even an incredibly smart person like Isaac Newton would probably think that the quirky-looking iPhone could also cook, make fire or heal diseases. The same is happening today in the AI field. Many people don’t know the limitations of our current AI technologies and exaggerate their capabilities. AI is a robust technical tool that will change our daily lives the way electric light or trains did. But it also has its constraints.

What’s the next step in your Artificial Assistant project?

We are in the middle of a project in which we are implementing a solid vision-based search solution for industrial applications. It allows you to get all the relevant information about a machine or machine component that you scan with your cell phone, without having to call suppliers or look through thousands of PDF documents. The other big step ahead of us is the “look into the past,” to better understand how and when errors and downtime actually evolved. The third big step, currently being implemented, is “automated documentation,” because documentation is a major weak spot in many production and maintenance environments.

What’s the next step for Humai?

By the end of 2019, we will open our new office in Frankfurt/Bad Homburg to better service the German market and by the end of 2020 we want to be ready to jump across the big pond.

