August 7, 2020

Christopher Yeo is no stranger to the tech start-up scene in Singapore. Having successfully exited from one of his two former companies, the math-loving, self-professed software lover now plans to conquer the world with his latest AI start-up, Sentient.

Since founding the company three years ago, Christopher has established a data marketplace (“data alliance”) that allows companies to trade and gather insights from highly granular data. His ambitions don’t end there. In the next six months, Christopher intends to launch a portal that serves the needs of end business users as well as the mass market, at an affordable price. This new feature has gotten the attention of investors, and to date, the company has raised more than US$5 million in funding.  

With the resources and a talented pool of almost 40 employees, Christopher is ready to lead Sentient into its next phase of growth—an overseas expansion into countries like Japan, Indonesia and the US. Over Zoom, we spoke with Christopher about his struggles as an entrepreneur, the market opportunities in AI and his eventual plans for an IPO in Tokyo.

High Net Worth: Your previous companies iFusion and Comex were both logistics software start-ups. Why AI as your third venture?

Christopher Yeo: During my undergraduate and PhD days, I was always fascinated by ‘expert systems’ and ‘fuzzy logic systems.’ These terms, which you hardly hear nowadays, were the genesis of AI. The only problem was that during my start-up days in the 1990s, AI algorithms were just too complex for the computers of the day. But we did implement elements of AI in our logistics software. We automated many processes, optimised logistics flows, and even found the cheapest routes for shipping.

Today, we have the machinery to implement AI on a large scale, especially when the algorithms have matured and evolved. Of course, we are not the first company in Singapore or the world. But what inspired me was the effect that AI has in terms of helping people be smarter and making lives better. The journey from the first start-up to today has seen the evolution of the internet, the evolution of GPUs, and the evolution of computer chips. In 2017, I thought the time was right for AI to benefit everyone, whether they are rich or poor. And can we build systems that fundamentally make our lives better? That was how Sentient was conceived.

What were some of the challenges that you faced when you started Sentient?

The challenge has always been whether the solutions we are trying to build are something that customers want. We tested market acceptance by building a minimum viable product, and further calibrating it to suit the needs of customers. Although most people roughly understand what ‘Artificial Intelligence’ means, unfortunately, not many know how to apply it in reality. At Sentient, we have the arduous task of determining whether our customers’ problems can be solved with AI solutions—and that is not always easy.

What can Sentient help with specifically?

There are four key areas we focus on and do well in. And they are very large areas by themselves. The first area is computer vision. This involves writing algorithms to understand the visual world, recognising cats and other everyday objects. The second area is natural language processing (NLP), a topic that is very close to my heart. NLP allows computers to process large amounts of natural language data. Now, our computer algorithms can read articles and documents, and understand topics based on keywords and concepts.

The third area, machine translation (MT), is a natural extension of NLP. How do you translate speech to text? This is where MT comes in, and we have voice technologies as well. Voice technologies are also quite important because there are more than two billion people today in this world who are still not very literate. They can’t use smartphones, so how do they benefit from AI then? They can use ‘dumbphones’ and speak to the ‘dumbphones.’ If we have AI-powered ‘dumbphones’ that understand speech and can respond, then they become very useful. 

The fourth area is tackling huge amounts of data and making sense of it. In the last ten years, big data has been such a hot term. Today, that term is eclipsed by other terms such as ‘autonomous driving.’ But big data is still fundamental. We want to develop expertise in these four areas over the next 10 or 20 years.

Can you elaborate on computer vision, and how you would translate speech to text in the local context?

Recently, we implemented computer vision algorithms for a customer in Japan to help them detect manufacturing defects. We tried it and it worked very well. Going forward, our computer vision models will be able to recognise more complex objects like rambutans, street signs, tourist destinations, buildings and landmarks. For speech solutions, there is Alexa, Google Assistant, and many other tools that interpret speech. However, these tools have failed in the local context, especially in Singapore. Why? Because locals in Singapore do not speak with a Caucasian accent. Therefore Siri, which has been trained with data collected from the US and UK, cannot interpret local speech. For that, you need data collected from within our own shores. IMDA has amassed more than 2,000 hours of speech from different segments of Singapore society, and we partnered with them to use local voices to train our speech model. Now our speech model understands “Tampines,” “Sengkang,” “Alamak,” “Nasi Lemak” and other local names and colloquialisms. We rolled that out with another customer and it worked very well.

How do your speech algorithms compare to others in the market?

We feel that the performance of our speech algorithms in the local context is better than those that you use off-the-shelf from Google or Microsoft. Our next challenge is to replicate what we have done in Singapore in other cities like Jakarta and Bangkok.

What is Sentient’s value to software developers?

Essentially, we have created a platform that enables software developers who are not skilled in AI or data science to use it to create AI-driven solutions. This platform will have pre-packaged AI in there, almost like a supermarket for complex AI algorithms made simple. If you are a software or mobile developer who wants to develop a chatbot with AI features, you could use our API for that. We have made it very easy for software developers to start doing innovative things without being constrained by the lack of AI knowledge. 
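To make the "pre-packaged AI" idea concrete, here is a minimal sketch of how a developer with no AI background might call such a service. Everything here is invented for illustration—the endpoint, field names and auth scheme are assumptions, not Sentient's actual API:

```python
import json
import urllib.request

# Hypothetical endpoint and payload shape -- the real Sentient API
# names and fields are not documented in this interview.
API_URL = "https://api.example.com/v1/nlp/intent"

def build_intent_request(utterance, api_key):
    """Package a chatbot utterance for a pre-trained NLP microservice."""
    payload = json.dumps({"text": utterance, "lang": "en"}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # illustrative auth scheme
        },
        method="POST",
    )

# The developer only sends text and reads back an intent label --
# no data-science expertise needed on their side.
req = build_intent_request("Where is my parcel?", "demo-key")
print(req.get_full_url())
```

The point of such a platform is that all the model training and serving complexity sits behind a single HTTP call.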

I was looking through the website and found the Knowledge Graph within ‘AI and data’ quite interesting. Would it be possible to give you a bunch of information, and have it find undiscovered relationships within the data that are not obvious?

Our algorithms happen to do that very well! For example, how do you relate cars, Nissan, Kia, and Hyundai together? Humans can do it very quickly but machines can’t. So we have written the algorithms to do that. When you start feeding it documents or newspaper articles, it can start relating things together. The underpinning structure is called a ‘Knowledge Graph.’
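The idea Christopher describes—relating Nissan, Kia and Hyundai through the concept of a car—can be sketched in a few lines. This is a toy illustration of a knowledge graph as subject–relation–object triples, not Sentient's actual implementation:

```python
# Minimal knowledge-graph sketch: entities as nodes, typed edges as
# (subject, relation, object) triples. Invented data for illustration.
triples = [
    ("Nissan", "is_a", "car brand"),
    ("Kia", "is_a", "car brand"),
    ("Hyundai", "is_a", "car brand"),
    ("car brand", "makes", "cars"),
]

def related(entity_a, entity_b, triples):
    """Two entities are related if they share a neighbour in the graph."""
    def neighbours(e):
        return ({o for s, r, o in triples if s == e}
                | {s for s, r, o in triples if o == e})
    return bool(neighbours(entity_a) & neighbours(entity_b))

print(related("Nissan", "Hyundai", triples))  # both link to "car brand" -> True
```

Feeding documents into such a system means extracting triples from text automatically; the graph structure then lets the machine surface relationships that were never stated in any single document.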

I understand that Singapore’s Singstat and GovTech provide publicly available data sets. What is the value add of ‘data alliance’? Are they private data sets? And could you give an example of a data set that data alliance provides which is not in the public domain?

Open data sets may be aggregated but sometimes in machine learning, you need very granular data. My machine cannot learn well if it relies on aggregated data. Let me give you an example of granularity—if we want to know the mobility of people in Orchard Road, open data sets may show the number of people in a mall in Orchard Road. On the other hand, granular data will include more detailed information such as the date and time. 
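The difference between aggregated and granular data can be shown with a few invented footfall records (the malls are real places named in the interview; the numbers are made up purely for illustration):

```python
from collections import Counter

# Illustrative footfall records (mall, date, hour) -- invented data.
# Each record represents one observed visitor.
records = [
    ("ION Orchard", "2020-08-01", 14),
    ("ION Orchard", "2020-08-01", 14),
    ("ION Orchard", "2020-08-01", 19),
    ("Plaza Singapura", "2020-08-01", 14),
]

# Aggregated view: total visitors per mall (what an open data set may show).
aggregated = Counter(mall for mall, _, _ in records)

# Granular view: visitors per mall, date and hour -- the level of detail
# a machine-learning model needs to learn time-of-day patterns.
granular = Counter(records)

print(aggregated["ION Orchard"])                     # 3
print(granular[("ION Orchard", "2020-08-01", 14)])   # 2
```

A model trained only on the aggregated counts cannot learn when people visit; the granular counts preserve exactly the date-and-time signal Christopher mentions.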

Private data sets are the other angle that you have rightfully pointed out. Take an e-commerce site for instance. A lot of the data collected from customers is not available openly. The e-commerce site may decide to monetise the data collected from their users. If you want to know how many people in Singapore buy handphone covers online, that’s private data that they could provide for a fee. Hence, data alliance and our AI microservices were created to address that need. In our research, our data scientists need the data anyway. Making this service available is just another step we take to benefit others.

I understand the data that a company has can be traded on your platform in a controlled manner. How do you deal with the legality of the data alliance marketplace? 

For the moment, we decided not to put personal information in data alliance. All the data sets on data alliance have to be scrubbed, either by us or the data provider. We want to scrub away even things like mobile phone numbers, the last four digits of the NRIC, and any information that may identify a person. None of those will be in the data set. That takes care of one whole aspect of data privacy. We have spent a year developing legal contracts to protect the integrity of our data sources and the use of the data from data alliance.
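The scrubbing step can be sketched as a simple pattern-based redactor. The patterns below are simplified illustrations of the identifiers mentioned (Singapore mobile numbers and NRIC numbers); a production pipeline would be far more thorough:

```python
import re

# Toy scrubber for the identifiers mentioned in the interview.
# Simplified patterns, for illustration only.
MOBILE = re.compile(r"\b[89]\d{7}\b")        # SG mobile: 8 digits, starts 8 or 9
NRIC = re.compile(r"\b[STFG]\d{7}[A-Z]\b")   # e.g. S1234567A

def scrub(text):
    """Replace phone numbers and NRICs with placeholders."""
    text = MOBILE.sub("[PHONE]", text)
    return NRIC.sub("[NRIC]", text)

print(scrub("Call 91234567, NRIC S1234567A"))
# -> Call [PHONE], NRIC [NRIC]
```

Real de-identification also has to handle names, addresses and combinations of fields that only identify a person jointly, which is why a human data provider or reviewer stays in the loop.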

Could you describe the process of buying and selling data on ‘data alliance’?

First of all, there is a compulsory KYC (Know Your Customer) process for buyers and sellers. We need to know that these entities that want to buy or sell are legitimate. Next, they sign up, obtain an ID and explore the data marketplace. Once they select the data that they want, they can click ‘buy’ and the sale takes place immediately. In other cases, when somebody clicks ‘buy,’ the seller wants to know who the buyer is. In other instances, when the data providers are not sure how much to sell the data for, they may negotiate with the buyer. These are the three ways of selling and buying data.

Can you also talk more about how ‘co-lab’ works and what is the process of onboarding a partner that you would work with?

Co-lab is a simple methodology for enterprises to discover how AI can help them in their organisation. It is a four-step process. The first step is conducted in a half-day or two half days, where executives from an organisation come in for a brainstorm and list all the organisational problems they face. In step two, my team of scientists and consultants will work together with the enterprise executives to decide which problem statements can be solved by AI. In the third step, we build the prototype or proof of concept. Lastly, if the prototype works well, we may pilot it across more users in the company. This is a collaboration between Sentient and the client, which is why we call it ‘co-lab.’

Do clients that ‘co-lab’ with you retain intellectual property rights? 

In some cases where the solutions are very unique to them, then they would have foreground intellectual property (FIP). They get to keep aspects of the IP for themselves. If we already have a ready-made solution, obviously that IP is still ours. Usually, it is a combination. They use some of our IP and we build some things for them that they can keep for themselves.

If I want to start a company selling flowers, and make use of foot traffic data and other information to maximise revenue, can Sentient help to gather this data and obtain actionable insights from it?

Yes, we can do that. My team has studied foot traffic for Orchard Road. We have managed to get detailed footfall traffic from telco cell phone towers, without compromising the privacy of the individual. For example, we can plot the movement of people across the whole Orchard Road belt, from ION Orchard all the way to Plaza Singapura. We know the various times when people congregate and the various times they are moving from one area to another. Our proprietary algorithms can help you identify the best location to open a flower shop. I could even give you seven areas where people would likely be during weekends or special days like Valentine’s Day.

I suppose that the footfall traffic data is outside of the shopping centres? What about inside the shopping centre?

Telco companies have base stations inside a building which allow us to calculate the positions from those base stations when a phone call is made. Hence, we can get footfall traffic within a building itself.

Has this been rolled out?

We have been working on that data for about six months and we are very confident of our algorithms. Over the next six months, retailers and other companies who are interested in human traffic should be able to access our services and gain insights from them.

Is Sentient the first to do this?

We are not the first to do it, but I would say that the way we are doing it in terms of using algorithms is unique. We believe our algorithms show a slightly higher level of accuracy than most in predicting the movement of human traffic from one point to another. But more importantly, our business model offers such valuable data to any retailer at an affordable price. Typically, such projects cost hundreds of thousands of dollars, but I’m trying to get the cost down to less than a hundred dollars.

Do you see the big tech giants as competitors?

No. Sentient has become a very close partner of the big technology companies such as IBM, Google, and very soon, Microsoft and Amazon as well. Our relationship with these big tech companies is strategic. We want to partner with them and not compete with them. By working closely with them, we will be able to understand what niche areas we can go into, which they might not want to. Our main value and key advantage here is that we focus on anything local and domain-specific, because there is a large unmet need in these areas not served by the tech giants.

What are some companies that Sentient has worked with?

One local company is Mediacorp, and they have used our technologies to subtitle their local dramas, and as a result, have become more productive than before. They continue to be a very important customer of ours. There is another customer in Nagoya, Japan, called Daido. They are a steel manufacturing company and we implemented a manufacturing defect identification system for them. Hopefully, within the next two to three months, we would have a few more customers who are willing to share their success stories with us. 

You also offer ‘AI microservices.’ How does that help businesses?

Once an algorithm is fully trained with a reasonable level of accuracy of, say, 90 to 97 per cent, we package that algorithm into a ‘container.’ Well, that’s a technical term. The nice thing about these tech ‘containers’ is that they can run on any server (AWS, Google Cloud, etc.). These little containers are packaged as microservices. Now, microservices are transportable and compact. They fulfil one purpose only, such as recognising street signs. We could have another that recognises supermarket items, so that is a different microservice, a different ‘container’ altogether. Over time, we have created 41 microservices, each doing very specialised things. Eventually, we want to build thousands of microservices.
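The "one purpose per microservice" design can be sketched as a registry of small, single-purpose callables behind stable names. The service names and stub implementations below are invented for illustration; in practice each entry would be a containerised model behind its own endpoint:

```python
# Sketch of single-purpose microservices behind a name registry.
# Names and implementations are invented for illustration.
SERVICES = {}

def microservice(name):
    """Register a function as a named, single-purpose microservice."""
    def register(fn):
        SERVICES[name] = fn
        return fn
    return register

@microservice("street-sign-recogniser")
def recognise_sign(image_id):
    return {"service": "street-sign-recogniser", "input": image_id}

@microservice("supermarket-item-recogniser")
def recognise_item(image_id):
    return {"service": "supermarket-item-recogniser", "input": image_id}

# A caller only needs the service name, not the model behind it --
# the same way a containerised microservice exposes one endpoint.
result = SERVICES["street-sign-recogniser"]("img_001.jpg")
print(result["service"])
```

Because each service does exactly one thing, a trained model can be swapped out or redeployed on any cloud without touching the callers, which is the portability Christopher describes.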

Currently, how big is your team?

We have 37 people—about eight of them in India, one in Australia and the rest in Singapore. Those are the areas where our scientists and software engineers live. But our markets are Singapore and Japan. Hopefully, by Christmas, we may find the first Indonesian project as well.

Having built three companies, how do you find talent? 

Over the years, what is most important is to find people that you like and enjoy working with. Some of my team members have been with me for more than ten years, since the days of iFusion. And they form my core talent pool. When you have a core talent pool, the DNA will attract others into that same mould.

Which are the next markets you are intending to launch in?

Of course, we will be launching in Singapore. And the next is Tokyo and hopefully, Jakarta this year. Although these are the markets that we are going to focus on, Sentient is a global platform. If you are a programmer from Brazil, you can use it too. However, the local specifics are not there yet.

Apart from that, what are the key milestones to reach in the next few years?

We would love to increase the number of customers across Japan, Singapore and Indonesia by the end of the year. Our initial plan before Covid-19 was to target the US next year. However, with the state that the US is in, we are not sure how soon we can make that happen. But there are still other alternative markets that we are very interested in going into, like India. Eventually, we do want to list the company in Japan, but with the ongoing pandemic, it might be in late 2022 or 2023.

Why an IPO in Tokyo? And not in Singapore or the US?

We have studied the valuations of AI companies at a few of these stock exchanges. Tokyo’s valuation or market capitalisation for AI companies is pretty high, despite the coronavirus pandemic. Currently, the US market is extremely high, which I think is now detached from the fundamentals of the economy. We don’t know whether there is a fundamental correction on the way in the next few months. Singapore and UK trading volumes and valuations are not doing well at all. Hence, Tokyo is an attractive place to list.