
On Artificial Intelligence

30.11.2017
Markus Säteri, Head of AI


We don’t have it. 

Yet.

What we have are narrow implementations of specific areas of Machine Learning (ML) and other fields where the playground has boundaries. To make use of deep learning and other ML algorithms we need to feed them data - huge amounts of data. We can let the machines learn by themselves (unsupervised ML) and they will come up with connections we didn't realise existed, but only we can say whether those connections are relevant to what we are trying to achieve.
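As a minimal sketch of that idea (the customer records, the features and the cluster count below are made up purely for illustration), an unsupervised algorithm such as k-means will happily group the data into clusters - but only we can say whether the grouping it finds actually matters:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer records: [monthly spend, support tickets filed]
customers = np.array([
    [520, 1], [610, 0], [80, 7], [95, 9], [550, 2], [70, 8],
])

# Unsupervised learning: k-means finds structure without being given labels.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)

print(kmeans.labels_)           # e.g. [1 1 0 0 1 0]
print(kmeans.cluster_centers_)  # the machine found a split; only we can say
                                # whether "big spenders vs. heavy ticket filers"
                                # is relevant to what we are trying to achieve
```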

The capabilities that we have now are recognised as Artificial Narrow Intelligence (ANI) as opposed to Artificial General Intelligence (AGI). 

For ANI to get us good results we need to concentrate on providing good, clean data for the ML processes. Enterprises understand just how enormous a task this is. To be most effective, an ML process requires curated ontologies and trained analysts to recognise the emergent patterns the ML processes come up with. This is especially hard with Convolutional Neural Networks (CNNs), as CNNs are trained to find patterns, not to explain why they find them. This branch of research is called Explainable Artificial Intelligence (XAI). The research is critical because in the end a human must take responsibility for the AI's decisions.
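To make the gap concrete, here is a rough sketch (the tiny untrained network and the random "image" are placeholders, not anything from a real workload): the CNN produces a class score with no explanation attached, and a gradient-based saliency map is one of the simpler XAI-style techniques for asking which input pixels that score actually depends on.

```python
import torch
import torch.nn as nn

# A tiny CNN stand-in: it maps an image to class scores, but gives no account
# of *why* it picked a class - the gap that XAI research tries to close.
class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN().eval()
image = torch.rand(1, 3, 64, 64, requires_grad=True)  # dummy "photo"

# Gradient-based saliency: how sensitive is the winning class score
# to each input pixel?
scores = model(image)
scores[0, scores.argmax()].backward()
saliency = image.grad.abs().max(dim=1).values  # (1, 64, 64) heat map
print(saliency.shape)
```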
 

Figure 1 - XAI Concept (David Gunning, DARPA)

New technologies are now emerging that automate much of the real-time creation and updating of ontologies. Enterprises are scrambling to build data lakes with defined ontologies, which let them apply ML in areas that were previously inaccessible and compare relations between events across domain silos.
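As a small sketch of what comparing events across silos can look like once data shares an ontology (the EX vocabulary and the sales/support events below are invented for illustration), a graph store lets a single query span both domains:

```python
from rdflib import Graph, Namespace, RDF

# Hypothetical mini-ontology: the EX terms are made up for this example.
EX = Namespace("http://example.org/enterprise#")
g = Graph()

# Events from two domain silos, described with shared terms.
g.add((EX.order42, RDF.type, EX.SalesEvent))
g.add((EX.order42, EX.customer, EX.acme))
g.add((EX.ticket7, RDF.type, EX.SupportEvent))
g.add((EX.ticket7, EX.customer, EX.acme))

# One SPARQL query crosses both silos via the shared "customer" relation.
q = """
SELECT ?event WHERE {
    ?event <http://example.org/enterprise#customer>
           <http://example.org/enterprise#acme> .
}
"""
for row in g.query(q):
    print(row.event)   # prints both the sales order and the support ticket
```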

AI start-ups have realised the market value of this. There are several platforms that promise integration with customers' systems and provide domain-specific metadata models that can be used with some customisation. The platforms offer graphical user interfaces for data aggregation and for building ML playgrounds, which can reduce the amount of work required. They can be queried through a voice interface and report back in natural language generated by the system, alongside visualised reports.

We are moving towards a world of semantically defined data and resources, where we can declare what edge computing resources are available and which ones we want to associate with. The data must be readable and understandable by both humans and machines. There will be ecosystems built upon contractual agreements on how to expose and use computing and data resources. They will provide content and services in exchange for a slice of your processing time or bytes of your memory.
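One hypothetical example of such human-and-machine-readable data, in the JSON-LD style (the vocabulary URLs and the edge-node details are made up):

```python
import json

# Plain JSON a human can read, plus an @context that lets machines map the
# keys onto a shared vocabulary. The example.org vocabulary is fictional.
resource = {
    "@context": {
        "name": "http://example.org/edge#name",
        "cpuCores": "http://example.org/edge#cpuCores",
        "pricePerHour": "http://example.org/edge#pricePerHour",
    },
    "@id": "http://example.org/edge/node-17",
    "name": "Corner-shop edge node",
    "cpuCores": 4,
    "pricePerHour": "0.002 ETH",
}
print(json.dumps(resource, indent=2))
```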
 
One interesting project is Golem. Right now it can do CGI rendering in a Peer-to-Peer (P2P) network, but much, much more is promised. Payments are made in Ethereum (a cryptocurrency implementation). Other platforms worth mentioning are SONM, iExec and SingularityNET. It won't be long before our mobiles are computing providers as well. This is closer than we think, as the University of Waterloo announced a couple of weeks ago: in essence, they are creating deep learning software compact enough to fit on our mobile phones.

Investment in Artificial Intelligence is booming. Looking at research papers on Deep Learning (the hottest ML approach) alone, it seems that China is leading the charge, with the U.S. in second place and Europe lagging behind.

 

Chart: which countries are leading AI development (by Deep Learning research papers)

 

Still, AGI is far away. What we have now is Augmented Intelligence - machine-assisted abilities to make better decisions based on the data that we have provided.