With the advent of 5G, there will be significant changes in how industrial companies collect data. Data collection in industrial IoT (IIoT) will increase with the adoption of 5G, and with data storage cheaper than ever (the first 50 TB per month in AWS S3 costs less than 3 cents per GB), more companies will have terabytes, if not petabytes, of data. With lower latency, lower data charges, and greater reliability, 5G will enable more data collection and processing across the operational aspects of an industrial company, from transportation and logistics to the factory assembly line and the delivery of units and services. Although the opportunities with 5G are endless, its first users in 2019 will likely be critical industries such as medicine and banking. As the 5G network expands, IIoT deployments on existing 4G connections will be able to migrate to 5G. In the next couple of years, as 5G becomes widespread, IIoT will be able to leverage its low latency and high reliability to enable real-time, higher-quality data streams from its assets.
With advances in network architecture and more industrial companies embracing digital transformation of their business, image and video data streams can be expected to grow, and with them the number of data analysis projects in that domain. The growth and proliferation of deep learning algorithms and applications will also fuel image and video processing. There may also be models and algorithms that work better in the IIoT domain, specifically tuned to the visual patterns required to identify faults on the factory floor.
As connectivity to the cloud increases, I predict 2019 will bring more willingness to experiment with this newly collected data. As the data and its possibilities become better understood, I predict companies will focus on how to improve data collection. For example, small changes, such as collecting different types of data, adjusting the frequency of data collection, and transforming data differently, can have a huge impact on actionable insights. These changes will lead not only to a better understanding of the business but also to predictive analytics. With these changes, companies will start to see how their connectivity to the cloud is improving their business as they put these actionable insights into motion.
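To make the point about collection frequency concrete, here is a minimal Python sketch; the sensor readings and window size are invented purely for illustration. The grain at which raw samples are aggregated changes what any downstream analysis can see:

```python
from statistics import mean

# Hypothetical sensor stream: 3 minutes of 1 Hz temperature samples.
readings = [20.0 + 0.01 * i for i in range(180)]

def resample(values, window):
    """Aggregate raw samples into coarser-grained windowed averages."""
    return [mean(values[i:i + window]) for i in range(0, len(values), window)]

# Downsample from per-second to per-minute grain.
per_minute = resample(readings, 60)
print(per_minute)  # three per-minute averages showing a slow upward drift
```

A per-minute average smooths away second-to-second noise but would also hide a transient spike, which is exactly why the choice of collection frequency and transformation matters for the insights that follow.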
The confluence of increased computational power and access to enormous datasets has pushed many algorithmic techniques to the point where, in many cases, machine learning can achieve prediction accuracy greater than the average human. Consider the latest results in object detection and image classification in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The average human achieves about 95% accuracy in this challenge; since 2015, the best machine learning techniques have met or exceeded it. New enhancements to machine learning algorithms, particularly in deep learning, have driven most of these successes, although it is worth noting that many of these methods are based on ideas and theoretical work from as early as the 1950s, made practical through substantial increases in computing speed and the ready availability of large amounts of data.
I predict that the coming year will see the same attention turned to data engineering and transformation techniques. Pure algorithmic development will plateau for a period, since even the best algorithm is still dependent on the quality of the data used to train it. Particularly in IoT, the ease with which data can be collected and stored has resulted in an ocean of data bereft of application. Collecting data from remote devices, with all the attendant problems such as missing data, improperly labeled data, and data at an unusable grain, creates difficulties that even the most elegant machine learning algorithm struggles to solve. I believe, then, that advances in feature engineering and data transformation will have the greatest impact on machine learning and analytics, even more so in IoT.
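One of the attendant problems mentioned above, missing data from remote devices, has a classic transformation fix. The sketch below (values and function name are hypothetical; real pipelines would also handle leading/trailing gaps, timestamps, and label quality) fills interior gaps in a telemetry series by linear interpolation:

```python
def interpolate_gaps(series):
    """Fill interior runs of None by linear interpolation between known points."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            if 0 < i and j < len(out):  # interior gap with both endpoints known
                lo, hi = out[i - 1], out[j]
                step = (hi - lo) / (j - i + 1)
                for k in range(i, j):
                    out[k] = lo + step * (k - i + 1)
            i = j
        else:
            i += 1
    return out

print(interpolate_gaps([1.0, None, None, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```

Simple as it is, a transformation like this often moves the needle more than swapping in a fancier model, which is the heart of the prediction above.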
Deep learning algorithms will become more popular and gradually replace traditional machine learning algorithms, thanks to their applicability to a wide range of problems, from simple classification to complex NLP and image recognition tasks. Training larger neural networks has become possible thanks to GPUs and will get even easier with the general availability of dedicated processing units, such as Google's TPU, Microsoft's FPGAs, and Amazon's anticipated AI chip. In addition, the feature learning capability of deep neural networks makes them an attractive foundation for automated machine learning platforms.
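As a toy illustration of the feature learning that sets neural networks apart, the pure-Python sketch below trains a tiny 2-4-1 sigmoid network on XOR, a function no linear model can represent. The architecture, learning rate, and epoch count are arbitrary choices for illustration; real workloads would use a GPU- or TPU-backed framework as described above:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: linearly inseparable, so the hidden layer must learn useful features.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 4  # hidden units
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

initial = total_loss()
for _ in range(5000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)  # output-layer delta
        for j in range(H):
            dh = dy * w2[j] * h[j] * (1 - h[j])  # hidden-layer delta
            w2[j] -= lr * dy * h[j]
            for i in range(2):
                w1[j][i] -= lr * dh * x[i]
            b1[j] -= lr * dh
        b2 -= lr * dy

print(initial, total_loss())  # training error shrinks as features are learned
```

No hand-crafted features are supplied; the hidden layer discovers its own intermediate representation during training, which is the capability that makes deep networks attractive for automated machine learning platforms.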