AI has become the most popular technology today and, although few may really know what it is, everyone is interested in its progress. To keep readers up to date, I decided to review some of the recent announcements.
Let's start with how AI has already impacted process work. AI is used in data mining applications, and specifically in the process mining tools that are now popular. In essence, the AI examines event data from an application, develops a picture of the activities involved, and builds a model of how information passes from one activity to another. This can be very useful if you have an existing application that you don't understand very well and want to improve. It isn't very useful if you are seeking to develop new process designs.
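For readers who like to see the idea in concrete terms, here is a minimal sketch of what a process mining tool does under the hood. The event log, case names, and activities below are invented for illustration; real tools work from much larger logs exported by the application, but the core step is the same – recovering which activity directly follows which.

```python
from collections import defaultdict

# Hypothetical event log: (case id, activity, timestamp) records
# exported from an existing application.
event_log = [
    ("order-1", "Receive Order", 1), ("order-1", "Check Credit", 2),
    ("order-1", "Ship Goods", 3),
    ("order-2", "Receive Order", 1), ("order-2", "Check Credit", 2),
    ("order-2", "Reject Order", 3),
]

# Group events by case and sort by time to recover each case's activity sequence.
cases = defaultdict(list)
for case_id, activity, ts in sorted(event_log, key=lambda e: (e[0], e[2])):
    cases[case_id].append(activity)

# Count "directly-follows" pairs: the raw material for a discovered process model.
follows = defaultdict(int)
for trace in cases.values():
    for a, b in zip(trace, trace[1:]):
        follows[(a, b)] += 1

for (a, b), count in follows.items():
    print(f"{a} -> {b}: {count}")
```

From counts like these, a tool can draw the picture of the process – which activities occur, in what order, and how often each path is taken.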
The other popular AI-derived process tool is commonly termed Robotic Process Automation (RPA) – an awful name, since no robotics is involved at all, but that's marketing. RPA tools observe existing sequences of activities that are performed manually and then automate whatever parts of the sequence can be automated. This is a nice way to capture repetitive procedures that are performed with computers and one or more software applications or databases. In fact, this is a pretty trivial use of AI, but it helps get trivial procedures automated, and that's fine.
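To make that concrete as well: the discovery side of an RPA tool is essentially looking for action sequences that a person repeats over and over, since those are the candidates for scripted automation. A toy sketch (the recorded actions and the fixed window length are invented for illustration) might look like this:

```python
from collections import Counter

# Hypothetical recording of a clerk's actions, as an RPA tool might capture them.
recorded_actions = [
    "open_invoice", "copy_total", "open_spreadsheet", "paste_total", "save",
    "open_invoice", "copy_total", "open_spreadsheet", "paste_total", "save",
    "check_email",
    "open_invoice", "copy_total", "open_spreadsheet", "paste_total", "save",
]

# Count how often each fixed-length window of actions repeats; frequently
# repeated windows are candidates for scripted automation.
window = 5
candidates = Counter(
    tuple(recorded_actions[i:i + window])
    for i in range(len(recorded_actions) - window + 1)
)

sequence, count = candidates.most_common(1)[0]
print(f"Repeated {count} times: {' -> '.join(sequence)}")
```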
The really important impact of AI on processes occurs when AI is used to create new applications – for example, an application that analyzes lots of data and proposes specific actions for managers. In this case one needs to create a new application, or revise an existing one, to incorporate a major new type of automation. An easy example might be the decision to eliminate drivers and use automated cars to deliver parts to stores. That would require a completely new set of processes to handle all the changes entailed in the use of automated cars. These types of applications and the accompanying process changes are occurring at a steady pace now, and they will grow rapidly as organizations figure out the best uses of AI for automating activities that have not previously been automated.
Shifting from existing uses of AI that require business process efforts to help with implementation, let's consider some more generic developments in the AI market.
In February, a group of researchers from Microsoft, Google, Samsung, and Qualcomm, and from various universities, will meet in San Jose, California to talk about the problems of putting AI on microprocessors that include sensors. The event is called the Tiny ML Summit (ML = machine learning; see spectrum.ieee.org/tinymachinelearning-jan2020). This is a rather technical way of talking about the fact that AI is going to play a big role in the Internet of Things. Increasingly, chips will be embedded in all kinds of devices. Those chips will have sensors and the ability to analyze events and then report their findings to systems that provide consumers or companies with vital information that can be used in managing processes, be they company logistical planning or helping a consumer manage his or her household.
As process planners become more familiar with these new chips and what they can do, they will gradually change how they think about the design of business processes. Increasingly, at each step of a process, one will ask: could the client make a better decision at this point if he or she had information about x, y, or z? In the past, the answer might well have been, “yes, but how would we get that information?” Today the answer will be to think about what would be involved in placing a tiny sensor in one or more devices that can provide that information. How can we know if the coffee is ready? We put a chip in the coffee maker to report on its status. How do we know if a package has reached the port of LA? We put a chip in the package that sends its current geographic location to a satellite that we monitor, and so forth. Or simply imagine all your various gadgets with chips that report their location to your smartphone on demand. No more lost gadgets.
Interestingly, today's models of how such sensors would be used suggest that they will actually reduce the amount of data gathering that takes place. This would occur if we got just the right information when we needed it. Thus, rather than have people go out, check on what packages are in warehouses, and send in reports, we simply let the packages send us information when they arrive at specific geographic locations.
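To make the idea concrete, imagine the package's chip only transmitting when it crosses a geofence around the port. The sketch below is purely illustrative – the coordinates, radius, and rough distance check are assumptions for the example, not how any particular tracking system works – but it shows the event-driven pattern that keeps data volumes down: nothing is sent until the event of interest occurs.

```python
import math

# Hypothetical geofence around the port of LA (coordinates are illustrative).
PORT_LA = (33.73, -118.26)
RADIUS_KM = 5.0

def within_geofence(lat, lon, center=PORT_LA, radius_km=RADIUS_KM):
    """Rough flat-earth distance check; good enough for a coarse arrival trigger."""
    km_per_deg = 111.0
    d_lat = (lat - center[0]) * km_per_deg
    d_lon = (lon - center[1]) * km_per_deg * math.cos(math.radians(center[0]))
    return math.hypot(d_lat, d_lon) <= radius_km

def maybe_report(package_id, lat, lon):
    # Only transmit when the event of interest occurs -- instead of
    # streaming every position reading back to a central system.
    if within_geofence(lat, lon):
        print(f"{package_id} has arrived at the port of LA")

maybe_report("pkg-42", 34.05, -118.24)   # en route: nothing sent
maybe_report("pkg-42", 33.74, -118.27)   # arrival: one message
```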
Sticking with innovations in hardware, Cerebras Systems has developed a very large chip that processes machine learning applications much faster. Cerebras' largest chip, the Wafer Scale Engine, is about 50 times the size of a conventional computer chip. In tests, the chip has been shown to train neural networks in hours on tasks that previously required months. Recall that neural network applications begin life by analyzing specific situations. The information is fed in and then the trainer tells the application: “That was a smart decision,” or “That was a bad decision.” After lots of correct and incorrect examples have been fed to the network, it develops “rules” that allow it to identify good and bad decisions. The more examples, the more refined and flexible the application. Obviously, if you want to train a neural network faster, you need to have all your examples ready in digital form so that you can input them quickly. Assuming proper preparation, however, it seems likely that we are going to see a variety of new AI chips on the market that will make it easier to train AI applications.
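If you want a feel for what that training loop looks like in practice, here is a minimal sketch using the scikit-learn library. The “loan decision” examples, the two features, and the labels are invented purely for illustration; the point is simply that the network learns from examples labeled good or bad, and more (and faster-processed) examples mean a more refined decision boundary.

```python
from sklearn.neural_network import MLPClassifier

# Hypothetical loan decisions: features are (credit score / 850, debt ratio),
# labels are 1 for "that was a smart decision", 0 for "that was a bad decision".
examples = [
    ([0.90, 0.10], 1), ([0.80, 0.20], 1), ([0.85, 0.15], 1), ([0.70, 0.30], 1),
    ([0.30, 0.80], 0), ([0.40, 0.70], 0), ([0.20, 0.90], 0), ([0.35, 0.75], 0),
]
X = [features for features, label in examples]
y = [label for features, label in examples]

# Train a small neural network on the labeled examples.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

# New situations the network has never seen; expected output: [1 0]
print(model.predict([[0.75, 0.25], [0.25, 0.85]]))
```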
Following up on improved training, there has been a breakthrough in understanding the rules that neural networks develop. In the Eighties, when we developed expert systems, the rules were explicitly coded into the application. In essence, human experts were asked to explain how they analyzed problems, and AI developers converted the human explanations into formal rules. If, subsequently, someone wanted to know how the expert system had arrived at a given conclusion, one could always print out the sequence of rules that the system had used. Most expert system users found this ability to monitor the system's reasoning a comfort.
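For those who never worked with expert systems, here is a toy sketch of the idea: the rules are explicit, and the engine records every rule it fires, so the reasoning can be printed back afterwards. The rules, facts, and names here are invented for illustration, not taken from any real system.

```python
# A toy forward-chaining expert system for a credit decision. Each rule has an
# explicit condition and conclusion, and the engine keeps a trace of every
# rule it fires, so the system can always explain how it reached its answer.
rules = [
    ("R1: low credit score implies high risk",
     lambda f: f.get("credit_score", 999) < 550, ("risk", "high")),
    ("R2: high risk implies reject",
     lambda f: f.get("risk") == "high", ("decision", "reject")),
    ("R3: no risk finding implies approve",
     lambda f: "risk" not in f and "decision" not in f, ("decision", "approve")),
]

def run(facts):
    fired = []                       # the sequence of rules the system used
    changed = True
    while changed:
        changed = False
        for name, condition, (key, value) in rules:
            if condition(facts) and facts.get(key) != value:
                facts[key] = value
                fired.append(name)
                changed = True
    return facts, fired

facts, fired = run({"credit_score": 510})
print(facts)   # {'credit_score': 510, 'risk': 'high', 'decision': 'reject'}
print(fired)   # the system's reasoning, step by step: ['R1: ...', 'R2: ...']
```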
Neural networks use complex statistical algorithms to analyze input data and reach conclusions about what factors are relevant or irrelevant in reaching decisions. There is no way to explain this reasoning to human monitors, and that has often been a source of concern as networks are used to make more and more complex decisions.
Google has recently begun to publish papers on the use of “gradients of counterfactuals.” This is a rather technical topic, but, in essence, Google researchers have found a rather nice way to better understand the logic of a neural network. One analyzes a neural network that makes good predictions by isolating the various factors that could contribute to its success. One by one, one checks to see whether eliminating information about a specific factor changes the output of the model. Once one has eliminated all the factors that do not contribute to its success, one has isolated the factors that are, in fact, contributing to the correct predictions. (Items that are eliminated and do not, in fact, affect the prediction are termed “counterfactuals.”) Work on this new approach is just beginning, but with luck neural network developers will soon have a new tool that will help consumers better understand how a given neural network is functioning. (For more information, check the Google website for Sundararajan et al.'s paper on “Gradients of Counterfactuals.”)
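The actual Google work operates on gradients and is considerably more sophisticated, but the factor-by-factor elimination idea described above can be sketched very simply. Everything here – the stand-in model, the factor names, the baseline value, and the weights – is invented for illustration; it is not the method from the paper.

```python
def ablation_importance(model, example, baseline=0.0):
    """For each input factor, replace it with a neutral baseline value and
    measure how much the model's output moves. Factors whose removal barely
    changes the output are not really contributing to the prediction."""
    reference = model(example)
    impacts = {}
    for name in example:
        perturbed = dict(example, **{name: baseline})
        impacts[name] = abs(model(perturbed) - reference)
    return impacts

# A stand-in "model" whose prediction depends heavily on income, weakly on age,
# and not at all on zip code (all names and weights are invented).
def model(x):
    return 0.8 * x["income"] + 0.1 * x["age"] + 0.0 * x["zip_code"]

print(ablation_importance(model, {"income": 1.0, "age": 0.5, "zip_code": 0.3}))
# zip_code's impact is 0.0 -- removing it never changes the prediction.
```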
Finally, some research on who is doing AI today. A company called Tortoise Intelligence has developed an AI Index – a model that uses lots of data to rank how various countries are approaching AI. From their research, they conclude that the US is the clear leader. The US scored almost twice as high as the second-placed China, thanks to the quality of its research, its talent and the speed with which private funding is commercializing AI. Unfortunately for the US, China is also growing fast and the Tortoise experts predict that China will overtake the US in five to ten years. That may simply mean that China begins to produce hardware and software that the US and Europe want to buy. Or it may mean that China begins to generate business processes powered by AI applications that revolutionize industries as Japanese companies revolutionized the automobile industry in the Eighties. We'll see. Meanwhile, for more information, see: http://www.odbms.org/blog/2020/01/on-the-global-ai-index-interview-with-alexandra-mousavizadeh/
This information about a few developments in AI hardly begins to cover the many developments taking place. Suffice it to say that the AI market is moving very fast, and it's going to require lots of new business processes to assure that new AI applications provide the competitive advantage that companies investing in this technology are hoping to achieve.