Why RPA Will Not Be Any Faster Than the People It Replaces (and How AI Can Help)
- January 29, 2019
It is said that a chain is only as strong as its weakest link. Let’s look at the information chain that most business processes run on today. Web servers talk to application servers, which in turn relay information to middleware servers, which then interact with DNS servers. Information packets move across a network from deep inside an application server to the remote client and back.
But there is latency in every step of the process. Besides network latency, which is the time taken for a packet of data to get from point A to point B, the traffic routed through Internet servers and other backbone devices adds to the overall delay. Further, applications and databases themselves, having grown complex and unwieldy over the years, don't really help the cause.
When the packets eventually reach the client PC, data-intensive, web-based applications push the user-interaction work onto the client workstation, where a scripting language has to churn through thousands of rows of data before the client finally displays the update.
And then comes the weakest link in the chain: the human who processes the transaction. While network latency is measured in nanoseconds, application latency in microseconds, and client latency in milliseconds, human latency, not surprisingly, is measured in whole seconds.
As long as humans were on the critical path, all the sins of the network were forgiven. But when RPA came into the picture, with its 6,000 transactions a minute (compared to a human's 60 transactions a minute), organizations were not so forgiving.
RPA was supposed to be the superhero that smooths out all the kinks and hastens the result, right?
But that didn’t happen.
It was hardly RPA's fault. Just because RPA magnified the speed of processing at the edge client didn't mean the latency and choke points still inherent in every other step of the chain simply vanished. Unfortunately, you can't rip and replace networks, databases, and applications.
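A rough back-of-envelope calculation makes the point. The per-transaction infrastructure latency used below is an assumed round figure for illustration; only the 60 and 6,000 transactions-a-minute rates come from the comparison above.

```python
# Back-of-envelope: how much of the bot's headline rate survives once
# infrastructure latency is added back in. The 100 ms figure is an
# illustrative assumption, not a measurement.

infrastructure_s = 0.100          # network + application + client latency per transaction
human_operator_s = 60 / 60        # ~60 transactions a minute -> ~1 s of "think time"
bot_operator_s = 60 / 6000        # ~6,000 transactions a minute -> ~10 ms

for label, operator_s in (("human", human_operator_s), ("bot", bot_operator_s)):
    per_transaction = infrastructure_s + operator_s
    per_minute = 60 / per_transaction
    share = 100 * infrastructure_s / per_transaction
    print(f"{label}: {per_minute:,.0f} transactions/min, "
          f"infrastructure is {share:.0f}% of each one")

# human: ~55 transactions/min  -- infrastructure latency is barely noticed
# bot:  ~545 transactions/min, far short of 6,000 -- infrastructure now dominates
```

With a human in the loop, infrastructure latency is a rounding error; with a bot, it becomes the bulk of every transaction's turnaround time.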
The secret to speed lies elsewhere: in the effective management of the network and the elimination of congestion. And that's where Artificial Intelligence (AI) can be the true superhero.
AI to the rescue
NTT DATA used AI to study traffic patterns in the city of Guiyang. Guiyang is the capital of Guizhou, a commercial and economic hub, and the third-fastest-growing district in China. Guiyang has a three-dimensional transport network, including underground transport, roads, and elevated overpasses. The rapid growth in Guiyang brought with it traffic-related woes. And worse, those woes began to slow the pace of growth itself.
We partnered with the city administration to develop a real-time analytics solution that would control the traffic lights to make optimal use of the existing road network. Here's how we helped. We:
- Set up 170 cameras at the 19 most notorious intersections
- Monitored 60 million cars passing through these intersections over a period of several months
- Studied the patterns of traffic flow, traffic lights, the direction of streets, automobile capacity, time of day, season, and weather among other parameters
- Created a traffic simulation model based on a neural-network-powered AI engine (sketched below)
- Reprogrammed 220 traffic signals, based on the above parameters
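To make the neural-network step concrete, here is a minimal sketch of that kind of model. The feature set, the synthetic data, and the signal-tuning loop are illustrative assumptions, not the production engine built for Guiyang; a small scikit-learn regressor stands in for the actual AI engine.

```python
# Minimal sketch: a neural network learns delay from traffic conditions,
# then is used to pick the signal timing that minimises predicted delay.
# Feature names and data are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# One row per intersection per 15-minute window:
# [hour_of_day, is_weekend, rain_mm, vehicles_per_min, green_share_main_road]
X = rng.uniform([0, 0, 0, 5, 0.3], [24, 1, 10, 60, 0.7], size=(5000, 5))
# Target: average delay per vehicle in seconds (synthesised here for the sketch).
y = 20 + 0.8 * X[:, 3] - 25 * X[:, 4] + 2 * X[:, 2] + rng.normal(0, 2, 5000)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X, y)

def best_green_share(hour, weekend, rain, flow):
    """'Reprogram' a signal: pick the green share with the lowest predicted delay."""
    candidates = np.linspace(0.3, 0.7, 41)
    contexts = np.column_stack([
        np.full_like(candidates, hour),
        np.full_like(candidates, weekend),
        np.full_like(candidates, rain),
        np.full_like(candidates, flow),
        candidates,
    ])
    return candidates[model.predict(contexts).argmin()]

print(best_green_share(hour=8, weekend=0, rain=0, flow=45))
```

In the real deployment, the training rows would come from the camera observations listed above and the delay would be measured rather than synthesised.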
The results were immediate and significant. We saw a 25% increase in the number of cars that passed through these intersections during peak hours and a corresponding 26% reduction in the delay due to congestion.
Lessons from city traffic to network traffic
Data traffic on a network (spanning servers, applications, and clients) is not unlike city traffic. There are switches and routers, and there are servers and gateways that the data packets must navigate, each with its own rules engine and time delays. The process of building a simulation model for such data traffic is therefore largely similar to the model we built for Guiyang. In fact, the data needed to train the AI engine is already available, since data traffic leaves a footprint that can be readily accessed through logs.
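As a minimal sketch of that idea, the snippet below turns a simplified, hypothetical log into per-hop delay records of the kind such a model could train on; the log format and hop names are assumptions for illustration, not any particular product's output.

```python
# Sketch: turning ordinary infrastructure logs into training data for a delay model.
# Real sources would be router, load-balancer, and application logs.
import csv
from io import StringIO
from collections import defaultdict

LOG = """\
request_id,hop,timestamp_ms
r1,gateway,0
r1,app_server,42
r1,database,118
r1,client,205
r2,gateway,0
r2,app_server,61
r2,database,188
r2,client,310
"""

# Group timestamps per request, then compute hop-to-hop delays: the same kind
# of footprint the Guiyang cameras provided for cars, read straight from logs.
hops = defaultdict(list)
for row in csv.DictReader(StringIO(LOG)):
    hops[row["request_id"]].append((row["hop"], int(row["timestamp_ms"])))

for request_id, events in hops.items():
    events.sort(key=lambda e: e[1])
    deltas = {f"{a}->{b}": t2 - t1
              for (a, t1), (b, t2) in zip(events, events[1:])}
    print(request_id, deltas)
```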
Overcoming network latency, improving network speed, and increasing throughput therefore become the critical tasks for such AI-based models. Only then will we be able to realize the true potential of RPA. We are fast approaching the point where application response is the key choke point for the Turnaround Time (TAT) of a transaction in an automation-intensive environment. And RPA, aided by AI-based solutions, is the way forward (notwithstanding humans, the weakest link in the chain).