::: nBlog :::
Over the last couple of years there has been an influx of analytics companies focusing on different industries, ranging from defence and agriculture all the way to fashion and healthcare. At the same time, large companies are trying to hire the best data scientists in order to quickly create new products and services from the existing, new and growing data pools piling up in their repositories.
Processes and methods can surely be improved by running large batch analyses, holding a few meetings around the results and then solemnly deciding to act. This is what I’ve observed within many enterprises; sometimes the results are good, sometimes bad – but mostly these efforts are treated as exceptions and one-offs, leading to sporadic changes instead of sustainable improvement.
In the emerging spime world, analytics is executed continuously and evolutionarily by each spime and by combinations of spimes. Human effort is still crucial, but it should be directed wisely to the non-realtime parts of any process so that bottlenecks are avoided.
This might feel frightening and contrary to soft ‘We put people first’ slogans, but we must face the fact that the wet computing machinery in our heads is neither the fastest nor the least error-prone compared to the emerging, scalable and fault-tolerant spime infrastructure.
Giving artificial intelligence and other automated processes more control does evoke dystopian fears, as seen in the Terminator and Matrix films, but this should not be used as an excuse to stall development or to allow sloppy programming and engineering.
Analytics is an exciting field and one of the pillars empowering spimes, alongside Big Data, the Internet of Things and distributed computing. It just needs to be promoted into a more permanent relationship with business.