::: nBlog :::
Following the invention of the transistor and subsequently the integrated circuit, we’ve seen an explosion of algorithms embedded into ever more mundane things around us. You have your thermostats, rollerblades, car engine control modules and so on – all executing algorithms ranging from the simple LCD display scheme of a coffee machine to a medical blood analyzer checking thousands of markers and providing a diagnosis in seconds.
These algorithms and programs are usually designed once, put into mass-produced chips and then embedded into products with only minor tweaks. This is driven by economics: mass-produced chips became so cheap that most manufacturers simply applied the standard set of algorithms and programming to their products.
The downside of this fire-and-forget proliferation is that these algorithms are mostly disconnected and thus do not evolve after being taken into use. Longer-life-cycle products such as solar panel controllers or heat pumps are even more problematic, as the programming and algorithms may have been developed decades ago with an obsolete language and hardware, while the people familiar with the actual science behind the algorithms are long retired. What I’ve seen happen many times is that a new version of, say, pump control logic is inferior to the old one, and a considerable amount of time is spent re-inventing solutions the manufacturer once mastered.
In order to get back to sustainable innovation, we must re-think where these algorithms live and how their entire lifecycle and real-time performance data are utilized in continuous development. It is not enough to just model physical things digitally (the “Digital Twin” promoted e.g. by the CAD/CAM industry) – instead, we must treat our algorithms and data as our most valuable assets and make them available to future generations of scientists and engineers. In other words, no thing is too insignificant to have a spime.