Big Data Cloud Pioneers (10 + 11)

::: nBlog :::

When NASA launched the Pioneer 10 and 11 deep space probes in 1972 and 1973 respectively, local computing and storage were extremely expensive compared to today’s resources. That’s why it was logical to make them both fully Cloud-controlled, using NASA’s Deep Space Network. Their software was updated countless times before 2003, when Pioneer 10, by then near the outskirts of our solar system, finally fell silent because its plutonium-based radioisotope thermoelectric generators could no longer supply enough power. This was some 20 years after their planned lifetime.

The telemetry, radiation and numerous other sensor readings amounted to a total of 40 gigabytes for both Pioneers, a formidable amount to store on the 800 cpi tapes of the late 1970s, and even on the 6250 cpi ones of the early 1990s. A full-size 800 cpi tape reel holds a maximum of 5 megabytes.
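For scale, here is a quick back-of-the-envelope sketch in Python using only the figures quoted above; the 6250 cpi per-reel capacity is my own proportional assumption rather than a number from the mission archives.

```python
# Rough check of the storage burden implied by the figures above.
# 40 GB total and 5 MB per 800 cpi reel come from the text;
# the 6250 cpi capacity is assumed to scale linearly with recording density.

TOTAL_DATA_MB = 40_000                              # ~40 gigabytes of Pioneer data
REEL_800_CPI_MB = 5                                 # full-size 800 cpi reel
REEL_6250_CPI_MB = REEL_800_CPI_MB * 6250 / 800     # ~39 MB, assumed

reels_800 = TOTAL_DATA_MB / REEL_800_CPI_MB
reels_6250 = TOTAL_DATA_MB / REEL_6250_CPI_MB

print(f"800 cpi reels needed:  ~{reels_800:,.0f}")   # ~8,000 reels
print(f"6250 cpi reels needed: ~{reels_6250:,.0f}")  # ~1,000 reels
```

In other words, on the order of eight thousand reels at 800 cpi, and still roughly a thousand even at the higher density.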

NASA had no obligation to store ‘secondary’ data like telemetry, but fortunately one of the original systems engineers, Larry Kellogg, converted the tapes to newer formats every now and then. Thanks to him, scientists are still making new discoveries based on the raw Pioneer data. Having the data in raw format is also exceptionally valuable, because ever more advanced algorithms can be applied to it.

Today’s embedded but cloud-connected environments have a lot to learn from the Pioneers’ engineering excellence and endurance planning. We just briefly forgot that lesson when it seemed so easy to solve every storage and computing problem with local, power-hungry disks and CPUs.

//Pasi
