::: nBlog :::
When the first mainframes were deployed in the 1950s, programs were written carefully, e.g. in COBOL and FORTRAN, and stored on punch cards and later on magnetic tapes. Programs were very much purpose-built, and great care was put into utilizing the limited memory and CPU resources. These were mission-critical programs, after all, which guided ballistic missiles and pushed us to the moon.
Fast-forward to the 70s, and many enterprises already had minicomputers running Unix and similar operating systems such as VMS and MPE. Programs started to reach the common man, in the form of systems for airline reservations and banking, and the first networks were built. Programs grew larger and more complex as computers became cheaper and more powerful. There were even early applications like email, piloted by pioneering companies like my first employer Ahlstrom – as early as 1977.
The emergence of home computers in the 1980s brought a kind of democratization of programming, as more and more people (like me at 11) were able to learn how to code. Languages like BASIC, FORTH and LOGO lowered the entry barrier even further.
In the business world, just before the Internet era, it was the IBM PC that caused the great diaspora of computing power to users' desktops. Notwithstanding the fact that its processor, memory and peripheral interface architecture were suboptimal at best, the PC precipitated the downfall of the enterprise minicomputer era. More and more powerful PCs were created, and with our x86 Pentiums we're still on the same path.
As for operating systems, the 1980s PC originally ran Microsoft's MS-DOS, a disk operating system designed to handle floppies and early hard disks, with direct access to x86 hardware and resources. It evolved into Windows, all the way to today's version 10, which still reminds us of MS-DOS with e.g. drive letters and executable file formats.
As soon as the PC was powerful enough, Unix (with great help from Linus Torvalds) was ported to x86, and Linux became a serious contender to Windows in the enterprise world – if not on the desktop, then in enterprise backend applications.
When enterprises were littered with individual servers running Windows and Unix, the solution was to virtualize them onto fewer pieces of physical hardware. In the early 2000s this saved a lot of money, as computing power, memory and other resources were once again centralized, in minicomputer or even mainframe fashion. Today's cloud providers like Amazon and Microsoft Azure are just a logical continuation of this development.
With growing concern over resource waste and over bad programming hiding inside virtualized environments, a new paradigm of containers is now slowly emerging. A container is a very small virtual machine, usually created to support just a few programs or even one program at a time, and it can be dynamically switched on and off depending on changing requirements. In parallel with containers, and partially based on them, a wider microservices architecture is gaining ground, enabling faster and less interdependent software development.
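To illustrate the "one container, one program" idea, here is a minimal sketch of a container image definition. This assumes Docker as the runtime and a hypothetical `hello` binary – the text above names neither; it is only meant to show how small such a unit can be compared to a full virtual machine:

```dockerfile
# Minimal single-program container: a thin base image, one binary, no full OS userland.
FROM alpine:3.19
# The single program this container exists to run (hypothetical example binary).
COPY hello /usr/local/bin/hello
# One container, one program: the application itself runs as the container's main process.
ENTRYPOINT ["/usr/local/bin/hello"]
```

A container built from such a definition can be started and stopped on demand (e.g. `docker run --rm hello`), which is what makes the dynamic switch-on/switch-off behavior described above practical.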
While containers are good news for distributed and efficient computing, they often inherit their traits from the 1980s PC explosion, being representations of physical machines with those suboptimal features still present for compatibility. Furthermore, current containers are often static and bound to certain environments and network conditions.
Therefore we at BaseN decided to create a new container type – the Spime Container – which is practically a container without a virtual machine, meant to run programs in BaseN's massively parallel computing environment with minimum legacy overhead. Spimes in our containers also have the ability to move around the infrastructure in an autonomous fashion, depending on predefined thresholds and other parameters.
Although we favor our own containers for internal services and real-world customer applications, we continuously analyze the computing world and adapt the best features of trends like microservices and network functions virtualization (NFV). After all, it's all about good algorithmic design, engineering and implementation – now in more and more traditional industries. Time to spime your product.