Fred Baker had a crisp message for a combined Energy and ICT audience: make new mistakes in the Smart Grid, not the ones already witnessed in the IPv4/IPv6 Internet. Representatives from ETSI (the European standardization organization for information and communications technologies) and ITU-T (the ITU Telecommunication Standardization Sector) were quick to point out that in many cases people’s lives depend on electricity, so a different and much more controlled design is needed. So, what about Smart Grid reliability?
Many power engineers also pointed out that the power grid may need latencies as low as 2 milliseconds within substations and 16 milliseconds between substations. In many talks this was deemed ‘beyond the Internet’, so different protocols would supposedly be needed.
Towards Smart Grid stability
The conservative approach is understandable, but outdated. Instead of designing theoretically reliable and thus inherently inefficient protocols (X.25 and X.400, anyone remember? I still have nightmares), we should take the best parts of the existing Internet and apply them on top of the power grid infrastructure in a highly resilient way, so that isolated errors or retransmissions cannot cause Smart Grid instability. Too much data is better than too little when the goal is Smart Grid reliability.
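The “too much data” idea can be made concrete: instead of waiting for retransmissions (whose latency is unpredictable), each datagram can simply repeat a short window of recent samples, so a single lost packet loses nothing. A minimal sketch of that pattern, with all class and field names hypothetical:

```python
import json
from collections import deque

# Illustrative sketch: each datagram carries the newest sample plus a
# short history window, so one lost packet never loses data and no
# retransmission (with its unpredictable latency) is ever required.
HISTORY = 3  # each packet repeats up to the last 3 samples

class RedundantPublisher:
    def __init__(self, history=HISTORY):
        self.buffer = deque(maxlen=history)  # sliding window of samples
        self.seq = 0

    def packet_for(self, sample):
        """Build the next datagram: new sample plus recent history."""
        self.seq += 1
        self.buffer.append({"seq": self.seq, "value": sample})
        return json.dumps(list(self.buffer))

class Receiver:
    def __init__(self):
        self.samples = {}  # seq -> value; duplicates overwrite harmlessly

    def accept(self, datagram):
        for item in json.loads(datagram):
            self.samples[item["seq"]] = item["value"]

pub, rcv = RedundantPublisher(), Receiver()
packets = [pub.packet_for(v) for v in [50.01, 49.98, 50.02, 50.00]]
del packets[1]  # simulate one lost datagram in transit
for p in packets:
    rcv.accept(p)
# All four samples still arrive despite the loss:
assert sorted(rcv.samples) == [1, 2, 3, 4]
```

The trade-off is deliberate: bandwidth is cheap relative to the cost of a control decision made on stale or missing data, which is exactly the “too much data is better than too little” argument above.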
It is understandable that these initial discussions between power and telecom engineers end in incompatible positions. The latency issue is real at the substation level, where a programmable logic controller (PLC) usually takes care of millisecond-grade adjustments of phases and other parameters.
However, nobody is saying that fast local control should be abolished. It just needs an intelligent umbrella system in which the PLC’s logic is stored and updated, so that the actual PLC hardware becomes a commodity component that can be replaced while the programming resides in the Smart Grid – also known as the Cloud in ICT terms.
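The umbrella idea can be sketched as a central store of versioned control logic, keyed by a node’s role rather than its serial number, so a replacement unit picks up exactly where the old hardware left off. Everything here (the `LogicStore` and `PLCNode` names, the role string, the program text) is illustrative, not a real Smart Grid API:

```python
# Hypothetical sketch of the "intelligent umbrella" system: control
# logic lives centrally, and a commodity PLC fetches its current
# program by role at start-up, so hardware can be swapped freely.

class LogicStore:
    """Central (cloud-side) repository of versioned control logic."""
    def __init__(self):
        self._programs = {}  # role -> (version, program text)

    def publish(self, role, version, program):
        self._programs[role] = (version, program)

    def fetch(self, role):
        return self._programs[role]

class PLCNode:
    """Commodity PLC: its identity is its role, not its hardware."""
    def __init__(self, role, store):
        self.role = role
        self.version, self.program = store.fetch(role)

store = LogicStore()
store.publish("substation-7/phase-control", "v12",
              "IF phase_error > threshold THEN adjust_phase")

old_unit = PLCNode("substation-7/phase-control", store)
replacement = PLCNode("substation-7/phase-control", store)  # hardware swap
assert replacement.program == old_unit.program  # logic survives the swap
```

The point of the design is separation of concerns: millisecond control stays local in the PLC, while the slower loop of distributing and updating that logic runs over ordinary Internet-style infrastructure.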