Guest Article: What Can Old-School Computers Tell Us About Access Control Systems? Physical Security Technology: The IT Perspective #1
This is the first in a series of columns by Chris Fine highlighting some of the relationships and parallels between Information Technology (IT) and the world of physical security technology (access control, surveillance, and other physical security systems).
Lee Odess, in his excellent “Inside Access Control” series of newsletters, has highlighted the many new technologies and requirements that are changing the face of the Physical Access Control System (PACS) industry. But what drives this change, and why might the existing installed base resist changes? What level of change might accomplish goals without a total swap-out of the expensive, long-lasting, existing infrastructure? Is there a comparable technology that helps us to understand the dynamics?
Coming from the perspective of a corporate IT person, I would point to the long life and ongoing presence of old-school mainframe computers and software as comparable to many of the enterprise-scale PACS of today. Mainframes, originally those big hulking machines with the spinning tapes that you see in old movies and business magazines, date from the 1960s, and their descendants have been around ever since. Mainframe hardware, software, and staffing are still a large business. Often, the software code is decades old, but it works. (Well, there was the Y2K crisis, but that was fixed, and a lot of that code is still running, with a 4-digit year substituting for the original 2-digit year.)
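To see why the 2-digit year was a crisis and the 4-digit year a fix, here is a minimal illustrative sketch (in Python rather than the COBOL of actual mainframe code; the function names and the badge-expiry framing are my own, not from any real system):

```python
# Illustrative only: why 2-digit years broke at the year 2000.
def is_expired_2digit(expiry_yy: int, today_yy: int) -> bool:
    # With 2-digit years, 00 (meaning 2000) compares as earlier than
    # 99 (meaning 1999), so a credential valid until 2000 looks
    # long-expired when checked in 1999.
    return expiry_yy < today_yy

def is_expired_4digit(expiry_yyyy: int, today_yyyy: int) -> bool:
    # The Y2K fix discussed above: store the full 4-digit year,
    # so ordinary numeric comparison stays correct across the century.
    return expiry_yyyy < today_yyyy

print(is_expired_2digit(0, 99))       # True  (wrong answer)
print(is_expired_4digit(2000, 1999))  # False (correct answer)
```

The same comparison logic, fed wider data, keeps working; that is the pattern of gradual repair that has kept decades-old code in production.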
In a world of PC- and server-based technology, massive public and private cloud environments, and supercomputers that fit into our hands, it’s easy to forget just how many mainframes there still are. They’re expensive, complicated, and require staff who are hard to find. But think about the recent stories begging for COBOL-language programmers to fix creaking unemployment systems. What…? These things are still here? And doing so many things? IBM estimates 220 billion lines of COBOL code are still in use today.
So why are mainframes (and enterprise-scale traditional PACS) still around, in an evolved version of their original form? Primarily because, for a certain class of work, they are well-suited to the job. Mainframes and PACS, if properly programmed and managed, are excellent at doing a few things very well, at large scale, with very high reliability.
A typical application for a mainframe might be managing a flight reservation or someone’s Social Security payments. For a PACS, it’s the single (yet not-so-simple) decision of granting or denying access, spread across many thousands of people, doors, sensors, badges, turnstiles, and what have you. And the decision has to be 100% correct, based on the data the system has, 99.999% of the time. The system should never go offline or break down. There are many related sub-decisions as well. That capability is expensive to replace, and is replacement even needed?
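The core of that grant-or-deny decision can be sketched in a few lines. This is a hypothetical illustration; the class, field names, and badge/door data model are assumptions made up for this column, not any vendor’s actual API:

```python
# Hypothetical sketch of the central PACS decision, not real vendor code.
from dataclasses import dataclass, field

@dataclass
class AccessPolicy:
    # Maps a badge ID to the set of door IDs its holder may open.
    grants: dict = field(default_factory=dict)
    # Badges that have been revoked and must always be denied.
    revoked_badges: set = field(default_factory=set)

    def decide(self, badge_id: str, door_id: str) -> bool:
        # The single decision: grant or deny, based strictly on the
        # data the system holds at this moment.
        if badge_id in self.revoked_badges:
            return False
        return door_id in self.grants.get(badge_id, set())

policy = AccessPolicy(
    grants={"badge-001": {"lobby", "lab-2"}},
    revoked_badges={"badge-099"},
)
print(policy.decide("badge-001", "lab-2"))  # True
print(policy.decide("badge-001", "vault"))  # False
print(policy.decide("badge-099", "lobby"))  # False
```

The logic itself is trivial; what makes it mainframe-like is executing it for thousands of doors and people, correctly, without ever going down.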
Mainframes and PACS have a long history of gradually adapting to various kinds of connected devices, terminal interfaces, database systems, etc. Today’s mainframe might have some software and UX elements that take one back to the old days, but inside, it runs on modern, powerful, virtualized hardware. You can put a Web interface on a mainframe or PACS, and most have them these days. You can put the software on faster and faster hardware.
Nowadays, many mainframes run Linux alongside the older operating systems and software. Many PACS architectures date back to 1990s-era client-server computing but can run on more modern computing platforms (e.g., cloud) that support the architecture. There is some flexibility to mix new and old.
Mainframes and PACS are both quite secure if they’re configured correctly. Mainframes manage their internal security with a monolithic model, and since they tend to be self-contained, with tightly-managed connections to the outside, they tend to resist hacking. PACS are designed to run with their own databases and special devices (like panels) and are usually configured with some separation from the rest of the environment.
From an organizational standpoint, both mainframes and PACS tend to live in their own “IT bubbles,” with groups of specialists, often contractors, installing and maintaining them. Enterprise IT usually has to approve the integrity of the configuration and helps integrate with various internal systems, like the HR database and login credentials, but often isn’t involved in the day-to-day operation. Out of sight, causing no trouble, and tended by a dedicated staff can equate to out of mind in the IT world.
So what’s wrong with this picture? What may challenge, and drive replacement of, mainframes and traditional PACS, rather than upgrades? This takes us back to Lee’s, and others’, points about the evolution of the Access Control industry. Big, monolithic systems are tough and expensive to change outright, as opposed to extend. They’re fine with gradual change over time (and actually support it very well, as we’ve said), but in the face of disruption they don’t always fare well, and they leave themselves open to competition from more flexible systems, like cloud infrastructures and more modern software approaches.
In the world of PACS, here are some examples of disruptive changes in requirements that may challenge traditional systems:
- Full integration with visitor management systems and other third-party apps – both mainframes and traditional PACS tend to be proprietary in the way they approach integration
- Support for new and more-dynamic models of access management – for example, day-to-day access management (or even hour-by-hour) for large populations based on COVID-driven facility sharing, and radically different policies in various countries
- Requirements for more friendly, approachable interfaces for end-users and operators
- Incorporation of AI, computer vision, and other such capabilities, beyond the device level – and of course, the dashboards, visualizations, and analytics that go with this
- Rapid cost drops in cloud-based approaches to PACS, along with open standards, open-source, and other cheaper, flexible, alternatives to traditional PACS. Both mainframes and traditional PACS are expensive to operate and maintain
It’s also worth noting that more and more enterprises are taking a holistic view of risk management and security, merging the former “shadow IT” of physical security technology into the mainstream of cybersecurity and enterprise risk management. This helps to drive the technology toward conformance with the rest of the IT infrastructure and to utilize the skills and preferences present in the IT workforce.
To sum it up: With both mainframes and traditional PACS, there has long been ample reason for security and confidence in both the solution itself and its long-term applicability. But we may now be approaching the time, at least for PACS, when the mainframe approach will no longer suffice by itself.
- A disruptive pace of changing requirements is the main driver of change in old but still functional technology, with the economic benefits of newer technologies in second place.
- There is a growing market opportunity for new, innovative security and PACS vendors.
- In parallel, there’s a great opportunity to figure out new ways to integrate old-school PACS with new types of applications and technologies, which is an attractive option for enterprises that have invested a lot of money and time into PACS and appreciate the advantages of these systems.
Remember, mainframes started with punched cards, but today you can access them in the cloud, or on the Web, or from a mobile app. They still do millions of tasks a day – you just don’t know they’re there. Perhaps traditional PACS vendors need to think in the same direction. Add more flexibility to the inherent “stickiness” of the long-term capital investment and other commitments already made. Open up, integrate, partner, innovate, and ye shall last – or fail to do so, and be conquered.
What do you think?
Chris Fine is an experienced independent advisor and consultant, specializing in strategy, technology, innovation, and business/market development. Chris’s background includes engineering, IT, management, and financial work in multiple roles. Some of his areas of technology and market expertise include enterprise technology, AI, cyber and physical security systems, communication and collaboration products and networks, workplace technology, and the Internet of Things (IoT). Chris’s recent focus, working with vendors, Real Estate professionals, IT groups, and end-users, has been the future of work and the workplace, including technologies, user and operator experience, office layout, remote work, and security, including the evolution of the workplace during and post COVID-19. Chris’s extensive speaking background includes industry conferences, corporate events, podcasts, and webinars. He is the author or co-author of multiple industry studies, White Papers, and other publications. Chris’s other interests include music, mentoring, and vintage technology.