The Ultimate Adoption of NG9-1-1

LISTEN TO THE AUDIO VERSION OF THIS BLOG ON SPREAKER – [CLICK HERE]

For at least the last decade we’ve been discussing Next Generation 911 services and the additional value they bring through the increased number of data points they will carry. Despite this fundamental value, the uptake has been less than inspiring. While several root causes for the delays may exist, they are far from unknown or misunderstood, as is so often the case with new technology deployments.

The initial reaction to any new technology is its perceived cost. We’ve grown up with the understanding that if something is new, it is likely more expensive than its predecessor because it carries additional capabilities. While there may be some truth to that for things that are physical in nature, it is often untrue for things that are software-based, or at least automated on a computer platform, especially one that involves a network of computers.

Once reserved for high-end scientific calculation and research, the Internet as it exists today has revolutionized everything from banking to e-commerce, along with the online availability of the information and resources needed to accomplish nearly any job. This tends to force innovation within various processes and ultimately reduces the number of person-hours required to accomplish a task. The savings associated with those efficiencies are passed directly on to the consumer through a lower-cost service or, in many cases, through additional value and services delivered for the same cost.

There is no denying that the legacy 911 network is more than half a century old. During its initial development and adoption, landline communications were entirely analog. Even the core backbone PSTN operated with very simplistic controls delivered over point-to-point connections between regions. Digital signaling, let alone digital switching, simply did not exist, and the infrastructure required to connect and route calls from coast to coast could easily be considered a bit of a science project.

As new networks were built and the old networks were updated, digital signaling and transport slowly burgeoned through the core of the network, becoming more robust and more informative along the way. IP capabilities allow intelligent point-to-point connections to exist and important data to be communicated from end to end. From an architecture perspective, the need for dedicated point-to-point connections has all but been eliminated, the last bastion being the data security of those networks.

Have we finally reached the tipping point, a decade later than anticipated? I feel like we have. The key indicator is the realization of the financial burden that the legacy network places on the various players involved in keeping the existing network operational and functioning.

The continuous cost of maintaining and supporting the legacy analog circuits in use today is something the local exchange carriers struggle with. Their entire financial model has been strained and trimmed considerably by the disappearance of coin pay phones and the revenue they generated, as well as the removal of most local toll charges. That revenue stream has all but dried up, and at the same time the carriers face an infrastructure replacement event that forces billions of dollars in legacy core infrastructure to be written off the books and replaced with new server infrastructure. What was called the ‘long lines network’ must now give way to the Internet and the associated routers, firewalls, and other elements required to deliver that new IP backbone.

The most recent public safety conferences, although virtual, provided exceptional insight into the current state of affairs in the industry. In addition to the usual technology-focused sessions, there were several case studies presented by those who have taken significant steps toward deploying the infrastructure to support next generation 911 networks. For instance, the state of California has made significant efforts in building a statewide Emergency Services IP network (ESInet) that will improve interoperability and connectivity across the entire state.

IoT Devices from the Enterprise Contributing

As these new pockets of NG911-capable infrastructure start coming online, they will provide a new level of connectivity to PSAPs from the various carrier networks and originating service providers (OSPs). Beyond those networks, the enterprise space holds a treasure trove of valuable information that could be applicable in the event of an emergency. In addition to providing valuable location data for incidents originating within their facilities, data collected from IoT sensors on the property will be able to provide relevant and actionable information regarding events that occur adjacent to the property, within view of those sensors.

For example, imagine an incident that occurs at a busy intersection in a city. It’s very likely that IP-enabled video cameras from local businesses cover that intersection from four separate angles. In the event of an emergency, crowdsourcing that information could prove invaluable. Of course, the information would only be shared when an incident or event triggered the appropriate logic to allow access.
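
What might that gating look like? Here is a minimal sketch, assuming hypothetical names (Sensor, Incident, sharable_feeds) rather than any existing API: feeds are released only when an active incident falls within an area a camera actually covers.

```python
# Hypothetical sketch: expose enterprise camera feeds only when an active
# incident falls within an area a sensor actually covers.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str
    covers: set[str]        # areas or intersections within the camera's view

@dataclass
class Incident:
    location: str           # e.g. "Main St & 5th Ave"
    active: bool

def sharable_feeds(incident: Incident, sensors: list[Sensor]) -> list[str]:
    """Return the sensor IDs that may be shared for this incident only."""
    if not incident.active:
        return []           # no triggering event, no access
    return [s.sensor_id for s in sensors if incident.location in s.covers]

# Four cameras from local businesses cover the same intersection
cams = [Sensor("cam-NE", {"Main St & 5th Ave"}),
        Sensor("cam-SE", {"Main St & 5th Ave"}),
        Sensor("cam-SW", {"Main St & 5th Ave"}),
        Sensor("cam-NW", {"Main St & 5th Ave"})]
print(sharable_feeds(Incident("Main St & 5th Ave", active=True), cams))
```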

So, with data collection available in the originating service provider networks, the ability to pass that data through an Emergency Services IP network and get it into the hands of public safety now exists in a growing number of areas. The next step is to categorize and score the appropriate data to establish a level of relevancy.

Let’s take a look at how this capability could easily exist within an enterprise network today. The process can be as simple as follows. Assume that an enterprise has the following points of data available, with varying degrees of accuracy: the building, the Avaya LDS application providing X/Y information, the subnet, and the switch port. Some of that data is very accurate, and other data points are not, or may be missing altogether.
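
As a rough sketch, those data points might be represented something like this (the field names are illustrative assumptions, not an Avaya data model):

```python
# Hypothetical container for the location data points described above;
# any field may be missing for a given device.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class DeviceLocationData:
    building: Optional[str] = None                  # building identifier
    lds_xy: Optional[Tuple[float, float]] = None    # X/Y from the Avaya LDS application
    subnet: Optional[str] = None                    # IP subnet the device sits on
    switch_port: Optional[str] = None               # data switch port it is patched to

# Example: a device where every data point happens to be known
device = DeviceLocationData(building="HQ-1",
                            lds_xy=(43.6532, -79.3832),
                            subnet="10.20.30.0/24",
                            switch_port="IDF2-gi1/0/14")
```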

Intelligent Weighted Accuracy Scoring

For this exercise we will assign the VALUES of the different data points as follows: Building 4, LDS 3, Subnet 2, and the port number on the switch 1. We can then apply an accuracy factor, based on our knowledge of how accurate each individual data point is for a particular area. For example, the building data is ALWAYS right, so it gets 100% accuracy. As for LDS, the latitude/longitude, or X/Y, information we collect from cellular devices is highly accurate. The problem is that devices don’t always contribute a Z, or altitude, value, so there may be a question as to which floor a person is on. Because of this we will devalue this information by 10%, so we will only use 90% as a maximum. Our subnet information is mostly accurate, but we had to extend cables in certain areas beyond normal physical boundaries, so we will devalue that rating by 20%. As for the data switch port information, we know that cables are moved in the IDFs on a regular basis and don’t get documented, so we will discount their value by 40%. Remember, all of this is just an estimate based on the real-world knowledge we have about our networks. Some areas can be discounted further than others, and in areas where the data is particularly accurate, minimal discounting can be applied.

Using this very simple and basic algorithm, we can say that a perfect score is 10 if the maximum value is achieved for each indicator and none of the indicators are discounted. Based on this, you can see that consolidating everything that is known about a particular device yields an 8.9 out of 10 rating for that location information (4 × 1.0 + 3 × 0.9 + 2 × 0.8 + 1 × 0.6 = 8.9), telling us that whatever location we come up with should be statistically correct about 89% of the time. Staying true to this model will allow enterprises to establish minimum scores for particular areas and plan remediation where it will provide the biggest value.
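
Here is a minimal sketch of that scoring, using the example weights and accuracy factors above; the function and its structure are illustrative, and a real deployment would tune the numbers per area:

```python
# Weighted accuracy score: the value of each data point times how much we
# trust it, normalized to a 0-10 scale. Numbers are the example values above.
WEIGHTS  = {"building": 4, "lds": 3, "subnet": 2, "switch_port": 1}
ACCURACY = {"building": 1.00, "lds": 0.90, "subnet": 0.80, "switch_port": 0.60}

def location_score(available: dict) -> float:
    """Return a 0-10 confidence score; `available` maps each data point
    name to True/False depending on whether it is known for the device."""
    perfect = sum(WEIGHTS.values())                    # 10 with these weights
    earned = sum(WEIGHTS[k] * ACCURACY[k]
                 for k, present in available.items() if present)
    return round(10 * earned / perfect, 2)

# All four data points known: 4*1.0 + 3*0.9 + 2*0.8 + 1*0.6 = 8.9, i.e. 89%
print(location_score({"building": True, "lds": True,
                      "subnet": True, "switch_port": True}))   # 8.9
```

Under the same assumptions, a device missing its LDS data would score 10 × (4.0 + 1.6 + 0.6) / 10 = 6.2, immediately flagging that area as a candidate for remediation.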

Once again, there’s no secret sauce here. We’re not collecting any more data than we can collect today. We’re just looking at that data in a new and informative way and applying a little statistical evaluation to the level of accuracy we believe we are achieving. The impact this can have on emergency response is significant. If we know that we’re going to a particular building, but the data shows that the caller could be on either of two floors, we can double up on the response teams for that event and check both floors simultaneously. That allows us to get help where it’s needed as fast as possible and adjust the level of response accordingly, while the network teams identify the trouble spots and plan the most appropriate remediation strategies.

Take care and stay well.

Follow me on Twitter @Fletch911
Check out my Blogs on http://Fletch.TV
