For example, let’s say you have a security system that automatically unlocks the door when an authorized person intends to go through it. If, instead of properly recognizing the intention to pass through the door, the system simply looks at proximity, then every time an authorized person walks down a hallway, all the doors unlock, which is not very secure. That’s why we focus on bringing security and convenience together through sophisticated intent detection.
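As an illustration only (the interview does not describe HID’s actual method), the difference between proximity and intent can be sketched with a simple trajectory heuristic: unlock only when a tracked person is both closing distance to the door and heading roughly toward it, rather than walking past it. The door position, track coordinates, and thresholds below are all hypothetical.

```python
import math

# Hypothetical door location in floor coordinates (meters).
DOOR = (10.0, 0.0)

def intends_to_enter(track, angle_tol_deg=30.0, min_approach=0.5):
    """Crude intent heuristic: the person must be closing distance to the
    door AND heading roughly toward it, not just passing nearby."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    d0 = math.dist((x0, y0), DOOR)
    d1 = math.dist((x1, y1), DOOR)
    if d0 - d1 < min_approach:  # not actually getting closer
        return False
    heading = math.degrees(math.atan2(y1 - y0, x1 - x0))
    to_door = math.degrees(math.atan2(DOOR[1] - y1, DOOR[0] - x1))
    diff = abs(heading - to_door) % 360
    diff = min(diff, 360 - diff)
    return diff < angle_tol_deg

# Walking straight toward the door:
print(intends_to_enter([(0, 0), (2, 0), (4, 0)]))  # True
# Walking down the hallway past the door:
print(intends_to_enter([(0, 3), (4, 3), (8, 3)]))  # False
```

A proximity-only system would unlock in both cases; the second track gets within a few meters of the door while simply passing by, which is exactly the failure mode described above.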
We are also continuing the journey of fusing mobile devices with physical access systems, which is a very ripe area for work for my team and others. That computer and sensor network that you carry around in your pocket has a lot of valuable information that can be combined with physical access control systems in numerous ways.
Tangential to physical security, although it touches on it, is real-time location services: using an RFID tag, for example, to identify a person or an asset. In hospital environments, that means knowing where doctors and nurses are and where important equipment, or even patients, are located. In that area, we are embarking on deploying state-of-the-art AI methods to increase position-estimation accuracy and simultaneously decrease latency.
This touches security in various ways, especially with emergency notification, where you want to know with high certainty and quickly where that person is so you can get the right resources to that area as fast as possible. That’s where we’re seeing real gains in introducing artificial intelligence and moving away from some of the traditional position estimation methods.
Rowe: That’s part of it; it contributes to the latency reduction. However, the other thing AI can do is sort through data that might be discrepant under classical assumptions.
Many classical algorithms assume that RF (radio frequency) signals have certain characteristics. That’s not the case in the real world, where metal girders and other infrastructure distort the RF signals and the classic assumptions no longer hold. AI can take those real-world characteristics into account and give you much better accuracy.
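One common data-driven alternative to idealized propagation models is RSSI fingerprinting: instead of assuming how signal strength decays with distance, you survey the real building and estimate position from the nearest recorded measurements. This is a minimal sketch under hypothetical assumptions (the reader layout, survey points, and dBm values are invented; it is not the specific method discussed in the interview).

```python
import math

# Hypothetical fingerprint database: known (x, y) survey positions and the
# signal strengths (dBm) observed from three fixed readers at each spot.
# In a real deployment these come from a site survey, which implicitly
# captures girders, walls, and other distortions.
FINGERPRINTS = [
    ((0.0, 0.0), [-40, -70, -75]),
    ((5.0, 0.0), [-70, -42, -72]),
    ((0.0, 5.0), [-72, -71, -41]),
    ((5.0, 5.0), [-68, -66, -60]),
]

def estimate_position(rssi, k=2):
    """k-nearest-neighbor fingerprinting: average the k survey points
    whose recorded RSSI vectors are closest to the live reading."""
    ranked = sorted(FINGERPRINTS, key=lambda fp: math.dist(fp[1], rssi))[:k]
    xs = [pos[0] for pos, _ in ranked]
    ys = [pos[1] for pos, _ in ranked]
    return (sum(xs) / k, sum(ys) / k)

# A tag reads [-41, -69, -73]: strongest at reader 1, so it should land
# near the (0, 0) survey point.
print(estimate_position([-41, -69, -73], k=1))  # (0.0, 0.0)
```

Because the estimate is learned from real measurements rather than derived from a free-space path-loss formula, it tolerates exactly the kind of distorted RF environment described above.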
And yes, it is about data volume. Larger enterprise organizations across different verticals, such as companies with hundreds or thousands of employees, can really benefit from AI because of the volume of data they produce.
Rowe: It depends on the specific instance we’re discussing, but AI can certainly help.
I’ll give you an example: data coming from different smaller systems in multiple formats that you’d like to combine. Historically, you had to figure out manually how those data streams could be combined uniformly. Now you can use AI to do that sort of massaging of data and create uniform data streams from disparate systems. In that sense, it can really help with the heavy lifting.
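To make the “massaging of data” concrete, here is a minimal sketch of normalizing two invented legacy access-event formats into one canonical stream. The field names and formats are hypothetical; in the workflow described above, an AI system might generate mapping functions like these from sample records instead of an engineer writing them by hand.

```python
import json
from datetime import datetime, timezone

# Two hypothetical event formats from different legacy access systems.
EVENT_A = '{"badge": "1001", "door": "D-12", "ts": "2024-05-01T08:30:00Z"}'
EVENT_B = '{"user_id": 1001, "reader": 12, "epoch": 1714552200}'

def normalize_a(raw):
    """System A already uses string IDs and ISO-8601 timestamps."""
    e = json.loads(raw)
    return {"user": e["badge"], "door": e["door"], "time": e["ts"]}

def normalize_b(raw):
    """System B uses numeric IDs and Unix epoch seconds; convert both."""
    e = json.loads(raw)
    return {
        "user": str(e["user_id"]),
        "door": f"D-{e['reader']}",
        "time": datetime.fromtimestamp(e["epoch"], tz=timezone.utc)
                        .strftime("%Y-%m-%dT%H:%M:%SZ"),
    }

# Both records describe the same badge-in and normalize identically.
print(normalize_a(EVENT_A) == normalize_b(EVENT_B))  # True
```

Once every source emits the same canonical shape, downstream analytics and anomaly detection can treat the disparate systems as one stream.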
Rowe: Certainly, people are doing it, though not universally, and looking forward, I think more and more will. Prediction is a little tricky because there’s always the unexpected: if you plan for some set of scenarios, invariably another scenario you hadn’t planned for comes along in the real world. We talked about anomaly detection, where you try to capture all the normal things and then identify what’s abnormal. Right now, I think anomaly detection in general is one of the most important tools in the AI arsenal.
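The “capture normal, flag abnormal” idea can be sketched in a few lines: model a user’s own badge-in history and flag readings that deviate sharply from it. This is a toy z-score example with invented data, standing in for the far richer statistical and learned models a real deployment would use.

```python
import statistics

# Hypothetical history of one user's daily badge-in hours (24h clock).
history = [8.5, 8.7, 8.4, 8.6, 8.9, 8.3, 8.5, 8.8]

def is_anomalous(hour, history, threshold=3.0):
    """Flag a badge-in time whose z-score against the user's own
    history exceeds the threshold; 'normal' is learned from the data."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(hour - mean) / stdev > threshold

print(is_anomalous(8.6, history))   # False: a typical morning badge-in
print(is_anomalous(23.5, history))  # True: an 11:30 pm entry stands out
```

The appeal for the unplanned-scenario problem is that nothing here enumerates specific threats: any sufficiently unusual event is surfaced, including scenarios nobody anticipated.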
Rowe: Yes, machine learning is absolutely critical, depending on the use cases and the data volumes involved. Training still matters, but today, with so-called foundation models or frontier models, big tech already does the training and makes the result available for others to adapt. That training is onerous; people throw around numbers like $100 million to train a large language model, which most companies can’t afford. But by taking an existing trained model and adapting it to a specific purpose, we can apply it.
The other thing that’s going on, particularly in the open-source community, is that these foundation models are getting better while also getting smaller, occupying less memory and fewer computational resources, so adopting and adapting them doesn’t take nearly as much data. The training requirements shrink as these foundation models, cloud services, and open-source models become smaller.
The other thing that’s happening is a move to the edge, with sophisticated computation occurring on the edge device rather than going back to some cloud service somewhere. We’re increasingly seeing the confluence of smaller, more powerful local models with more capable edge-device computational units, the neural processing units. Combining all of that lets us bring more capabilities to people, and doing it at the edge has a variety of benefits.
Rowe: Right now, with ChatGPT, Gemini, and Claude, there is certainly a growing awareness, and with that comes a growing concern. The most concrete manifestation of that concern is the regulatory environment: we follow the regulations that different regions adopt, and they aren’t always well aligned. Different regulations in different places touch on various aspects of AI systems, so not only are they evolving over time, they also differ by region. That makes for a complex environment in which to introduce products. We’re thinking about that even in the initial stages: how do we meet privacy requirements? How do we meet informed-consent requirements? How do we meet all these regulatory standards that are coming into view?
Rowe: I think routine, tedious tasks in access control, such as some poor person sitting in a room monitoring multiple video feeds, should go away, and the technology, if not there today, very soon will be able to take over that sort of routine task. So, with routine monitoring we’ll see more AI coming in, freeing people up to respond better to the alerts AI generates.
One underappreciated area is the impact of large language models on the user interface. Language models are defining the next user interface; we’re entering a new epoch, and we’re just at the earliest point.
As you mentioned, my concern would be bad press. If somebody in the security industry implements something poorly and it shines a negative light on that technology area, then other companies that implemented it properly are adversely affected.