Originally published on IFSEC Insider.
Dakota Murphey provides an overview of the opportunities and precautions the security industry must consider as AI adoption in video surveillance grows.
From smart alarms to CCTV monitoring solutions that use intelligent analytics to analyse video footage and detect anomalies in real time, the surveillance landscape is evolving rapidly under the influence of AI/ML technology.
Not only this, but AI-powered facial recognition software adds a whole new dimension to biometrics and access control.
For surveillance system integrators and providers, there is also the sheer volume of data that can be autonomously aggregated and condensed to inform top-level strategic decisions.
In short, it’s clear that AI/ML technology in video surveillance holds numerous promises, from enhanced security to greater productivity and actionable insights collated more quickly. As a result, it may be tempting to begin integrating more AI/ML programmes and tools into your operations; however, you should be mindful of certain risks and limitations when adopting this technology at scale.
Automated Monitoring: AI allows surveillance systems to continuously and autonomously monitor environments for abnormalities and threats. This consistency and high level of accuracy allow security teams to dedicate more resources and time to higher-level tasks. Machine learning algorithms can be trained to identify signs of trespassing, loitering, vandalism, violence and fire.
Real-Time Alerts: Intelligent cameras with built-in analytics can immediately notify security personnel when a high-risk event is detected, allowing for rapid response. This is far more effective and efficient than manual video reviews, which are subject to human error (a simple sketch of this detect-and-alert loop follows this list).
Facial Recognition: AI facial recognition provides a touchless biometric solution for visitor management, access control and watchlist screening. Stadiums, airports and other high-traffic facilities are employing this technology to identify individuals of interest for swift containment and referral to the authorities if needed (see the watchlist screening sketch below).
Crowd Analysis: Computer vision algorithms can scan crowds to detect high density levels, suspicious abandoned objects, abnormal noise levels, higher-than-normal temperatures, smoke and many other hazardous scenarios. This allows officials to manage crowds proactively and prevent situations from escalating into dangerous ones.
Investigations: AI enables intelligent video search to easily surface clips of interest from vast surveillance archives. Security staff can use visual data mining to quickly investigate incidents and find correlations.
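To make the automated monitoring and real-time alert idea concrete, here is a minimal illustrative sketch in Python using OpenCV's classical background subtraction. The stream URL, the pixel threshold and the notify_security_team helper are all hypothetical placeholders, not any vendor's implementation; commercial analytics would layer trained detection models on top of this basic change-detection loop.

```python
import cv2  # OpenCV (pip install opencv-python); assumed available for this sketch

STREAM_URL = "rtsp://camera.example.local/stream"  # hypothetical camera feed
MOTION_PIXELS_THRESHOLD = 5000                     # illustrative sensitivity value


def notify_security_team(frame_index: int) -> None:
    """Placeholder alert hook -- a real deployment might push to a VMS,
    SMS gateway or incident-management platform instead."""
    print(f"ALERT: significant motion detected at frame {frame_index}")


def monitor(stream_url: str = STREAM_URL) -> None:
    capture = cv2.VideoCapture(stream_url)
    # Background subtraction flags pixels that differ from a learned model
    # of the static scene; shadows are disabled to keep the count simple.
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break  # stream ended or dropped
        mask = subtractor.apply(frame)
        changed_pixels = cv2.countNonZero(mask)  # how much of the scene changed
        if changed_pixels > MOTION_PIXELS_THRESHOLD:
            notify_security_team(frame_index)
        frame_index += 1

    capture.release()


if __name__ == "__main__":
    monitor()
```

In practice the threshold and alerting channel would be tuned per camera and site; the point of the sketch is simply that detection and notification run continuously without a human watching the feed.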
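Watchlist screening, meanwhile, typically boils down to comparing face embeddings. The sketch below assumes an upstream model has already converted detected faces into fixed-length vectors (a 128-dimensional embedding and a 0.6 threshold are used purely for illustration) and matches them with cosine similarity; the names and data are hypothetical.

```python
import numpy as np

MATCH_THRESHOLD = 0.6  # illustrative; real deployments tune this against false-match rates


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors, in the range [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def screen_against_watchlist(face_embedding: np.ndarray,
                             watchlist: dict) -> str | None:
    """Return the watchlist identity whose stored embedding is most similar
    to the captured face, or None if nothing clears the threshold."""
    best_name, best_score = None, MATCH_THRESHOLD
    for name, stored_embedding in watchlist.items():
        score = cosine_similarity(face_embedding, stored_embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name


if __name__ == "__main__":
    # Toy usage: random vectors stand in for embeddings from a real face model.
    rng = np.random.default_rng(0)
    watchlist = {"person_of_interest_01": rng.normal(size=128)}
    probe = watchlist["person_of_interest_01"] + rng.normal(scale=0.05, size=128)
    print(screen_against_watchlist(probe, watchlist))  # -> "person_of_interest_01"
```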
While the benefits of widespread AI adoption in a security or surveillance environment seem apparent, unsupervised use raises some concerns for both integrators and end users.
Cyber security: Connected camera networks – along with the enterprise infrastructure they sit on – create a large attack surface vulnerable to compromise by malicious hackers. Steps must be taken to encrypt video feeds, keep software continuously patched and follow wider cyber security best practices, including stringent access controls, multi-factor authentication (MFA) and investment in off-site penetration testing to identify hidden vulnerabilities.
Privacy: Many organisations have raised concerns about the privacy implications of widespread public camera systems coupled with technologies like facial recognition. There are fears about constant monitoring and tracking of individuals without consent, along with fears that this data could be used unscrupulously or passed to unknown third parties.
Bias: Like any data-driven technology, AI models reflect the unconscious biases present in their training data. Critics point out that facial recognition systems often perform worse on women and people of colour. Therefore, measures must be taken to ensure fair, unbiased usage and continual improvement (a basic per-group audit is sketched after this list).
Over-reliance: There are worries that over-dependence on AI surveillance could lead to complacency and substandard human oversight, and that AI/ML adoption could put workers' jobs at risk if organisations focus purely on the bottom line; the technology should augment security staff, not replace their expertise.
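On the bias point above, one practical starting measure is simply to break evaluation results down by demographic group and compare error rates before and after each model update. The sketch below assumes an annotated evaluation log of (group, prediction, ground truth) records; the group labels and sample data are hypothetical.

```python
from collections import defaultdict


def accuracy_by_group(records):
    """records: iterable of (group_label, predicted, actual) tuples taken from
    an evaluation set annotated with demographic groups."""
    totals = defaultdict(int)
    correct = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        correct[group] += int(predicted == actual)
    # Per-group accuracy; large gaps between groups signal a bias problem.
    return {group: correct[group] / totals[group] for group in totals}


if __name__ == "__main__":
    # Tiny illustrative evaluation log (group, model prediction, ground truth).
    sample = [
        ("group_a", "match", "match"),
        ("group_a", "no_match", "no_match"),
        ("group_b", "match", "no_match"),
        ("group_b", "no_match", "no_match"),
    ]
    for group, acc in accuracy_by_group(sample).items():
        print(f"{group}: {acc:.0%} accuracy")
```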
The risks highlighted above should not deter the adoption of AI in surveillance outright, but rather compel companies in the industry to deploy the technology responsibly and not at the expense of human supervisors and operatives.
In short, while security firms advocate the application of AI to help security teams do what they already do, albeit faster and with greater accuracy, there is still a need for responsible human input.
With careful oversight and governance, AI/ML can make monitored environments more secure and relieve security personnel of low-level tasks that can instead be entrusted to computers and algorithms.
Balancing healthy AI/ML integration with resource optimisation can make processes more secure and less time-intensive, freeing teams to take on more high-value work.
Other best practices should be followed alongside these measures.
AI promises to unlock safer, more data-driven security and monitoring capabilities than ever before. However, thoughtful governance and diligent oversight are imperative as these powerful technologies continue evolving and proliferating.
The video surveillance industry must lead by example, pioneering AI applications that enhance security while still upholding privacy, accountability and fairness.