This article was originally published on IFSEC Insider.
With the use cases for AI ever-expanding, IFSEC Insider asked several experts to give their predictions for the future of AI in the security sector and possible barriers to adoption for 2024 and beyond.
“The next decade will be dominated by AI applications in the security industry. AI can be very helpful when it comes to assessing and analysing massive amounts of data to help monitor critical security threats, such as detecting anomalies captured on digital video cameras.
“However, AI can also be leveraged for identifying trends and patterns to enable proactive mitigation measures, which otherwise may have gone unnoticed or undetected.
“This means that, in the near future, AI will not only be used for forensic use cases, such as sifting through hundreds or even thousands of hours of recorded video and other data; it will also help prevent future incidents through predictive analysis, recognising common threats and vulnerabilities from historical data combined with real-time situational awareness.
“Leveraging the power of AI most effectively starts with the quality and integrity of the data, which can be significantly enhanced when you are able to secure, integrate and harmonise data coming from disparate security sensors and systems. Through next-generation data unification software such as Advancis Open Platform, the security industry will be able to apply different AI algorithms to create best-of-breed solutions at scale.”
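To make the idea of harmonising disparate feeds concrete, here is a minimal Python sketch that normalises events from two hypothetical sensor systems into a single schema before any AI is applied. The field names and feed formats are invented for illustration; they are not Advancis APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# A common event schema that downstream AI analytics can rely on.
# All field names and feed formats here are invented for illustration.
@dataclass
class SecurityEvent:
    source: str         # which system produced the event
    sensor_id: str
    timestamp: datetime
    event_type: str     # e.g. "motion", "door_forced"

def from_camera_feed(raw: dict) -> SecurityEvent:
    """Normalise a hypothetical camera event (epoch millis, camelCase keys)."""
    return SecurityEvent(
        source="video",
        sensor_id=raw["cameraId"],
        timestamp=datetime.fromtimestamp(raw["epochMillis"] / 1000, tz=timezone.utc),
        event_type=raw["eventType"],
    )

def from_access_control(raw: dict) -> SecurityEvent:
    """Normalise a hypothetical access-control event (ISO time, snake_case keys)."""
    return SecurityEvent(
        source="access_control",
        sensor_id=raw["reader_id"],
        timestamp=datetime.fromisoformat(raw["event_time"]),
        event_type=raw["event_type"],
    )

events = [
    from_camera_feed({"cameraId": "cam-7", "epochMillis": 1700000000000,
                      "eventType": "motion"}),
    from_access_control({"reader_id": "door-3",
                         "event_time": "2023-11-14T22:13:20+00:00",
                         "event_type": "door_forced"}),
]

# With every feed in one schema, the same anomaly model can consume all of them.
events.sort(key=lambda e: e.timestamp)
```

Once everything shares one schema, any of the analytics sketched later in this article can run across all sources at once.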
“We’ll always need trained security operators to take decisions. However, AI is already proving its value in trawling through huge data sets to identify meaningful patterns and trends. For example, it can take over arduous tasks such as monitoring occupancy levels or combing through hours of video footage for specific people and objects.
“Machine learning, a subset of AI, is also already contributing to tangible improvements in accuracy rates. For example, we’ve successfully adopted it in our own AutoVu ANPR solutions to minimise false positive readings.
“The major barrier to long-term adoption is the setting of unrealistic expectations. When introducing a new solution, manufacturers, integrators and end users all have a shared responsibility to seek clarity not just on what it can do but also what it can’t do.
“Additionally, we need to be thinking about deepfakes which use deep learning to create convincing but entirely fictional images, videos, voices or text. Detecting deepfakes is a challenge because the technology is evolving so quickly. Right now, deepfakes train on images of the fronts of faces. So, one way to detect them is to focus on the sides of faces and heads. As detection techniques evolve, I can foresee a future in which our VMS would incorporate a deepfake detection component.”
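On the false-positive point above, the sketch below shows one generic way an ANPR pipeline might gate candidate reads on recogniser confidence and plate syntax. It is a toy illustration with invented thresholds and a single plate format, not how AutoVu actually works.

```python
import re

# Toy illustration of cutting false-positive plate reads: accept a candidate
# only if the recogniser's confidence is high and the text matches a plausible
# plate syntax. Format and threshold are invented for this sketch.
UK_PLATE = re.compile(r"^[A-Z]{2}\d{2}[A-Z]{3}$")  # current UK format, e.g. AB12CDE

def accept_read(plate: str, confidence: float, threshold: float = 0.85) -> bool:
    return confidence >= threshold and bool(UK_PLATE.match(plate))

reads = [("AB12CDE", 0.97), ("O0O0O0O", 0.91), ("AB12CDE", 0.40)]
accepted = [plate for plate, conf in reads if accept_read(plate, conf)]
print(accepted)  # only the high-confidence, well-formed read survives
```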
“Like all technology, AI is a double-edged sword for security. Use of generative AI, such as ChatGPT, has skyrocketed because it enables security providers to develop competent marketing copy in an instant, helps security professionals draft reports, simplifies research, and so on.
“However, generative AI is often confidently and forcefully wrong. This could expose users to liability. For example, consider the case of a query about a law professor in which ChatGPT, citing a Washington Post article, stated that the professor sexually harassed a student on a trip to Alaska.
“Not only did the article not exist, but the professor had never faced such an accusation, had never been to Alaska, and had never taught at the law school. The results could be far worse if a bad actor were able to contaminate a generative AI’s data sources.
“AI tools are being used productively to automate and optimise fraud detection, patch management, threat prediction, and vulnerability identification. But AI is also being weaponised to create more pernicious malware, mimic and enhance successful misinformation campaigns, and improve phishing emails. As in the case of video surveillance, drones and facial recognition, we should expect an ongoing battle with adversaries determined to use technology to attack and exploit us.”
“As the CEO of a leading AI safety company, I lead a team of experts in governance, compliance, risk, and impact mitigation for biometric AI in security and safety applications. We have researched mature, task-driven narrow AIs that improve intruder detection, and we predict that commoditisation will lead to market saturation and reduced profits.
“The sector needs to use new AI to add value. Large language models and generative AI (Gen AI) are accessible through providers such as IBM and Microsoft, and through open-source communities such as Hugging Face. A diverse and talented in-house AI team can solve facilities management problems including frictionless enrolment and access control, visitor tracking and employee safety monitoring.
“However, the dark side of Gen AI produces blatant lies, fake audio, and images. Automated threat detection reliant on inferences from infected sources including internet-trained Gen AI and social media can poison the evidence chain. A capable human-in-the-loop who makes the high-impact decisions is an essential element of safe AI governance and deployment.
“On one hand, AI may take jobs and when poorly governed may create serious legal liabilities. On the other hand, it will drive growth opportunities and create fulfilling employment if organisations drive their responsible and compliant AI strategy from the top down.”
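As a concrete illustration of the open-source route mentioned above, here is a minimal sketch using the Hugging Face transformers library to triage free-text facilities notes with zero-shot classification. The model choice, labels and triage use case are illustrative assumptions, and – as the speaker stresses – a human still makes the decision.

```python
# Minimal sketch of prototyping with open-source Gen AI: zero-shot triage of
# free-text facilities notes. Requires the `transformers` library; the model
# and labels are illustrative choices, not a recommended production setup.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

labels = ["access control issue", "visitor escort needed", "employee safety concern"]

note = "Contractor arrived without a pre-registered badge and is waiting at reception."
result = classifier(note, candidate_labels=labels)

# `result["labels"]` is sorted by score; a human operator still makes the call.
print(result["labels"][0], round(result["scores"][0], 2))
```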
“Contrary to popular opinion, AI has developed incrementally: it is only able to mimic human behaviour and relies on being fed data. In the next decade, I suspect security professionals will use AI platforms to differentiate between animals and humans remotely – or even between people – based on attributes like smell, movement, heartbeat and heat signature.
“We’re already seeing parts of our company use access control systems that flag behaviours or events that are out of the ordinary. These systems learn the normal behaviours of a building’s users over time.

“An employee may use a building from 8.30am to 5pm, Monday to Friday. If that person starts entering at different times, it could indicate a security risk that needs to be explored further.

“Similarly, if a person starts to access an area they have not previously used, the system will flag this for further examination. At no point is the system given rules to apply; it uses machine learning to observe patterns and look for anomalies.”
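The pattern being described – learn a per-user baseline, then flag deviations – can be sketched in a few lines. The toy example below uses simple statistics over entry hours and previously seen areas; real systems are considerably more sophisticated, but the shape is the same, and all data and thresholds here are invented.

```python
import statistics
from collections import defaultdict

# Toy illustration of learning "normal" behaviour from access logs and
# flagging deviations, as described above. Thresholds and data are invented.
history = defaultdict(list)     # user -> observed entry hours
known_areas = defaultdict(set)  # user -> areas previously accessed

def observe(user: str, entry_hour: float, area: str) -> list[str]:
    """Return any anomaly flags, then fold the event into the baseline."""
    flags = []
    hours = history[user]
    if len(hours) >= 20:  # only judge once a baseline exists
        mean, stdev = statistics.mean(hours), statistics.pstdev(hours)
        if stdev > 0 and abs(entry_hour - mean) > 3 * stdev:
            flags.append(f"unusual entry time for {user}: {entry_hour:.1f}h")
    if known_areas[user] and area not in known_areas[user]:
        flags.append(f"{user} accessed a new area: {area}")
    hours.append(entry_hour)
    known_areas[user].add(area)
    return flags

# Baseline: weekday entries around 8.30am to the main office.
for h in [8.4, 8.5, 8.6] * 10:
    observe("employee_42", h, "main_office")

print(observe("employee_42", 2.0, "server_room"))
# -> flags both the 2am entry and the first-time server-room access
```

Note that no rule was hard-coded about 2am or server rooms; both flags fall out of the learned baseline, which is the point the speaker is making.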
“The advancements of deep learning on the edge are a key driver for AI in the security sector. The integration of deep learning enhances analytics accuracy, forming the basis for reliable, scalable and bandwidth-efficient cloud solutions. The combination of edge processing, advanced metadata from the edge, and additional processing on the server or in the cloud – what we refer to as hybrid solutions – creates a scalable and cost-efficient model for more advanced analytics. These solutions often generate events or alerts, or produce data that forms the basis for site insights, typically consumed in the form of dashboards.
“In 2023, the discussion centred on AI risks. Large language models (LLMs) also entered the spotlight, becoming the foundation for generative AI, and we anticipate seeing more security applications powered by LLMs in 2024.
“Looking ahead more broadly, new regulation together with the formation of industry norms will have an impact on the development and adoption of AI as a technology, but it would be incorrect to refer to this process as a challenge. Rather, this is about laying a solid foundation for future innovation where ethics, responsibility and accountability are default.”
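The hybrid model described above – detection at the edge, aggregation on the server or in the cloud – can be illustrated with a small sketch in which cameras emit lightweight metadata events and the server only rolls them up into dashboard counts. The event shape is invented for illustration, not a vendor schema.

```python
from collections import Counter

# Sketch of the hybrid pattern: cameras run detection at the edge and emit
# small metadata events; the server aggregates them instead of shipping and
# reprocessing raw video. The event shape below is invented.
edge_events = [
    {"camera": "cam-1", "hour": 9,  "object": "person"},
    {"camera": "cam-1", "hour": 9,  "object": "vehicle"},
    {"camera": "cam-2", "hour": 10, "object": "person"},
    {"camera": "cam-1", "hour": 10, "object": "person"},
]

# Server/cloud side: roll metadata up into dashboard-ready counts.
dashboard = Counter((e["camera"], e["hour"], e["object"]) for e in edge_events)

for (camera, hour, obj), count in sorted(dashboard.items()):
    print(f"{camera} {hour:02d}:00 {obj}: {count}")
```

The bandwidth saving is the design point: each event above is a few dozen bytes, where the video it summarises would be megabytes.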
“Intelligence-led services have been the gold standard for the security industry for quite some time, and the use of AI is a natural evolution of this – especially within retail security, where it is fast becoming a key factor in tackling different types of retail crime, ranging from shoplifting to the more violent and prolific thefts committed by organised crime groups.
“For example, AI-powered security cameras can analyse and learn the typical movements and behavioural patterns associated with shoplifting, including positioning within the store, suspicious behaviour at self-checkout aisles, and the number of items someone is carrying without a basket.
“This intelligence can be shared with in-store security detectives and analysts based in operations centres who can evaluate the situation and intervene where appropriate before a potential incident can take place.
“It’s this type of high-quality intelligence that Mitie is already harnessing to combat retail crime and ensure the safety and security of our customers’ retail stores. AI technology is a welcome addition to the tools we can use to unlock another layer of intelligence that will keep us a few steps ahead of prolific offenders and more sophisticated and organised crime groups.”
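For a sense of how camera-analytics observations might be turned into an operations-centre alert, here is a toy rule-based sketch using the signals mentioned above. Real deployments learn such patterns rather than hard-coding them; every field and threshold here is invented.

```python
from dataclasses import dataclass

# Toy sketch of routing camera-analytics observations to an operations centre.
# Signals mirror those mentioned in the text; thresholds are invented, and a
# real system would learn these patterns rather than hard-code them.
@dataclass
class Observation:
    subject_id: str
    zone: str            # e.g. "self_checkout", "aisle_4", "exit"
    items_carried: int
    has_basket: bool

def should_alert(obs: Observation) -> bool:
    # Signal from the text: many items carried without a basket.
    if not obs.has_basket and obs.items_carried >= 5:
        return True
    # Signal from the text: handling items at self-checkout without a basket.
    if obs.zone == "self_checkout" and obs.items_carried > 0 and not obs.has_basket:
        return True
    return False

obs = Observation("track-117", "exit", items_carried=6, has_basket=False)
if should_alert(obs):
    print(f"Alert operations centre: review subject {obs.subject_id} in {obs.zone}")
```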
Our experienced consultants can advise on and install practical security solutions, including GPS trackers, overt and covert cameras, alarm systems and more. We also provide manned security services.
Contact Liam Doherty, Security Consultant, at liam.doherty@esarisk.com, on +44 (0)843 515 8686 or via our contact form to learn more.