
3GC POST

Karl Aguilar

The Threat and Opportunity of Using AI as a Weapon



Despite the growing benefits of artificial intelligence (AI), many negative perceptions still surround the technology. The fact that such perceptions have been the subject of countless media reports has made AI a topic of, at times, fierce debate.


The threat of weaponized AI


It does not help that there are already real-life examples being used in the argument against AI. Malicious AI is already in use: BlackMamba, for example, is an AI-powered keylogging malware, and ChatGPT-powered fraud is running rampant across social media networks, with major players like Meta fighting furiously to block ads linking to ChatGPT-themed malware. As early as May 2018, the New York Times reported that researchers in the US and China had successfully commanded AI systems developed by Amazon, Apple, and Google to do things such as dial phones and open websites remotely, without the knowledge of those systems' users. This has raised fears that such techniques are a short step away from more nefarious commands, such as unlocking doors and transferring money.


Because AI algorithms are self-learning, they get smarter with each attempt and failure. Just as companies can use AI to automate and improve business processes, hackers can automate the identification of vulnerabilities and exploit-writing. In fact, AI can help malware avoid detection by existing cybersecurity systems.


No industry is safe given the numerous AI initiatives in place across different companies, each presenting a host of potential vulnerabilities waiting to be exploited, such as malicious corruption or manipulation of the training data, the implementation, or the component configuration.


The threat of AI being weaponized as a criminal tool is no longer a matter of if but of when: not of when weaponized AI will first be used, since it already is, but of when it will be more widely used.


Using AI as a defense mechanism


Despite these threats, businesses have an opportunity to harness the power of AI to strengthen their existing cybersecurity setups. For starters, businesses can integrate AI into their security operations.


AI not only enhances existing detection and response capabilities but also enables new abilities in preventative defense. Companies can also streamline and improve the security operating model by reducing time-consuming and complex manual inspection and intervention processes and redirecting human efforts to supervisory and problem-solving tasks.


In particular, businesses can apply AI at three levels.

  • Prevention and Protection - Researchers are currently focused on studying AI’s potential to stop cyberattacks. While it is still early days, the future of cybersecurity will likely benefit from more AI-enabled prevention and protection systems that use advanced machine learning techniques to strengthen online defenses.

  • Detection - AI can detect changes that appear abnormal without needing "abnormal" to be defined in advance. Its detection capabilities can also move beyond classic machine-learning approaches that require large, curated training datasets, and can provide insights into sources of potential threats from internal and external sensors. It must be noted that such capabilities will require careful policy design and oversight to conform with laws and regulations governing data use.

  • Response - AI can help prioritize the cybersecurity risk areas for attention and intelligent automation. AI can also facilitate intelligent responses to attacks, either outside or inside the perimeter, based on shared knowledge and learning. As such, AI-enabled response systems can segregate networks dynamically to isolate valuable assets in safe “places” or redirect attackers away from targeting valuable data.
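The detection level above can be illustrated with a minimal sketch. The snippet below uses scikit-learn's IsolationForest, a standard anomaly-detection technique: the model is fitted only on examples of normal behavior, so nothing "abnormal" needs to be defined in advance. The traffic features and values here are simulated assumptions for illustration, not a production design.

```python
# Minimal anomaly-detection sketch: fit on normal behavior only,
# then flag observations that deviate from it. Feature choices and
# values below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" traffic: columns are [bytes_sent, connections_per_min].
normal = rng.normal(loc=[500.0, 10.0], scale=[50.0, 2.0], size=(500, 2))

# No labeled attacks and no predefined rule for "abnormal" are needed.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for normal observations and -1 for anomalies.
new_traffic = np.array([
    [510.0, 11.0],    # close to typical traffic
    [9000.0, 200.0],  # sudden spike, e.g. a scan or exfiltration attempt
])
print(model.predict(new_traffic))
```

The key design point is that the model learns a baseline of normal activity and scores deviations from it, which is what lets such systems catch novel attack patterns that no signature database describes.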


Implementing an AI-driven response will require careful design and strategic planning. This will be especially true when it comes to users that should be isolated or quarantined and systems that work at the digital-physical interface.
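One way to picture the design question above is as a graduated response policy: rather than a binary block/allow decision, an anomaly score maps to escalating containment actions. The sketch below is hypothetical; the function name, thresholds, and action labels are assumptions for illustration, not an actual product interface.

```python
# Hypothetical sketch of a graduated, automated response policy.
# Thresholds and action names are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class ResponseAction:
    host: str
    action: str  # one of "allow", "segment", "isolate"


def decide_response(host: str, anomaly_score: float) -> ResponseAction:
    """Map an anomaly score (0.0 = normal, 1.0 = highly anomalous)
    to a containment action of increasing severity."""
    if anomaly_score >= 0.9:
        # High confidence of compromise: cut the host off entirely.
        return ResponseAction(host, "isolate")
    if anomaly_score >= 0.6:
        # Suspicious: move the host to a restricted network segment,
        # away from valuable assets.
        return ResponseAction(host, "segment")
    # Otherwise keep monitoring without disrupting legitimate users.
    return ResponseAction(host, "allow")


print(decide_response("db-server-01", 0.95).action)
print(decide_response("laptop-42", 0.70).action)
print(decide_response("laptop-07", 0.10).action)
```

The graduated structure matters precisely because of the design risks noted above: an overly aggressive policy that quarantines legitimate users, or a physical system taken offline by a false positive, can do as much damage as the attack it was meant to stop.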


As things stand, cybercriminals have the advantage when it comes to using AI. Defenders have their work cut out for them: they must successfully defend their systems at all times, without leaving attackers even the smallest opportunity to exploit.


AI and cybersecurity from multiple perspectives


Given the opportunities and dangers posed by AI, companies need to approach AI and cybersecurity from two perspectives: protecting their own AI initiatives, and using (AI-enabled) cybersecurity to protect their full set of digital assets, whether AI-enabled or not.


Admittedly, this is not a straightforward undertaking, as there are many factors to take into account: protecting data against biased inputs, the availability of technical and human monitoring capabilities, the presence of appropriate governance policies, and a sufficient focus on educating technicians and end users about AI, among many others.


Companies should have an objective assessment of where they stand on how they are using and/or will be using AI in their products and services. Such assessment will help in laying the foundation of a cybersecurity system that can withstand the present and future challenges posed by AI and other emerging technologies while harnessing the potential of these same technologies in bolstering security for the enterprise.
