
AI's Promises and Risks for Biosecurity Explored in New DHS Report


A new report from the Department of Homeland Security (DHS) highlights the dual potential of artificial intelligence (AI) to both enhance and endanger U.S. biosecurity. AI's influence spans industries from energy to defence, but its role in biosecurity is particularly consequential.


The DHS findings, part of the AI and Chemical, Biological, Radiological, and Nuclear (CBRN) Report to the President, stem from President Biden's executive order on AI, which requested an evaluation of AI's risks, including its potential use in developing biological weapons.


AI is already having a significant impact on scientific research, with the capability to affect outcomes positively or negatively depending on user intent and data quality. Publicly available AI models improve scientists' ability to design new molecules and understand protein-toxin interactions. They also enhance agricultural practices and crop yields. However, these models often suffer from high failure rates and data validity issues, and AI experts caution that some errors in current models may be intrinsic to their operation and difficult to rectify.


A primary concern is that AI lowers the barrier to entry for designing new molecules, which malicious actors could exploit to conceptualize and execute chemical and biological attacks. The report emphasizes the need for continued collaboration among industry, government, and academia to address these risks and to leverage AI's benefits for CBRN prevention, detection, response, and mitigation.


The report advocates for AI tools that can help attribute chemical or biological attacks to their sources and monitor compliance with international weapons agreements. AI-enabled pattern recognition could detect signs of an attack before it happens, and AI is already being used in passenger and cargo screenings.


A broader debate surrounds access to AI models: tech companies favour closed models for commercial reasons, while researchers and startups push for open models they can build upon. National security officials warn that open models could facilitate the engineering of dangerous pathogens. Additionally, AI-driven misinformation could influence healthcare decisions or emergency responses.


The potential risk of AI to biosecurity remains contentious. Recent studies from OpenAI, RAND, and others argue that current large language models do not significantly increase the risk of bioweapons. However, as AI models evolve, many researchers advocate for continuous assessment of AI in biotechnology to mitigate emerging threats.
