Date: April 02, 2024
Amid rising concerns about AI safety, the US and UK governments have joined forces to take collective action. This transatlantic cooperation has produced the world’s first bilateral agreement on testing and assessing AI safety. Rather than deploying broad regulations governing the development of AI, the UK government has chosen a targeted strategy to address the growing risks around AI usage.
U.S. Commerce Secretary Gina Raimondo and U.K. Science Minister Michelle Donelan signed the agreement in Washington, DC on Monday. Under the deal, the two countries will pool resources, data, technology, and talent to develop stringent methodologies for testing and assessing AI risks. While the United Kingdom is more financially invested in the endeavor, the United States brings its network of big tech companies and government agencies.
According to a blog post on the National Cyber Security Centre’s website, agencies from 18 countries, including the US, have endorsed new UK-developed guidelines on AI cybersecurity. The guidelines were led by GCHQ’s National Cyber Security Centre and developed with the US’s Cybersecurity and Infrastructure Security Agency, in cooperation with industry experts and 21 other international agencies and ministries from across the world, including those from all members of the G7 group of nations and the Global South.
“These guidelines mark a significant step in shaping a truly global, common understanding of the cyber risks and mitigation strategies around AI to ensure that security is not a postscript to development but a core requirement throughout,” said NCSC CEO Lindy Cameron.
With so many countries joining forces to tackle the challenges of secure AI development, a unified approach can support both private players and government bodies. AI is developing at a phenomenal pace and driving game-changing shifts across industries worldwide. As more countries align on a single framework for regulation, monitoring, and enforcement, subjective disputes can give way to objectively grounded agreements.
Cybersecurity has always been a continuously evolving effort, and AI cuts both ways for safety measures. On the one hand, it can strengthen security features and monitoring capabilities; on the other, it can be used to autonomously probe and breach operating systems. Multiple countries have begun putting in place their own AI regulations and governance policies.