Global AI Safety Hampered by Indecision, Regulatory Delays

Governments are attempting to put safety measures in place for artificial intelligence, but they face delays from bureaucratic hurdles and a lack of consensus on what international agreements should prioritize or prohibit.

In November 2023, the UK introduced the Bletchley Declaration, rallying 28 countries, including the US and China, along with the EU, to strengthen cooperation on AI safety. It was followed in May 2024 by the second global AI summit, where the UK and South Korea secured commitments from 16 leading AI firms to adhere to safety standards.

The UK said in its statement that the declaration met its goals: establishing a shared understanding of, and accountability for, AI's risks and benefits, and fostering international cooperation, especially in scientific research.

In May, the EU passed the AI Act, the world's first comprehensive AI regulation, complete with enforcement mechanisms and substantial fines for violations.

Joseph Thacker of AppOmni underscored the critical role of government in AI, particularly for national security, while noting that a well-informed approach demands significant resources and time.

AI Safety Essential for SaaS Platforms

AI safety is increasingly crucial because most software, including AI tools, is now developed as SaaS applications, Thacker observed. Safeguarding the security and integrity of those platforms is therefore paramount.

He noted that as existing SaaS providers integrate AI ever more deeply, the risks grow, a factor he argued government bodies should take into account.

US Response to AI Safety Needs

Thacker urged the US government to adopt a more proactive stance on the lack of AI safety standards. He acknowledged the commitment by 16 leading AI companies to safety and responsible AI use as a positive step.

"This demonstrates an increasing recognition of AI's risks and a commitment to address them. Yet, the true measure will be in their execution and openness regarding safety practices," he remarked.

Still, Thacker felt the commitments fell short in two significant respects: they outline no repercussions for non-compliance, and they fail to align incentives, both of which he considers vital.

He believes that mandating AI firms to disclose their safety frameworks would foster accountability, offering a glimpse into the rigor of their testing processes. Such transparency could lead to greater public oversight.

"Moreover, it could encourage the exchange of knowledge and the establishment of industry-wide best practices," Thacker added.

He also called for expedited legislative measures but acknowledged the inherent difficulty of achieving swift progress given the typical pace of US governmental processes.

"The formation of a bipartisan group to push these recommendations might initiate broader discussions," he hoped.

The Global AI Summit marked significant progress in ensuring the responsible development of AI, according to Melissa Ruzzi, AI director at AppOmni, who emphasized the importance of regulatory frameworks.

"However, before we can establish these regulations, extensive research is necessary," Ruzzi explained to TechNewsWorld.

She highlighted the critical role of voluntary industry cooperation in AI safety initiatives, noting that defining clear standards and benchmarks is the initial hurdle to address. "We're not yet at a point where we can universally apply these standards across the AI sector," Ruzzi stated.

Further research and data collection are needed to define these standards accurately, she added. Ruzzi also pointed out the ongoing challenge of ensuring that AI regulations evolve alongside technological advancements without impeding innovation.