At I/O 2023, Google made it evident that AI could play a major role in shaping the future of humankind. But like any other piece of software, AI tools are prone to cyberattacks and misuse. We're still in the early days of the technology, and Google wants to help companies adopt policies that ensure the security of AI models and user data isn't a mere afterthought thrown in for regulatory compliance.

Nowadays, cybersecurity needs to be proactive rather than patching loopholes after the damage is done. With AI models, there's a real danger of bad actors stealing training data or altering models to suit their agenda. Google has suggested a six-point framework, dubbed the Secure AI Framework or SAIF (via Axios), to safeguard AI efforts as companies race to ship their offerings ahead of the competition.

First, the company suggests analyzing how a business's existing security measures can extend to bring AI efforts into the fold. One example that comes to mind is encryption, which is available in various forms and relatively easy to deploy. This would help smaller businesses keep investments in data security to a minimum while providing most of the associated benefits. Google also suggests expanding existing threat research to understand the dangers AI models face. Implementing this can be as simple as monitoring the inputs and outputs of generative AI to catch anomalies.
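To illustrate what that kind of monitoring could look like, here is a minimal Python sketch that wraps a text-generation call with simple input and output checks. The pattern list, the monitored_generate wrapper, and the generate callable are hypothetical placeholders for this example, not part of SAIF itself; a real deployment would use an organization's own detection logic and threat intelligence.

```python
import re

# Hypothetical denylist patterns for the sake of the example; a real system
# would rely on the organization's own threat intelligence and classifiers.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",   # common prompt-injection phrasing
    r"\b\d{3}-\d{2}-\d{4}\b",                 # SSN-like strings leaking into output
]

def flag_anomalies(text: str) -> list[str]:
    """Return the patterns that match a prompt or a model response."""
    return [p for p in SUSPICIOUS_PATTERNS if re.search(p, text, re.IGNORECASE)]

def monitored_generate(prompt: str, generate) -> str:
    """Wrap an arbitrary text-generation callable with basic input/output checks."""
    if hits := flag_anomalies(prompt):
        raise ValueError(f"Prompt rejected, matched: {hits}")
    response = generate(prompt)
    if hits := flag_anomalies(response):
        # Log the event and withhold the raw response for human review.
        print(f"ALERT: response flagged for review, matched: {hits}")
        return "[response withheld pending review]"
    return response
```

Because the model call is passed in as a plain callable, the same checks can sit in front of any generative model, regardless of which vendor or API is behind it.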

Automated responses to security incidents are another great way to curtail the damage, much like locking down an area when an active criminal is identified there. Google theorizes that bad actors will probably use AI to scale the impact of their actions, so researchers and developers can play an Uno reverse card and use emerging AI capabilities to safeguard their work proactively. Needless to say, regularly reviewing the safeguards is just good practice. With the rate at which AI is evolving, Google suggests companies test their own defenses occasionally to ensure they can hold up against newer, more sophisticated attacks.
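As a rough illustration of what an automated response might look like, the sketch below locks out a client once too many of its requests are flagged within a short window. The threshold values, the client_id identifier, and the lock_down action are assumptions made for this example, not anything Google prescribes.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds; real values would come from an organization's
# own incident-response playbook.
MAX_FLAGS = 5          # flagged requests allowed...
WINDOW_SECONDS = 60    # ...within this sliding window

_flag_history: dict[str, deque] = defaultdict(deque)

def record_flag(client_id: str) -> None:
    """Record a flagged request and lock the client out if it crosses the threshold."""
    now = time.time()
    history = _flag_history[client_id]
    history.append(now)
    # Drop events that have aged out of the sliding window.
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_FLAGS:
        lock_down(client_id)

def lock_down(client_id: str) -> None:
    """Placeholder automated response: revoke access and alert the security team."""
    print(f"Lockdown triggered for {client_id}: access revoked, on-call notified")
```

The appeal of this pattern is speed: the lockout happens the moment the threshold is crossed, buying the security team time to investigate before more damage is done.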

Remarking on the omnipresent threat to AI models, Google Cloud Chief Information Security Officer (CISO) Phil Venables pointed to a startling similarity between AI security and how companies control data access. This observation helped Google develop a framework that’s easy to implement and scale for most organizations working with AI.

The company is open to improving the framework through input from government bodies and industry partners. Google has also expanded its bug bounty program, encouraging researchers to submit AI-related safety concerns. With these security measures in place, AI research and development looks to be well sheltered from attacks and attempts at misuse.