The UK and US governments, with endorsement from 18 countries, have developed the first global guidelines for AI cybersecurity. Led by the UK's NCSC and the US's CISA, the guidelines promote a 'secure by design' approach to AI development, drawing on global collaboration and advice from industry experts. This means building essential security and privacy protections in from the outset, rather than trying to bolt them on once a system has been developed.
Josh Davies, Principal Technical Manager at cybersecurity company Fortra, thinks such measures are necessary, though challenges and obstacles likely lie ahead.
Davies observes: “The AI arms race and rapid adoption of open AI systems have created concerns in the cybersecurity sector around the impact of a supply chain compromise – where the AI source code is compromised and used as a trusted delivery mechanism to pass on the compromise to third party users.”
Emerging policy frameworks offer a way to tackle this, as Davies notes: “These guidelines look to secure the design, development, and deployment of AI which will help reduce the likelihood of this type of attack.”