The SecurityANGLE

The Evolution of the AI Threat - Three Stages to Watch in 2024

February 05, 2024 | SiliconANGLE | Season 1 Episode 1

In this episode of the #SecurityANGLE, our focus is on the evolution of #AI as a threat, and we explore the three stages in which we expect this evolution to unfold in 2024. The three stages we discuss on today's show are:




- AI Threat Actors Take the Stage. We explore how human threat actors are being augmented with AI capabilities and how those capabilities will act as a force multiplier.


- New AI Threat Vectors Emerge. AI will continue to enhance existing attack vectors and will also create new vectors built on the quality of generative AI output itself.


- AI Code Assistants Introduce Further Vulnerability. The growing adoption of AI coding assistants will likely lead software developers to introduce more errors, namely writing #security vulnerabilities into source code. We explore interesting findings from Stanford research on instances of insecure code and the role AI assistants are playing there; a brief illustration of this class of flaw follows below.
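To make that third stage concrete, here is a minimal, hypothetical sketch of the kind of flaw the Stanford work points at: a database lookup built by string interpolation (a pattern code assistants commonly suggest) alongside the parameterized alternative. The table schema, function names, and inputs are illustrative assumptions, not taken from the study or the episode.

```python
import sqlite3

def get_user_insecure(conn: sqlite3.Connection, username: str):
    # Pattern an assistant might suggest: SQL built by string interpolation.
    # If `username` is attacker-controlled, this is open to SQL injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_parameterized(conn: sqlite3.Connection, username: str):
    # Safer alternative: a parameterized query lets the driver handle quoting.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users (name, email) VALUES ('alice', 'alice@example.com')")
    crafted = "' OR '1'='1"  # classic injection payload
    print(get_user_insecure(conn, crafted))       # returns every row in the table
    print(get_user_parameterized(conn, crafted))  # returns an empty list
```

The difference is the whole point: the interpolated query treats attacker input as SQL, while the parameterized query treats it strictly as data.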


Follow theCUBE's wall-to-wall coverage as the roving news desk for SiliconANGLE reports live from tech's top events

https://siliconangle.com/category/cube-event-coverage/
