Longterm Wiki

The Dangers of Unregulated Generative AI and Open Source Models

web

An industry perspective from IBM on AI governance risks; useful as a corporate policy viewpoint but represents a vendor's commercial interests in the AI regulation debate rather than independent research.

Metadata

Importance: 38/100 · blog post · commentary

Summary

An IBM Think Insights article examining the risks posed by unregulated generative AI and open-source AI models, arguing for governance frameworks and oversight mechanisms to mitigate potential harms. It discusses how unrestricted access to powerful AI systems can enable misuse, and advocates for policy interventions and responsible deployment practices.

Key Points

  • Unregulated generative AI poses significant risks including misuse for disinformation, cyberattacks, and harmful content generation.
  • Open-source AI models lower barriers to access, which can democratize AI but also enables bad actors to misuse powerful capabilities.
  • Governance frameworks and oversight mechanisms are needed to balance innovation with safety in AI deployment.
  • IBM advocates for industry standards and policy interventions to ensure responsible AI development and use.
  • The tension between open access and safety requires careful consideration of licensing, deployment controls, and accountability measures.

Cited by 1 page

Page                    Type      Quality
Open Source AI Safety   Approach  62.0

Cached Content Preview

HTTP 200 · Fetched Apr 9, 2026 · 7 KB
Open source, open risks: The growing dangers of unregulated generative AI | IBM 
 Authors
Charles Owen-Jackson
Freelance Content Marketing Writer
 While mainstream generative AI models have built-in safety barriers, open-source alternatives have no such restrictions. Here’s what that means for cyber crime.


There’s little doubt that open source is the future of software. According to the 2024 State of Open Source Report, over two-thirds of businesses increased their use of open-source software in the last year.


Generative AI is no exception. The number of developers contributing to open-source projects on GitHub and other platforms is soaring. Organizations are investing billions in generative AI across a vast range of use cases, from customer service chatbots to code generation. Many of them are either building proprietary AI models from the ground up or building on top of open-source projects.


 But legitimate businesses aren’t the only ones investing in generative AI. It’s also a veritable goldmine for malicious actors, from rogue states bent on proliferating misinformation among their rivals to cyber criminals developing malicious code or targeted phishing scams.


Tearing down the guard rails
 For now, one of the few things holding malicious actors back is the guardrails developers put in place to protect their AI models against misuse. ChatGPT won’t knowingly generate a phishing email, and Midjourney won’t create abusive images. However, these models belong to entirely closed-source ecosystems, where the developers behind them have the power to dictate what they can and cannot be used for.


It took just two months from its public release for ChatGPT to reach 100 million users. Since then, countless users have tried to break through its guardrails and ‘jailbreak’ it into doing whatever they want — with varying

... (truncated, 7 KB total)
Resource ID: 88c23390a9732d19 | Stable ID: sid_jZ8JYbOEB4