Conjecture - AI Safety Research Blog
conjecture.dev/
Conjecture is a UK-based AI safety company pursuing the cognitive emulation research agenda; their blog is a primary source for understanding CoEm and related technical safety work.
Metadata
Importance: 45/100 · blog post · homepage
Summary
Conjecture is an AI safety research company focused on cognitive emulation (CoEm) as an approach to building aligned AI systems. Their blog covers technical AI safety research, interpretability, and alignment strategies with a particular emphasis on making AI systems that reason more like humans in interpretable ways.
Key Points
- Conjecture develops Cognitive Emulation (CoEm), an approach to alignment that aims to build AI systems mimicking human cognitive processes
- Research focus includes interpretability, understanding AI internals, and developing safer training paradigms
- The company takes a commercial approach to AI safety, attempting to make safety-focused AI economically viable
- Blog covers both technical research and broader strategic thinking about AI risk and alignment
- Conjecture was founded by former EleutherAI and other AI research community members
Cited by 2 pages
| Page | Type | Quality |
|---|---|---|
| Conjecture | Organization | 37.0 |
| Survival and Flourishing Fund (SFF) | Organization | 59.0 |
5 FactBase facts citing this source
| Entity | Property | Value | As Of |
|---|---|---|---|
| Conjecture | Founded Date | Mar 2022 | — |
| Conjecture | Total Funding Raised | $25M | Dec 2022 |
| Conjecture | Founded By | sid_CrXoCsIucX, sid_kLIpOZtU1n, sid_n0d7I3OAej | — |
| Conjecture | Headquarters | London, UK | — |
| Conjecture | Legal Structure | Private company | — |
Cached Content Preview
HTTP 200 · Fetched Apr 13, 2026 · 5 KB
Conjecture
Product
Research
About Us
Contact
Redefining AI Safety
Building a new AI architecture to ensure the controllable,
safe development of advanced AI technology.
Navigating Complexities
The challenge of AI Safety
Unpredictable
AI systems generate hallucinations and inadvertently leak sensitive information, compromising their reliability.
Incoherent
AI responses are inconsistent in their outputs and reasoning, creating challenges to effective interaction.
Inept
Systems fail on basic tasks, raising significant obstacles to building reliable automation.
Uninterpretable
AI's inner workings remain uninterpretable, making it difficult to trust the accuracy of its outputs and debug when it’s incorrect.
Revolutionizing AI Deployment
Introducing Cognitive Emulation
Building and deploying AI systems that are both powerful and safe poses great challenges in the current AI paradigm. That is why we are building Cognitive Emulation: a different vision for powerful AI systems, designed to follow the same trusted reasoning processes we do.
Trained for specific tasks
Build AI by component
Automate real workflows
Solve more complex problems
Capability vs. Safety
Scaling and the Control Problem
The AI industry is racing to scale ever-larger models without considering the risks. While capabilities advance at a rapid pace, safety lags far behind. This imbalance underscores the urgency of innovative alternatives like Cognitive Emulation, which avoid the grave risks associated with scaling.
Amplified Risks
Scaling Exacerbates the Dangers
Amplifying these models through scaling only makes it harder to notice if they are wrong, and impossible to debug when they are.
... (truncated, 5 KB total)
Resource ID: b7aa1f2c839b5ee8 | Stable ID: sid_2aCx13LY7g