The University of Washington's Center for an Informed Public
cip.uw.edu/
Relevant to AI safety discussions around AI-enabled disinformation and synthetic media; CIP bridges academic research and policy on information integrity threats amplified by generative AI.
Metadata
Importance: 52/100 · homepage
Summary
The Center for an Informed Public (CIP) at the University of Washington is a multidisciplinary research center dedicated to resisting strategic misinformation, promoting an informed society, and strengthening democratic discourse. CIP conducts research on disinformation, influence operations, and information integrity, bridging academic study with practical tools and policy engagement. It is notable for its role in studying election misinformation, social media manipulation, and AI-enabled information threats.
Key Points
- Interdisciplinary research hub combining computer science, communication, information science, and public policy to study misinformation ecosystems.
- Conducts empirical studies on disinformation campaigns, influence operations, and the spread of false narratives on social media platforms.
- Develops tools and frameworks to help the public, journalists, and policymakers identify and counter misleading information.
- Has produced significant research on AI-generated content and its role in scaling disinformation threats.
- Engages in policy advocacy and public education to build societal resilience against information manipulation.
Cited by 1 page
| Page | Type | Quality |
|---|---|---|
| AI Disinformation | Risk | 54.0 |
Cached Content Preview
HTTP 200 · Fetched Apr 9, 2026 · 17 KB
Center for an Informed Public | University of Washington research center
COLLECTED BY
Organization: Archive Team
Formed in 2009, the Archive Team (not to be confused with the archive.org Archive-It Team) is a rogue archivist collective dedicated to saving copies of rapidly dying or deleted websites for the sake of history and digital heritage. The group is 100% composed of volunteers and interested parties, and has expanded into a large number of related projects for saving online and digital history.
History is littered with hundreds of conflicts over the future of a community, group, location, or business that were "resolved" when one of the parties stepped ahead and destroyed what was there. With the original point of contention destroyed, the debates would fall by the wayside. Archive Team believes that by duplicating condemned data, the conversation and debate can continue, and the richness and insight gained by keeping the materials can be preserved. Our projects have ranged in size from a single volunteer downloading the data of a small-but-critical site, to over 100 volunteers stepping forward to acquire terabytes of user-created data to save for future generations.
The main site for Archive Team is at archiveteam.org and contains up-to-date information on various projects, manifestos, plans, and walkthroughs.
This collection contains the output of many Archive Team projects, both ongoing and completed. Thanks to the Internet Archive's generous provision of disk space, multi-terabyte datasets can be made available and used by the Wayback Machine, providing a path back to lost websites and work.
Our collection has grown to the point of having sub-collections for the type of data we acquire. If you are seeking to browse the contents of these collections, the Wayback Machine is the best first stop. Otherwise, you are free to dig into the stacks to see what you may find.
The Archive Team Panic Downloads are full pulldowns of currently extant websites, meant to serve as emergency backups for needed sites that are in danger of closing, or which will be missed dearly if suddenly lost due to hard drive crashes or server failures.
Collection: ArchiveBot: The Archive Team Crowdsourced Crawler
ArchiveBot is an IRC bot designed to automate the archival of smaller websites (e.g. up to a few hundred thousand URLs). You give it a URL to start at, and it grabs all content under that URL, records it in a WARC, and then uploads that WARC to ArchiveTeam servers for eventual injection into the Internet Archive (or other archive sites).
To use ArchiveBot, drop by #archivebot on EFNet. To interact with ArchiveBot, you issue commands by typing them into the channel. Note you will need chan
... (truncated, 17 KB total)
Resource ID: d5cd132a7d7b8f1e | Stable ID: sid_aAcDD4hbOp