<?xml version="1.0" encoding="utf-8" ?><rss version="2.0" xml:base="https://www.eff.org/rss/updates.xml" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Deeplinks</title>
<link>https://www.eff.org/rss/updates.xml</link>
<description>EFF's Deeplinks Blog: Noteworthy news from around the internet</description>
<language>en</language>
<atom:link href="https://www.eff.org/rss/updates.xml" rel="self" type="application/rss+xml" />
<item>
<title>Our Stop Censoring Abortion Campaign Uncovers a Social Media Censorship Crisis </title>
<link>https://www.eff.org/deeplinks/2025/09/our-stop-censoring-abortion-campaign-uncovers-social-media-censorship-crisis</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><i><span data-contrast="none">This is the first installment in a blog series documenting EFF's findings from the </span></i><a href="https://www.eff.org/deeplinks/2025/02/stop-censoring-abortion-fight-reproductive-rights-digital-age"><i><span data-contrast="none">Stop Censoring Abortion</span></i></a><i><span data-contrast="none"> campaign. You can read additional posts </span></i><a href="https://www.eff.org/pages/stop-censoring-abortion"><i><span data-contrast="none">here</span></i></a><i><span data-contrast="none">.</span></i><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="none">We’ve been hearing that social media platforms are censoring abortion-related content, even when no law requires them to do so. Now, we’ve got the receipts.</span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">For months, EFF has been investigating stories from users whose abortion-related content has been taken down or otherwise suppressed by major social media platforms. In collaboration with our allies—including </span><a href="https://www.plancpills.org/"><span data-contrast="none">Plan C</span></a><span data-contrast="none">, </span><a href="https://www.womenonweb.org/en/home-en/"><span data-contrast="none">Women on Web</span></a><span data-contrast="none">, </span><a href="https://reproaction.org/"><span data-contrast="none">Reproaction</span></a><span data-contrast="none">, and </span><a href="https://womenfirstdigital.org/"><span data-contrast="none">Women First Digital</span></a><span data-contrast="none">—we launched the </span><a href="https://www.eff.org/pages/stop-censoring-abortion"><span data-contrast="none">#StopCensoringAbortion campaign</span></a><span data-contrast="none"> to collect and amplify these stories. </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">Submissions came from a variety of users, including personal accounts, influencers, healthcare clinics, research organizations, and advocacy groups from across the country and abroad—a spectrum that underscores the wide reach of this censorship. Since the start of the year, we’ve seen nearly 100 examples of abortion-related content taken down by social media platforms.</span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">We analyzed these takedowns, deletions, and bans, comparing the content to what platform policies allow—particularly those of Meta—and found that </span><b><span data-contrast="none">almost none of the submissions we received violated any of the platforms’ stated policies. </span></b><span data-contrast="none">Most of the censored posts simply provided factual, educational information. This Threads post is a perfect example:</span><span data-ccp-props="240}"> </span></p>
<p><span data-ccp-props="240}"><div class="caption caption-center"><div class="caption-width-container"><div class="caption-inner"><img src="https://www.eff.org/files/2025/09/11/screenshot_2025-09-11_at_2.59.39_pm.png" width="430" height="370" alt="Screenshot of removed post submitted by Lauren Kahre to EFF" title="Screenshot of removed post submitted by Lauren Kahre to EFF" /><p class="caption-text">Screenshot submitted by Lauren Kahre to EFF</p></div></div></div></span></p>
<p><span data-contrast="none">In this post, health policy strategist Lauren Kahre discussed abortion pills’ availability via mail. She provided factual information about two FDA-approved medications (mifepristone and misoprostol), including facts like shelf life and how to store pills safely. </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">Lauren’s post doesn’t violate any of Meta’s policies and shouldn’t have been removed. But don’t just take our word for it: </span><b><span data-contrast="none">Meta has publicly insisted that posts like these should </span></b><b><i><span data-contrast="none">not</span></i></b><b><span data-contrast="none"> be censored.</span></b><span data-contrast="none"> In a </span><a href="https://www.amnestyusa.org/wp-content/uploads/2024/07/Obstacles-to-Autonomy-Post-Roe-Removal-of-Abortion-Information-Online.pdf"><span data-contrast="none">February 2024 letter to Amnesty International</span></a><span data-contrast="none">, Meta Human Rights Policy Director Miranda Sissons wrote: “Organic content (i.e., non paid content) educating users about medication abortion is allowed and does not violate our Community Standards. Additionally, providing guidance on legal access to pharmaceuticals is allowed.”</span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">Still, shortly after Lauren shared this post, Meta took it down. Perhaps even more perplexing was their explanation for doing so. According to Meta, the post was removed because “</span><span data-contrast="auto">[they] don’t allow people to buy, sell, or exchange drugs that require a prescription from a doctor or a pharmacist.”</span><span data-ccp-props="240}"> </span></p>
<p><span data-ccp-props="240}"><div class="caption caption-center"><div class="caption-width-container"><div class="caption-inner"><img src="https://www.eff.org/files/2025/09/11/screenshot_2025-09-11_at_2.41.18_pm.png" width="243" height="394" alt="Screenshot of takedown notice submitted by Lauren Kahre to EFF" title="Screenshot of takedown notice submitted by Lauren Kahre to EFF" /><p class="caption-text">Screenshot submitted by Lauren Kahre to EFF</p></div></div></div></span></p>
<p><span data-contrast="none">In the submissions we received, this was the most common reason Meta gave for removing abortion-related content. The company frequently claimed that posts violated</span> <a href="https://transparency.meta.com/policies/community-standards/restricted-goods-services/"><span data-contrast="none">policies on Restricted Goods and Services</span></a><span data-contrast="auto">, </span><span data-contrast="none">which prohibit any “attempts to buy, sell, trade, donate, gift or ask for pharmaceutical drugs.” </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">Yet in Lauren’s case and others, </span><b><span data-contrast="none">the posts very clearly did no such thing. </span></b><span data-contrast="none">And as </span><a href="https://www.amnestyusa.org/reports/obstacles-to-autonomy-post-roe-removal-of-abortion-information-online/"><span data-contrast="none">Meta itself has explained</span></a><span data-contrast="none">: “Providing guidance on how to legally access pharmaceuticals is permitted as it is </span><i><span data-contrast="none">not</span></i><span data-contrast="none"> considered an offer to buy, sell or trade these drugs.”</span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">In fact, Meta’s policies on Restricted Goods &amp; Services further state: “We allow discussions about the sale of these goods in stores or by online retailers, advocating for changes to regulations of goods and services covered in this policy, and advocating for or concerning the use of pharmaceutical drugs in the context of medical treatment, including discussion of physical or mental side effects.” Also, “Debating or advocating for the legality or discussing scientific or medical merits of prescription drugs is allowed. This includes news and public service announcements.”</span><span data-ccp-props="240}"> </span></p>
<p><b><span data-contrast="none">Over and over again, the policies say one thing, but the actual enforcement says another.</span></b><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">We spoke with multiple Meta representatives to share these findings. We asked hard questions about their policies and the gap between what those policies say and how they’re actually applied. Unfortunately, we were mostly left with the same concerns, but we’re continuing to push them to do better. </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">In the coming weeks, we will share a series of blogs further examining trends we found, including stories of unequal enforcement, where individuals and organizations needed to rely on internal connections at Meta to get wrongfully censored posts restored; examples of account suspensions without sufficient warnings; an exploration of Meta’s ad policies; practical tips for users to avoid being censored; and concrete steps platforms should take to reform their abortion content moderation practices. For a preview, we’ve already shared some of our findings with <a href="https://apnews.com/article/social-media-abortion-censorship-eff-a71965f86b40912db10fad6d26bcdd95" target="_blank" rel="noopener noreferrer">Barbara Ortutay at The Associated Press</a>, whose <a href="https://apnews.com/article/social-media-abortion-censorship-eff-a71965f86b40912db10fad6d26bcdd95" target="_blank" rel="noopener noreferrer">report</a> on some of these takedowns was published today</span><span data-contrast="none">. </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">We hope this series highlighting examples of abortion content censorship will help the public and the platforms understand the breadth of this problem, who is affected, and with what consequences. These stories collectively underscore the urgent need for platforms to review and consistently enforce their policies in a fair and transparent manner. </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="none">With </span><a href="https://www.eff.org/issues/reproductive-rights"><span data-contrast="none">reproductive rights under attack</span></a><span data-contrast="none"> both in the U.S. and abroad, sharing accurate information about abortion online has never been more critical. Together, we can hold platforms like Meta accountable, demand transparency in moderation practices, and ultimately stop the censorship of this essential, sometimes life-saving information.</span><span data-ccp-props="240}"> </span></p>
<p><i><span data-contrast="none">This is the first post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: </span></i><a href="https://www.eff.org/pages/stop-censoring-abortion"><i><span data-contrast="none">https://www.eff.org/pages/stop-censoring-abortion</span></i></a><i><span data-contrast="auto"> </span></i><i><span data-contrast="none"> </span></i><span data-ccp-props="{}"> </span></p>
</div></div></div></description>
<pubDate>Mon, 15 Sep 2025 19:07:16 +0000</pubDate>
<guid isPermaLink="false">111192 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/reproductive-rights">Reproductive Justice</category>
<dc:creator>Jennifer Pinsof</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/04_stopcensoringabortion-banner1.png" alt="An illustration of hands opening a box of abortion pills and pulling out a insert that says &quot;Access Denied&quot;" type="image/png" length="358787" />
</item>
<item>
<title>EFF to Court: The Supreme Court Must Rein in Expansive Secondary Copyright Liability</title>
<link>https://www.eff.org/deeplinks/2025/09/eff-court-supreme-court-must-rein-expansive-secondary-copyright-liability</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>If the Supreme Court doesn’t reverse a lower court’s ruling, internet service providers (ISPs) could be forced to terminate people’s internet access based on nothing more than mere accusations of copyright infringement. This would threaten innocent users who rely on broadband for essential aspects of daily life. EFF—along with the American Library Association, the Association of Research Libraries, and Re:Create—filed an <a href="https://www.eff.org/document/cox-v-sonyeff-amicus-brief">amicus brief</a> urging the Court to reverse the decision.</p>
<h3><strong>The Stakes: Turning ISPs into Copyright Police</strong></h3>
<p>Among other things, if the Supreme Court approves the appeals court’s findings, it will radically change the amount of risk your ISP takes on when a customer infringes copyright, pushing ISPs to terminate internet access for users merely accused of copyright infringement—and for everyone else who shares that internet connection.</p>
<p>This issue turns on what courts call “secondary liability,” which is the legal idea that someone can be held responsible not for what they did directly, but for what someone else did using their product or service.</p>
<p>The case began when music companies sued Cox Communications, arguing that the ISP should be held liable for copyright infringement committed by some of its subscribers. The Court of Appeals for the Fourth Circuit agreed, adopting a “material contribution” standard for contributory copyright liability (a rule for when service providers can be held liable for the actions of users). The lower court said that providing a service that <em>could</em> be used for infringement is enough to create liability when a customer infringes.</p>
<p>In the Patent Act, where Congress <em>has</em> explicitly defined secondary liability, there’s a different test: contributory infringement exists only where a product is <em>incapable</em> of substantial <em>non-infringing</em> use. Internet access, of course, is overwhelmingly used for lawful purposes, making it the very definition of a “staple article of commerce” that cannot give rise to liability under the patent framework. Yet under the Fourth Circuit’s rule, ISPs could face billion-dollar damages if they fail to terminate users on the basis of even flimsy or automated infringement claims.</p>
<h3><strong>Our Argument: Apply Clear Rules from the Patent Act, Not Confusing Judge-Made Tests</strong></h3>
<p>Our <a href="https://www.eff.org/document/cox-v-sonyeff-amicus-brief">brief</a> urges the Court to do what it has done in the past: look to patent law to define the limits of secondary liability in copyright. That means contributory infringement must require more than a “material contribution” by the service provider—it should apply only when a product or service is especially designed for infringement and lacks substantial non-infringing uses.</p>
<h3><strong>The Human Cost: Losing Internet Access Hurts Everyone </strong></h3>
<p>The Fourth Circuit’s rule threatens devastating consequences for the public. Terminating an ISP account doesn’t just affect a person accused of unauthorized file sharing—it cuts off entire households, schools, libraries, or businesses that share an internet connection.</p>
<ul>
<li>Public libraries, which provide internet access to millions of Americans who lack it at home, could lose essential service.</li>
<li>Universities, hospitals, and local governments could see internet access for whole communities disrupted.</li>
<li>Households—especially in low-income and communities of color, which disproportionately share broadband connections with other people—would face collective punishment for the alleged actions of a single user.</li>
</ul>
<p>With more than a third of Americans having only one or no broadband provider, many users would have no way to reconnect once cut off. And given how essential internet access is for education, employment, healthcare, and civic participation, the consequences of termination are severe and disproportionate.</p>
<h3><strong>What’s Next</strong></h3>
<p>The Supreme Court has an opportunity to correct course. We’re asking the Court to reject the Fourth Circuit’s unfounded “material contribution” test, reaffirm that patent law provides the right framework for secondary liability, and make clear that the Constitution requires copyright to serve the public good. The Court should ensure that copyright enforcement doesn’t jeopardize the internet access on which participation in modern life depends.</p>
<p>We’ll be watching closely as the Court considers this case. In the meantime, you can read our amicus brief <a href="https://www.eff.org/document/cox-v-sonyeff-amicus-brief">here</a>.</p>
</div></div></div></description>
<pubDate>Thu, 11 Sep 2025 00:25:59 +0000</pubDate>
<guid isPermaLink="false">111191 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/innovation">Creativity &amp; Innovation</category>
<dc:creator>Betty Gedlu</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/fixcopyright-graphic-banner.jpg" alt="EFF Presents &quot;Fix Copyright&quot;, a design featuring a cartoon mouse hacking his tractor." type="image/jpeg" length="359162" />
</item>
<item>
<title>San Francisco Gets An Invasive Billionaire-Bought Surveillance HQ </title>
<link>https://www.eff.org/deeplinks/2025/09/san-francisco-gets-invasive-billionaire-bought-surveillance-hq</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-contrast="auto">San Francisco billionaire Chris Larsen once again has wielded his wallet to keep city residents under the eye of all-seeing police surveillance.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">The San Francisco Police Commission, the Board of Supervisors, and Mayor Daniel Lurie </span><a href="https://www.sf.gov/news-mayor-lurie-takes-key-step-towards-opening-new-real-time-investigation-center-that-will-leverage-new-technology-to-improve-public-safety" target="_blank" rel="noopener noreferrer"><span data-contrast="none">have signed off</span></a><span data-contrast="auto"> on Larsen’s $9.4 million gift of a new Real-Time Investigations Center. The plan involves moving the city’s existing police tech hub from the public Hall of Justice not to </span><a href="https://www.sanfranciscopolice.org/your-sfpd/sfpd-headquarters" target="_blank" rel="noopener noreferrer"><span data-contrast="none">the city’s brand-new police headquarters</span></a><span data-contrast="auto"> but instead to a sublet in the Financial District building of Ripple Labs, Larsen’s crypto-transfer company. Although the city </span><a href="https://www.sfchronicle.com/crime/article/chris-larsen-sfpd-donation-tech-20357671.php" target="_blank" rel="noopener noreferrer"><span data-contrast="none">reportedly won’t be paying for the space</span></a><span data-contrast="auto">, the lease reportedly cost Ripple $2.3 million and will last until December 2026.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">The deal will also include a $7.25 million gift from the San Francisco Police Community Foundation that Larsen created. Police foundations are semi-public fundraising arms of police departments that allow them to buy technology and gear that the city will not give them money for. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">In Los Angeles, the </span><a href="https://www.latimes.com/california/story/2023-01-04/lapd-police-foundation-private-funding-arm" target="_blank" rel="noopener noreferrer"><span data-contrast="none">city’s police foundation got $178,000 from the company Target</span></a><span data-contrast="auto"> to pay for the services of the data analytics company Palantir to use for predictive policing. In Atlanta, the city’s police foundation funds a </span><a href="https://atlantapolicefoundation.org/programs/operation-shield/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">massive surveillance apparatus</span></a><span data-contrast="auto"> as well as the much-maligned </span><a href="https://www.theguardian.com/us-news/2025/apr/11/copy-city-legal-case-police-foundations" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Cop City training complex</span></a><span data-contrast="auto">. (Despite police foundations’ insistence that they are not public entities and therefore do not need to be transparent or answer public records requests, a judge recently ordered the Atlanta Police Foundation </span><a href="https://georgiarecorder.com/2025/06/04/atlanta-police-foundation-ordered-to-comply-with-open-records-requests-over-cop-city-documents/" target="_blank" rel="noopener noreferrer"><span>to release documentation</span></a><span data-contrast="auto"> related to Cop City.)</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">A police foundation in San Francisco brings the same concerns: that an unaccountable, opaque fundraising arm schmoozing with corporations and billionaires could fund unpopular surveillance measures without revealing much to the public. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Larsen was one of the deep pockets behind last year’s </span><a href="https://www.eff.org/deeplinks/2024/03/voting-no-prop-e-easy-and-important-san-francisco"><span data-contrast="none">Proposition E</span></a><span data-contrast="auto">, a ballot measure to supercharge surveillance in the city. The measure usurped the city’s 2019 surveillance transparency and accountability ordinance, which had required the SFPD to get the elected Board of Supervisors’ approval before buying and using new surveillance technology. This common-sense democratic hurdle was, apparently, a bridge too far for the SFPD and for Larsen. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">We’re no fans of </span><a href="https://www.eff.org/deeplinks/2020/11/eff-publishes-new-research-real-time-crime-centers-us" target="_blank" rel="noopener noreferrer"><span data-contrast="none">real-time crime centers</span></a><span data-contrast="auto"> (RTCCs), as they’re often called elsewhere, to start with. They’re basically control rooms that pull together all feeds from a vast warrantless digital dragnet, often including automated license plate readers, fixed cameras, officers’ body-worn cameras, drones, and other sources. It’s a means of consolidating constant surveillance of the entire population, tracking everyone wherever they go and whatever they do – worrisome at any time, but especially in a time of rising authoritarianism. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Think of what this data could do if it got into federal hands; imagine how vulnerable city residents could be subjected to harassment if every move they made was centralized and recorded downtown. But you don’t have to imagine, because SFPD has already been caught </span><a href="https://sfstandard.com/2025/09/08/sfpd-flock-alpr-ice-data-sharing/" target="_blank" rel="noopener noreferrer"><span>sharing automated license plate reader data with out-of-state law enforcement agencies assisting in federal immigration investigations</span></a><span data-contrast="auto">.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">We’re especially opposed to RTCCs using live feeds from non-city surveillance cameras to push that panopticon’s boundaries even wider, as San Francisco’s does. Those semi-private networks of some 15,000 cameras, </span><a href="https://www.eff.org/cases/williams-v-san-francisco"><span>already abused</span></a><span data-contrast="auto"> by SFPD to surveil lawful protests against police violence, were funded in part by – you guessed it – </span><a href="https://www.eff.org/deeplinks/2020/07/san-francisco-police-accessed-business-district-camera-network-spy-protestors" target="_blank" rel="noopener noreferrer"><span>Chris Larsen</span></a><span data-contrast="auto">.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">These technologies could potentially endanger San Franciscans by directing armed police at them due to reliance on a faulty algorithm or by putting already-marginalized communities at further risk of overpolicing and surveillance. But studies find that these technologies just <a href="https://www.aclu.org/sites/default/files/images/asset_upload_file708_35775.pdf">don’t work</a>. If the goal is to stop crime before it happens, to spare someone the hardship and the trauma of getting robbed or hurt, cameras clearly do not accomplish this. There’s plenty of footage of crime occurring that belies the idea that surveillance is an effective deterrent, and although police often look to technology as a silver bullet to fight crime, </span><a href="https://www.economist.com/united-states/2023/12/27/americas-new-policing-tech-isnt-cutting-crime" target="_blank" rel="noopener noreferrer"><span data-contrast="none">evidence</span></a><span data-contrast="auto"> suggests that it does little to alter the historic ebbs and flows of criminal activity.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Yet now this unelected billionaire – who already helped gut police accountability and transparency rules and helped fund sketchy surveillance of people exercising their First Amendment rights – wants to bankroll, expand, and host the police’s tech nerve center.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Policing must be a public function so that residents can control – and demand accountability and transparency from – those who serve and protect but also surveil and track us all. Being financially beholden to private interests erodes the community’s trust and control and can leave the public high and dry if a billionaire’s whims change or conflict with the will of the people. Chris Larsen could have tried to address the root causes of crime that affect our community; instead, he exercises his bank account’s muscle to decide that surveillance is best for San Franciscans with less in their wallets.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Elected officials should have said “thanks but no thanks” to Larsen and ensured that the San Francisco Police Department remained under the complete control and financial auspices of nobody except the people of San Francisco. Rich people should not be allowed to fund the further degradation of our privacy as we go about our lives in our city’s public places. Residents should carefully watch what comes next to decide for themselves whether a false sense of security is worth living under constant, all-seeing, billionaire-bankrolled surveillance.</span><span data-ccp-props="{}"> </span></p>
</div></div></div></description>
<pubDate>Wed, 10 Sep 2025 16:04:41 +0000</pubDate>
<guid isPermaLink="false">111189 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/street-level-surveillance">Street-Level Surveillance</category>
<dc:creator>Josh Richman</dc:creator>
<dc:creator>Matthew Guariglia</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/surveillance-og-2.png" alt="A cityscape with surveillance" type="image/png" length="67391" />
</item>
<item>
<title>Rayhunter: What We Have Found So Far </title>
<link>https://www.eff.org/deeplinks/2025/09/rayhunter-what-we-have-found-so-far</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>A little over a year ago we released Rayhunter, our open source tool designed to detect cell-site simulators. We’ve been blown away by the level of community engagement on this project. It has been installed on thousands of devices (or so we estimate, we don’t actually know since Rayhunter doesn’t have any telemetry!). We have received dozens of packet captures from our open source community, along with hundreds of improvements both minor and major, documentation fixes, and bug reports. This project is a testament to the power and impact of open source, community-driven counter-surveillance. </span></p>
<p><span>If this is your first time hearing about Rayhunter, you can read our </span><a href="https://www.eff.org/deeplinks/2025/03/meet-rayhunter-new-open-source-tool-eff-detect-cellular-spying"><span>announcement blog post here</span></a><span>. Or if you prefer, you can </span><a href="https://spectra.video/w/jt9rZHCU51Rh58cBD8oiP3"><span>watch our DEF CON talk</span></a><span>. In short, Rayhunter is an open source Linux program that runs on a variety of mobile hotspots (dedicated devices that use a cellular connection to give you Wi-Fi). Rayhunter’s job is to look for </span><a href="https://sls.eff.org/technologies/cell-site-simulators-imsi-catchers"><span>cell-site simulators</span></a><span> (CSS), a tool police use to locate or identify people's cell phones, also known as IMSI catchers or Stingrays. Rayhunter analyzes the “handshakes” between your Rayhunter device and the cell towers it is connected to for behaviors consistent with that of a CSS. When it finds potential evidence of a CSS it alerts the user with an indicator on the screen and potentially a push notification to their phone. </span></p>
<p><span>Understanding if CSS are being used to spy on protests is one of the main goals of the Rayhunter project. Thanks to members of our community bringing Rayhunter to dozens of protests, we are starting to get a picture of how CSS are currently being used in the US. So far Rayhunter has not turned up any evidence of cell-site simulators being used to spy on protests in the US — though we have found them in use elsewhere. </span></p>
<p class="pull-quote">So far Rayhunter has not turned up any evidence of cell-site simulators being used to spy on protests in the US. </p>
<p><span>There are a couple of caveats here. First, it’s often impossible to prove a negative. Maybe Rayhunter just hasn’t been at protests where CSS have been present. Maybe our detection signatures aren’t picking up the techniques used by US law enforcement. But we’ve received reports from a lot of protests, including pro-Palestine protests, protests in Washington DC and Los Angeles, as well as the ‘No Kings’ and ‘50501’ protests all over the country. So far, we haven’t seen evidence of CSS use at any of them. </span></p>
<p><span>A big part of the reason for the lack of CSS at protests could be that </span><a href="https://www.eff.org/deeplinks/2017/09/appeals-court-rules-against-warrantless-cell-site-simulator-surveillance"><span>some</span></a> <a href="https://www.aclum.org/en/publications/mass-high-court-requires-warrants-stingray-gps-phone-surveillance"><span>courts</span></a><span> have required a warrant for their use, and even law enforcement agencies not bound by these rulings have </span><a href="https://www.eff.org/deeplinks/2015/09/finally-doj-reverses-course-and-will-get-warrants-stingrays?language=it"><span>policies</span></a><span> that require police to get a warrant. CSS are also costly to buy and use, requiring trained personnel to use </span><a href="https://www.documentcloud.org/documents/24733508-2024_ma-state-police_css-proposal_jacobs/?mode=document#document/p47"><span>nearly one million dollars worth of equipment</span></a><span>. </span></p>
<p><span>Police also have tools available that are potentially easier to use. If the goal of deploying a CSS at a protest is to find out who attended, police could instead turn to tools such as: </span></p>
<ul>
<li><a href="https://sls.eff.org/technologies/automated-license-plate-readers-alprs"><span>License plate readers</span></a><span> to track the vehicles arriving at and leaving the protest. </span></li>
<li><a href="https://www.eff.org/issues/location-data-brokers"><span>Location data brokers</span></a><span>, such as Locate X and Fog Data Science, to track the phones of protestors by their mobile advertising IDs (MAID).</span></li>
<li><a href="https://sls.eff.org/technologies/forensic-extraction-tools"><span>Cellebrite and other forensic extraction tools</span></a><span> to download all the data from phones of arrested protestors if they are able to unlock those phones. </span></li>
<li><a href="https://www.eff.org/deeplinks/2024/08/federal-appeals-court-finds-geofence-warrants-are-categorically-unconstitutional"><span>Geofence warrants</span></a><span>, which require internet companies like Google to disclose the identifiers of devices within a given location at a given time.</span></li>
<li><span><a href="https://www.eff.org/deeplinks/2022/02/victory-another-lawsuit-proceeds-against-clearviews-face-surveillance">Facial recognition</a> tools such as Clearview AI to identify everyone present via public or private databases of people's faces.</span></li>
<li><a href="https://www.eff.org/deeplinks/2022/05/massachusetts-highest-court-upholds-cell-tower-dump-warrant"><span>Tower dumps from phone companies</span></a><span>, which, similar to geofence warrants, require phone companies to turn over a list of all the phones connected to a certain tower at a certain time. </span></li>
</ul>
<p><span>Given the lack of evidence that CSS are being used, we think protestors can worry less about CSS and more about these other techniques. Luckily, the </span><a href="https://ssd.eff.org/module/your-security-plan"><span>actions you should take to protect yourself</span></a><span> are largely the same: </span></p>
<ul>
<li><span>To protect yourself against Locate X and Fog you can turn off location services on your phone (</span><a href="https://ssd.eff.org/module/how-to-get-to-know-iphone-privacy-and-security-settings#audit-your-privacy-permissions"><span>iPhone</span></a><span> and</span><a href="https://ssd.eff.org/module/how-to-get-to-know-android-privacy-and-security-settings#audit-your-privacy-permissions"> <span>Android</span></a><span>). </span></li>
<li><span>To protect yourself from Cellebrite you can use a</span><a href="https://ssd.eff.org/module/attending-protest#enable-strong-encryption-on-your-device"> <span>strong password, turn off biometric unlocks</span></a><span>, and keep your phone up to date. </span></li>
<li><span>To protect against facial recognition, you can wear a mask. </span></li>
<li><span>To</span><a href="https://ssd.eff.org/module/mobile-phones-location-tracking"> <span>protect against tower dumps</span></a><span>, put your phone into airplane mode (though especially high-risk individuals may want to</span><a href="https://www.securityweek.com/hackers-can-abuse-low-power-mode-run-malware-powered-iphones/"> <span>use a Faraday bag instead</span></a><span>). </span></li>
</ul>
<p><span>We feel pretty good about Rayhunter’s detection engine, though there could still be things we are missing. Some of our confidence in Rayhunter’s detection engine comes from the</span><a href="https://www.eff.org/wp/gotta-catch-em-all-understanding-how-imsi-catchers-exploit-cell-networks"> <span>research we have done into how CSS work</span></a><span>. But the majority of our confidence comes from testing Rayhunter against a commercial cell-site simulator thanks to our friends at</span><a href="https://cape.co"> <span>Cape</span></a><span>. Rayhunter detected every attack run by the commercial CSS. </span></p>
<h2><span>Where Rayhunter Has Detected Likely Surveillance</span></h2>
<p><span>Rayhunter users have found potential evidence of CSS being used in the wild, though not at protests. One of the most interesting examples that triggered multiple detections and even inspired us to write some new detection rules was at a cruise port in the Turks and Caicos Islands. The person who captured this data put the packet captures </span><a href="https://github.com/ZeroChaos-/rayhunter-traces"><span>online for other researchers to review</span></a><span>. </span></p>
<p><span>Rayhunter users have detected likely CSS use in the US as well. We have received reports from Chicago and New York where our “IMSI Sent without authentication” signature was triggered multiple times over the course of a couple of hours and then stopped. Neither report was in the vicinity of a protest. We feel fairly confident that these reports are indicative of a CSS being present, though we don’t have any secondary evidence to back them up. </span></p>
<p><span>We have received other reports that have triggered our CSS detection signatures, but the above examples are the ones we feel most confident about. </span></p>
<p><span>We encourage people to keep using Rayhunter and continue bringing it to protests. Law enforcement trends can change over time, and it is possible that some cities are using them more often than others (for example, Fontana, California reportedly </span><a href="https://www.documentcloud.org/documents/24733508-2024_ma-state-police_css-proposal_jacobs/?mode=document#document/p14"><span>used its CSS over 300 times in two years</span></a><span>). We also know that </span><a href="https://www.forbes.com/sites/the-wiretap/2025/09/09/how-ice-is-using-fake-cell-towers-to-spy-on-peoples-phones/"><span>ICE still uses CSS</span></a><span> and has recently renewed its contracts. Interestingly, in January, the FBI requested a warrant from the Foreign Intelligence Surveillance Court to use what was likely a CSS </span><a href="https://www.intelligence.gov/ic-on-the-record-database/declassified/odni-releases-january-2025-fisc-opinion-on-fisa-title-i"><span>and was rejected</span></a><span>. This was the first time the FBI had sought a warrant to use a CSS under the Foreign Intelligence Surveillance Act since 2015, when the Justice Department began requiring a warrant for their use. If police start using CSS to spy on protests, we want to know.</span></p>
<p><span>There is still a lot we want to accomplish with Rayhunter, and we have some future plans for the project that we are very excited to share with you soon. But the biggest thing we need right now is more testing outside of the United States. </span></p>
<h2><span>Taking Rayhunter International </span></h2>
<p><span>We are interested in getting Rayhunter data from every country to help us understand the global use of CSS and to refine our signatures. Just because CSS don't appear to be used to spy on protests in the US right now doesn't mean that is true everywhere. We have also seen that some signatures that work in the US are prone to false positives elsewhere (such as our 2G signature in countries that still have active 2G networks). The first device supported by Rayhunter, the Orbic hotspot, was US-only, so we have very little international data. But we now have </span><a href="https://efforg.github.io/rayhunter/supported-devices.html"><span>support for multiple devices</span></a><span>! If you are interested in Rayhunter, but can’t find a device that works in your country, </span><a href="https://efforg.github.io/rayhunter/support-feedback-community.html"><span>let us know</span></a><span>.</span> <b>We recommend you consult with an attorney in your country to determine whether running Rayhunter is likely to be legally risky or outlawed in your jurisdiction.</b></p>
</div></div></div><div class="field field--name-field-related-cases field--type-node-reference field--label-above"><div class="field__label">Related Cases:&nbsp;</div><div class="field__items"><div class="field__item even"><a href="/cases/carpenter-v-united-states">Carpenter v. United States</a></div></div></div></description>
<pubDate>Wed, 10 Sep 2025 15:46:56 +0000</pubDate>
<guid isPermaLink="false">111187 at https://www.eff.org</guid>
<category domain="https://www.eff.org/mobile-devices">Mobile devices</category>
<category domain="https://www.eff.org/issues/cell-tracking">Cell Tracking</category>
<dc:creator>Cooper Quintin</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/rayhunter-banner.png" alt="The logo for Rayhunter. Image is of a mischievous but cute-looking purple orca who has taken a bite out of cell phone signal bars. " type="image/png" length="59366" />
</item>
<item>
<title>Podcast Episode: Building and Preserving the Library of Everything</title>
<link>https://www.eff.org/deeplinks/2025/09/podcast-episode-building-and-preserving-library-everything</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-contrast="auto">All this season, “How to Fix the Internet” has been focusing on the tools and technology of freedom – and one of the most important tools of freedom is a library. Access to knowledge not only creates an informed populace that democracy requires, but also gives people the tools they need to thrive. And the internet has radically expanded access to knowledge in ways that earlier generations could only have dreamed of – so long as that knowledge is allowed to flow freely.</span></p>
<p><div class="mytube" style="width: 100%;">
<div class="mytubetrigger" tabindex="0">
<img src="https://www.eff.org/sites/all/modules/custom/mytube/play.png" class="mytubeplay" alt="play" style="top: -4px; left: 20px;" />
<div hidden class="mytubeembedcode">%3Ciframe%20height%3D%2252px%22%20width%3D%22100%25%22%20frameborder%3D%22no%22%20scrolling%3D%22no%22%20seamless%3D%22%22%20src%3D%22https%3A%2F%2Fplayer.simplecast.com%2F070870d9-d14a-4346-9cdd-a120a18d3475%3Fdark%3Dtrue%26amp%3Bcolor%3D000000%22%20allow%3D%22autoplay%22%3E%3C%2Fiframe%3E</div>
</div>
<div class="mytubetext">
<span><a href="https://www.eff.org/deeplinks/2008/02/embedded-video-and-your-privacy" rel="noreferrer" target="_blank">Privacy info.</a></span>
<span>This embed will serve content from <em><a rel="nofollow" href="https://player.simplecast.com/070870d9-d14a-4346-9cdd-a120a18d3475?dark=true&amp;color=000000">simplecast.com</a></em><br /></span>
</div>
</div>
</p><p><span data-ccp-props="279}"> <i><a href="https://open.spotify.com/show/4UAplFpPDqE4hWlwsjplgt" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/spotify-podcast-badge-blk-wht-330x80.png" alt="Listen on Spotify Podcasts Badge" width="198" height="48" /></a> <a href="https://podcasts.apple.com/us/podcast/effs-how-to-fix-the-internet/id1539719568" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/applebadge2.png" alt="Listen on Apple Podcasts Badge" width="195" height="47" /></a> <a href="https://music.amazon.ca/podcasts/bf81f00f-11e1-431f-918d-374ab6ad07cc/how-to-fix-the-internet?ref=dmm_art_us_HTFTI" target="_blank" rel="noopener noreferrer"><img height="47" width="195" src="https://www.eff.org/files/styles/kittens_types_wysiwyg_small/public/2024/02/15/us_listenon_amazonmusic_button_charcoal.png?itok=YFXPE4Ii" /></a> <a href="https://feeds.eff.org/howtofixtheinternet" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/subscriberss.png" alt="Subscribe via RSS badge" width="194" height="50" /></a></i></span></p>
<p><span data-contrast="auto">(You can also find this episode on the <a href="https://archive.org/details/htfti-s6e10-brewster-kahle-v3" target="_blank" rel="noopener noreferrer">Internet Archive</a> and on <a href="https://youtu.be/XZBhJhIfz7s?si=NMDoDrpDnsYX4I-V" target="_blank" rel="noopener noreferrer">YouTube</a>.)</span><span data-ccp-props="279}"> </span></p>
<p><span data-contrast="auto">A passionate advocate for public internet access and a successful entrepreneur, Brewster Kahle has spent his life intent on a singular focus: providing universal access to all knowledge. The Internet Archive, which he founded in 1996, now preserves 99+ petabytes of data - the books, Web pages, music, television, government information, and software of our cultural heritage – and works with more than 400 library and university partners to create a digital library that’s accessible to all. The Archive is known for the </span><a href="https://web.archive.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Wayback Machine</span></a><span data-contrast="auto">, which lets users search the history of almost one trillion web pages. But it also archives </span><a href="https://archive.org/details/image" target="_blank" rel="noopener noreferrer"><span>images</span></a><span data-contrast="auto">, </span><a href="https://archive.org/details/software" target="_blank" rel="noopener noreferrer"><span>software</span></a><span data-contrast="auto">, </span><a href="https://archive.org/details/movies" target="_blank" rel="noopener noreferrer"><span>video</span></a><span data-contrast="auto"> and </span><a href="https://archive.org/details/audio" target="_blank" rel="noopener noreferrer"><span>audio recordings</span></a><span data-contrast="auto">, </span><a href="https://archive.org/details/texts" target="_blank" rel="noopener noreferrer"><span>documents</span></a><span data-contrast="auto">, and it contains dozens of </span><a href="https://archive.org/projects/" target="_blank" rel="noopener noreferrer"><span>resources and projects</span></a><span data-contrast="auto"> that fill a variety of gaps in cultural, political, and historical knowledge. Kahle joins EFF’s Cindy Cohn and Jason Kelley to discuss how the free flow of knowledge makes all of us more free.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">In this episode you’ll learn about:</span></p>
<ul>
<li><span data-ccp-props="{}">The role AI plays in digitizing, preserving, and easing access to all kinds of information</span></li>
<li><span data-ccp-props="{}">How EFF helped the Internet Archive fight off the government’s demand for information about library patrons</span></li>
<li><span data-ccp-props="{}">The importance of building a decentralized, distributed web to finding and preserving information for all</span></li>
<li><span data-ccp-props="{}">Why building revolutionary, world-class libraries like the Internet Archive requires not only money and technology, but also people willing to dedicate their lives to the work</span></li>
<li><span data-ccp-props="{}">How nonprofits are crucial to filling societal gaps left by businesses, governments, and academia</span><span data-ccp-props="{}"> </span></li>
</ul>
<p><span data-contrast="auto">Brewster Kahle is the founder and digital librarian of the </span><a href="https://archive.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Internet Archive</span></a><span data-contrast="auto">, which is among the world’s largest libraries and serves millions of people each day. After studying AI at the Massachusetts Institute of Technology and graduating in 1982, Kahle helped launch the company </span><a href="https://en.wikipedia.org/wiki/Thinking_Machines_Corporation" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Thinking Machines</span></a><span data-contrast="auto">, a parallel supercomputer maker. In 1989, he helped create the internet's first publishing system, called </span><a href="https://en.wikipedia.org/wiki/Wide_area_information_server" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Wide Area Information Server (WAIS)</span></a><span data-contrast="auto">; WAIS Inc. was later sold to AOL. In 1996, Kahle co-founded </span><a href="https://en.wikipedia.org/wiki/Alexa_Internet" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Alexa Internet</span></a><span data-contrast="auto">, which helps catalog the Web, selling it to Amazon.com in 1999. He is a former member of EFF’s Board of Directors.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Resources:</span></p>
<ul>
<li><span data-ccp-props="{}">EFF Legal Cases: </span><a href="https://www.eff.org/cases/archive-v-mukasey" target="_blank" rel="noopener noreferrer"><i><span data-contrast="none">Internet Archive et al v Mukasey et al</span></i></a><span data-contrast="auto"> (NSA letter/gag order)</span></li>
<li><span data-ccp-props="{}">EFF Legal Cases: </span><a href="https://www.eff.org/cases/hachette-v-internet-archive" target="_blank" rel="noopener noreferrer"><i><span data-contrast="none">Hachette v. Internet Archive</span></i></a><span data-contrast="auto"> (publishers’ lawsuit)</span></li>
<li><span data-ccp-props="{}">National Public Radio: “</span><a href="https://www.npr.org/2025/03/23/nx-s1-5326573/internet-archive-wayback-machine-trump" target="_blank" rel="noopener noreferrer"><span data-contrast="none">As the Trump administration purges web pages, this group is rushing to save them</span></a><span data-contrast="auto">” (March 23, 2025)</span></li>
<li><span data-ccp-props="{}">BBC: “</span><a href="https://www.bbc.com/reel/video/p0ld1bpd/an-inside-look-at-how-the-internet-archive-saves-the-web" target="_blank" rel="noopener noreferrer"><span data-contrast="none">An inside look at how the Internet Archive saves the web</span></a><span data-contrast="auto">” (May 26, 2025)</span></li>
<li><span data-ccp-props="{}">American Libraries: “</span><a href="https://americanlibrariesmagazine.org/2025/06/04/newsmaker-brewster-kahle/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Newsmaker: Brewster Kahle</span></a><span data-contrast="auto">” (June 4, 2025)</span><span data-ccp-props="{}"> </span></li>
</ul>
<p><span data-contrast="auto">What do you think of “How to Fix the Internet?” </span><a href="https://forms.office.com/pages/responsepage.aspx?id=qalRy_Njp0iTdV3Gz61yuZZXWhXf9ZdMjzPzrVjvr6VUNUlHSUtLM1lLMUNLWE42QzBWWDhXU1ZEQy4u&amp;web=1&amp;wdLOR=c90ABD667-F98F-9748-BAA4-CA50122F0423" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Share your feedback here</span></a><span data-contrast="auto">.</span></p>
<h3><span data-ccp-props="259}">Transcript</span></h3>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> I think we should start making some better decisions, a little bit more informed, a little better communication with not only people that are around the world and finding the right people we should be talking to, but also, well, standing on the shoulders of giants. I mean, we can then go and learn from all the things that people have learned in the past. It's pretty straightforward what we're trying to do here. It's just build a library.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> That's Internet Archive founder Brewster Kahle on what life could look like if we all got to experience his dream of universal access to all human knowledge.<br />I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> And I'm Jason Kelley - EFF's activism director. And this is our podcast How to Fix the Internet.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> This show is about what the world could look like if we get things right online - we hear from activists, computer engineers, thinkers, artists and today, a librarian, about their visions for a better digital future that we can all work towards.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> And our guest today is someone who has been actively making the internet a better place for several decades now.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Brewster Kahle is an early internet pioneer, and a longtime advocate for digitization. He’s a computer engineer but also a digital librarian, and he is of course best known as the founder of the Internet Archive and the Wayback Machine. EFF and the Archive are close allies and friends, and Brewster himself was a member of EFF’s Board of Directors for many years. I’m proud to say that the Archive is also a client of EFF, including most recently when we served as part of the legal team trying to protect true library lending of digital materials like ebooks and audiobooks.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> All season we’ve been focusing on the tools and technologies of freedom – and one of the most important tools of freedom is a library.<br />We started off our conversation by getting his take on the role that AI should play in his vision of a universally accessible library.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> AI is absolutely critical and actually has been used for, well, a long period of time. You just think of, how does the magic of Google search happen, where you can just type a few words and get 10 links and several of them are actually really quite relevant. How do you do that? Those of us old enough to remember just keyword searching, that didn't work very well.<br />So it's going and using all this other information, metadata from other websites, but also learning from people, and machine learning at scale, that we've been able to make such progress. <br />Now there's the large language models, the generative AI, which is also absolutely fantastic. So we are digitizing obscure newsletters from theological missions in distant parts of the world. We are digitizing agricultural records from across decades of the 20th century.<br />And these materials are absolutely relevant now with climate change in our new environments because, well, things are moving. So the pests that used to be only in Mexico are now in Louisiana and Texas. It's completely relevant to go and learn from these, but it's not gonna be based on people going and doing keyword search and finding that newsletter and learning from it. It's gonna be based on these augmentations, but take all of these materials and try to make it useful and accessible to a generation that's used to talking to machines.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, I think that that's a really important thing. One of my favorite insights about AI is that it's a very different user interface. It's a way to have a conversational access to information. And I think AI represents one of those other shifts about how people think about accessing information. There's a lot of side effects of AI and we definitely have to be serious about those. But this shift can really help people learn better and find what they're looking for, but also find things that maybe they didn't think they were looking for.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> If we do it well, if we do it with public AI that is respectful, the opportunity for engaging people and in a more deep way to be able to have them get to literature that has been packed away, and we've spent billions of dollars in the library system over centuries going and building these collections that are now going to be accessible, not just to the reference librarian, not just to researchers, but to kind of anybody.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> Can I dig into this backstory of yours a little bit? Because you know, a lot of people may know how you ended up building the Internet Archive, but I don't think they know enough. I'd like to get more people to sort of have a model in tech for what they can do if they're successful. And you were, if I understand it right, you were one of the early successful internet stories.<br />You sold a company or two in the nineties and you could have probably quit then and instead you ended up building the Internet Archive. Did you have this moment of deciding to do this and how did you end up in library school in the first place? </span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> So I'm a little unusual in that I, I've only had one idea in my life, and so back in college in 1980 a friend posed, okay, you're an idealist. Yes. And a technologist. Yes. Paint a portrait that's better with your technology. It turned out that was an extremely difficult question to answer.<br />We were very good about complaining about things. You know, that was Cold War Times and Nicaragua and El Salvador, and there's lots of things to complain about, but it was like. What would be better? So I only came up with two ideas. one was protect people's privacy, even though they were going to throw it away if they were given the chance.<br />And the other was build the library of everything, the building of the library of everything, the digital library of Alexandria seemed too obvious. So I tried to work on the privacy one, but I couldn't make chips to encrypt voice conversations cheap enough to help the people I wanted to, but I learned how to make chips.<br />But then that got me engaged with the artificial intelligence lab at MIT and Danny Hillis and Marvin Minsky, they had this idea of building a thinking machine and to go and build a computer that was large enough to go and search everything. And that seemed absolutely critical.<br />So I helped work on that. Founded a company, Thinking Machines. That worked pretty well. So we got the massively parallel computers. We got the first search engine on the internet, then spun off a company to go and try to get publishers online called WAIS Incorporated. It came before the web, it was the first publishing system.<br />And so these were all steps in the path of trying to get to the library. So once we had publishers online, we also needed open source software. The free and open source software movement is absolutely critical to the whole story of how this whole thing came about, and open protocols, which was not the way people thought of things. 
They would go and make them proprietary and sue people and license things, but the internet world had this concept of how to share that ran very, very well. I wasn't central in the ARPANET to the internet conversation. But I did have quite a bit to do with some of the free and open source software, the protocol development, the origins of the web.<br />And once we had publishers, then, onboard, then I could turn my attention to building the library in 1996, so that's 28 years ago, something like that. And so we then said, okay, now we can build the library. What does that make up of? And we said, well, let's start with the web. Right? The most fragile of media.<br />I mean, Tim's system, Tim Berners-Lee's system, was very easy to implement, which was kind of great and one of the keys for his success, but it had some really, basically broken parts of it. You think of publishers and they would go and make copies and sell them to individuals or libraries, and they would stay alive much longer than the publishers.<br />But the web, there's only one copy and it's only on one machine. And so if they change that, then it's gone. So you're asking publishers to be librarians, which is a really bad idea. And so we thought, okay, why don't we go and make a copy of everything that was on the web. Every page from every website every two months.<br />And turns out you could do that. That was my Altavista moment when I actually went to see Altavista. It was the big search engine before Google and it was the size of two Coke machines, and it was kind of wild to go and look - that's the whole web! So the idea that you could go and gather it all back up again, uh, was demonstrated by Altavista and the Internet Archive continued on with other media type after media type, after media type.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> I heard you talk about the importance of privacy to you, and I know Cindy's gonna wanna dig into that a little bit with some of the work that EFF and the Archive have done together.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, for sure. One of the things I think, you know, your commitment to privacy is something that I think is very, very important to you and often kind of gets hidden because the, you know, the archive is really important. But, you know, we were able to stand up together against national security letters, you know, long before some of the bigger cases that came later and I wanted to, you know, when you reached out to us and said, look, we've gotten this national security letter, we wanna fight back. Like, it was obvious to you that we needed to push back. And I wanna hear you talk about that a little bit.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> Oh, this is a hero day. This is a hero moment for EFF and its own, you know, I, okay.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Well, and the Archive, we did it together.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> Well, no, we just got the damn letter. You saved our butts. Okay. So how this thing worked was in 2001, they passed this terrible law, the Patriot Act, and they basically made any government official almost be able to ask any organization and be able to get anything they wanted and they had a gag order. So not only could they just get any information, say on patrons’ reading habits in a library, they could make it so that you can't tell anybody about it. <br />So I got sat down one day and Kurt Opsahl from EFF said, this isn't your best day. You just got a letter demanding information about a patron of the Internet Archive. I said, they can't do that. He said, yeah, they can. And I said, okay, well this doesn't make any sense. I mean, the librarians have a long history of dealing with people being surveilled on what it is they read and then rounded up and bad things happen to them, right? This is, this is something we know how that movie plays out.<br />So I said, Kurt, what, what can we do? And he said, you have to supply the data. I said, what if we don't? And he said, jail. That wasn't my favorite sentence. So is there anything else we can do? And he said, well, you can sue the United States government. (laughter)<br />OH! Well I didn't even know whether I could bring this up with my board. I mean, remember there's a gag order. So there was just a need to know to be able to find out from the engineers what it is we had, what we didn't have. And fortunately we never had very much information. 'cause we don't keep it, we don't keep IP addresses if we possibly can. We didn't have that much, but we wanted to push back. And then how do you do that? And if it weren't for the EFF, and then EFF got the ACLU involved on a pro bono basis, I would never have been able to pull it off! I would have to have answered questions to the finance division of how, why are we spending all this money on lawyers? 
<br />The gag order made it so absolutely critical for EFF to exist, and to be ready and willing and funded enough to take on a court case against the United States government without, you know, having to go into a fundraising round.<br />But because of you, all of you listeners out there donating to EFF, having that piggy bank made it so that they could spring to the defense of the Internet Archive. The great thing about this was that after this lawsuit was launched, the government wanted out of this lawsuit as fast as possible.<br />They didn't want to go and have a library going and getting a court case to take their little precious toy of this Patriot Act, National Security letters away from them. So they wanted out, but we wouldn't let them. We wanted to be able to talk about it. They had to go and release the gag order. And I think we're only one or two or three organizations that have ever talked publicly about the hundreds of thousands, if not millions, of national security letters because we had EFF support.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Oh, thank you Brewster. That's very sweet. But it was a great honor to get to do this. And in hearing you talk about this future, I just wanna pull out a few of the threads. One is privacy and how important that is for access to information. Some people think of that as a different category, right? And it's not. It's part and parcel of giving people access to information. <br />I also heard the open source community and open protocols and making sure that people can, you know, crawl the web and do things with websites that might be different than the original creator wanted, but are still useful to society.<br />The other thing that you mentioned that I think it's important to lift up as well is, you know, when we're talking about AI systems, you're talking about public AI, largely. You're talking about things that similarly are not controlled by just one company, but are available so that the public really has access not only to the information, but to the tools that let them build the next thing.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> Yes, the big thing I think I may have gotten wrong starting this whole project in 1980 was the relaxation of the antitrust laws in the United States, that we now have these monster organizations that are not only just dominating a country's telecom or publishing systems or academic access, but it's worldwide now.<br />So we have these behemoth companies. That doesn't work very well. We want a game with many winners. We want that level playing field. We wanna make it so that new innovators can come along and, you know, try it out, make it go. In the early web, we had this, we watched sort of the popularity and the movement of popularity. And so you could start out with a small idea and it could become quite popular without having to go through the gatekeepers. And that was different from when I was growing up. I mean, if you had a new idea for a kid's toy, trying to get that on the shelves in a bunch of toy stores was almost impossible.<br />So the idea of the web and the internet made it so that good ideas could surface and grow, and that can work as long as you don't allow people to be gatekeepers. <br />We really need a mechanism for people to be able to grow, have some respect, some trust. If we really decrease the amount of trust, which is kind of, there's a bonfire of trust right now, then a lot of these systems are gonna be highly friction-full.<br />And how do we go and make it so that, you know, we have people that are doing worthwhile projects, not exploiting every piece of surveillance that they have access to. And how do we build that actually into the architecture of the web?</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> That leads, I think, directly into the kind of work that the archive has done about championing the distributed web, the D-web work. And you've done a lot of work to kind of create a space for a distributed web, a better web. And I want you to tell me a little bit about, you know, how does that fit into your picture of the future?</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> The wonderful thing about the internet still is that it can be changed. It's still built by people. They may be in corporations, but you can still make a big dent, and there were a couple “aha” moments for me in trying to, like, why do we build a better web? Right? What are the foundational parts that we need to be able to do that? <br />And we ended up with this centralization, not only of all the servers being in these colos that are operated by other companies and a cloud-based thing, other people own everything, that you can't go and just take your computer on your desk and be a first-class internet thing. That used to be possible with Gopher and WAIS and the early web. So we lost some of those things, but we can get them back. <br />Jason Scott at the Internet Archive, working with volunteers all over, made emulators of the early computers like IBM PCs and Macintosh and these old computers, Commodore 64, Atari machines, and they would run in JavaScript in your browser, so you could click and go and download an IBM PC and it boots in your browser and it uses the Internet Archive as a giant floppy drive to run your favorite game from 20 years ago. The cool thing about that for me, yes, I could get to play all my old games, it was kind of great, but we also had this ability to run a full-on computer in your browser, so you didn't even have to download and install something.<br />So you could go and be a computer on the internet, not just a consumer, a reader. You could actually be a writer, you could be a publisher, you could, you could do activities, you could, so that was fantastic. And then another big change was the protocols of the browsers changed to allow peer-to-peer interactions. That's how you get, you know, Google Meet or you get these video things that are going peer to peer where there's no central authority going in, interrupting your video streams or whatever. 
<br />So, okay, with these tools in hand now, then we could try to realize part of the dream that a lot of us had originally, and even Tim Berners-Lee, of building a decentralized web. Could you make a web such that your website is not owned and controlled on some computer someplace, but actually exists everywhere and nowhere, kind of a peer-to-peer backend for the web? <br />Could you make it so that if you run a club, that you could do a WordPress-like website that would then not live anywhere, but as readers were reading it, they would also serve it? And there would be libraries that would be able to go and archive it as a living object, not as just snapshots of pages. That became possible. It turns out it's still very hard, and the Internet Archive started pulling together people, doing these summits and these different conferences to get discussions around this and people are running with it.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, and so I love this because I know so many people who go to the archive to play Oregon Trail, right? And I love it when I get a chance to say, you know, this isn't just a game, right? This is a way of thinking that is reflected in this. I kind of love that, you know, ‘you died with dysentery’ becomes an entryway into a whole other way of thinking about the web.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> Let's take a quick moment to thank our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.<br />We also wanna thank EFF donors. You're the reason we exist, and EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. And also, if you can’t make it in person to this year’s EFF awards where we celebrate the people working towards the better future we all care so much about, you can watch the whole event at eff.org/awards.<br />We also wanted to share that our friend Cory Doctorow has a new podcast; have a listen to this:</span></p>
<p><span data-contrast="auto"><em><strong>WHO BROKE THE INTERNET TRAILER:</strong> How did the internet go from this? You could actually find what you were looking for right away, to this, I feel I can inhale. Spoiler alert, it was not an accident. I'm Cory Doctorow, host of Who Broke the Internet from CBC's Understood. In this four-part series, I'm gonna tell you why the internet sucks now, whose fault it is and my plan to fix it. Find Who Broke the Internet on whatever terrible app you get your podcasts.</em></span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> And now back to our conversation with Brewster Kahle.<br />The fact that you do things like archive these old games is something that I think a lot of people don't know. There are just so many projects that the internet archive does and it is interesting to hear how they're sort of all building towards this better future that is sort of built, like, sort of makes up the bones of the work that you do. Can you talk about any of the other projects that you are particularly sort of proud of that maybe other people haven't heard about?</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> Yeah, and I really wanna apologize. If you go to archive.org, it is daunting. Most people find things to read in the Internet Archive or see in the Internet Archive mostly by going to search engines, or Wikipedia. For instance, we really dedicated ourselves to try to help reinforce Wikipedia. We started archiving all of the outbound links. And we figured out how to work with the communities to allow us to fix those broken links. So we've now fixed 22 million broken links in Wikipedia, and 10,000 a day now get added that go back to the Wayback Machine.<br />Also, there are about two million books that are linked straight into, if you click on it, it goes right to the right page so you can go and see the citation. Not only is this important for homework, people that are after hours trying to cram for their, uh, for their homework, um, but it's also important for Wikipedians because, um, links in Wikipedia that go to someplace you can actually cite is a link that works, it gets more weight. <br />And if we're going to have all the literature, the scholarly literature and the book literature available in Wikipedia, it needs to be clickable. And you can't click your way into an OverDrive borrowed book from your library. You have to be able to do this from something like the Internet Archive. So Wikipedia, reinforcing Wikipedia. <br />Another is television. We've been archiving television 24 hours a day since the year 2000. Russian, Chinese, Japanese, Iraqi, Al Jazeera, BBC, CNN, ABC, Fox, 24 hours a day, DVD quality. And not all of it is available, but the US television news, you can search and find things. And we're also doing summarizations now, so you can start to understand – in English – what is Russian State television telling the Russians? So we can start to get perspectives. Or look inside other people's bubbles to be able to get an idea of what's going on. 
Or a macroscope ability to step back and get the bigger picture. That's what libraries are for, is to go and use these materials in new and different ways that weren't the way that the publishers originally intended.<br />Other things. We're digitizing about 3,000 books a day. So that's going along well. Then we are doing Democracy’s Library. Democracy's Library, I think, is a cool one. So democracies need an educated populace. So they tend to publish openly. Authoritarian governments and corporations don't care about having an educated populace. That's not their goal. They have other goals, um, but democracies want things to be openly available.<br />But it turns out that even though the United States, for instance, and all democracies publish openly, most of those materials are not available publicly. They may be available in some high-priced database system of somebody or other. But mostly they're just not available at all. <br />So we launched the Democracy's Library Project to go and take all of the published works at the federal level, the provincial and state level, and municipal levels, and make that all available in bulk and in services so that other people could also go and build new services on this. We launched it with Canada and the United States. The Canadians are kicking the United States's butt. I mean, they're doing so great. So Internet Archive Canada, working with University of Toronto, and universities all over, have already digitized all of the federal print materials, and by working with the national library there have archived the government websites in Canada.<br />In the United States we've been archiving, with the help of many others, including historically with the Library of Congress, and National Archives to go and collect all of the web pages and services and data sets from all of the United States Federal websites from before and after every presidential election. 
It's called the End of Term Crawl, and this has been going on since 2008, and we've gotten into a lot of news recently because this administration has decided to take a lot of materials off the web. And again, asking a publisher, whether it's a government or commercial publisher or a social media publisher, to go and be their own archive or their own library is a bad idea. Don't trust a corporation to do a library's job, was what one headline said. <br />So we've been archiving all of these materials and making them available. Now, can we weave them back into the web with the right URLs? No, not yet. That's up to the browser companies and also some of the standards organizations. But it's, at least it's there and you can go to the Wayback Machine to find it. <br />So the Internet Archive is about the 200th most popular website.<br />We get millions of people a day coming to the website, and we get about 6 million people coming and using the Internet Archive's resources who don't even come to the website. So it's just woven into the fabric of the web. So people say, oh, I've never heard of that. Never used it. It's like you probably have. It’s just part of how the internet works, it's plumbing. <br />So those are the aspects of the Internet Archive that are currently going on. We have people coming in all the time saying, now, but are you doing this? And I said, no, but you can and we can be infrastructure for you. I think of the Internet Archive as infrastructure for obsessives. So the people that say, I really need this to persist to the next generation. We say, great, what do you need? How do we make that come true?</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, I think that's both the superpower and in some ways the thing that the Internet Archive struggles with, which is because when you're infrastructure, people don't think about you and they don't wanna think about you, so that when you come under attack, it's hard to get people to see what they might be losing.<br />And I think one of the things that, you know, one of the reasons I wanted you to come on here and talk about the archive is I think we need to start making some of that invisible stuff visible because it's not magic. It's not automatic. It takes, you know, I mean, your personal courage in standing up is wonderful, but there need to be hundreds and thousands and hundreds of thousands saying, you know, this is our library, this is our future.<br />This is, you know, this is important and, and we need to stand up and hopefully if we stand up enough, you know, we don't have to do it every four years or so. But you know, the number of people who I sent to the Wayback Machine when they were very, very worried about US government information going down and, and pointed out, look, you know, the archive's been quietly doing this for, you know, nearly 20 years now, is a lot. And that's because again, you're kind of quietly doing the important work. <br />And so, you know, my hope is that, with this podcast and otherwise, we get a little more attention so that we can really build this better future and, and maybe in the better future, we don't have to think about it again. But right now there's a lot of different kinds of attacks.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> It's a challenging time, especially in the United States for libraries. There's the book bannings, defunding. Probably structurally the worst thing is the licensing model. The idea that there's no digital ownership. I mean, just like really bad behavior on the part of the corporations. Um, so, but Internet Archive Canada is doing well. Internet Archive Europe is coming back up and serving interesting roles with public AI to go and do publicly oriented, values-driven AI technology, which is kind of great. We'd like to see internet archives planted in lots of places. The idea that we can just depend on the United States jurisdictions for being the information resource for the world, I think that train is gone.<br />So let's go and build a robust infrastructure. It's kinda like what we saw with the internet. Can we build internet archives all over the world? And that takes not only money, but actually the money part is probably not the hardest part. It's people interested in dedicating their lives to open – to open source software, free and open source software, open access materials, the infrastructure to step out and work in non-profits as opposed to some of the, you know, the very tempting, um, stock option deals that come from these VC-funded whatevers, um, and work and do the good work that they can point to and they can be proud of for the rest of their lives.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah. And there is something so important about that, about getting to wake up every day and feel like you're making the world better. And I think your particular story about this, because you know, you made money early on, you did some companies and you decided to dig back into the public side of the work rather than, you know, stepping back and becoming a VC or, you know, buying your third island, or those kinds of things.<br />And I think that one of the things that's important is that I feel like there's a lot of people who don't think that you can be a technologist and a successful person without being an asshole. And, you know, I think you're a good counterexample of somebody who is deeply technical, who thinks about things in a, you know, how do we build better infrastructure, who understands how all of these systems work. And uses that information to build good, rather than, you know, necessarily deciding that the, you know, the best thing to do is to maybe take over a local government and build a small fiefdom to yourself.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> Well, thank you for that. And yes, for-profit entities are gasoline. They're explosive and they don't tend to last long. But I think one of the best ideas the United States has come up with is the 501(c)(3) public charity, which is not the complete antidote to the C corporations that were also put across by the United States since World War II in ways that shouldn't have been, but the 501(c)(3) public charities are interesting. They tend to last longer. They take away the incentive to sell out, yet leave an ability to be an operational entity. You just have to do public good. You have to actually live and walk the walk and go and do that. But I think it's a fabulous structure. I mean, you, Cindy, how old is the EFF now?</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> 35. This is our 35th anniversary.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> That's excellent. And the Internet Archive is like 28, 29 years old, and that's a long time for commercial, excuse me, for commercial entities or tech! Things in the tech world, they tend to turn over. So if you wanna build something long term, and you're willing to only do, as Lessig would put it, some rights reserved, or some profit motive reserved, then the 501(c)(3) public charity, which other countries are adopting, is a mechanism of building infrastructure that can last a long time where you get your alignment with the public interest.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, I think that's right. And it's been interesting to me, you know, being in this space for a really long time: the nonprofit salaries may not be as high, but the jobs are more stable. Like we don't have in our sector the waves of layoffs. I mean, occasionally for sure, you know, that is a thing that happens in the nonprofit digital rights sector. But I would say compared to the for-profit world, there’s a much more stable structure, um, because you don't have this gasoline idea, these kind of highs and lows and ups and downs. And that could be, you know, there's nothing wrong with riding that wave and making some money. But the question becomes, well, what do you do after that? Do you take that path to begin with? Or do you take that path later, when you've got some assets, you know, some people come outta school with loans and things like that.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> So we need this intermediary between the academic, the dot edu, and the dot com, and I think the dot org is such a thing. And also there was a time when we did a lot in dot gov of bringing civic tech. And civic tech in Canada is up and running and wonderful. So there's things that we can do in that.<br />We can also spread these ideas into other sectors like banking. How about some nonprofit banks, please? Why don't we have some nonprofit housing that actually supports nonprofit workers? We're doing an experiment with that to try to help support people that want to work in San Francisco for nonprofits and not feel that they have to commute from hours away.<br />So can we go and take some of these ideas pioneered by Richard Stallman, Larry Lessig, Vint Cerf, the Cindy Cohns, and go and try it in new sectors? You're doing a law firm, one of the best of the Silicon Valley law firms, and you give away your product. Internet Archive gives away its product. Wikipedia gives away its product. This is, like, not supposed to happen, but it works really well. And it requires support and interest of people to work there and also to support it from the outside. But it functions so much better. It's less friction. It's easier for us to work with other non-profits than it is to work with for-profits.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> Well, I'm glad that you brought up the nonprofit points and really dug into it because earlier, Brewster, you mentioned the people listening to this are, you know, the reason you were able to fight back against the national security letters is that EFF has supporters that keep it going, and those same supporters, the people listening to this are hopefully, and probably, the ones that help keep the Archive going. And I just wanted to make sure people know that the Archive is also supported by donors. And, uh, if people like it, there's nothing wrong with supporting both EFF and the Archive, and I hope everyone does both.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah. There's a whole community. And one of the things that Brewster has really been a leader in is seeing and making space for us to think of ourselves as a community. Because we're stronger together. And I think that's another piece of the somewhat quiet work that Brewster and the Archive do is knitting together the open world into thinking of itself as an open world and, able to move together and leverage each other.</span></p>
<p><span data-contrast="auto"><strong>BREWSTER KAHLE:</strong> Well thank you for all the infrastructure EFF provides. And if anybody's in San Francisco, come over on a Friday afternoon! And we give a tour! If I'm here, I give the tour and try to help answer questions. We even have ice cream. And so the idea is to go and invite people into this other alternative form of success that maybe they weren't taught about in business school or, or, or, uh, you know, they want to go off and do something else.<br />That's fine, but at least understand a little bit of how the underlying structures of the internet, whether it's some of the original plumbing, um, some of these visions of Wikipedia, Internet Archive. How do we make all of this work? And it's by working together, trusting each other to try to do things right, even when the technology allows you to do things that are abusive. Stepping back from that and building, uh, the safeguards into the technology eventually, and celebrate what we can get done to support a better civic infrastructure.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> That is the perfect place to end it. Thank you so much, Brewster, for coming on and bringing your inspiration to us.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> I loved that we wrapped up the season with Brewster because really there isn't anything more important, in a lot of ways, to freedom than a library. And the tool of freedom that Brewster built, the Internet Archive and all of the different pieces of it, is something that I think is so critical to how people think about the internet and what it can do, and honestly, it's taken for granted. I think once you start hearing Brewster talk about it, you realize just how important it is. I just love hearing from the person who thought of it and built it.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, he's so modest. The “I only had one idea,” right? Or two ideas, you know, one is privacy and the other is universal access to all the world's information. You know, just some little things.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> Just a few things that he built into practice.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Well, and you know, he and a lot of other people, I think he's the first to point out that this is a sector that there's a lot of people working in this area and it's important that we think about it that way.<br />It does take the long view to build things that will last. And then I think he also really talked about the nonprofit sector and how, you know, that space is really important. And I liked his framing of it being kind of in between the dot edu, the academics and the dot com, that the dot orgs play this important role in bringing the public into the conversation about tech, and that's certainly what he's done.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> I loved how much of a positive pitch this was for nonprofits, and I think a lot of people think of charities they don't think about EFF necessarily, or the Internet Archive, but this tech sector of nonprofits is, you know, that community you talked about all working together to sort of build this structure that protects people's rights online and also gives them access to these incredible tools and projects and resources and, you know, everyone listening to this is probably a part of that community in one way or another. It's much bigger than I think people realize.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah. And whether you're contributing code or doing lawyering or doing activism, you know, there's, there's spaces throughout, and those are only just three that we do. <br />But the other piece, and, and you know, I was very of course honored that he told the story about national security letters, but, you know, we can support each other. Right. That when somebody in this community comes under attack, that's where EFF often shows up. But when, you know, he said people have ideas and they wanna be able to develop them, you know, the archive provides the infrastructure. All of this stuff is really important and important to lean into in this time when we're really seeing a lot of public institutions and nonprofit institutions coming under attack.<br />What I really love about this season, Jason, is the way we've been able to shine our little spotlight on a bunch of different pieces of the sector. And there's so many more. You know, as somebody who started in this digital world in the nineties when, you know, I could present all of the case law about the internet on one piece of paper in a 20 minute presentation.<br />You know, watching this grow out and seeing that it's just the beginning has been really, it's been really fun to be able to talk to all of these pieces. And you know, to me the good news is that, that people, you know, sometimes their stories get presented as if they're alone or if there's this lone, you know, it's kind of a superhero narrative. There's this lone Brewster Kahle who's out there doing things, and now of course that's true. Brewster's, you know, again, Brewster's somebody who I readily point to when people need an example of somebody who, who did really well in tech but didn't completely become a money grubbing jerk as a result of it, but instead, you know, plowed it back into the community. 
It's important to have people like that, but it's also important to recognize that this is a community and that we're building it, and that it’s got plenty of space for the next person to show up and throw in ideas.<br />At least I hope that's how, you know, we fix the internet.<br /><br /><strong>JASON KELLEY:</strong> And that's it for this episode and for this season. Thank you to Brewster for the conversation today, and to all of our guests this season for taking the time to share their insight, experience, and wisdom with us these past few months. Everybody who listens gets to learn a little bit more about how to fix the internet.<br />That is our goal at EFF. And every time I finish one of these conversations, I think, wow, there's a lot to do. So thank you so much for listening. If you wanna help us do that work, go to eff.org/pod and you can donate, become a member, and um, we have 30,000 members, but we could always use a few more because there is a lot to fix.<br />Thank you so much. Our theme music is by Nat Keefe of BeatMower with Reed Mathis. And How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. I'm Jason Kelley. </span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> And I'm Cindy Cohn.</span></p>
<p><span data-contrast="auto"><em><strong>MUSIC CREDITS:</strong> This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes and sound design by Gaetan Harris.</em><br /></span></p>
</div></div></div></description>
<pubDate>Wed, 10 Sep 2025 07:05:46 +0000</pubDate>
<guid isPermaLink="false">111184 at https://www.eff.org</guid>
<category domain="https://www.eff.org/how-to-fix-the-internet-podcast">How to Fix the Internet: Podcast</category>
<dc:creator>Josh Richman</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025-htfi-brewster-blog.jpg" type="image/jpeg" length="215756" />
</item>
<item>
<title>Executive Director Cindy Cohn Will Step Down After 25 Years with EFF </title>
<link>https://www.eff.org/press/releases/executive-director-cindy-cohn-will-step-down-after-25-years-eff</link>
<description><div class="field field--name-field-pr-subhead field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">EFF Launches Search for Successor to ‘Visionary Lawyer and Leader’ </div></div></div><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-contrast="auto">SAN FRANCISCO – Electronic Frontier Foundation Executive Director Cindy Cohn will step down by mid-2026 after more than 25 years with the organization and a decade as its top officer leading the fight for digital freedoms.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">EFF – defending digital privacy, free speech, and innovation since 1990 – is launching a search for Cohn’s successor.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">“It’s been the honor of my life to help EFF grow and become the strong, effective organization it is today, but it’s time to make space for new leadership. I also want to get back into the fight for civil liberties more directly than I can as the executive director of a thriving 125-person organization,” Cohn said. “I’m incredibly proud of all that we’ve built and accomplished. One of our former interns once called EFF the joyful warriors for internet freedom and I have always loved that characterization.”</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">“I know EFF’s lawyers, activists and technologists will continue standing up for freedom, justice and innovation whether we’re fighting trolls, bullies, corporate oligarchs, clueless legislators or outright dictators,” she added.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">"Cindy Cohn has been a relentless advocate for the simple proposition that regular people have a fundamental right to privacy online,” said U.S. Sen. Ron Wyden, D-OR. “Her work – defending encryption, opposing warrantless NSA surveillance, and suing major corporations for violating customer privacy – has consistently put her on the side of users and individuals and against powerful entrenched interests. Cindy's steady leadership at EFF will be missed by everyone who believes the First and Fourth Amendments are just as necessary today as they were more than 200 years ago."</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Cohn, 61, first became involved with EFF in 1993, when EFF asked her to serve as the outside lead attorney in </span><a href="https://www.eff.org/cases/bernstein-v-us-dept-justice" target="_blank" rel="noopener noreferrer"><i><span data-contrast="none">Bernstein v. Dept. of Justice</span></i></a><span data-contrast="auto">, the successful First Amendment challenge to the U.S. export restrictions on cryptography. She served as EFF’s Legal Director as well as its General Counsel from 2000 through 2015, and she has served as Executive Director since then. She also has co-hosted EFF’s award-winning “</span><a href="https://www.eff.org/how-to-fix-the-internet-podcast" target="_blank" rel="noopener noreferrer"><span data-contrast="none">How to Fix the Internet</span></a><span data-contrast="auto">” podcast, which is about to conclude its sixth season. Her upcoming professional memoir covering her time at EFF, </span><i><span data-contrast="auto">Privacy’s Defender: My Thirty-Year Fight Against Digital Surveillance</span></i><span data-contrast="auto">, will be published in spring 2026 by MIT Press. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Cohn was named to </span><a href="https://thenonprofittimes.com/npt_articles/the-2020-npt-power-influence-top-50/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">The NonProfit Times 2020 Power &amp; Influence Top 50</span></a><span data-contrast="auto">. In 2018, Forbes included her as one of </span><a href="https://www.forbes.com/profile/cindy-cohn/?list=top-tech-women-america" target="_blank" rel="noopener noreferrer"><span data-contrast="none">America's Top 50 Women in Tech</span></a><span data-contrast="auto">. The National Law Journal named her one of the 100 most influential lawyers in America in 2013, noting: "[I]f Big Brother is watching, he better look out for Cindy Cohn." That publication also named her in 2006 for "rushing to the barricades wherever freedom and civil liberties are at stake online." In 2007, the National Law Journal named her one of the 50 most influential women lawyers in America. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">In 2010, the Intellectual Property Section of the State Bar of California awarded Cohn its Intellectual Property </span><a href="https://web.archive.org/web/20100826192755/http://ipsection.calbar.ca.gov/Education/TheIPInsitute/2010IPVanguardAwards.aspx" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Vanguard Award</span></a><span data-contrast="auto">, and in 2012 the Northern California Chapter of the Society of Professional Journalists awarded her its </span><a href="https://web.archive.org/web/20120521101438/http://www.spjnorcal.org:80/blog/2012/02/10/spj-norcal-names-first-amendment-honorees/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">James Madison Freedom of Information Award</span></a><span data-contrast="auto">.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Cohn said she made the decision to step down more than a year ago, and later informed EFF’s Board of Directors and executive staff. The Board of Directors has assembled a search committee, which in turn has engaged leadership advisory firm </span><a href="https://www.russellreynolds.com/en/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Russell Reynolds Associates</span></a><span data-contrast="auto"> to conduct a search for EFF’s new executive director. Inquiries about the search can be directed to </span><a href="mailto:EFF@russellreynolds.com" target="_blank" rel="noopener noreferrer"><span data-contrast="none">EFF@russellreynolds.com</span></a><span data-contrast="auto">. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">The search committee hopes to hire someone next spring, with Cohn planning to remain at EFF for a transition period through early summer. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">“Simply put, Cindy Cohn is an EFF institution,” said Gigi Sohn, chair of EFF’s Board of Directors. “Under her leadership, the organization has grown tremendously, cementing its role as the premier defender of digital privacy, free speech and innovation in the U.S., and perhaps the world. The EFF Board thanks Cindy for her many years of service to EFF, first as Legal Director and for the past 10 years as Executive Director, as well as her willingness to help the organization through this leadership transition. We wish her all the best in her future endeavors, which undoubtedly will be equally as, if not more, successful.”</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">“Cindy has been a huge part of EFF’s 35-year history and growth, and the organization simply wouldn’t be where it is today - at the forefront of defending civil liberties in the digital world - without her,” said EFF co-founder Mitch Kapor. “Her strong, compassionate leadership has set a clear and impactful road map for EFF’s work for years to come.”</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">“Cindy Cohn is a visionary lawyer and leader who has helped make EFF the world’s foremost digital rights organization,” said American Civil Liberties Union Deputy Legal Director Ben Wizner. “She has also been a dear friend and mentor to so many of us, leading with her warmth and humor as much as her brilliance. I’m excited to see her next act and confident she’ll find new strategies for protecting our rights and liberties.”</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">“Cindy is a force in the digital rights community,” said Center for Democracy &amp; Technology President and CEO Alexandra Reeve Givens. “Her visionary leadership has pushed the field forward, championing the rights of individual users and innovators in a fast-changing digital world. Cindy is a tireless advocate for user privacy, free expression, and ensuring technology serves the public good. Her legacy at EFF stands not just in the policy battles and complex cases she’s won, but in the foundation she has built for the next generation of digital rights defenders.”</span><span data-ccp-props="{}"> </span></p>
<p><b><span data-contrast="auto">For more about Cindy Cohn, with hi-res photo:</span></b> <a href="https://www.eff.org/about/staff/cindy-cohn" target="_blank" rel="noopener noreferrer"><span data-contrast="none">https://www.eff.org/about/staff/cindy-cohn</span></a><span data-ccp-props="{}"> </span></p>
</div></div></div><div class="field field--name-field-contact field--type-node-reference field--label-above"><div class="field__label">Contact:&nbsp;</div><div class="field__items"><div class="field__item even"><div class="ds-1col node node--profile view-mode-node_embed node--node-embed node--profile--node-embed clearfix">
<div class="">
<div class="field field--name-field-profile-first-name field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">Josh</div></div></div><div class="field field--name-field-profile-last-name field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">Richman</div></div></div><div class="field field--name-field-profile-title field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">Communications Director</div></div></div><div class="field field--name-field-profile-email field--type-email field--label-hidden"><div class="field__items"><div class="field__item even"><a href="mailto:jrichman@eff.org">jrichman@eff.org</a></div></div></div> </div>
</div>
</div></div></div></description>
<pubDate>Tue, 09 Sep 2025 21:00:26 +0000</pubDate>
<guid isPermaLink="false">111185 at https://www.eff.org</guid>
<dc:creator>Josh Richman</dc:creator>
</item>
<item>
<title>EFF Awards Spotlight ✨ Software Freedom Law Center, India</title>
<link>https://www.eff.org/deeplinks/2025/08/eff-awards-spotlight-software-freedom-law-center-india</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>In 1992 EFF presented our very first awards <a href="https://www.eff.org/awards/past-winners">recognizing key leaders and organizations</a> advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!</p><p><a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">All are invited to attend the EFF Awards</a> on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!</p><p class="take-action"><a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">REGISTER TODAY!</a></p><p class="take-action take-explainer"><strong>GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35</strong></p><p>If you're not able to make it, we'll also be <a href="http://www.eff.org/livestream-effawards2025">hosting a livestream of the event</a> on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to <a href="https://www.youtube.com/efforg">YouTube</a> and the <a href="https://archive.org/details/@electronic_frontier_foundation_eff_">Internet Archive</a> after the livestream.</p><p>We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. 
And last, but certainly not least—<strong>Software Freedom Law Center, India, winner of the EFF Award for Defending Digital Freedoms:</strong></p><p class="image-left center-image"><img src="https://www.eff.org/files/2025/08/29/eff-awards-winners-instagram_sflc.png" alt="EFF Awards Winner Software Freedom Law Center, India" width="300" height="300" /></p><p>Software Freedom Law Center, India is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about the use of technology. SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. It also tracks <a href="https://internetshutdowns.in/" target="_blank" rel="noopener noreferrer">internet shutdowns</a> and <a href="https://freespeech.sflc.in/" target="_blank" rel="noopener noreferrer">censorship incidents</a> across India, provides <a href="https://security.sflc.in/" target="_blank" rel="noopener noreferrer">digital security training</a>, and has launched the <a href="https://ddn.sflc.in/" target="_blank" rel="noopener noreferrer">Digital Defenders Network</a>, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platforms that has been used to harass women in India.</p><p>We're excited to celebrate SFLC.IN and the other EFF Award winners in person in San Francisco on September 10! <a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">We hope that you'll join us there.</a></p><hr /><p>Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.</p><p>Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit <a class="theme markdown__link" href="http://eff.org/thanks" target="_blank" rel="noopener noreferrer">eff.org/thanks</a> or contact <a class="theme markdown__link" href="mailto:tierney@eff.org" target="_blank" rel="noopener noreferrer">tierney@eff.org</a> for more information on corporate giving and sponsorships.</p><p>EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full <a href="https://www.eff.org/pages/event-expectations" target="_blank" rel="noopener noreferrer">Event Expectations</a>.</p><p><em>Questions? Email us at <a href="mailto:events@eff.org?subject=EFF%20Awards">events@eff.org</a>.</em></p>
</div></div></div></description>
<pubDate>Fri, 05 Sep 2025 17:18:33 +0000</pubDate>
<guid isPermaLink="false">111106 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025_effawards_banner_sflcindia.png" type="image/png" length="421553" />
</item>
<item>
<title>Age Verification Is A Windfall for Big Tech—And A Death Sentence For Smaller Platforms</title>
<link>https://www.eff.org/deeplinks/2025/09/age-verification-windfall-big-tech-and-death-sentence-smaller-platforms</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><em><strong>Update September 10, 2025</strong>: Bluesky <a href="https://bsky.social/about/blog/09-10-2025-age-assurance-approach">announced</a> today that it would implement age verification measures in South Dakota and Wyoming to comply with <a href="https://www.eff.org/deeplinks/2025/08/book-bans-internet-bans-wyoming-lets-parents-control-whole-states-access-internet">laws there</a>. Bluesky continues to block access in Mississippi.</em></p>
<p><span>If you live in Mississippi, you may have noticed that you are no longer able to log into your Bluesky or Dreamwidth accounts from within the state. That’s because, in a chilling early warning sign for the U.S., both social platforms decided to block all users in Mississippi from their services rather than risk hefty fines under the state’s oppressive age verification mandate. </span></p>
<p><span>If this sounds like censorship to you, you’re right—it is. But it’s not these small platforms’ fault. This is the unfortunate result of Mississippi’s wide-sweeping age verification law, </span><a href="https://www.eff.org/deeplinks/2024/10/eff-fifth-circuit-age-verification-laws-will-hurt-more-they-help"><span>H.B. 1126</span></a><span>. Though the law had previously been </span><a href="https://netchoice.org/wp-content/uploads/2024/07/NetChoice-v-Fitch-District-Court-Preliminary-Injuction-Ruling-July-1-2024.pdf"><span>blocked</span></a><span> by a federal district court, the Supreme Court </span><a href="https://www.supremecourt.gov/opinions/24pdf/25a97_5h25.pdf"><span>lifted</span></a><span> that injunction last month, even as one justice (Kavanaugh) concluded that the law is “likely unconstitutional.” This allows H.B. 1126 to go into effect while the broader constitutional challenge works its way through the courts. EFF has opposed H.B. 1126 from the start, arguing consistently and constantly that it violates all internet users’ </span><a href="https://www.eff.org/ko/deeplinks/2024/06/mississippi-cant-wall-everyones-social-media-access-protect-children"><span>First Amendment</span></a><span> rights, seriously risks our </span><a href="https://www.eff.org/deeplinks/2023/10/your-states-child-safety-law-unconstitutional-try-comprehensive-data-privacy"><span>privacy</span></a><span>, and forces platforms to implement invasive surveillance systems that ruin our </span><a href="https://www.eff.org/deeplinks/2023/03/age-verification-mandates-would-undermine-anonymity-online"><span>anonymity</span></a><span>. </span></p>
<p><span>Lawmakers often sell age-verification mandates as a silver bullet for Big Tech’s harms, but in practice, these laws do nothing to rein in the tech giants. Instead, they end up crushing smaller platforms that can’t absorb the exorbitant costs. Now that Mississippi’s mandate has gone into effect, the reality is clear: age verification laws entrench Big Tech’s dominance, while pushing smaller communities like Bluesky and Dreamwidth offline altogether. </span></p>
<h3><b>Sorry Mississippians, We Can’t Afford You</b></h3>
<p><span>Bluesky was the first platform to make the announcement. In a </span><a href="https://bsky.social/about/blog/08-22-2025-mississippi-hb1126"><span>public blogpost</span></a><span>, Bluesky condemned H.B. 1126’s broad scope, barriers to innovation, and privacy implications, explaining that the law forces platforms to “make </span><i><span>every</span></i><span> Mississippi Bluesky user hand over sensitive personal information and undergo age checks to access the site—or risk massive fines.” As Bluesky noted, “This dynamic entrenches existing big tech platforms while stifling the innovation and competition that benefits users.” Instead, Bluesky made the decision to cut off Mississippians entirely until the courts consider whether to overturn the law. </span></p>
<p><span>About a week later, we saw a similar </span><a href="https://dw-news.dreamwidth.org/44429.html"><span>announcement</span></a><span> from Dreamwidth, an open-source online community similar to LiveJournal where users share creative writing, fanfiction, journals, and other works. In its post, Dreamwidth shared that it too would have to resort to blocking the IP addresses of all users in Mississippi because it could not afford the hefty fines. </span></p>
<p><span>Dreamwidth wrote: “Even a single $10,000 fine would be rough for us, but the per-user, per-incident nature of the actual fine structure is an existential threat.” The service also expressed fear that being involved in the lawsuit against Mississippi left it particularly vulnerable to retaliation—a clear illustration of the chilling effect of these laws. For Dreamwidth, blocking Mississippi users entirely was the only way to survive. </span></p>
<h3><b>Age Verification Mandates Don’t Rein In Big Tech—They Entrench It</b></h3>
<p><span>Proponents of age verification claim that these mandates will hold Big Tech companies accountable for their outsized influence, but really the opposite is true. As we can see from Mississippi, age verification mandates concentrate and consolidate power in the hands of the largest companies—the only entities with the resources to build costly compliance systems and absorb potentially massive fines. While megacorporations like </span><a href="https://techcrunch.com/2025/07/29/youtube-rolls-out-age-estimatation-tech-to-identify-u-s-teens-and-apply-additional-protections/"><span>Google</span></a><span> (with YouTube) and </span><a href="https://apnews.com/article/instagram-teens-parents-age-verification-meta-94f1f9915ae083453d23bf9ec57e7c7b"><span>Meta</span></a><span> (with Instagram) are already experimenting with creepy new age-estimation tech on their social platforms, smaller sites like Bluesky and Dreamwidth simply cannot afford the risks. </span></p>
<p><span>We’ve already seen how this plays out in the UK. When the </span><a href="https://www.eff.org/deeplinks/2025/08/blocking-access-harmful-content-will-not-protect-children-online-no-matter-how"><span>Online Safety Act</span></a><span> came into force recently, platforms like Reddit, YouTube, and Spotify implemented broad (and </span><a href="https://www.eff.org/deeplinks/2025/08/americans-be-warned-lessons-reddits-chaotic-uk-age-verification-rollout"><span>extremely clunky</span></a><span>) age verification measures while </span><a href="https://www.telegraph.co.uk/business/2024/12/17/hundreds-of-websites-to-shut-down-under-chilling-internet/"><span>smaller sites</span></a><span>, including </span><a href="https://www.techdirt.com/2024/12/20/death-of-a-forum-how-the-uks-online-safety-act-is-killing-communities/"><span>forums</span></a><span> on </span><a href="https://web.archive.org/web/20250408084153/https://www.dadswithkids.co.uk/ams/forum-closure.28/"><span>parenting</span></a><span>,</span><a href="https://web.archive.org/web/20250120205725/https://www.thegreenlivingforum.net/forum/viewtopic.php?f=2&amp;t=114519"> <span>green living</span></a><span>, and</span><a href="https://web.archive.org/web/20250102185206/https://www.gamingonlinux.com/forum/topic/6463/"> <span>gaming on Linux</span></a><span>, were forced to shutter. Take, for example, the </span><a href="https://www.thehamsterforum.com/threads/big-sad-forum-news-online-safety-act.2091/"><span>Hamster Forum</span></a><span>, “home of all things hamstery,” which announced in March 2025 that the OSA would force it to shut down its community message boards. Instead, users were directed to migrate over to Instagram with this wistful disclaimer: “It will not be the same by any means, but . . . We can follow each other and message on there and see each others [sic] individual posts and share our hammy photos and updates still.” </span></p>
<p class="pull-quote"><span>When smaller platforms inevitably cave under the financial pressure of these mandates, users will be pushed back to the social media giants.</span></p>
<p><span>This perfectly illustrates the market impact of online age verification laws. When smaller platforms inevitably cave under the financial pressure of these mandates, users will be pushed back to the social media giants. These huge companies—those that can afford expensive age verification systems and aren’t afraid of a few $10,000 fines while they figure out compliance—will end up getting </span><i><span>more</span></i><span> business, </span><i><span>more</span></i><span> traffic, and </span><i><span>more </span></i><span>power to censor users and violate their privacy. </span></p>
<p><span>This consolidation of power is a dream come true for the Big Tech platforms, but it’s a nightmare for users. While the megacorporations get more traffic and a whole lot more user data (read: profit), users are left with far fewer community options and a bland, corporate surveillance machine instead of a vibrant public sphere. The internet we all fell in love with is a diverse and colorful place, full of innovation, connection, and unique opportunities for self-expression. That internet—</span><i><span>our </span></i><span>internet—is worth defending.</span></p>
<div class="node__content">
<div class="field field--name-body field--type-text-with-summary field--label-hidden">
<div class="field__items">
<div class="field__item even">
<p class="take-action"><a href="https://act.eff.org/action/don-t-let-congress-decide-what-we-re-allowed-to-read-online" target="_blank" rel="noopener noreferrer">TAKE ACTION</a></p>
<p class="take-explainer"><a href="https://act.eff.org/action/don-t-let-congress-decide-what-we-re-allowed-to-read-online" target="_blank" rel="noopener noreferrer">Don't let Congress censor the internet</a></p>
</div>
</div>
</div>
</div>
</div></div></div></description>
<pubDate>Fri, 05 Sep 2025 17:07:47 +0000</pubDate>
<guid isPermaLink="false">111181 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/free-speech">Free Speech</category>
<category domain="https://www.eff.org/issues/competition">Competition</category>
<category domain="https://www.eff.org/issues/innovation">Creativity &amp; Innovation</category>
<category domain="https://www.eff.org/issues/big-tech">Big Tech</category>
<category domain="https://www.eff.org/issues/anonymity">Anonymity</category>
<category domain="https://www.eff.org/bloggers">Blogger and Other Creator Rights</category>
<dc:creator>Molly Buckley</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/ageverificationbanner.png" type="image/png" length="1291379" />
</item>
<item>
<title>EFF Joins 55 Civil Society Organizations Urging the End of Sanctions on UN Special Rapporteur Francesca Albanese</title>
<link>https://www.eff.org/deeplinks/2025/09/eff-joins-55-civil-society-organizations-urging-end-sanctions-un-special</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even">
<p><span>Following the U.S. government's overreaching decision to </span><a href="https://www.state.gov/releases/office-of-the-spokesperson/2025/07/sanctioning-lawfare-that-targets-u-s-and-israeli-persons/"><span>impose sanctions</span></a><span> against Francesca Albanese, the United Nations Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967, EFF joined more than 50 civil society organizations in </span><a href="https://7amleh.org/post/joint-statement-1-august-en"><span>calling for</span></a><span> the U.S. to lift the sanctions. </span></p>
<p><span>The U.S.’s sanctions on Francesca Albanese were </span><a href="https://www.state.gov/releases/office-of-the-spokesperson/2025/07/sanctioning-lawfare-that-targets-u-s-and-israeli-persons"><span>formally issued</span></a><span> in July 2025, pursuant to Section 1(a)(ii)(A) of President Trump’s </span><a href="https://www.whitehouse.gov/presidential-actions/2025/02/imposing-sanctions-on-the-international-criminal-court/"><span>Executive Order 14203</span></a><span>, which the U.S. issued against the International Criminal Court (ICC) in February for having “engaged in illegitimate and baseless actions targeting America and our close ally Israel.” Under this Executive Order, the State Department is instructed to name specific people who have worked with or for the ICC. Rapporteur Albanese joins several ICC judges and the lead prosecutor in having their U.S. property and interests in property blocked, and in facing restrictions on entering the country, banking, and more. </span></p>
<p><span>One of the reasons cited in the far-reaching U.S. sanction is Albanese’s engagement with the ICC to investigate or prosecute nationals of the U.S. and Israel. The sanction came just days after the publication of the Special Rapporteur’s </span><a href="https://www.ohchr.org/en/documents/country-reports/ahrc5923-economy-occupation-economy-genocide-report-special-rapporteur"><span>recent report</span></a><span> to the UN Human Rights Council, “From economy of occupation to economy of genocide.” In her report, the Special Rapporteur “urges the International Criminal Court and national judiciaries to investigate and prosecute corporate executives and/or corporate entities for their part in the commission of international crimes and laundering of the proceeds from those crimes.” </span></p>
<p>As a UN Special Rapporteur, Albanese’s role is to conduct independent research, gather information, and prepare reports on human rights situations, including documenting violations and providing recommendations to the Human Rights Council and other human rights bodies. Special Rapporteurs are independent experts chosen by the UN Human Rights Council in Geneva. They do not represent the UN or hold any formal authority, but their reports and findings are essential for advocacy in transnational situations, informing prosecutors at the International Criminal Court, and pressuring countries over human rights abuses. </p>
<p>The unilateral sanctions imposed on the UN Special Rapporteur not only target her as an individual but also threaten the broader international human rights framework, undermining crucial work in monitoring and reporting on human rights issues. Such measures risk politicizing Special Rapporteurs’ mandates, discouraging frank reporting, and creating a chilling effect on human rights defenders more broadly. With the 80th session of the UN General Assembly opening in New York this September, these sanctions and travel restrictions further impede the Special Rapporteur’s capacity to fulfill her mandate and report on human rights abuses in Palestine.</p>
<p>The Special Rapporteur’s report identifies how AI, cloud services, biometric surveillance, and predictive policing technologies have reinforced military operations, population control, and the unlawful targeting of civilians in the ongoing genocide in Gaza. More specifically, it illuminates the role of U.S. tech giants like Microsoft, Alphabet (Google’s parent company), Amazon, and IBM in providing dual-use infrastructure to “integrate mass data collection and surveillance, while profiting from the unique testing ground for military technology offered by the occupied Palestinian territory.” </p>
<p><span>This report is well within her legal </span><a href="https://www.ohchr.org/en/special-procedures/sr-palestine"><span>mandate</span></a><span> to investigate and report on human rights issues in Palestine and provide critical oversight and accountability for human rights abuses. This work is particularly essential at a time when the very survival of Palestinians in the occupied Gaza Strip is at stake—journalists are </span><a href="https://www.independent.co.uk/news/world/middle-east/israel-al-jazeera-journalists-killed-gaza-names-b2805894.html"><span>being killed</span></a><span> with deplorable frequency; </span><a href="https://www.eff.org/deeplinks/2024/03/access-internet-infrastructure-essential-wartime-and-peacetime"><span>internet shutdowns</span></a><span> and </span><a href="https://www.eff.org/deeplinks/2023/11/platforms-must-stop-unjustified-takedowns-posts-and-about-palestinians"><span>biased censorship</span></a><span> by social media platforms are preventing vital information from circulating within and leaving Gaza; and U.S.-based tech companies are </span><a href="https://www.eff.org/deeplinks/2024/08/digital-apartheid-gaza-big-tech-must-reveal-their-roles-tech-used-human-rights-0"><span>continuing to be opaque</span></a><span> about their role in providing technologies to the Israeli authorities for use in the ongoing genocide against Palestinians, despite the </span><a href="https://www.theguardian.com/world/2025/aug/06/microsoft-israeli-military-palestinian-phone-calls-cloud"><span>mounting evidence</span></a><span>. </span></p>
<p><span>EFF has </span><a href="https://www.eff.org/deeplinks/2024/08/digital-apartheid-gaza-big-tech-must-reveal-their-roles-tech-used-human-rights-0"><span>repeatedly called</span></a><span> for greater transparency relating to the role of Big Tech companies like Google, Amazon, and Microsoft in human rights abuses across Gaza and the West Bank, with these U.S.-based companies coming </span><a href="https://www.notechforapartheid.com/"><span>under pressure</span></a><span> to reveal more about the services they provide and the nature of their relationships with the Israeli forces engaging in the military response. Without greater transparency, the public cannot tell whether these companies are complying with human rights standards—both those set by the </span><a href="https://www.ohchr.org/en/publications/reference-publications/guiding-principles-business-and-human-rights"><span>United Nations</span></a><span> and those they have publicly </span><a href="https://about.google/intl/ALL_us/human-rights/"><span>set for</span></a> <a href="https://sustainability.aboutamazon.com/human-rights/principles"><span>themselves</span></a><span>. We know that this conflict has resulted in alleged war crimes and has involved massive, ongoing surveillance of civilians and refugees living under what </span><a href="https://www.icj-cij.org/sites/default/files/case-related/186/186-20240719-adv-01-00-en.pdf"><span>international law recognizes as an illegal occupation</span></a><span>. That kind of surveillance requires significant technical support and it seems unlikely that it could occur without any ongoing involvement by the companies providing the platforms. </span></p>
<p><span>Top UN human rights officials have </span><a href="https://news.un.org/en/story/2025/07/1165359"><span>called for the reversal of the sanctions</span></a><span> against the Special Rapporteur, voicing serious concerns about the dangerous precedent this sets in undermining human rights. The UN High Commissioner for Human Rights, Volker Türk, </span><a href="https://news.un.org/en/story/2025/07/1165359"><span>called for</span></a><span> a prompt reversal of the sanctions and noted that, “even in the face of fierce disagreement, UN member states should engage substantively and constructively, rather than resort to punitive measures.” Similarly, UN Spokesperson Stéphane Dujarric </span><a href="https://news.un.org/en/story/2025/07/1165359"><span>noted</span></a><span> that whilst Member States “are perfectly entitled to their views and to disagree with” experts’ reports, they should still “engage with the UN’s human rights architecture.”</span></p>
<p><span>In a </span><a href="https://www.louisianafirstnews.com/news/politics/ap-politics/ap-things-to-know-about-the-un-special-rapporteur-sanctioned-by-the-us/"><span>press conference</span></a><span>, Albanese said she believed that the sanctions were calculated to weaken her mission, and questioned why they had even been introduced: “for having exposed a genocide? For having denounced the system? They never challenged me on the facts.”</span></p>
<p><span>The United States must reverse these sanctions, and respect human rights for all—not just for the people they consider worthy of having them.</span></p>
<p><span>Read our full civil society letter </span><a href="https://7amleh.org/post/joint-statement-1-august-en"><span>here</span></a><span>.</span></p>
</div></div></div></description>
<pubDate>Fri, 05 Sep 2025 12:51:55 +0000</pubDate>
<guid isPermaLink="false">111180 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/free-speech">Free Speech</category>
<dc:creator>Electronic Frontier Foundation</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/icon-2019-freespeech.png" type="image/png" length="14323" />
</item>
<item>
<title>California Lawmakers: Support S.B. 524 to Rein in AI Written Police Reports</title>
<link>https://www.eff.org/deeplinks/2025/09/california-lamakers-support-sb-524-rein-ai-written-police-reports</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>EFF urges California state lawmakers to pass </span><a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202520260SB524"><span>S.B. 524</span></a><span>, authored by Sen. Jesse Arreguín</span><span>. This bill is an important first step in regaining control over police using generative AI to write their narrative police reports. </span></p>
<p><span>This bill does several important things. It mandates that police reports written by AI include disclaimers, on every page or within the body of the text, making clear that the report was written in part or in whole by a computer. It also requires that the first draft of any AI-written report be retained, so that defense attorneys, judges, police supervisors, or any other auditing entity can more easily see which portions of the final report were written by AI and which were written by the officer. Further, the bill requires officers to sign and verify that they read the report and that its facts are correct. And it bans AI vendors from selling or sharing the information a police agency provided to the AI.</span></p>
<p><span>These common-sense, first-step reforms are important: watchdogs are struggling to figure out where and how AI is being used in a police context. In fact, a popular AI police report writing tool, Axon’s Draft One, would be out of compliance with this bill, which would require Axon to redesign the tool to make it more transparent. </span></p>
<p class="pull-quote"><span>This bill is an important first step in regaining control over police using generative AI to write their narrative police reports. </span></p>
<p><span>Draft One takes audio from an officer’s body-worn camera and uses AI to turn that dialogue into a narrative police report. Because independent researchers have been unable to test it, there are important </span><a href="https://www.eff.org/deeplinks/2024/05/what-can-go-wrong-when-police-use-ai-write-reports"><span>questions</span></a><span> about how the system handles things like sarcasm, out-of-context comments, or interactions with members of the public who speak languages other than English. Another major concern is Draft One’s inability to keep track of which parts of a report were written by people and which parts were written by AI.</span><a href="https://www.eff.org/deeplinks/2025/07/axons-draft-one-designed-defy-transparency"><span> By design, the product does not retain different iterations</span></a><span> of the draft—making it easy for an officer to say, “I didn’t lie in my police report, the AI wrote that part.” </span></p>
<p><span>All lawmakers should pass regulations on AI-written police reports. This technology could be nearly </span><i><span>everywhere</span></i><span>, and soon. Axon is a top supplier of body-worn cameras in the United States, which means it has a massive ready-made customer base. Through the </span><a href="https://www.eff.org/deeplinks/2025/04/beware-bundle-companies-are-banking-becoming-your-police-departments-favorite"><span>bundling of products</span></a><span>, AI-written police reports could end up at a vast percentage of police departments. </span></p>
<p><span>AI-written police reports are unproven in terms of their accuracy and their overall effects on the criminal justice system. Vendors still have a long way to go to prove this technology can be transparent and auditable. While it would not solve all of the many problems of AI encroaching on the criminal justice system, S.B. 524 is a good first step to rein in an unaccountable piece of technology. </span></p>
<p><span>We urge California lawmakers to pass S.B. 524. </span></p>
</div></div></div></description>
<pubDate>Thu, 04 Sep 2025 18:48:59 +0000</pubDate>
<guid isPermaLink="false">111177 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/ai">Artificial Intelligence & Machine Learning</category>
<dc:creator>Matthew Guariglia</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/robot-robot-1.png" type="image/png" length="46848" />
</item>
<item>
<title>EFF Awards Spotlight ✨ Erie Meyer</title>
<link>https://www.eff.org/deeplinks/2025/08/eff-awards-spotlight-erie-meyer</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>In 1992 EFF presented our very first awards <a href="https://www.eff.org/awards/past-winners">recognizing key leaders and organizations</a> advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!</p>
<p><a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">All are invited to attend the EFF Awards</a> on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!</p>
<p class="take-action"><a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">REGISTER TODAY!</a></p>
<p class="take-action take-explainer"><strong>GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35</strong></p>
<p>If you're not able to make it, we'll also be <a href="http://www.eff.org/livestream-effawards2025">hosting a livestream of the event</a> on Friday, September 12 at 12:00 PM PT. The event will also be recorded and posted to <a href="https://www.youtube.com/efforg">YouTube</a> and the <a href="https://archive.org/details/@electronic_frontier_foundation_eff_">Internet Archive</a> after the livestream.</p>
<p>We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But before we kick off the ceremony next week, let's take a closer look at each of the honorees. This time—<strong>Erie Meyer, winner of the EFF Award for Protecting Americans' Data:</strong></p>
<p class="image-left center-image"><img src="https://www.eff.org/files/2025/08/29/eff-awards-winners-instagram_erie.png" alt="EFF Awards Winner Erie Meyer" width="300" height="300" /></p>
<p>Erie Meyer is a Senior Fellow at the <a href="https://www.vanderbilt.edu/vanderbilt-policy-accelerator/" target="_blank" rel="noopener noreferrer">Vanderbilt Policy Accelerator</a>, where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the <a href="https://www.law.georgetown.edu/tech-institute/" target="_blank" rel="noopener noreferrer">Georgetown Law Institute for Technology Law &amp; Policy</a>. Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she <a href="https://storage.courtlistener.com/recap/gov.uscourts.dcd.277287/gov.uscourts.dcd.277287.18.0_1.pdf" target="_blank" rel="noopener noreferrer">filed a declaration in federal court</a> in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed <a href="https://storage.courtlistener.com/recap/gov.uscourts.mdd.577321/gov.uscourts.mdd.577321.111.9.pdf" target="_blank" rel="noopener noreferrer">a declaration in another case</a> warning about using private-sector AI on government information. That same month, she <a href="https://oversight.house.gov/wp-content/uploads/2025/04/Testimony-of-Erie-Meyer-2.pdf" target="_blank" rel="noopener noreferrer">testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation</a> that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.”</p>
<p>We're excited to celebrate Erie Meyer and the other EFF Award winners in person in San Francisco on September 10! <a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">We hope that you'll join us there.</a></p><hr /><p>Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.</p><p>Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit <a class="theme markdown__link" href="http://eff.org/thanks" target="_blank" rel="noopener noreferrer">eff.org/thanks</a> or contact <a class="theme markdown__link" href="mailto:tierney@eff.org" target="_blank" rel="noopener noreferrer">tierney@eff.org</a> for more information on corporate giving and sponsorships.</p><p>EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full <a href="https://www.eff.org/pages/event-expectations" target="_blank" rel="noopener noreferrer">Event Expectations</a>.</p><p><em>Questions? Email us at <a href="mailto:events@eff.org?subject=EFF%20Awards">events@eff.org</a>.</em></p>
</div></div></div></description>
<pubDate>Thu, 04 Sep 2025 17:32:35 +0000</pubDate>
<guid isPermaLink="false">111105 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025_effawards_banner_eriemeyer.png" type="image/png" length="409007" />
</item>
<item>
<title>From Libraries to Schools: Why Organizations Should Install Privacy Badger</title>
<link>https://www.eff.org/deeplinks/2025/09/libraries-schools-why-organizations-should-install-privacy-badger</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>In an era of pervasive online surveillance, organizations have an important role to play in protecting their communities’ privacy. Millions of people browse the web on computers provided by their schools, libraries, and employers. By default, </span><a href="https://www.eff.org/deeplinks/2018/11/google-chromes-users-take-back-seat-its-bottom-line"><span>popular</span></a> <a href="https://privacybadger.org/#Is-Privacy-Badger-compatible-with-Firefox%27s-built-in-content-blocking"><span>browsers</span></a><span> on these computers leave people exposed to hidden trackers.</span></p>
<p><span>Organizations can enhance privacy and security on their devices by installing </span><a href="https://privacybadger.org/"><span>Privacy Badger</span></a><span>, EFF’s free, open source browser extension that automatically </span><a href="https://privacybadger.org/#How-is-Privacy-Badger-different-from-Disconnect%2c-Adblock-Plus%2c-Ghostery%2c-and-other-blocking-extensions"><span>blocks trackers</span></a><span>. Privacy Badger is already used by millions to </span><a href="https://www.eff.org/deeplinks/2025/03/online-tracking-out-control-privacy-badger-can-help-you-fight-back"><span>fight online surveillance</span></a><span> and take back control of their data.</span></p>
<h3><b>Why Should Organizations Install Privacy Badger on Managed Devices?</b></h3>
<h4><b>Protect People from Online Surveillance</b></h4>
<p><span>Most websites contain hidden trackers that let advertisers, data brokers, and Big Tech companies monitor people’s browsing activity. This surveillance has </span><a href="https://www.eff.org/deeplinks/2025/03/online-tracking-out-control-privacy-badger-can-help-you-fight-back"><span>serious consequences</span></a><span>: it </span><a href="https://www.aarp.org/money/scams-fraud/epsilon-data-fraud-schemes/"><span>fuels scams</span></a><span>, </span><a href="https://www.eff.org/deeplinks/2025/01/online-behavioral-ads-fuel-surveillance-industry-heres-how"><span>government spying</span></a><span>, </span><a href="https://strongautomotive.com/target-subprime-credit-facebook-paid-search/"><span>predatory advertising</span></a><span>, and </span><a href="https://www.eff.org/deeplinks/2024/08/fight-surveillance-pricing-we-need-privacy-first"><span>surveillance pricing</span></a><span>. </span></p>
<p><span>By installing Privacy Badger on managed devices, organizations can protect entire communities from these harms. Most people </span><a href="https://www.pewresearch.org/internet/2023/10/18/views-of-data-privacy-risks-personal-data-and-digital-privacy-laws/#feelings-of-concern-confusion-and-a-lack-of-control-over-one-s-data"><span>don’t realize the risks</span></a><span> of browsing the web unprotected. Organizations can step in to make online privacy available to everyone, not just the people who know they need it. </span></p>
<h4><b>Ad Blocking is a Cybersecurity Best Practice</b></h4>
<p><span>Privacy Badger helps reduce cybersecurity threats by blocking ads that track you (unfortunately, that’s most ads these days). Targeted ads aren’t just a privacy nightmare. They can also be a vehicle for malware and phishing attacks. </span><a href="https://www.forbes.com/sites/alexvakulov/2025/03/07/microsoft-uncovers-malvertising-campaign-that-hit-1-million-devices/"><span>Cybercriminals</span></a> <a href="https://www.bitdefender.com/en-us/blog/labs/malvertising-campaign-on-meta-expands-to-android-pushing-advanced-crypto-stealing-malware-to-users-worldwide"><span>have</span></a> <a href="https://arstechnica.com/information-technology/2016/03/big-name-sites-hit-by-rash-of-malicious-ads-spreading-crypto-ransomware/"><span>tricked</span></a><span> legitimate ad networks into distributing malware, a tactic known as </span><i><span>malvertising</span></i><span>.</span></p>
<p><span>The risks are serious enough that the U.S. Cybersecurity and Infrastructure Security Agency (CISA) </span><a href="https://www.cisa.gov/sites/default/files/publications/Capacity_Enhancement_Guide-Securing_Web_Browsers_and_Defending_Against_Malvertising_for_Federal_Agencies.pdf"><span>recommends</span></a><span> federal agencies deploy ad-blocking software. The NSA, CIA, and other intelligence agencies already </span><a href="https://www.wired.com/story/security-roundup-even-cia-nsa-use-ad-blockers/"><span>follow this guidance</span></a><span>. These agencies use </span><a href="https://reason.com/2024/05/15/heres-how-the-cia-plans-to-use-your-ad-tracking-data/"><span>advertising systems to surveil others</span></a><span>, yet block ads for their own employees. </span></p>
<p><span>All organizations, not just spy agencies, should make ad blocking part of their security strategy.</span></p>
<h4><b>A Tracker Blocker You Can Trust</b></h4>
<p><span>Four million users already trust Privacy Badger, which has been recommended by </span><a href="https://www.nytimes.com/wirecutter/reviews/our-favorite-ad-blockers-and-browser-extensions-to-protect-privacy/"><span>The New York Times' </span><i><span>Wirecutter</span></i></a><span>, </span><a href="https://securityplanner.consumerreports.org/tool/reduce-online-tracking"><i><span>Consumer Reports</span></i></a><span>, and </span><a href="https://www.washingtonpost.com/technology/2025/04/01/data-privacy-laws-ignoring/"><i><span>The Washington Post</span></i></a><span>.</span></p>
<p><span>Trust is crucial when choosing an ad-blocking or tracker-blocking extension because they require high levels of </span><a href="https://privacybadger.org/#Is-Privacy-Badger-spying-on-me"><span>browser permissions</span></a><span>. Unfortunately, not all extensions deserve that trust. </span><a href="https://www.ftc.gov/news-events/news/press-releases/2024/02/ftc-order-will-ban-avast-selling-browsing-data-advertising-purposes-require-it-pay-165-million-over"><span>Avast’s “privacy” extension</span></a><span> was caught collecting and selling users’ browsing data to third parties—the very practice it claimed to prevent.</span></p>
<p><span>Privacy Badger is different. </span><a href="https://www.eff.org/"><span>EFF</span></a><span> released it </span><a href="https://www.eff.org/deeplinks/2014/04/privacy-badger"><span>over a decade ago</span></a><span>, and the extension has been open-source—meaning other developers and researchers can inspect its code—that entire time. Because it is built by a nonprofit with a </span><a href="https://www.eff.org/35"><span>35-year history</span></a><span> of fighting for user rights, organizations can trust that Privacy Badger works for its users, not for profit. </span></p>
<h3><b>Which Organizations Should Deploy Privacy Badger?</b></h3>
<p><span>All of them! Installing Privacy Badger on managed devices improves privacy and security across an organization. That said, Privacy Badger is most beneficial for two types of organizations: libraries and schools. Both can better serve their communities by safeguarding the computers they provide.</span></p>
<h4><b>Libraries</b></h4>
<p><span>The American Library Association (ALA) </span><a href="https://www.ala.org/advocacy/privacy/checklists/public-access-computer"><span>already recommends</span></a><span> installing Privacy Badger on public computers to block third-party tracking. Librarians have a </span><a href="https://www.theguardian.com/world/2015/jun/05/nsa-surveillance-librarians-privacy"><span>long history</span></a><span> of defending privacy. The ALA’s guidance is a natural extension of that legacy for the digital age. While librarians protect the privacy of books people check out, Privacy Badger protects the privacy of websites they visit on library computers. </span></p>
<p><a href="https://themarkup.org/coronavirus/2020/06/25/millions-of-americans-depend-on-libraries-for-internet-now-theyre-closed"><span>Millions of Americans</span></a><span> depend on libraries for internet access. That makes libraries uniquely positioned to promote equitable access to private browsing. With Privacy Badger, libraries can ensure that safe and private browsing is the default for anyone using their computers. </span></p>
<p><span>Libraries also play a key role in promoting safe internet use through their </span><a href="https://www.ala.org/sites/default/files/2024-07/PLA_Tech_Survey_Report_2024.pdf"><span>digital literacy trainings</span></a><span>. By including Privacy Badger in these trainings, librarians can teach patrons about a simple, free tool that protects their privacy and security online.</span></p>
<h4><b>Schools</b></h4>
<p><span>Schools should protect their students from online surveillance by installing Privacy Badger on computers they provide. Parents are rightfully worried about their children’s privacy online, with a </span><a href="https://www.pewresearch.org/internet/2023/10/18/views-of-data-privacy-risks-personal-data-and-digital-privacy-laws/#childrens-online-privacy-concerns-and-responsibility"><span>Pew survey</span></a><span> showing that 85% worry about advertisers using data about what kids do online to target ads. Deploying Privacy Badger is a concrete step schools can take to address these concerns. </span></p>
<p><span>By blocking online trackers, schools can protect students from manipulative ads and limit the personal data fueling social media algorithms. Privacy Badger can even block tracking in Ed Tech products that schools require students to use. Alarmingly, a </span><a href="https://www.hrw.org/StudentsNotProducts"><span>Human Rights Watch analysis</span></a><span> of Ed Tech products found that 89% shared children’s personal data with advertisers or other companies.</span></p>
<p><span>Instead of </span><a href="https://www.eff.org/deeplinks/2023/10/how-goguardian-invades-student-privacy"><span>deploying invasive student monitoring tools</span></a><span>, schools should keep students safe by keeping their data safe. Students deserve to learn without being tracked, profiled, and targeted online. Privacy Badger can help make that happen.</span></p>
<h3><b>How Can Organizations Deploy Privacy Badger On Managed Devices?</b></h3>
<p><span>System administrators can </span><a href="https://support.google.com/chrome/a/answer/6306504?hl=en"><span>deploy</span></a><span> and </span><a href="https://github.com/EFForg/privacybadger/blob/master/doc/admin-deployment.md"><span>configure</span></a><span> Privacy Badger on managed devices by setting up an enterprise policy. </span><a href="https://support.google.com/chrome/a/answer/6306504?hl=en"><span>Chrome</span></a><span>, </span><a href="https://support.mozilla.org/en-US/kb/deploying-firefox-with-extensions"><span>Firefox</span></a><span>, and </span><a href="https://learn.microsoft.com/en-us/deployedge/microsoft-edge-manage-extensions"><span>Edge</span></a><span> provide instructions for automatically installing extensions organization-wide. You’ll be able to </span><a href="https://github.com/EFForg/privacybadger/blob/master/doc/admin-deployment.md"><span>configure</span></a><span> certain </span><a href="https://github.com/EFForg/privacybadger/blob/master/src/data/schema.json"><span>Privacy Badger settings</span></a><span> for all devices. For example, you can specify websites where Privacy Badger is disabled or prevent Privacy Badger’s welcome page from popping up on computers that get reset after every session. </span></p>
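<p>For illustration, here is what such a policy file might look like for Chrome on Linux, placed under <code>/etc/opt/chrome/policies/managed/</code>. This is a sketch, not official guidance: the extension ID shown should be verified against Privacy Badger's Chrome Web Store listing, and the setting names (<code>disabledSites</code>, <code>showIntroPage</code>) checked against the current Privacy Badger settings schema before deploying:</p>

```json
{
  "ExtensionInstallForcelist": [
    "pkehgijcmpdhfbdbbnkijodmdjhbjlgp;https://clients2.google.com/service/update2/crx"
  ],
  "3rdparty": {
    "extensions": {
      "pkehgijcmpdhfbdbbnkijodmdjhbjlgp": {
        "disabledSites": ["intranet.example.edu"],
        "showIntroPage": false
      }
    }
  }
}
```

<p>Firefox and Edge use analogous mechanisms (a <code>policies.json</code> file and Group Policy, respectively); see the browser-specific instructions linked above.</p>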
<p><span>We recommend educating users about the addition of Privacy Badger and what it does. Since some websites deeply embed tracking, privacy protections can occasionally break website functionality. For example, a video might not play or a comments section might not appear. If this happens, users should know that they can easily turn off Privacy Badger on any website. Just open the Privacy Badger popup and click “Disable for this site.” </span></p>
<p><span>Don't hesitate to </span><a href="https://privacybadger.org/#I-found-a-bug%21-What-do-I-do-now"><span>reach out</span></a><span> if you're interested in deploying Privacy Badger at scale. Our team is here to help you protect your community's privacy. And if you're already deploying Privacy Badger across your organization, we'd love to </span><a href="https://privacybadger.org/#I-found-a-bug%21-What-do-I-do-now"><span>hear how it’s going</span></a><span>! </span></p>
<h3><b>Make Private Browsing the Default at Your Organization</b></h3>
<p><span>Schools, libraries, and other organizations can make private browsing the norm by deploying Privacy Badger on devices they manage. If you work at an organization with managed devices, talk to your IT team about Privacy Badger. You can help strengthen the security and privacy of your entire organization while joining the fight against online surveillance.</span></p>
</div></div></div></description>
<pubDate>Thu, 04 Sep 2025 13:34:47 +0000</pubDate>
<guid isPermaLink="false">111173 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<category domain="https://www.eff.org/issues/online-behavioral-tracking">Online Behavioral Tracking</category>
<category domain="https://www.eff.org/issues/student-privacy">Student Privacy</category>
<dc:creator>Lena Cohen</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/privacy-badger-student-fix_0.png" type="image/png" length="32248" />
</item>
<item>
<title>Verifying Trust in Digital ID Is Still Incomplete</title>
<link>https://www.eff.org/deeplinks/2025/09/verifying-trust-digital-id-still-incomplete</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><i><span>In the past few years, governments across the world have </span></i><a href="https://www.eff.org/deeplinks/2024/09/digital-id-isnt-everybody-and-thats-okay" target="_blank" rel="noopener noreferrer"><i><span>rolled out different digital identification</span></i></a><i><span> options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This blog is the second in a short series that explains digital ID and the pending use case of age verification. Upcoming posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.</span></i></p>
<p><span>Digital identity encompasses various aspects of an individual's identity that are presented and verified either over the internet or in person. This could mean a digital credential issued by a certification body or a mobile driver’s license provisioned to someone’s mobile wallet. These credentials can be presented in plain text on a device, as a scannable QR code, or by tapping your device against a Near Field Communication (NFC) reader. There are other ways to present credential information that are </span><a href="https://www.eff.org/deeplinks/2025/07/zero-knowledge-proofs-alone-are-not-digital-id-solution-protecting-user-privacy" target="_blank" rel="noopener noreferrer"><span>a little more privacy preserving</span></a><span>, but in practice those three methods are how digital ID is being used today.</span></p>
<p><span>Advocates of digital ID often use a framework </span><a href="https://w3c.github.io/vc-data-model/#trust-model" target="_blank" rel="noopener noreferrer"><span>they call</span></a><span> the "Triangle of Trust." This is usually presented as a triangle of exchange between the </span><i><span>holder</span></i><span> of an ID—those who use a phone or wallet application to access a service; the </span><i><span>issuer</span></i><span> of an ID—this is normally a government entity, like the state Departments of Motor Vehicles in the U.S., or a </span><a href="https://blog.google/around-the-globe/google-europe/we-are-announcing-sparkasse-as-our-first-national-credential-partner-for-eu-age-assurance/" target="_blank" rel="noopener noreferrer"><span>banking system</span></a><span>; and the </span><i><span>verifier</span></i><span> of an ID—the entity that wants to confirm your identity, such as law enforcement, a university, a government benefits office, a porn site, or an online retailer.</span></p>
<p><span>This triangle implies that the issuer and verifier—for example, the government that provides the ID and the website checking your age—never need to talk to one another. This theoretically avoids tracking and surveillance threats by preventing your ID, by design, from </span><a href="https://nophonehome.com/" target="_blank" rel="noopener noreferrer"><span>phoning home</span></a><span> every time you verify it with another party.</span></p>
<p><span>But it also makes a lot of questionable assumptions, such as:</span></p>
<ol>
<li><span>The verifier will </span><a href="https://www.eff.org/deeplinks/2025/07/zero-knowledge-proofs-alone-are-not-digital-id-solution-protecting-user-privacy" target="_blank" rel="noopener noreferrer"><span>only ever ask</span></a><span> for a limited amount of information.</span></li>
<li><span>The verifier won’t store the information it collects.</span></li>
<li><span>The verifier is always trustworthy.</span></li>
</ol>
<p><span>The third assumption is especially problematic. How do you trust that the verifier will protect your most personal information and not use, store, or sell it beyond what you have consented to? Any of the following could be verifiers:</span></p>
<ul>
<li><span>Law enforcement when doing a traffic stop and verifying your ID as valid.</span></li>
<li><span>A government benefits office that requires ID verification to sign up for social security benefits.</span></li>
<li><span>A porn site in a state or country which requires age verification or identity verification before allowing access.</span></li>
<li><span>An online retailer selling products like alcohol or tobacco.</span></li>
</ul>
<p><span>Looking at the triangle again, </span><a href="https://recapworkshop.online/recap24/contributions/gillmor-holder-issuer-verifier.html" target="_blank" rel="noopener noreferrer"><span>this isn’t quite an equal exchange</span></a><span>. Your personal ID like a driver’s license or government ID is both one of the most centralized </span><i><span>and</span></i><span> sensitive documents you have—you can’t control how it is issued or create your own; you have to go through your government to obtain one. This relationship will always be imbalanced. But we have to make sure digital ID does not exacerbate these imbalances.</span></p>
<p><span>The effort to answer the question of </span><a href="https://www.eff.org/deeplinks/2025/04/age-verification-european-union-mini-id-wallet" target="_blank" rel="noopener noreferrer"><span>how to prevent verifier abuse</span></a><span> is </span><a href="https://w3c-fedid.github.io/digital-credentials/#support-for-verifier-authorization" target="_blank" rel="noopener noreferrer"><span>ongoing</span></a><span>. But instead of addressing the harms these systems cause, the push for this technology is being fast-tracked by governments around the world scrambling to solve what they see as a crisis of online harms by mandating age verification. And current implementations of the Triangle of Trust have </span><a href="https://www.eff.org/deeplinks/2025/08/americans-be-warned-lessons-reddits-chaotic-uk-age-verification-rollout" target="_blank" rel="noopener noreferrer"><span>already proven disastrous</span></a><span>.</span></p>
<p><span>One key example of the speed of implementation outpacing proper protections is the Digital Credential API. Initially launched by </span><a href="https://developer.chrome.com/blog/digital-credentials-api-origin-trial" target="_blank" rel="noopener noreferrer"><span>Google</span></a><span> and now supported by </span><a href="https://developer.apple.com/videos/play/wwdc2025/232/?time=106" target="_blank" rel="noopener noreferrer"><span>Apple</span></a><span>, this rollout lets any app or website use the API to request information from your digital ID, unfettered and at scale. The introduction of this technology to people’s devices came with no limits or checks on what information verifiers can seek—incentivizing verifiers to over-ask for ID information beyond the question of whether a holder is over a certain age, simply because they can. </span></p>
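<p>To make the mechanics concrete, here is a simplified, hypothetical sketch of a verifier-side request through the Digital Credential API. The request shape and protocol name ("openid4vp") follow the W3C draft and may differ across browser and spec versions; the claim name is illustrative:</p>

```javascript
// Hypothetical verifier-side request via the Digital Credentials API
// (shape based on the W3C draft; field names are illustrative).
const credentialRequest = {
  digital: {
    requests: [
      {
        // The presentation protocol the verifier speaks.
        protocol: "openid4vp",
        // The query itself. Here the verifier asks only for an over-18
        // attestation, but nothing in the API stops it from requesting
        // name, address, or the entire document instead.
        data: { claims: ["age_over_18"] },
      },
    ],
  },
};

// In a browser, the verifier hands the request to the user's wallet:
if (typeof navigator !== "undefined" && navigator.credentials) {
  navigator.credentials
    .get(credentialRequest)
    .then((credential) => console.log("presentation received", credential))
    .catch((err) => console.log("request declined or unsupported", err));
}
```

<p>The request is just data the verifier composes; which claims it lists is entirely up to the verifier, which is why limits on what verifiers may ask for matter.</p>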
<p><span>The Digital Credential API also incentivizes websites that never previously asked for ID information, and don’t need it, to start requesting it. Food delivery services, medical services, gaming sites, and literally anyone else interested in being a verifier may become one tomorrow with digital ID and the Digital Credential API. This is both an erosion of personal privacy and a pathway into further surveillance. There must be established limitations and scope, including:</span></p>
<ul>
<li><b>verifiers establishing who they are and what they plan to ask from holders</b><span>. There should also be an established </span><b>plan for transparency on verifiers</b><span> and their data retention policies.</span></li>
<li><b>ways to identify and report abusive verifiers</b><span>, as well as real consequences, like revoking or blocking a verifier from requesting IDs in the future.</span></li>
<li><b>unlinkable presentations</b><span> that do not allow for verifier and issuer collusion, share no data between the verifiers you attest to, and prevent tracking of your movements in person or online every time you attest your age.</span></li>
</ul>
<p><span>A further point of concern arises in cases of abuse or deception. A malicious verifier can send a request with no limiting mechanisms or checks, and a user who rejects the request could be blocked from the website or application entirely. There must be provisions that ensure people retain access to vital services that require age verification from visitors.</span></p>
<p><span><img src="/files/2025/09/03/screenshot_2025-08-30_at_12.19.24.png" width="636" height="293" alt="Pop up asking user to make sure they trust the website they are submitting ID info to" title="Pop up asking user to make sure they trust the website they are submitting ID info to" /></span></p>
<p><span>Governments’ efforts to tackle verifiers who abuse digital ID requests haven’t come to fruition yet. For example, the EU Commission recently launched its age verification “mini app” ahead of the EU ID wallet planned for 2026. The mini app will not have a registry of verifiers, which EU regulators had promised and then withdrew. Without </span><a href="https://epicenter.works/en/content/eu-commission-undermines-eidas-protections-again">verifier accountability</a><span>, the wallet cannot tell if a request is legitimate. As a result, verifiers and issuers can demand verification from people who want to use online services, but those same people are unable to insist on verification and accountability from the other sides of the triangle. </span></p>
<p><span>While digital ID is pushed as the solution to the problem of uploading your ID to every site you access, its security and privacy vary by implementation. When privacy is at stake, regulators must make room for negotiation: there should be more thoughtful, protective measures for holders as they face more and more potential verifiers over time. Otherwise digital ID solutions will exacerbate existing harms and inequalities rather than improving internet accessibility and information access for all.</span></p>
</div></div></div></description>
<pubDate>Thu, 04 Sep 2025 06:45:28 +0000</pubDate>
<guid isPermaLink="false">111171 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/digital-identity">Digital Identity</category>
<dc:creator>Alexis Hancock</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/digitallicense_mobileid.png" type="image/png" length="58596" />
</item>
<item>
<title>EFF Statement on ICE Use of Paragon Solutions Malware</title>
<link>https://www.eff.org/deeplinks/2025/09/eff-statement-ice-use-paragon-solutions-malware</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><i><span>This statement can be attributed to EFF Senior Staff Technologist Cooper Quintin</span></i></p>
<p><span>It was recently reported by Jack Poulson on Substack that </span><a href="https://jackpoulson.substack.com/p/exclusive-ice-has-reactivated-its"><span>ICE has reactivated</span></a><span> its $2 million </span><a href="https://www.wired.com/story/ice-paragon-solutions-contract/"><span>contract with Paragon Solutions</span></a><span>, a cyber-mercenary and spyware manufacturer. </span></p>
<p><span>The reactivation of the contract between the Department of Homeland Security and Paragon Solutions, a known spyware vendor, is extremely troubling.</span></p>
<p class="pull-quote"><span>This end run around the executive order both ignores the spirit of the rule and does not actually do anything to prevent misuse of Paragon malware for human rights abuses</span></p>
<p><span>Paragon's “Graphite” malware has been implicated in widespread misuse by the Italian government. </span><a href="https://citizenlab.ca/2025/06/first-forensic-confirmation-of-paragons-ios-mercenary-spyware-finds-journalists-targeted/"><span>Researchers at Citizen Lab</span></a><span> at the Munk School of Global Affairs at the University of Toronto and with Meta found that it has been used in Italy to </span><a href="https://www.theguardian.com/media/2025/jun/12/european-journalists-targeted-with-paragon-solutions-spyware-say-researchers"><span>spy on journalists and civil society actors,</span></a><span> including humanitarian workers. Without strong legal guardrails, there is a risk that the malware will be misused in a similar manner by the U.S. Government.</span></p>
<p><span>These reports undermine Paragon Solutions’s public marketing of itself as a more ethical provider of surveillance malware. </span></p>
<p><span>Reportedly, the contract is being reactivated because the U.S. arm of Paragon Solutions was acquired by a Miami-based private equity firm, AE Industrial Partners, and then merged into a Virginia-based cybersecurity company, </span><a href="https://web.archive.org/web/20250818005743/https://www.redlattice.com/"><span>REDLattice</span></a><span>, allowing ICE to </span><a href="https://www.federalregister.gov/documents/2023/03/30/2023-06730/prohibition-on-use-by-the-united-states-government-of-commercial-spyware-that-poses-risks-to"><span>circumvent Executive Order 14093</span></a><span>, which bans the acquisition of spyware controlled by a foreign government or person. </span><span>Even though this order was always insufficient to prevent the acquisition of dangerous spyware, it was the best protection we had. </span><span>This end run around the executive order both ignores the spirit of the rule and does not actually do anything to prevent misuse of Paragon malware for human rights abuses. Nor will it prevent insiders at Paragon from using the malware to spy on U.S. government officials, or U.S. government officials from misusing it to spy on their personal enemies, rivals, or spouses. </span></p>
<p><span>The contract between Paragon and ICE requires all U.S. users to adjust their threat models and take extra precautions. Paragon’s Graphite isn’t magical; it’s still just malware. It still needs a zero-day exploit to compromise a phone with the latest security updates, and those are expensive. The best thing you can do to protect yourself against Graphite is to keep your phone up to date and enable Lockdown Mode if you are using an iPhone or </span><a href="https://www.eff.org/deeplinks/2025/06/googles-advanced-protection-arrives-android-should-you-use-it"><span>Advanced Protection Mode on Android</span></a><span>. Turning on disappearing messages also helps: if someone in your network is compromised, you won’t also reveal your entire message history. For more tips on protecting yourself from malware, </span><a href="https://ssd.eff.org"><span>check out our Surveillance Self-Defense guides</span></a><span>. </span></p>
</div></div></div><div class="field field--name-field-related-cases field--type-node-reference field--label-above"><div class="field__label">Related Cases:&nbsp;</div><div class="field__items"><div class="field__item even"><a href="/cases/alhathloul-v-darkmatter-group">AlHathloul v. DarkMatter Group</a></div></div></div></description>
<pubDate>Wed, 03 Sep 2025 19:46:40 +0000</pubDate>
<guid isPermaLink="false">111125 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/state-sponsored-malware">State-Sponsored Malware</category>
<dc:creator>Cooper Quintin</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/mobile-privacy.png" type="image/png" length="23559" />
</item>
<item>
<title>EFF Awards Spotlight ✨ Just Futures Law</title>
<link>https://www.eff.org/deeplinks/2025/08/eff-awards-spotlight-just-futures-law</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>In 1992 EFF presented our very first awards <a href="https://www.eff.org/awards/past-winners">recognizing key leaders and organizations</a> advancing innovation and championing civil liberties and human rights online. Now in 2025 we're continuing to celebrate the accomplishments of people working toward a better future for everyone with the EFF Awards!</p><p><a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">All are invited to attend the EFF Awards</a> on Wednesday, September 10 at the San Francisco Design Center. Whether you're an activist, an EFF supporter, a student interested in cyberlaw, or someone who wants to munch on a strolling dinner with other likeminded individuals, anyone can enjoy the ceremony!</p><p class="take-action"><a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">REGISTER TODAY!</a></p><p class="take-action take-explainer"><strong>GENERAL ADMISSION: $55 | CURRENT EFF MEMBERS: $45 | STUDENTS: $35</strong></p><p>If you're not able to make it, we'll also be <a href="http://www.eff.org/livestream-effawards2025">hosting a livestream of the event</a> on Friday, September 12 at 12:00 PM PT. The event will also be recorded, and posted to <a href="https://www.youtube.com/efforg">YouTube</a> and the <a href="https://archive.org/details/@electronic_frontier_foundation_eff_">Internet Archive</a> after the livestream.</p><p>We are honored to present the three winners of this year's EFF Awards: Just Futures Law, Erie Meyer, and Software Freedom Law Center, India. But, before we kick off the ceremony next week, let's take a closer look at each of the honorees. 
First up—<strong>Just Futures Law, winner of the EFF Award for Leading Immigration and Surveillance Litigation:</strong></p><p class="image-left center-image"><img src="https://www.eff.org/files/2025/08/29/eff-awards-winners-instagram_jfl.png" alt="EFF Awards Winner Just Futures Law" width="300" height="300" /></p><p>Just Futures Law<span data-contrast="auto"> is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States</span>. <span data-contrast="auto">In the past year, Just Futures </span><a href="https://www.justfutureslaw.org/legal-filings/dhsaifoia" target="_blank" rel="noopener noreferrer"><span data-contrast="none">sued the Department of Homeland Security</span></a><span data-contrast="auto"> and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and </span><a href="https://www.justfutureslaw.org/legal-filings/tpshaiti" target="_blank" rel="noopener noreferrer"><span data-contrast="none">sued the Trump Administration</span></a><span data-contrast="auto"> for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like <a href="https://www.justfutureslaw.org/legal-filings/clearview" target="_blank" rel="noopener noreferrer">Clearview AI</a>, it has worked with </span><a href="https://mijente.net/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Mijente</span></a><span data-contrast="auto"> to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with </span><a href="https://www.grassrootsleadership.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Grassroots Leadership</span></a><span data-contrast="auto"> to fight for the release of detained individuals under </span><a href="https://en.wikipedia.org/wiki/Operation_Lone_Star" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Operation Lone Star</span></a><span data-contrast="auto">.</span></p><p>We're excited to celebrate Just Futures Law and the other EFF Award winners in person in
San Francisco on September 10! <a href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1">We hope that you'll join us there.</a></p><hr /><p>Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.</p><p>Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. Please visit <a class="theme markdown__link" href="http://eff.org/thanks" target="_blank" rel="noopener noreferrer">eff.org/thanks</a> or contact <a class="theme markdown__link" href="mailto:tierney@eff.org" target="_blank" rel="noopener noreferrer">tierney@eff.org</a> for more information on corporate giving and sponsorships.</p><p>EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full <a href="https://www.eff.org/pages/event-expectations" target="_blank" rel="noopener noreferrer">Event Expectations</a>.</p><p><em>Questions? Email us at <a href="mailto:events@eff.org?subject=EFF%20Awards">events@eff.org</a>.</em></p>
</div></div></div></description>
<pubDate>Wed, 03 Sep 2025 18:04:08 +0000</pubDate>
<guid isPermaLink="false">111104 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025_effawards_banner_justfutureslaw.png" type="image/png" length="410614" />
</item>
<item>
<title>🤐 This Censorship Law Turns Parents Into Content Cops | EFFector 37.11</title>
<link>https://www.eff.org/deeplinks/2025/09/censorship-law-turns-parents-content-cops-effector-3711</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>School is back in session! Perfect timing to hit the books and catch up on the latest digital rights news. We've got you covered with bite-sized updates in this issue of our <a href="https://eff.org/effector/37/11">EFFector newsletter</a>.</p>
<p><span>This time, we're breaking down why <a href="https://www.eff.org/deeplinks/2025/08/book-bans-internet-bans-wyoming-lets-parents-control-whole-states-access-internet">Wyoming’s new age verification law</a> is a free speech disaster. You’ll also read about a big win for <a href="https://www.eff.org/deeplinks/2025/08/fourth-amendment-victory-michigan-supreme-court-reins-digital-device-fishing-1">transparency around police surveillance</a>, how the Trump administration’s <a href="https://www.eff.org/deeplinks/2025/08/president-trumps-war-woke-ai-civil-liberties-nightmare">war on “woke AI”</a> threatens civil liberties, and a welcome decision in a <a href="https://www.eff.org/press/releases/torture-victims-landmark-hacking-lawsuit-against-spyware-maker-can-proceed-judge">landmark human rights case</a>.</span></p>
<p>Prefer to listen? Be sure to check out the audio companion to EFFector! We're interviewing EFF staff about some of the important issues they are working on. This time, <span>EFF Legislative Activist Rindala Alajaji discusses the real harms of age verification laws like the one passed in Wyoming. Tune in on <a href="https://www.youtube.com/watch?v=_YjvEdKxFLc">YouTube</a> or the <a href="https://archive.org/details/37.11">Internet Archive</a>.</span></p>
<p class="take-action"><a href="https://www.youtube.com/watch?v=_YjvEdKxFLc">LISTEN TO EFFECTOR</a></p>
<p class="take-action take-explainer"><span>EFFECTOR 37.11 - This Censorship Law Turns Parents Into Content Cops</span></p>
<p><span>Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock-full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.</span></p>
<p><span>Thank you to the supporters around the world who make our work possible! If you're not a member yet, <a href="https://eff.org/effect">join EFF today</a> to help us fight for a brighter digital future.</span></p>
</div></div></div></description>
<pubDate>Wed, 03 Sep 2025 17:06:11 +0000</pubDate>
<guid isPermaLink="false">111115 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/effector_banner_5.jpeg" type="image/jpeg" length="130379" />
</item>
<item>
<title>What WhatsApp’s “Advanced Chat Privacy” Really Does</title>
<link>https://www.eff.org/deeplinks/2025/09/what-whatsapps-advanced-chat-privacy-really-does</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>In April, WhatsApp launched its “</span><a href="https://blog.whatsapp.com/introducing-advanced-chat-privacy"><span>Advanced Chat Privacy</span></a><span>” feature, which, once enabled, disables certain AI features in chats and prevents conversations from being exported. Since its launch, an inaccurate viral post has been </span><a href="https://www.snopes.com/fact-check/whatsapp-ai-chat-privacy/"><span>ping-ponging around social networks</span></a><span>, creating confusion about what exactly it does.<br /></span></p>
<p><span>The viral post falsely claims that if you do not enable Advanced Chat Privacy, Meta’s AI tools will be able to access your private conversations. This isn’t true, and it misrepresents both how Meta AI works and what Advanced Chat Privacy is.</span></p>
<p><span>The confusion seems to stem from the fact that Meta AI can be invoked through a number of methods, including in any group chat with the </span><a href="https://faq.whatsapp.com/203220822537614/?helpref=faq_content&amp;cms_platform=web"><span>@Meta AI command</span></a><span>. While the chat contents between you and other people are always end-to-end encrypted on the app, </span><a href="https://faq.whatsapp.com/1002544104126998?helpref=faq_content"><span>what you say to Meta AI is not</span></a><span>. Similarly, if you or anyone else in the chat chooses to use Meta AI's “</span><a href="https://blog.whatsapp.com/catch-up-on-conversations-with-private-message-summaries"><span>Summarize</span></a><span>” feature, which uses Meta’s “</span><a href="https://faq.whatsapp.com/2089630958184255"><span>Private Processing</span></a><span>” technology, that feature routes the text of the chat through Meta’s servers. However, the company claims that it cannot view the content of those messages. This feature remains opt-in, so it's up to you to decide if you want to use it. The company also recently </span><a href="https://blog.whatsapp.com/get-the-tone-of-your-message-right-with-private-writing-help"><span>released the results of two audits</span></a><span> detailing the issues that have been found thus far and what has been done to fix them.</span></p>
<p><span>For example, if you and your buddy are chatting, and your friend types in @Meta AI and asks it a question, that part of the conversation, which you can both see, is </span><strong><i>not</i></strong><span> end-to-end encrypted, and is usable for AI training or whatever other purposes are included in </span><a href="https://www.facebook.com/privacy/policy"><span>Meta’s privacy policy</span></a><span>. But otherwise, chats remain end-to-end encrypted.</span></p>
<p><span>Advanced Chat Privacy offers some control over this. The new privacy feature isn’t a universal setting in WhatsApp; you can enable or disable it on a per-chat basis, and it’s turned off by default. When enabled, Advanced Chat Privacy does three core things:</span></p>
<ul>
<li><span>Blocks anyone in the chat from exporting the chats,</span></li>
<li><span>Disables auto-downloading media to chat participants’ phones, and</span></li>
<li><span>Disables some Meta AI features.</span></li>
</ul>
<p><span>Beyond disabling some Meta AI features, Advanced Chat Privacy can be useful in other instances. For example, while someone can always screenshot chats, if you’re concerned about someone easily exporting an entire group chat history, Advanced Chat Privacy makes this harder to do because there’s no longer a one-tap option to do so. And since media can’t be automatically downloaded to someone’s phone (the “Save to Photos” option on the chat settings screen), it’s harder for an attachment to accidentally end up on someone’s device.</span></p>
<h2><span>How to Enable Advanced Chat Privacy</span></h2>
<p><span><img src="https://www.eff.org/files/2025/09/02/acp.png" width="2048" height="2048" alt="" /></span></p>
<p><span>Advanced Chat Privacy is enabled or disabled per chat. To enable it:</span></p>
<ul>
<li><span>Tap the chat name at the top of the screen.</span></li>
<li><span>Select </span><b><i>Advanced chat privacy</i></b><span>, then tap the toggle to turn it on.</span></li>
</ul>
<p><span>There are some quirks to how this works, though. For one, by default, anyone involved in a chat can turn Advanced Chat Privacy on or off at will, which limits its usefulness but at least helps ensure something doesn’t accidentally get sent to Meta AI. </span></p>
<p><span><img src="https://www.eff.org/files/2025/09/02/whatsapppermissions.png" width="2048" height="2048" alt="whatsapp group permissions screen" title="whatsapp group permissions screen" /></span></p>
<p><span>There’s one way around this, which is for a group admin to lock down what users in the group can do. In an existing group chat that you are the administrator of, tap the chat name at the top of the screen, then:</span></p>
<ul>
<li><span>Scroll down to </span><b><i>Group Permissions</i></b><span>.</span></li>
<li><span>Disable the option to “Edit Group Settings.” This makes it so only the administrator can change several important permissions, including Advanced Chat Privacy.</span></li>
</ul>
<p><span>You can also set this permission when starting a new group chat. Just be sure to pop into the permissions page when prompted. Even without Advanced Chat Privacy, the “Edit Group Settings” option is an important one for privacy, because it also controls whether participants can change how long disappearing messages can be viewed. It’s worth considering for every group chat you’re an administrator of, and it’s something WhatsApp should require admins to choose before starting a new chat.</span></p>
<p><span>When it comes to one-on-one chats, there is currently no way to block the other person from changing the Advanced Chat Privacy feature, so you’ll have to come to an agreement with the other person on keeping it enabled if that’s what you want. If the setting is changed, you’ll see a notice in the chat stating so:</span></p>
<p><span><img src="https://www.eff.org/files/2025/09/02/img_6934.jpeg" width="1179" height="611" alt="" /></span></p>
<p><span>There are already serious concerns with </span><a href="https://ssd.eff.org/module/how-to-use-whatsapp"><span>how much metadata WhatsApp collects</span></a><span>, and as the company introduces </span><a href="https://www.nytimes.com/2025/06/16/technology/whatsapp-ads.html"><span>ads</span></a><span> and </span><a href="https://www.whatsapp.com/meta-ai"><span>AI</span></a><span>, it’s going to get harder and harder to navigate the app, understand what each setting does, and properly protect the privacy of conversations. One of the reasons alternative encrypted chat options like Signal tend to thrive is that they keep things simple and employ strong default settings and clear permissions. WhatsApp should keep this in mind as it adds more and more features.<br /></span></p>
</div></div></div></description>
<pubDate>Tue, 02 Sep 2025 18:28:43 +0000</pubDate>
<guid isPermaLink="false">111108 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/security-education">Security Education</category>
<category domain="https://www.eff.org/issues/end-end-encryption">End-to-End Encryption</category>
<dc:creator>Thorin Klosowski</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/encryption-poc-chelsea-saunders.png" type="image/png" length="194039" />
</item>
<item>
<title>Open Austin: Reimagining Civic Engagement and Digital Equity in Texas</title>
<link>https://www.eff.org/deeplinks/2025/08/open-austin-reimagining-civic-engagement-and-digital-equity-texas</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>The </span><a href="https://efa.eff.org/"><span>Electronic Frontier Alliance</span></a><span> is growing, and this year we’ve been honored to welcome </span><a href="https://open-austin.org/"><span>Open Austin</span></a><span> into the EFA. Open Austin began in 2009 as a meetup that successfully advocated for a city-run open data portal, and relaunched as a 501(c)(3) nonprofit in 2018 dedicated to reimagining civic engagement and digital equity by building volunteer open source projects for local social organizations.<br /></span></p>
<p><span>As Central Texas’ oldest and largest grassroots civic tech organization, their work has provided hands-on training for over 1,500 members in the hard and soft skills needed to build digital society, not just scroll through it. Recently, I got the chance to speak with Liani Lye, Executive Director of Open Austin, about the organization, its work, and what lies ahead: <br /></span></p>
<p><b>There are so many exciting things happening with Open Austin. Can you tell us about your Civic Digital Lab and your Data Research Hub?</b></p>
<blockquote><p><span>Open Austin's Civic Digital Lab reimagines civic engagement by training central Texans to build technology for the public good. We build freely, openly, and alongside a local community stakeholder to represent community needs. Our lab currently supports 5 products:</span></p>
<ul>
<li><span>Data Research Hub: Answering residents' questions with detailed information about our city</span></li>
<li><span>Streamlining Austin Public Library’s “book a study room” UX and code</span></li>
<li><span>Mapping landlords’ and rental properties to support local tenant rights organizing</span></li>
<li><span>Promoting public transit by highlighting points of interest along bus routes</span></li>
<li><span>Creating an interactive exploration of police bodycam data</span></li>
</ul>
<p><span>We’re actively scaling up our Data Research Hub, which started in January 2025 and was inspired by </span><a href="https://www.9bcorp.com/neighborhood-explorer"><span>9b Corp’s Neighborhood Explorer</span></a><span>. Through community outreach, we gather residents’ questions about our region and connect the questions with Open Austin’s data analysts. Each answered question adds to a pool of knowledge that equips communities to address local issues. Crucially, the organizing team at EFF, through the EFA, have connected us to local organizations to generate these questions.</span></p>
</blockquote>
<p><b>Can you discuss your new Civic Data Fellowship cohort and Communities of Civic Practice?</b></p>
<blockquote><p><span>Launched in 2024, Open Austin’s Civic Data Fellowship trains the next generation of technologically savvy community leaders by pairing aspiring women, people of color, and LGBTQ+ data analysts with mentors to explore Austin’s challenges. These culminate in data projects and talks to advocates and policymakers, which double as powerful portfolio pieces. While we weren’t able to fully fund fellowship stipends through grants this year, thanks to the generosity of our supporters, we successfully raised 25% through grassroots efforts.</span></p>
<p><span>Along with our fellowship and lab, we host monthly Communities of Civic Practice peer-learning circles that build skills for employability and practical civic engagement. Recent sessions include a speaker on service design in healthcare, and co-creating a data visualization on broadband adoption presented to local government staff. Our in-person communities are a great way to learn and build local public interest tech without becoming a full-on Labs contributor.</span></p>
</blockquote>
<p><b>For those in Austin and Central Texas who want to get involved in person, how can they plug in?</b></p>
<blockquote><p><span>If you can only come to </span><i><span>one</span></i><span> event for the rest of the year, come to our </span><a href="https://www.eventbrite.com/e/2025-year-end-civic-data-fellowship-celebration-tickets-1447526263019?aff=oddtdtcreator"><span>Open Austin’s 2025 Year-End Celebration</span></a><span>. Open Austin members and our freshly graduated Civic Data Fellow cohort will give lightning talks to share how they’ve supported local social advocacy through open source software and open data work. Otherwise, come to a </span><a href="https://www.eventbrite.com/e/online-introduction-for-newcomers-tickets-1047290964467"><span>monthly remote volunteer orientation call</span></a><span>. There, we'll share how to get involved in our in-person Communities of Civic Practice and our remote Civic Digital Labs (i.e. building open source software).</span></p>
<p><span>Open Austin welcomes volunteers from all backgrounds, including those with skills in marketing, fundraising, communications, and operations, not just technologists. You can make a difference in various ways. Come to a </span><a href="https://www.eventbrite.com/e/online-introduction-for-newcomers-tickets-1047290964467"><span>remote volunteer orientation call</span></a><span> to learn more. And, as always, donate. Running multiple open source projects for structured workforce development is expensive, and your contributions help sustain Open Austin's work in the community. Please visit </span><a href="https://opencollective.com/open-austin"><span>our donation page</span></a><span> for ways to give.</span></p>
<p><span>Thanks, EFF!</span></p>
</blockquote>
</div></div></div></description>
<pubDate>Fri, 29 Aug 2025 23:08:01 +0000</pubDate>
<guid isPermaLink="false">111107 at https://www.eff.org</guid>
<dc:creator>Christopher Vines</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/efa-starburst-banner.png" type="image/png" length="24969" />
</item>
<item>
<title>Join Your Fellow Digital Rights Supporters for the EFF Awards on September 10!</title>
<link>https://www.eff.org/deeplinks/2025/08/join-your-fellow-digital-rights-supporters-eff-awards-september-10</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>For over 35 years, the Electronic Frontier Foundation has presented awards recognizing <a href="https://www.eff.org/awards/past-winners">key leaders and organizations</a> advancing innovation and championing digital rights. The EFF Awards celebrate the accomplishments of people working toward a better future for technology users, both in the public eye and behind the scenes.</p><p>EFF is pleased to welcome all members of the digital rights community, supporters, and friends to this annual award ceremony. Join us to celebrate this year's honorees with drinks, bytes, and excellent company.</p><h3> </h3><center><p><strong>EFF Award Ceremony<br /></strong>Wednesday, September 10th, 2025<br />6:00 PM to 10:00 PM Pacific<br />San Francisco Design Center Galleria<br />101 Henry Adams Street, San Francisco, CA</p><p class="take-action"><a title="Link to registration page" href="https://supporters.eff.org/civicrm/event/register?id=498&amp;reset=1" target="_blank" rel="noopener noreferrer">Register Now</a></p><p class="take-explainer">General Admission: $55 | Current EFF Members: $45 | Students: $35</p></center><p>The celebration will include a strolling dinner and desserts, as well as a hosted bar with cocktails, mocktails, wine, beer, and non-alcoholic beverages! Vegan, vegetarian, and gluten-free food options will be available. We hope to see you in person, wearing either a signature EFF hoodie, or something formal if you're excited for the opportunity to dress up!</p><p>If you're not able to make it, we'll also be <a href="http://www.eff.org/livestream-effawards2025">hosting a livestream of the event</a> on Friday, September 12 at 12:00 PM PT. 
The event will also be recorded, and posted to <a href="https://www.youtube.com/efforg">YouTube</a> and the <a href="https://archive.org/details/@electronic_frontier_foundation_eff_">Internet Archive</a> after the livestream.</p><h3 style="text-align: center;">We are proud to present awards to this year's winners:</h3><h3 style="text-align: center;"><a href="#award1"><strong>JUST FUTURES LAW</strong></a></h3><p style="text-align: center;">EFF Award for Leading Immigration and Surveillance Litigation</p><h3 style="text-align: center;"><a href="#award2"><strong>ERIE MEYER</strong></a></h3><p style="text-align: center;">EFF Award for Protecting Americans' Data</p><h3 style="text-align: center;"><a href="#award3"><strong>SOFTWARE FREEDOM LAW CENTER, INDIA</strong></a></h3><p style="text-align: center;">EFF Award for Defending Digital Freedoms</p><h3 class="center-image" align="center"> </h3><hr /><h3 class="center-image" align="center"><strong>More About the 2025 EFF Award Winners<br /><br /></strong></h3><h4><a id="award1"></a>Just Futures Law</h4><p class="image-left"><img title="Just Futures Law logo" src="/files/2025/07/22/jfl_icon_medium_1.png" alt="Just Futures Law logo" width="350" height="350" /></p><p>Just Futures Law<span data-contrast="auto"> is a women-of-color-led law project that recognizes how surveillance disproportionately impacts immigrants and people of color in the United States. It uses litigation to fight back as part of defending and building the power of immigrant rights and criminal justice activists, organizers, and community groups to prevent criminalization, detention, and deportation of immigrants and people of color. Just Futures was founded in 2019 using a movement lawyering and racial justice framework and seeks to transform how litigation and legal support serves communities and builds movement power. 
</span><span data-ccp-props="240}"> </span></p><p><span data-contrast="auto">In the past year, Just Futures </span><a href="https://www.justfutureslaw.org/legal-filings/dhsaifoia" target="_blank" rel="noopener noreferrer"><span data-contrast="none">sued the Department of Homeland Security</span></a><span data-contrast="auto"> and its subagencies seeking a court order to compel the agencies to release records on their use of AI and other algorithms, and </span><a href="https://www.justfutureslaw.org/legal-filings/tpshaiti" target="_blank" rel="noopener noreferrer"><span data-contrast="none">sued the Trump Administration</span></a><span data-contrast="auto"> for prematurely halting Haiti’s Temporary Protected Status, a humanitarian program that allows hundreds of thousands of Haitians to temporarily remain and work in the United States due to Haiti’s current conditions of extraordinary crises. It has represented activists in their fight against tech giants like <a href="https://www.justfutureslaw.org/legal-filings/clearview" target="_blank" rel="noopener noreferrer">Clearview AI</a>, it has worked with </span><a href="https://mijente.net/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Mijente</span></a><span data-contrast="auto"> to launch the TakeBackTech fellowship to train new advocates on grassroots-directed research, and it has worked with </span><a href="https://www.grassrootsleadership.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Grassroots Leadership</span></a><span data-contrast="auto"> to fight for the release of detained individuals under </span><a href="https://en.wikipedia.org/wiki/Operation_Lone_Star" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Operation Lone Star</span></a><span data-contrast="auto">.</span></p><h4><a id="award2"></a>Erie Meyer</h4><p class="image-left"><img title="Erie Meyer" src="/files/2025/07/22/eriemeyer_0.png" alt="Erie Meyer" width="350" height="350" 
/></p><p>Erie Meyer is a Senior Fellow at the <a href="https://www.vanderbilt.edu/vanderbilt-policy-accelerator/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Vanderbilt Policy Accelerator</span></a><span data-contrast="auto"> where she focuses on the intersection of technology, artificial intelligence, and regulation, and a Senior Fellow at the </span><a href="https://www.law.georgetown.edu/tech-institute/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Georgetown Law Institute for Technology Law &amp; Policy</span></a><span data-contrast="auto">. She is former Chief Technologist at both the </span><a href="https://www.consumerfinance.gov/complaint/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Consumer Financial Protection Bureau</span></a><span data-contrast="auto"> (CFPB) and the </span><a href="https://www.ftc.gov/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Federal Trade Commission</span></a><span data-contrast="auto">. Earlier, she was senior advisor to the U.S. Chief Technology Officer at the White House, where she co-founded the </span><a href="https://www.usds.gov/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">United States Digital Service</span></a><span data-contrast="auto">, a team of technologists and designers working to improve digital services for the public. 
Meyer also worked as senior director at </span><a href="https://codeforamerica.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Code for America</span></a><span data-contrast="auto">, a nonprofit that promotes civic hacking to modernize government services, and in the Ohio Attorney General's office at the height of the financial crisis.</span><span data-ccp-props="279}"> </span></p><p> </p><p><span data-contrast="auto">Since January 20, Meyer has helped organize former government technologists to stand up for the privacy and integrity of governmental systems that hold Americans’ data. In addition to organizing others, she </span><a href="https://storage.courtlistener.com/recap/gov.uscourts.dcd.277287/gov.uscourts.dcd.277287.18.0_1.pdf" target="_blank" rel="noopener noreferrer"><span data-contrast="none">filed a declaration in federal court</span></a><span data-contrast="auto"> in February warning that 12 years of critical records could be irretrievably lost in the CFPB’s purge by the Trump Administration’s Department of Government Efficiency. In April, she filed </span><a href="https://storage.courtlistener.com/recap/gov.uscourts.mdd.577321/gov.uscourts.mdd.577321.111.9.pdf" target="_blank" rel="noopener noreferrer"><span data-contrast="none">a declaration in another case</span></a><span data-contrast="auto"> warning about using private-sector AI on government information. 
That same month, she </span><a href="https://oversight.house.gov/wp-content/uploads/2025/04/Testimony-of-Erie-Meyer-2.pdf" target="_blank" rel="noopener noreferrer"><span data-contrast="none">testified to the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation</span></a><span data-contrast="auto"> that DOGE is centralizing access to some of the most sensitive data the government holds—Social Security records, disability claims, even data tied to national security—without a clear plan or proper oversight, warning that “DOGE is burning the house down and calling it a renovation.”</span><span data-ccp-props="279}"> </span></p><h4><strong><a id="award3"></a>Software Freedom Law Center<br /></strong></h4><p class="image-left"><img title="sflc.in logo" src="/files/2025/07/22/sflc_logo_0.png" alt="sflc.in logo" width="350" height="350" /></p><p>Software Freedom Law Center, India<span data-contrast="auto"> is a donor-supported legal services organization based in India that brings together lawyers, policy analysts, students, and technologists to protect freedom in the digital world. It promotes innovation and open access to knowledge by helping developers make great free and open-source software, protects privacy and civil liberties for Indians by educating and providing free legal advice, and helps policymakers make informed and just decisions about use of technology.</span><span data-ccp-props="{}"> </span></p><p><span data-contrast="auto">Founded in 2010 by technology lawyer and online civil liberties activist </span><a href="https://mishichoudhary.com/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Mishi Choudhary</span></a><span data-contrast="auto">, SFLC.IN tracks and participates in litigation, AI regulations, and free speech issues that are defining Indian technology. 
It also tracks </span><a href="https://internetshutdowns.in/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">internet shutdowns</span></a><span data-contrast="auto"> and </span><a href="https://freespeech.sflc.in/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">censorship incidents</span></a><span data-contrast="auto"> across India, provides </span><a href="https://security.sflc.in/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">digital security training</span></a><span data-contrast="auto">, and has launched the </span><a href="https://ddn.sflc.in/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Digital Defenders Network</span></a><span data-contrast="auto">, a pan-Indian network of lawyers committed to protecting digital rights. It has conducted landmark litigation cases, petitioned the government of India on freedom of expression and internet issues, and campaigned for WhatsApp and Facebook to fix a feature of their platform that has been used to harass women in India.</span><span data-ccp-props="{}"> </span></p><hr /><p>Thank you to Fastly, DuckDuckGo, Corellium, and No Starch Press for their year-round support of EFF's mission.</p><p>Want to show your team’s support for EFF? Sponsorships ensure we can continue hosting events like this to build community among digital rights supporters. 
Please visit <a class="theme markdown__link" href="http://eff.org/thanks" target="_blank" rel="noopener noreferrer">eff.org/thanks</a> or contact <a class="theme markdown__link" href="mailto:tierney@eff.org" target="_blank" rel="noopener noreferrer">tierney@eff.org</a> for more information on corporate giving and sponsorships.</p><p>EFF is dedicated to a harassment-free experience for everyone, and all participants are encouraged to view our full <a href="https://www.eff.org/pages/event-expectations" target="_blank" rel="noopener noreferrer">Event Expectations</a>.</p><p><em>Questions? Email us at <a href="mailto:events@eff.org?subject=EFF%20Awards">events@eff.org</a>.</em></p>
</div></div></div></description>
<pubDate>Thu, 28 Aug 2025 22:57:12 +0000</pubDate>
<guid isPermaLink="false">111102 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025eff-awards-winners.png" type="image/png" length="545335" />
</item>
<item>
<title>Podcast Episode: Protecting Privacy in Your Brain</title>
<link>https://www.eff.org/deeplinks/2025/08/podcast-episode-protecting-privacy-your-brain</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-contrast="none">The human brain might be the grandest computer of all, but in this episode, we talk to two experts who confirm that the ability for tech to decipher thoughts, and perhaps even manipulate them, isn't just around the corner – it's already here. Rapidly advancing "neurotechnology" could offer new ways for people with brain trauma or degenerative diseases to communicate, </span><a href="https://www.nytimes.com/2025/08/14/science/brain-neuroscience-computers-speech.html?unlocked_article_code=1.eE8.HKxe.lfZy4CQNJQlR&amp;smid=url-share" target="_blank" rel="noopener noreferrer"><span data-contrast="none">as the New York Times reported this month</span></a><span data-contrast="none">, but it also could open the door to abusing the privacy of the most personal data of all: our thoughts. Worse yet, it could allow manipulating how people perceive and process reality, as well as their responses to it – a Pandora’s box of epic proportions.</span></p>
<p><div class="mytube" style="width: 100%;">
<div class="mytubetrigger" tabindex="0">
<img src="https://www.eff.org/sites/all/modules/custom/mytube/play.png" class="mytubeplay" alt="play" style="top: -4px; left: 20px;" />
<div hidden class="mytubeembedcode">%3Ciframe%20height%3D%2252px%22%20width%3D%22100%25%22%20frameborder%3D%22no%22%20scrolling%3D%22no%22%20seamless%3D%22%22%20src%3D%22https%3A%2F%2Fplayer.simplecast.com%2F3955c653-7346-44d2-82e2-0238931bcfd9%3Fdark%3Dtrue%26amp%3Bcolor%3D000000%22%20allow%3D%22autoplay%22%3E%3C%2Fiframe%3E</div>
</div>
<div class="mytubetext">
<span><a href="https://www.eff.org/deeplinks/2008/02/embedded-video-and-your-privacy" rel="noreferrer" target="_blank">Privacy info.</a></span>
<span>This embed will serve content from <em><a rel="nofollow" href="https://player.simplecast.com/3955c653-7346-44d2-82e2-0238931bcfd9?dark=true&amp;color=000000">simplecast.com</a></em><br /></span>
</div>
</div>
</p>
<p><span data-contrast="none"></span><span data-ccp-props="0}"><i><a href="https://open.spotify.com/show/4UAplFpPDqE4hWlwsjplgt" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/spotify-podcast-badge-blk-wht-330x80.png" alt="Listen on Spotify Podcasts Badge" width="198" height="48" /></a> <a href="https://podcasts.apple.com/us/podcast/effs-how-to-fix-the-internet/id1539719568" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/applebadge2.png" alt="Listen on Apple Podcasts Badge" width="195" height="47" /></a> <a href="https://music.amazon.ca/podcasts/bf81f00f-11e1-431f-918d-374ab6ad07cc/how-to-fix-the-internet?ref=dmm_art_us_HTFTI" target="_blank" rel="noopener noreferrer"><img height="47" width="195" src="https://www.eff.org/files/styles/kittens_types_wysiwyg_small/public/2024/02/15/us_listenon_amazonmusic_button_charcoal.png?itok=YFXPE4Ii" /></a> <a href="https://feeds.eff.org/howtofixtheinternet" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/subscriberss.png" alt="Subscribe via RSS badge" width="194" height="50" /></a></i></span><span data-contrast="auto"></span></p>
<p><span data-contrast="auto">(You can also find this episode on the <a href="https://archive.org/details/htfti-s6e9-rafael-yuste-and-jared-genser-vfinal" target="_blank" rel="noopener noreferrer">Internet Archive</a> and on <a href="https://youtu.be/8Tv0dQ1zfFQ?si=BKyFz65wkjwkG9hO" target="_blank" rel="noopener noreferrer">YouTube</a>.)</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Neuroscientist Rafael Yuste and human rights lawyer Jared Genser are awestruck by both the possibilities and the dangers of neurotechnology. Together they established </span><a href="https://neurorightsfoundation.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">The Neurorights Foundation</span></a><span data-contrast="auto">, and now they join EFF’s Cindy Cohn and Jason Kelley to discuss how technology is advancing our understanding of what it means to be human, and the solid legal guardrails they're building to protect the privacy of the mind.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">In this episode you’ll learn about:</span></p>
<ul>
<li><span data-ccp-props="{}">How to protect people’s mental privacy, agency, and identity while ensuring equal access to the positive aspects of brain augmentation</span></li>
<li><span data-ccp-props="{}">Why neurotechnology regulation needs to be grounded in international human rights</span></li>
<li><span data-ccp-props="{}">Navigating the complex differences between medical and consumer privacy laws</span></li>
<li><span data-ccp-props="{}">The risk that information collected by devices now on the market could be decoded into actual words within just a few years</span></li>
<li><span data-ccp-props="{}">Balancing beneficial innovation with the protection of people’s mental privacy</span><span data-ccp-props="{}"> </span></li>
</ul>
<p><a href="https://ntc.columbia.edu/rafael-yuste/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Rafael Yuste</span></a><span data-contrast="auto"> is a professor of biological sciences and neuroscience, co-director of the </span><a href="https://kavli.columbia.edu/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Kavli Institute for Brain Science</span></a><span data-contrast="auto">, and director of the </span><a href="https://ntc.columbia.edu/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">NeuroTechnology Center at Columbia University</span></a><span data-contrast="auto">. He led the group of researchers that first proposed the </span><a href="https://braininitiative.nih.gov/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Initiative</span></a><span data-contrast="auto"> launched in 2013 by the Obama Administration.</span><span data-ccp-props="{}"> </span></p>
<p><a href="https://perseus-strategies.com/team/jared-genser-english/"><span data-contrast="none">Jared Genser</span></a><span data-contrast="auto"> is an international human rights lawyer who serves as managing director at Perseus Strategies, renowned for his successes in freeing political prisoners around the world. He’s also the Senior Tech Fellow at Harvard University’s </span><a href="https://www.hks.harvard.edu/centers/carr-ryan" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Carr-Ryan Center for Human Rights</span></a><span data-contrast="auto">, and he is outside general counsel to </span><a href="https://neurorightsfoundation.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">The Neurorights Foundation</span></a><span data-contrast="auto">, an international advocacy group he co-founded with Yuste that works to enshrine human rights as a crucial part of the development of neurotechnology. </span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Resources:</span><span data-ccp-props="{}"> </span></p>
<ul>
<li><span data-ccp-props="{}">Nature: “</span><a href="https://www.nature.com/articles/551159a" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Four ethical priorities for neurotechnologies and AI</span></a><span data-contrast="auto">” (Nov. 9, 2017)</span></li>
<li><span data-ccp-props="{}">YouTube: “</span><a href="https://youtu.be/vL7yMn6kiMg?si=iMm2m5pkXYs3AJwz" target="_blank" rel="noopener noreferrer"><span data-contrast="none">A Neuroprosthesis for Speech Decoding and Avatar Control | Chang Lab – UCSF</span></a><span data-contrast="auto">” (Aug. 23, 2023)</span></li>
<li><span data-ccp-props="{}">Journal of Alzheimer's Disease via National Library of Medicine: “</span><a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10977421/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Unlocking the Potential of Repetitive Transcranial Magnetic Stimulation in Alzheimer’s Disease: A Meta-Analysis of Randomized Clinical Trials to Optimize Intervention Strategies</span></a><span data-contrast="auto">” (March 19, 2024)</span></li>
<li><span data-ccp-props="{}">The Neurorights Foundation: “</span><a href="https://perseus-strategies.com/wp-content/uploads/2024/04/FINAL_Consumer_Neurotechnology_Report_Neurorights_Foundation_April-1.pdf" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Safeguarding Brain Data: Assessing the Privacy Practices of Consumer Neurotechnology Companies</span></a><span data-contrast="auto">” (April 2024)</span></li>
<li><span data-ccp-props="{}">New York Times: “</span><a href="https://www.nytimes.com/2024/09/29/science/california-neurorights-tech-law.html?unlocked_article_code=1.VE8.o4GV.-6DN4sBW0_Jg&amp;smid=url-share" target="_blank" rel="noopener noreferrer"><span data-contrast="none">California Passes Law Protecting Consumer Brain Data</span></a><span data-contrast="auto">” (Sept. 29, 2024)</span><span data-ccp-props="{}"> </span></li>
</ul>
<p><span data-contrast="auto">What do you think of “How to Fix the Internet?” </span><a href="https://forms.office.com/pages/responsepage.aspx?id=qalRy_Njp0iTdV3Gz61yuZZXWhXf9ZdMjzPzrVjvr6VUNUlHSUtLM1lLMUNLWE42QzBWWDhXU1ZEQy4u&amp;web=1&amp;wdLOR=c90ABD667-F98F-9748-BAA4-CA50122F0423" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Share your feedback here</span></a><span data-contrast="auto">.</span></p>
<h3>Transcript</h3>
<p><span data-contrast="auto"><strong>RAFAEL YUSTE</strong>: The brain is not just another organ of the body, but the one that generates our mind, all of our mental activity. And that's the heart of what makes us human is our mind. So this technology is one technology that for the first time in history can actually get to the core of what makes us human and not only potentially decipher, but manipulate the essence of our humanity.<br />10 years ago we had a breakthrough with studying the mouse’s visual cortex in which we were able to not just decode from the brain activity of the mouse what the mouse was looking at, but to manipulate the brain activity of the mouse. To make the mouse see things that it was not looking at. <br />Essentially we introduce, in the brain of the mouse, images. Like hallucinations. And in doing so, we took control over the perception and behavior of the mouse. So the mouse started to behave as if it was seeing what we were essentially putting into his brain by activating groups of neurons. <br />So this was fantastic scientifically, but that night I didn't sleep because it hit me like a ton of bricks. Like, wait a minute, what we can do in a mouse today, you can do in a human tomorrow. And this is what I call my Oppenheimer moment, like, oh my God, what have we done here?</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> That's the renowned neuroscientist Rafael Yuste talking about the moment he realized that his groundbreaking brain research could have incredibly serious consequences. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> On this show, we flip the script from the dystopian doom and gloom thinking we all get mired in when thinking about the future of tech. We're here to challenge ourselves, our guests and our listeners to imagine a better future that we can be working towards. How can we make sure to get this right, and what can we look forward to if we do?<br />And today we have two guests who are at the forefront of brain science -- and are thinking very hard about how to protect us from the dangers that might seem like science fiction today, but are becoming more and more likely.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> Rafael Yuste is one of the world's most prominent neuroscientists. He's been working in the field of neurotechnology for many years, and was one of the researchers who led the BRAIN initiative launched by the Obama administration, which was a large-scale research project akin to the Genome Project, but focusing on brain research. He's the director of the NeuroTechnology Center at Columbia University, and his research has enormous implications for a wide range of mental health disorders, including schizophrenia, and neurodegenerative diseases like Parkinson's and ALS.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> But as Rafael points out in the introduction, there are scary implications for technology that can directly manipulate someone's brain.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> We're also joined by his partner, Jared Genser, a legendary human rights lawyer who has represented no less than five Nobel Peace Prize Laureates. He’s also the Senior Tech Fellow at Harvard University’s Carr-Ryan Center for Human Rights, and together with Rafael, he founded the Neurorights Foundation, an international advocacy group that is working to enshrine human rights as a crucial part of the development of neurotechnology.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> We started our conversation by asking how the brain scientist and the human rights lawyer first teamed up.</span></p>
<p><span data-contrast="auto"><strong>RAFAEL YUSTE:</strong> I knew nothing about the law. I knew nothing about human rights my whole life. I said, okay, I avoided that like the pest because you know what? I have better things to do, which is to focus on how the brain works. But I was just dragged into the middle of this by our own work.<br />So it was a very humbling moment and I said, okay, you know what? I have to cross to the other side and get involved really with the experts that know how this works. And that's how I ended up talking to Jared. The whole reason we got together was pretty funny. We both got the same award from a Swedish foundation, from the Tällberg Foundation, this Eliasson Award for Global Leadership. In my case, because of the work I did on the Brain Initiative, and Jared got this award for his human rights work. <br />And, you know, this is one good thing of getting an award, or let me put it differently, at least, that getting an award led to something positive in this case is that someone in the award committee said, wait a minute, you guys should be talking to each other. And they put us in touch. He was like a matchmaker.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> I mean, you really stumbled into something amazing because, you know, Jared, you're, you're not just kind of your random human rights lawyer, right? So tell me your version, Jared, of the meet cute.</span></p>
<p><span data-contrast="auto"><strong>JARED GENSER:</strong> Yes. I'd say we're like work spouses together. So the feeling is mutual in terms of the admiration, to say the least. And for me, that call was really transformative. It was probably the most impactful one hour call I've had in my career in the last decade because I knew very little to nothing about the neurotechnology side, you know, other than what you might read here or there.<br />I definitely had no idea how quickly emerging neurotechnologies were developing and the sensitivity - the enormous sensitivity - of that data. And in having this discussion with Rafa, it was quite clear to me that my view of the major challenges we might face as humanity in the field of human rights was dramatically more limited than I might have thought.<br />And, you know, Rafa and I became fast friends after that and very shortly thereafter co-founded the Neurorights Foundation, as you noted earlier. And I think that this is what's made us such a strong team, is that our experiences and our knowledge and expertise are highly complementary.<br />Um, you know, Rafa and his colleagues had, uh, at the Morningside Group, which is a group of 25 experts he collected together at, uh, at Columbia, had already, um, you know, met and come up with, and published in the journal Nature, a review of the potential concerns that arise out of the potential misuse and abuse of neurotech.<br />And there were five areas of concerns that they had identified that include mental privacy, mental agency, mental identity, concerns about discrimination in the development and application of neurotechnologies, and fair use of mental augmentation. And these generalized concerns, uh, which they refer to as neurorights, of course map over to international human rights, uh, that to some extent are already protected by international treaties.<br />Um, but to other extents might need to be further interpreted from existing international treaties. 
And it was quite clear that when one would think about emerging neuro technologies and what they might be able to do, that a whole dramatic amount of work needed to be done before these things proliferate in such an extraordinary sense around the world.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> So Rafa and Jared, when I read a study like the one you described with the mice, my initial thought is, okay, that's great in a lab setting. I don't initially think like, oh, in five years or 10 years, we'll have technology that actually can be, you know, in the marketplace or used by the government to do the hallucination implanting you're describing. But it sounds like this is a realistic concern, right? You wouldn't be doing this work unless this had progressed very quickly from that experiment to actual applications and concerns. So what has that progression been like? Where are we now?</span></p>
<p><span data-contrast="auto"><strong>RAFAEL YUSTE:</strong> So let me tell you, two years ago I got a phone call in the middle of the night. It woke me up in the middle of the night, okay, from a colleague and friend who had his Oppenheimer moment. And his name is Eddie Chang. He's a professor of neurosurgery at UCSF, and he's arguably the leader in the world to decode brain activity from human patients. So he had been working with a patient that was paralyzed, because of a bulbar infarction, a stroke in, essentially, the base of her brain, and she had locked-in syndrome, so she couldn't communicate with the exterior. She was in a wheelchair and they implanted a few electrodes, an electrode array, into her brain with neurosurgery and connected those electrodes to a computer with an algorithm using generative AI. <br />And using this algorithm, they were able to decode her inner speech - the language that she wanted to generate. She couldn't speak because she was paralyzed. And when you conjure – we don't really know exactly what goes on during speech – but when you conjure the words in your mind, they were able to actually decode those words.<br />And then not only that, they were able to decode her emotions and even her facial gestures. So she was paralyzed and Eddie and his team built an avatar of the person in the computer with her face and gave that avatar her voice, her emotions, and her facial gestures. And if you watch the video, she was just blown away.<br />So Eddie called me up and explained to me what they've done. I said, well, Eddie, this is absolutely fantastic. You just unlocked the person from this locked-in syndrome, giving hope to all the patients that have a similar problem. But of course he said, no, no, I, I'm not talking about that. I'm talking about, we just cloned her essentially.<br />It was actually published as the cover of the journal Nature. Again, this is the top journal in the world, so they gave them the cover. 
It was such an impressive result. And this was implantable neurotechnology. So it requires a neurosurgeon to go in and put in this electrode. So it is, of course, in a hospital setting, this is all under control and super regulated.<br />But since then, there's been fast development, partly spurred by all these investments into neurotechnology, uh, private and public, all over the world. There's been a lot of development of non-implantable neurotechnology to either record brain activity from the surface or to stimulate the brain from the surface without having to open up the skull.<br />And let me just tell you two examples that bring home the fact that this is not science fiction. In December 2023, a team in Australia used an EEG device, essentially like a helmet that you put on. You can actually buy these things on Amazon and couple it to a generative AI algorithm again, like Eddie Chang. In fact, I think they were inspired by Eddie Chang's work and they were able to decode the inner speech of volunteers. It wasn't as accurate as the decoding that you can do if you stick the electrodes inside. But from the outside, they have a video of a person that is mentally ordering a cappuccino at a Starbucks. No. And they essentially decode, they don't decode absolutely every word that the person is thinking. But enough words that the message comes out loud and clear. So the decoding of inner speech, it's doable, with non-invasive technology. Not only that study from Australia, since then, you know, all these teams in the world, uh, we all help each other continuously. So, uh, shortly after that Australian team, another study in Japan published something, uh, with much higher accuracy and then another study in China. Anyway, this is now becoming very common practice to use generative AI to decode speech. <br />And then on the stimulation side is also something that raises a lot of concerns ethically. 
In 2022 a lab at Boston University used external magnetic stimulation to activate parts of the brain in a cohort of volunteers that were older in age. This was the control group for a study on Alzheimer's patients. And they reported, in a very good paper, that they could increase both short-term and long-term memory by 30%. <br />So this is the first serious case that I know of where, again, this is not science fiction, this is demonstrated enhancement of, uh, mental ability in a human with noninvasive neurotechnology. So this could open the door to a whole industry that could use noninvasive devices, maybe magnetic stimulation, maybe acoustical, maybe, who knows, optical, to enhance any aspect of our mental activity. And that, I mean, just imagine. <br />This is what we're actually focusing on at our foundation right now, this issue of mental augmentation, because we don't think it's science fiction. We think it's coming.</span></p>
<p><span data-contrast="auto"><strong>JARED GENSER:</strong> Let me just amplify what Rafa's saying and make this as tangible as possible for your listeners. As Rafa was already alluding to, when you're talking about implantable devices, they of course have to be licensed by the Food and Drug Administration, they're implanted through neurosurgery in the medical context, and all the data that's being gathered is covered by HIPAA and other state health data laws. But there are already 30 different kinds of wearable neurotechnology devices available on the market that you can buy and use today. <br />As one example, there's the company Muse, which has a meditation device. You buy their device, you put it on your head, you meditate for an hour. The BCI - brain computer interface - connects to your app, and then basically you'll get back from the company a decoding of your brain activity to know when you're in a meditative state or not.<br />The problem is that these are EEG scanning devices that, if they were used in a medical context, would be required to be licensed. But in a consumer context, there's no regulation of any kind. And you're talking about devices that can gather from gigabytes to terabytes of neural data today, of which you can only decode maybe 1%.<br />And from the data that's being gathered, EEG scanning device data in wearable form, you could identify whether a person has any of a number of different brain diseases, and you could also decode about a dozen different mental states: are you happy, are you sad, and so forth. <br />And so at our foundation, the Neurorights Foundation, we did a very important study on this topic that was covered on the front page of the New York Times. We looked at the user agreements and the privacy agreements for the 30 different companies’ products that you can buy today, right now. 
And what we found was that in 29 out of the 30 cases, basically, it's carte blanche for the companies. They can download your data, they can use it as they see fit, and they can transfer it, sell it, etc.<br />Only in one case did a company, ironically called Unicorn, actually keep the data on your local device; it was never transferred to the company in question. And we benchmarked those agreements against a half dozen different global privacy standards and found that there were just gigantic gaps there.<br />So why is that a problem? Well, take the Muse device I just mentioned: they talk about how they've downloaded a hundred million hours of consumer neural data from people who have bought their device and used it. And we're talking about these studies in Australia and Japan that are decoding thought to text. <br />Today, thought to text with EEG can only be done at a relatively slow speed, like 10 or 15 words a minute, with maybe 40 to 50% accuracy. But eventually it's gonna start to approach the speed of Eddie Chang's work in California, where with the implantable device you can do thought to text at 80 words a minute, 95% accuracy.<br />And so the problem is that in three or four years, let's say, when this technology is perfected with a wearable device, this company Muse could theoretically go back to that hundred million hours of neural data and actually decode what the person was thinking, in the form of words, when they were meditating.<br />And to help you understand, as a last point, why this is, again, science and not science fiction: Apple is already clearly aware of the potential here, and two years ago they actually filed a patent application for their next-generation AirPod device that is going to have built-in EEG scanners in each ear, right?<br />And they sell a hundred million pairs of AirPods every single year, right? 
And when this kind of technology, thought to text, is perfected in wearable form, those AirPods will be able to be used, for example, to do thought-to-text emails, thought-to-text text messages, et cetera. <br />But when you continue to wear those AirPod devices, the huge question is what's gonna be happening to all the other data that's being absorbed, how it is going to be able to be used, and so forth. And so this is why it's really urgent at an international level to be dealing with this. We're working at the United Nations and in many other places to develop various kinds of frameworks consistent with international human rights law, and we're also working at the national and sub-national level. <br />Rafa, my colleague, led the charge in Chile to help create the first-ever constitutional amendment that protects mental privacy. We've been working with a number of states in the United States now; California, Colorado and Montana, very different kinds of states, have all amended their state consumer data privacy laws to extend their application to neural data. But it is really, really urgent in light of the fast-developing technology and the enormous gaps between these consumer devices' user agreements and what is considered to be best practice in terms of data privacy protection.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, I saw that study that you did, and it mirrors a lot of what we see in other contexts, where we've got click-wrap licenses and other very flimsy, one-sided agreements that people allegedly agree to, but that don't resemble, under any lawyer's understanding, a meeting of the minds or a contract that you negotiate.<br />And then when you add this context, I think it puts these problems on steroids in many ways and makes them really worse. And one of the things I've been thinking about is that you two have identified one of the scenarios that demonstrates how our refusal to take privacy seriously, on the consumer side and on the law enforcement side, is gonna have really, really dire consequences for people, much more dire, potentially, than we've even seen so far. And it really requires serious thinking about what we mean by protecting people's privacy and identity and self-determination.</span></p>
<p><span data-contrast="auto"><strong>JARED GENSER:</strong> Let me just interject on that one narrow point, because I was literally just on a panel discussion, remotely, at the UN Crime Congress last week that was hosted by the UN Office on Drugs and Crime, UNODC, and Interpol, the International Police Organization. It was a panel discussion on the topic of emerging law enforcement uses of neurotechnologies. So this is coming. They just launched a joint project to look at potential uses, as well as to develop guidelines for how that can be done. But this is not at all theoretical. I mean, this is very, very practical.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> And much of the funding for this has come out of the Department of Defense, so thinking about how we put the right guardrails in place is really important. And honestly, if you think that the only people who are gonna want access to the neural data that these devices are collecting are private companies who wanna sell us things, that's not the history, right? Law enforcement comes for these things, both locally and internationally, no matter who has custody of them. And so you kind of have to recognize that this isn't just an arena for kind of skeezy companies to do things we don't like.</span></p>
<p><span data-contrast="auto"><strong>JARED GENSER:</strong> Absolutely.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> Let's take a quick moment to thank our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.<br />We also wanna thank EFF members and donors. You're the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate. Also, we'd love for you to join us at this year's EFF Awards, where we celebrate the people working towards the better digital future that we all care so much about.<br />Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.<br />We also wanted to share that our friend Cory Doctorow has a new podcast you might like. Have a listen to this:<br />[WHO BROKE THE INTERNET TRAILER]<br />And now back to our conversation with Rafael Yuste and Jared Genser.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> This might be a little bit of a geeky lawyer question, but I really appreciated the decision you guys made to really ground this in international human rights, which I think is tremendously important. But not obvious to most Americans as the kind of framework that we ought to invoke. And I was wondering how you guys came to that conclusion.</span></p>
<p><span data-contrast="auto"><strong>JARED GENSER:</strong> No, I think it's actually a very, very important question. I mean, I think the bottom line is that there are a lot of ways to look at questions like this. You can think about a national constitution or national laws. You can think about international treaties or laws.<br />You can look at ethical frameworks or self-governance by companies themselves, right? And at the end of the day, because of the seriousness and the severity of the potential downside risks if this kind of technology is misused or abused, our view is that what we really need is what's referred to by lawyers as hard law, as in law that is binding and enforceable against states by citizens, and obviously binding on governments and what they do, binding on companies and what they do, and so forth. <br />And so it's not that we think, for example, ethical frameworks or ethical standards or self-governance by companies are not important. They are very much a part of an overall approach. But our approach at the Neurorights Foundation is: let's look at hard law, and there are two kinds of hard law to look at. The first are international human rights treaties. These are multilateral agreements that states negotiate and come to agreements on. And when a country signs and ratifies a treaty, as the US has on the key relevant treaty here, which is the International Covenant on Civil and Political Rights, those rights get domesticated in the law of each country in the world that signs and ratifies them, and that makes them enforceable. And so we think, first and foremost, it's important that we ground our concerns about the misuse and abuse of these technologies in the requirements of international human rights law, because the United States and other countries in the world are obligated to protect their citizens from abuses of these rights. 
<br />And at the same time, of course, that isn't sufficient on its own. We also need to see, in certain contexts, amendments to a constitution (probably not in the US context, where that's much harder to do), and we need laws that are actually enforceable against companies.<br />And this is why our work in California, Montana and Colorado is so important. Companies in California, as one illustration, which is where Apple is based and where Meta is based and so forth, now have to provide the protections embedded in the California Consumer Privacy Act to all of their gathering and use of neural data, right? <br />And that means that you have a right to be forgotten. You have a right to demand your data not be transferred or sold to third parties. You have a right to have access to your data. Companies have obligations to tell you what data they are gathering, how they are gonna use it, and, if they propose selling or transferring it, to whom, and so forth. <br />So these are now ultimately gonna be binding law on companies based in California and, as we're developing this, around the world. But to us, that is really what needs to happen.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> Your success has been pretty stunning, even though there's obviously so much more to do. We work to try to amend and change and improve laws at the state and local and federal level, and internationally sometimes, and it's hard. <br />But with the two of you together, I think there's something really fascinating about the way you're building a better future and building in protections for that better future at the same time. <br />And you're aware of why that's so important. I think there's a big lesson there for a lot of people who work in the tech field and in the science field: you can make incredible things and also make sure they don't cause huge problems, right? And that's just a really important lesson. <br />What we do with this podcast is try to think about what the better future that people are building looks like, and what it should look like. And the two of you are thinking about that in a way that a lot of our guests aren't, because you're at the forefront of a lot of this technology. But I'd love to hear what, Rafa and then Jared, you each think science and the law look like if you get it right. If things go the way you hope they do, what does the technology look like? What do the protections look like? Rafa, could you start?</span></p>
<p><span data-contrast="auto"><strong>RAFAEL YUSTE:</strong> Yeah, I would comment, there are five places in the world today where there's hard law protection for brain activity and brain data: the Republic of Chile, the state of Rio Grande do Sul in Brazil, and the states of Colorado, California, and Montana in the US. And in every one of these places there have been votes in the legislature, and they're all bicameral legislatures, so there have been 10 votes, and every single one of those votes has been unanimous.<br />All political parties agreed, in Chile and in Brazil. Actually, in Brazil there were 16 political parties; that had never happened before, that they all agreed on something. California, Montana, and Colorado were all unanimous, except for one no vote in Colorado from a person who votes against everything. He has some axe to grind with his colleagues, and he just votes no on everything.<br />But aside from this person... Actually, the Colorado bill was introduced by a Democratic representative, but the Republican side took it to heart. A Republican senator said that this is the definition of a no-brainer, and he asked for permission to introduce the bill in the Senate in Colorado.<br />So the person who championed the bill in the Senate in Colorado was actually not a Democrat but a Republican. So why is that? Quoting this Colorado senator, it's a no-brainer. This is an issue where, the minute you get it, you understand: do you want your brain activity to be decoded without your consent? Well, this is not a good idea. <br />So not a single person that we've met has opposed this issue. I think Jared and I do the best job we can, and we work very hard. And I should tell you that we're doing this pro bono, without being compensated for our work. But the reason behind the success is really the issue; it's not just us. 
I think that we're dealing with an issue on which there is fundamental, widespread, universal agreement. </span></p>
<p><span data-contrast="auto"><strong>JARED GENSER:</strong> What I would say is that, on the one hand, we appreciate, of course, the kind words about the progress we're making. We have made a lot of progress in a relatively short period of time, and yet we have a dramatically long way to go.<br />We need to further interpret international law in the way that I'm describing, to ensure that privacy includes mental privacy all around the world, and we really need national laws in every country in the world, subnational laws in various places, and so forth. <br />I will say that, as you know from all the great work you guys do with your podcast, getting something done at the federal level is of course much more difficult in the United States because of the divisions that exist. And there is no federal consumer data privacy law, because we've never been able to get Republicans and Democrats to agree on the text of one.<br />The only kinds of consumer data protected at the federal level are healthcare data under HIPAA and financial data. And there have been multiple efforts to do a federal consumer data privacy law that have failed. In the last Congress, there was something called the American Privacy Rights Act. It was bipartisan, and it basically just got ripped apart, because it tried to put together about a dozen different categories of data that would be protected at the federal level, and each one of those has a whole industry association associated with it. <br />We were able to get that draft bill amended to include neural data, which it didn't originally include, but ultimately the bill died before even coming to a vote in committee. In our view, that then just leaves state consumer data privacy laws. There are about 35 states now that have state-level laws; 15 states actually still don't.<br />And so we are working state by state. 
Ultimately, I think that when it comes especially to the sensitivity of neural data, we need a federal law that's going to protect it. But because that's not gonna be easy to achieve, definitely not as a package with a dozen other types of data, one way to get to a federal solution is to start to work with lots of different states. All these different state consumer data privacy laws are different. I mean, they're similar, but they have differences, right? <br />And ultimately, as you start to see different kinds of regulation being adopted in different states relating to the same kind of data, our hope is that industry will start to say to members of Congress and the Trump administration: hey, we need a common way forward here, so let's set at least a floor at the federal level for what needs to be done. If states want to regulate it more than that, that's fine. But ultimately, I think that there's a huge amount of work still left to be done, obviously, all around the world and at the state level as well.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> I wanna push you a little bit. So what does it look like if we get it right? What does my world look like? Do I get the cool earbuds, or do I not?</span></p>
<p><span data-contrast="auto"><strong>JARED GENSER:</strong> Yeah, I mean, look, Rafa of course is the technologist and I'm the human rights guy, but the world that we wanna see is one in which we promote innovation while simultaneously protecting people from abuses of their human rights, and ensure that neurotechnologies are developed in an ethical manner, right? <br />So we do need self-regulation by industry, and we do need national and international laws. But at the same time, one in three people in their lifetimes will have a neurological disease, right?<br />The brain diseases that people know best, from family, friends or their own experience, whether you look at Alzheimer's or Parkinson's, are devastating, debilitating and, today, irreversible conditions. All you can do with any brain disease today, at best, is to slow its progression. You can't stop its progression, and you can't reverse it. <br />And eventually, in 20 or 30 years, from these kinds of emerging neurotechnologies, we're going to be able to ultimately cure brain diseases. And so that's what the world looks like: think about all of the different ways in which humanity is going to be improved when we're able to not only address, but cure, diseases of this kind, right?<br />And one of the other exciting parts of emerging neurotechnologies is our ability to understand ourselves, our own brains and how they operate and function. And that is very, very exciting. <br />Eventually we're gonna be able to decode not only thought-to-text, but even our subconscious thoughts. And that, of course, raises enormous questions. This technology is also gonna raise fundamental questions about what it actually means to be human. 
And who are we as humans, right? <br />And so, for example, one of the side effects of deep brain stimulation, in a very, very, very small percentage of patients, is a change in personality. In other words, if you put a device in someone's brain to control the symptoms of Parkinson's, when you're obviously messing with a human brain, other things can happen. <br />And there's a well-known case of a woman who went from being, in essence, an extreme introvert to an extreme extrovert with deep brain stimulation, as a side effect. And she's currently being studied right now, along with other examples of these kinds of personality changes. <br />And if we can figure out, for example, what parts of the human brain deal with being an introvert or an extrovert, you're also raising fundamental questions about the possibility of being able to change your personality, in part, with a brain implant, right? I mean, we can already do that, obviously, with psychotropic medications for people who have mental illnesses, through psychotherapy, and so forth. But there are gonna be other ways in which we can understand how the brain operates and functions and optimize our lives through the development of these technologies.<br />So the upside is enormous: medically, scientifically, economically, and from a self-understanding point of view. And at the same time, the downside risks are profound. It's not just decoding our thoughts. 
I mean, we're on the cusp of an unbeatable lie detector test, which could have huge positive and negative impacts in criminal justice contexts, right?<br />So there are so many different implications of these emerging technologies, and on the regulatory side we are often so far behind the actual scientific developments. In this particular case, we really need to try to do everything possible to at least develop these solutions at a pace that matches the developments, let alone get ahead of them.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> I'm fascinated to see, in talking to them, how successful they've been when there isn't a big lobbying wing of neurotech products and companies stopping them, because they're ahead of the game. I think that's the thing that really struck me, and something that we can hopefully learn from in the future: if you're ahead of the curve, you can implement these privacy protections much more easily, obviously. That was really fascinating. And of course, just talking to them about the technology set my mind spinning.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah, in both directions, right? Both "what an amazing opportunity" and "oh my God, how terrifying this is," at the same time. I thought it was interesting because, from where we sit as people who are trying to figure out how to bring privacy into already-baked technologies and business models, we see how hard that is. But they feel like they're a little behind the curve, right? They feel like there's so much more to do. So I hope that we were able to both inspire them and support them in this, because I think to us, they look ahead of the curve, and I think to them, they feel a little behind, or, not overwhelmed, but they see the mountain in front of them.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> A thing that really stands out to me is when Rafa was talking about the popularity of these protections, and how people on all sides of the aisle are voting in favor of them. It's heartwarming, right? It's inspiring that you can get people to understand the real danger of a lack of privacy protections in one field; it makes me feel like we can still win privacy protections in the rest of the fields.<br />Like, you're worried for good reason about what's going on in your head and how that should be protected. But when you type on a computer, that's just the stuff in your head going straight onto the web, right? We've talked about how your phone or your search history are basically part of the contents of your mind, and those things need privacy protections too. And hopefully we can use the success of their work to talk about how we need to also protect things that are already happening, not just things that are potentially going to happen in the future.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> Yeah. And you see both kinds of issues, right? If they're right, it's scary; when they're wrong, it's scary. But also, what I really appreciated about them is that they're excited about the potentialities too. This isn't an effort that's about the house of "no innovation." In fact, this is where responsibility ought to come from: the people who are developing the technology recognizing the harms and then partnering with people who have expertise in the law and policy and regulatory side of things, so that together they're kind of a dream team of how you do this responsibly. <br />And that's really inspiring to me, because I think sometimes people get caught in this weird either/or: either the tech will protect us or the law will protect us. And I think what Rafa and Jared are really embodying and making real is that we need both of these to come together to really move into a better technological future.</span></p>
<p><span data-contrast="auto"><strong>JASON KELLEY:</strong> And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listener feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.<br />Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.</span></p>
<p><span data-contrast="auto"><strong>CINDY COHN:</strong> And I'm Cindy Cohn.</span></p>
<p><span data-contrast="auto"><em><strong>MUSIC CREDITS:</strong> This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by Jay Lang. Additional music, theme remixes and sound design by Gaetan Harris.</em></span></p>
</div></div></div></description>
<pubDate>Wed, 27 Aug 2025 07:05:44 +0000</pubDate>
<guid isPermaLink="false">111076 at https://www.eff.org</guid>
<category domain="https://www.eff.org/how-to-fix-the-internet-podcast">How to Fix the Internet: Podcast</category>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<category domain="https://www.eff.org/issues/medical-privacy">Medical Privacy</category>
<dc:creator>Josh Richman</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025-htfi-neuro-blog.jpg" alt="How to Fix the Internet - Rafael Yuste &amp; Jared Genser - Protecting Privacy in Your Brain " type="image/jpeg" length="209912" />
</item>
<item>
<title>Fourth Amendment Victory: Michigan Supreme Court Reins in Digital Device Fishing Expeditions</title>
<link>https://www.eff.org/deeplinks/2025/08/fourth-amendment-victory-michigan-supreme-court-reins-digital-device-fishing-1</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><i><span>EFF legal intern Noam Shemtov was the principal author of this post.</span></i></p>
<p><span>When police have a warrant to search a phone, should they be able to see </span><i><span>everything</span></i><span> on the phone—from family photos to communications with your doctor to everywhere you’ve been since you first started using the phone—in other words, data that is in no way connected to the crime they’re investigating? The Michigan Supreme Court just ruled no. </span></p>
<p><span>In </span><i><span>People v. Carson</span></i><span>, the court </span><a href="https://www.courts.michigan.gov/4a1850/siteassets/case-documents/uploads/opinions/final/sct/166923_115_01.pdf"><span>held</span></a><span> that to satisfy the Fourth Amendment, warrants authorizing searches of cell phones and other digital devices must contain express limitations on the data police can review, restricting searches to data that they can establish is clearly connected to the crime.</span></p>
<p class="pull-quote"><span>The realities of modern cell phones call for a strict application of rules governing the scope of warrants.</span></p>
<p><span>EFF, along with ACLU National and the ACLU of Michigan, filed an </span><a href="https://www.courts.michigan.gov/48ed5f/siteassets/case-documents/briefs/msc/2024-2025/166923/166923_97_01_sct_brf_cdam_aclu_aclu-mi_eff.pdf"><span>amicus brief</span></a><span> in </span><i><span>Carson</span></i><span>, expressly calling on the court to limit the scope of cell phone search warrants.</span> <span>We </span><a href="https://www.eff.org/deeplinks/2025/01/eff-michigan-supreme-court-cell-phone-search-warrants-must-strictly-follow-fourth"><span>explained</span></a><span> that the realities of modern cell phones call for a strict application of rules governing the scope of warrants. Without clear limits, warrants would become de facto licenses to look at everything on the device, a great universe of information that amounts to “</span><a href="https://supreme.justia.com/cases/federal/us/573/13-132/case.pdf"><span>the sum of an individual’s private life</span></a><span>.” </span></p>
<p><span>The </span><i><span>Carson</span></i><span> case shows just how broad many cell phone search warrants can be. Defendant Michael Carson was suspected of stealing money from a neighbor’s safe. The warrant to search his phone allowed the police to access:</span></p>
<blockquote><p><span>Any and all data including, text messages, text/picture messages, pictures and videos, address book, any data on the SIM card if applicable, and all records or documents which were created, modified, or stored in electronic or magnetic form and, any data, image, or information.</span></p>
</blockquote>
<p><span>There were no temporal or subject matter limitations. Consequently, investigators obtained over 1,000 pages of information from Mr. Carson’s phone, the vast majority of which did not have anything to do with the crime under investigation.</span></p>
<p><span>The Michigan Supreme Court held that this extremely broad search warrant was “constitutionally intolerable” and violated the particularity requirement of the Fourth Amendment. </span></p>
<p><span>The Fourth Amendment requires that warrants “particularly describ[e] the place to be searched, and the persons or things to be seized.” This is intended to limit authorization to search to the specific areas and things for which there is probable cause to search and to prevent police from conducting “wide-ranging exploratory searches.” </span></p>
<p class="pull-quote">Cell phones hold vast and varied information, including our most intimate data.</p>
<p><span>Across two opinions, a four-Justice majority joined a </span><a href="https://www.courts.michigan.gov/4a1850/siteassets/case-documents/uploads/opinions/final/sct/166923_115_01.pdf"><span>growing</span></a> <a href="https://cases.justia.com/massachusetts/supreme-court/2021-sjc-12938.pdf?ts=1610456560"><span>national</span></a> <a href="https://cases.justia.com/delaware/supreme-court/2016-205-2015.pdf?ts=1457011827"><span>consensus</span></a> <span>of courts recognizing that, given the immense and ever-growing storage capacity of cell phones, warrants must spell out up-front limitations on the information the government may review, including the dates and data categories that constrain investigators’ authority to search. And magistrates reviewing warrants must ensure the information provided by police in the warrant affidavit properly supports a tailored search.</span></p>
<p><span>This ruling is good news for digital privacy. Cell phones hold vast and varied information, including our most intimate data—“privacies of life” like our personal messages, location histories, and medical and financial information. The U.S. Supreme Court </span><a href="https://supreme.justia.com/cases/federal/us/573/13-132/case.pdf"><span>has recognized</span></a><span> as much, saying that application of Fourth Amendment principles to searches of cell phones must respond to cell phones’ unique characteristics, including the weighty privacy interests in our digital data. </span></p>
<p><span>We applaud the Michigan Supreme Court’s recognition that unfettered cell phone searches pose serious risks to privacy. We hope that courts around the country will follow its lead in concluding that the particularity rule applies with special force to such searches and requires clear limitations on the data the government may access. </span></p>
</div></div></div></description>
<pubDate>Fri, 22 Aug 2025 18:35:28 +0000</pubDate>
<guid isPermaLink="false">111014 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/law-enforcement-access">Law Enforcement Access</category>
<dc:creator>Jennifer Pinsof</dc:creator>
<dc:creator>Jennifer Lynch</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/police-surveillance-hat.jpg" alt="A silhouette of a police officer, with spying eye on his hat" type="image/jpeg" length="124291" />
</item>
<item>
<title>Victory! Pen-Link's Police Tools Are Not Secret </title>
<link>https://www.eff.org/deeplinks/2025/08/victory-pen-links-police-tools-are-not-secret</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>In a victory for transparency, the government contractor Pen-Link agreed to disclose the prices and descriptions of surveillance products that it sold to a local California Sheriff's office.</p>
<p><a href="https://eff.org/document/eff-pen-link-settlement">The settlement</a> ends a months-long California public records lawsuit with the Electronic Frontier Foundation and the San Joaquin County Sheriff’s Office. The settlement provides further proof that the surveillance tools used by governments are not secret and shouldn’t be treated that way under the law.</p>
<p>Last year, EFF submitted a California public records request to the San Joaquin County Sheriff’s Office for information about its work with Pen-Link and its subsidiary Cobwebs Technologies. Pen-Link <a href="https://www.eff.org/cases/pen-link-v-county-san-joaquin-sheriffs-office">went to court</a> to try to block the disclosure, claiming the names of its products and prices were trade secrets. EFF later <a href="https://www.eff.org/deeplinks/2024/12/eff-goes-court-uncover-police-surveillance-tech-california">entered the case</a> to obtain the records it requested. <span> </span></p>
<h3><strong>The Records Show the Sheriff Bought Online Monitoring Tools</strong></h3>
<p><a href="https://eff.org/document/unredacted-pen-link-documents">The records disclosed in the settlement</a> show that in late 2023, the Sheriff’s Office paid $180,000 for a two-year subscription to the Tangles “Web Intelligence Platform,” which is a Cobwebs Technologies product that allows the Sheriff to monitor online activity. The subscription allows the Sheriff to perform hundreds of searches and requests per month. The source of information includes the “Dark Web” and “Webloc,” according to the price quotation. According to the settlement, the Sheriff’s Office was offered but did not purchase a series of other add-ons including “AI Image processing” and “Webloc Geo source data per user/Seat.”</p>
<p class="pull-quote"><span>Have you been blocked from receiving similar information? We’d like to hear from you.</span></p>
<p>The intelligence platform overall has been described in other documents as analyzing data from the <a href="https://appsource.microsoft.com/en-au/product/saas/cobwebs-technologies.tangles?tab=Overview">“open, deep, and dark web, to mobile and social.”</a> And Webloc has been described as a platform that <a href="http://web.archive.org/web/20211217021955/https:/cobwebs.com/products/location-intelligence-system/#expand">“provides access to vast amounts of location-based data in any specified geographic location.”</a> Journalists at <a href="https://theintercept.com/2023/07/26/texas-phone-tracking-border-surveillance/">multiple</a> <a href="https://www.vice.com/en/article/the-lapd-is-using-controversial-mass-surveillance-tracking-software/">news</a> <a href="https://www.texasobserver.org/texas-dps-surveillance-tangle-cobwebs/">outlets</a> have chronicled Pen-Link's technology and <a href="https://jackpoulson.substack.com/p/israeli-firm-taught-us-police-how">have published</a> Cobwebs training manuals that demonstrate that its product can be used to target activists and independent journalists. Major local, state, and federal agencies use Pen-Link's technology.</p>
<p>The records also show that in late 2022 the Sheriff’s Office purchased some of Pen-Link’s <a href="https://www.penlink.com/platform/live-communication/">more traditional products</a> <a href="https://www.sec.gov/files/pia-penlink.pdf">that help</a> law enforcement execute and analyze data from wiretaps and pen-registers after a court grants approval.<span> </span></p>
<h3><strong>Government Surveillance Tools Are Not Trade Secrets</strong></h3>
<p>The public has a right to know what surveillance tools the government is using, no matter whether the government develops its own products or purchases them from private contractors. There are a host of policy, legal, and factual reasons that the surveillance tools sold by contractors like Pen-Link are not trade secrets.</p>
<p>Public information about these products and prices helps communities have informed conversations and make decisions about how their government should operate. In this case, <a href="https://www.eff.org/document/pen-link-complaint">Pen-Link argued</a> that its products and prices are trade secrets partially because governments rely on the company to “keep their data analysis capabilities private.” The company argued that clients would “lose trust” and governments may avoid “purchasing certain services” if the purchases were made public. This troubling claim highlights the importance of transparency. The public should be skeptical of any government tool that relies on secrecy to operate.</p>
<p>Information about these tools is also essential for defendants and criminal defense attorneys, who have the right to discover when these tools are used during an investigation. In support of its trade secret claim, <a href="https://www.eff.org/document/redacted-documents-san-joaquin-sheriffs-office">Pen-Link cited terms of service</a> that purported to restrict the government from disclosing its use of this technology without the company’s consent. Terms like this cannot be used to circumvent the public’s right to know, and governments should not agree to them.</p>
<p>Finally, in order for surveillance tools and their prices to be protected as a trade secret under the law, they have to actually be secret. However, Pen-Link’s tools and their prices are already public across the internet—in <a href="https://www.brennancenter.org/sites/default/files/2021-09/G.%20Cobwebs%20materials%20for%20UASI%202020.pdf">previous</a> <a href="https://d3n9y02raazwpg.cloudfront.net/pcbgov/3b61e5b3-7c8f-11ed-9024-0050569183fa-b95fcb69-f19b-4652-8495-9f0a506dc971-1691101307.pdf">public</a> <a href="https://www.state.wv.us/admin/purchase/Awards/Documents/2024/Q1/A_0601_CMA_MAP2400000001.pdf">records</a> <a href="https://eff.org/files/2025/08/18/cpra_-_elk_grove_pen-link.pdf">disclosures</a>, <a href="https://appsource.microsoft.com/en-au/product/saas/cobwebs-technologies.tangles?tab=Overview">product</a> <a href="http://web.archive.org/web/20211217021955/https:/cobwebs.com/products/location-intelligence-system/#expand">descriptions</a>, <a href="https://tsdr.uspto.gov/#caseNumber=98414253&amp;caseSearchType=US_APPLICATION&amp;caseType=DEFAULT&amp;searchType=statusSearch">trademark applications</a>, and <a href="https://a856-cityrecord.nyc.gov/RequestDetail/20230406118">government websites</a>.</p>
<h3><strong> Lessons Learned</strong></h3>
<p>Government surveillance contractors should consider the policy implications, reputational risks, and waste of time and resources when attempting to hide from the public the full terms of their sales to law enforcement.</p>
<p>Cases like these, known as reverse-public records act lawsuits, are troubling because a well-resourced company <a href="https://www.rcfp.org/how-reverse-cpra-lawsuits-harm-the-publics-right-to-know/">can frustrate public access</a> by merely filing the case. Not every member of the public, researcher, or journalist can afford to litigate their public records request. Without a team of internal staff attorneys, it would have cost EFF tens of thousands of dollars to fight this lawsuit.</p>
<p><span>Luckily, in this case, EFF had the ability to fight back, and we will continue our surveillance transparency work. </span>That is why EFF required some attorneys’ fees to be part of the final settlement.</p>
</div></div></div><div class="field field--name-field-related-cases field--type-node-reference field--label-above"><div class="field__label">Related Cases:&nbsp;</div><div class="field__items"><div class="field__item even"><a href="/cases/pen-link-v-county-san-joaquin-sheriffs-office">Pen-Link v. County of San Joaquin Sheriff’s Office</a></div></div></div></description>
<pubDate>Tue, 19 Aug 2025 04:40:57 +0000</pubDate>
<guid isPermaLink="false">111002 at https://www.eff.org</guid>
<dc:creator>Mario Trujillo</dc:creator>
<dc:creator>Beryl Lipton</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/sls-banner-2.png" alt="8 squares with surveillance methods" type="image/png" length="85125" />
</item>
<item>
<title>Victory! Ninth Circuit Limits Intrusive DMCA Subpoenas</title>
<link>https://www.eff.org/deeplinks/2025/08/victory-ninth-circuit-limits-intrusive-dmca-subpoenas</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>The Ninth Circuit upheld an important limitation on Digital Millennium Copyright Act (DMCA) subpoenas that other federal courts have recognized for more than two decades. The DMCA, a misguided anti-piracy law passed in the late nineties, created a bevy of powerful tools, ostensibly to help copyright holders fight online infringement. Unfortunately, the DMCA’s powerful protections are ripe for abuse by “copyright trolls,” unscrupulous litigants who exploit the system at everyone else’s expense.</p>
<p>The DMCA’s “notice and takedown” regime is one of these tools. Section 512 of the DMCA creates “safe harbors” that protect service providers from liability, so long as they disable access to content when a copyright holder notifies them that the content is infringing, and fulfill some other requirements. This gives copyright holders a quick and easy way to censor allegedly infringing content without going to court. </p>
<p class="pull-quote">Unfortunately, the DMCA’s powerful protections are ripe for abuse by “copyright trolls”</p>
<p>Section 512(h) is ostensibly designed to facilitate this system, by giving rightsholders a fast and easy way of identifying anonymous infringers. Section 512(h) allows copyright holders to obtain a judicial subpoena to unmask the identities of allegedly infringing anonymous internet users, just by asking a court clerk to issue one, and attaching a copy of the infringement notice. In other words, they can wield the court’s power to override an internet user’s right to anonymous speech, without permission from a judge. It’s easy to see why these subpoenas are prone to misuse.</p>
<p>Internet service providers (ISPs)—the companies that provide an internet connection (e.g. broadband or fiber) to customers—are obvious targets for these subpoenas. Often, copyright holders know the Internet Protocol (IP) address of an alleged infringer, but not their name or contact information. Since ISPs assign IP addresses to customers, they can often identify the customer associated with one.</p>
<p>Fortunately, Section 512(h) has an important limitation that protects users. Over two decades ago, several federal appeals courts ruled that Section 512(h) subpoenas cannot be issued to ISPs. Now, in <em><a href="https://www.eff.org/document/re-subpoena-internet-subscribers-cox-communications-llc">In re Internet Subscribers of Cox Communications, LLC</a></em>, the Ninth Circuit agreed, as EFF urged it to in our <a href="https://www.eff.org/document/eff-amicus-brief-13">amicus brief</a>.</p>
<p>As the Ninth Circuit held:</p>
<blockquote><p>Because a § 512(a) service provider cannot remove or disable access to infringing content, it cannot receive a valid (c)(3)(A) notification, which is a prerequisite for a § 512(h) subpoena. We therefore conclude from the text of the DMCA that a § 512(h) subpoena cannot issue to a § 512(a) service provider as a matter of law.</p>
</blockquote>
<p>This decision preserves the understanding of Section 512(h) that internet users, websites, and copyright holders have shared for decades. As EFF explained to the court in its amicus brief:</p>
<blockquote><p>[This] ensures important procedural safeguards for internet users against a group of copyright holders who seek to monetize frequent litigation (or threats of litigation) by coercing settlements—copyright trolls. Affirming the district court and upholding the interpretation of the D.C. and Eighth Circuits will preserve this protection, while still allowing rightsholders the ability to find and sue infringers.</p>
</blockquote>
<p>EFF applauds this decision. And because three federal appeals courts have all ruled the same way on this question—and none have disagreed—ISPs all over the country can feel confident about protecting their customers’ privacy by simply throwing improper DMCA 512(h) subpoenas in the trash.</p>
</div></div></div></description>
<pubDate>Mon, 18 Aug 2025 21:01:15 +0000</pubDate>
<guid isPermaLink="false">110995 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/innovation">Creativity & Innovation</category>
<category domain="https://www.eff.org/issues/copyright-trolls">Copyright Trolls</category>
<dc:creator>Tori Noble</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/dmca-copyright-1_0.png" alt="Copyright" type="image/png" length="9133" />
</item>
<item>
<title>From Book Bans to Internet Bans: Wyoming Lets Parents Control the Whole State’s Access to the Internet</title>
<link>https://www.eff.org/deeplinks/2025/08/book-bans-internet-bans-wyoming-lets-parents-control-whole-states-access-internet</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>If you've read about the sudden appearance of age verification across the internet in the UK and thought it would never happen in the U.S., take note: many politicians want the same or even stricter laws. </span><b>As of July 1st, </b><a href="https://sdlegislature.gov/Session/Bill/25525/282508"><b>South Dakota</b></a><b> and </b><a href="https://wyoleg.gov/Legislation/2025/HB0043"><b>Wyoming</b></a><b> enacted laws requiring any website that hosts any sexual content to implement age verification measures.</b><span> These laws would potentially capture a broad range of non-pornographic content, including classic literature and art, and expose a wide range of platforms, of all sizes, to civil or criminal liability for not using age verification on every user. That includes social media networks like X, Reddit, and Discord; online retailers like Amazon and Barnes &amp; Noble; and streaming platforms like Netflix and Rumble—essentially, any site that allows user-generated or published content without gatekeeping access based on age.</span></p>
<p><span>These laws expand on the flawed logic from last month’s troubling Supreme Court decision, </span><i><span> </span></i><a href="https://www.eff.org/deeplinks/2025/06/todays-supreme-court-decision-age-verification-tramples-free-speech-and-undermines"><i><span>Free Speech Coalition v. Paxton</span></i></a><i><span>,</span></i><span> which gave Texas the green light to require age verification for sites where at least one-third (33.3%) of the content is sexual materials deemed “harmful to minors.” Wyoming and South Dakota seem to interpret this decision to give them license to require age verification—and potential legal liability—for any website that contains ANY image, video, or post that contains sexual content that could be interpreted as harmful to minors. Platforms or websites may be able to comply by implementing an “age gate” within certain sections of their sites where, for example, user-generated content is allowed, or at the point of entry to the entire site.</span></p>
<p><span>Although these laws are in effect, </span><a href="https://www.eff.org/deeplinks/2025/07/despite-supreme-court-setback-eff-fights-against-online-age-mandates"><span>we do not believe the Supreme Court’s decision in FSC v. Paxton gives these laws any constitutional legitimacy</span></a><span>. You do not need a law degree to see the difference between the Texas law—which targets sites where a substantial portion (one third) of content is “sexual material harmful to minors”—and these laws, which apply to any site that contains even a single instance of such material. In practice, it is the difference between burdening adults with age gates for websites that host “adult” content, and burdening the entire internet, including sites that allow user-generated content or published content.</span></p>
<p class="pull-quote"><span><span>The law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands</span></span></p>
<p><span>But </span><a href="https://pen.org/report/banned-in-the-usa-state-laws-supercharge-book-suppression-in-schools/#heading-10"><span>lawmakers, prosecutors, and activists in conservative states</span></a><span> have worked</span><a href="https://firstamendment.mtsu.edu/article/book-banning/"><span> for years</span></a><span> to aggressively expand the definition of “harmful to minors” and </span><a href="https://ula.org/content/2023/01/hb374/"><span>use other methods</span></a><span> to censor a broad swath of content: diverse educational materials, sex education resources, art, and even award-winning literature. Books like </span><a href="https://www.pbs.org/wgbh/americanexperience/features/banned-bluest-eye/"><i><span>The Bluest Eye</span></i><span> by Toni Morrison</span></a><span>, </span><a href="https://www.theatlantic.com/ideas/archive/2023/02/margaret-atwood-handmaids-tale-virginia-book-ban-library-removal/673013/"><i><span>The Handmaid’s Tale</span></i><span> by Margaret Atwood</span></a><span>, and </span><a href="https://www.pbs.org/wgbh/americanexperience/features/banned-and-tango-makes-three/#:~:text=And%20Tango%20Makes%20Three%20is,all%20different%20types%20of%20people."><i><span>And Tango Makes Three</span></i></a><span> have all been swept up in these crusades—not because of their overall content, but because of isolated scenes or references.</span></p>
<p><span>Wyoming’s law is also particularly extreme: rather than providing for enforcement by the Attorney General, HB0043 is a “bounty” law that deputizes any resident with a child to file civil lawsuits against websites they believe are in violation, effectively turning anyone into a potential content cop. There is no central agency, no regulatory oversight, and no clear standard. Instead, the law invites parents in Wyoming to take enforcement for the entire state—every resident, and everyone else's children—into their own hands by suing websites that contain a single example of objectionable content. Though most </span><a href="https://onlinesafety.orrick.com/"><span>other state age-verification laws</span></a><span> allow individuals to make reports to state Attorneys General who are responsible for enforcement, and </span><a href="https://www.kslegislature.gov/li_2024/b2023_24/measures/sb394/"><span>some include a private right of action</span></a><span> allowing parents or guardians to file civil claims for damages, the Wyoming law is similar to laws in </span><a href="https://action.freespeechcoalition.com/bill/louisiana-age-verification-bill-2022/"><span>Louisiana</span></a><span> and </span><a href="https://le.utah.gov/~2023/bills/static/SB0287.html"><span>Utah</span></a><span> that rely entirely on civil enforcement. </span></p>
<p><span>This is a textbook example of a “</span><a href="https://en.wikipedia.org/wiki/Heckler%27s_veto"><span>heckler’s veto</span></a><span>,” where a single person can unilaterally decide what content the public is allowed to access. However, it is clear that the Wyoming legislature explicitly designed the law this way </span><a href="https://www.techdirt.com/2023/08/03/by-making-its-porn-age-verification-law-a-bounty-law-utah-able-to-deflect-challenge-to-the-laws-validity/"><span>in a deliberate effort</span></a><span> to sidestep state enforcement and avoid an early constitutional court challenge, as </span><a href="https://truthout.org/articles/more-states-are-passing-laws-offering-vigilantes-cash-to-report-targeted-groups/?fbclid=IwY2xjawHxMUtleHRuA2FlbQIxMQABHWbMv_mitqx9bonc1Hyt2PeMqL8yGjYZDJqNcoz2dea4ntFKHq8CoTotCw_aem_V1doKc0O3rmpb-wIq0njTg"><span>many other bounty laws</span></a><span> targeting people who assist in abortions, drag performers, and trans people have done. The result? An open invitation from the Wyoming legislature to weaponize its citizens, and the courts, against platforms, big or small. Because when nearly anyone can sue any website over any content they deem unsafe for minors, the result isn’t safety. It’s censorship.</span></p>
<p class="pull-quote"><span><span>That also means your personal website or blog—if it includes any “sexual content harmful to minors”—is at risk. </span></span></p>
<p><span>Imagine a Wyomingite stumbling across an NSFW subreddit or a Tumblr fanfic blog and deciding it violates the law. If that resident were the parent of a minor, they could sue the platform, potentially forcing those websites to restrict or geo-block access to the entire state in order to avoid the cost and risk of litigation. And because there’s no threshold for how much “harmful” content a site must host, a single image or passage could be enough. That also means your personal website or blog—if it includes any “sexual content harmful to minors”—is at risk. </span></p>
<p><span>This law will likely be challenged and, eventually, halted by the courts. But given that the state cannot enforce it, those challenges will not come until a parent sues a website. Until then, its mere existence poses a serious threat to free speech online. Risk-averse platforms may over-correct, over-censor, or even restrict access to the state entirely just to avoid the possibility of a lawsuit, </span><a href="https://www.wyomingnews.com/news/local_news/porn-industry-supreme-court-weigh-in-as-wyoming-requires-age-verification-on-adult-sites/article_c46db147-7f48-40ab-ac80-f5143e951f66.html"><span>as Pornhub has already done</span></a><span>. And should sites impose age-verification schemes to comply, they will be a </span><a href="https://www.eff.org/document/age-verification-harms-users-all-ages"><span>speech and privacy disaster </span></a><span>for all state residents.</span></p>
<p><span>And let’s be clear: these state laws are not outliers. They are part of a growing political movement to redefine terms like “obscene,” “pornographic,” and “sexually explicit” as catchalls to restrict content for both adults and young people alike. What starts in one state and one lawsuit can quickly become a national blueprint. </span></p>
<p class="pull-quote"><span><span>If we don’t push back now, the internet as we know it could disappear behind a wall of fear and censorship.</span></span></p>
<p><span>Age-verification laws like these have relied on vague language, intimidating enforcement mechanisms, and public complacency to take root. Courts may eventually strike them down, but in the meantime, users, platforms, creators, and digital rights advocacy groups need to stay alert, speak up against these laws, and push back while they can. When governments expand censorship and surveillance offline, it's our job at EFF to protect your access to a free and open internet. Because if we don’t push back now, the internet as we know it—messy, diverse, and open—could disappear behind a wall of fear and censorship.</span></p>
<p><span>Ready </span><a href="https://supporters.eff.org/donate/neon"><span>to join us</span></a><span>? Urge your </span><a href="https://www.eff.org/deeplinks/2024/12/effs-2024-battle-against-online-age-verification-defending-youth-privacy-and-free"><span>state</span></a><span> lawmakers to </span><a href="https://www.congress.gov/state-legislature-websites"><span>reject harmful age-verification laws</span></a><span>. </span><a href="https://act.eff.org/action/congress-shouldn-t-control-what-we-re-allowed-to-read-online"><span>Call or email your representatives</span></a><span> to oppose KOSA and any other proposed federal age-checking mandates. Make your voice heard by talking to your friends and family about what we all stand to lose if the age-gated internet becomes a global reality. Because the fight for a free internet starts with us.</span></p>
</div></div></div></description>
<pubDate>Mon, 18 Aug 2025 20:00:21 +0000</pubDate>
<guid isPermaLink="false">110992 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/free-speech">Free Speech</category>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<dc:creator>Rindala Alajaji</dc:creator>
<dc:creator>Jason Kelley</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/ageverificationbanner.png" alt="Purple padlock with an 18+ only symbol and a combination lock requiring Day, Month, and Year. Surrounded by abstract purple dashed lines." type="image/png" length="1291379" />
</item>
<item>
<title>New Documents Show First Trump DOJ Worked With Congress to Amend Section 230</title>
<link>https://www.eff.org/deeplinks/2025/08/new-documents-show-first-trump-doj-worked-congress-amend-section-230</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>In the wake of rolling out its own proposal to significantly limit a key law protecting internet users’ speech in the summer of 2020, the Department of Justice under the first Trump administration actively worked with lawmakers to support further efforts to stifle online speech.</p>
<p>The new <a href="https://www.documentcloud.org/documents/26050394-05-30-25-oip-final-response/">documents</a>, disclosed in an EFF Freedom of Information Act (FOIA) lawsuit, show officials were <a href="https://www.documentcloud.org/documents/26050394-05-30-25-oip-final-response/#document/p339">talking</a> with Senate staffers <a href="https://www.documentcloud.org/documents/26050394-05-30-25-oip-final-response/#document/p29">working to pass</a> speech- and privacy-chilling bills like the <a href="https://www.eff.org/deeplinks/2022/02/its-back-senators-want-earn-it-bill-scan-all-online-messages">EARN IT Act</a> and <a href="https://www.eff.org/deeplinks/2021/03/even-changes-revised-pact-act-will-lead-more-online-censorship">PACT Act</a> (neither became law). DOJ officials also <a href="https://www.documentcloud.org/documents/26050394-05-30-25-oip-final-response/#document/p410">communicated</a> with an organization that sought to condition Section 230’s legal protections on websites using age-verification systems if they hosted sexual content.</p>
<p><a href="https://www.eff.org/issues/cda230">Section 230</a> protects users’ online speech by protecting the online intermediaries we all rely on to communicate on blogs, social media platforms, and educational and cultural platforms like Wikipedia and the Internet Archive. Section 230 embodies the principle that we should all be responsible for our own actions and statements online, but generally not those of others. The law prevents most civil suits against users or services that are based on what others say.</p>
<p>DOJ’s work to weaken Section 230 began before President Donald Trump issued an executive order targeting social media services in 2020, and officials in DOJ appeared to be <a href="https://www.eff.org/deeplinks/2025/02/first-trump-doj-assembled-tiger-team-rewrite-key-law-protecting-online-speech">blindsided</a> by the order. EFF was counsel to plaintiffs who <a href="https://www.eff.org/cases/rock-vote-v-trump">challenged the order</a>, and President Joe Biden later <a href="https://www.eff.org/deeplinks/2021/05/president-biden-revokes-unconstitutional-executive-order-retaliating-against">rescinded it</a>. EFF filed <a href="https://www.eff.org/cases/eff-v-omb-trump-230-executive-order-foia">two FOIA suits</a> seeking records about the executive order and the DOJ’s work to weaken Section 230.</p>
<p>The DOJ’s latest release provides more detail on a general theme that has been apparent for years: that the DOJ in 2020 <a href="https://www.eff.org/deeplinks/2022/11/documents-show-dojs-multi-pronged-effort-undermine-section-230">flexed its powers</a> to try to undermine or rewrite Section 230. The documents show that in addition to meeting with congressional staffers, DOJ was <a href="https://www.documentcloud.org/documents/26050394-05-30-25-oip-final-response/#document/p443">critical</a> of a proposed amendment to the EARN IT Act, with one official stating that it “completely undermines” the sponsors’ argument for rejecting DOJ’s proposal to exempt so-called “Bad Samaritan” websites from Section 230.</p>
<p>Further, DOJ <a href="https://www.documentcloud.org/documents/26050394-05-30-25-oip-final-response/#document/p988">reviewed and proposed edits</a> to a rulemaking petition to the Federal Communications Commission that tried to reinterpret Section 230. That effort never moved forward because the FCC <a href="https://www.eff.org/deeplinks/2020/06/trumps-executive-order-seeks-have-fcc-regulate-platforms-heres-why-it-wont-happen">lacked any legal authority</a> to reinterpret the law.</p>
<p>You can read the latest release of documents <a href="https://www.documentcloud.org/documents/26050394-05-30-25-oip-final-response/">here</a>, and all the documents released in this case are <a href="https://www.documentcloud.org/projects/220706-eff-foia-trump-doj-230-2016-2020/">here</a>.</p>
</div></div></div><div class="field field--name-field-related-cases field--type-node-reference field--label-above"><div class="field__label">Related Cases:&nbsp;</div><div class="field__items"><div class="field__item even"><a href="/cases/eff-v-omb-trump-230-executive-order-foia">EFF v. OMB (Trump 230 Executive Order FOIA)</a></div></div></div></description>
<pubDate>Fri, 15 Aug 2025 17:38:20 +0000</pubDate>
<guid isPermaLink="false">110988 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/cda230">Section 230</category>
<dc:creator>Aaron Mackey</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/section_230_banner.jpeg" type="image/jpeg" length="98394" />
</item>
<item>
<title>President Trump’s War on “Woke AI” Is a Civil Liberties Nightmare</title>
<link>https://www.eff.org/deeplinks/2025/08/president-trumps-war-woke-ai-civil-liberties-nightmare</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>The White House’s recently unveiled “<a href="https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf">AI Action Plan</a>” wages war on so-called “woke AI”—including large language models (LLMs) that provide information inconsistent with the administration’s views on climate change, gender, and other issues. It also targets measures designed to mitigate the generation of racially and gender-biased content and even <a href="https://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content">hate speech</a>. The reproduction of this bias is a pernicious problem that AI developers have struggled to solve for over a decade.</p>
<p>A new executive order called “<a href="https://www.whitehouse.gov/presidential-actions/2025/07/preventing-woke-ai-in-the-federal-government/">Preventing Woke AI in the Federal Government</a>,” released alongside the AI Action Plan, seeks to strong-arm AI companies into modifying their models to conform with the Trump Administration’s ideological agenda.</p>
<p>The executive order requires AI companies that receive federal contracts to prove that their LLMs are free from purported “ideological biases” like “diversity, equity, and inclusion.” This heavy-handed censorship will not make models more accurate or “trustworthy,” as the Trump Administration <a href="https://www.whitehouse.gov/fact-sheets/2025/07/fact-sheet-president-donald-j-trump-prevents-woke-ai-in-the-federal-government/">claims</a>, but is a blatant attempt to censor the development of LLMs and restrict them as a tool of expression and information access. While the First Amendment permits the government to choose to purchase only services that reflect government viewpoints, the government may not use that power to influence what services and information are available to the public. Lucrative government contracts can push commercial companies to implement features (or biases) that they wouldn't otherwise, and those often roll down to the user. Doing so would impact the <a href="https://apnews.com/article/ai-artificial-intelligence-poll-229b665d10d057441a69f56648b973e1">60 percent</a> of Americans who get information from LLMs, and it would force developers to roll back efforts to reduce biases—making the models much less accurate, and far more likely to cause harm, especially in the hands of the government. </p>
<h3><strong>Less Accuracy, More Bias and Discrimination</strong></h3>
<p>It’s no secret that AI models—including gen AI—tend to discriminate against racial and gender minorities. AI models use machine learning to identify and reproduce patterns in data that they are “trained” on. If the training data reflects biases against racial, ethnic, and gender minorities—which it often does—then the AI model will “learn” to discriminate against those groups. In other words, garbage in, garbage out. Models also often reflect the biases of the <a href="https://hai.stanford.edu/news/covert-racism-ai-how-language-models-are-reinforcing-outdated-stereotypes">people</a> who train, test, and evaluate them. </p>
<p>This is true across different types of AI. For example, “<a href="https://www.amnesty.org.uk/files/2025-02/Automated%20Racism%20Report%20-%20Amnesty%20International%20UK%20-%202025.pdf?VersionId=JqCcTODw37yAXyINmAY6uAzrKEWucFF7">predictive policing</a>” tools trained on arrest data that reflects overpolicing of black neighborhoods frequently recommend heightened levels of policing in those neighborhoods, often based on inaccurate predictions that crime will occur there. Generative AI models are also implicated. LLMs <a href="https://hai.stanford.edu/news/covert-racism-ai-how-language-models-are-reinforcing-outdated-stereotypes">already</a> recommend more criminal convictions, harsher sentences, and less prestigious jobs for people of color. Although people of color account for less than half of the U.S. prison population, <a href="https://www.bloomberg.com/graphics/2023-generative-ai-bias/">80 percent</a> of Stable Diffusion's AI-generated images of inmates have darker skin. Over 90 percent of AI-generated images of judges were men; in real life, 34 percent of judges are women. </p>
<p>These models aren’t just biased—they’re fundamentally incorrect. Race and gender aren’t objective criteria for deciding who gets hired or convicted of a crime. Those discriminatory decisions reflected trends in the training data that could be caused by bias or chance—not some “objective” reality. Setting fairness aside, biased models are just worse models: they make more mistakes, more often. Efforts to reduce bias-induced errors will ultimately make models more accurate, not less. </p>
<h3><strong>Biased LLMs Cause Serious Harm—Especially in the Hands of the Government</strong></h3>
<p>But inaccuracy is far from the only problem. When government agencies start using biased AI to make decisions, real people suffer. Government officials routinely make decisions that impact people’s personal freedom and access to financial resources, healthcare, housing, and more. The White House’s AI Action Plan calls for a massive increase in agencies’ use of LLMs and other AI—while all but requiring the use of biased models that automate systemic, historical injustice. Using AI simply to entrench the way things have always been done squanders the promise of this new technology.</p>
<p>We need strong safeguards to prevent government agencies from procuring biased, harmful AI tools. In a series of <a href="https://www.federalregister.gov/documents/2025/01/28/2025-01901/initial-rescissions-of-harmful-executive-orders-and-actions">executive</a> <a href="https://www.federalregister.gov/documents/2025/01/31/2025-02172/removing-barriers-to-american-leadership-in-artificial-intelligence">orders</a>, as well as his AI Action Plan, the Trump Administration has rolled back the already-feeble Biden-era AI safeguards. This makes AI-enabled civil rights abuses far more likely, putting everyone’s rights at risk. </p>
<p>And the Administration could easily exploit the new rules to pressure companies to make publicly available models worse, too. Corporations like healthcare companies and landlords increasingly use AI to make high-impact decisions about people, so more biased commercial models would also cause harm. </p>
<p>We have <a href="https://www.eff.org/deeplinks/2024/12/fighting-automated-oppression-2024-review-0?language=ja">argued</a> <a href="https://www.eff.org/deeplinks/2025/03/eff-nsf-ai-action-plan-must-put-people-first">against</a> using machine learning to make <a href="https://sls.eff.org/technologies/predictive-policing">predictive</a> <a href="https://www.eff.org/deeplinks/2023/10/cities-should-act-now-ban-predictive-policingand-stop-using-shotspotter-too">policing</a> <a href="https://www.eff.org/deeplinks/2024/12/ai-and-policing-2024-year-review">decisions</a> or other <a href="https://www.eff.org/deeplinks/2025/06/nyc-lets-ai-gamble-child-welfare?language=ja">punitive</a> <a href="https://www.eff.org/ja/deeplinks/2024/09/eff-140-other-organizations-call-end-ai-use-immigration-decisions?language=ja">judgments</a> for just these reasons, and will continue to protect your right not to be subject to biased government determinations influenced by machine learning.</p>
</div></div></div></description>
<pubDate>Thu, 14 Aug 2025 23:46:59 +0000</pubDate>
<guid isPermaLink="false">110986 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/ai">Artificial Intelligence &amp; Machine Learning</category>
<category domain="https://www.eff.org/issues/innovation">Creativity &amp; Innovation</category>
<category domain="https://www.eff.org/issues/free-speech">Free Speech</category>
<dc:creator>Tori Noble</dc:creator>
<dc:creator>Kit Walsh</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/artificial-intelligence-sm-1b_0.png" type="image/png" length="191842" />
</item>
<item>
<title>🫥 Spotify Face Scans Are Just the Beginning | EFFector 37.10</title>
<link>https://www.eff.org/deeplinks/2025/08/spotify-face-scans-are-just-beginning-effector-3710</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>Catching up on your backlog of digital rights news has never been easier! EFF has a one-stop-shop to keep you up to date on the latest in the fight against censorship and surveillance—our <a href="https://www.eff.org/effector/37/10">EFFector newsletter</a>.</p>
<p>This time we're covering <span>an act of government intimidation in Florida when the state <a href="https://www.eff.org/deeplinks/2025/07/you-went-drag-show-now-state-florida-wants-your-name?utm_source=effector">subpoenaed a venue for surveillance video</a> after it hosted an LGBTQ+ pride event</span>, <span>calling out data brokers in California for <a href="https://www.eff.org/deeplinks/2025/08/data-brokers-are-ignoring-privacy-law-we-deserve-better?utm_source=effector">failing to respond to requests</a> for personal data—even though responses are required by state law</span>, and explaining why Canada's Bill C-2 would <a href="https://www.eff.org/deeplinks/2025/07/canadas-bill-c-2-opens-floodgates-us-surveillance?utm_source=effector">open the floodgates</a> for U.S. surveillance.</p>
<p>Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Speech and Privacy Activist Paige Collings covers the harms of age verification measures that are being passed across the globe. Listen now on <a href="https://youtu.be/-1S2NSGAemQ">YouTube</a> or the <a href="https://archive.org/details/37.10">Internet Archive</a>.</p>
<p class="take-action"><a href="https://youtu.be/-1S2NSGAemQ">Listen TO EFFECTOR</a></p>
<p class="take-action take-explainer"><span>EFFECTOR 37.10 - Spotify Face Scans Are Just the Beginning</span></p>
<p><span>Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. </span></p>
<p><span>Thank you to the supporters around the world who make our work possible! If you're not a member yet, <a href="https://eff.org/effect">join EFF today</a> to help us fight for a brighter digital future.</span></p>
</div></div></div></description>
<pubDate>Wed, 13 Aug 2025 18:43:31 +0000</pubDate>
<guid isPermaLink="false">110976 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/effector_banner_4.jpeg" type="image/jpeg" length="130379" />
</item>
<item>
<title>Torture Victim’s Landmark Hacking Lawsuit Against Spyware Maker Can Proceed, Judge Rules</title>
<link>https://www.eff.org/press/releases/torture-victims-landmark-hacking-lawsuit-against-spyware-maker-can-proceed-judge</link>
<description><div class="field field--name-field-pr-subhead field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">EFF is Co-Counsel in Case Detailing Harms Caused by Export of U.S. Cybersurveillance Technology and Training to Repressive Regimes</div></div></div><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-contrast="auto">PORTLAND, OR – Saudi human rights activist Loujain Alhathloul’s groundbreaking lawsuit concerning spying software that enabled her imprisonment and torture can advance, a federal judge ruled in <a href="https://www.eff.org/document/alhathloul-v-darkmatter-opinion-and-order-motion-dismiss" target="_blank" rel="noopener noreferrer">an opinion unsealed Tuesday</a>.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">U.S. District Judge Karin J. Immergut of the District of Oregon ruled that Alhathloul’s lawsuit against DarkMatter Group and three of its former executives can proceed on its claims under the Computer Fraud and Abuse Act – the first time that a human rights case like this has gone so far under this law. The judge dismissed other claims made under the Alien Tort Statute.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Alhathloul is represented in the case by the Electronic Frontier Foundation (EFF), the </span><a href="https://cja.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Center for Justice and Accountability</span></a><span data-contrast="auto">, </span><a href="https://foleyhoag.com/home/" target="_blank" rel="noopener noreferrer"><span>Foley Hoag</span></a><span data-contrast="auto">, and </span><a href="https://tonkon.com/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Tonkon Torp LLP</span></a><span data-contrast="auto">.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="none">“This important ruling is the first to let a lawsuit filed by the victim of a foreign government’s human rights abuses, enabled by U.S. spyware used to hack the victim’s devices, proceed in our federal courts,” said EFF Civil Liberties Director David Greene. “This case is particularly important at a time when transnational human rights abuses are making daily headlines, and we are eager to proceed with proving our case.”</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">“Transparency in such times and circumstances is a cornerstone that enacts integrity and drives accountability as it offers the necessary information to understand our reality and act upon it. The latter presents a roadmap to a safer world,” Alhathloul said. “Today’s judge’s order has become a public court document only to reinforce those rooted concepts of transparency that will one day lead to accountability.”</span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="auto">Alhathloul, 36, a nominee for the 2019 and 2020 Nobel Peace Prize, has been a powerful advocate for women’s rights in Saudi Arabia for more than a decade. She was at the forefront of the public campaign advocating for women’s right to drive in Saudi Arabia and has been a vocal critic of the country’s male guardianship system. </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="auto">The lawsuit alleges that defendants DarkMatter Group, Marc Baier, Ryan Adams, and Daniel Gericke were hired by the UAE to target Alhathloul and other perceived dissidents as part of the UAE’s broader cooperation with Saudi Arabia. According to the lawsuit, the defendants used U.S. cybersurveillance technology, along with their U.S. intelligence training, to install spyware on Alhathloul’s iPhone and extract data from it, including while she was in the United States and communicating with U.S. contacts. After the hack, Alhathloul was arbitrarily detained by the UAE security services and forcibly rendered to Saudi Arabia, where she was imprisoned and tortured. She is no longer in prison, but she is currently subject to an illegal travel ban and unable to leave Saudi Arabia.</span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="auto">The case was filed in December 2021; Judge Immergut dismissed it in March 2023 with leave to amend, and the amended complaint was filed in May 2023. </span><span data-ccp-props="240}"> </span></p>
<p><span data-contrast="auto">“This Court concludes that Plaintiff has shown that her claims arise out of Defendants’ forum-related contacts,” Judge Immergut wrote in her opinion. “Defendants’ forum-related contacts include (1) their alleged tortious exfiltration of data from Plaintiff’s iPhone while she was in the U.S. and (2) their acquisition, use, and enhancement of U.S.-created exploits from U.S. companies to create the Karma hacking tool used to accomplish their tortious conduct. Plaintiff’s CFAA claims arise out of these U.S. contacts.”</span><span data-ccp-props="240}"> </span></p>
<p><b><span data-contrast="auto">For the judge’s opinion: </span></b><span data-ccp-props="{}"> <a href="https://www.eff.org/document/alhathloul-v-darkmatter-opinion-and-order-motion-dismiss" target="_blank" rel="noopener noreferrer">https://www.eff.org/document/alhathloul-v-darkmatter-opinion-and-order-motion-dismiss</a></span></p>
<p><b><span data-contrast="auto">For more about the case:</span></b> <a href="https://www.eff.org/cases/alhathloul-v-darkmatter-group" target="_blank" rel="noopener noreferrer"><span data-contrast="none">https://www.eff.org/cases/alhathloul-v-darkmatter-group</span></a><span data-ccp-props="{}"> </span></p>
</div></div></div><div class="field field--name-field-contact field--type-node-reference field--label-above"><div class="field__label">Contact:&nbsp;</div><div class="field__items"><div class="field__item even"><div class="ds-1col node node--profile view-mode-node_embed node--node-embed node--profile--node-embed clearfix">
<div class="">
<div class="field field--name-field-profile-first-name field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">David</div></div></div><div class="field field--name-field-profile-last-name field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">Greene</div></div></div><div class="field field--name-field-profile-title field--type-text field--label-hidden"><div class="field__items"><div class="field__item even">Civil Liberties Director</div></div></div><div class="field field--name-field-profile-email field--type-email field--label-hidden"><div class="field__items"><div class="field__item even"><a href="mailto:davidg@eff.org">davidg@eff.org</a></div></div></div> </div>
</div>
</div></div></div></description>
<pubDate>Wed, 13 Aug 2025 15:38:54 +0000</pubDate>
<guid isPermaLink="false">110978 at https://www.eff.org</guid>
<dc:creator>Josh Richman</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/og-laptop_0.png" type="image/png" length="124026" />
</item>
<item>
<title>Podcast Episode: Separating AI Hope from AI Hype</title>
<link>https://www.eff.org/deeplinks/2025/08/podcast-episode-separating-ai-hope-ai-hype</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-ccp-props="279}">If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world unless we let it.</span></p>
<p><div class="mytube" style="width: 100%;">
<div class="mytubetrigger" tabindex="0">
<img src="https://www.eff.org/sites/all/modules/custom/mytube/play.png" class="mytubeplay" alt="play" style="top: -4px; left: 20px;" />
<div hidden class="mytubeembedcode">%3Ciframe%20height%3D%2252px%22%20width%3D%22100%25%22%20frameborder%3D%22no%22%20scrolling%3D%22no%22%20seamless%3D%22%22%20src%3D%22https%3A%2F%2Fplayer.simplecast.com%2F49181a0e-f8b4-4b2a-ae07-f087ecea2ddd%3Fdark%3Dtrue%26amp%3Bcolor%3D000000%22%20allow%3D%22autoplay%22%3E%3C%2Fiframe%3E</div>
</div>
<div class="mytubetext">
<span><a href="https://www.eff.org/deeplinks/2008/02/embedded-video-and-your-privacy" rel="noreferrer" target="_blank">Privacy info.</a></span>
<span>This embed will serve content from <em><a rel="nofollow" href="https://player.simplecast.com/49181a0e-f8b4-4b2a-ae07-f087ecea2ddd?dark=true&amp;color=000000">simplecast.com</a></em><br /></span>
</div>
</div>
</p><p><span data-ccp-props="279}"> <i><a href="https://open.spotify.com/show/4UAplFpPDqE4hWlwsjplgt" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/spotify-podcast-badge-blk-wht-330x80.png" alt="Listen on Spotify Podcasts Badge" width="198" height="48" /></a> <a href="https://podcasts.apple.com/us/podcast/effs-how-to-fix-the-internet/id1539719568" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/applebadge2.png" alt="Listen on Apple Podcasts Badge" width="195" height="47" /></a> <a href="https://music.amazon.ca/podcasts/bf81f00f-11e1-431f-918d-374ab6ad07cc/how-to-fix-the-internet?ref=dmm_art_us_HTFTI" target="_blank" rel="noopener noreferrer"><img height="47" width="195" src="https://www.eff.org/files/styles/kittens_types_wysiwyg_small/public/2024/02/15/us_listenon_amazonmusic_button_charcoal.png?itok=YFXPE4Ii" /></a> <a href="https://feeds.eff.org/howtofixtheinternet" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/subscriberss.png" alt="Subscribe via RSS badge" width="194" height="50" /></a></i></span></p>
<p><span data-contrast="none">(You can also find this episode on the <a href="https://archive.org/details/htfti-s6e8-arvind-narayanan-vfinal" target="_blank" rel="noopener noreferrer">Internet Archive</a> and on <a href="https://youtu.be/fD0moQVXuC4?si=sPPNyZE83jTL4aS_" target="_blank" rel="noopener noreferrer">YouTube</a>.)</span><span data-ccp-props="279}"> </span></p>
<p><span data-ccp-props="279}"> Arvind Narayanan studies the societal impact of digital technologies with a focus on how AI does and doesn’t work, and what it can and can’t do. He believes that if we set aside all the hype, and set the right guardrails around AI’s training and use, it has the potential to be a profoundly empowering and liberating technology. Narayanan joins EFF’s Cindy Cohn and Jason Kelley to discuss how we get to a world in which AI can improve aspects of our lives from education to transportation—if we make some system improvements first—and how AI will likely work in ways that we barely notice but that help us grow and thrive.</span><span data-ccp-props="279}"> </span></p>
<p><span data-contrast="none">In this episode you’ll learn about:</span></p>
<ul>
<li><span data-ccp-props="0}">What it means to be a “techno-optimist” (and NOT the venture capitalist kind)</span></li>
<li><span data-ccp-props="0}">Why we can’t rely on predictive algorithms to make decisions in criminal justice, hiring, lending, and other crucial aspects of people’s lives</span></li>
<li><span data-ccp-props="0}">How large-scale, long-term, controlled studies are needed to determine whether a specific AI application actually lives up to its accuracy promises</span></li>
<li><span data-ccp-props="0}">Why “cheapfakes” tend to be more (or just as) effective than deepfakes in shoring up political support</span></li>
<li><span data-ccp-props="0}">How AI is and isn’t akin to the Industrial Revolution, the advent of electricity, and the development of the assembly line</span><span data-ccp-props="0}"> </span></li>
</ul>
<p><a href="https://www.cs.princeton.edu/~arvindn/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Arvind Narayanan</span></a><span data-contrast="none"> is professor of computer science and director of the </span><a href="https://citp.princeton.edu/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Center for Information Technology Policy</span></a><span data-contrast="none"> at Princeton University. Along with </span><a href="https://www.cs.princeton.edu/~sayashk/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Sayash Kapoor</span></a><span data-contrast="none">, he publishes the </span><a href="https://www.aisnakeoil.com/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">AI Snake Oil</span></a><span data-contrast="none"> newsletter, followed by tens of thousands of researchers, policy makers, journalists, and AI enthusiasts; they also have authored “</span><a href="https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil?srsltid=AfmBOopTbtjML4acN_Z5aB_gQTB6abdOHJefFJ2DbrOXkW7Iu2gcEF7-" target="_blank" rel="noopener noreferrer"><span data-contrast="none">AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference</span></a><span data-contrast="none">” (2024, Princeton University Press). 
He has studied </span><a href="https://knightcolumbia.org/content/understanding-social-media-recommendation-algorithms" target="_blank" rel="noopener noreferrer"><span data-contrast="none">algorithmic amplification on social media</span></a><span data-contrast="none"> as a visiting senior researcher at Columbia University's </span><a href="https://knightcolumbia.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Knight First Amendment Institute</span></a><span data-contrast="auto">; co-authored an online </span><a href="https://fairmlbook.org/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">textbook on fairness and machine learning</span></a><span data-contrast="auto">; and led Princeton's </span><a href="https://www.cs.princeton.edu/~arvindn/publications/webtap-chapter.pdf" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Web Transparency and Accountability Project</span></a><span data-contrast="auto">, uncovering how companies collect and use our personal information.</span><span data-ccp-props="0}"> </span></p>
<p><span data-contrast="none">Resources:</span></p>
<ul>
<li><a href="https://www.wired.com/story/generative-ai-global-elections/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">The WIRED AI Elections Project</span></a></li>
<li><span data-ccp-props="0}">Axios: “</span><a href="https://www.axios.com/2025/05/28/ai-jobs-white-collar-unemployment-anthropic" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Behind the Curtain: A white-collar bloodbath</span></a><span data-contrast="none">” (May 28, 2025)</span></li>
<li><span data-ccp-props="0}">Bloomberg: “</span><a href="https://www.bloomberg.com/news/articles/2025-05-08/klarna-turns-from-ai-to-real-person-customer-service" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Klarna Slows AI-Driven Job Cuts With Call for Real People</span></a><span data-contrast="none">” (May 8, 2025)</span></li>
<li><span data-ccp-props="0}">Ars Technica: “</span><a href="https://arstechnica.com/tech-policy/2024/02/air-canada-must-honor-refund-policy-invented-by-airlines-chatbot/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Air Canada must honor refund policy invented by airline’s chatbot</span></a><span data-contrast="none">” (Feb. 16, 2024)</span><span data-ccp-props="0}"> </span></li>
</ul>
<p><span data-contrast="auto">What do you think of “How to Fix the Internet?” </span><a href="https://forms.office.com/pages/responsepage.aspx?id=qalRy_Njp0iTdV3Gz61yuZZXWhXf9ZdMjzPzrVjvr6VUNUlHSUtLM1lLMUNLWE42QzBWWDhXU1ZEQy4u&amp;web=1&amp;wdLOR=c90ABD667-F98F-9748-BAA4-CA50122F0423" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Share your feedback here</span></a><span data-contrast="auto">.</span></p>
<h3><span data-ccp-props="259}">Transcript</span></h3>
<p><strong>ARVIND NARAYANAN:</strong> The people who believe that super intelligence is coming very quickly tend to think of most tasks that we wanna do in the real world as being analogous to chess, where it was the case that initially chessbots were not very good. At some point, they reached human parity. And then very quickly after that, simply by improving the hardware and then later on by improving the algorithms, including by using machine learning, they're vastly, vastly superhuman.<br />We don't think most tasks are like that. This is true when you talk about tasks that are integrated into the real world, you know, require common sense, require a kind of understanding of a fuzzy task description. It's not even clear when you've done well and when you've not done well. <br />We think that human performance is not limited by our biology. It's limited by our state of knowledge of the world, for instance. So the reason we're not better doctors is not because we're not computing fast enough, it's just that medical research has only given us so much knowledge about how the human body works and you know, how drugs work and so forth.<br />And the other is you've just hit the ceiling of performance. The reason people are not necessarily better writers is that it's not even clear what it means to be a better writer. It's not as if there's gonna be a magic piece of text, you know, that's gonna, like persuade you of something that you never wanted to believe, for instance, right?<br />We don't think that sort of thing is even possible. And so those are two reasons why in the vast majority of tasks, we think AI is not going to become better or at least much better than human professionals.</p>
<p><strong>CINDY COHN:</strong> That's Arvind Narayanan explaining why AIs cannot simply replace humans for most of what we do. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.</p>
<p><strong>JASON KELLEY:</strong> And I'm Jason Kelley, EFF’s Activism Director. This is our podcast series, How to Fix the Internet.</p>
<p><strong>CINDY COHN:</strong> On this show, we try to get away from the dystopian tech doomsayers – and offer space to envision a more hopeful and positive digital future that we can all work towards.</p>
<p><strong>JASON KELLEY:</strong> And our guest is one of the most level-headed and reassuring voices in tech.</p>
<p><strong>CINDY COHN:</strong> Arvind Narayanan is a professor of computer science at Princeton and the director of the Center for Information Technology Policy. He’s also the co-author of a terrific newsletter called AI Snake Oil – which has also become a book – where he and his colleague Sayash Kapoor debunk the hype around AI and offer a clear-eyed view of both its risks and its benefits.<br />He is also a self-described “techno-optimist”, but he means that in a very particular way – so we started off with what that term means to him.</p>
<p><strong>ARVIND NARAYANAN:</strong> I think there are multiple kinds of techno-optimism. There's the Marc Andreessen kind where, you know, let the tech companies do what they wanna do and everything will work out. I'm not that kind of techno-optimist. My kind of techno-optimism is all about the belief that we actually need folks to think about what could go wrong and get ahead of that so that we can then realize what our positive future is. <br />So for me, you know, AI can be a profoundly empowering and liberating technology. In fact, going back to my own childhood, this is a story that I tell sometimes, I was growing up in India and, frankly, the education system kind of sucked. My geography teacher thought India was in the Southern Hemisphere. That's a true story.</p>
<p><strong>CINDY COHN:</strong> Oh my God. Whoops.</p>
<p><strong>ARVIND NARAYANAN:</strong> And, you know, there weren't any great libraries nearby. And so a lot of what I knew I not only had to teach myself, but it was also hard to access reliable, good sources of information. We had a lot of books of course, but I remember when my parents saved up for a whole year and bought me a computer that had a CD-ROM encyclopedia on it.<br />That was a completely life-changing moment for me. Right. So that was the first time I could get close to this idea of having all information at our fingertips. That was even before I kind of had internet access. So that was a very powerful moment. And I saw that as a lesson in information technology having the ability to level the playing field across different countries. And that was part of why I decided to get into computer science. <br />Of course I later realized that my worldview was a little bit oversimplified. Tech is not automatically a force for good. It takes a lot of effort and agency to ensure that it will be that way. And so that led to my research interest in the societal aspects of technology as opposed to more of the tech itself.<br />Anyway, all of that is a long-winded way of saying I see a lot of that same potential in AI that existed in the way that internet access, if done right, has been bringing a kind of liberatory potential to so many in the world who might not have the same kinds of access that we do here in the western world with our institutions and so forth.</p>
<p><strong>CINDY COHN:</strong> So let's drill down a second on this because I really love this image. You know, I was a little girl growing up in Iowa and seeing the internet made me feel the same way. Like I could have access to all the same information that people who were in the big cities and had the fancy schools could have access to.<br />So, you know, from, I think, all around the world, there's this experience, and depending on how old you are, it may be that you discovered Wikipedia as opposed to a CD-ROM encyclopedia, but it's that same moment, and I think that that is the promise that we have to hang on to. <br />So what would an educational world look like? You know, if you're a student or a teacher, if we are getting AI right?</p>
<p><strong>ARVIND NARAYANAN:</strong> Yeah, for sure. So let me start with my own experience. I kind of actually use AI a lot in the way that I learn new topics. This is something I was surprised to find myself doing given the well-known limitations of these chatbots around accuracy, but it turned out that there are relatively easy ways to work around those limitations.<br />Uh, one example of a user adaptation is to always be in a critical mode, where you know that out of 10 things the AI is telling you, one is probably going to be wrong. And so being in that skeptical frame of mind, actually in my view, enhances learning. And that's the right frame of mind to be in anytime you're learning anything, I think. So that's one kind of adaptation. <br />But there are also technology adaptations, right? Just the simplest example: If you ask AI to be in Socratic mode, for instance, in a conversation, uh, a chatbot will take on a much more appropriate role for helping the user learn, as opposed to one where students might ask for answers to homework questions and, you know, end up taking shortcuts, and it actually limits their critical thinking and their ability to learn and grow, right? So that's one simple example to make the point that a lot of this is not about AI itself, but how we use AI. <br />More broadly, in terms of a vision for what integrating this into the education system could look like, I do think there is a lot of promise in personalization. Again, this has been a target of a lot of overselling, that AI can be a personalized tutor to every individual. And I think there was a science fiction story that was intended as a warning sign, but that a lot of people in the AI industry have taken as a manual or a vision for what this should look like. 
<br />But even in my experiences with my own kids, right, they're five and three, even little things like, you know, I was, uh, talking to my daughter about fractions the other day, and I wanted to help her visualize fractions. And I asked Claude to make a little game that would help do that. And within, you know, 30 seconds or a minute or whatever, it made a little game where it would generate a random fraction, like three over five, and then ask the child to move a slider. And then it will divide the line segment into five parts, highlight three, show how close the child came to the correct answer, and, you know, give feedback and that sort of thing, and you can kind of instantly create that, right? <br />So this convinces me that there is in fact a lot of potential in AI and personalization: if a particular child is struggling with a particular thing, a teacher can create an app on the spot and have the child play with it for 10 minutes and then throw it away, never have to use it again. But that can actually be meaningfully helpful.</p>
<p><strong>JASON KELLEY:</strong> This kind of AI and education conversation is really close to my heart because I have a good friend who runs a school, and as soon as AI sort of burst onto the scene he was so excited for exactly the reasons you're talking about. But at the same time, a lot of schools immediately put in place sort of like, you know, ChatGPT bans and things like that.<br />And we've talked a little bit on EFF’s Deeplinks blog about how, you know, that's probably an overstep in terms of like, people need to know how to use this, whether they're students or not. They need to understand what the capabilities are so they can have the sorts of uses of it that adapt to them rather than just sort of like immediately trying to do their homework.<br />So do you think schools, you know, given the way you see it, are well positioned to get to the point you're describing? I mean, how, like, that seems like a pretty far future where a lot of teachers know how AI works or school systems understand it. Like how do we actually do the thing you're describing, because most teachers are overwhelmed as it is?</p>
<p><strong>ARVIND NARAYANAN:</strong> Exactly. That's the root of the problem. I think there need to be, you know, structural changes. There needs to be more funding. And I think there also needs to be more of an awareness so that there's less of this kind of adversarial approach. Uh, I think about, you know, the levers for change where I can play a little part. I can't change the school funding situation, but just as one simple example, I think the way that researchers are looking at this right now is maybe not the most helpful and can be reframed in a way that is much more actionable to teachers and others. So there are a lot of studies that look at the impact of AI in the classroom that, to me, are the equivalent of, is eating food good for you? It’s addressing the question at the wrong level of abstraction.</p>
<p><strong>JASON KELLEY:</strong> Yeah.</p>
<p><strong>ARVIND NARAYANAN:</strong> You can't answer the question at that high level because you haven't specified any of the details that actually matter. Whether food is good for you depends entirely on what food it is, and if the way you studied that was to go into the grocery store and sample the first 15 items that you saw, you're measuring properties of your arbitrary sample instead of the underlying phenomena that you wanna study.<br />And so I think researchers have to drill down much deeper into what AI for education actually looks like, right? If you ask the question at the level of, are chatbots helping or hurting students, you're gonna end up with nonsensical answers. So I think the research can change, and then other structural changes need to happen.</p>
<p><strong>CINDY COHN:</strong> I heard you on a podcast talk about AI and make kind of a similar point, which is that, you know, what if we were deciding whether vehicles were good or bad, right? Everyone could understand that that's way too broad a characterization for a general purpose kind of device to come to any reasonable conclusion. So you have to look at the difference between, you know, a truck, a car, a taxi, and various other kinds of vehicles in order to do that. And I think you do a good job of that in your book, at least in kind of starting to give us some categories, and the one that we're most focused on at EFF is the difference between predictive technologies and other kinds of AI. Because I think like you, we have identified these kinds of predictive technologies as being kind of the most dangerous ones we see right now in actual use. Am I right about that?</p>
<p><strong>ARVIND NARAYANAN:</strong> That's our view in the book, yes, in terms of the kinds of AI that have the biggest consequences in people's lives, and also where the consequences are very often quite harmful. So this is AI in the criminal justice system, for instance, used to predict who might fail to show up to court or who might commit a crime, and then kind of prejudge them on that basis, right? And deny them their freedom on the basis of something they're predicted to do in the future, which in turn is based on the behavior of other similar defendants in the past, right? So there are two questions here, a technical question and a moral one.<br />The technical question is, how accurate can you get? And it turns out when we review the evidence, not very accurate. There's a long section in our book at the end of which we conclude that one legitimate way to look at it is that all that these systems are predicting is the more prior arrests you have, the more likely you are to be arrested in the future.<br />So that's the technical aspect, and that's because, you know, it's just not known who is going to commit a crime. Yes, some crimes are premeditated, but a lot of the others are spur of the moment or depend on random things that might happen in the future.<br />It's something we all recognize intuitively, but when the words AI or machine learning are used, some of these decision makers seem to suspend common sense and somehow believe that the future is actually accurately predictable.</p>
<p><strong>CINDY COHN:</strong> The other piece that I've seen you talk about and others talk about is that the only data you have is what the cops actually do, and that doesn't tell you about crime, it tells you about what the cops do. So my friends at the Human Rights Data Analysis Group called it predicting the police rather than predicting crime.<br />And we know there's a big difference between the crime that the cops respond to and the general crime. So it's gonna look like the people who commit crimes are the people who always commit crimes, when it's just the subset that the police are able to focus on, and we know there's a lot of bias baked into that as well.<br />So it's not just inside the data, it's outside the data that you have to think about in terms of these prediction algorithms and what they're capturing and what they're not. Is that fair?</p>
<p><strong>ARVIND NARAYANAN:</strong> That's totally, yeah, that's exactly right. And more broadly, you know, beyond the criminal justice system, these predictive algorithms are also used in hiring, for instance, and, you know, it's not the same morally problematic kind of use where you're denying someone their freedom. But a lot of the same pitfalls apply.<br />I think one way in which we try to capture this in the book is that AI snake oil, or broken AI, as we sometimes call it, is appealing to broken institutions. So the reason that AI is so appealing to hiring managers is that yes, it is true that something is broken with the way we hire today. Companies are getting hundreds of applications, maybe a thousand for each open position. They're not able to manually go through all of them. So they want to try to automate the process. But that's not actually addressing what is broken about the system, and when they're doing that, the applicants are also using AI to increase the number of positions they can apply to. And so it's only escalating the arms race, right?<br />I think the reason this is broken is that we fundamentally don't have good ways of knowing who's going to be a good fit for which position, and so by pretending that we can predict it with AI, we're just elevating this elaborate random number generator into this moral arbiter. And there can be moral consequences of this as well.<br />Like, obviously, you know, someone who deserved a job might be denied that job, but it actually gets amplified when you think about some of these AI recruitment vendors providing their algorithm to 10 different companies. And so every company that someone applies to is judging someone in the same way.<br />So in our view, the only way to get away from this is to make the necessary organizational reforms to these broken processes. 
Just as one example, in software, for instance, many companies will offer people, students especially, internships, and use that to have a more in-depth assessment of a candidate. I'm not saying that necessarily works for every industry or every level of seniority, but we have to actually go deeper and emphasize the human element instead of trying to be more superficial and automated with AI.</p>
<p><strong>JASON KELLEY:</strong> One of the themes that you bring up in the newsletter and the book is AI evaluation. Let's say you have one of these companies with the hiring tool: why is it so hard to evaluate the sort of like, effectiveness of these AI models or the data behind them? I know that it can be, you know, difficult if you don't have access to it, but even if you do, how do we figure out the shortcomings that these tools actually have?</p>
<p><strong>ARVIND NARAYANAN:</strong> There are a few big limitations here. Let's say we put aside the data access question, the company itself wants to figure out how accurate these decisions are.</p>
<p><strong>JASON KELLEY:</strong> Hopefully!</p>
<p><strong>ARVIND NARAYANAN:</strong> Yeah. Um, yeah, exactly. They often don't wanna know, but even if you do wanna know, in terms of the technical aspect of evaluating this, it's really the same problem as the medical system has in figuring out whether a drug works or not.<br />And we know how hard that is. That actually requires a randomized controlled trial. It actually requires experimenting on people, which in turn introduces its own ethical quandaries. So you need oversight for the ethics of it, but then you have to recruit hundreds, sometimes thousands of people, follow them for a period of several years, and figure out whether the treatment group, for which you either, you know, gave the drug or, in the hiring case, implemented your algorithm, has a different outcome on average from the control group, for whom you either gave a placebo or, in the hiring case, used the traditional hiring procedure.<br />Right. So that's actually what it takes. And, you know, there's just no incentive in most companies to do this because obviously they don't value knowledge for its own sake. And the ROI is just not worth it. The effort that they're gonna put into this kind of evaluation is not going to, uh, allow them to capture the value out of it.<br />It brings knowledge to the public, to society at large. So what do we do here? Right? So usually in cases like this, the government is supposed to step in and use public funding to do this kind of research. But I think we're pretty far from having a cultural understanding that this is the sort of thing that's necessary.<br />And just like the medical community has gotten used to doing this, we need to do this whenever we care about the outcomes, right? Whether it's in criminal justice, hiring, wherever it is. So I think that'll take a while, and our book tries to be a very small first step towards changing public perception that this is not something you can somehow automate using AI. 
These are actually experiments on people. They're gonna be very hard to do.</p>
<p><strong>JASON KELLEY:</strong> Let's take a quick moment to thank our sponsor. “How to Fix the Internet” is supported by The Alfred P. Sloan Foundation’s Program in Public Understanding of Science and Technology. Enriching people’s lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.<br />We also want to thank EFF members and donors. You are the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever, so please, if you like what we do, go to eff.org/pod to donate. Also, we’d love for you to join us at this year’s EFF awards, where we celebrate the people working towards the better digital future that we all care so much about. Those are coming up on September 12th in San Francisco. You can find more information about that at eff.org/awards.<br />We also wanted to share that our friend Cory Doctorow has a new podcast – have a listen to this.<br />[<em>WHO BROKE THE INTERNET TRAILER</em>]<br />And now back to our conversation with Arvind Narayanan.</p>
<p><strong>CINDY COHN:</strong> So let's go to the other end of the AI world, the people who, you know, are in what I think they call AI safety, where they're really focused on the, you know, robots-are-gonna-kill-us-all kind of concerns, 'cause that's a piece of this story as well. And I'd love to hear your take on, you know, kind of the doom loop, um, version of AI.</p>
<p><strong>ARVIND NARAYANAN:</strong> Sure. Yeah. So there's uh, a whole chapter in the book where we talk about concerns around catastrophic risk from future more powerful AI systems, and we have also elaborated a lot of those in a new paper we released called AI as Normal Technology, if folks are interested in looking that up. And look, I mean, I'm glad that folks are studying AI safety and the unusual, let's say, kinds of risks that might arise in the future that are not necessarily direct extrapolations of the risks that we have currently. <br />But where we object to these arguments is the claim that we have enough knowledge and evidence of those risks being so urgent and serious that we have to put serious policy measures in place now, uh, you know, such as, uh, curbing open weights AI, for instance, because you never know who's gonna download these systems and what they're gonna do with them.<br />So we have a few reasons why we think those kinds of really strong arguments are going too far. One reason is that the kinds of interventions that we will need, if we want to control this at the level of the technology, as opposed to the use and deployment of the technology, those kind of non-proliferation measures as we call them, are, in our view, almost guaranteed not to work.<br />And to even try to enforce that, you're kind of inexorably led to the idea of building a world authoritarian government that can monitor all, you know, AI development everywhere and make sure that the few companies that are gonna be licensed to do this are doing it in a way that builds in all of the safety measures, the alignment measures, as this community calls them, that we want out of these AI models.<br />Because models that took, you know, hundreds of millions of dollars to build just a few years ago can now be built using a cluster of enthusiasts’ machines in a basement, right? 
And if we imagine that these safety risks are tied to the capability level of these models, which is an assumption that a lot of people have in order to call for these strong policy measures, then the predictions that came out of that line of thinking, in my view, have already repeatedly been falsified.<br />So when GPT-2 was built, right, this was back in 2019, OpenAI claimed that it was so dangerous, in terms of misinformation being out there, that it was going to have potentially deleterious impacts on democracy, that they couldn't release it on an open weights basis.<br />That's a model that my students now build, you know, in an afternoon, just to learn the process of building models, right? So that's how cheap that has gotten six years later, and vastly more powerful models than GPT-2 have now been made available openly. And when you look at the impact on AI-generated misinformation, we did a study. We looked at the Wired database of the use of AI in election-related activities worldwide. And those fears associated with AI-generated misinformation have simply not come true, because it turns out that the purpose of election misinformation is not to convince someone of the other tribe, if you will, who is skeptical, but just to give fodder for your own tribe so that they will, you know, continue to support whatever it is you're pushing for. <br />And for that purpose, it doesn't have to be that convincing or that deceptive, it just has to be cheap fakes, as it's called. It's the kind of thing that anyone can do, you know, in 10 minutes with Photoshop. Even with the availability of sophisticated AI image generators, a lot of the AI misinformation we're seeing are these kinds of cheap fakes that don't even require that kind of sophistication to produce, right?<br />So a lot of these supposed harms really have the wrong theory in mind of how powerful technology will lead to potentially harmful societal impacts. 
Another great one is in cybersecurity, which, you know, I worked in for many years before I started working in AI. <br />And if the concern is that AI is gonna find software vulnerabilities and exploit them, and exploit critical infrastructure, whatever, better than humans can, I mean, we crossed that threshold a decade or two ago. Automated methods like fuzzing have long been used to find new cyber vulnerabilities, but it turns out that this has actually helped defenders over attackers. Because software companies can and do, and this is, you know, really almost the first line of defense, use these automated vulnerability discovery methods to find vulnerabilities and fix those vulnerabilities in their own software before even putting it out there, where attackers would get a chance to find those vulnerabilities.<br />So to summarize all of that: a lot of the fears are based on a kind of incorrect theory of the interaction between technology and society. Uh, we have other ways to defend; in fact, in a lot of ways, AI itself is the defense against some of these AI-enabled threats we're talking about. And thirdly, the defenses that involve trying to control AI are not going to work, and they are, in our view, pretty dangerous for democracy.</p>
<p><strong>CINDY COHN:</strong> Can you talk a little bit about the AI as normal technology idea? Because I think this is a world that we're headed into that you've been thinking about a little more. 'cause we're, you know, we're not going back.<br />Anybody who hangs out with people who write computer code knows that using these systems to write computer code is like normal now. Um, and it would be hard to go back even if you wanted to go back. Um, so tell me a little bit about, you know, this version of AI as normal technology. 'cause I think it feels like the future now, but actually, you know, what do they say, the future is here, it's just not evenly distributed. Like it is not evenly distributed yet. So what does it look like?</p>
<p><strong>ARVIND NARAYANAN:</strong> Yeah, so a big part of the paper takes seriously the prospect of cognitive automation using AI, that AI will at some point be able to do, you know, with some level of accuracy and reliability, most of the cognitive tasks that are valuable in today's economy at least, and asks, how quickly will this happen? What are the effects going to be? <br />So a lot of people who think this will happen think that it's gonna happen this decade, and a lot of this, you know, uh, brings a lot of fear to people and a lot of very short term thinking. But our paper looks at it in a very different way. So first of all, we think that even if this kind of cognitive automation is achieved, to use an analogy to the industrial revolution, where a lot of physical tasks became automated, it didn't mean that human labor was superfluous, because we don't take powerful physical machines like cranes or whatever and allow them to operate unsupervised, right?<br />So with those physical tasks that became automated, the meaning of what labor is, is now all about the supervision of those physical machines that are vastly more physically powerful than humans. So we think, and this is just an analogy, but we have a lot of reasoning in the paper for why we think this will be the case, that what jobs might mean in a future with cognitive automation is primarily around the supervision of AI systems.<br />And so for us, that's a very positive view. We think that for the most part, those will still be fulfilling jobs in certain sectors. There might be catastrophic impacts, but it's not that across the board you're gonna have drop-in replacements for human workers that are gonna make human jobs obsolete. We don't really see that happening, and we also don't see this happening in the space of a few years. 
<br />We talk a lot about the various sources of inertia that are built into the adoption of any new technology, especially a general purpose technology like electricity. We talk about, again, another historical analogy, where factories took several decades to figure out how to replace their steam boilers in a useful way with electricity, not because it was technically hard, but because it required organizational innovations, like changing the whole layout of factories around the concept of the assembly line. So we think through what some of those changes might have to be when it comes to the use of AI. And we, you know, we say that we have a few decades to make this transition, and that, even when we do make the transition, it's not going to be as scary as a lot of people seem to think.</p>
<p><strong>CINDY COHN:</strong> So let's say we're living in the future, the Arvind future where we've gotten all these AI questions, right. What does it look like for, you know, the average person or somebody doing a job?</p>
<p><strong>ARVIND NARAYANAN:</strong> Sure. A few big things. I wanna use the internet as an analogy here. Uh, 20, 30 years ago, we used to kind of log onto the internet, do a task, and then log off. But now, the internet is simply the medium through which all knowledge work happens, right? So we think that if we get this right, in the future AI is gonna be the medium through which knowledge work happens. It's kind of there in the background and automatically doing stuff that we need done, without us necessarily having to go to an AI application and ask it something and then bring the result back to something else. <br />There is this famous definition of AI, that AI is whatever hasn't been done yet. So what that means is that when a technology is new and it's not working that well and its effects are double-edged, that's when we're more likely to call it AI.<br />But eventually it starts working reliably and it kind of fades into the background, and we take it for granted as part of our digital or physical environment. And we think that that's gonna happen with generative AI to a large degree. It's just gonna be invisibly making all knowledge work a lot better, and human work will be primarily about exercising judgment over the AI work that's happening pervasively, as opposed to humans being the ones doing, you know, the nuts and bolts of the thinking in any particular occupation. <br />I think another one is, uh, I hope that we will have gotten better at recognizing the things that are intrinsically human and putting more human effort into them, that we will have freed up more human time and effort for those things that matter. So some folks, for instance, are saying, oh, let's automate government and replace it with a chatbot. Uh, you know, we point out that that's missing the point of democracy, because if a chatbot is making decisions, it might be more efficient in some sense, but it's not in any way reflecting the will of the people. 
So whatever people's concerns are with government being inefficient, automation is not going to be the answer. We can think about structural reforms, and we certainly should, you know, and maybe it will free up more human time to do the things that are intrinsically human and really matter, such as how do we govern ourselves and so forth.<br />And, um, maybe if I can have one last thought around what this positive vision of the future looks like, I would go back to the very thing we started from, which is AI and education. I do think there's orders of magnitude more human potential to open up, and AI is not a magic bullet here.<br />You know, technology on the whole is only one small part of it, but I think as we more generally become wealthier and we have, you know, lots of different reforms, hopefully one of those reforms is going to be schools and education systems, uh, being much better funded, being able to operate much more effectively, and, you know, every child one day being able to perform, uh, as well as the highest achieving children today.<br />And there's just an enormous range. And so being able to improve human potential, to me, is the most exciting thing.</p>
<p><strong>CINDY COHN:</strong> Thank you so much, Arvind.</p>
<p><strong>ARVIND NARAYANAN:</strong> Thank you Jason and Cindy. This has been really, really fun.</p>
<p><strong>CINDY COHN:</strong> I really appreciate Arvind's hopeful and correct idea that actually what most of us do all day isn't really reducible to something a machine can replace. That, you know, real life just isn't like a game of chess or, you know, the test you have to pass to be a lawyer, or things like that. And that there's a huge gap between, you know, the actual job and the thing that the AI can replicate.</p>
<p><strong>JASON KELLEY:</strong> Yeah, and he's really thinking a lot about how the debates around AI in general are framed at this really high level, which seems incorrect, right? I mean, it's sort of like asking if food is good for you, are vehicles good for you, but he's much more nuanced, you know? AI is good in some cases, not good in others. And his big takeaway for me was that, you know, people need to be skeptical about how they use it. They need to be skeptical about the information it gives them, and they need to sort of learn what methods they can use to make AI work with you and for you and, and how to make it work for the application you're using it for.<br />It's not something you can just apply, you know, wholesale across anything which, which makes perfect sense, right? I mean, no one I think thinks that, but I think industries are plugging AI into everything or calling it AI anyway. And he's very critical of that, which I think is, is good and, and most people are too, but it's happening anyway. So it's good to hear someone who's really thinking about it this way point out why that's incorrect.</p>
<p><strong>CINDY COHN:</strong> I think that's right. I like the idea of normalizing AI and thinking about it as a general purpose tool that might be good for some things and, and it's bad for others, honestly, the same way computers are, computers are good for some things and bad for others. So, you know, we talk about vehicles and food in the conversation, but actually think you could talk about it for, you know, computing more broadly. <br />I also liked his response to the doomers, you know, pointing out that a lot of the harms that people are claiming will end the world, kind of have the wrong theory in mind about how a powerful technology will lead to bad societal impact. You know, he's not saying that it won't, but he's pointing out that, you know, in cybersecurity for example, you know, some of the AI methods which had been around for a while, he talked about fuzzing, but there are others, you know, that those techniques, while they were, you know, bad for old cybersecurity, actually have spurred greater protections in cybersecurity. And the lesson is when we learn all the time in, in security, especially like the cat and mouse game is just gonna continue.<br />And anybody who thinks they've checkmated, either on the good side or the bad side, is probably wrong. And that I think is an important insight so that, you know, we don't get too excited about the possibilities of AI, but we also don't go all the way to the, the doomers side.</p>
<p><strong>JASON KELLEY:</strong> Yeah. You know, the normal technology thing was really helpful for me, right? It's something that, like you said with computers, it's a tool that has applications in some cases and not others. I don't know if anyone thought when the internet was developed that this was going to end the world or save it. I guess some people might have thought either/or, but you know, neither is true. Right? And you know, it's been many years now and we're still learning how to make the internet useful, and I think it'll be a long time before we've necessarily figured out how AI can be useful. But there's a lot of lessons we can take away from the growth of the internet about how to apply AI.<br />You know, my dishwasher, I don't think needs to have wifi. I don't think it needs to have AI either. I'll probably end up buying one that has to have those things because that's the way the market goes. But it seems like the way we've sort of, uh, figured out where the applications are for these different general purpose technologies in the past is something we can continue to draw on for AI.</p>
<p><strong>CINDY COHN:</strong> Yeah, and honestly it points to competition and user control, right? I mean, the reason I think a lot of people are feeling stuck with AI is because we don't have an open market for systems where you can decide, I don't want AI in my dishwasher, or I don't want surveillance in my television.<br />And that's a market problem. And one of these things that he said a lot is that, you know, “just add AI” doesn't solve problems with broken institutions. And I think it circles back to the fact that we don't have a functional market, we don't have real consumer choice right now. And so that's why some of the fears about AI, it's not just consumers, I mean worker choice, other things as well, it's the problems in those systems in the way power works in those systems. <br />If you just center this on the tech, you're kind of missing the bigger picture and also the things that we might need to do to address it. I wanted to circle back to what you said about the internet because of course it reminds me of Barlow's declaration on the independence of cyberspace, which you know, has been interpreted by a lot of people, as saying that the internet would magically make everything better and, you know, Barlow told me directly, like, you know, what he said was that by projecting a positive version of the online world and speaking as if it was inevitable, he was trying to bring it about, right? <br />And I think this might be another area where we do need to bring about a better future, um, and we need to posit a better future, but we also have to be clear-eyed about the, the risks and, you know, whether we're headed in the right direction or not, despite what we, what we hope for.</p>
<p><strong>JASON KELLEY:</strong> And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.<br />Our theme music is by Nat Keefe of BeatMower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology. We'll see you next time. I'm Jason Kelley.</p>
<p><strong>CINDY COHN:</strong> And I'm Cindy Cohn.</p>
<p><em><strong>MUSIC CREDITS:</strong> This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes and sound design by Gaetan Harris.</em></p>
</div></div></div></description>
<pubDate>Wed, 13 Aug 2025 07:05:00 +0000</pubDate>
<guid isPermaLink="false">110968 at https://www.eff.org</guid>
<category domain="https://www.eff.org/how-to-fix-the-internet-podcast">How to Fix the Internet: Podcast</category>
<category domain="https://www.eff.org/issues/ai">Artificial Intelligence &amp; Machine Learning</category>
<dc:creator>Josh Richman</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025-htfi-arvind-blog.jpg" type="image/jpeg" length="228730" />
</item>
<item>
<title>Fake Clinics Quietly Edit Their Websites After Being Called Out on HIPAA Claims</title>
<link>https://www.eff.org/deeplinks/2025/08/fake-clinics-quietly-edit-their-websites-after-being-called-out-hipaa-claims</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>In a promising sign that public pressure works, several crisis pregnancy centers (CPCs, also known as “</span><a href="https://www.plannedparenthood.org/blog/what-are-crisis-pregnancy-centers"><span>fake clinics</span></a><span>”) have quietly scrubbed misleading language about privacy protections from their websites. </span></p>
<p><span>Earlier this year, EFF </span><a href="https://www.eff.org/deeplinks/2025/01/eff-state-ags-time-investigate-crisis-pregnancy-centers"><span>sent</span></a> <a href="https://www.eff.org/deeplinks/2025/03/state-ags-must-act-eff-expands-call-investigate-crisis-pregnancy-centers"><span>complaints</span></a><span> to attorneys general in eight states (</span><a href="https://www.eff.org/files/2025/03/04/2025.03.04_updated_final_letter_to_fl_with_exhibits.pdf"><span>FL</span></a><span>, </span><a href="https://www.eff.org/files/2025/01/28/2025.01.28_letter_to_tx_ag_with_exhibits.pdf"><span>TX</span></a><span>, </span><a href="https://www.eff.org/files/2025/01/28/2025.01.28_letter_to_arkansas_ag_with_exhibits_0.pdf"><span>AR</span></a><span>, </span><a href="https://www.eff.org/files/2025/01/28/2025.01.28_letter_to_missouri_ag_with_exhibits_0.pdf"><span>MO</span></a><span>, </span><a href="https://www.eff.org/files/2025/03/19/2024.03.20_letter_to_tn_with_exhibits.pdf"><span>TN</span></a><span>, </span><a href="https://www.eff.org/files/2025/03/19/2024.03.20_letter_to_ok_with_exhibits.pdf"><span>OK</span></a><span>, </span><a href="https://www.eff.org/files/2025/03/19/2024.03.20_letter_to_ne_with_exhibits.pdf"><span>NE</span></a><span>, and </span><a href="https://www.eff.org/files/2025/03/19/2024.03.20_letter_to_nc_with_exhibits.pdf"><span>NC</span></a><span>), asking them to investigate these centers for misleading the public with false claims about their privacy practices—specifically, falsely stating or implying that they are bound by the Health Insurance Portability and Accountability Act (HIPAA). These claims are especially deceptive because many of these centers are not licensed medical clinics or do not have any medical providers on staff, and thus are </span><i><span>not</span></i><span> subject to HIPAA’s protections.</span></p>
<p><span>Now, after an internal follow-up investigation, we’ve found that our efforts are already bearing fruit: Of the 21 CPCs we cited as exhibits in our complaints, six have completely removed HIPAA references from their websites, and one has made partial changes (removed one of two misleading claims). Notably, every center we flagged in our letters to Texas AG Ken Paxton and Arkansas AG Tim Griffin has updated its website—a clear sign that clinics in these states are responding to scrutiny.</span></p>
<p><span>While 14 remain unchanged, this is a promising development. These centers are </span><a href="https://web.archive.org/web/20250213160237/https://www.heartbeatinternational.org/tanzania-lrg/item/2876-hipaa-rights-wrongs"><span>clearly paying attention</span></a><span>—and changing their messaging. We haven’t yet received substantive responses from the state attorneys general beyond formal acknowledgements of our complaints, but these early results confirm what we’ve long believed: transparency and public pressure work.</span></p>
<p><span>These changes (often quiet edits to privacy policies on their websites or deleting blog posts) signal that </span><a href="https://www.nbcnews.com/health/womens-health/crisis-pregnancy-centers-prenatal-ultrasound-ectopic-pregnancy-rcna214171"><span>the CPC network is trying to clean up</span></a><span> their public-facing language in the wake of scrutiny. But removing HIPAA references from a website doesn’t mean the underlying privacy issues have been fixed. Most CPCs are still not subject to HIPAA, because they are not licensed healthcare providers. They continue to collect sensitive information without clearly disclosing how it’s stored, used, or shared. And in the absence of strong federal privacy laws, there is little recourse for people whose data is misused. </span></p>
<p><span>These clinics have misled patients who are often navigating complex and emotional decisions about their health, </span><a href="https://therecord.media/crisis-pregnancy-centers-hipaa-data-privacy"><span>misrepresented themselves as bound by federal privacy law</span></a><span>, and falsely </span><a href="https://www.documentcloud.org/documents/25454641-hhs-closure-letter-the-unexpected-pregnancy-center/"><span>referred people to the U.S. Department of Health and Human Services for redress</span></a><span>—implying legal oversight and accountability. They made patients believe their sensitive data was protected, when in many cases, it was </span><a href="https://jessica.substack.com/p/exclusive-health-data-breach-at-americas"><span>shared with affiliated networks</span></a><span>, or even put on the internet for anyone to see—including churches or political organizations.</span></p>
<p><span>That’s why we continue to monitor these centers—and call on state attorneys general to do the same. </span></p>
</div></div></div></description>
<pubDate>Tue, 12 Aug 2025 20:35:43 +0000</pubDate>
<guid isPermaLink="false">110975 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/reproductive-rights">Reproductive Justice</category>
<category domain="https://www.eff.org/issues/medical-privacy">Medical Privacy</category>
<dc:creator>Rindala Alajaji</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/pregnant-1.png" type="image/png" length="10950" />
</item>
<item>
<title>Americans, Be Warned: Lessons From Reddit’s Chaotic UK Age Verification Rollout</title>
<link>https://www.eff.org/deeplinks/2025/08/americans-be-warned-lessons-reddits-chaotic-uk-age-verification-rollout</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>Age verification has officially arrived in the UK thanks to the </span><a href="https://www.eff.org/deeplinks/2025/08/no-uks-online-safety-act-doesnt-make-children-safer-online"><span>Online Safety Act (OSA)</span></a><span>, a UK law requiring online platforms to check that all UK-based users are at least eighteen years old before allowing them to access broad categories of “</span><a href="https://www.eff.org/deeplinks/2025/08/blocking-access-harmful-content-will-not-protect-children-online-no-matter-how"><span>harmful</span></a><span>” content that go far beyond graphic sexual content. EFF has extensively criticized the OSA for </span><a href="https://www.eff.org/deeplinks/2022/08/uks-online-safety-bill-attacks-free-speech-and-encryption"><span>eroding privacy</span></a><span>, </span><a href="https://www.eff.org/deeplinks/2023/09/uk-online-safety-bill-will-mandate-dangerous-age-verification-much-web"><span>chilling speech</span></a><span>, and </span><a href="https://www.eff.org/deeplinks/2025/08/blocking-access-harmful-content-will-not-protect-children-online-no-matter-how"><span>undermining the safety of the children</span></a><span> it aims to protect. Now that it’s gone into effect, these countless problems have begun to reveal themselves, and the absurd, disastrous outcome illustrates why we must work to avoid this age-verified future at all costs.</span></p>
<p><span>Perhaps you’ve </span><a href="https://knowyourmeme.com/photos/3111907-uk-online-safety-age-verification-law"><span>seen</span></a> <a href="https://www.reddit.com/r/memes/comments/ob5f5u/how_adult_sites_think_age_gates_work/"><span>the</span></a> <a href="https://bsky.app/profile/tartanllama.xyz/post/3lvl6o4gfic2l"><span>memes</span></a><span> as large platforms like </span><a href="https://www.reddit.com/r/memes/comments/1megi5f/they_arent_serious_are_they/?chainedPosts=t3_gk6ica"><span>Spotify</span></a><span> and </span><a href="https://www.reddit.com/r/youtube/comments/1mdlbqk/youtubes_new_age_verification_system_in_a_nutshell/?chainedPosts=t3_1mdiyai%2Ct3_1m8050s%2Ct3_gk6ica"><span>YouTube</span></a><span> attempt to comply with the OSA, while smaller sites—like forums focused on </span><a href="https://web.archive.org/web/20250408084153/https://www.dadswithkids.co.uk/ams/forum-closure.28/"><span>parenting</span></a><span>, </span><a href="https://web.archive.org/web/20250120205725/https://www.thegreenlivingforum.net/forum/viewtopic.php?f=2&amp;t=114519"><span>green living</span></a><span>, and </span><a href="https://web.archive.org/web/20250102185206/https://www.gamingonlinux.com/forum/topic/6463/"><span>gaming on Linux</span></a><span>—either shut down or cease some operations rather than face massive fines for not following the law’s vague, expensive, and complicated rules and risk assessments. </span></p>
<p><span>But even Reddit, a site that </span><a href="https://www.reddit.com/r/RedditSafety/comments/1lzt65t/verifying_the_age_but_not_the_identity_of_uk/"><span>prizes anonymity</span></a><span> and has regularly demonstrated its commitment to digital rights, was doomed to fail in its attempt to comply with the OSA. Though Reddit is </span><a href="https://nymag.com/intelligencer/article/age-verification-is-coming-for-the-whole-internet.html"><span>not alone</span></a><span> in bowing to the UK mandates, it provides a perfect case study and a particularly instructive glimpse of what the age-verified future would look like if we don’t take steps to stop it.</span></p>
<h3><b>It’s Not Just Porn—LGBTQ+, Public Health, and Politics Forums All Behind Age Gates</b></h3>
<p><span>On July 25, users in the UK were shocked and rightfully revolted to discover that their favorite Reddit communities were now </span><a href="https://www.wired.com/story/the-age-checked-internet-has-arrived/"><span>locked</span></a><span> behind age verification walls. Under the </span><a href="https://support.reddithelp.com/hc/en-us/articles/36429514849428-Why-is-Reddit-asking-for-my-age"><span>new policies</span></a><span>, UK Redditors were asked to submit a photo of their government ID and/or a live selfie to </span><a href="https://www.forbes.com/sites/rashishrivastava/2025/04/30/ai-is-making-the-internets-bot-problem-worse-this-2-billion-startup-is-on-the-front-lines/"><span>Persona</span></a><span>, the for-profit vendor that Reddit contracts with to provide age verification services. </span></p>
<p class="center-image"><span><img src="/files/2025/08/08/persona_reddit_av_image.jpg" width="328" height="601" alt=" &quot;SUBMIT PHOTO ID&quot; or &quot;ESTIMATE AGE FROM SELFIE.&quot;" title=" &quot;SUBMIT PHOTO ID&quot; or &quot;ESTIMATE AGE FROM SELFIE.&quot;" /></span></p>
<p><span>For many, this was the first time they realized what the OSA would actually mean in practice—and <strong>the outrage was immediate</strong>. As soon as the policy took effect, reports emerged from users that subreddits dedicated to </span><a href="https://www.reddit.com/r/lgbt/comments/1m8ipus/how_is_this_okay_reddit_seems_to_be_classifying/"><span>LGBTQ+ identity</span></a> <a href="https://www.reddit.com/r/transgenderUK/comments/1m6oflm/18_walled_for_trans_subreddits/"><span>and</span></a> <a href="https://www.reddit.com/r/transgenderUK/comments/1m8kb4m/reddit_now_censors_lgbtq_content_under_ofcoms_new/"><span>support</span></a><span>, </span><a href="https://www.reddit.com/r/AlJazeera/"><span>global</span></a> <a href="https://www.bbc.com/news/articles/cj3l0e4vr0ko"><span>journalism</span></a><span> and </span><a href="https://www.404media.co/uk-users-need-to-post-selfie-or-photo-id-to-view-reddits-r-israelcrimes-r-ukrainewarfootage/"><span>conflict reporting</span></a><span>, and even </span><a href="https://www.usermag.co/p/the-uks-censorship-catastrophe-is"><span>public health-related forums</span></a><span> like r/periods, r/stopsmoking, and r/sexualassault were walled off to unverified users. A few more absurd examples of the communities that were blocked off, </span><a href="https://www.reddit.com/r/AskUK/comments/1m8tjeh/whats_the_stupidest_subreddit_youve_seen/"><span>according to users</span></a><span>, include: r/poker, r/vexillology (the study of flags), r/worldwar2, r/earwax, r/popping (the home of grossly satisfying pimple-popping content), and r/rickroll (</span><a href="https://www.youtube.com/watch?v=dQw4w9WgXcQ"><span>yup</span></a><span>). This is, again, exactly what digital rights advocates warned about. </span></p>
<p class="pull-quote"><span>Every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose? </span></p>
<p><span>The OSA </span><a href="https://www.eff.org/deeplinks/2025/08/no-uks-online-safety-act-doesnt-make-children-safer-online"><span>defines</span></a><span> "harmful" in multiple ways that go </span><a href="https://www.eff.org/deeplinks/2025/01/impact-age-verification-measures-goes-beyond-porn-sites"><span>far beyond pornography</span></a><span>, so the obstacles the UK users are experiencing are exactly what the law intended. Like other online age restrictions, the OSA obstructs way more than </span><i><span>kids’ </span></i><span>access to clearly </span><i><span>adult </span></i><span>sites. When fines are at stake, </span><b>platforms will always default to overcensoring</b><span>. So every user in the country is now faced with a choice: submit their most sensitive data for privacy-invasive analysis, or stay off of Reddit entirely. Which would you choose? </span></p>
<p><span>Again, the fact that the OSA has forced Reddit, the “</span><a href="https://redditinc.com/"><span>heart of the internet</span></a><span>,” to overcensor user-generated content is noteworthy. Reddit has historically succeeded where many others have failed in safeguarding digital rights—particularly the free speech and privacy of its users. It may not be </span><a href="https://www.eff.org/deeplinks/2023/06/what-reddit-got-wrong"><span>perfect</span></a><span>, but Reddit has worked harder than many large platforms to </span><a href="https://www.nexttv.com/news/reddits-huffman-ending-sec-230-is-existential-threat"><span>defend Section 230</span></a><span>, a key law in the US protecting free speech online. It was one of the first platforms to endorse the </span><a href="https://santaclaraprinciples.org/"><span>Santa Clara Principles</span></a><span>, and it was the only platform to receive every star in EFF’s 2019 </span><a href="https://www.eff.org/wp/who-has-your-back-2019"><span>“Who Has Your Back” (Censorship Edition</span></a><span>) report due to its unique approach to moderation, its commitment to notice and appeals of moderation decisions, and its transparency regarding government takedown requests. Reddit’s users are particularly active in the digital rights world: in 2012, they </span><a href="https://www.reddit.com/r/IAmA/comments/16tu47/one_year_ago_today_you_help_us_beat_sopa_thanks/"><span>helped EFF and other advocates defeat SOPA/PIPA</span></a><span>, a dangerous censorship law. Redditors were key in forcing members of Congress to take a stand against the bill, and were the first to declare a “blackout day,” a historic moment of online advocacy in which over a </span><a href="https://www.eff.org/deeplinks/2012/12/2012-review-blackout-protests-against-blacklist-bills"><span>hundred thousand websites</span></a><span> went dark to protest the bill. 
And Reddit is the only major social media platform where EFF doesn’t regularly share our work—because its users generally do so on their own. </span></p>
<p><span>If a platform with a history of fighting for digital rights is forced to overcensor, how will the rest of the internet look if age verification spreads? Reddit’s attempts to comply with the OSA show the urgency of fighting these mandates on every front. </span></p>
<p><span>We cannot accept these widespread censorship regimes as our new norm. </span></p>
<h3><b>Rollout Chaos: The Tech Doesn’t Even Work! </b></h3>
<p><span>In the days after the OSA became effective, </span><a href="https://www.wired.com/story/vpn-use-spike-age-verification-laws-uk/"><span>backlash</span></a><span> to the new age verification measures spread across the internet like wildfire as UK users made their hatred of these new policies clear. VPN usage in the UK </span><a href="https://www.bbc.com/news/articles/cn72ydj70g5o"><span>soared</span></a><span>, over 500,000 people signed a </span><a href="https://petition.parliament.uk/petitions/722903"><span>petition</span></a><span> to repeal the OSA, and some shrewd users even discovered that </span><a href="https://www.theverge.com/report/714402/uk-age-verification-bypass-death-stranding-reddit-discord"><span>video game face filters</span></a><span> and </span><a href="https://www.404media.co/kids-say-theyre-using-photos-of-trump-and-markiplier-to-bypass-gorllia-tag-age-verification/"><span>meme images</span></a><span> could fool Persona’s verification software. But these loopholes aren’t likely to last long, as we can expect the age-checking technology to continuously adapt to new evasion tactics. As good as they may be, VPNs </span><a href="https://www.eff.org/deeplinks/2025/01/vpns-are-not-solution-age-verification-laws"><span>cannot save us</span></a><span> from the harms of age verification. </span></p>
<p class="pull-quote"><span>In effect, the OSA and other age verification mandates like it will <i>increase</i> the risk of harm, not reduce it. </span></p>
<p><span>Even when the workarounds inevitably cease to function and the age-checking procedures calcify, age verification measures still will not achieve their singular goal of protecting kids from so-called “harmful” online content. Teenagers </span><a href="https://letsgotothemovies.com/wp-content/uploads/2025/04/35ccf17ba76caa68310a8ce05397e4bb.jpg?w=625"><span>will, uh, find a way</span></a><span> to access the content they want. Instead of going to a vetted site like Pornhub for explicit material, curious young people (and anyone else who does not or cannot submit to age checks) will be </span><a href="https://www.huffingtonpost.co.uk/entry/porn-age-verification-check-advice-parents_uk_6881ea8de4b0d55a3f196462?d_id=10384333&amp;ncid_tag=tweetlnkukhpmg00000001"><span>pushed</span></a><span> to the sketchier corners of the internet—where there is less moderation, more safety risk, and no regulation to prevent things like CSAM or non-consensual sexual content. In effect, the OSA and other age verification mandates like it will </span><i><span>increase</span></i><span> the risk of harm, not reduce it. </span></p>
<p><span>If that weren’t enough, the </span><a href="https://www.usermag.co/p/the-uks-censorship-catastrophe-is"><span>slew</span></a><span> of </span><a href="https://www.theverge.com/analysis/714587/uk-online-safety-act-age-verification-reactions"><span>practical</span></a> <a href="https://www.theverge.com/analysis/715767/online-age-verification-not-ready"><span>issues</span></a><span> that have accompanied Reddit’s rollout also reveals the inadequacy of age verification technology to meet our current moment. For example, users </span><a href="https://www.reddit.com/r/help/comments/1m0w7r0/need_to_verify_your_age_in_the_uk_this_post_might/"><span>reported</span></a><span> various </span><a href="https://www.reddit.com/r/reddithelp/comments/1m3bmtj/possible_issue_with_age_verification_reprompting/"><span>bugs</span></a><span> in the age-checking process, like being locked out or asked repeatedly for ID despite complying. UK-based subreddit moderators also </span><a href="https://www.reddit.com/r/RedditSafety/comments/1lzt65t/comment/n34hevh/?utm_source=share&amp;utm_medium=web3x&amp;utm_name=web3xcss&amp;utm_term=1&amp;utm_content=share_button"><span>reported</span></a><span> facing </span><a href="https://www.reddit.com/r/ModSupport/comments/1maxj1z/the_online_safety_act_age_verification_rollout_is/"><span>difficulties</span></a><span> either viewing NSFW post submissions or vetting users’ post history, even when the particular submission or subreddit in question was entirely SFW. </span></p>
<p><span>Taking all of this together, it is abundantly clear that age-gating the internet is not the solution to kids’ online safety. Whether due to issues with the </span><a href="https://www.eff.org/deeplinks/2025/01/face-scans-estimate-our-age-creepy-af-and-harmful"><span>discriminatory and error-prone technology</span></a><span>, or simply because they lack either a government ID or personal device of their own, millions of UK internet users will be completely locked out of important social, political, and creative communities. </span>If we allow age verification, we welcome new levels of censorship and surveillance with it—while further lining the pockets of big tech and the slew of for-profit age verification vendors that have popped up to fill this market void.</p>
<h3><b>Americans, Take Heed: It Will Happen Here Too</b></h3>
<p><span>The UK age verification rollout, chaotic as it is, is a proving ground for platforms that are </span><a href="https://www.theverge.com/analysis/715767/online-age-verification-not-ready"><span>looking ahead</span></a><span> to implementing these measures on a global scale. In the US, there’s never been a better time to get educated and get loud about the dangers of this legislation. EFF has </span><a href="https://www.eff.org/deeplinks/2025/01/impact-age-verification-measures-goes-beyond-porn-sites"><span>sounded</span></a><span> this alarm before, but Reddit’s attempts to comply with the OSA show its urgency: </span><b>age verification mandates </b><b><i>are</i></b><b> censorship regimes, and in the US, porn is just the tip of the iceberg</b><span>. </span></p>
<p><span>US legislators have been </span><a href="https://www.them.us/story/kosa-senator-blackburn-censor-trans-content"><span>disarmingly explicit</span></a><span> about their intentions to use restrictions on sexually explicit content as a Trojan horse that will eventually help them censor all sorts of other </span><a href="https://www.eff.org/deeplinks/2025/03/first-porn-now-skin-cream-age-verification-bills-are-out-control"><span>perfectly legal (and largely uncontroversial) content</span></a><span>. We’ve already seen them move the goalposts from porn to </span><a href="https://static.heritage.org/project2025/2025_MandateForLeadership_FULL.pdf#page=37"><span>transgender</span></a><span> and other </span><a href="https://msmagazine.com/2025/02/25/lgbtq-abortion-censorship-age-verification-laws/"><span>LGBTQ+ content</span></a><span>. What’s next? Sexual education materials, </span><a href="https://www.eff.org/deeplinks/2024/09/kosas-online-censorship-threatens-abortion-access"><span>reproductive rights information</span></a><span>, DEI or “critical race theory” resources—the list goes on. Under </span><a href="https://www.eff.org/deeplinks/2024/03/analyzing-kosas-constitutional-problems-depth"><span>KOSA</span></a><span>, which last session passed the Senate with an </span><a href="https://www.eff.org/deeplinks/2024/07/kosa-internet-censorship-bill-just-passed-senate-its-our-last-chance-stop-it"><span>enormous</span></a><span> majority but did not make it to the House, we would likely see similar results here that we see in the UK under the OSA.</span></p>
<p><span>Nearly </span><a href="https://action.freespeechcoalition.com/age-verification-resources/state-avs-laws/"><span>half</span></a><span> of U.S. states have some sort of online age restrictions in place already, and the Supreme Court recently </span><a href="https://www.eff.org/deeplinks/2025/07/despite-supreme-court-setback-eff-fights-against-online-age-mandates"><span>paved the way</span></a><span> for even more age blocks on online sexual content. But Americans—</span><a href="https://www.eff.org/deeplinks/2025/06/eff-court-young-people-have-first-amendment-rights"><span>including those under 18</span></a><span>—still have a First Amendment right to view content that is not sexually explicit, and EFF will continue to push back against any legislation that expands the age mandates beyond porn, in statehouses, in courts, and in the streets. </span><br /><span></span></p>
<h4>What can you do?</h4>
<p><a href="https://act.eff.org/action/congress-shouldn-t-control-what-we-re-allowed-to-read-online"><span>Call or email</span></a><span> your representatives to oppose KOSA and any other federal age-checking mandate. Tell your </span><a href="https://www.eff.org/deeplinks/2024/12/effs-2024-battle-against-online-age-verification-defending-youth-privacy-and-free"><span>state</span></a><span> lawmakers, wherever you are, </span><a href="https://www.congress.gov/state-legislature-websites"><span>to oppose age verification laws</span></a><span>. Make your voice heard online, and talk to your friends and family. Tell them about what’s happening to the internet in the UK, and make sure they understand what we all stand to lose—online privacy, security, anonymity, and expression—if the age-gated internet becomes a global reality. EFF is building a coalition to stop this enormous violation of digital rights. </span><a href="https://supporters.eff.org/donate/neon"><span>Join us today. </span></a></p>
</div></div></div></description>
<pubDate>Fri, 08 Aug 2025 22:08:04 +0000</pubDate>
<guid isPermaLink="false">110973 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/free-speech">Free Speech</category>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<dc:creator>Molly Buckley</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/icon-2019-freespeech.png" type="image/png" length="14323" />
</item>
<item>
<title>EFF to Court: Chatbot Output Can Reflect Human Expression</title>
<link>https://www.eff.org/deeplinks/2025/08/eff-court-chatbot-output-can-reflect-human-expression</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>When a technology can have a conversation with you, it’s natural to anthropomorphize that technology<span data-huuid="16947533582197165948">—</span>to see it as a person. It’s tempting to see a chatbot as a thinking, speaking robot, but this gives the technology too much credit. This can also lead people<span data-huuid="16947533582197165948">—</span>including judges in cases about AI chatbots<span data-huuid="16947533582197165948">—</span>to overlook the human expressive choices connected to the words that chatbots produce. If chatbot outputs had no First Amendment protections, the government could potentially ban chatbots that criticize the administration or reflect viewpoints the administration disagrees with.</p>
<p>In fact, the output of chatbots not only can reflect the expressive choices of their creators and users, but also implicates users’ right to receive information. That’s why EFF and the Center for Democracy and Technology (CDT) have filed an amicus brief in <em>Garcia v. Character Technologies</em> explaining how large language models work and the various kinds of protected speech at stake.</p>
<p>Among the questions in this case is the extent to which free speech protections extend to the creation, dissemination, and receipt of chatbot outputs. Our brief explains how the expressive choices of a chatbot developer can shape its output, such as during reinforcement learning, when humans are instructed to give positive feedback to responses that align with the scientific consensus around climate change and negative feedback for denying it (or vice versa). This chain of human expressive decisions extends from early stages of selecting training data to crafting a system prompt. A user’s instructions are also reflected in chatbot output. Far from being the speech of a robot, chatbot output often reflects human expression that is entitled to First Amendment protection.<br /> <br /> In addition, the right to receive speech is itself protected<span data-huuid="16947533582197165948">—</span>even when the speaker would have no independent right to say it. Users have a right to access the information chatbots provide.<br /> <br /> None of this is to suggest that chatbots cannot be regulated or that the harms they cause cannot be addressed. The First Amendment simply requires that those regulations be appropriately tailored to the harm to avoid unduly burdening the right to express oneself through the medium of a chatbot, or to receive the information it provides.</p>
<p>We hope that our brief will be helpful to the court as the case progresses, as the judge decided not to send the question up on appeal at this time.</p>
<p>Read our brief below.</p>
</div></div></div></description>
<pubDate>Tue, 05 Aug 2025 19:05:05 +0000</pubDate>
<guid isPermaLink="false">110971 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/innovation">Creativity & Innovation</category>
<dc:creator>Katharine Trendacosta</dc:creator>
<dc:creator>Kit Walsh</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/icon-2019-freespeech.png" type="image/png" length="14323" />
</item>
<item>
<title>No Walled Gardens. No Gilded Cages.</title>
<link>https://www.eff.org/deeplinks/2025/08/no-walled-gardens-no-gilded-cages</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>Sometimes technology feels like a gilded cage, and you’re not the one holding the key. Most people can’t live off the grid, so how do we stop data brokers who track and exploit you for money? Tech companies that distort what you see and hear? Governments that restrict, censor, and intimidate? No one can do it alone, but EFF was built to protect your rights. <a href="https://supporters.eff.org/donate/VirtualVegas--DL" target="_blank" rel="noopener noreferrer">With your support, we can take back control.</a></p>
<p class="take-action"><a href="https://supporters.eff.org/donate/VirtualVegas--DL" target="_blank" rel="noopener noreferrer">Join EFF</a></p>
<p><strong><a href="https://www.eff.org/35">With 35 years of deep expertise and the support of our members</a></strong>, EFF is delivering bold action to solve the biggest problems facing tech users: <a href="https://www.eff.org/deeplinks/2025/04/our-privacy-act-lawsuit-against-doge-and-opm-why-judge-let-it-move-forward">suing the government</a> for <a href="https://www.eff.org/deeplinks/2025/07/eff-court-protect-our-health-data-dhs">overstepping their bounds</a>; empowering <a href="https://act.eff.org" target="_blank" rel="noopener noreferrer">the people</a> and <a href="https://act.eff.org/action/tell-the-senate-throw-out-the-no-fakes-act-and-start-over">lawmakers</a> to help them hold the line; and creating free, public interest <a href="https://www.eff.org/pages/tools" target="_blank" rel="noopener noreferrer">software tools</a>, <a href="https://ssd.eff.org" target="_blank" rel="noopener noreferrer">guides</a>, and <a href="https://www.digitalrightsbytes.org" target="_blank" rel="noopener noreferrer">explainers</a> to make the web better.</p>
<p>EFF members enable thousands of hours of our legal work, activism, investigation, and software development for the public good. <a href="https://supporters.eff.org/donate/VirtualVegas--DL" target="_blank" rel="noopener noreferrer">Join us today.</a></p>
<h2>No Walled Gardens. No Gilded Cages.</h2>
<p>Think about it: <em>in the face of <a href="https://www.eff.org/deeplinks/2025/06/dangers-consolidating-all-government-information" target="_blank" rel="noopener noreferrer">rising authoritarianism</a> and <a href="https://www.eff.org/deeplinks/2025/05/she-got-abortion-so-texas-cop-used-83000-cameras-track-her-down" target="_blank" rel="noopener noreferrer">invasive</a> <a href="https://www.eff.org/deeplinks/2025/07/you-went-drag-show-now-state-florida-wants-your-name" target="_blank" rel="noopener noreferrer">surveillance</a>, where would we be without an encrypted web?</em> Your security online depends on researchers, hackers, and creators who are willing to take privacy and free speech rights seriously. That's why EFF will eagerly protect the beating heart of that movement at <a href="https://eff.org/vegas">this week's summer security conferences in Las Vegas</a>. This renowned summit of computer hacking events—<a href="https://www.eff.org/event/eff-bsides-las-vegas-0" target="_blank" rel="noopener noreferrer">BSidesLV</a>, <a href="https://www.eff.org/event/eff-black-hat-usa-1" target="_blank" rel="noopener noreferrer">Black Hat USA</a>, and <a href="https://www.eff.org/event/eff-def-con-33">DEF CON</a>—illustrates the key role a community can play in helping you break free of the trappings of technology and retake the reins.</p>
<p>For summer security week, <a href="https://supporters.eff.org/donate/VirtualVegas--DL" target="_blank" rel="noopener noreferrer">EFF’s DEF CON 33 t-shirt design <em>Beyond the Walled Garden</em> by Hannah Diaz</a> is your gift at the Gold Level membership. Look closer to discover <a href="https://www.eff.org/files/2025/07/31/2025_defcon_shirt-back_banner-2.png" target="_blank" rel="noopener noreferrer">this year’s puzzle challenge</a>! Many thanks to our volunteer puzzlemasters jabberw0nky and <a href="https://bsky.app/profile/elegin.bsky.social">Elegin</a> for all their work.</p>
<p><div class="media media-element-container media-default media-wysiwyg-align-center"><div id="file-57916" class="file file-image file-image-png">
<div class="content">
<img title="EFF’s DEF CON 33 t-shirt: Beyond the Walled Garden. Get it as a Gold Member." class="media-element file-default" data-delta="1" src="https://www.eff.org/files/2025/07/31/defcon-shirt-frontback-wide.png" width="1000" height="740" alt="" /> </div>
</div>
</div><br /><strong>A Token of Appreciation</strong>: <a href="https://supporters.eff.org/donate/VirtualVegasR--DL" target="_blank" rel="noopener noreferrer">Become a recurring monthly or annual Sustaining Donor this week</a> and you'll get a numbered EFF35 Challenge Coin. Challenge coins follow a long tradition of offering a symbol of kinship and respect for great achievements—and EFF owes its strength to technology creators and users like you.</p>
<p>Our team is on a relentless mission to protect your civil liberties and human rights wherever they meet tech, <a href="https://supporters.eff.org/donate/VirtualVegas--DL" target="_blank" rel="noopener noreferrer">but it’s only possible with your help</a>.</p>
<p class="take-action"><a href="https://supporters.eff.org/donate/VirtualVegas--DL" target="_blank" rel="noopener noreferrer">Donate Today</a></p>
<p class="take-explainer">Break free of tech’s walled gardens.</p>
</div></div></div></description>
<pubDate>Tue, 05 Aug 2025 14:35:20 +0000</pubDate>
<guid isPermaLink="false">110965 at https://www.eff.org</guid>
<dc:creator>Aaron Jue</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025_defcon_whole.png" type="image/png" length="267013" />
</item>
<item>
<title>Blocking Access to Harmful Content Will Not Protect Children Online, No Matter How Many Times UK Politicians Say So</title>
<link>https://www.eff.org/deeplinks/2025/08/blocking-access-harmful-content-will-not-protect-children-online-no-matter-how</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>The UK is having a moment. In late July, </span><a href="https://www.eff.org/deeplinks/2025/08/no-uks-online-safety-act-doesnt-make-children-safer-online"><span>new rules took effect</span></a><span> that require all online services available in the UK to assess whether they host content considered harmful to children, and if so, these services must introduce </span><a href="https://www.ofcom.org.uk/online-safety/protecting-children/age-checks-for-online-safety--what-you-need-to-know-as-a-user"><span>age checks</span></a><span> to prevent children from accessing such content. Online services are also </span><a href="https://www.ofcom.org.uk/siteassets/resources/documents/consultations/category-1-10-weeks/statement-protecting-children-from-harms-online/main-document/volume-1-overview-scope-and-regulatory-approach.pdf?v=396663"><span>required</span></a><span> to change their algorithms and moderation systems to ensure that content defined as harmful, like violent imagery, is not shown to young people.</span></p>
<p><span>During the four years that the legislation behind these changes—the Online Safety Act (OSA)—was debated in Parliament, and in the two years since while the UK’s independent, online regulator Ofcom devised the implementing regulations, experts from </span><a href="https://www.eff.org/deeplinks/2022/08/uks-online-safety-bill-attacks-free-speech-and-encryption"><span>across civil society</span></a><span> repeatedly flagged concerns about the impact of this law on both adults’ and chil</span><span>dren’s rights. Yet politicians in the UK pushed ahead and enacted </span><a href="https://www.eff.org/deeplinks/2024/12/global-age-verification-measures-2024-year-review"><span>one of the most</span></a><span> contentious age verification mandates that we’ve seen.</span></p>
<p class="pull-quote">The case of safety online is not solved through technology alone.</p>
<p><span>No one—no matter their age—should have to hand over their passport or driver’s license just to access legal information and speak freely. As </span><a href="https://www.eff.org/pages/uk-online-safety-bill-massive-threat-online-privacy-security-and-speech"><span>we’ve been saying</span></a><span> for many years now, the approach that UK politicians have taken with the Online Safety Act is reckless, short-sighted, and will introduce more harm to the children that it is trying to protect. Here are five reasons why:</span></p>
<h3><b>Age Verification Systems Lead to Less Privacy </b></h3>
<p><span>Mandatory age verification tools are surveillance systems that threaten everyone’s rights to speech and privacy. To keep children out of a website or away from certain content, online services need to </span><a href="https://www.ofcom.org.uk/online-safety/protecting-children/age-checks-for-online-safety--what-you-need-to-know-as-a-user"><span>confirm the ages</span></a><span> of </span><i><span>all</span></i><span> their visitors, not just children—for example by asking for government-issued documentation or by using biometric data, such as face scans, that are shared with third-party services like Yoti or Persona to estimate whether the user is over 18. This means that adults and children must all share their most sensitive and personal information with online services to access a website. </span></p>
<p><span>Once this information is shared to verify a user's age, there’s </span><a href="https://www.eff.org/deeplinks/2023/03/age-verification-mandates-would-undermine-anonymity-online"><span>no way for people to know</span></a><span> how it's going to be retained or used by that company, including whether it will be sold or shared with even more third parties like data brokers or law enforcement. The more information a website collects, the more chances there are for that information to get into the hands of a marketing company, a bad actor, a state actor, or someone who has filed a legal request for it. If a website, or one of the intermediaries it uses, misuses or mishandles the data, the visitor might never find out. There is also a risk that this data, once collected, can be </span><a href="https://www.eff.org/deeplinks/2023/11/debunking-myth-anonymous-data"><span>linked to other unrelated web activity</span></a><span>, creating an aggregated profile of the user that grows more valuable as each new data point is added. </span></p>
<p><span>As we </span><a href="https://www.eff.org/pages/uk-online-safety-bill-massive-threat-online-privacy-security-and-speech"><span>argued extensively</span></a><span> during the passage of the Online Safety Act, any attempt to protect children online should </span><i><span>not</span></i><span> include measures that require platforms to collect data or remove privacy protections around users’ identities. But with the Online Safety Act, users are being forced to trust that platforms (and whatever third-party verification services they choose to partner with) are guardrailing users’ most sensitive information—not selling it through the opaque supply chains that allow corporations and data brokers to </span><a href="https://www.eff.org/deeplinks/2025/01/mad-meta-dont-let-them-collect-and-monetize-your-personal-data"><span>make millions</span></a><span>. The solution is not to come up with a more sophisticated technology, but to simply not collect the data in the first place.</span></p>
<h3>This Isn’t Just About Safety—It’s Censorship</h3>
<p><span>Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But under the Online Safety Act, the UK government—with Ofcom—is deciding what speech young people have access to, and forcing platforms to remove any content considered harmful. As part of this, platforms are </span><a href="https://www.ofcom.org.uk/siteassets/resources/documents/consultations/category-1-10-weeks/statement-protecting-children-from-harms-online/main-document/volume-1-overview-scope-and-regulatory-approach.pdf?v=396663"><span>required</span></a><span> to build “safer algorithms” to ensure that children do not encounter harmful content, and introduce effective content moderation systems to remove harmful content when platforms become aware of it. </span></p>
<p><span>Because the OSA threatens large fines or even jail time for any non-compliance, platforms are forced to over-censor content to ensure that they do not face any such liability. Reports are already showing the censorship of content that falls outside the parameters of the OSA, such as </span><a href="https://x.com/BenBarryJones/status/1948839759356572012?utm_source=www.garbageday.email&amp;utm_medium=referral&amp;utm_campaign=the-tea-app-and-the-future-of-online-surveillance"><span>footage</span></a><span> of police attacking pro-Palestinian protestors being blocked on X, the subreddit r/cider—yes, the beverage—</span><a href="https://x.com/s8mb/status/1949089791485607994?s=12&amp;utm_source=www.garbageday.email&amp;utm_medium=referral&amp;utm_campaign=the-tea-app-and-the-future-of-online-surveillance"><span>asking users</span></a><span> for photo ID, and smaller websites </span><a href="https://action.openrightsgroup.org/tell-your-mp-online-safety-act-isn%E2%80%99t-working"><span>closing down</span></a><span> entirely. UK-based organisation Open Rights Group are tracking this censorship with their tool, </span><a href="https://www.blocked.org.uk/osa-blocks"><span>Blocked</span></a><span>.</span></p>
<p><span>We know that the scope for so-called “harmful content” is </span><a href="https://www.eff.org/deeplinks/2025/01/impact-age-verification-measures-goes-beyond-porn-sites"><span>subjective and arbitrary</span></a><span>, but it also often sweeps up content like pro-LGBTQ+ speech. Policies like the OSA, that claim to “protect children” or keep sites “family-friendly,” often </span><a href="https://www.eff.org/deeplinks/2018/10/blunt-policies-and-secretive-enforcement-mechanisms-lgbtq-and-sexual-health"><span>label LGBTQ+ content</span></a><span> as “adult” or “harmful,” while similar content that doesn't involve the LGBTQ+ community is left untouched. Sometimes, this impact—the censorship of LGBTQ+ content—is implicit, and only becomes clear when the policies are actually implemented. Other times, this intended impact is explicitly spelled out in the text of the policies. But in all scenarios, legal content is being removed at the discretion of government agencies and online platforms, all under the guise of protecting children. </span></p>
<p class="subhead pull-quote"><span>Children deserve a more intentional and holistic approach to protecting their safety and privacy online.</span></p>
<h3><b>People Do Not Want This </b></h3>
<p><span>Users in the UK have been clear in showing that they do not want this. Just days after age checks came into effect, VPN apps became the </span><a href="https://www.bbc.co.uk/news/articles/cn72ydj70g5o"><span>most downloaded</span></a><span> on Apple's App Store in the UK. The BBC </span><a href="https://www.bbc.co.uk/news/articles/cn72ydj70g5o"><span>reported</span></a><span> that one app, Proton VPN, saw an 1,800% spike in UK daily sign-ups after the age check rules took effect. A similar </span><a href="https://www.cbsnews.com/miami/news/pornhub-florida-vpn-google-searches-skyrocket/"><span>spike in searches for VPNs</span></a><span> was evident in January when Florida joined the ever-growing list of </span><a href="https://action.freespeechcoalition.com/age-verification-bills/"><span>U.S. states</span></a><span> in implementing an age verification mandate on sites that host adult content, including </span><a href="https://www.vice.com/en/article/pornhub-pulling-out-of-florida-on-january-1st-2025/"><span>pornography websites</span></a><span> like Pornhub. </span></p>
<p><span>Whilst VPNs may be able to disguise the source of your internet activity, they are not foolproof or </span><a href="https://www.eff.org/deeplinks/2025/01/vpns-are-not-solution-age-verification-laws"><span>a solution</span></a><span> to age verification laws. Ofcom has already started discouraging their use, and with time, it will become increasingly difficult for VPNs to effectively circumvent age verification requirements as enforcement of the OSA adapts and deepens. VPN providers will struggle to keep up with these constantly changing laws to ensure that users can bypass the restrictions, especially as more sophisticated detection systems are introduced to identify and block VPN traffic. </span></p>
<p><span>Some politicians in the Labour Party argued that a ban on VPNs will be essential to prevent users circumventing age verification checks. But banning VPNs, just like introducing age verification measures, will not achieve this goal. It will, however, function as an </span><a href="https://freedomhouse.org/report/special-report/2025/tunnel-vision-anti-censorship-tools-end-end-encryption-and-fight-free?mc_cid=a72124b075#:~:text=This%20report%2C%20Tunnel%20Vision%3A%20Anti,%2C%20open%2C%20and%20interoperable%20internet"><span>authoritarian control on accessing information</span></a><span> in the UK. If you are navigating protecting your privacy or want to learn more about VPNs, EFF provides a </span><a href="https://ssd.eff.org/module/choosing-vpn-thats-right-you"><span>comprehensive guide on using VPNs</span></a><span> and protecting digital privacy—a valuable resource for anyone looking to use these tools.</span></p>
<p><span>Alongside increased VPN usage, a </span><a href="https://petition.parliament.uk/petitions/722903"><span>petition calling for the repeal</span></a><span> of the Online Safety Act recently hit more than 400,000 signatures. In its </span><a href="https://petition.parliament.uk/petitions/722903"><span>official response to the petition</span></a><span>, the UK government said that it “has no plans to repeal the Online Safety Act, and is working closely with Ofcom to implement the Act as quickly and effectively as possible to enable UK users to benefit from its protections.” This is not good enough: the government must immediately treat the reasonable concerns of people in the UK with </span><a href="https://action.openrightsgroup.org/tell-your-mp-online-safety-act-isn%E2%80%99t-working"><span>respect, not disdain</span></a><span>, and revisit the OSA.</span></p>
<h3><b>Users Will Be Exposed to Amplified Discrimination </b></h3>
<p><span>To check users' ages, three types of systems are typically deployed: </span><i><span>age verification</span></i><span>, which requires a person to prove their age and identity; </span><i><span>age assurance</span></i><span>, whereby users are required to prove that they are of a certain age or age range, such as over 18; or </span><i><span>age estimation</span></i><span>, which typically describes the process or technology of estimating ages to a certain range. The OSA requires platforms to check ages through </span><i><span>age assurance</span></i><span> to prove that those accessing platforms are over 18, but leaves the specific tool for measuring this at the platforms’ discretion. This may therefore involve uploading a government-issued ID, or submitting a face scan to an app that will then use a third-party platform to “estimate” your age.</span></p>
<p><span>From what we know about systems that use face scanning in other contexts, such as face recognition technology used by law enforcement, even the best technology is </span><a href="https://www.eff.org/aboutface"><span>susceptible to mistakes</span></a><span> and </span><a href="https://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf"><span>misidentification</span></a><span>. Just last year, a </span><a href="https://bigbrotherwatch.org.uk/press-releases/landmark-legal-challenges-launched-against-facial-recognition-after-police-and-retailer-misidentifications/"><span>legal challenge was launched against the Met Police</span></a><span> after a community worker was wrongly identified and detained following a misidentification by the Met’s live facial recognition system. </span></p>
<p><span>For age assurance purposes, we know that the technology at best has an </span><a href="https://cdt.org/insights/age-estimation-requires-verification-for-many-users/"><span>error range</span></a><span> of over a year, which means that users may risk being incorrectly blocked or locked out of content by erroneous estimations of their age—whether unintentionally or due to discriminatory algorithmic patterns that incorrectly determine people’s identities. These algorithms are </span><a href="https://www.sciencedirect.com/science/article/abs/pii/S0364021302000848"><span>not always reliable</span></a><span>, and even if the technology somehow had 100% accuracy, it would still be an </span><a href="https://www.eff.org/aboutface"><span>unacceptable tool</span></a><span> of invasive surveillance that people should not have to be subject to just to access content that the government could consider harmful.</span></p>
<h3><b>Not Everyone Has Access to an ID or Personal Device </b></h3>
<p><span>Many advocates of the ‘digital transition’ introduce document-based verification requirements or device-based age verification systems on the assumption that every individual has access to a form of identification or their own smartphone. But this is not true. In the UK, </span><a href="https://www.newstatesman.com/politics/2021/05/more-three-million-uk-voters-have-no-form-photo-id"><span>millions of people</span></a><span> don’t hold a form of identification or own a personal mobile device, instead sharing one with family members or using public devices like those at a library or internet cafe. Yet because age checks under the OSA involve checking a user’s age through government-issued ID documents or face scans on a mobile device, millions of people will be left </span><a href="https://www.eff.org/deeplinks/2024/09/digital-id-isnt-everybody-and-thats-okay"><span>excluded</span></a><span> from online speech and will lose access to much of the internet. </span></p>
<p><span>These are primarily lower-income or older people who are </span><a href="https://www.washingtonpost.com/politics/courts_law/getting-a-photo-id-so-you-can-vote-is-easy-unless-youre-poor-black-latino-or-elderly/2016/05/23/8d5474ec-20f0-11e6-8690-f14ca9de2972_story.html"><span>often already marginalized</span></a><span>, and for whom the internet may be a critical part of life. We need to push back against age verification mandates like the Online Safety Act, not just because they make children less safe online, but because they risk undermining crucial access to digital services, eroding privacy and data protection, and limiting freedom of expression. </span></p>
<h3><b>The Way Forward </b></h3>
<p><span>The case of safety online is not solved through technology alone, and children deserve a more intentional and holistic approach to protecting their safety and privacy online—not this lazy strategy that causes more harm than it solves. Rather than </span><a href="https://www.eff.org/deeplinks/2025/01/metas-new-content-policy-will-harm-vulnerable-users-if-it-really-valued-free"><span>weakening rights for already vulnerable communities online</span></a><span>, politicians must acknowledge these shortcomings and explore less invasive approaches to protect </span><a href="https://www.eff.org/wp/privacy-first-better-way-address-online-harms"><span>all people from online harms</span></a><span>. We encourage politicians in the UK to pursue what is best, not what is easy.</span></p>
</div></div></div></description>
<pubDate>Tue, 05 Aug 2025 10:46:18 +0000</pubDate>
<guid isPermaLink="false">110970 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<dc:creator>Paige Collings</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/ageverificationbanner.png" type="image/png" length="1291379" />
</item>
<item>
<title>EFF at the Las Vegas Security Conferences </title>
<link>https://www.eff.org/deeplinks/2025/07/eff-las-vegas-security-conferences</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-contrast="auto">It’s time for EFF’s annual journey to Las Vegas for the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. Our lawyers, activists, and technologists are always excited to support this community of security researchers and tinkerers—the folks who push computer security forward (and somehow survive the Vegas heat in their signature black hoodies). </span></p>
<p><span data-contrast="auto">As in past years, EFF attorneys will be on-site to assist speakers and attendees. If you have legal concerns about an upcoming talk or sensitive infosec research—during the Las Vegas conferences or anytime—don’t hesitate to reach out at <a href="mailto:info@eff.org">info@eff.org</a>. Share a brief summary of the issue, and we’ll do our best to connect you with the right resources. You can also learn more about our work supporting technologists on our Coders’ Rights Project page.</span></p>
<p><span data-contrast="auto">Be sure to swing by the expo areas at all three conferences to say hello to your friendly neighborhood EFF staffers! You’ll probably spot us in the halls, but we’d love for you to stop by our booths to catch up on our latest work, get on our action alerts list, or <a href="https://supporters.eff.org/donate/VirtualVegas--DL">become an EFF member</a>! For the whole week, we’ll have our limited-edition DEF CON 33 t-shirt on hand—I can’t wait to see them take over each conference!</span></p>
<p><div class="media media-element-container media-default media-wysiwyg-align-center"><div id="file-57921" class="file file-image file-image-png">
<h2 class="element-invisible"><a href="/file/defcon-shirt-frontbackpng">defcon-shirt-frontback.png</a></h2>
<div class="content">
<img class="media-element file-default" data-delta="1" src="https://www.eff.org/files/2025/08/05/defcon-shirt-frontback.png" width="1000" height="1000" alt="" /> </div>
</div>
</div></p>
<h3>EFF Staff Presentations</h3>
<p><em><strong><a href="https://bsideslv.org/talks#7RPBUM">Ask EFF at BSides Las Vegas<br /></a></strong></em> At this interactive session, our panelists will share updates on critical digital rights issues and EFF's ongoing efforts to safeguard privacy, combat surveillance, and advocate for freedom of expression. <br />WHEN: Tuesday, August 5, 15:00<br />WHERE: Skytalks at the Tuscany Suites Hotel &amp; Casino</p>
<p><a href="https://defcon.org/html/defcon-33/dc-33-speakers.html#content_60310"><em><strong>Recording PCAPs from Stingrays With a $20</strong><strong> Hotspot</strong></em></a><strong><br /></strong>What if you could use Wireshark on the connection between your cellphone and the tower it's connected to? In this talk we present Rayhunter, a cell site simulator detector built on top of a cheap cellular hotspot. <br />WHEN: Friday, August 8, 13:30<br />WHERE: DEF CON, LVCC - L1 - EHW3 - Track 1</p>
<p><em><strong>Rayhunter Build Clinic</strong><br /></em>Come out and build EFF's Rayhunter! ($10 materials fee as an EFF donation)<br />WHEN: Friday, August 8 at 14:30<br />WHERE: DEF CON, Hackers.Town Community Space<br /><br /><em><strong>Protect Your Privacy Online and on the Streets with EFF Tools</strong></em><br />The Electronic Frontier Foundation (EFF) has been protecting your rights to privacy, free expression, and security online for 35 years! One important way we push for these freedoms is through our free, open source tools. We’ll provide an overview of how these tools work, including Privacy Badger, Rayhunter, Certbot, and Surveillance Self-Defense, and how they can help keep you safe online and on the streets.<br />WHEN: Friday, August 8 at 17:00<br />WHERE: DEF CON, Community Stage</p>
<p><em><strong>Rayhunter Internals</strong></em><br />Rayhunter is an open source project from EFF to detect IMSI catchers. In this follow-up to our main stage talk about the project, we will take a deep dive into the internals of Rayhunter. We will talk about the architecture of the project, what we have gained by using Rust, porting to other devices, how to jailbreak new devices, the design of our detection heuristics, open source shenanigans, and how we analyze files sent to us.<br />WHEN: Saturday, August 9, at 12:00<br />WHERE: DEF CON, Hackers.Town Community Space</p>
<p><a href="https://defcon.org/html/defcon-33/dc-33-speakers.html#content_60355">Ask EFF at DEF CON 33</a><br />We're excited to answer your burning questions on pressing digital rights issues! Our expert panelists will offer brief updates on EFF's work defending your digital rights, before opening the floor for attendees to ask their questions. This dynamic conversation centers challenges DEF CON attendees actually face, and is an opportunity to connect on common causes.<br />WHEN: Saturday, August 9, at 14:30<br />WHERE: DEF CON, LVCC - L1 - EHW3 - Track 4<br /><br /></p>
<h3>EFF Benefit Poker Tournament at DEF CON 33</h3>
<p><span>The <a href="https://www.eff.org/event/betting-your-digital-rights-eff-benefit-poker-tournament-def-con-33">EFF Benefit Poker Tournament is back for DEF CON 33</a>!</span> Your buy-in is paired with a donation to support EFF’s mission to protect online privacy and free expression for all. Join us at the Planet Hollywood Poker Room as a player or spectator. Play for glory. Play for money. Play for the future of the web. <br />WHEN: Friday, August 8, 2025 - 12:00-15:00<br />WHERE: Planet Hollywood Poker Room, 3667 Las Vegas Blvd South, Las Vegas, NV 89109<br /><br /></p>
<h3>Beard and Mustache Contest at DEF CON 33</h3>
<p>Yes, it's exactly what it sounds like. Join EFF at the intersection of facial hair and hacker culture. Spectate, heckle, or compete in any of four categories: Full Beard, Partial Beard, Moustache Only, or Freestyle (anything goes, so create your own facial apparatus!). Prizes! Donations to EFF! Beard oil! <a href="https://x.com/dcbeardcontest">Get the latest updates.</a><br />WHEN: Saturday, August 9, 10:00-12:00<br />WHERE: DEF CON, Contest Stage (Look for the Moustache Flag)<br /><br /></p>
<h3>Tech Trivia Contest at DEF CON 33</h3>
<p><a href="https://defcon.org/html/defcon-33/dc-33-contests.html#orga_41061">Join us for some tech trivia</a> on Saturday, August 9 at 7:00 PM! EFF's team of technology experts has crafted challenging trivia about the fascinating, obscure, and trivial aspects of digital security, online rights, and internet culture. Competing teams will plumb the unfathomable depths of their knowledge, but only the champion hive mind will claim the First Place Tech Trivia Trophy and EFF swag pack. The second and third place teams will also win great EFF gear. <br />WHEN: Saturday, August 9, 19:00-22:00<br />WHERE: DEF CON, Contest Stage</p>
<h2><strong>Join the Cause!</strong></h2>
<p class="subhead">Come find our table at BSidesLV (Middle Ground), Black Hat USA (back of the Business Hall), and DEF CON (Vendor Hall) to learn more about the latest in online rights, get on our action alert list, or <a href="https://supporters.eff.org/donate/VirtualVegas--DL">donate to become an EFF member</a>. We'll also have our limited-edition DEF CON 33 shirts available starting Monday at BSidesLV! These shirts have a puzzle incorporated into the design. Snag one online for yourself starting on Tuesday, August 5 if you're not in Vegas!</p>
<p class="subhead take-action"><a href="https://supporters.eff.org/donate/VirtualVegas--DL">Join EFF</a></p>
<p class="subhead take-explainer">Support Security &amp; Digital Innovation</p>
</div></div></div></description>
<pubDate>Tue, 05 Aug 2025 04:24:58 +0000</pubDate>
<guid isPermaLink="false">110956 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025_def_con_butterfly_banner.png" type="image/png" length="61272" />
</item>
<item>
<title>Digital Rights Are Everyone’s Business, and Yours Can Join the Fight!</title>
<link>https://www.eff.org/deeplinks/2025/08/digital-rights-are-everyones-business-your-team-can-join-fight</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>Companies large and small are <a href="https://www.eff.org/thanks">doubling down on digital rights</a>, and we’re excited to see more and more of them join EFF. We’re first and always an organization who fights for users, so you might be asking: Why does EFF work with corporate donors, and why do they want to work with us?</p>
<div class="field field--name-body field--type-text-with-summary field--label-hidden">
<p class="take-action take-explainer">SHOW YOUR COMPANY SUPPORTS A BETTER DIGITAL FUTURE</p>
</div>
<div class="field__items">
<div class="field__item even">
<p class="take-action"><a href="https://eff.org/thanks" title="DONATE TODAY ">JOIN EFF TODAY</a></p>
</div>
</div>
<p>Businesses want to work with EFF for two reasons:</p>
<ol>
<li>They, their employees, and their customers believe in EFF’s values.</li>
<li>They know that when EFF wins, we all win.</li>
</ol>
<p>Both customers and employees alike care about working with organizations they know share their values. And issues like data privacy, sketchy uses of surveillance, and free expression are pretty top of mind for people these days. Research shows that today’s working adults take philanthropy seriously, whether they’re giving organizations their money or their time. For younger generations (like the Millennial EFFer writing this blog post!) especially, feeling like a meaningful part of the fight for good adds to a sense of purpose and fulfillment. Given the choice to spend hard-earned cash with techno-authoritarians versus someone willing to take a stand for digital freedom: We’ll take option two, thanks.</p>
<p>When EFF wins, users win. Standing up for the ability to access, use, and build on technology means that a handful of powerful interests won’t have unfair advantages over everyone else. Whether it’s the fight for net neutrality, beating back patent trolls in court, protecting the right to repair and tinker, or pushing for decentralization and interoperability, EFF’s work can build a society that supports creativity and innovation; where established players aren’t allowed to silence the next generation of creators. Simply put: Digital rights are good for business!</p>
<p>The trust of EFF’s membership is based on 35 years of speaking truth to power, whether it’s on Capitol Hill or in Silicon Valley (and let’s be honest, if EFF were Big Tech astroturf, we’d drive nicer cars). EFF will always lead the work and invite supporters to join us, not the other way around. EFF will gratefully thank the companies who join us and offer employees and customers ways to get involved, too. EFF won’t take money from Google, Apple, Meta, Microsoft, Amazon, or Tesla, and we won’t endorse or sponsor a company, service, or product. Most importantly: EFF won’t alter the mission or the message to meet a donor’s wishes, no matter how much they’ve donated.</p>
<p>A few of the ways your team can support EFF:</p>
<ol>
<li>Cash donations</li>
<li>Sponsoring an EFF event</li>
<li>Providing an in-kind product or service</li>
<li>Matching your employees’ gifts</li>
<li>Boosting our messaging</li>
</ol>
<p>Ready to join us in the fight for a better future? Visit <a href="https://www.eff.org/thanks">eff.org/thanks</a>.</p>
</div></div></div></description>
<pubDate>Mon, 04 Aug 2025 23:48:18 +0000</pubDate>
<guid isPermaLink="false">110969 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/innovation">Creativity &amp; Innovation</category>
<category domain="https://www.eff.org/issues/security">Security</category>
<category domain="https://www.eff.org/issues/coders">Coders' Rights Project</category>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<dc:creator>Tierney Hamilton</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/organzational-members-handshake_0.jpg" type="image/jpeg" length="142627" />
</item>
<item>
<title>Data Brokers Are Ignoring Privacy Law. We Deserve Better.</title>
<link>https://www.eff.org/deeplinks/2025/08/data-brokers-are-ignoring-privacy-law-we-deserve-better</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>Of the many principles EFF fights for in <a href="https://www.eff.org/deeplinks/2023/04/digital-privacy-legislation-civil-rights-legislation">consumer data privacy legislation</a>, one of the most basic is a right to access the data companies have about you. It’s only fair. So many companies collect information about us without our knowledge or consent. We at least should have a way to find out what they purport to know about our lives.</p>
<p>Yet a recent paper from <a href="https://arxiv.org/pdf/2506.21914">researchers at the University of California, Irvine</a> found that, of 543 data brokers in California’s data broker registry at the time of publication, 43 percent failed to even <em>respond</em> to requests to access data.</p>
<p class="pull-quote">43 percent of registered data brokers in California failed to even <em>respond</em> to requests to access data, one study shows.</p>
<p>Let’s stop there for a second. That’s more than four in ten companies from an industry that makes its money from collecting and selling our personal information, ignoring one of our most basic rights under the California Consumer Privacy Act: the right to know what information companies have about us.</p>
<p>Such failures violate the law. If this happens to you, you should file a complaint with the <a href="https://cppa.ca.gov/webapplications/complaint">California Privacy Protection Agency (CPPA)</a> and the <a href="https://oag.ca.gov/contact/consumer-complaint-against-business-or-company">California Attorney General's Office</a>. </p>
<p>This is particularly galling because it’s not easy to file a request in the first place. As these researchers pointed out, there is no streamlined process for these time-consuming requests. People often won’t have the time or energy to see them through. Yet when someone does make the effort to file a request, some companies still feel just fine ignoring the law and their customers completely.</p>
<p>Four in ten data brokers are leaving requesters on read, in violation of the law and our privacy rights. That’s not a passing grade in anyone’s book.</p>
<p>Without consequences to back up our rights, as this research illustrates, many companies will bank on not getting caught, or factor weak slaps on the wrist into the cost of doing business.</p>
<p>This is why EFF <a href="https://www.eff.org/deeplinks/2019/01/you-should-have-right-sue-companies-violate-your-privacy">fights</a> for bills that <a href="https://www.eff.org/deeplinks/2025/04/eff-congress-heres-what-strong-privacy-law-looks">have teeth</a>. For example, we demand that people have the right to sue for privacy violations themselves—what’s known as a <a href="https://www.eff.org/deeplinks/2025/04/eff-congress-heres-what-strong-privacy-law-looks">private right of action.</a> Companies hate this form of enforcement, because it can cost them <a href="https://www.americanbar.org/groups/business_law/resources/business-law-today/2021-february/historic-biometric-privacy-settlement/">real money</a> when they flout the law.</p>
<p>When the CCPA started out as a <a href="https://repository.uclawsf.edu/cgi/viewcontent.cgi?article=3099&amp;context=ca_ballot_inits">ballot initiative</a>, it had a private right of action, including to enforce access requests. But when the legislature enacted the CCPA (in exchange for the initiative’s proponents removing it from the ballot), corporate interests killed the private right of action <a href="https://www.eff.org/document/sb-561-analysis-california-legislature">in negotiations</a>.</p>
<p>We encourage the California Privacy Protection Agency and the California Attorney General’s Office, which both have the authority to bring these companies to task under the CCPA, to look into these findings. Moving forward, we all have to continue to fight for better laws, to strengthen existing laws, and call on states to <a href="https://www.eff.org/deeplinks/2025/06/why-are-hundreds-data-brokers-not-registering-states">enforce the laws on their books</a> to respect everyone’s privacy. Data brokers must face real consequences for brazenly flouting our privacy rights.</p>
</div></div></div></description>
<pubDate>Mon, 04 Aug 2025 16:31:07 +0000</pubDate>
<guid isPermaLink="false">110967 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<category domain="https://www.eff.org/issues/know-your-rights">Know Your Rights</category>
<dc:creator>Hayley Tsukayama</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/California-bear_0.jpg" type="image/jpeg" length="285676" />
</item>
<item>
<title>No, the UK’s Online Safety Act Doesn’t Make Children Safer Online</title>
<link>https://www.eff.org/deeplinks/2025/08/no-uks-online-safety-act-doesnt-make-children-safer-online</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>Young people should be able to access information, speak to each other and to the world, play games, and express themselves online without the government making decisions about what speech is permissible. But in one of the </span><a href="https://www.eff.org/deeplinks/2024/12/global-age-verification-measures-2024-year-review"><span>latest misguided attempts</span></a><span> to protect children online, internet users of all ages in the UK are being forced to prove their age before they can access millions of websites under the country’s Online Safety Act (OSA). </span></p>
<p><span>The legislation attempts to make the UK “the safest place” in the world to be online by placing a duty of care on online platforms to protect their users from harmful content. It mandates that any site accessible in the UK—including </span><a href="https://www.bbc.co.uk/news/articles/cj4ep1znk4zo"><span>social media</span></a><span>, </span><a href="https://support.google.com/legal/contact/UK_Online_Safety_Act_Form?hl=en"><span>search engines</span></a><span>, </span><a href="https://support.spotify.com/uk/article/age-restricted-content-age-check/"><span>music sites</span></a><span>, and </span><a href="https://www.ofcom.org.uk/online-safety/protecting-children/uks-major-porn-providers-agree-to-age-checks-from-next-month?utm_medium=email&amp;utm_campaign=UKs%20major%20porn%20providers%20agree%20to%20age%20checks%20from%20next%20month&amp;utm_content=UKs%20major%20porn%20providers%20agree%20to%20age%20checks%20from%20next%20month+CID_9102e69a51bd037bef2c49d96f85430b&amp;utm_source=media%20releases&amp;utm_term=major%20providers%20agree%20to%20bring%20in%20robust%20methods%20to%20check%20users%20age%20for%20the%20first%20time"><span>adult content providers</span></a><span>—enforce age checks to prevent children from seeing </span><a href="https://www.legislation.gov.uk/ukpga/2023/50/section/62"><span>harmful content</span></a><span>. This is </span><a href="https://www.ofcom.org.uk/siteassets/resources/documents/consultations/category-1-10-weeks/statement-protecting-children-from-harms-online/main-document/volume-1-overview-scope-and-regulatory-approach.pdf?v=396663"><span>defined</span></a><span> in three categories, and failure to comply could result in fines of up to 10% of global revenue or courts blocking services:<br /><br /></span></p>
<ol>
<li><b>Primary priority content</b><span> that is harmful to children: </span>
<ol>
<li><span>Pornographic content.</span></li>
<li><span>Content which encourages, promotes or provides instructions for:</span><span></span>
<ol>
<li><span>suicide;</span></li>
<li><span>self-harm; or </span></li>
<li><span>an eating disorder or behaviours associated with an eating disorder.</span><span></span></li>
</ol>
</li>
</ol>
</li>
<li><b>Priority content</b><span> that is harmful to children: </span></li>
<ol>
<li><span>Content that is abusive on the basis of race, religion, sex, sexual orientation, disability or gender reassignment;</span></li>
<li><span>Content that incites hatred against people on the basis of race, religion, sex, sexual orientation, disability or gender reassignment; </span></li>
<li><span>Content that encourages, promotes or provides instructions for serious violence against a person; </span></li>
<li><span>Bullying content;</span></li>
<li><span>Content which depicts serious violence against or graphically depicts serious injury to a person or animal (whether real or fictional); </span></li>
<li><span>Content that encourages, promotes or provides instructions for stunts and challenges that are highly likely to result in serious injury; and </span></li>
<li><span>Content that encourages the self-administration of harmful substances.</span></li>
</ol>
<li><b>Non-designated content </b><span>that is harmful to children (NDC): </span>
<ol>
<li><span>Content is NDC if it presents a material risk of significant harm to an appreciable number of children in the UK, provided that the risk of harm does not flow from any of the following:</span>
<ol>
<li><span>the content’s potential financial impact;</span></li>
<li><span>the safety or quality of goods featured in the content; or</span></li>
<li><span>the way in which a service featured in the content may be performed.</span></li>
</ol>
</li>
</ol>
</li>
</ol>
<p><span>Online service providers must make a judgement about whether the content they host is harmful to children, and if so, address the risk by implementing a number of measures, which </span><a href="https://www.ofcom.org.uk/siteassets/resources/documents/consultations/category-1-10-weeks/statement-protecting-children-from-harms-online/main-document/volume-1-overview-scope-and-regulatory-approach.pdf?v=396663"><span>includes, but is not limited</span></a><span> to:<br /><br /></span></p>
<ol>
<li><b>Robust age checks:</b><span> Services must use “highly effective age assurance to protect children from this content. If services have minimum age requirements and are not using highly effective age assurance to prevent children under that age using the service, they should assume that younger children are on their service and take appropriate steps to protect them from harm.” <br /><br />To do this, all users on sites that host this content must verify their age, </span><a href="https://www.ofcom.org.uk/online-safety/protecting-children/age-checks-for-online-safety--what-you-need-to-know-as-a-user"><span>for example</span></a><span> by uploading a form of ID like a passport, taking a face selfie or video to facilitate age assurance through third-party services, or giving permission for the age-check service to access information from your bank about whether you are over 18. <br /><br /></span></li>
<li><b>Safer algorithms: </b><span>Services “will be expected to configure their algorithms to ensure children are not presented with the most harmful content and take appropriate action to protect them from other harmful content.”<br /><br /></span></li>
<li><b>Effective moderation: </b><span>All services “must have content moderation systems in place to take swift action against content harmful to children when they become aware of it.” </span></li>
</ol>
<p><span>Since these measures took effect in late July, social media platforms </span><a href="https://www.bbc.co.uk/news/articles/cj4ep1znk4zo"><span>Reddit</span></a><span>, </span><a href="https://bsky.social/about/blog/07-10-2025-age-assurance"><span>Bluesky</span></a><span>, </span><a href="https://discord.com/safety/adapting-discord-for-the-uk-online-safety-act"><span>Discord</span></a><span>, and </span><a href="https://help.x.com/en/rules-and-policies/age-assurance"><span>X</span></a><span> all introduced age checks to block children from seeing harmful content on their sites. Porn websites like </span><a href="https://www.ofcom.org.uk/online-safety/protecting-children/uks-major-porn-providers-agree-to-age-checks-from-next-month"><span>Pornhub and YouPorn</span></a><span> implemented age assurance checks on their sites, now asking users to either upload government-issued ID, provide an email address for technology to analyze other online services where it has been used, or submit their information to a third-party vendor for age verification. Sites like Spotify are also </span><a href="https://www.404media.co/spotify-uk-age-check-verification-yoti/"><span>requiring users</span></a><span> to submit face scans to third-party digital identity company Yoti to access content labelled 18+. Ofcom, which oversees implementation of the OSA, went further by sending letters to try to enforce the UK legislation on U.S.-based companies such as the </span><a href="https://www.politico.com/f/?id=00000198-573c-d5ca-af99-773c9e750000"><span>right-wing platform Gab</span></a><span>. </span></p>
<h3><b>The UK Must Do Better</b></h3>
<p><span>The UK is </span><a href="https://www.eff.org/deeplinks/2024/12/global-age-verification-measures-2024-year-review"><span>not alone</span></a><span> in pursuing such a misguided approach to protect children online: the U.S. Supreme Court recently </span><a href="https://www.eff.org/deeplinks/2025/07/despite-supreme-court-setback-eff-fights-against-online-age-mandates"><span>paved the way</span></a><span> for states to require websites to check the ages of users before allowing them access to graphic sexual materials; courts in France last week </span><a href="https://www.euractiv.com/section/tech/news/major-blow-to-pornhub-as-frances-highest-court-re-introduces-age-verification/"><span>ruled</span></a><span> that porn websites can check users’ ages; the European Commission is </span><a href="https://www.eff.org/deeplinks/2025/07/just-banning-minors-social-media-not-protecting-them"><span>pushing forward</span></a><span> with plans to test its age-verification app; and Australia’s </span><a href="https://www.theguardian.com/media/2025/feb/22/social-media-bans-for-teens-australia-has-passed-one-should-other-countries-follow-suit"><span>ban on youth under the age of 16</span></a><span> accessing social media is likely to be implemented in December. </span></p>
<p><span>But the UK’s scramble to find an effective age verification method shows us that there isn't one, and it’s high time for politicians to take that seriously. The Online Safety Act is a threat to the privacy of users, restricts free expression by arbitrating speech online, exposes users to algorithmic discrimination through face checks, and leaves millions of people without a personal device or form of ID excluded from accessing the internet. </span></p>
<p><span>And, to top it all off, UK internet users are sending a very clear message that they do not want anything to do with this censorship regime. Just days after age checks came into effect, VPN apps became the </span><a href="https://www.bbc.co.uk/news/articles/cn72ydj70g5o"><span>most downloaded</span></a><span> on Apple's App Store in the UK, and a </span><a href="https://petition.parliament.uk/petitions/722903"><span>petition calling for the repeal</span></a><span> of the Online Safety Act recently hit more than 400,000 signatures. </span></p>
<p><span>The internet must remain a place where all voices can be heard, free from discrimination or censorship by government agencies. If the UK really wants to achieve its goal of being the safest place in the world to go online, it must lead the way in introducing policies that actually protect all users—including children—rather than pushing the enforcement of legislation that harms the very people it was meant to protect.</span></p>
</div></div></div></description>
<pubDate>Fri, 01 Aug 2025 16:32:50 +0000</pubDate>
<guid isPermaLink="false">110963 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<dc:creator>Paige Collings</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/ageverificationbanner.png" type="image/png" length="1291379" />
</item>
<item>
<title>TechEd Collab: Building Community in Arizona Around Tech Awareness</title>
<link>https://www.eff.org/deeplinks/2025/07/teched-collab-building-community-arizona-around-tech-awareness</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>Earlier this year, EFF welcomed </span><a href="https://techedcollab.org/home/"><span>Technology Education Collaborative (TEC)</span></a><span> into the Electronic Frontier Alliance </span><a href="https://efa.eff.org/"><span>(EFA)</span></a><span>. TEC empowers everyday people to become informed users of today's extraordinary technology, and helps people better understand the tech that surrounds them on a daily basis. TEC does this by hosting in-person, hands-on events, including </span><a href="https://www.eff.org/issues/right-to-repair"><span>right to repair</span></a><span> workshops, privacy meetups, tech field trips, and demos. We got the chance to catch up with Connor Johnson, Chief Technology Officer of TEC, and speak with him about the work TEC is doing in the Greater Phoenix area: <br /></span></p>
<p><b>Connor, tell us how Technology Education Collaborative got started, and about its mission.</b></p>
<blockquote><p><i><span>TEC was started with the idea of creating a space where industry professionals, students, and the community at large could learn about technology together. We teamed up with Gateway Community College to build the </span></i><a href="https://techshortcuts.com/acsl/?ref=techedcollab.org"><i><span>Advanced Cyber Systems Lab</span></i></a><i><span>. A lot of tech groups in Phoenix meet at varying locations, because they can’t afford or find a dedicated space. TEC hosts community technology-focused groups at the Advanced Cyber Systems Lab, so they can have the proper equipment to work on and collaborate on their projects.</span></i></p>
</blockquote>
<p><b>Speaking of projects, let's talk about some of the main priorities of TEC: right to repair, privacy, and cybersecurity. Having the only right to repair hub in the greater Phoenix metro valley, what concerns do you see on the horizon? </b></p>
<blockquote><p><i><span>One of our big concerns is that many companies have slowly shifted away from repairability to a sense of convenience. We are thankful for the donations from iFixIt that allow people to use the tools they may otherwise not know they need or could afford. Community members and IT professionals have come to use our anti-static benches to fix everything from TVs to 3D printers. We are also starting to host ‘Hardware Happy Hour’ so anyone can bring their hardware projects in and socialize with like-minded people.</span></i></p>
</blockquote>
<p><b>How’s your privacy and cybersecurity work resonating with the community?</b></p>
<blockquote><p><i><span>We have had a host of different speakers discuss the current state of privacy and how it can affect different individuals. It was also wonderful to have your Surveillance Litigation Director, </span></i><a href="https://www.eff.org/about/staff/andrew-crocker?ref=techedcollab.org"><i><span>Andrew Crocker</span></i></a><i><span>, speak at our July edition of Privacy PIE. So many of the attendees were thrilled to be able to ask him questions and get clarification on current issues. </span></i><a href="https://techedcollab.org/people-christina-eichelkraut/?ref=techedcollab.org"><i><span>Christina</span></i></a><i><span>, CEO of TEC, has done a great job leading our Privacy PIE events and discussing the legal situation surrounding many privacy rights people take for granted. One of my favorite presentations was when we discussed privacy concerns with modern cars, where she touched on aspects like how the cameras are tied to car companies' systems and data collection.</span></i></p>
<p><i><span>TEC’s current goal is to focus on building a community that is not just limited to cybersecurity itself. One problem that we’ve noticed is that there are a lot of groups focused on security that don’t branch out into other fields in tech. Security affects all aspects of technology, which is why TEC has been branching out its efforts to other fields within tech like hardware and programming. A deeper understanding of the fundamentals can help us to build better systems from the ground up, rather than applying cybersecurity as an afterthought. <br /></span></i></p>
<p><i><span>In the field of cybersecurity, we have been working on a project building a small business network. The idea behind this initiative is to allow small businesses to independently set up their own network in a way that provides a good layer of security. Many shops either don’t have the money to afford a security-hardened network or don’t have the technical know-how to set one up. We hope this open-source project will allow people to set up the network themselves, and give students a way to gain valuable work experience.</span></i></p>
</blockquote>
<p><b>It’s awesome to hear of all the great things TEC is doing in Phoenix! How can people plug in and get engaged and involved?</b></p>
<blockquote><p><i><span>TEC can always benefit from more volunteers or donations. Our goal is to build community, and we are happy to have anyone join us. All are welcome to the Advanced Cyber System lab at Gateway Community College – Washington Campus, Monday through Thursday, 4 pm to 8 pm. Our website is </span></i><a href="http://www.techedcollab.org"><i><span>www.techedcollab.org</span></i></a><i><span>, and on Facebook we’re at </span></i><a href="https://www.facebook.com/techedcollab"><i><span>www.facebook.com/techedcollab</span></i></a><i><span>. People can also join our Discord server for some great discussions and updates on our upcoming events!</span></i></p>
</blockquote>
</div></div></div></description>
<pubDate>Thu, 31 Jul 2025 23:23:43 +0000</pubDate>
<guid isPermaLink="false">110962 at https://www.eff.org</guid>
<dc:creator>Christopher Vines</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/efa-starburst-banner.png" type="image/png" length="24969" />
</item>
<item>
<title>👮 Amazon Ring Is Back in the Mass Surveillance Game | EFFector 37.9</title>
<link>https://www.eff.org/deeplinks/2025/07/amazon-ring-back-mass-surveillance-game-effector-379</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even">
<p>EFF is gearing up to beat the heat in Las Vegas for the summer security conferences! Before we make our journey to the Strip, we figured we'd get y'all up to speed with a <a href="https://eff.org/effector/37/9">new edition of EFFector</a>.</p>
<p>This time we're covering <span>an <a href="https://www.eff.org/deeplinks/2025/07/when-your-power-meter-becomes-tool-mass-surveillance">illegal mass surveillance scheme</a> by the Sacramento Municipal Utility District</span>, <span>calling out dating apps for <a href="https://www.eff.org/deeplinks/2025/07/dating-apps-need-learn-how-consent-works">using intimate data</a>—like sexual preferences or identity—to train AI</span>, and explaining why we're <span><a href="https://www.eff.org/deeplinks/2025/07/we-support-wikimedia-foundations-challenge-uks-online-safety-act">backing the Wikimedia Foundation</a> in their challenge to the UK’s Online Safety Act</span>.</p>
<p>Don't forget to also check out our audio companion to EFFector! We're interviewing staff about some of the important work that they're doing. This time, EFF Senior Policy Analyst Matthew Guariglia explains <span>how Amazon Ring is <a href="https://www.eff.org/deeplinks/2025/07/amazon-ring-cashes-techno-authoritarianism-and-mass-surveillance">cashing in on the rising tide</a> of techno-authoritarianism</span>. Listen now on <a href="https://youtu.be/yup2r4Y0pZc">YouTube</a> or the <a href="https://archive.org/details/37.9_20250730">Internet Archive</a>.</p>
<p class="take-action"><a href="https://youtu.be/yup2r4Y0pZc">LISTEN TO EFFECTOR</a></p>
<p class="take-action take-explainer"><span>EFFECTOR 37.9 - Amazon Ring Is Back in the Mass Surveillance Game</span></p>
<p><span>Since 1990, EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression.</span></p>
<p><span>Thank you to the supporters around the world who make our work possible! If you're not a member yet, <a href="https://eff.org/effect">join EFF today</a> to help us fight for a brighter digital future.</span></p>
</div></div></div></description>
<pubDate>Wed, 30 Jul 2025 17:13:45 +0000</pubDate>
<guid isPermaLink="false">110953 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/effector_banner_5.jpeg" type="image/jpeg" length="130379" />
</item>
<item>
<title>Podcast Episode: Smashing the Tech Oligarchy</title>
<link>https://www.eff.org/deeplinks/2025/07/podcast-episode-smashing-tech-oligarchy</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span data-contrast="auto">Many of the internet’s thorniest problems can be attributed to the concentration of power in a few corporate hands: the surveillance capitalism that makes it profitable to invade our privacy, the lack of algorithmic transparency that turns artificial intelligence and other tech into impenetrable black boxes, the rent-seeking behavior that seeks to monopolize and mega-monetize an existing market instead of creating new products or markets, and much more.</span></p>
<p><div class="mytube" style="width: 100%;">
<div class="mytubetrigger" tabindex="0">
<img src="https://www.eff.org/sites/all/modules/custom/mytube/play.png" class="mytubeplay" alt="play" style="top: -4px; left: 20px;" />
<div hidden class="mytubeembedcode">%3Ciframe%20height%3D%2252px%22%20width%3D%22100%25%22%20frameborder%3D%22no%22%20scrolling%3D%22no%22%20seamless%3D%22%22%20src%3D%22https%3A%2F%2Fplayer.simplecast.com%2Fe4b50178-f872-4b2c-9015-cec3a88bc5de%3Fdark%3Dtrue%26amp%3Bcolor%3D000000%22%20allow%3D%22autoplay%22%3E%3C%2Fiframe%3E</div>
</div>
<div class="mytubetext">
<span><a href="https://www.eff.org/deeplinks/2008/02/embedded-video-and-your-privacy" rel="noreferrer" target="_blank">Privacy info.</a></span>
<span>This embed will serve content from <em><a rel="nofollow" href="https://player.simplecast.com/e4b50178-f872-4b2c-9015-cec3a88bc5de?dark=true&amp;color=000000">simplecast.com</a></em><br /></span>
</div>
</div>
</p>
<p><span data-contrast="auto"><i><a href="https://open.spotify.com/show/4UAplFpPDqE4hWlwsjplgt" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/spotify-podcast-badge-blk-wht-330x80.png" alt="Listen on Spotify Podcasts Badge" width="198" height="48" /></a> <a href="https://podcasts.apple.com/us/podcast/effs-how-to-fix-the-internet/id1539719568" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/applebadge2.png" alt="Listen on Apple Podcasts Badge" width="195" height="47" /></a> <a href="https://music.amazon.ca/podcasts/bf81f00f-11e1-431f-918d-374ab6ad07cc/how-to-fix-the-internet?ref=dmm_art_us_HTFTI" target="_blank" rel="noopener noreferrer"><img height="47" width="195" src="https://www.eff.org/files/styles/kittens_types_wysiwyg_small/public/2024/02/15/us_listenon_amazonmusic_button_charcoal.png?itok=YFXPE4Ii" alt="Listen on Amazon Music Badge" /></a> <a href="https://feeds.eff.org/howtofixtheinternet" target="_blank" rel="noopener noreferrer"><img src="https://www.eff.org/files/2021/11/01/subscriberss.png" alt="Subscribe via RSS badge" width="194" height="50" /></a></i></span></p>
<p><span data-contrast="auto">(You can also find this episode on the <a href="https://archive.org/details/htfti-s6e7-kara-swisher-vfinal" target="_blank" rel="noopener noreferrer">Internet Archive</a> and on <a href="https://youtu.be/0NBPWEK0T5E?si=qNPoIZOnvwZ9RJCq" target="_blank" rel="noopener noreferrer">YouTube</a>.)</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Kara Swisher has been documenting the internet’s titans for almost 30 years through a variety of media outlets and podcasts. She believes that with adequate regulation we can keep people safe online without stifling innovation, and we can have an internet that’s transparent and beneficial for all, not just a collection of fiefdoms run by a handful of homogenous oligarchs.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">In this episode you’ll learn about:</span></p>
<ul>
<li><span data-ccp-props="{}">Why it’s so important that tech workers speak out about issues they want to improve and work to create companies that elevate best practices</span></li>
<li><span data-ccp-props="{}">Why completely unconstrained capitalism turns technology into weapons instead of tools</span></li>
<li><span data-ccp-props="{}">How antitrust legislation and enforcement can create a healthier online ecosystem</span></li>
<li><span data-ccp-props="{}">Why AI could either bring abundance for many or make the very rich even richer</span></li>
<li><span data-ccp-props="{}">The small online media outlets still doing groundbreaking independent reporting that challenges the tech oligarchy</span><span data-ccp-props="{}"> </span></li>
</ul>
<p><span data-contrast="auto">Kara Swisher is one of the world's foremost tech journalists and critics, and currently hosts two podcasts: </span><a href="https://podcasts.voxmedia.com/show/on-with-kara-swisher" target="_blank" rel="noopener noreferrer"><span data-contrast="none">On with Kara Swisher</span></a><span data-contrast="auto"> and </span><a href="https://podcasts.voxmedia.com/show/pivot" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Pivot</span></a><span data-contrast="auto">, the latter co-hosted by New York University Professor </span><a href="https://www.profgalloway.com/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Scott Galloway</span></a><span data-contrast="auto">. She's been covering the tech industry since the 1990s for outlets including the Washington Post, the Wall Street Journal, and the New York Times; she is a New York Magazine editor-at-large, a CNN contributor, and cofounder of the tech news sites </span><a href="https://en.wikipedia.org/wiki/Recode" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Recode</span></a><span data-contrast="auto"> and </span><a href="https://en.wikipedia.org/wiki/All_Things_Digital" target="_blank" rel="noopener noreferrer"><span data-contrast="none">All Things Digital</span></a><span data-contrast="auto">. She has also authored several books, including “</span><a href="https://www.simonandschuster.com/books/Burn-Book/Kara-Swisher/9781982163907" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Burn Book</span></a><span data-contrast="auto">” (Simon &amp; Schuster, 2024), in which she documents the history of Silicon Valley and the tech billionaires who run it.</span><span data-ccp-props="{}"> </span></p>
<p><span data-contrast="auto">Resources:</span></p>
<ul>
<li><span data-ccp-props="{}">New York Times: “</span><a href="https://www.nytimes.com/2025/05/19/business/media/kara-swisher-podcasts.html" target="_blank" rel="noopener noreferrer"><span data-contrast="none">How Kara Swisher Scaled Even Higher</span></a><span data-contrast="auto">” (May 19, 2025)</span></li>
<li><span data-ccp-props="{}">On with Kara Swisher: “</span><a href="https://podcasts.apple.com/us/podcast/amd-ceo-lisa-su-on-ai-chips-trumps-tariffs-and-the/id1643307527?i=1000705671664" target="_blank" rel="noopener noreferrer"><span data-contrast="none">AMD CEO Lisa Su on AI Chips, Trump's Tariffs and the Magic of Open Source</span></a><span data-contrast="auto">” (May 1, 2025)</span></li>
<li><a href="https://www.wired.com/tag/doge/" target="_blank" rel="noopener noreferrer"><span data-contrast="none">WIRED’s coverage of the Department of Government Efficiency (DOGE)</span></a></li>
<li><span data-ccp-props="{}">EFF: </span><a href="https://www.eff.org/issues/competition" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Competition</span></a></li>
<li><span data-ccp-props="{}">University of California, Berkeley Haas School of Business: </span><a href="https://www.youtube.com/live/brNfTT2q9_0?si=J1ZTMyf5AN799IGQ" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Dean's Speaker Series | Kara Swisher | Tech Journalist</span></a><span data-contrast="auto"> (Oct. 1, 2024)</span><span data-ccp-props="{}"> </span></li>
</ul>
<p><span data-contrast="auto">What do you think of “How to Fix the Internet?” </span><a href="https://forms.office.com/pages/responsepage.aspx?id=qalRy_Njp0iTdV3Gz61yuZZXWhXf9ZdMjzPzrVjvr6VUNUlHSUtLM1lLMUNLWE42QzBWWDhXU1ZEQy4u&amp;web=1&amp;wdLOR=c90ABD667-F98F-9748-BAA4-CA50122F0423" target="_blank" rel="noopener noreferrer"><span data-contrast="none">Share your feedback here</span></a><span data-contrast="auto">.</span></p>
<h3>Transcript</h3>
<p><strong>KARA SWISHER:</strong> It's a tech that's not controlled by a small group of homogeneous people. I think that's pretty much it. I mean, and there's adequate regulation to allow for people to be safe and at the same time, not too much in order to be innovative and do things – you don't want the government deciding everything.<br />It's a place where the internet, which was started by US taxpayers, which was paid for, is beneficial for people, and that there's transparency in it, and that we can see what's happening and what's doing. And again, the concentration of power in the hands of a few people really is at the center of the problem.</p>
<p><strong>CINDY COHN:</strong> That's Kara Swisher, describing the balance she'd like to see in a better digital future. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.</p>
<p><strong>JASON KELLEY:</strong> And I'm Jason Kelley, EFF's Activism Director. You're listening to How to Fix the Internet.</p>
<p><strong>CINDY COHN:</strong> This show is about envisioning a better digital future that we can all work towards.</p>
<p><strong>JASON KELLEY:</strong> And we are excited to have a guest who has been outspoken in talking about how we get there, pointing out the good, the bad and the ugly sides of the tech world.</p>
<p><strong>CINDY COHN</strong>: Kara Swisher is one of the world's foremost tech journalists and critics. She's been covering the industry since the 1990s, and she currently hosts two podcasts: On with Kara Swisher and Pivot, and she's written several books, including last year's Burn Book where she documents the history of Silicon Valley and the tech billionaires who run it.<br />We are delighted that she's here. Welcome, Kara.</p>
<p><strong>KARA SWISHER:</strong> Thank you.</p>
<p><strong>CINDY COHN:</strong> We've had a couple of tech critics on the podcast recently, and one of the kind of themes that's come up for us is you kind of have to love the internet before you can hate on it. And I've heard you describe your journey that way as well. And I'd love for you to talk a little bit about it, because you didn't start off, really, looking for all the ways that things have gone wrong.</p>
<p><strong>KARA SWISHER:</strong> I don't hate it. I don't. It's just, you know, I have eyes and I can see, you know, I mean, uh, one of the expressions I always use is you should, um, believe what you see, not see what you believe. And so I always just, that's what's happening. You can see it happening. You can see the coarsening of our dialogue now offline being affected by online. You could just see what's happened. <br />But I still love the possibilities of technology and the promise of it. And I think that's what attracted me to it in the first place, and it's a question of how you use it as a tool or a weapon. And so I always look at it as a tool, and some people have taken a lot of these technologies and used them as a weapon.</p>
<p><strong>CINDY COHN:</strong> So what was that moment? Did you, do you have a moment when you decided you were really interested in tech and that you really found it to be important and worth devoting your time to?</p>
<p><strong>KARA SWISHER:</strong> I was always interested in it because I had studied propaganda and the uses of TV and radio and stuff. So I was always interested in media, and this was the media on steroids. And so I recall downloading an entire book onto my computer and I thought, oh, look at this. Everything is digital. And so the premise that I came to at the time, or the idea I came to was that everything that can be digitized would be digitized, and that was a huge idea because that means entire industries would change.</p>
<p><strong>CINDY COHN:</strong> Yeah.</p>
<p><strong>JASON KELLEY:</strong> Kara, you started by talking about this concentration of power, which is obvious to anyone who's been paying attention, and at the same time, you know, we did use to have tech leaders who, I think, had less power. It was less concentrated, but also people were more focused, I think, on solving real problems.<br />You know, you talk a lot about Steve Jobs. There was a goal of improving people's lives with technology; it helped the bottom line, but the focus wasn't just on quarterly profits. And I wonder if you can talk a little bit about what you think it would look like if we returned to that in some way. Is that gone?</p>
<p><strong>KARA SWISHER:</strong> I don't think we were there. I think they were always focused on quarterly profits. I think that was a canard. I wrote about it, that they would pretend that they were here to help. You know, it's sort of like the Twilight Zone episode To Serve Man. It's a cookbook. I always thought it was a cookbook for these people. <br />And they were always formulated in terms of making money and maximizing value for their shareholders, which was usually themselves. I wasn't stupid. I understood what they were doing, especially when these stocks went to the moon, especially the early internet days and their first boom. And they became instant, instantaires, I think they were called, which was instant millionaires, and now beyond that.<br />And so I was always aware of the money. Even if they pretended they weren't, they were absolutely aware. And so I don't have a romantic version of this at the beginning, um, except among a small group of people, you know, who, who, who were seeing it, like the Whole Earth Catalog and things like that, who were looking at it as a way to bring everybody together or to spread knowledge throughout the world, which I also believed in too.</p>
<p><strong>JASON KELLEY:</strong> Do you think any of those people are still around?</p>
<p><strong>KARA SWISHER:</strong> No, they’re dead.</p>
<p><strong>JASON KELLEY:</strong> I mean, literally, you know, they're literally dead, but are there any heirs of theirs?</p>
<p><strong>KARA SWISHER:</strong> No, I mean, I don't think they had any power. I don't, I think that some of the theoretical stuff was about that, but no, they didn't have any power. The people that had power were the, the Mark Zuckerbergs, the Googles, and even, you know, the Microsofts, I mean, Bill Gates is kind of the exemplification of all that. As he, he took other people's ideas and he made it into an incredibly powerful company and everybody else sort of followed suit.</p>
<p><strong>JASON KELLEY:</strong> And so mostly for you, the concentration of power is the biggest shift that's happened and you see regulation or, you know, anti-competitive moves as ways to get us back.</p>
<p><strong>KARA SWISHER:</strong> We don't have any, like, if we had any laws, that would be great, but we don't have any that, that constrain them. And now under President Trump, there's not gonna be any rules around AI, probably. There aren't gonna be any significant rules, at least, around any of it.<br />So they, the first period, which was the growth of where we are now, was not constrained in any way, and now it's not just not constrained, but it's helping, whether it's cryptocurrency or things like that. And so I don't feel like there's any restrictions, like, at this point; in fact, there's encouragement by government to do whatever you want.</p>
<p><strong>CINDY COHN:</strong> I think that's a really big worry. And you know, I think you're aware, as are we, that, you know, just because somebody comes in and says they're gonna do something about a problem with legislation doesn't mean that they're, they're actually having that. And I think sometimes we feel like we sit in this space where we're like, we agree with you on the harm, but this thing you wanna do is a terrible idea and trying to get the means and the ends connected is kind of a lot of where we live sometimes, and I think you've seen that as well, that like once you've articulated the harm, that's kind of the start of the journey about whether the thing that you're talking about doing will actually meet that moment.</p>
<p><strong>KARA SWISHER:</strong> Absolutely. The harms, they don't care about, that's the issue. And I think I was always cognizant of the harms, and that can make you seem like, you know, a killjoy of some sort. But it's not, it's just saying, wow, if you're gonna do this social media, you better pay attention to this or that.<br />They acted like the regular problems that people had didn't exist in the world, like racism, you know, sexism. They said, oh, that can be fixed, and they never offered any solutions, and then they created tools that made it worse.</p>
<p><strong>CINDY COHN:</strong> I feel like the people who thought that we could really use technology to build a better world, I, I don't think they were wrong or naive. I just think they got stomped on by the money. Um, and, you know, uh.</p>
<p><strong>KARA SWISHER:</strong> Which inevitably happens.</p>
<p><strong>CINDY COHN:</strong> It does. And the question is, how do you squeeze out something, you know, given that this is the dynamic of capitalism, how do you squeeze out space for protecting people?<br />And we've had times in our society when we've done that better, and we've done that worse. And I feel like there are ways in which this is as bad as has gotten in my lifetime. You know, with the government actually coming in really strongly on the side of, empowering the powerful and disempowering the disempowered.<br />I see competition as a way to do this. EFF was, you know, it was primarily an organization focused on free speech and privacy, but we kind of backed into talking about competition 'cause we felt like we couldn't get at any of those problems unless we talked about the elephant in the room. <br />And I think you think about it, really on the individual, you know, you know all these guys, and on that very individual level of what, what kinds of things will, um, impact them.<br />And I'm wondering if you have some thoughts about the kinds of rules or regulations that might actually, you know, have an impact and not, not turn into, you know, yet another cudgel that they get to wield.</p>
<p><strong>KARA SWISHER:</strong> Well any, any would be good. Like I don't, I don't, there isn't any, there isn't any you could speak of that's really problematic for them, except for the courts which are suing over antitrust issues or some regulatory agencies. But in general, what they've done is created an easy glide path for themselves.<br />I mean, we don't have a national privacy regulation. We don't have algorithmic transparency bills. We don't have data protection really, and to speak of for people. We don't have, you know, transparency into the data they collect. You know, we have more rules and laws on airplanes and cigarettes and everybody else, but we don't have any here. So you know, antitrust is a whole nother area of, of changing, of our antitrust rules. So these are all areas that have to be looked at. But we haven't, they haven't, they haven't passed a thing. I mean, lots of legislators have tried, but, um, it hasn't worked really.</p>
<p><strong>CINDY COHN:</strong> You know, a lot of our supporters are people who work in tech but aren't necessarily, you know, the tech giants; they're not the tops of these companies, but they work in the companies.<br />And one of the things that I, you know, I don't know if you have any insights if you've thought about this, but we speak with them a lot and they're dismayed at what's going on, but they kind of feel powerless. And I'm wondering if you have thoughts like, you know, speaking to the people who aren't the Elons and the, the guys at the top, but who are there, and who I think are critical to keeping these companies going. Are there ways that they can make their voices heard that you've thought of that might work? I guess I'm pulling on your insight because you know the actual people.</p>
<p><strong>KARA SWISHER:</strong> Yeah, you know, speak out. Just speak out. You know, everybody gets a voice these days and there's all kinds of voices that never would've gotten heard and to, you know, talk to legislators, involve customers, um, create businesses where you do those good practices. Like that's the best way to do it is create wealth and capitalism and then use best practices there. That to me is the best way to do that.</p>
<p><strong>CINDY COHN:</strong> Are there any companies that you look at from where you sit that you think are doing a pretty good job or at least trying? I don't know if you wanna call anybody out, but, um, you know, we see a few, um, and I kind of feel like all the air gets sucked out of the room.</p>
<p><strong>KARA SWISHER:</strong> In bits and pieces. In bits and pieces, you know, Apple's good on the privacy thing, but then it's bad on a bunch of other things. Like you could, like, you, you, the problem is, you know, these are shareholder driven companies and so they're gonna do what's best for them and they could, uh, you know, wave over to privacy or wave over to, you know, more diversity, but they really are interested in making money.<br />And so I think the difficulty is figuring out, you know, do they have duties as citizens or do they just have duties as corporate citizens? And so that's always been a difficult thing in our society and will continue to be.</p>
<p><strong>CINDY COHN:</strong> Yeah.</p>
<p><strong>JASON KELLEY:</strong> We've always at EFF really stood up for the user in, in this way where sometimes we're praising a company that normally people are upset with because they did a good thing, right? Apple is good on privacy. When they do good privacy things we say, that's great. You know, and if Apple makes mistakes, we say that too.<br />And it feels like, um, you know, we're in the middle of, I guess, a “tech lash.” I don't know when it started. I don't know if it'll ever end. I don't know if that's even a real term in terms of, like, tech journalism. But do you find that it's difficult to get people to accept sort of, like, any positive praise for companies that are often, at this point, just completely easy to ridicule for all the mistakes they've made?</p>
<p><strong>KARA SWISHER:</strong> I think the tech journalism has gotten really strong. It's gotten, I mean, just look at the DOGE coverage. I think it really, I'll point to <a href="https://www.wired.com/tag/doge/" target="_blank" rel="noopener noreferrer">WIRED</a> as a good example, as they've done astonishing stuff. I think a lot of people have done a lot on, on, uh, you know, the abuses of social media. I think they've covered a lot of issues from the overuse of technology to, you know, all the crypto stuff. It doesn't mean people follow along, but they've certainly been there and revealed a lot of the flaws there. Um, while also covering it as, like, this is what's happening with AI. Like, this is what's happening, here's where it's going. And so you have to cover it as a thing. Like, this is what's being developed. But then there's, uh, others, you know, who have to look into the real problems.</p>
<p><strong>JASON KELLEY:</strong> I get a lot of news from <a href="https://www.404media.co/" target="_blank" rel="noopener noreferrer">404 Media</a>, right?</p>
<p><strong>KARA SWISHER:</strong> Yeah, they’re great.</p>
<p><strong>JASON KELLEY:</strong> That sort of model is relatively new and it sort of sits against some of these legacy models. Do you see, like, a growing role for things like that in a future?</p>
<p><strong>KARA SWISHER:</strong> There's lots of different things. I mean, I came from like, as you mean, part of the time, although I got away from it pretty quickly, but some of 'em are doing great. It just depends on the story, right? Some of the stories are great, like. Uh, you know, uh, there's a ton of people at the Times have done great stuff on, on, on lots of things around kids and abuses and social media.<br />At the same time, there's all these really exciting young, not necessarily young, actually, um, independent media companies, whether it's Casey Newton, at <a href="https://www.platformer.news/" target="_blank" rel="noopener noreferrer">Platformer</a>, or Eric <a href="https://www.newcomer.co/" target="_blank" rel="noopener noreferrer">Newcomer</a> covering VCs, or 404. There's all these really interesting new stuff. That's doing really well. WIRED is another one that's really seen a lot of bounce back under its current editor who just came on relatively recently. <br />So it just depends. It depends on where it is, but there's, <a href="https://www.theverge.com/" target="_blank" rel="noopener noreferrer">Verge</a> does a great job. But I think it's individually the stories in, there's no like big name in this area. There's just a lot of people and then there's all these really interesting experts or people who work in tech who've written a lot. That is always very interesting too, to me. It's interesting to hear from insiders what they think is happening.</p>
<p><strong>CINDY COHN:</strong> Well, I'm happy to hear this, this optimism. 'Cause I worry a lot about, you know, the way that the business model for media has really been hollowed out. And then seeing things like, you know, uh, some of the big broadcast news people folding,</p>
<p><strong>KARA SWISHER:</strong> Yeah, but broadcast never did journalism for tech, come on. Like, some did, I mean, one or two, but it wasn't them who was doing it. It was usually, you know, either the New York Times or these smaller institutions have been doing a great job. There's just been tons and tons of different things, completely different things.</p>
<p><strong>JASON KELLEY:</strong> What do you think about the fear, maybe I'm, I'm misplacing it, maybe it's not as real as I imagine it is. Um, that results from something like a Gawker situation, right. You know, you have wealthy people.</p>
<p><strong>KARA SWISHER:</strong> That was a long time ago.</p>
<p><strong>JASON KELLEY:</strong> It was, but it, you know, a precedent was sort of set, right? I mean, do you think people in working in tech journalism can take aim at, you know, individual people that have a lot of power and wealth in, in the same way that they could before?</p>
<p><strong>KARA SWISHER:</strong> Yeah. I think they can, if they're accurate. Yeah, absolutely.</p>
<p><strong>CINDY COHN:</strong> Yeah, I think you're a good exhibit A for that, you pull no punches and things are okay. I mean, we get asked sometimes, um, you know, are, are you ever under attack because of your, your sharp advocacy? And I kind of think your sharp advocacy protects you as long as you're right. And I think of you as somebody who's also in, in a bit of that position.</p>
<p><strong>KARA SWISHER:</strong> Mmhm.</p>
<p><strong>CINDY COHN:</strong> You may say this is inevitable, but I wanted to ask you, you know, I feel like when I talk with young technical people, um, they've kind of been poisoned by this idea that the only way you can be successful is, is if you're an asshole.<br />That there's no, there's no model, um, that just goes to the deal. So if they want to be successful, they have to be just an awful person. And so even if they might have thought differently beforehand, that's what they think they have to do. And I'm wondering if you run into this as well, and I sometimes find myself trying to think about, you know, alternate role models for technical people, and if you have any that you think of.</p>
<p><strong>KARA SWISHER:</strong> Alternate role models? It's mostly men. But there are, there's all kinds of, like, I just did an interview with Lisa Su, who's head of AMD, one of the few women CEOs. And in AI, there's a number of women. Uh, you know, you don't necessarily have to have diversity to make it better, but it sure helps, right? Because people have a different, not just diversity of gender or diversity of race, but diversity of backgrounds, politics. You know, the more diverse you are, the better products you make, essentially. That's always been my feeling. <br />Look, most of these companies are the same as it ever was, and in fact, there's fewer different people running them, essentially. Um, but you know, that's always been the nature of tech essentially, that it was sort of a man's world.</p>
<p><strong>CINDY COHN:</strong> Yeah, I see that as well. I just worry that young people or junior people coming up think that the only way that you can be successful is a, if you look like the guys who are already successful, but also, you know, if you're just kind of not, you know, if you're weird and not nice.</p>
<p><strong>KARA SWISHER:</strong> It just depends on the person. It's just that when you get that wealthy, you have a lot of people licking you up and down all day, and so you end up in the crazy zone like Elon Musk, or the arrogant zone like Mark Zuckerberg or whatever. It's just they don't get a lot of pushback, and when you don't get a lot of friction, you tend to think everything you do is correct.</p>
<p><strong>JASON KELLEY:</strong> Let's take a quick moment to thank our sponsor. How to Fix The Internet is supported by the Alfred P. Sloan Foundation's program in public understanding of science and technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.<br />We also wanna thank EFF members and donors. You're the reason we exist. EFF has been fighting for digital rights for 35 years, and that fight is bigger than ever. So please, if you like what we do, go to <a href="https://supporters.eff.org/donate/podcast" target="_blank" rel="noopener noreferrer">eff.org/pod</a> to donate. Also, we'd love for you to join us at this year's EFF awards, where we celebrate the people working towards the better digital future that we all care so much about.<br />Those are coming up on September 10th in San Francisco. You can find more information about that at <a href="https://www.eff.org/event/eff-awards-2025" target="_blank" rel="noopener noreferrer">eff.org/awards</a>.<br />We also wanted to share that our friend Cory Doctorow has a new podcast. Have a listen to this: [WHO BROKE THE INTERNET TRAILER]<br />And now back to our conversation with Kara Swisher.</p>
<p><strong>CINDY COHN:</strong> I mean, you watched all these tech giants kind of move over to the Trump side and then, you know, stand there on the inauguration. It sounds like you thought that might've been inevitable.</p>
<p><strong>KARA SWISHER:</strong> I said it was inevitable, they were all surprised. They're always surprised when I'm like, Elon's gonna crack up with the president. Oh look, they cracked up. It's not hard to follow these people. In his case, he's, he's personally, there's something wrong with his head, obviously. He always cracks up with people. So that's what happened here. <br />In that case, they just wanted things. They want things. You think they liked Donald Trump? You're wrong there, I'll tell you. They don't like him. They need him. They wanna use him, and they were irritated by Biden 'cause he presumed to push back on them, and he didn't do a very good job of it, honestly. But they definitely want things.</p>
<p><strong>CINDY COHN:</strong> I think the tech industry came up at a time when deregulation was all the rage, right? So in some ways they were kind of born into a world where regulation was anathema, and they took full advantage of the situation.<br />As did lots of other areas that got deregulated or were not regulated in the first place. But I think tech, because of timing in some ways, tech was really born into this zone. And there were some good things for it too. I mean, you know, EFF was successful in the nineties at making sure that the internet got First Amendment protection, that we didn't go to the other side with things like the Communications Decency Act and squelch any adult material from being put online. But getting that right, and kind of walking through the middle ground where you have regulation that supports people but doesn't squelch them, is just an ongoing struggle.</p>
<p><strong>KARA SWISHER:</strong> Mm-hmm. Absolutely.</p>
<p><strong>JASON KELLEY:</strong> I have this optimistic hope that these companies and their owners sort of crumble as they continue to, as Cory Doctorow says, enshittify, right? The only reason they don't crumble is that they have this lock-in with users. They have this monopoly power, but you see a, you know, a TikTok pops up and suddenly Instagram has a real competitor, not because rules have been put in place to change Instagram, but because a different, new, maybe better platform comes along.</p>
<p><strong>KARA SWISHER:</strong> There’s nothing like competition, making things better. Right? Competition always helps.</p>
<p><strong>JASON KELLEY:</strong> Yeah, when I think of competition law, I think of crushing companies, I think of breaking them up. But what do you think we can do to make this sort of world better and more fertile for new companies? You know, you talked earlier about tech workers.</p>
<p><strong>KARA SWISHER:</strong> Well, you have to pass those things where they don't get to. Antitrust is the best way to do that, right? But those things move really slowly, unfortunately. And, you know, good antitrust legislation and antitrust enforcement, that's happening right now. But it opens up, I mean, the reason Google exists is 'cause of the antitrust actions around Microsoft.<br />And so we have to, like, continue to press on things like that and continue to have regulators that are allowed to pursue cases like that. And then at the same time have a real focus on creating wealth. We wanna create wealth, we wanna give people breaks.<br />We wanna have the government involved in funding some of these things, making it so that small companies don't get run over by larger companies. <br />Not letting power concentrate into a small group of people. When that happens, you end up with fewer companies. They kill them in the crib, these companies. And so not letting things get bought, having scrutiny over things, stuff like that.</p>
<p><strong>CINDY COHN:</strong> Yeah, I think a lot more merger review makes a lot of sense. I think a lot of thinking about, how are companies crushing each other and what are the things that we can do to try to stop that? Obviously we care a lot about interoperability, making sure that technologies that, that have you as a customer don't get to lock you in, and make it so that you're just stuck with their broken business model and can do other things. <br />There's a lot of space for that kind of thing. I mean, you know, I always tell the story, I'm sure you know this, that, you know, if it weren't for the FCC telling AT&amp;T that they had to let people plug something other than phones into the wall, we wouldn't have had the internet, you know, the home internet revolution anyway.</p>
<p><strong>KARA SWISHER:</strong> Right. Absolutely. 100%.</p>
<p><strong>CINDY COHN:</strong> Yeah, so I think we are in agreement with you that, you know, competition is really central, but it's, you know, it's kind of an all-of-the-above. And certainly around privacy issues, we can do a lot around this business model, which I think is driving so many of the other bad things that we are seeing, um, with some comprehensive privacy law.<br />But boy, it sure feels like right now, you know, we've got two branches of government that are not on board with that, and the third one, the courts, doing okay, but slowly and inconsistently. Um, where do you see hope? Where are you looking for it?</p>
<p><strong>KARA SWISHER:</strong> I mean, some of this stuff around AI could be really great for humanity, or it could be great for a small amount of people. That's really, you know, which one do we want? Do we want this technology to be a tool or a weapon against us? Do we want it to be in the hands of bigger companies or in the hands of all of us, where we make decisions around it?<br />Will it help us be safer? Will it help us cure cancer, or is it gonna just make a rich person a billion dollars richer? I mean, it's the age-old story, isn't it? This is not a new theme in America, where the rich get richer and the poor get less. And so these technologies could, as you know, there's a book out recently all about abundance.<br />It could create lots of abundance. It could create lots of interesting new jobs, or it could just put people out of work and let the people who are richer get richer. And I don't think that's a society we wanna have. And years ago I was talking about income inequality with a really wealthy person, and I said, you have to do something about, you know, the fact that we didn't have a $25 minimum wage, which I think would help a lot, lots of innovation would come from that. If people made more money, they'd have a little more choices. And it's worth the investment in people to do that.<br />And I said, we have to either deal with income inequality or armor plate your Tesla. And I think he wanted to armor plate his Tesla. And then of course, the Cybertruck comes out. So there you have it. But, um, I think they don't care about that kind of stuff. You know, they're happy to create those little worlds where they're highly protected, but it's not a world I wanna live in.</p>
<p><strong>CINDY COHN:</strong> Kara, thank you so much. We really appreciate you coming in. I think you sit in such a different place in the world than where we sit, and it's always great to get your perspective.</p>
<p><strong>KARA SWISHER:</strong> Absolutely. Anytime. You guys do amazing work, and you know you're doing amazing work, and you should always keep a watch on these people. You shouldn't be against everything, 'cause some people are right. But you certainly should keep a watch on people.</p>
<p><strong>CINDY COHN:</strong> Well, great. We, we sure will.</p>
<p><strong>JASON KELLEY:</strong> Yeah, we'll keep doing it. Thank you.</p>
<p><strong>CINDY COHN:</strong> Thank you.</p>
<p><strong>KARA SWISHER:</strong> All right. Thank you so much.</p>
<p><strong>CINDY COHN:</strong> Well, I always appreciate how Kara gets right to the point about how the concentration of power among a few tech moguls has led to so many of the problems we face online, and how competition, along with some of the things we so often hear about, like real laws requiring transparency, privacy protections, and data protections, can help shift the tide.</p>
<p><strong>JASON KELLEY:</strong> Yeah, you know, some of these fixes are things that people have been talking about for a long time, and I think we're at a point where everyone agrees on a big chunk of them. You know, especially the ones that we promote, like competition and transparency, and privacy. So it's great to hear that Kara, who's someone that, you know, has worked on this issue and in tech for a long time and thought about it and loves it, as she said, agrees with us on some of the most important solutions.</p>
<p><strong>CINDY COHN:</strong> Sometimes these criticisms of the tech moguls can feel like something everybody does, but I think it's important to remember that Kara was really one of the first ones to start pointing this out. And I also agree with you, you know, she's a person who comes from the position of really loving tech. And Kara's even a very strong capitalist. She really loves making money as well. You know, her criticism comes from a place of betrayal, that, again, like Molly White, earlier this season, kind of comes from a position of, you know, seeing the possibilities and loving the possibilities, and then seeing how horribly things are really going in the wrong direction.</p>
<p><strong>JASON KELLEY:</strong> Yeah, she has this framing of, is it a tool or a weapon? And it feels like a lot of the tools that she loved became weapons, which I think is how a lot of us feel. You know, it's not always clear how to draw that line. But it's obviously a good question that people, you know, working in the tech field, and I think people even using technology should ask themselves, when you're really enmeshed with it, is the thing you're using or building or promoting, is it working for everyone?<br />You know, what are the chances, how could it become a weapon? You know, this beautiful tool that you're loving and you have all these good ideas and, you know, ideas that, that it'll change the world and improve it. There's always a way that it can become a weapon. So I think it's an important question to ask and, and an important question that people, you know, working in the field need to ask.</p>
<p><strong>CINDY COHN:</strong> Yeah. And I think that, you know, that's the gem of her advice to tech workers. You know, find a way to make your voice heard if you see this happening. And there's a power in that. I do think that one thing that's still true in Silicon Valley is they compete for top talent.<br />And, you know, top talent indicating that they're gonna make choices based on some values is one of the levers of power. Now I don't think anybody thinks that's the only one. This isn't an individual responsibility question. We need laws, we need structures. You know, we need some structural changes in antitrust law and elsewhere in order to make that happen. It's not all on the shoulders of the tech workers, but I appreciate that she really did say, you know, there's a role to be played here. You're not just pawns in this game.</p>
<p><strong>JASON KELLEY:</strong> And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit <a href="https://www.eff.org/how-to-fix-the-internet-podcast" target="_blank" rel="noopener noreferrer">eff.org/podcast</a> and click on listen or feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch and just see what's happening in digital rights this week and every week.<br />Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's program for Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.</p>
<p><strong>CINDY COHN:</strong> And I'm Cindy Cohn.</p>
<p><em><strong>MUSIC CREDITS:</strong> This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Additional music, theme remixes and sound design by Gaetan Harris.</em></p>
</div></div></div></description>
<pubDate>Wed, 30 Jul 2025 07:05:06 +0000</pubDate>
<guid isPermaLink="false">110900 at https://www.eff.org</guid>
<category domain="https://www.eff.org/how-to-fix-the-internet-podcast">How to Fix the Internet: Podcast</category>
<category domain="https://www.eff.org/issues/innovation">Creativity & Innovation</category>
<category domain="https://www.eff.org/issues/competition">Competition</category>
<dc:creator>Josh Richman</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/2025-htfi-kara-blog.png" alt="How to Fix the Internet - Kara Swisher - Smashing the Tech Oligarchy" type="image/png" length="443542" />
</item>
<item>
<title>Ryanair’s CFAA Claim Against Booking.com Has Nothing To Do with Actual Hacking</title>
<link>https://www.eff.org/deeplinks/2025/07/ryanairs-cfaa-claim-against-bookingcom-has-nothing-do-actual-hacking</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>The Computer Fraud and Abuse Act (CFAA) is supposed to be about attacks on computer systems. It is not, as a federal district court suggested in </span><i><span>Ryanair v. Booking.com</span></i><span>, applicable when someone uses valid login credentials to access information to which those credentials provide access. Now that the case is on appeal, EFF has filed an amicus brief asking the Third Circuit to clarify that this case is about violations of policy, not hacking, and that such conduct does not qualify as access “without authorization” under the CFAA.</span></p>
<p><span>The case concerns transparency in airfare pricing. Ryanair complained that Booking republished Ryanair’s prices, some of which were only visible when a user logged in. Ryanair sent a cease and desist to Booking, but didn't deactivate the usernames and passwords associated with the uses they disliked. When the users allegedly connected to Booking kept using those credentials to gather pricing data, Ryanair claimed it was a CFAA violation. If this doesn’t sound like “computer hacking” to you, you’re right.</span></p>
<p><span>The CFAA has proven bad for research, security, competition, and innovation. For years we’ve worked to limit its scope to Congress’s original intention: actual hacking that bypasses computer security. It should have nothing to do with Ryanair’s claims here, which amount to a terms of use violation, because the information that was accessed is available to anyone with login credentials. This is the course charted in </span><a href="https://www.eff.org/deeplinks/2021/06/van-buren-victory-against-overbroad-interpretations-cfaa-protects-security"><i><span>Van Buren v. United States</span></i></a><span>, where the Supreme Court explained that “authorization” refers to technical concepts of computer authentication. As we stated in our brief:</span></p>
<blockquote><p><span>The CFAA does not apply to every person who merely violates terms of service by sharing account credentials with a family member or by withholding sensitive information like one’s real name and birthdate when making an account.</span></p>
</blockquote>
<p><span>Building on the good decisions in </span><i><span>Van Buren</span></i><span> and the Ninth Circuit’s ruling in </span><a href="https://www.eff.org/deeplinks/2022/04/scraping-public-websites-still-isnt-crime-court-appeals-declares"><i><span>hiQ Labs v. LinkedIn</span></i></a><span>, we weighed in at the Third Circuit urging the court to hold clearly that triggering a CFAA violation requires bypassing a technology that restricts access. In this case, the login credentials that were created provided legitimate access. But the rule adopted by the lower court would criminalize many everyday behaviors, like logging into a streaming service account with a partner’s login, or logging into a spouse’s bank account to pay a bill at their behest. This is not hacking or a violation of the CFAA; it’s just violating a company’s wish list in its Terms of Service.</span></p>
<p><span>This rule would be especially dangerous for journalists and academic researchers. Researchers often create a variety of testing accounts. For example, if they’re researching how a service displays housing offers, they may make different accounts associated with different race, gender, or language settings. These sorts of techniques may be adversarial to the company, but they shouldn’t be illegal. But according to the court’s opinion, if a company disagrees with this sort of research, the company could not just ban the researchers from using the site, it could render that research criminal by just sending a letter notifying the researcher that they’re not authorized to use the service in this way.</span></p>
<p><span>Many other examples and common research techniques used by journalists, academic researchers, and security researchers would be at risk under this rule, but the end result would be the same no matter what: it would chill valuable research that keeps us all safer online.</span></p>
<p><span>A broad reading of the CFAA in this case would also undermine competition by providing a way for companies to limit data scraping, effectively cutting off one of the ways websites offer tools to compare prices and features.</span></p>
<p><span>Courts must follow </span><i><span>Van Buren</span></i><span>’s lead and interpret the CFAA as narrowly as it was designed. Logging into a public website with valid credentials, even if you scrape the data once you’re logged in, is not hacking. A broad reading leads to unintended consequences, and website owners do not need new shields against independent accountability.</span></p>
<p><span>You can read our amicus brief </span><a href="https://www.eff.org/document/amicus-brief-ryan-air-dac-v-booking-com-bv"><span>here</span></a><span>.</span></p>
</div></div></div></description>
<pubDate>Tue, 29 Jul 2025 19:03:33 +0000</pubDate>
<guid isPermaLink="false">110950 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/cfaa">Computer Fraud And Abuse Act Reform</category>
<dc:creator>Thorin Klosowski</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/og-governmenthacking_0.png" alt="" type="image/png" length="146862" />
</item>
<item>
<title>You Went to a Drag Show—Now the State of Florida Wants Your Name</title>
<link>https://www.eff.org/deeplinks/2025/07/you-went-drag-show-now-state-florida-wants-your-name</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>If you thought going to a Pride event or drag show was just another night out, think again. If you were in Florida, </span><a href="https://apnews.com/article/florida-drag-show-law-vero-beach-uthmeier-7793dbe3ffd356dda4e678940463922c"><span>it might land your name in a government database</span></a><span>.</span></p>
<p><span>That’s what’s happening in Vero Beach, FL, where the </span><a href="https://www.myfloridalegal.com/newsrelease/attorney-general-james-uthmeier-launches-investigation-sexualized-performance"><span>Florida Attorney General’s office</span></a><span> has </span><a href="https://www.myfloridalegal.com/sites/default/files/2025-05/subpoena-25-0602-kilted-mermaid-signed_.pdf"><span>subpoenaed</span></a><span> a local restaurant, </span><a href="https://kiltedmermaid.com/"><span>The Kilted Mermaid,</span></a><span> demanding surveillance video, guest lists, reservation logs, and contracts of performers and other staff—all because the venue hosted an LGBTQ+ Pride event.</span></p>
<p><span>To be clear: no one has been charged with a crime, and the law Florida is likely leaning on here—the so-called “Protection of Children Act” (which was designed to be a drag show ban)—</span><a href="https://apnews.com/article/drag-shows-florida-df85bbe2b50917831b8cdc51c8a9dcd5"><span>has already been blocked by federal courts as likely unconstitutional</span></a><span>. But that didn’t stop Attorney General James Uthmeier from pushing forward anyway. Without naming a specific law that was violated, the AG’s </span><a href="https://www.myfloridalegal.com/newsrelease/attorney-general-james-uthmeier-launches-investigation-sexualized-performance"><span>press release</span></a><span> used pointed and accusatory language, stating that "In Florida, we don't sacrifice the innocence of children for the perversions of some demented adults.” His office is now fishing for personal data about everyone who attended or performed at the event. This should set off every civil liberties alarm bell we have.</span></p>
<p><span>Just like the </span><a href="https://www.eff.org/deeplinks/2025/05/kids-online-safety-act-will-make-internet-worse-everyone"><span>Kids Online Safety Act (KOSA)</span></a><span> and </span><a href="https://www.eff.org/deeplinks/2024/12/global-age-verification-measures-2024-year-review"><span>other bills</span></a><span> with </span><a href="https://www.eff.org/deeplinks/2025/01/impact-age-verification-measures-goes-beyond-porn-sites"><span>misleading names</span></a><span>, this isn’t about protecting children. It’s about using the power of the state to intimidate people government officials disagree with, and to censor speech that is both lawful and fundamental to American democracy.</span></p>
<p><span>Drag shows—</span><a href="https://readingpartners.org/blog/drag-story-hour/"><span>many of which are family-friendly</span></a><span> and </span><a href="https://www.newsweek.com/yes-some-drag-explicit-lot-it-family-friendly-appropriate-kids-opinion-1807058"><span>feature no sexual content</span></a><span>—have become a political scapegoat. And while that rhetoric might resonate in some media environments, the real-world consequences are much darker: state surveillance of private citizens doing nothing but attending a fun community celebration. By demanding video surveillance, guest lists, and reservation logs, the state isn’t investigating a crime; it is trying to scare individuals away from attending a legal gathering. These are people who showed up at a public venue for a legal event, while a law restricting it was not even in effect. </span></p>
<p><span>The Supreme Court </span><a href="https://supreme.justia.com/cases/federal/us/357/449/"><span>has ruled</span></a> <a href="https://supreme.justia.com/cases/federal/us/372/539/"><span>multiple times</span></a><span> that subpoenas forcing disclosure of members of peaceful organizations have a chilling effect on free expression. Whether it’s a civil rights protest, a church service, or, yes, a drag show: the First Amendment protects the confidentiality of lists of attendees.</span></p>
<p><span>Even if the courts strike down this subpoena—and they should—the damage will already be done. A restaurant owner (who also happens to be the town’s vice mayor) is being dragged into a state investigation. Performers’ identities are potentially being exposed—whether to state surveillance, inclusion in law enforcement databases, or future targeting by anti-LGBTQ+ groups. Guests who thought they were attending a fun community event are now caught up in a legal probe. These are the kinds of chilling, damaging consequences that will discourage Floridians from hosting or attending drag shows, and could stamp out the art form entirely. </span></p>
<p><span>EFF has long warned about this kind of mission creep: where a law or policy supposedly aimed at public safety is turned into a tool for political retaliation or mass surveillance. Going to a drag show should not mean you forfeit your anonymity. It should not open you up to surveillance. And it absolutely should not land your name in a government database.</span></p>
</div></div></div></description>
<pubDate>Mon, 28 Jul 2025 18:59:29 +0000</pubDate>
<guid isPermaLink="false">110947 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/street-level-surveillance">Street-Level Surveillance</category>
<category domain="https://www.eff.org/issues/lgbtq">LGBTQ+</category>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<dc:creator>Rindala Alajaji</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/police-surveillance-badge.jpg" alt="A hand holds up a police badge with spying eye" type="image/jpeg" length="174510" />
</item>
<item>
<title>Just Banning Minors From Social Media Is Not Protecting Them</title>
<link>https://www.eff.org/deeplinks/2025/07/just-banning-minors-social-media-not-protecting-them</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>By publishing its </span><a href="https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-protection-minors"><span>guidelines under Article 28 of the Digital Services Act</span></a><span>, the European Commission has taken a major step towards social media bans that will undermine privacy, expression, and participation rights for young people that are already enshrined in international human rights law. </span></p>
<p><span>EFF recently submitted </span><a href="https://www.eff.org/files/2025/06/16/eff_submission_art._28_dsa_guidlines.pdf"><span>feedback</span></a><span> to the Commission’s consultation on the guidelines, emphasizing a critical point: Online safety for young people must include privacy and security for them and must not come at the expense of freedom of expression and equitable access to digital spaces.</span></p>
<p><span>Article 28 requires online platforms to take appropriate and proportionate measures to ensure a high level of safety, privacy and security of minors on their services. But the article also prohibits targeting minors with personalized ads, a measure that would seem to require that platforms know that a user is a minor. The DSA acknowledges that there is an inherent tension between ensuring a minor’s privacy and requiring platforms to know the age of every user. The DSA does not resolve this tension. Rather, it states that service providers should not be incentivized to collect the age of their users, and Article 28(3) makes a point of not requiring service providers to collect and process additional data to assess whether a user is underage. </span></p>
<p><span>Thus, the question of age checks is key to understanding the obligations of online platforms to safeguard minors online. Our submission explained the serious risks that age checks pose to the rights and security of minors. All methods for conducting age checks come with serious drawbacks. Approaches to verifying a user’s age generally involve some form of government-issued ID document, which millions of people in Europe—including migrants, members of marginalized groups and unhoused people, exchange students, refugees and tourists—may not have access to.</span></p>
<p><span>Other age assurance methods, like biometric age estimation or age estimation based on email addresses or user activity, involve the processing of vast amounts of personal, sensitive data – usually in the hands of third parties. Beyond being potentially exposed to discrimination and erroneous estimations, users are asked to trust platforms’ opaque supply chains and hope for the best. </span><b>Age assurance methods always impact the rights of children and teenagers: their rights to privacy and data protection, free expression, information and participation.</b></p>
<p><span>The Commission's guidelines contain a wealth of measures elucidating the Commission's understanding of "age appropriate design" of online services. We have argued that some of them, including default settings that protect users’ privacy, effective content moderation, and ensuring that recommender systems don’t rely on the collection of behavioral data, are practices that </span><a href="https://www.eff.org/deeplinks/2025/05/keeping-people-safe-online-fundamental-rights-protective-alternatives-age-checks"><span>would benefit all users</span></a><span>. </span></p>
<p><span>But while the initial Commission draft document considered age checks as only a tool to determine users’ ages to be able to tailor their online experiences according to their age, the final guidelines go far beyond that. Crucially, the European Commission now seems to consider “measures restricting access based on age to be an effective means to ensure a high level of privacy, safety and security for minors on online platforms” (page 14). </span></p>
<p><span>This is a surprising turn, as many in Brussels have considered social media bans like the one Australia passed (and still doesn’t know how to </span><a href="https://www.theguardian.com/media/2025/jun/20/social-media-ban-trial-tech-flaws"><span>implement)</span></a><span> disproportionate. Responding to </span><a href="https://www.euronews.com/my-europe/2025/06/11/debate-on-minors-access-to-social-media-networks-begins-with-three-eu-countries"><span>mounting pressure</span></a><span> from Member States like France, Denmark, and Greece to ban young people under a certain age from social media platforms, the guidelines contain an opening clause for national rules on age limits for certain services. According to the guidelines, the Commission considers such access restrictions appropriate and proportionate where “union or national law, (...) prescribes a minimum age to access certain products or services (...), including specifically defined categories of online social media services”. This opens the door for different national laws introducing different age limits for services like social media platforms. </span></p>
<p><span>It’s concerning that the Commission generally considers the use of age verification </span><i><span>proportionate</span></i><span> in any situation where a provider of an online platform identifies risks to minors’ privacy, safety, or security and those risks “cannot be mitigated by other less intrusive measures as effectively as by access restrictions supported by age verification” (page 17). This view risks establishing a broad legal mandate for age verification measures.</span></p>
<p><span>It is clear that such bans will do little in the way of making the internet a safer space for young people. By banning a particularly vulnerable group of users from accessing platforms, the providers themselves are let off the hook: If it is enough for platforms like Instagram and TikTok to implement (comparatively cheap) age restriction tools, there is no longer any incentive to actually make their products and features safer for young people. Banning a certain user group changes nothing about problematic privacy practices, insufficient content moderation or business models based on the exploitation of people’s attention and data. And since teenagers will always find ways to circumvent age restrictions, those who do will be left without any protections or age-appropriate experiences. </span></p>
</div></div></div></description>
<pubDate>Mon, 28 Jul 2025 10:59:17 +0000</pubDate>
<guid isPermaLink="false">110946 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/free-speech">Free Speech</category>
<category domain="https://www.eff.org/issues/international">International</category>
<category domain="https://www.eff.org/issues/eu-policy">EU Policy</category>
<category domain="https://www.eff.org/issues/eff-europe">European Union</category>
<dc:creator>Svea Windwehr</dc:creator>
<dc:creator>Jillian C. York</dc:creator>
<dc:creator>Christoph Schmon</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/eu-flag-11.png" alt="EU-flag-circuits" type="image/png" length="48177" />
</item>
<item>
<title>Zero Knowledge Proofs Alone Are Not a Digital ID Solution to Protecting User Privacy</title>
<link>https://www.eff.org/deeplinks/2025/07/zero-knowledge-proofs-alone-are-not-digital-id-solution-protecting-user-privacy</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><i><span>In the past few years, governments across the world have </span></i><a href="https://www.eff.org/deeplinks/2024/09/digital-id-isnt-everybody-and-thats-okay" target="_blank" rel="noopener noreferrer"><i><span>rolled out digital identification</span></i></a><i><span> options, and now there are efforts encouraging online companies to implement identity and age verification requirements with digital ID in mind. This post is the first in a short series explaining digital ID and the pending use case of age verification. The following posts will evaluate what real protections we can implement with current digital ID frameworks and discuss how better privacy and controls can keep people safer online.</span></i></p>
<p><span>Age verification measures are having a moment, with policymakers </span><a href="https://www.eff.org/deeplinks/2025/06/todays-supreme-court-decision-age-verification-tramples-free-speech-and-undermines" target="_blank" rel="noopener noreferrer"><span>in the U.S.</span></a><span> and </span><a href="https://www.eff.org/deeplinks/2024/12/global-age-verification-measures-2024-year-review" target="_blank" rel="noopener noreferrer"><span>around the world</span></a><span> passing legislation mandating that online services introduce technologies requiring people to verify their identities to access content deemed appropriate for their age. But for most people, having physical government documentation like a driver's license, passport, or other ID is </span><a href="https://www.eff.org/deeplinks/2024/09/digital-id-isnt-everybody-and-thats-okay" target="_blank" rel="noopener noreferrer"><span>not a simple binary</span></a><span> of having it or not. Physical ID systems involve hundreds of factors that affect their accuracy and validity, and everyday situations arise in which identification attributes change or an ID becomes invalid, inaccurate, or needs to be reissued: addresses change, driver’s licenses expire or have suspensions lifted, or temporary IDs are issued in lieu of permanent identification. </span></p>
<p><span>The digital ID systems </span><a href="https://www.eff.org/deeplinks/2024/09/digital-id-isnt-everybody-and-thats-okay" target="_blank" rel="noopener noreferrer"><span>currently being introduced</span></a><span> potentially solve </span><i><span>some</span></i><span> problems like identity fraud for business and government services, but leave the holder of the digital ID vulnerable to the needs of the companies collecting such information. State and federal embrace of digital ID is based on claims of faster access, fraud prevention, and convenience. But with digital ID being proposed as a means of online verification, it is just as likely to block claims of public assistance and other services as to facilitate them. That’s why legal protections are as important as the digital IDs themselves. Moreover, in places that lack comprehensive data privacy legislation, verifiers are not heavily restricted in what they can and can’t ask the holder. In response, some privacy mechanisms have been suggested but few have been made mandatory, such as the </span><a href="https://blog.google/technology/safety-security/opening-up-zero-knowledge-proof-technology-to-promote-privacy-in-age-assurance/" target="_blank" rel="noopener noreferrer"><span>promise</span></a><span> that a feature called Zero Knowledge Proofs (ZKPs) will easily solve the privacy aspects of sharing ID attributes.</span></p>
<h3><span>Zero Knowledge Proofs: The Good News</span></h3>
<p><span>The biggest selling point of modern digital ID offerings, especially to those seeking to solve mass age verification, is being able to incorporate and share something called a Zero Knowledge Proof (ZKP), letting a website or mobile application verify ID information without the holder sharing the ID itself or the information explicitly on it. ZKPs provide a cryptographic way to prove a claim without giving the underlying data away: instead of your exact date of birth and age from your ID, you offer a “yes-or-no” claim (like above or below 18) to a verifier requiring a legal age threshold. More specifically, two properties of ZKPs are “soundness” and “zero knowledge.” Soundness appeals to verifiers and governments because it makes it hard for an ID holder to present forged information (someone without the underlying credential can’t produce a valid proof). Zero knowledge benefits the holder, who doesn’t have to share explicit information like a birth date, just cryptographic proof that said information exists and is valid. There have been </span><a href="https://blog.google/products/google-pay/google-wallet-age-identity-verifications/" target="_blank" rel="noopener noreferrer"><span>recent announcements</span></a><span> from major tech companies like Google, which plans to integrate ZKPs for age verification and “where appropriate in other Google products”.</span></p>
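<p><span>The soundness and zero-knowledge properties described above can be illustrated with a classic Schnorr-style proof of knowledge. The sketch below is purely pedagogical: the parameters, group, and use of a secret exponent to stand in for an ID attribute are our own illustrative assumptions, not the scheme Google or any digital ID wallet actually deploys, and real systems use standardized groups and audited libraries rather than hand-rolled cryptography.</span></p>

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof (illustrative only).
# The prover shows knowledge of a secret x with y = g^x mod p without
# revealing x -- the same "prove the claim, not the data" idea behind
# ZKP-based age checks.

P = 2**255 - 19   # demo prime modulus (real deployments use standardized groups)
G = 2             # demo generator

def prove(x: int) -> tuple[int, int, int]:
    """Return (y, t, s): public value, commitment, and response."""
    y = pow(G, x, P)
    r = secrets.randbelow(P - 1)            # one-time random nonce
    t = pow(G, r, P)                        # commitment
    # Fiat-Shamir: derive the challenge by hashing the transcript
    c = int.from_bytes(hashlib.sha256(f"{G}|{y}|{t}".encode()).digest(), "big")
    s = (r + c * x) % (P - 1)               # response; x stays hidden behind r
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check g^s == t * y^c without ever learning x."""
    c = int.from_bytes(hashlib.sha256(f"{G}|{y}|{t}".encode()).digest(), "big")
    return pow(G, s, P) == (t * pow(y, c, P)) % P

secret_attribute = secrets.randbelow(P - 1)  # stands in for a private ID attribute
y, t, s = prove(secret_attribute)
assert verify(y, t, s)        # a holder who knows the secret convinces the verifier
assert not verify(y, t, s + 1)  # soundness: a tampered response fails
```

<p><span>Note that the verifier learns only that the equation checks out, which is exactly why, as discussed below, ZKPs solve the disclosure problem but not the surrounding questions of who may ask, how often, and what else they collect.</span></p>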
<h3><span>Zero Knowledge Proofs: The Bad News</span></h3>
<p><span>What ZKPs don’t do is mitigate verifier abuse, such as over-asking for information a verifier doesn’t need or requesting your age repeatedly over time. They also don’t prevent websites or applications from collecting other kinds of observable personally identifiable information, like your IP address or other device information, while you interact with them.</span></p>
<p><span>ZKPs are a great tool for sharing less data about ourselves, whether over time or in a one-time transaction. But they don’t do much about the data broker industry that </span><a href="https://www.eff.org/deeplinks/2025/07/data-brokers-are-selling-your-flight-information-cbp-and-ice"><span>already has</span></a><span> massive, existing profiles of data on people. We understand that this was not what ZKPs for age verification were presented to solve. But it is still imperative to point out that using this technology to share even more about ourselves online through mandatory age verification widens the scope for sharing in an already saturated ecosystem of </span><a href="https://www.eff.org/deeplinks/2023/11/debunking-myth-anonymous-data" target="_blank" rel="noopener noreferrer"><span>easily linked, existing personal information</span></a><span> online. Going from presenting your physical ID maybe 2-3 times a week to potentially proving your age to multiple websites and apps every day will make going online a burden at minimum, and an outright barrier for those who can’t obtain an ID.</span></p>
<h3><span>Protecting The Way Forward</span></h3>
<p><span>Mandatory age verification takes the potential privacy benefits of mobile ID and proposed ZKP solutions, then warps them into speech-chilling mechanisms.</span></p>
<p><span>Until the hard questions of power imbalances with potentially abusive verifiers and prevention of </span><a href="https://nophonehome.com/" target="_blank" rel="noopener noreferrer"><span>phoning home</span></a><span> to ID issuers are addressed,</span><b> these systems should not be pushed forward without proper protections in place</b><span>. A more private, holder-centric ID requires more than ZKPs as a catch-all for privacy concerns. Online safety is not solved through technology alone; it involves multiple, ongoing conversations. Yes, that sounds harder than imposing age checks on everyone online. Maybe that’s why age checks are so tempting to implement. However, we encourage policymakers and lawmakers to pursue what is best, not what is easy.</span></p>
</div></div></div></description>
<pubDate>Fri, 25 Jul 2025 22:13:19 +0000</pubDate>
<guid isPermaLink="false">110944 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/digital-identity">Digital Identity</category>
<category domain="https://www.eff.org/issues/free-speech">Free Speech</category>
<dc:creator>Alexis Hancock</dc:creator>
<dc:creator>Paige Collings</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/digitallicense_mobileid.png" alt="Mobile ID digital license" type="image/png" length="58596" />
</item>
<item>
<title>Canada’s Bill C-2 Opens the Floodgates to U.S. Surveillance</title>
<link>https://www.eff.org/deeplinks/2025/07/canadas-bill-c-2-opens-floodgates-us-surveillance</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>The Canadian government is preparing to give away Canadians’ digital lives—to U.S. police, to the Donald Trump administration, and possibly to foreign spy agencies.</span></p>
<p><a href="https://www.parl.ca/Content/Bills/451/Government/C-2/C-2_1/C-2_1.PDF"><span>Bill C-2</span></a><span>, the so-called Strong Borders Act, is a sprawling surveillance bill with </span><a href="https://openmedia.org/press/item/over-300-organizations-unite-to-demand-complete-withdrawal-of-bill-c-2"><span>multiple privacy-invasive provisions</span></a><span>. But the thrust is clear: it’s a roadmap to aligning Canadian surveillance with U.S. demands. </span></p>
<p><span>It’s also a giveaway of Canadian constitutional rights in the name of “border security.” If passed, it will shatter privacy protections that Canadians have spent decades building. This will affect anyone using Canadian internet services, including email, cloud storage, VPNs, and messaging apps. </span></p>
<p><span>A </span><a href="https://openmedia.org/assets/2025%28June%29_OM_Civil_Society_Joint_Letter_BillC2_Final.pdf"><span>joint letter</span></a><span>, signed by dozens of Canadian civil liberties groups and more than a hundred Canadian legal experts and academics, puts it clearly: Bill C-2 is “a multi-pronged assault on the basic human rights and freedoms Canada holds dear,” and “an enormous and unjustified expansion of power for police and CSIS to access the data, mail, and communication patterns of people across Canada.”</span><span><br /></span></p>
<h3><b>Setting The Stage For Cross-Border Surveillance </b></h3>
<p>Bill C-2 isn’t just a domestic surveillance bill. It’s a Trojan horse for U.S. law enforcement—quietly building the pipes to ship Canadians’ private data straight to Washington.</p>
<p><span>If Bill C-2 passes, Canadian police and spy agencies will be able to demand information about people’s online activities based on the low threshold of “reasonable suspicion.” Companies holding such information would have only five days to challenge an order, and would receive blanket immunity from lawsuits if they hand over data. </span></p>
<p><span>Police and CSIS, the Canadian intelligence service, will be able to find out whether you have an online account with any organization or service in Canada. They can demand to know how long you’ve had it, where you’ve logged in from, and which other services you’ve interacted with, with no warrant required.</span></p>
<p><span>The bill will also allow for the introduction of encryption backdoors. Forcing companies to surveil their customers is allowed under </span><a href="https://www.parl.ca/Content/Bills/451/Government/C-2/C-2_1/C-2_1.PDF"><span>the law</span></a> (see part 15)<span>, as long as these mandates don’t introduce a “systemic vulnerability”—a term the bill doesn’t even bother to define. </span></p>
<p><span>The information gathered under these new powers is likely to be shared with the United States. Canada and the U.S. are currently </span><a href="https://citizenlab.ca/2025/02/canada-us-cross-border-surveillance-cloud-act/"><span>negotiating a misguided agreement to share law enforcement information</span></a><span> under the US CLOUD Act. </span></p>
<p>The U.S. and U.K. <a href="https://www.eff.org/deeplinks/2020/01/uk-police-will-soon-be-able-search-through-us-data-without-asking-judge">put a CLOUD Act deal in place in 2020</a>, and it hasn’t been good for users. Earlier this year, the U.K. Home Office <a href="https://www.washingtonpost.com/technology/2025/02/07/apple-encryption-backdoor-uk/">ordered Apple</a> to let it spy on users’ encrypted accounts. That security risk caused Apple <a href="https://www.eff.org/deeplinks/2025/02/cornered-uks-demand-encryption-backdoor-apple-turns-its-strongest-security-setting">to stop offering U.K. users</a> certain advanced encryption features, and lawmakers and officials in the United States have <a href="https://www.reuters.com/technology/us-examining-whether-uks-encryption-demand-apple-broke-data-treaty-2025-02-26/">raised concerns</a> that the U.K.’s demands might have been designed to leverage its expanded CLOUD Act powers.</p>
<p>If Canada moves forward with Bill C-2 and a CLOUD Act deal,<span> American law enforcement could demand data from Canadian tech companies in secrecy—no notice to users would be required. Companies could also expect gag orders preventing them from even mentioning that they have been forced to share information with U.S. agencies.</span></p>
<p><span>This isn’t speculation. Earlier this month, a Canadian government official </span><a href="https://www.politico.com/newsletters/canada-playbook/2025/07/04/stars-stripes-and-side-eye-00439998"><span>told Politico</span></a><span> that this surveillance regime would give Canadian police “the same kind of toolkit” that their U.S. counterparts have under the PATRIOT Act and FISA. The bill allows for “technical capability orders.” Those orders mean the government can force Canadian tech companies, VPNs, cloud providers, and app developers—regardless of where in the world they are based—to build surveillance tools into their products.</span></p>
<p><span>Under U.S. law, non-U.S. persons have little protection from foreign surveillance. If U.S. cops want information on abortion access, gender-affirming care, or political protests happening in Canada—they’re going to get it. The data-sharing won’t necessarily be limited to the U.S., either. There’s nothing to stop authoritarian states from demanding this new trove of Canadians’ private data that will be secretly doled out by Canada’s law enforcement agencies. </span></p>
<p><span>EFF joins the Canadian Civil Liberties Association, OpenMedia, researchers at Citizen Lab, and dozens of other Canadian organizations and experts in asking the Canadian federal government to withdraw Bill C-2. </span></p>
<p><span>Further reading:</span></p>
<ul>
<li><a href="https://openmedia.org/assets/2025%28June%29_OM_Civil_Society_Joint_Letter_BillC2_Final.pdf"><span>Joint letter</span></a><span> opposing Bill C-2, signed by the Canadian Civil Liberties Association, OpenMedia, and dozens of other Canadian groups </span></li>
<li><span>CCLA </span><a href="https://ccla.org/privacy/ccla-joins-calls-for-withdrawal-of-bill-c-2/"><span>blog</span></a><span> calling for withdrawal of Bill C-2</span></li>
<li><span>The Citizen Lab (University of Toronto) </span><a href="https://citizenlab.ca/2025/02/canada-us-cross-border-surveillance-cloud-act/"><span>report</span></a><span> on Canadian CLOUD Act deal</span></li>
<li><span>The Citizen Lab report on </span><a href="https://citizenlab.ca/2025/06/a-preliminary-analysis-of-bill-c-2/"><span>Bill C-2</span></a></li>
<li><span>EFF </span><a href="https://www.eff.org/document/cloud-act-one-page-summary"><span>one-pager</span></a><span> and </span><a href="https://www.eff.org/deeplinks/2018/02/cloud-act-dangerous-expansion-police-snooping-cross-border-data"><span>blog</span></a><span> on problems with the CLOUD Act, published before the bill was made law in 2018</span></li>
</ul>
</div></div></div></description>
<pubDate>Fri, 25 Jul 2025 19:53:52 +0000</pubDate>
<guid isPermaLink="false">110942 at https://www.eff.org</guid>
<dc:creator>Joe Mullin</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/keys-crossed-pink-starburst_0.png" alt="" type="image/png" length="20659" />
</item>
<item>
<title>You Shouldn’t Have to Make Your Social Media Public to Get a Visa</title>
<link>https://www.eff.org/deeplinks/2025/07/you-shouldnt-have-make-your-social-media-public-get-visa</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>The Trump administration is <a href="https://www.eff.org/deeplinks/2025/04/trump-administrations-targeting-international-students-jeopardizes-free-speech-and">continuing</a> <a href="https://www.eff.org/deeplinks/2025/06/eff-department-homeland-security-no-social-media-surveillance-immigrants">its</a> <a href="https://www.justsecurity.org/114607/how-dhss-new-social-media-vetting-policies-threaten-free-speech/">dangerous push</a> to surveil and suppress foreign students’ social media activity. The State Department recently <a href="https://www.state.gov/releases/office-of-the-spokesperson/2025/06/announcement-of-expanded-screening-and-vetting-for-visa-applicants/">announced</a> an unprecedented new requirement that applicants for student and exchange visas must set all social media accounts to “public” for government review. The State Department <a href="https://time.com/7295949/international-student-visas-colleges-universities-social-media-state-department-trump/">also indicated</a> that if applicants refuse to unlock their accounts or otherwise don’t maintain a social media presence, the government may interpret it as an attempt to evade the requirement or deliberately hide online activity.</p>
<p>The administration is penalizing prospective students and visitors for shielding their social media accounts from the general public or for choosing to not be active on social media. This is an outrageous violation of privacy, one that completely disregards the legitimate and often critical reasons why millions of people choose to lock down their social media profiles, share only limited information about themselves online, or not engage in social media at all. By making students abandon basic privacy hygiene as the price of admission to American universities, the administration is forcing applicants to expose a wealth of personal information to not only the U.S. government, but to anyone with an internet connection.</p>
<h3><strong>Why Social Media Privacy Matters</strong></h3>
<p>The administration’s new policy is a dangerous expansion of existing social media collection efforts. While the State Department has required since 2019 that visa applicants disclose their social media handles—a policy EFF has <a href="https://www.eff.org/deeplinks/2020/06/eff-court-social-media-users-have-privacy-and-free-speech-interests-their-public">consistently</a> <a href="https://www.eff.org/issues/social-media-surveilance">opposed</a>—forcing applicants to make their accounts public crosses a new line.</p>
<p>Individuals have <a href="https://www.eff.org/deeplinks/2024/02/eff-dc-circuit-us-governments-forced-disclosure-visa-applicants-social-media">significant privacy interests</a> in their social media accounts. Social media profiles contain some of the most intimate details of our lives, such as our political views, religious beliefs, health information, likes and dislikes, and the people with whom we associate. Such personal details can be gleaned from vast volumes of data given the unlimited storage capacity of cloud-based social media platforms. As the <a href="https://www.law.cornell.edu/supremecourt/text/13-132">Supreme Court has recognized</a>, “[t]he sum of an individual’s private life can be reconstructed through a thousand photographs labeled with dates, locations, and descriptions”—all of which and more are available on social media platforms.</p>
<p>By requiring visa applicants to share these details, the government can obtain information that would otherwise be inaccessible or difficult to piece together across disparate locations. For example, while visa applicants are not required to disclose their political views in their applications, applicants might choose to post their beliefs on their social media profiles.</p>
<p>This information, once disclosed, doesn’t just disappear. Existing policy allows the government to continue surveilling applicants’ social media profiles even once the application process is over. And personal information obtained from applicants’ profiles can be collected and stored in government <a href="https://www.eff.org/deeplinks/2017/10/dhs-should-stop-social-media-surveillance-immigrants">databases</a> for <a href="https://www.archives.gov/research/immigration/aliens">decades</a>.</p>
<p>What’s more, by requiring visa applicants to make their private social media accounts public, the administration is forcing them to expose troves of personal, sensitive information to the entire internet, not just the U.S. government. This could include various bad actors like identity thieves and fraudsters, foreign governments, current and prospective employers, and other third parties.</p>
<p>Those in applicants’ social media networks—including U.S. citizen family or friends—can also become surveillance targets by association. Visa applicants’ online activity is likely to reveal information about the users with whom they’re connected. For example, a visa applicant could tag another user in a political rant or post photos of themselves and the other user at a political rally. Anyone who sees those posts might reasonably infer that the other user shares the applicant’s political beliefs. The administration’s new requirement will therefore publicly expose the personal information of millions of additional people, beyond just visa applicants.</p>
<h3><strong>There are Very Good Reasons to Keep Social Media Accounts Private</strong></h3>
<p>An overwhelming number of social media users maintain private accounts for the same reason we put curtains on our windows: a desire for basic privacy. There are numerous legitimate reasons people choose to share their social media only with trusted family and friends, whether that’s ensuring personal safety, maintaining professional boundaries, or simply not wanting to share personal profiles with the entire world.</p>
<h5>Safety from Online Harassment and Physical Violence</h5>
<p>Many people keep their accounts private to protect themselves from stalkers, harassers, and those who wish them harm. Domestic violence survivors, for example, use privacy settings to hide from their abusers, and organizations supporting survivors <a href="https://www.thehotline.org/plan-for-safety/internet-safety/">often encourage</a> them to maintain a limited online presence.</p>
<p>Women also face a variety of gender-based online harms made worse by public profiles, including stalking, sexual harassment, and violent threats. A <a href="https://onlineviolencewomen.eiu.com/">2021 study</a> reported that at least 38% of women globally had personally experienced online abuse, and at least 85% of women had witnessed it. Women are, in turn, <a href="https://www.sciencedirect.com/science/article/abs/pii/S0747563218305818">more likely</a> to activate privacy settings than men.</p>
<p>LGBTQ+ individuals similarly have good reasons to lock down their accounts. Individuals from countries where their identity <a href="https://www.eff.org/deeplinks/2024/02/ghanas-president-must-refuse-sign-anti-lgbtq-bill">puts them</a> <a href="https://www.eff.org/document/access-now-eff-written-submission-un-ie-sogie-jan-2024">in danger</a> rely on privacy protections to stay safe from state action. People may also reasonably choose to lock their accounts to avoid the barrage of anti-LGBTQ+ hate and harassment that is <a href="https://glaad.org/smsi/2025/summary-conclusions-reccomendations-methodology/">common on</a> <a href="https://www.eff.org/deeplinks/2025/05/standing-lgbtq-digital-safety-international-day-against-homophobia">social media platforms</a>, which can lead to <a href="https://www.hrc.org/press-releases/new-human-rights-campaign-foundation-report-online-hate-real-world-violence-are-inextricably-linked">real-world violence</a>. Others, including <a href="https://ssd.eff.org/playlist/lgbtq-youth">LGBTQ+ youth</a>, may simply not be ready to share their identity outside of their chosen personal network.</p>
<h5>Political Dissidents, Activists, and Journalists</h5>
<p>Activists working on <a href="https://ssd.eff.org/playlist/reproductive-healthcare-service-provider-seeker-or-advocate">sensitive human rights issues</a>, <a href="https://ssd.eff.org/playlist/activist-or-protester">political dissidents</a>, and journalists use privacy settings to protect themselves from doxxing, harassment, and potential political persecution by their governments.</p>
<p>Rather than protecting these vulnerable groups, the administration’s policy instead explicitly <em>targets</em> political speech. The State Department has given embassies and consulates a vague directive to vet applicants’ social media for “hostile attitudes towards our citizens, culture, government, institutions, or founding principles,” according to an internal State Department cable obtained by <a href="https://time.com/7295949/international-student-visas-colleges-universities-social-media-state-department-trump/">multiple</a> <a href="https://www.reuters.com/world/us/trump-administration-resuming-student-visa-appointments-state-dept-official-says-2025-06-18/">news outlets</a>. This includes looking for “applicants who demonstrate a history of political activism.” The cable did not specify what, exactly, constitutes “hostile attitudes.”</p>
<h5>Professional and Personal Boundaries</h5>
<p>People use privacy settings to maintain boundaries between their personal and professional lives. They share family photos, sensitive updates, and personal moments with close friends—not with their employers, teachers, professional connections, or the general public.</p>
<h3>The Growing Menace of Social Media Surveillance</h3>
<p>This new policy is an escalation of the Trump administration’s ongoing immigration-related social media surveillance. EFF has <a href="https://www.eff.org/deeplinks/2025/04/trump-administrations-targeting-international-students-jeopardizes-free-speech-and">written </a>about the administration’s new “Catch and Revoke” effort, which deploys artificial intelligence and other data analytic tools to review the public social media accounts of student visa holders in an effort to revoke their visas. And EFF recently <a href="https://www.eff.org/deeplinks/2025/06/eff-department-homeland-security-no-social-media-surveillance-immigrants">submitted comments</a> opposing a USCIS proposal to collect social media identifiers from visa and green card holders already living in the U.S., including when they submit applications for permanent residency and naturalization.</p>
<p>The administration has also started <a href="https://www.uscis.gov/newsroom/news-releases/dhs-to-begin-screening-aliens-social-media-activity-for-antisemitism">screening</a> many non-citizens' social media accounts for ambiguously-defined “antisemitic activity,” and previously announced <a href="https://www.politico.com/news/2025/05/30/state-implements-reviews-of-harvard-visa-applicants-social-media-accounts-00375921">expanded social media vetting</a> for any visa applicant seeking to travel specifically to Harvard University for any purpose.</p>
<p>The administration claims this mass surveillance will make America safer, but there’s little evidence to support this. By the government’s own previous assessments, social media surveillance <a href="https://www.nytimes.com/2023/10/05/us/social-media-screening-visa-terrorism.html">has not proven effective</a> at identifying security threats.</p>
<p>At the same time, these policies gravely undermine freedom of speech, as we recently argued in our <a href="https://www.eff.org/document/eff-comments-uscis-proposal-collect-social-media-identifiers-immigration-forms">USCIS comments</a>. The government is using social media monitoring to directly target and punish through visa denials or revocations foreign students and others for their digital speech. And the social media surveillance itself broadly chills free expression online—for citizens and non-citizens alike.</p>
<p>In defending the new requirement, the State Department <a href="https://www.state.gov/releases/office-of-the-spokesperson/2025/06/announcement-of-expanded-screening-and-vetting-for-visa-applicants/">argued</a> that a U.S. visa is a “privilege, not a right.” But privacy and free expression should not be privileges. These are fundamental human rights, and they are rights we abandon at our peril.</p>
</div></div></div></description>
<pubDate>Wed, 23 Jul 2025 22:33:24 +0000</pubDate>
<guid isPermaLink="false">110939 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/social-media-surveilance">Social Media Surveillance</category>
<dc:creator>Lisa Femia</dc:creator>
<dc:creator>Sophia Cope</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/social-media-surveillance-1b_0.jpg" alt="Security camera screens display logos for Facebook, YouTube, SnapChat, Twitter, and Reddit " type="image/jpeg" length="143674" />
</item>
<item>
<title>We're Envisioning A Better Future</title>
<link>https://www.eff.org/deeplinks/2025/07/future-worth-fighting</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>Whether you've been following EFF for years or just discovered us (hello!), you've probably noticed that our team is kind of obsessed with the ✨future✨.</p>
<p>From people <a href="https://www.eff.org/deeplinks/2019/09/effs-def-con-27-t-shirt-puzzle">soaring through the sky</a>, to <a href="https://shop.eff.org/products/space-cat-pride-socks">space cats</a>, <a href="https://supporters.eff.org/files/unicorn-shirt-500b_0.png">geometric unicorns</a>, and (<a href="https://www.eff.org/deeplinks/2024/04/screen-printing-101-effs-spring-speakeasy-babylon-burning">so many</a>) <a href="https://www.eff.org/deeplinks/2021/08/flex-your-power-own-your-tech">mechas</a>—we're always imagining what the future could look like when we get things right.</p>
<p>That same spirit inspired <a href="https://www.eff.org/35">EFF's 35th anniversary celebration</a>. And this year, members can get our new <em>EFF 35 Cityscape </em>t-shirt plus a limited-edition challenge coin with a <a href="https://eff.org/r.d98c">monthly</a> or <a href="https://eff.org/r.13hk">annual</a> Sustaining Donation!</p>
<p class="take-action"><a href="https://eff.org/r.d98c">Join EFF!</a></p>
<p class="take-explainer">Start a convenient recurring donation today!</p>
<p>The <em>EFF 35 Cityscape</em> proposes a future where users are empowered to:</p>
<ul>
<li>Repair and tinker with their devices</li>
<li>Move freely without being tracked</li>
<li>Innovate with bold new ideas</li>
</ul>
<p>And this future isn't far off—we're building it now.</p>
<p>EFF is pushing for right-to-repair laws across the country, exposing shady data brokers, and ensuring new technologies—like AI—have your rights in mind. EFF is determined, and <a href="https://eff.org/r.d98c">with your help</a>, we're not backing down.</p>
<p><img src="/files/2025/07/21/duo.jpg" width="2400" height="1200" alt="EFF's Cityscape t-shirt and 35th Anniversary Challenge Coin" title="EFF's Cityscape t-shirt and 35th Anniversary Challenge Coin" /></p>
<p>We're making real progress—but we need your help. As a member-supported nonprofit, <strong><a href="https://eff.org/r.d98c">you</a></strong> are what powers this work.</p>
<p>Start a Sustaining Donation of <a href="https://eff.org/r.d98c">$5/month</a> or <a href="https://eff.org/r.13hk">$65/year</a> by August 11, and we'll thank you with a limited-edition EFF35 Challenge Coin as well as this year's <em>Cityscape</em> t-shirt!</p>
</div></div></div></description>
<pubDate>Tue, 22 Jul 2025 16:54:21 +0000</pubDate>
<guid isPermaLink="false">110935 at https://www.eff.org</guid>
<dc:creator>Christian Romero</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/circuit-city-og-static.png" alt="Green EFF 35th Anniversary digital cityscape on a black background." type="image/png" length="189965" />
</item>
<item>
<title>EFF to Court: Protect Our Health Data from DHS</title>
<link>https://www.eff.org/deeplinks/2025/07/eff-court-protect-our-health-data-dhs</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p>The federal government is trying to use Medicaid data to identify and deport immigrants. So EFF and our friends at <a href="https://epic.org/">EPIC</a> and the <a href="https://protectdemocracy.org/">Protect Democracy Project</a> have filed an <a href="https://www.eff.org/document/2025-07-18-ca-v-hhs-amicus-eff-epic-pdp">amicus brief</a> asking a judge to block this dangerous violation of federal data privacy laws.</p>
<p>Last month, the AP <a href="https://apnews.com/article/medicaid-deportation-immigrants-trump-4e0f979e4290a4d10a067da0acca8e22">reported</a> that the U.S. Department of Health and Human Services (HHS) had disclosed to the U.S. Department of Homeland Security (DHS) a vast trove of sensitive data obtained from states about people who obtain government-assisted health care. <a href="https://en.wikipedia.org/wiki/Medicaid">Medicaid</a> is a federal program that funds health insurance for low-income people; it is partially funded and primarily managed by states. Some states, using their own funds, allow <a href="https://www.dhcs.ca.gov/keep-your-Medi-Cal/Pages/Medi-Cal-Immigrant-Eligibility-FAQs.aspx">enrollment by non-citizens</a>. HHS reportedly disclosed to DHS the Medicaid enrollee data from several of these states, including enrollee names, addresses, immigration status, and claims for health coverage.</p>
<p>In response, California and 19 other states <a href="https://oag.ca.gov/news/press-releases/attorney-general-bonta-sues-trump-administration-illegally-sharing-californians%E2%80%99">sued</a> HHS and DHS. The states allege, among other things, that these federal agencies violated (1) the data disclosure limits in the <a href="https://www.law.cornell.edu/uscode/text/42/1306">Social Security Act</a>, the <a href="https://www.law.cornell.edu/uscode/text/5/552a">Privacy Act</a>, and <a href="https://www.law.cornell.edu/cfr/text/45/164.502">HIPAA</a>, and (2) the notice-and-comment requirements for rulemaking under the <a href="https://www.law.cornell.edu/uscode/text/5/553">Administrative Procedure Act</a> (APA).</p>
<p>Our amicus brief argues that (1) disclosure of sensitive Medicaid data causes a severe privacy harm to the enrolled individuals, (2) the APA empowers federal courts to block unlawful disclosure of personal data between federal agencies, and (3) the broader public is harmed by these agencies’ lack of transparency about these radical changes in data governance.</p>
<p>A new agency agreement, recently <a href="https://apnews.com/article/immigration-medicaid-trump-ice-ab9c2267ce596089410387bfcb40eeb7">reported</a> by the AP, allows Immigration and Customs Enforcement (ICE) to access the personal data of Medicaid enrollees held by HHS’ Centers for Medicare and Medicaid Services (CMS). The agreement states: “ICE will use the CMS data to allow ICE to receive identity and location information on aliens identified by ICE.”</p>
<p>In the 1970s, in the wake of the Watergate and COINTELPRO scandals, Congress wisely enacted numerous laws to protect our data privacy from government misuse. This includes strict legal limits on disclosure of personal data within an agency, or from one agency to another. EFF <a href="https://www.eff.org/cases/american-federation-government-employees-v-us-office-personnel-management">sued</a> over DOGE agents grabbing personal data from the U.S. Office of Personnel Management, and filed an <a href="https://www.eff.org/document/trabajadores-v-bessent-amicus-brief">amicus brief</a> in a suit challenging ICE grabbing <a href="https://www.eff.org/deeplinks/2025/04/irs-ice-immigrant-data-sharing-agreement-betrays-data-privacy-and-taxpayers-trust">taxpayer data</a>. We’ve also reported on the U.S. Department of Agriculture’s grab of <a href="https://www.eff.org/deeplinks/2025/06/federal-government-demands-data-snap-says-nothing-about-protecting-it">food stamp data</a> and DHS’s potential grab of <a href="https://www.eff.org/deeplinks/2025/05/no-postal-service-data-sharing-deport-immigrants">postal data</a>. And we’ve written about <a href="https://www.eff.org/deeplinks/2025/06/dangers-consolidating-all-government-information">the dangers of consolidating all government information</a>.</p>
<p>We have data protection rules for good reason, and these latest data grabs are exactly why.</p>
<p>You can read our new amicus brief <a href="https://www.eff.org/document/2025-07-18-ca-v-hhs-amicus-eff-epic-pdp">here</a>.</p>
</div></div></div></description>
<pubDate>Mon, 21 Jul 2025 21:13:32 +0000</pubDate>
<guid isPermaLink="false">110934 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<category domain="https://www.eff.org/issues/medical-privacy">Medical Privacy</category>
<dc:creator>Adam Schwartz</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/flag-surveillance-color.jpg" alt="US flag with spying eyes for stars" type="image/jpeg" length="98960" />
</item>
<item>
<title>Dating Apps Need to Learn How Consent Works</title>
<link>https://www.eff.org/deeplinks/2025/07/dating-apps-need-learn-how-consent-works</link>
<description><div class="field field--name-body field--type-text-with-summary field--label-hidden"><div class="field__items"><div class="field__item even"><p><span>Staying safe whilst dating online should not be the responsibility of users—dating apps should be prioritizing our privacy by default, and laws should require companies to prioritize user privacy over their profit. But dating apps are taking shortcuts in safeguarding the privacy and security of users in favour of developing and deploying AI tools on their platforms, sometimes by using </span><i><span>your</span></i><span> most personal information to train </span><i><span>their</span></i><span> AI tools. </span></p>
<p><span>Grindr has </span><a href="https://mashable.com/article/grindr-is-testing-an-artificial-intelligence-wingman-bot-ceo-says"><span>big plans</span></a><span> for its gay wingman bot, Bumble launched </span><a href="https://noyb.eu/en/bumbles-ai-icebreakers-are-mainly-breaking-eu-law"><span>AI Icebreakers</span></a><span>, Tinder introduced AI tools to </span><a href="https://www.tinderpressroom.com/Tinder-R-Unveils-Photo-Selector-AI-Feature-to-Make-Choosing-Profile-Pictures-Easier"><span>choose profile pictures</span></a><span> for users, OKCupid </span><a href="https://petapixel.com/2024/04/29/ai-ex-termination-photoroom-and-okcupid-delete-your-ex-from-photos/"><span>teamed up</span></a><span> with AI photo editing platform Photoroom to erase your ex from profile photos, and Hinge </span><a href="https://mashable.com/article/hinge-launches-ai-driven-prompt-feedback-feature"><span>recently launched</span></a><span> an AI tool to help users write prompts.</span></p>
<p><span>The list goes on, and the privacy harms are significant. Dating apps have built platforms that encourage people to be exceptionally open with sensitive and potentially dangerous personal information. But at the same time, the companies behind the platforms collect vast amounts of intimate details about their customers—everything from </span><a href="https://techcrunch.com/2025/05/02/dating-app-raw-exposed-users-location-data-personal-information/"><span>sexual preferences</span></a><span> to </span><a href="https://www.buzzfeednews.com/article/nicolenguyen/grindr-location-data-exposed"><span>precise location</span></a><span>—who are often just searching for compatibility and connection. This data falling into the wrong hands can—</span><a href="https://cybernews.com/news/catholic-priest-grindr-lawsuit/"><span>and has</span></a><span>—come with unacceptable consequences, especially for members of the LGBTQ+ community. </span></p>
<p><span>This is why corporations should provide opt-in consent for AI training data obtained through channels like private messages, and employ </span><a href="https://www.eff.org/deeplinks/2023/11/address-online-harms-we-must-first-do-privacy"><span>minimization</span></a><span> practices for all other data. Dating app users deserve the right to privacy, and should have a reasonable expectation that the contents of conversations—from text messages to private pictures—are not going to be shared or used for any purpose that opt-in consent has not been provided for. This includes the use of personal data for building AI tools, such as chatbots and picture selection tools. </span></p>
<h3><b>AI Icebreakers</b></h3>
<p><span>Back in December 2023, Bumble introduced </span><a href="https://techcrunch.com/2023/12/04/bumble-for-friends-is-using-ai-to-help-you-write-a-good-icebreaker-message/"><span>AI Icebreakers</span></a><span> to the ‘Bumble for Friends’ section of the app to help users start conversations by providing them with AI-generated messages. Powered by OpenAI’s ChatGPT, the feature was deployed in the app without ever asking for users’ consent. Instead, the company </span><a href="https://noyb.eu/en/bumbles-ai-icebreakers-are-mainly-breaking-eu-law"><span>presented users with a pop-up</span></a><span> upon entering the app that repeatedly nudged people to click ‘Okay’ or face the same pop-up every time the app was reopened, until they finally relented and tapped ‘Okay.’</span></p>
<p><span>Obtaining user data without explicit opt-in consent is bad enough. But Bumble has taken this even further by sharing personal user data from its platform with OpenAI to feed into the company’s AI systems. By doing this, Bumble has forced its AI feature on millions of users in Europe—</span><i><span>without</span></i><span> their consent but </span><i><span>with</span></i><span> their personal data.</span></p>
<p><span>In response, European nonprofit noyb </span><a href="https://noyb.eu/en/bumbles-ai-icebreakers-are-mainly-breaking-eu-law"><span>recently filed a complaint</span></a><span> with the Austrian data protection authority on Bumble’s violation of its transparency obligations under Article 5(1)(a) GDPR. In its report, noyb flagged concerns around Bumble’s data sharing with OpenAI, which allowed the company to generate an opening message based on information users shared on the app. </span></p>
<p><span>In its complaint, noyb specifically alleges that Bumble: </span></p>
<ul>
<li><span>Failed to provide information about the processing of personal data for its AI Icebreaker feature </span></li>
<li><span>Confused users with a “fake” consent banner</span></li>
<li><span>Lacks a legal basis under Article 6(1) GDPR as it never sought user consent and cannot legally claim to base its processing on legitimate interest </span></li>
<li><span>Processed sensitive data—such as data involving sexual orientation—without the explicit consent required under Article 9 GDPR</span></li>
<li><span>Failed to adequately respond to the complainant’s access request, as regulated by Article 15 GDPR.</span></li>
</ul>
<h3><b>AI Chatbots for Dating</b></h3>
<p><span>Grindr recently launched its </span><a href="https://www.wsj.com/articles/grindr-aims-to-build-the-dating-worlds-first-ai-wingman-8039e091"><span>AI wingman</span></a><span>. The feature operates like a chatbot and currently keeps track of favorite matches and suggests date locations. In the coming years, Grindr plans for the chatbot to send messages to other AI agents on behalf of users, and make restaurant reservations—all without human intervention. This might sound great: online dating without the time investment? A win for some! But privacy concerns remain. </span></p>
<p><span>The chatbot is being built in collaboration with a third-party company called Ex-Human, which raises concerns about data sharing. Grindr has </span><a href="https://investors.grindr.com/news/news-details/2023/Grindr-Forms-Exclusive-Partnership-with-Ex-Human-to-Enhance-User-Experience-Using-Artificial-Intelligence/default.aspx"><span>communicated</span></a><span> that its users’ personal data will remain on its own infrastructure, which Ex-Human does not have access to, and that users will be “notified” when AI tools are available on the app. The company also </span><a href="https://www.wsj.com/articles/grindr-aims-to-build-the-dating-worlds-first-ai-wingman-8039e091"><span>said</span></a><span> that it will ask users for permission to use their chat history for AI training. But AI data poses privacy risks that do not seem fully accounted for, particularly in places where it’s not safe to be outwardly gay. </span></p>
<p><span>In building this ‘gay chatbot,’ Grindr’s CEO </span><a href="https://www.wsj.com/articles/grindr-aims-to-build-the-dating-worlds-first-ai-wingman-8039e091"><span>said</span></a><span> one of its biggest limitations was preserving user privacy. It’s good that they are cognizant of these harms, particularly because the company has a </span><a href="https://www.washingtonpost.com/dc-md-va/2023/03/09/catholics-gay-priests-grindr-data-bishops/"><span>terrible</span></a> <a href="https://www.wsj.com/articles/grindr-user-data-has-been-for-sale-for-years-11651492800"><span>track</span></a> <a href="https://www.bbc.com/news/technology-59651703"><span>record</span></a><span> of protecting user privacy, and the company was also recently sued for </span><a href="https://www.bbc.com/news/articles/cj7mxnvz42no"><span>allegedly revealing</span></a><span> the HIV status of users. Further, direct messages on Grindr are </span><a href="https://foundation.mozilla.org/en/privacynotincluded/grindr/"><span>stored</span></a><span> on the company’s servers, where you have to trust they will be secured, respected, and not used to train AI models without your consent. Given Grindr’s poor record of respecting user consent and autonomy on the platform, users need stronger protections and guardrails for their personal data and privacy than are currently provided—especially for AI tools built by third parties.</span></p>
<h3><b>AI Picture Selection </b><span> </span></h3>
<p><span>In the past year, </span><a href="https://www.tinderpressroom.com/Tinder-R-Unveils-Photo-Selector-AI-Feature-to-Make-Choosing-Profile-Pictures-Easier"><span>Tinder</span></a><span> and </span><a href="https://bumble.com/en/help/how-does-the-photo-picker-work"><span>Bumble</span></a><span> have both introduced AI tools to help users choose better pictures for their profiles. Tinder’s AI-powered feature, </span><a href="https://www.tinderpressroom.com/Tinder-R-Unveils-Photo-Selector-AI-Feature-to-Make-Choosing-Profile-Pictures-Easier"><span>Photo Selector</span></a><span>, requires users to upload a selfie, after which its facial recognition technology can identify the person in their camera roll images. The Photo Selector then chooses a “curated selection of photos” direct from users’ devices based on Tinder’s “</span><a href="https://techcrunch.com/2024/07/17/tinder-ai-photo-selection-feature-launches/"><span>learnings</span></a><span>” about good profile images. Users are not informed about the parameters behind choosing photos, nor is there a separate privacy policy introduced to guardrail privacy issues relating to the potential collection of biometric data, and collection, storage, and sale of camera roll images. </span></p>
<h3><b>The Way Forward: Opt-In Consent for AI Tools and Consumer Privacy Legislation </b></h3>
<p><span>Putting users in control of their own data is fundamental to protecting </span><a href="https://www.eff.org/deeplinks/2025/02/privacy-loves-company"><span>individual and collective privacy</span></a><span>. We all deserve the right to control how our data is used and by whom. And when it comes to data like profile photos and private messages, all companies should require opt-in consent before processing those messages for AI. Finding love should not involve such a privacy-impinging tradeoff.</span></p>
<p><span>At EFF, we’ve also </span><a href="https://www.eff.org/wp/behind-the-one-way-mirror"><span>long advocated</span></a><span> for the introduction of </span><a href="https://www.eff.org/deeplinks/2022/03/ban-online-behavioral-advertising"><span>comprehensive consumer privacy legislation</span></a><span> to limit the collection of our personal data at its source and prevent retained data being sold or given away, breached by hackers, disclosed to law enforcement, or used to manipulate a user’s choices through </span><a href="https://www.eff.org/deeplinks/2022/03/ban-online-behavioral-advertising"><span>online behavioral advertising</span></a><span>. This would help protect users on dating apps: limiting the data collected in the first place prevents its later use for purposes like building and training AI tools.</span></p>
<p>The privacy options at our disposal may seem inadequate to meet the difficult moments ahead of us, especially for vulnerable communities, but these steps are essential to protecting users on dating apps. We urge companies to put people over profit and protect privacy on their platforms.</p>
</div></div></div></description>
<pubDate>Mon, 21 Jul 2025 16:29:19 +0000</pubDate>
<guid isPermaLink="false">110931 at https://www.eff.org</guid>
<category domain="https://www.eff.org/issues/privacy">Privacy</category>
<dc:creator>Paige Collings</dc:creator>
<enclosure url="https://www.eff.org/files/banner_library/online-dating-1.jpg" alt="A person scrolls on a phone, viewing a spying heart" type="image/jpeg" length="84268" />
</item>
</channel>
</rss>