Since the Coronavirus pandemic began, a host of Covid-19 applications have emerged. In addition to the state-sponsored official “track and trace” and quarantine monitoring apps, hundreds of Covid-related apps offering everything from curated information and clinical care guidelines to workplace/campus outbreak monitoring have become available for download in the iOS App Store and Google Play Store.
Since March, I’ve followed these Covid-related iOS apps closely, compiling an extensive data set representing nearly 500 iOS apps across 98 countries. …
For years, across social platforms, app ecosystems, and the rest of the web, I’ve studied the intricate networking and organizational amplification of outrage. The grievous acts that follow in the wake of progressively extreme ideologies are the conspicuous product of globally connected cells, misguided prejudices, and attention amplification mechanisms.
This is the last installment of The Micro-Propaganda Machine, a three-part analysis critically examining the issues at the interface of platforms, propaganda, and politics.
The third part of my analysis of Facebook prior to the midterm election looks at granular enforcement and Facebook’s challenges in enforcing its community standards and terms of service. This post highlights the long-term gaming of the platform’s engagement numbers and interaction metrics by several recently removed pages and presents a case of the company’s failure to identify and remove content from InfoWars, a removed—or “banned”—presence on the platform.
At first glance, Facebook’s efforts to identify “inauthentic” accounts, find and ban actors who have violated its terms of service and platform rules, and flag “false news” might appear to be moderately successful. Through my investigation of the platform, however, there appears to be a longstanding pattern of ineffective rules paired with inconsistent enforcement. This has opened up many loopholes and workarounds for certain pages and actors and facilitated the misuse and exploitation of Facebook’s platform. …
This is the second installment of The Micro-Propaganda Machine, a three-part analysis critically examining the issues at the interface of platforms, propaganda, and politics.
In 2016, discussions about Facebook and the election tended to focus mostly on pages and paid ads. Well, it’s 2018, and this time around, we have another problem to talk about: Facebook groups.
In my extensive look into Facebook, I’ve found that groups have become the preferred base for coordinated information influence activities on Facebook. This shift reflects the product’s most important advantage: the posts and activities of the actors who join are hidden within the group. …
This is the first installment of The Micro-Propaganda Machine, a three-part analysis critically examining the issues at the interface of platforms, propaganda, and politics.
Before the 2018 U.S. midterm elections, I took an extensive look into the state of Facebook’s platform and what I found was interesting—and terrifying. Three months and 1,000 screenshots later, my efforts involved collecting more than 250,000 posts, 5,000 political ads, and historic engagement metrics for hundreds of Facebook pages and groups using a diverse set of tools and data resources. Some of my findings were anticipated. Others were not.
The “era of accountability” for American tech has begun. Though unsettling, the spate of disclosures from Facebook, Google, and other technology companies has provided a much-needed win for public accountability. Not least, it has led to the establishment of tools that offer at least some degree of transparency into advertising and digital politics. At the same time, it’s likely we are only around the midpoint in a longer pattern of undisclosed privacy violations, security breaches, and regulatory oversights.
Due to the number of questions, I’ve decided to outline the key points, privacy implications, and technical features related to Facebook’s Graph API. The Graph API is the underlying issue in the Cambridge Analytica data-sharing voter “micro-targeting” debacle. (This post is an expanded version of the Twitter thread linked below.)
The problematic collection of Facebook users’ personal info — and the ability to obtain unusually rich info about users’ friends — is due to the design and functionality of Facebook’s Graph API. …
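To make the design issue concrete, here is a minimal sketch of the kind of request pattern at stake in the pre-2015 Graph API era. The endpoint path, field names, and token are illustrative placeholders, not a claim about the exact calls any app made; the point is that a single authorized call on behalf of one consenting user could request profile fields about that user’s friends, who never installed the app.

```python
# Hypothetical sketch of the Graph API friend-data pattern described above.
# "EXAMPLE_TOKEN" and the field list are illustrative placeholders.
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.facebook.com/v1.0"

def friends_query_url(user_id: str, access_token: str) -> str:
    """Build the URL an app could use to pull profile fields for a
    user's friends under old extended permissions (e.g. friends_likes)."""
    params = {
        "fields": "id,name,likes,location",  # fields about *friends*, not the user
        "access_token": access_token,
    }
    return f"{GRAPH_BASE}/{user_id}/friends?" + urlencode(params)

url = friends_query_url("me", "EXAMPLE_TOKEN")
print(url)
```

The asymmetry is the key point: consent was collected from one user, but the data returned described many.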
With the dust still settling from the Parkland, Florida high school shooting, the information war carries on. The initial focus involved the NRA lobby, but a large-scale disinformation campaign has successfully shifted the debate to claims that the student survivors are “crisis actors.”
This is the real fake news — no quotation marks are needed.
YouTube’s almost-monthly scramble to take down trending videos that perpetuate rumours and false information is over for the time being. Using several hundred “seed” videos returned from a search on YouTube’s API for “crisis actor,” I obtained the “next up” recommendations for each of the results. This generated a network of close to 9,000 related conspiracy-themed videos. …
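The snowball step above can be sketched as follows. The real recommendation data came from YouTube’s API; in this illustrative sketch the video IDs are made up, and the function simply turns already-collected seed-to-“next up” mappings into a directed edge list suitable for network analysis.

```python
# Minimal sketch: build a directed seed -> recommended-video network
# from "up next" recommendations already collected per seed video.
# All video IDs below are fabricated for illustration.

def build_recommendation_network(recs: dict[str, list[str]]):
    """Return (edges, nodes) for a seed -> recommendation graph."""
    edges = []
    nodes = set()
    for seed, recommended in recs.items():
        nodes.add(seed)
        for video_id in recommended:
            edges.append((seed, video_id))  # one directed edge per recommendation
            nodes.add(video_id)
    return edges, nodes

# Illustrative stand-in for API output:
sample = {
    "seed_A": ["vid_1", "vid_2"],
    "seed_B": ["vid_2", "vid_3"],
}
edges, nodes = build_recommendation_network(sample)
print(len(edges), len(nodes))  # 4 edges over 5 unique videos
```

Run at scale over several hundred seeds, this is how a few API queries fan out into a network of thousands of related videos.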
Information pollution through platform vandalism
I’m at a loss to understand how this could *still* be happening. The quality and reliability of Google’s search suggestions have actually devolved in the past year. It almost reads like these input signals are coming out of Reddit, Twitter and other online and social news forums.
February 20, 2018. Below are some examples of what kids are likely to see when they begin typing a controversial topic into Google. Why does this matter? It matters because this is information pollution at the most critical interface: search. …
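For readers who want to audit suggestions themselves, one common approach is the public (unofficial and undocumented) autocomplete endpoint that browsers query as you type. This sketch only constructs the request URL; fetching it returns a JSON array of suggested completions for the partial query, which is an assumption about the endpoint’s behavior rather than a documented API contract.

```python
# Hedged sketch: build a request URL for Google's unofficial
# autocomplete endpoint, which returns suggested completions
# for a partial search query.
from urllib.parse import urlencode

def suggest_url(partial_query: str) -> str:
    """Construct (but do not fetch) an autocomplete query URL."""
    params = {"client": "firefox", "q": partial_query}
    return "https://suggestqueries.google.com/complete/search?" + urlencode(params)

url = suggest_url("vaccines are")
print(url)
```

Comparing the completions returned for sensitive partial queries over time is one way to track the kind of devolution described above.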