THE SPREAD OF misinformation on social media platforms has fueled division, stoked violence, and reshaped geopolitics in recent years. Targeted ads have become a major battleground, with bad actors strategically distributing misleading information or ensnaring unassuming users in scams. Facebook has worked to eliminate or redefine certain targeting categories as part of a broader effort to address these threats. But despite warnings from researchers, its ad system still lets anyone target a massive array of populations and groups—including campaigns directed at United States military personnel. Currently, categories for major branches include “Army,” “Air Force,” and “National Guard,” along with much narrower categories like “United States Air Force Security Forces.”
At first blush it may seem innocuous that you can target ads at these groups as easily as you can most other organizations. But independent security researcher Andrea Downing says the stakes are much higher should active-duty members of the US military—many of whom would likely get caught up in broader Facebook targeting of this sort—face misinformation online that could impact their understanding of world events or expose them to scams. While Downing hasn’t detected such malicious campaigns herself, the interplay between ads and misinformation on Facebook is consistently murky.
In the wake of the Capitol riots, for example, researchers at the Tech Transparency Project found that Facebook’s systems had shown ads for military equipment like body armor and gun holsters alongside updates on the insurrection and content promoting election misinformation. Even after lawmakers called on Facebook to halt military equipment ads and the company agreed to a temporary ban, some ads still seemed to slip through.