There’s Something Going On Here

Understanding WTF Signal Did With Facebook’s Ad Targeting

Signal made a blog post today highlighting a recent ad campaign of theirs that got taken down by Facebook. The series of ads was theoretically meant to target users at the most granular level to expose the sheer amount of data Facebook collects. I love this concept, I think it’s a great execution on Signal’s side, I think it gets their message across really well, I wish I had done it, and I think nitpicking the particulars of whether or not this campaign was possible misses the point… but I love to think about commercial ad platforms and online privacy, and understanding the “how” around this campaign will help a lot of people understand how Facebook’s ad ecosystem works.

Let’s take a look at their cover photo for this blog post:

You got this ad because you're a newlywed pilates instructor and you're cartoon crazy. This ad used your location to see you're in La Jolla. You're into parenting blogs and thinking about LGBTQ adoption.
From Signal’s post, linked above.

Let’s pick this apart. First, the highlighted text implies a certain fill-in-the-blank nature to this ad. The format arguably tells the viewer that Signal – the advertiser – provided Facebook with a template, and that Facebook – the platform – filled it in with data. Changing the content of an ad on the fly for an individual user is broadly referred to as dynamic creative, and it is indeed a feature in Facebook’s ad offerings, but it does not exist to this extent. Believe me, advertisers wish it did, but the most dynamic text editing you can do is swapping between different lines of copy you provide FB.

From FB Ad UI

Now, there is theoretically a way for this to have played out without Signal making a plethora of obscenely targeted campaign audiences. When you give Facebook a large audience and a bunch of different creative assets (images to show in your ad), FB’s pitch to advertisers is that it will continuously test the creatives against different users and gauge their reactions until it gets closer and closer to figuring out just which unique user will want just which unique creative. So, over time, if you dumped all of these Signal ads into an undifferentiated audience and people only ever interacted with their unique mad-lib, Facebook might eventually get granular enough with its algorithmic understanding to simulate this excessively-specific effect without dynamic creative. Of course, this is not what Signal is claiming, and in reality it would probably be impossible.

So what did they probably do? It’s impossible to know for sure. For all the flair of the blog post, it’s only 5 paragraphs, none of them with any specifics, but here’s what I think they did:

If they truly had the breadth of creative they gesture towards (they never say a number, just that they created “a multi-variant targeted ad”) with permutations for tons of different pieces of user data, I think they took a page from the world of those overly-targeted t-shirts. They set up a program to spit out a ton of unique images paired with their respective audience definitions, uploaded those audiences to an ad campaign in Facebook as targeted ad sets, and set each ad set to deliver a single ad with its own unique image. Everything aside from the .png creation can be done with a list of targetable traits and a few Excel formulas (you’d be shocked at how many of the advertisements you see were originally incomprehensible .csv files). This is not a small task, but it is not an enormous one either, and it is definitely possible within the realm of what FB allows you to do.
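For a sense of how cheap that generation step is, here is a minimal sketch of the spreadsheet approach in Python. All of the traits, the copy template, and the file name are hypothetical stand-ins, not anything from Signal or FB; the point is just how fast the permutations fan out into ad-set rows:

```python
import csv
import itertools

# Hypothetical targetable traits; a real list would come from FB's targeting UI.
jobs = ["pilates instructor", "K-pop promoter"]
interests = ["cartoons", "parenting blogs"]
locations = ["La Jolla", "South Atlanta"]

# One row per trait combination: an audience definition paired with its ad copy.
with open("ad_sets.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["ad_set_name", "job", "interest", "location", "ad_copy"])
    for job, interest, loc in itertools.product(jobs, interests, locations):
        copy = (f"You got this ad because you're a {job} in {loc} "
                f"who is into {interest}.")
        writer.writerow([f"{job}|{interest}|{loc}", job, interest, loc, copy])
```

Three traits with two values each already yields eight distinct ad sets; every extra trait multiplies the count.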

It’s also entirely possible that they intended this more as a PR stunt, had a feeling they would get struck down by FB, and did most of this manually. In that case, all of the targeting is simple enough to do in the UI. Throw a few first-year digital strategists at it and you could spin up a hundred or so of these in a few days. In that case, the actual follow-through of the ads wouldn’t particularly matter (especially because FB is pretty bad with overly-targeted audiences), but the result – picking a fight with the platform that owns the messaging app you’re trying to compete with – is there regardless. See below for an example of what the targeting for the example image we used above would look like in Facebook’s UI.

Either way, like I said, I think it’s a dope campaign. It is slightly misleading, but all good art probably is. I respect Signal, I love their app, and it’s a great concept. Just wanted to give people some insight into what actually probably happened here. I hope you learned something about Facebook’s advertising backend. Feel free to hit me up with questions: @wttdotm on Twitter 🙂

~100 Words #9: User Side Ad Tech 3

For a few weeks now I have had a pretty-much-developed project sitting on my computer waiting to be released. The guts of it were much more interesting to me than the aesthetics, so while it looks too ugly to share with the world, it at least works and is very cool (if I do say so myself). Following from my general interest in the irony that is users holding a weird amount of “write” power in the seemingly one-sided and panoptic world of adtech products, “Fuck Off My Data” is a Chrome Extension that gives users who install it the ability to talk to the people that are scraping the data of their visits by automatically replacing the values of UTMs and other URL parameters in links they click on with a custom message of their choice.

Why build this? Because these parameters — especially UTMs — are built to help websites blindly receive fairly custom strings and shove them into analytics platforms for a human to parse. That is, a value for utm_campaign is just a campaign name, and any advertiser can name any of their campaigns anything. The only thing that matters for the advertiser is that all links in that campaign have the same campaign name, so that on their analytics dashboard they can group all the sessions that came through that campaign into some aggregate data set. To the dashboard itself, the campaign name doesn’t matter, it’s just some string a human put together and it will just display every utm_campaign value it received. So, thanks to the fact that (a) the analytics dashboard will show any UTM it gets, that (b) marketers have a real need to look at these dashboards regularly, that (c) UTM parameters can be literally anything, and that (d) marketers don’t expect users to actually modify the UTMs in links they click on, “Fuck Off My Data” is able to exploit the supposed docility of users to troll advertisers with anti-surveillance messages where they can’t be ignored.
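As a sketch of the URL surgery involved (the real extension does this in JavaScript at click time; the function name and message here are my own illustration):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def replace_utms(url: str, message: str) -> str:
    """Rewrite every utm_* parameter value to a custom message,
    leaving all other query parameters untouched."""
    parts = urlsplit(url)
    params = parse_qsl(parts.query, keep_blank_values=True)
    rewritten = [(k, message if k.startswith("utm_") else v) for k, v in params]
    return urlunsplit(parts._replace(query=urlencode(rewritten)))

url = "https://example.com/buy?utm_source=newsletter&utm_campaign=spring_sale&id=42"
print(replace_utms(url, "stop tracking me"))
```

Because the analytics dashboard dutifully displays whatever `utm_campaign` value arrives, the rewritten string lands right in front of the marketer reading the report.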

DM me on Twitter for a link to download, I still need beta testers.

~100 Words #8: The Anti-Infinity Room Part 2

Following up from the last post, the idea this paper gave me is that it may be theoretically possible to make an object that is impossible to capture in a digital photo. If I were to make some kind of bright cube, say, lit entirely by synced LEDs all pulsing in breakneck unison, isolated in an environment with very little other lighting, then given the right conditions it may be possible to create an unphotographable sculpture.

However, on its own as an object, I don’t think this is a particularly good idea. To avoid excessively long exposure times you would need a very bright light (which would drown out the rest of the photo anyway) or ambient light (which would dramatically decrease the potency of the effect). The two solutions I think are viable are both based on Yayoi Kusama’s Infinity Rooms and the “infinity cubes” you can find around the internet (closely related to, but not exactly, Ivyone Khoo’s Infinity Cubes). The first idea, from Kusama, is to use the sculpture to light up a room with a mirrored interior, so that the light is contained but spread around the room, creating ambient light in sync with the cadence of the object’s own cycle. The second idea, from those kitschy one-way-mirrored geometric forms, is to line the edges of the room with strips of LEDs in assorted patterns instead of focusing on a central object, so the effect is produced by the ambient light alone.

One of Yayoi Kusama’s Infinity Mirror Rooms (Hirshhorn Museum, Smithsonian)
An infinity mirror dodecahedron photozone by ETERESHOP (2021)

~100 Words #7: The Anti-Infinity Room Part 1

I have had a project in mind for a while that I have never gotten around to, that I do not have the technical knowledge for, and that I think would probably be out of my budget. I’ll write it up here anyways.

One difference between the way analog and digital cameras capture a photo is that analog cameras expose the entire frame at once while digital sensors expose a single column of pixels at a time. So (as I understand it) if you have a shutter speed of 1/10th of a second, and a camera that is 1000×1000 (1 megapixel), then each successive column of pixels is exposed for 1/10,000th of a second as the camera takes its photo. This 1/10,000th of a second is called the “rolling shutter speed.”

I discovered this interesting fact through an industrial security/research project called LiShield. In a fascinating paper, the authors Shilin Zhu, Chi Zhang, and Xinyu Zhang propose exploiting the rolling shutter as a way to enhance the privacy of certain spaces: by modulating an LED light faster than the shutter speed but slower than the rolling shutter speed, you can create bars of high and low light exposure in any image taken with a smartphone. For example, if you have a light that cycles on/off once every 1/100th of a second, the resulting image from a 1/10th-of-a-second exposure will have 10 bright lines and 10 dark lines. Here’s an example from the paper:


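The arithmetic from the two posts’ example numbers, spelled out (these are the same illustrative figures as above, not real sensor specs):

```python
# Per-column exposure under a rolling shutter: total shutter time divided
# by the number of columns read out sequentially.
shutter = 1 / 10      # seconds for the whole photo
columns = 1000        # sensor columns (the 1000x1000 example)
per_column = shutter / columns
print(per_column)     # roughly 1/10,000th of a second per column

# LiShield-style banding: an LED cycling faster than the shutter but slower
# than the per-column readout leaves one bright/dark band pair per cycle.
led_cycle = 1 / 100   # seconds per on/off cycle
pairs = shutter / led_cycle
print(pairs)          # 10 bright lines and 10 dark lines in the image
```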
More soon.

~100 Words #6: The Algorithmic Weight of Children

First: parse this.

One of my favorite essays about the internet is James Bridle’s piece “Something is wrong on the internet.” In it, he describes how the YouTube algorithm is engineered in such a way that it ends up showing toddler-age children increasingly uncanny and unnerving videos that any adult would find somewhere between off-putting and disturbing. I have thoughts about how that might translate to teen and tween content, which this post definitely relates to, but I think it’s also interesting to see what happens when these kids get some agency.

In Bridle’s essay, control goes one way. The screen-entranced 4 year old is force-fed a slew of singing baby Elsas until the iPad time is over. The “mommy milkers” incident is just one example of a consistent trend: children’s-content communities (a nebulous category itself when the content is a videogame with players of pretty much every age) breaking through a weird barrier of relevance and imposing their existence on everyone else. By that I mean: normally, no adult cares what is being shown to children because they have no stake in it. When these communities trend, however, they often do so in a way that is nearly unintelligible to the outside viewer. Now, instead of the world of children being cordoned off, they share a public sphere of content, leading adults to think along the lines of “oh, you are as relevant to the interests and algorithms of this platform as I am. Maybe I can vote and you cannot, but apparently we both have equal access to power in the sense of the feed, and that is… I don’t like it.”

This wasn’t super well explained, I think I’ll expand on it more later.

~100 Words #5: User-Side Ad Tech 2

This is a continuation of the discussion in User Side Ad Tech 1.

So now we know what a pixel is at its most basic: an object owned by an external site (person B) that, when loaded, indicates to that site that a user (person A) has interacted in some way with whatever is tied to that object.

However one-sided this may seem, you’re probably thinking of it as one-sided in the wrong way. Sure, it’s person B surreptitiously placing a tracker in such a way that person A inadvertently loads the pixel and tells person B they did so. That said, the actual dynamic is person A (with whatever level of knowledge they have of what is happening) going to person B’s server and saying “hey, I want this pixel.” So person A is not receiving the tracker, but rather giving person B the information to track them. This is true beyond the pure information of “load this pixel”: most information passed via these tracking pixels is obtained by asking things of the system and being given answers: what operating system are you using? what button did you just click? where are you located? Sure, this all happens imperceptibly fast and largely out of human hands, but the answers to all these questions are neither (a) unavoidable nor (b) unspoofable. They’re just things we give because our browsers are designed to be kind to this system.
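To put that dynamic in code: every “answer” in a pixel request is a field the client chooses to send, so every field can be swapped. A sketch, with a made-up tracker URL, that builds (but never sends) a pixel request with spoofed answers:

```python
from urllib.request import Request

# Each header is the client's voluntary answer to one of the tracker's
# implicit questions; nothing stops us from answering dishonestly.
req = Request(
    "https://tracker.example.com/pixel.gif?event=page_view",
    headers={
        "User-Agent": "Definitely a toaster",   # "what OS/browser are you?"
        "Referer": "https://nowhere.example/",  # "what page were you on?"
        "Accept-Language": "xx-XX",             # "where are you, roughly?"
    },
)
print(req.get_header("User-agent"))
```

The browser normally fills these fields in truthfully on our behalf, which is exactly the kindness the next paragraph is about.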

Thus a realization I have been chewing on the past few months: doesn’t this mean we can just lie? If the base dynamic is the gifting of information, then can’t we just give shitty gifts? If the surveillance economy of digital advertising is a machine with intake valves wide open, then what do we want to throw in?

Next time I come back to this, some thought experiments.

~100 Words #4: Add Contacts

One of the things I don’t totally understand is how my contacts seem to follow me from device to device. I get that some are synced with my Apple account, which in turn syncs them with my iPhone. I get that some are on my SIM and others have been imported from this or that app, but in general – and I feel like I am not alone in this – for the maybe 20 people I talk to or text in any given month, I have somewhere in the range of a few hundred contacts. A mildly intentioned archive of socialization I’ve amassed through the years.

I’m writing about this because I have come to entertain the idea that not syncing your contacts with platforms might be the smart privacy move, but it’s also dumb and not fun. Ever since I synced my contacts with TikTok, I have been greeted with a novel form of social content: videos from people “in my contacts” that I have absolutely 0 recollection of. Compared to the typical stream of jokes, dances, snark, etc., it’s a welcome minigame. How do I know this person? When did we exchange numbers? What are they doing now? Random Tinder matches from years ago or acquaintances from this or that trip will pop up in a manner that every other platform would have algorithmically phased out of my feed long ago. Just a fun activity.

Things to follow up on: digital debris, social archives, tiktok, contact privacy.

~100 Words #3: Test Please Ignore

You wanna see the debris of the internet? Search “test please ignore” on YouTube and sort by upload date. This proves to be one of the few ways that you can get a glimpse into the discarded chaff used in the creation of purified “content.” This entry is a brief collection of some videos I found on my most recent dive.

To start: this dog with some melody behind it.

A stream of a stream of BBC Hindi

A “test meeting to troubleshoot cutout issue” by the municipality of Anchorage, Alaska

A streamer testing what looks to be their pokemon card unboxing setup

And the setup of a church livestream

I don’t know what to think of these, but they remind me of people watching, which in turn makes me think about the difference between looking through these videos and looking through social media. These videos request to be ignored, and so browsing them shares the harmless voyeurism of people watching (not everyone, but I think most people would default to preferring not to be concentrated on randomly in a crowd), a quality that doesn’t come as easily on social platforms. Whenever we look at strangers on those platforms, we are seeing something someone actively chose to put out into the world. For test streams, it seems to be the opposite. Hidden behind a veil of purposeful lack of SEO optimization and the knowledge that nobody would actually find any of this content entertaining, “test please ignore” videos give us an archive of those forgettable interim moments between the things that actually happen in life (both irl and online).

Things to follow up on: what the sociology of those boring interim moments are, more test videos, more videos meant to be ignored.

~100 Words #2: User-Side Ad Tech

I spent a number of hours today coding up a Google Chrome extension I hope to release soon as an art project. The extension will be written up somewhere else, but it hinges on what I have always thought is an irony – and maybe now see as a vulnerability – of digital advertising infrastructure: so much of it happens on the user’s side of things.

This might strike someone unfamiliar with the way the internet works as a bit weird. After all, the panopticon looks, it doesn’t ask to be reported to or rely on a dependable protocol of information (control society stuff is probably more relevant here, I haven’t read enough of it to make it useful, so moving on). It makes more sense once you start understanding how tracking happens, and it helps to go back to the most ubiquitous example of a “pixel.”

A tracking pixel is, at its most basic, a bit of code from another website that you put on your own, and that lets the other website know when a visitor has loaded your site. So say you install Facebook’s pixel on your site: now Facebook can see who’s visiting your site and when, and can build better profiles of users, which theoretically helps you as well because now Facebook can sell you better-targeted ads.

I might be making this up, but this is the history of tracking pixels that I have in my head. One day, person A is trying to figure out how to tell if person B has opened their email. A knows that they sent an email, but how could they ever tell if B has actually opened it? Well, person A figures that if they embed an image that’s hosted on their own server into the email to person B, then when person B opens the email, their computer will request the image from person A’s server, and person A will be able to tell the email was opened. However, person A doesn’t want this to be an actual image (it might be irrelevant, take up bandwidth, draw attention to itself, etc), so they figure they’ll just make it as small as possible while still having the quality of being hosted on their own server. Thus, person A implements a 1×1 pixel for tracking a user. Pixel tracking.
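A hedged sketch of person A’s side of this scheme (the server details and image path are invented, not any real tracker): serve a tiny GIF and treat every request for it as an “opened” signal.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# A minimal transparent 1x1 GIF, byte for byte; the payload barely matters,
# since the request for it is the actual signal.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The tracking happens here: the request's arrival means "opened",
        # and its headers volunteer the who/what/where.
        print("opened:", self.client_address[0], self.headers.get("User-Agent"))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

# The email to person B would then embed something like:
#   <img src="http://person-a.example:8000/open.gif" width="1" height="1">
server = HTTPServer(("127.0.0.1", 0), PixelHandler)  # port 0: any free port
print("pixel server would listen on port", server.server_address[1])
server.server_close()  # person A would call server.serve_forever() instead
```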

This is way longer than 100 words. Things to follow up on: how person B requesting the image from person A is actually a vulnerability for person A. Also Serres.

~100 Words #1: “But” Videos

I am coming to the realization that either my taste in media is not reflective of my age, or people more solidly in the Gen-Z range have amazing taste in content. Most recently, I have found myself sucked into Minecraft YouTube, the main creators of which are barely college-age (or dropped out to pursue MC YouTube full time). That said, I find the content itself less interesting than the way it is framed. This might be a more general critique of clickbait naming practices, and I have separate thoughts about the standard content ideas that real-people YouTubers (like, ones you see on camera) use, but I think it is worth thinking about this trend – more pronounced in games you can mod, like Minecraft – of doing “[GAME], but [condition].”

Some examples:

I have no problem with this inherently, but the rise of “but” videos (as I guess I’m calling them) is interesting. One baseline explanation might be that Let’s Play videos are getting antiquated. Some of the biggest names in Minecraft YouTube, and thus gaming YouTube as a whole, are speedrunners. If you’re going to play the game at regular speed, you have to add something to make it qualitatively different, lest you make content that is, by simple fact of its runtime, less interesting than the way the community is currently oriented around the game. There needs to be a new risk (black hole), some weird element (item randomness), or something completely out of left field (GTA 5).

I’m actually way over 100 words, so I’m stopping here. Topics to follow up on: IRL YouTuber narrative structures, Minecraft renaissance/audience age growth, clickbait discourse.